Collecting large labelled data sets is not always possible. Many factors limit data availability, such as the scarcity of militarily significant target types, and the Army has a clear interest in the infrared (IR) domain, where this deficiency of data is especially apparent. We propose a new approach to target recognition for Army applications that uses transfer learning from the visible image/video domain to the IR domain. First, we train a deep convolutional neural network (CNN) on visible images, either from scratch or by custom training a pre-trained CNN such as AlexNet or ResNet on visible data specific to a particular military application. Second, we apply a cycle-consistent generative adversarial network (CycleGAN) for unpaired image-to-image translation, converting visible images into IR images. Third, we use the ARMA 3 game engine to synthesize realistic IR images. Fourth, the synthesized IR images from the two preceding steps are further refined with an attention-aware GAN (ATA-GAN), and the refined IR images are used to fine-tune the CNN model for target classification. Fifth, the learned CNN model is transferred into the training process in which labelled IR images are used.
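To make the second step concrete, the sketch below shows the cycle-consistency objective that drives unpaired visible-to-IR translation in a CycleGAN. The toy generator architecture, image sizes, and the loss weight `lambda_cyc` are illustrative assumptions and not details from the proposal; a full CycleGAN training step would also include adversarial and identity losses.

```python
# Sketch of the cycle-consistency loss for unpaired visible-to-IR translation.
# The generator below is a minimal stand-in; CycleGAN uses a ResNet-based one.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

G_vis2ir = TinyGenerator()   # translates visible -> IR
G_ir2vis = TinyGenerator()   # translates IR -> visible
l1 = nn.L1Loss()
lambda_cyc = 10.0            # assumed weight on the cycle term

visible = torch.rand(4, 3, 128, 128)    # unpaired visible batch (placeholder data)
infrared = torch.rand(4, 3, 128, 128)   # unpaired IR batch (placeholder data)

# Forward cycle: visible -> fake IR -> reconstructed visible
fake_ir = G_vis2ir(visible)
rec_vis = G_ir2vis(fake_ir)
# Backward cycle: IR -> fake visible -> reconstructed IR
fake_vis = G_ir2vis(infrared)
rec_ir = G_vis2ir(fake_vis)

# Cycle-consistency loss; discriminator-based adversarial terms would be
# added to this in a complete CycleGAN update.
cycle_loss = lambda_cyc * (l1(rec_vis, visible) + l1(rec_ir, infrared))
cycle_loss.backward()
```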
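The following minimal sketch illustrates the fine-tuning and transfer stages (steps one, four, and five): a CNN pre-trained on visible imagery is given a new classification head and fine-tuned on IR images (GAN-refined or labelled). The directory name `ir_training_images/`, the class count, and the optimizer settings are hypothetical placeholders, not values taken from the proposal.

```python
# Sketch: fine-tune a visible-domain pre-trained ResNet on IR imagery.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

num_target_classes = 10  # assumed number of target classes

# Pre-trained backbone provides visible-domain features; replace the head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                      # freeze backbone features
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # replicate IR channel to 3 channels
    transforms.ToTensor(),
])
# Hypothetical folder of refined/labelled IR image chips, one subfolder per class
ir_data = datasets.ImageFolder("ir_training_images/", transform=transform)
loader = torch.utils.data.DataLoader(ir_data, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In this kind of setup, unfreezing deeper backbone layers at a lower learning rate is a common follow-on once the new head has converged; the proposal itself does not specify the fine-tuning schedule.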