A convolutional neural network (CNN) is a deep structured learning algorithm widely applied to visualize and extract hidden texture features of image datasets. This study automatically extracts self-learned features using an end-to-end CNN and compares the results with the performance of conventional state-of-the-art and traditional computer-aided diagnosis systems. The architecture consists of eight layers: one input layer; three convolutional layers and three sub-sampling layers, interleaved with batch normalization, ReLU, and max-pooling for salient feature extraction; and one fully connected layer with a softmax function feeding 3 output neurons, which classify an input image into one of three classes: nodules ≥ 3 mm of low malignancy as benign, nodules ≥ 3 mm of high malignancy as malignant, and nodules < 3 mm combined with non-nodules ≥ 3 mm as non-cancerous. For the input layer, lung nodule CT images are acquired from the Lung Image Database Consortium public repository, which contains 1018 cases. Images are pre-processed to segment the nodule region of interest (NROI) according to four radiologists' annotations and markings describing the coordinates and ground-truth values. A two-dimensional set of re-sampled 52 × 52 pixel images with random translation, rotation, and scaling corresponding to the NROI is generated as input samples. In addition, generative adversarial networks (GANs) are employed to generate additional images with characteristics similar to pulmonary nodules. The CNN is pre-trained on the GAN-generated images and fine-tuned on the actual input samples to differentiate and classify lung nodules under this strategy; the pre-training and fine-tuning of the trained network yield aggregate probability scores for nodule detection that reduce false positives.
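The eight-layer design described above (three conv/sub-sampling blocks with batch normalization, ReLU, and max-pooling, followed by a fully connected softmax layer over 3 classes on 52 × 52 inputs) can be sketched roughly as follows; this is a minimal illustration, not the authors' implementation, and the channel counts and kernel sizes are assumptions since the abstract does not specify them.

```python
# Hypothetical sketch of the abstract's eight-layer CNN. Layer types,
# input size (52x52), and the 3-neuron softmax output follow the text;
# channel counts (16/32/64) and 3x3 kernels are assumed for illustration.
import torch
import torch.nn as nn


class NoduleCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()

        def block(c_in: int, c_out: int) -> nn.Sequential:
            # conv -> batch norm -> ReLU, then 2x2 max-pool (sub-sampling)
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )

        # three convolutional + three sub-sampling layers
        self.features = nn.Sequential(block(1, 16), block(16, 32), block(32, 64))
        # spatial size: 52 -> 26 -> 13 -> 6 after three 2x2 poolings
        self.classifier = nn.Linear(64 * 6 * 6, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        # softmax over the 3 output neurons
        # (benign / malignant / non-cancerous)
        return torch.softmax(self.classifier(x), dim=1)


model = NoduleCNN()
probs = model(torch.randn(1, 1, 52, 52))
print(probs.shape)  # torch.Size([1, 3])
```

In a pre-train/fine-tune setup like the one described, the same module would first be trained on GAN-generated nodule images and then fine-tuned on the real NROI samples, reusing the learned weights.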
A total of 5188 images in an augmented image datastore are used to enhance the network's performance in this study, generating high sensitivity scores with good true-positive rates. The proposed CNN achieved a classification accuracy of 93.9%, an average specificity of 93%, and an average sensitivity of 93.4% with reduced false positives, and the area under the receiver operating characteristic curve reached a highest observed value of 0.934 using the GAN-generated images. © 2020, Springer-Verlag London Ltd., part of Springer Nature.