Facial gender recognition from images using traditional features


Digital images of faces have become commonplace in the twenty-first century owing to inexpensive image sensors and affordable networking between devices. As facial images proliferate in the digital ecosystem, applications built on them become more common: images are constantly scrolled through on social-media portals and scanned for biometric verification, with many applications relying on gender recognition as a precursor, such as affect recognition, customized human-machine interfaces, or simply surveillance.
Although recognizing gender from a human facial image is trivial even for a child, it remains a challenging task for machines because faces may be partially covered, aggressively tilted, or captured in noisy images. Various methods have been proposed in the literature to overcome such challenges.
While a variety of facial gender recognition methods have been proposed within the last two decades, their performance has been reported on different datasets with differing sizes, sources, lighting conditions, facial expressions, and so on. Most of these datasets were not custom-made for gender recognition but were proposed for face recognition only. Hence, most facial gender recognition methods were seldom tested on a standard dataset with a well-balanced distribution between male and female classes.
Significance of the work
The main contribution of this work is an extension of the earlier baseline results reported on a well-balanced public facial image dataset, the LFW-Gender dataset (Jalal and Tariq, 2017).

Dataset used

This work used LFW-Gender dataset which has the following salient features:

•Specifically designed for facial gender recognition

•Contains 200×200 color images

•Equal number of male and female faces

•Predefined 4-fold structure

•No repetition of a person across the train, validation, and test subsets

Specimen images from LFW-Gender dataset

Features           | ML model                      | Average Testing Accuracy (%)
-------------------|-------------------------------|-----------------------------
Raw pixels         | kNN                           | 72.35
Raw pixels         | Linear SVM                    | 78.28
Raw pixels         | RBF SVM                       | 85.81
LDA features       | kNN                           | 69.44
LDA features       | Linear SVM                    | 76.01
LDA features       | RBF SVM                       | 83.82
PCA features       | kNN                           | 76.51
PCA features       | Linear SVM                    | 83.88
PCA features       | RBF SVM                       | 86.11
Random projections | kNN                           | 69.83
Random projections | Linear SVM                    | 74.08
Random projections | RBF SVM                       | 75.09
Pixels             | CNN from scratch              | 87.95
Pixels             | Fine-tuned CNN                | 91.44
Pixels             | Hybrid quantum neural network | 93.68

Previously Reported Results on the LFW-Gender dataset



Pre-processing Steps

•RGB to Grayscale conversion

•Downscaling to 50×50 pixel resolution

•Image normalization using the training-set mean and standard deviation:

New pixel value = (pixel value − train-set mean) / (train-set standard deviation)
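The normalization step above can be sketched as follows. The array shapes and variable names are illustrative assumptions, not taken from the original experiments; the key point is that the statistics come from the training set only and are reused for every split, so no test-set information leaks into pre-processing.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in grayscale images, already downscaled to 50x50 (assumed shapes).
train_images = rng.integers(0, 256, size=(100, 50, 50)).astype(np.float64)
test_images = rng.integers(0, 256, size=(20, 50, 50)).astype(np.float64)

# Statistics computed on the training set only.
train_mean = train_images.mean()
train_std = train_images.std()

# The same training-set statistics normalize every subset.
train_norm = (train_images - train_mean) / train_std
test_norm = (test_images - train_mean) / train_std

print(round(train_norm.mean(), 6))  # close to 0 by construction
```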

Traditional Features Investigated


1. HOG-based

2. LBP-based => 2500-d vectors

3. GLCM-based => 6-d vectors (energy, dissimilarity, entropy, angular second moment, correlation, contrast)
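The six GLCM properties can be computed directly from a normalized co-occurrence matrix. This minimal numpy sketch assumes a single horizontal offset (distance 1, angle 0) on an 8-level grayscale image; libraries such as scikit-image offer the same computation, the hand-rolled version just makes the textbook definitions explicit:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Return the six GLCM properties for one horizontal offset (assumed)."""
    glcm = np.zeros((levels, levels), dtype=np.float64)
    # Count co-occurrences of each pixel with its right-hand neighbour.
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                      # joint probability matrix
    idx = np.arange(levels)
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    asm = (p ** 2).sum()                       # angular second moment
    mu_i, mu_j = (ii * p).sum(), (jj * p).sum()
    var_i = ((ii - mu_i) ** 2 * p).sum()
    var_j = ((jj - mu_j) ** 2 * p).sum()
    return {
        "energy": np.sqrt(asm),
        "dissimilarity": (np.abs(ii - jj) * p).sum(),
        "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
        "angular second moment": asm,
        "correlation": (((ii - mu_i) * (jj - mu_j) * p).sum()
                        / np.sqrt(var_i * var_j)),
        "contrast": ((ii - jj) ** 2 * p).sum(),
    }

rng = np.random.default_rng(1)
feats = glcm_features(rng.integers(0, 8, size=(50, 50)))
print(sorted(feats))  # the components of the 6-d GLCM feature vector
```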

ML Models Used

1. Linear SVM

2. RBF-kernel SVM

3. k-nearest neighbours

4. Random Forests
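A minimal scikit-learn sketch of the four model families is shown below (the reference list cites scikit-learn). The hyperparameters and the synthetic feature vectors are assumptions for illustration only, not the settings used in the experiments; in the actual pipeline the inputs would be the HOG, LBP, or GLCM feature vectors per fold.

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                  # e.g. 6-d GLCM feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in male/female labels

# The four model families investigated; hyperparameters are placeholders.
models = {
    "Linear SVM": LinearSVC(),
    "RBF SVM": SVC(kernel="rbf"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
accuracies = {name: m.fit(X, y).score(X, y) for name, m in models.items()}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.2f}")
```

In the real experiments, fitting would use the predefined training fold and scoring the corresponding test fold, rather than training accuracy as here.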

Fold   | ML model      | HOG   | LBP   | GLCM
-------|---------------|-------|-------|------
Fold 1 | Random Forest | 81.50 | 81.50 | 57.81
Fold 2 | Random Forest | 80.35 | 80.13 | 58.43
Fold 3 | Random Forest | 80.88 | 78.65 | 58.03
Fold 4 | Random Forest | 80.87 | 80.58 | 58.29

Experimental Results: Test-set accuracy (%) on the 4 folds of the LFW-Gender dataset, with images resized to 50×50 pixels

Analysis: Effect of image size on Classifier Performance


1. Test accuracy correlates positively with image size up to 150×150 in most cases.

2. Exceptions: LBP features with kNN or Random Forest, possibly because the resulting high-dimensional feature vectors overwhelm these models.

3. Accuracy with HOG features decreases beyond 100×100, indicating that finer gradient information is disadvantageous for this task.


  1. Experimented with twelve classifiers formed from the combinations of three popular traditional feature sets and four widely used machine learning models.
  2. Analyzed the test-set performance of the investigated classifiers across different input image resolutions.
  3. Compared the classifiers' performance with results previously reported on the same dataset.
  4. Extended the baseline results on a standard dataset custom-developed for the facial gender recognition task.
  5. This may prove useful for future computer-vision research on the task, enabling fair comparison of newly proposed methods on a standard dataset.

Under the guidance of:

  1. Professor: linkedin.com/in/tanmoy-chakraborty-89553324
  2. Prof. Website: faculty.iiitd.ac.in/~tanmoy/
  3. Teaching Fellow: Ms Ishita Bajaj
  4. Teaching Assistants: Shiv Kumar Gehlot, Vivek Reddy, Pragya Srivastava, Chhavi Jain, Shikha Singh and Nirav Diwan.


Altman, N. S. (1992) ‘An introduction to kernel and nearest-neighbor nonparametric regression’, American Statistician, 46(3), pp. 175–185. doi: 10.1080/00031305.1992.10475879.

Azzopardi, G. et al. (2018) ‘Fusion of Domain-Specific and Trainable Features for Gender Recognition from Face Images’, IEEE Access, 6, pp. 24171–24183. doi: 10.1109/ACCESS.2018.2823378.

Azzopardi, G., Greco, A. and Vento, M. (2016) ‘Gender recognition from face images with trainable COSFIRE filters’, 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance, AVSS 2016, (August), pp. 235–241. doi: 10.1109/AVSS.2016.7738068.

Cortes, C. and Vapnik, V. (1995) ‘Support-Vector Networks’, Machine Learning, 20(3), pp. 273–297. doi: 10.1023/A:1022627411411.

Dalal, N. and Triggs, B. (2005) ‘Histograms of oriented gradients for human detection’, Proceedings – 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, I, pp. 886–893. doi: 10.1109/CVPR.2005.177.

Eidinger, E., Enbar, R. and Hassner, T. (2014) ‘Age and gender estimation of unfiltered faces’, IEEE Transactions on Information Forensics and Security, 9(12), pp. 2170–2179. doi: 10.1109/TIFS.2014.2359646.

Gao, W. et al. (2008) ‘The CAS-PEAL large-scale chinese face database and baseline evaluations’, IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans, 38(1), pp. 149–161. doi: 10.1109/TSMCA.2007.909557.

Gross, R. et al. (2010) ‘Multi-PIE’, Image and Vision Computing, 28(5), pp. 807–813.

Haralick, R. M., Dinstein, I. and Shanmugam, K. (1973) ‘Textural Features for Image Classification’, IEEE Transactions on Systems, Man and Cybernetics, SMC-3(6), pp. 610–621. doi: 10.1109/TSMC.1973.4309314.

Ho, T. K. (1995) ‘Random decision forests’, in Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, pp. 278–282. doi: 10.1109/ICDAR.1995.598994.

Huang, G. B. et al. (2008) ‘Labeled faces in the wild: A database for studying face recognition in unconstrained environments’, in Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition. Marseille, France. Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=

Jalal, A. and Tariq, U. (2017) ‘The LFW-Gender Dataset’, in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 531–540. doi: 10.1007/978-3-319-54526-4.

Jesorsky, O., Kirchberg, K. J. and Frischholz, R. W. (2001) ‘Robust face detection using the Hausdorff distance’, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2091, pp. 90–95.

Jonathon Phillips, P. et al. (2000) ‘The FERET evaluation methodology for face-recognition algorithms’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(10), pp. 1090–1104. doi: 10.1109/34.879790.

Martinez, A. M. and Benavente, R. (1998) ‘The AR face database’, CVC Technical Report 24. doi: 10.1023/B:VISI.0000029666.37597.

Messer, K. et al. (1999) ‘XM2VTSDB: The Extended M2VTS Database’, Proceedings of the Second International Conference on Audio and Video-based Biometric Person Authentication (AVBPA’99), pp. 1–6.

Milborrow, S. and Nicolls, F. (2008) ‘Locating facial features with an extended active shape model’, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5305 LNCS(PART 4), pp. 504–513. doi: 10.1007/978-3-540-88693-8-37.

Mittal, S. and Dana, S. K. (2020) ‘Gender Recognition from Facial Images using Hybrid Classical-Quantum Neural Network’, in 2020 IEEE Students Conference on Engineering & Systems (SCES). Prayagraj: IEEE. doi: In press.

Mittal, Shubham and Mittal, Shiva (2019) ‘Gender Recognition from Facial Images using Convolutional Neural Network’, in 2019 Fifth International Conference on Image Information Processing (ICIIP). IEEE, pp. 347–352. doi: 10.1109/ICIIP47207.2019.8985914.

Ng, C.-B., Tay, Y.-H. and Goi, B.-M. (2015) ‘A review of facial gender recognition’, Pattern Analysis and Applications, 18(4), pp. 739–755. doi: 10.1007/s10044-015-0499-6.

Ng, C. B., Tay, Y. H. and Goi, B. M. (2012) ‘Recognizing human gender in computer vision: A survey’, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7458 LNAI, pp. 335–346. doi: 10.1007/978-3-642-32695-0_31.

Ojala, T., Pietikäinen, M. and Mäenpää, T. (2002) ‘Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7), pp. 971–987.

Pedregosa, F. et al. (2011) ‘Scikit-learn: Machine Learning in Python’, Journal of Machine Learning Research, 12, pp. 2825–2830. Available at: http://scikit-learn.sourceforge.net.

Phillips, P. J. et al. (2005) ‘Overview of the face recognition grand challenge’, Proceedings – 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, I, pp. 947–954. doi: 10.1109/CVPR.2005.268.

Ricanek, K. and Tesafaye, T. (2006) ‘MORPH: A longitudinal image database of normal adult age-progression’, FGR 2006: Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition, pp. 341–345. doi: 10.1109/FGR.2006.78.

Shan, C. (2012) ‘Learning local binary patterns for gender classification on real-world face images’, Pattern Recognition Letters, 33(4), pp. 431–437. doi: 10.1016/j.patrec.2011.05.016.

Sim, T., Baker, S. and Bsat, M. (2002) ‘The CMU Pose, Illumination, and Expression (PIE) database’, Proceedings – 5th IEEE International Conference on Automatic Face Gesture Recognition, FGR 2002, pp. 53–58. doi: 10.1109/AFGR.2002.1004130.

Singh, V., Shokeen, V. and Singh, B. (2013) ‘Comparison Of Feature Extraction Algorithms For Gender Classification From Face Images’, International Journal of Engineering Research and Technology, 2(5), pp. 1313–1318.

Sun, Y., Wang, X. and Tang, X. (2013) ‘Deep convolutional network cascade for facial point detection’, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 3476–3483. doi: 10.1109/CVPR.2013.446.

Yang, Z. and Ai, H. (2007) ‘Demographic classification with local binary patterns’, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 4642 LNCS, pp. 464–473.

Zhou, Y. et al. (2019) ‘Face and gender recognition system based on convolutional neural networks’, Proceedings of 2019 IEEE International Conference on Mechatronics and Automation, ICMA 2019, pp. 1091–1095. doi: 10.1109/ICMA.2019.8816192.

Zhou, Y. and Li, Z. (2016) ‘Real-time gender recognition based on eigen-features selection from facial images’, IECON Proceedings (Industrial Electronics Conference), pp. 1025–1030. doi: 10.1109/IECON.2016.7793080.
