# 🔥 Facial Recognition & Verification

Author: Martin Badrous
This repository exposes a practical face-verification pipeline built on top of pretrained face recognition models.
Given two photographs, it extracts fixed-length embeddings and computes their similarity to decide whether they depict the same person.
The project is designed for demonstration and research purposes and is not intended for biometric authentication in critical applications.
## 🧠 Overview
The original Facial Recognition GitHub repository provides a modern PyTorch training pipeline for facial expression or identity classification.
It features automatic dataset splitting, transfer learning with ResNet18 or EfficientNet-B0, mixed-precision training, and extensive logging.
While powerful, it focuses on classification rather than verification.
This Hugging Face version refactors that work into a face verification system.
Instead of predicting a discrete label, we map each face into a 512-dimensional embedding space and measure how close two embeddings are.
## 🧱 Model Architecture
We use the FaceNet architecture, an Inception-ResNet network pretrained on the VGGFace2 dataset.
The model provides a 512-dimensional embedding for each detected face.
During verification, cosine similarity between two embeddings is computed:
- A similarity close to 1.0 → same person
- A similarity close to 0.0 → different people
Reference model: py-feat/facenet
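The similarity computation itself is a one-liner once the embeddings are in hand. A minimal sketch using NumPy; `embedding_a` and `embedding_b` are random stand-ins for the 512-dimensional vectors the model produces:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

# Random stand-ins for real face embeddings (512-dimensional, like FaceNet's output).
rng = np.random.default_rng(0)
embedding_a = rng.normal(size=512)
embedding_b = rng.normal(size=512)

print(cosine_similarity(embedding_a, embedding_a))  # identical vectors -> 1.0
print(cosine_similarity(embedding_a, embedding_b))  # unrelated vectors -> near 0.0
```

Embeddings of the same person cluster together in this space, so their cosine similarity is high; embeddings of different people are nearly orthogonal and score close to zero.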
## 🧩 Dataset
Evaluation is based on the Labeled Faces in the Wild (LFW) dataset β a benchmark of celebrity face pairs widely used for assessing verification algorithms.
Each pair is labelled as same or different.
FaceNet reaches roughly 99% verification accuracy on LFW when pretrained on VGGFace2.
Although LFW is not included here (due to licensing), you can evaluate the model by downloading it from public sources and reusing the provided code.
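Once you have a similarity score for each labelled pair, evaluation at a fixed threshold reduces to comparing predictions against the labels. A minimal sketch, assuming `scores` and `labels` are arrays you computed yourself from downloaded LFW pairs:

```python
import numpy as np

def pair_accuracy(scores: np.ndarray, labels: np.ndarray, threshold: float) -> float:
    """Fraction of pairs classified correctly: predict 'same' when score >= threshold."""
    predictions = scores >= threshold
    return float(np.mean(predictions == labels))

# Toy example: four similarity scores with ground-truth same/different labels.
scores = np.array([0.92, 0.15, 0.78, 0.40])
labels = np.array([True, False, True, False])
print(pair_accuracy(scores, labels, threshold=0.5))  # -> 1.0
```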
## ⚙️ Usage

### 1️⃣ Install dependencies
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
### 2️⃣ Run the demo locally

```bash
python app.py
```
The Gradio interface will open in your browser.
Upload two images; the app will detect faces, extract embeddings, and report whether they belong to the same person, along with a similarity score.
If no face is detected, an appropriate message will be displayed.
## 🔧 Verification API
The core logic resides in the src package.
You can import and use these utilities programmatically:
```python
from PIL import Image

from src.verify_faces import verify_images

img1 = Image.open('path/to/photo1.jpg')
img2 = Image.open('path/to/photo2.jpg')

similarity, is_same = verify_images(img1, img2, threshold=0.8)
print(f"Cosine similarity: {similarity:.3f}")
print("Same person" if is_same else "Different people")
```
## 📊 Performance
Pretrained FaceNet models typically achieve:
| Metric | Typical Value |
|---|---|
| Accuracy (LFW) | ≈ 99% |
| Cosine Similarity (same) | > 0.8 |
| Cosine Similarity (different) | < 0.5 |
Performance may vary depending on image quality, resolution, and lighting.
For production systems, fine-tune on domain-specific data and calibrate your similarity threshold.
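Threshold calibration can be as simple as sweeping candidate values over held-out labelled pairs and keeping the one with the highest accuracy. A sketch under that assumption, with `scores` and `labels` standing in for your own validation data:

```python
import numpy as np

def best_threshold(scores: np.ndarray, labels: np.ndarray, candidates: np.ndarray) -> float:
    """Return the candidate threshold that maximizes pair accuracy on held-out data."""
    accuracies = [np.mean((scores >= t) == labels) for t in candidates]
    return float(candidates[int(np.argmax(accuracies))])

# Toy validation set: similarity scores with ground-truth same/different labels.
scores = np.array([0.95, 0.85, 0.30, 0.10, 0.60])
labels = np.array([True, True, False, False, True])
candidates = np.arange(0.0, 1.0, 0.05)
t = best_threshold(scores, labels, candidates)
print(f"Best threshold: {t:.2f}")
```

For a production system you would sweep a finer grid, use a larger validation set drawn from your own domain, and optimize a metric matched to your error costs (e.g., false accept rate) rather than raw accuracy.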
## ⚠️ Limitations
- Bias & Fairness: Pretrained facial models may exhibit demographic bias; they can perform better on some demographic groups than others. Evaluate thoroughly before deployment.
- Privacy: Handle biometric data in compliance with privacy laws (GDPR, HIPAA, etc.). Never store embeddings without consent.
- Security: This demo has no anti-spoofing or liveness detection; printed photos or digital screens can fool it.
## 📄 License
This project is licensed under the MIT License.
See the LICENSE file for details.
## 📚 Citation & Contact
If you use this project in academic work, please cite the original FaceNet paper.
Schroff et al., FaceNet: A Unified Embedding for Face Recognition and Clustering, CVPR 2015.
DOI: 10.1109/CVPR.2015.7298682
📩 Contact: [email protected]
🔗 Project Page: Hugging Face – Facial-Recognition-Verification