# Android-Projekt: ID Card Classification & Embedding Models
This repository contains machine learning models for ID card detection, classification, and embedding generation, optimized for Android deployment. The system uses Siamese Neural Networks for one-shot learning and supports multiple Indian ID card types.
## Models Overview

| Model File | Format | Size | Description | Use Case |
|---|---|---|---|---|
| `id_classifier.tflite` | TFLite | 1.11 MB | Lightweight ID classifier | Mobile inference |
| `id_card_embedding_model.tflite` | TFLite | 1.26 MB | Compact embedding model | Mobile feature extraction |
| `id_card_classifier.keras` | Keras | 5.23 MB | Full Keras classifier | Training/fine-tuning |
| `id_classifier_saved_model.h5` | H5 | 8.85 MB | H5-format classifier | Legacy compatibility |
| `id_classifier_saved_model.keras` | Keras | 12.7 MB | Complete Keras model | Development/evaluation |
| `id_card_embedding_model.keras` | Keras | 191 MB | High-accuracy embedding model | Server-side processing |
## Supported ID Card Types
- PAN Card (Permanent Account Number)
- Aadhaar Card
- Driving License
- Passport
- Voter ID Card
## Quick Start

### For Android Development (TFLite)

```kotlin
// Load the TFLite model in Android (loadModelFile maps the asset into a ByteBuffer)
val model = Interpreter(loadModelFile("id_classifier.tflite"))

// Prepare input/output buffers sized for the model's tensors
val inputBuffer = ByteBuffer.allocateDirect(inputSize)
val outputBuffer = ByteBuffer.allocateDirect(outputSize)

// Run inference
model.run(inputBuffer, outputBuffer)
```
### For Python/Training (Keras)

```python
from tensorflow.keras.models import load_model

# Load the full Keras model
model = load_model("id_card_classifier.keras")

# Make predictions
predictions = model.predict(input_data)
```
### For TFLite Interpreter

```python
import tensorflow as tf

# Load the TFLite model
interpreter = tf.lite.Interpreter(model_path="id_card_embedding_model.tflite")
interpreter.allocate_tensors()

# Get input and output details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])
```
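Both quick-start snippets assume an `input_data` batch shaped for the models' 224x224 RGB input. A minimal way to build one with Pillow and NumPy (the [0, 1] scaling and the sample filename are assumptions; match the preprocessing used during training):

```python
import numpy as np
from PIL import Image

def load_input(image_path, size=(224, 224)):
    # Load, convert to RGB, and resize to the model's expected input size
    image = Image.open(image_path).convert("RGB").resize(size)
    # Scale to [0, 1] and add a batch dimension -> shape (1, 224, 224, 3)
    array = np.asarray(image, dtype=np.float32) / 255.0
    return np.expand_dims(array, axis=0)

input_data = load_input("sample_id_card.jpg")  # hypothetical sample image
```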
## Download & Installation

### Clone with Git LFS

```bash
git lfs install
git clone https://huggingface.co/Ajay007001/Android-Projekt
```

### Download a Specific Model

```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Ajay007001/Android-Projekt",
    filename="id_classifier.tflite"
)
```
## Model Architecture

### Siamese Network for One-Shot Learning

```
Input (224x224x3)
        ↓
MobileNetV3Small (pretrained on ImageNet)
        ↓
GlobalAveragePooling2D
        ↓
Dense(256, activation='relu')
        ↓
L2 Normalization
        ↓
Embedding Vector (256-dim)
```
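A minimal Keras sketch of this embedding branch (layer names, input preprocessing, and the exact normalization layer are assumptions; the shipped models may differ in detail):

```python
import tensorflow as tf

def build_embedding_branch(input_shape=(224, 224, 3), embedding_dim=256):
    # Pretrained MobileNetV3Small backbone (ImageNet weights, classification head removed)
    backbone = tf.keras.applications.MobileNetV3Small(
        input_shape=input_shape, include_top=False, weights="imagenet"
    )

    inputs = tf.keras.Input(shape=input_shape)
    x = backbone(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(embedding_dim, activation="relu")(x)
    # L2-normalize so embeddings can be compared with cosine similarity
    outputs = tf.keras.layers.Lambda(
        lambda t: tf.math.l2_normalize(t, axis=1), name="l2_norm"
    )(x)
    return tf.keras.Model(inputs, outputs, name="id_card_embedding")

embedding_model = build_embedding_branch()
embedding_model.summary()
```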
Training Strategy (see the configuration sketch after this list):
- Base Model: MobileNetV3Small (transfer learning)
- Embedding Dimension: 256
- Loss Function: Binary Crossentropy (for Siamese pairs)
- Optimizer: Adam (lr=0.0001)
- Data Augmentation: Random flip, rotation, zoom, contrast
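A hedged sketch of this training configuration (the augmentation strengths and the L1-distance Siamese head are assumptions; only the loss, optimizer, and augmentation types come from the list above):

```python
import tensorflow as tf

# Data augmentation matching the strategy above (parameter values are assumed)
augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomContrast(0.1),
])

# Siamese head: compare two 256-dim embeddings and predict same/different (binary)
emb_a = tf.keras.Input(shape=(256,))
emb_b = tf.keras.Input(shape=(256,))
distance = tf.keras.layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
same_prob = tf.keras.layers.Dense(1, activation="sigmoid")(distance)
siamese_head = tf.keras.Model([emb_a, emb_b], same_prob)

siamese_head.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```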
### One-Shot Learning Process
- Generate embedding for input image
- Compare with reference embeddings using cosine similarity
- Classify based on highest similarity score
- Apply a confidence threshold for verification (see the sketch below)
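A minimal sketch of these steps (the reference-embedding store and the 0.7 threshold are illustrative assumptions):

```python
import numpy as np

def cosine_similarity(a, b):
    # Embeddings are already L2-normalized, so the dot product is the cosine similarity
    return float(np.dot(a, b))

def classify_one_shot(query_embedding, reference_embeddings, threshold=0.7):
    """reference_embeddings: dict mapping card type -> stored 256-dim embedding."""
    scores = {
        label: cosine_similarity(query_embedding, ref)
        for label, ref in reference_embeddings.items()
    }
    best_label = max(scores, key=scores.get)
    if scores[best_label] < threshold:
        return "unknown", scores[best_label]
    return best_label, scores[best_label]
```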
## Integration Tips

### Android Studio Setup

- Place the `.tflite` files in `app/src/main/assets/`
- Add the TensorFlow Lite dependencies:

  ```gradle
  implementation 'org.tensorflow:tensorflow-lite:2.14.0'
  implementation 'org.tensorflow:tensorflow-lite-support:0.4.4'
  implementation 'org.tensorflow:tensorflow-lite-gpu:2.14.0'
  ```

- Load and run inference in your Activity/Fragment
### Memory Considerations

⚠️ Important: The `id_card_embedding_model.keras` (191 MB) requires significant memory. For mobile deployment, use the `.tflite` versions (1-1.3 MB), which are optimized and quantized.
## Performance Metrics
| Model | Accuracy | Inference Time* | Mobile FPS |
|---|---|---|---|
| Embedding Model (TFLite) | 94.2% | ~25ms | ~40 FPS |
| Classifier (TFLite) | 96.8% | ~18ms | ~55 FPS |
*Tested on Snapdragon 888 with NNAPI acceleration
## Development

### Loading Models with Custom Layers

The Keras models use a custom `L2Norm` layer. Load them with:

```python
import tensorflow as tf

class L2Norm(tf.keras.layers.Layer):
    def call(self, inputs):
        return tf.math.l2_normalize(inputs, axis=1)

    def get_config(self):
        return super().get_config()

model = tf.keras.models.load_model(
    "id_card_embedding_model.keras",
    custom_objects={'L2Norm': L2Norm}
)
```
### Fine-tuning

```python
from tensorflow.keras.models import load_model

# Load the base model
base_model = load_model("id_card_classifier.keras")

# Freeze the early layers
for layer in base_model.layers[:-5]:
    layer.trainable = False

# Add custom layers for your specific use case, building `model` on top of base_model
# ... your architecture

# Compile and train
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(train_data, epochs=10)
```
### Convert Keras to TFLite

```python
import tensorflow as tf

# Load the Keras model
model = tf.keras.models.load_model("id_card_classifier.keras")

# Convert to TFLite with optimization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# For INT8 quantization (smaller size, faster inference)
# `dataset` is assumed to be a tf.data.Dataset yielding preprocessed sample images
def representative_dataset():
    for data in dataset.take(100):
        yield [data]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()

# Save the quantized model
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```
## Mobile Deployment Best Practices
- Use TFLite models for production apps (smaller, faster)
- Enable GPU acceleration when available
- Implement model caching to avoid repeated loading
- Use NNAPI delegate for hardware acceleration
- Batch predictions for multiple images
- Monitor memory usage and release resources properly
Example GPU delegation:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate

val options = Interpreter.Options()
val gpuDelegate = GpuDelegate()
options.addDelegate(gpuDelegate)
val interpreter = Interpreter(modelFile, options)
```
## Testing & Validation

### Test Inference Script

```python
import tensorflow as tf
import numpy as np

# Load the TFLite model
interpreter = tf.lite.Interpreter(model_path="id_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Prepare a random sample input matching the model's input shape
input_shape = input_details[0]['shape']
sample_input = np.random.rand(*input_shape).astype(np.float32)

# Run inference
interpreter.set_tensor(input_details[0]['index'], sample_input)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])

print(f"Input shape: {input_shape}")
print(f"Output shape: {output.shape}")
print(f"Predictions: {output}")
```
## Model Card Metadata

- Task: Image Classification (One-Shot Learning)
- Framework: TensorFlow/Keras 2.x
- Input: RGB images (224x224)
- Output:
  - Embedding models: 256-dimensional feature vectors
  - Classifier models: 5-class probabilities (PAN, Aadhaar, DL, Passport, Voter ID); see the decoding sketch below
- Training Data: Custom dataset of Indian ID cards
- Evaluation Metrics: Accuracy, Cosine Similarity, Precision, Recall
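For the classifier models, a hedged sketch of turning the 5-class probability vector into a label (the class ordering in `CLASS_NAMES` is an assumption; verify it against the training label encoding):

```python
import numpy as np

# Assumed class order; confirm against the label encoding used during training
CLASS_NAMES = ["PAN", "Aadhaar", "Driving License", "Passport", "Voter ID"]

def decode_prediction(probabilities):
    """probabilities: array of shape (5,) produced by the classifier model."""
    idx = int(np.argmax(probabilities))
    return CLASS_NAMES[idx], float(probabilities[idx])

label, confidence = decode_prediction(np.array([0.05, 0.85, 0.04, 0.03, 0.03]))
print(label, confidence)  # -> Aadhaar 0.85
```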
## Citation

If you use these models in your research or application, please cite:

```bibtex
@misc{android-projekt-2025,
  author       = {Ajay Vasan},
  title        = {Android-Projekt: ID Card Classification \& Embedding Models},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Ajay007001/Android-Projekt}}
}
```
## Related Resources
- GitHub Repository: Android-Projekt
- TensorFlow Lite Guide: Official Documentation
- MobileNetV3 Paper: Searching for MobileNetV3
- Siamese Networks: Learning a Similarity Metric Discriminatively
## Contact & Support
For questions, issues, or contributions:
- Open an issue on GitHub
- Check the documentation
## ⚠️ Limitations & Ethical Considerations
- Data Privacy: Ensure compliance with local data protection laws (GDPR, etc.)
- Bias: Models trained on Indian ID cards may not generalize to other countries
- Security: Implement additional verification for high-security applications
- Accuracy: Not 100% accurate - human verification recommended for critical use cases
- Lighting: Performance may degrade in poor lighting conditions
- Orientation: Works best with properly oriented ID card images
## License
This project is licensed under the MIT License - see the LICENSE file for details.
Model Version: 1.0.0
Last Updated: October 2025
Maintained by: Ajay Vasan
## Model File Notice

The large embedding model (`id_card_embedding_model.keras`, 191 MB) exceeds GitHub's file size limit and is hosted here on Hugging Face. For production Android apps, we recommend the optimized TFLite versions, which are over 100x smaller and significantly faster.
Made with ❤️ for the open-source community