---
language: en
license: mit
tags:
- image-classification
- imagenet
- multi-scale
- feature-geometry
- david
datasets:
- imagenet-1k
metrics:
- accuracy
model-index:
- name: David-decoupled-deep_efficiency
  results:
  - task:
      type: image-classification
    dataset:
      name: ImageNet-1K
      type: imagenet-1k
    metrics:
    - type: accuracy
      value: 62.94
---
# David: Multi-Scale Feature Classifier
**David** is a multi-scale deep learning classifier that uses feature geometry (pentachora/4-simplexes)
as class prototypes with role-weighted similarity computation (Rose Loss).
This version feeds multiple CLIP-ViT feature variants simultaneously into one shared space.
The experiment tests whether substantially divergent variants, such as clip-vit-b-patch32 and
clip-vit-b-patch16, can coexist in the same shared space with the correct checks and spacings applied.
## Model Details
### Architecture
- **Preset**: gated_expert_team
- **Sharing Mode**: decoupled
- **Fusion Mode**: deep_efficiency
- **Scales**: [128, 256, 384, 448, 512, 576, 640, 768, 896]
- **Feature Dim**: 512
- **Parameters**: 22,133,801
### Training Configuration
- **Dataset**: AbstractPhil/imagenet-clip-features-orderly
- **Model Variants**: `clip_vit_b16`, `clip_vit_laion_b32`, `clip_vit_b32`
- **Epochs**: 10
- **Batch Size**: 1024
- **Learning Rate**: 0.01
- **Rose Loss Weight**: 0.1 → 0.8
- **Cayley Loss**: False
## Performance
### Best Results
- **Validation Accuracy**: 62.94%
- **Best Epoch**: 9
- **Final Train Accuracy**: 61.07%
### Per-Scale Performance
- **Scale 128**: 62.94%
- **Scale 256**: 71.08%
- **Scale 384**: 73.44%
- **Scale 448**: 74.29%
- **Scale 512**: 74.61%
- **Scale 576**: 75.04%
- **Scale 640**: 75.18%
- **Scale 768**: 75.58%
- **Scale 896**: 75.90%
## Usage
### Quick Model Lookup
**Check `MODELS_INDEX.json` in the repo root** - it lists all trained models sorted by accuracy with links to weights and configs.
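A minimal sketch of pulling the index down for inspection (its exact schema isn't documented in this card, so this simply loads and previews the raw JSON):
```python
import json
from huggingface_hub import hf_hub_download

# Download the master index from the repo root
index_path = hf_hub_download(
    repo_id="AbstractPhil/david-shared-space",
    filename="MODELS_INDEX.json",
)

# Preview the entries (sorted by accuracy per the note above)
with open(index_path) as f:
    models_index = json.load(f)
print(json.dumps(models_index, indent=2)[:2000])
```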
### Repository Structure
```
AbstractPhil/david-shared-space/
├── MODELS_INDEX.json                     # 📊 Master index of all models (sorted by accuracy)
├── README.md                             # This file
├── best_model.json                       # Latest best model info
├── weights/
│   └── david_gated_expert_team/
│       └── 20251013_004438/
│           ├── MODEL_SUMMARY.txt         # 🎯 Human-readable performance summary
│           ├── training_history.json     # 📈 Epoch-by-epoch training curve
│           ├── best_model_acc62.94.safetensors   # ⭐ Accuracy in filename!
│           ├── best_model_acc62.94_metadata.json
│           ├── final_model.safetensors
│           ├── checkpoint_epoch_X_accYY.YY.safetensors
│           ├── david_config.json
│           └── train_config.json
└── runs/
    └── david_gated_expert_team/
        └── 20251013_004438/
            └── events.out.tfevents.*     # TensorBoard logs
```
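The same layout can be listed programmatically with `huggingface_hub`:
```python
from huggingface_hub import list_repo_files

# Enumerate every file in the repo, e.g. to find checkpoint filenames
for path in list_repo_files("AbstractPhil/david-shared-space"):
    if path.endswith(".safetensors"):
        print(path)
```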
### Loading the Model
```python
from geovocab2.train.model.core.david import David, DavidArchitectureConfig
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Browse available models in MODELS_INDEX.json first!
# Specify model variant and run
model_name = "david_gated_expert_team"
run_id = "20251013_004438"
accuracy = "62.94"  # From MODELS_INDEX.json

# Download config
config_path = hf_hub_download(
    repo_id="AbstractPhil/david-shared-space",
    filename=f"weights/{model_name}/{run_id}/david_config.json"
)
config = DavidArchitectureConfig.from_json(config_path)

# Download weights (accuracy in filename!)
weights_path = hf_hub_download(
    repo_id="AbstractPhil/david-shared-space",
    filename=f"weights/{model_name}/{run_id}/best_model_acc{accuracy}.safetensors"
)

# Download training history (optional - see full training curve)
history_path = hf_hub_download(
    repo_id="AbstractPhil/david-shared-space",
    filename=f"weights/{model_name}/{run_id}/training_history.json"
)

# Load model
david = David.from_config(config)
david.load_state_dict(load_file(weights_path))
david.eval()
```
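With `history_path` from the snippet above, the epoch-by-epoch curve can be inspected directly (the exact keys depend on the training script, so this just previews the structure):
```python
import json

with open(history_path) as f:
    history = json.load(f)
print(json.dumps(history, indent=2)[:1000])
```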
### Inference
```python
import torch

# get_clip_features is a placeholder for your own CLIP feature extractor;
# David expects 512-dim features (e.g. ViT-B/16 or ViT-B/32 image embeddings)
features = get_clip_features(image)  # [1, 512]

# Load the class-prototype anchors (pentachora vertices)
anchors_dict = torch.load("anchors.pth")

# Forward pass
with torch.no_grad():
    logits, _ = david(features, anchors_dict)
predictions = logits.argmax(dim=-1)
```
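If you need a feature extractor, one option is the `transformers` CLIP implementation; the checkpoint ID below is the standard OpenAI ViT-B/16, so swap in whichever variant your run targets:
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# ViT-B/16 produces 512-dim image features, matching David's feature dim
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch16").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    features = clip.get_image_features(**inputs)  # [1, 512]
```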
## Architecture Overview
### Multi-Scale Processing
David processes inputs at multiple scales (128, 256, 384, 448, 512, 576, 640, 768, 896),
allowing it to capture both coarse and fine-grained features.
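The per-scale widths come straight from the config above. Purely as an illustration (a hypothetical module, not the geovocab2 implementation), projecting a single 512-dim feature into each scale's width could look like:
```python
import torch
import torch.nn as nn

SCALES = [128, 256, 384, 448, 512, 576, 640, 768, 896]

class MultiScaleProjection(nn.Module):
    """Illustrative only: one linear head per scale width."""
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.heads = nn.ModuleDict(
            {str(s): nn.Linear(feature_dim, s) for s in SCALES}
        )

    def forward(self, x: torch.Tensor) -> dict:
        return {s: head(x) for s, head in self.heads.items()}

proj = MultiScaleProjection()
outs = proj(torch.randn(1, 512))
print({s: tuple(t.shape) for s, t in outs.items()})
```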
### Shared Representation Space
This variant embeds features from multiple CLIP-ViT models into a single shared representation space.
### Feature Geometry
Each class is represented by a pentachoron (4-simplex) in embedding space with 5 role vertices, sketched in code after this list:
- **Anchor**: Primary class representative
- **Need**: Complementary direction
- **Relation**: Contextual alignment
- **Purpose**: Functional direction
- **Observer**: Meta-perspective
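In tensor terms, the prototypes can be pictured as a `[num_classes, 5, dim]` array with one vertex per role; a sketch under that assumption (the real anchors ship as `anchors.pth`):
```python
import torch

ROLES = ["anchor", "need", "relation", "purpose", "observer"]
num_classes, dim = 1000, 512

# One pentachoron per class: 5 role vertices in embedding space
pentachora = torch.randn(num_classes, len(ROLES), dim)
anchor_vertices = pentachora[:, ROLES.index("anchor")]  # [1000, 512]
```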
### Rose Loss
Similarity computation uses role-weighted cosine similarities:
```
score = w_anchor * sim(z, anchor) + w_need * sim(z, need) + ...
```
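A minimal PyTorch rendering of that score, assuming the `[num_classes, 5, dim]` prototype layout sketched above and hypothetical role weights (illustrative, not the library's Rose Loss):
```python
import torch
import torch.nn.functional as F

def rose_scores(z, pentachora, role_weights):
    """z: [B, dim]; pentachora: [C, 5, dim]; role_weights: [5]."""
    z = F.normalize(z, dim=-1)
    verts = F.normalize(pentachora, dim=-1)
    # Cosine similarity of each sample to every role vertex of every class
    sims = torch.einsum("bd,cvd->bcv", z, verts)  # [B, C, 5]
    return (sims * role_weights).sum(dim=-1)      # [B, C]

scores = rose_scores(
    torch.randn(4, 512),
    torch.randn(1000, 5, 512),
    torch.tensor([0.4, 0.2, 0.15, 0.15, 0.1]),  # hypothetical weights
)
print(scores.shape)  # torch.Size([4, 1000])
```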
### Fusion Strategy
**deep_efficiency**: combines the per-scale predictions into a single fused output; see the sketch below.
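The specifics of `deep_efficiency` live in `david_config.json`; as a generic illustration of scale fusion (an assumption, not the actual fusion head), learned softmax weights over per-scale logits:
```python
import torch
import torch.nn as nn

class WeightedScaleFusion(nn.Module):
    """Illustrative fusion: softmax-weighted sum of per-scale logits."""
    def __init__(self, num_scales: int):
        super().__init__()
        self.scale_logits = nn.Parameter(torch.zeros(num_scales))

    def forward(self, per_scale):
        w = self.scale_logits.softmax(dim=0)
        return sum(wi * li for wi, li in zip(w, per_scale))

fusion = WeightedScaleFusion(num_scales=9)
fused = fusion([torch.randn(4, 1000) for _ in range(9)])  # [4, 1000]
```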
## Training Details
### Loss Components
- **Cross-Entropy**: Standard classification loss
- **Rose Loss**: Pentachora role-weighted margin loss (weight: 0.1→0.8)
- **Cayley Loss**: Geometric regularization (disabled)
### Optimization
- **Optimizer**: AdamW
- **Weight Decay**: 1e-05
- **Scheduler**: cosine_restarts
- **Gradient Clip**: 10.0
- **Mixed Precision**: False
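The settings above map onto a short, self-contained sketch (a stand-in linear model replaces David so it runs on its own; the restart period and the shape of the Rose-weight ramp are assumptions):
```python
import torch
import torch.nn as nn

david = nn.Linear(512, 1000)  # stand-in for David, illustration only

optimizer = torch.optim.AdamW(david.parameters(), lr=0.01, weight_decay=1e-5)
# "cosine_restarts" rendered as torch's warm-restarts scheduler; T_0 is an assumption
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10)
criterion = nn.CrossEntropyLoss()

EPOCHS = 10
for epoch in range(EPOCHS):
    # Rose Loss weight ramps from 0.1 to 0.8 (linear here; the actual
    # schedule shape is not specified in this card)
    rose_weight = 0.1 + (0.8 - 0.1) * epoch / (EPOCHS - 1)

    features = torch.randn(8, 512)
    labels = torch.randint(0, 1000, (8,))
    loss = criterion(david(features), labels)  # + rose_weight * rose_loss in the real run

    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(david.parameters(), max_norm=10.0)
    optimizer.step()
    scheduler.step()
```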
## Citation
```bibtex
@software{david_classifier_2025,
  title  = {David: Multi-Scale Feature Classifier},
  author = {AbstractPhil},
  year   = {2025},
  url    = {https://huggingface.co/AbstractPhil/david-shared-space},
  note   = {Run ID: 20251013_004438}
}
```
## License
MIT License
## Acknowledgments
Built with feature lattice geometry and multi-scale deep learning.
Special thanks to Claude (Anthropic) for debugging assistance.
---
*Generated on 2025-10-13 01:33:26*