CodeReality-1T Evaluation Subset

Location

The completed evaluation subset (19GB) is located at:

/mnt/z/CodeReality_Final/codereality-1t/eval/subset/data/

Subset Statistics

Overall Metrics

  • Files: 323 JSONL files
  • Size: 19.0 GB
  • Repositories: ~2,049 (estimated)
  • Selection method: research-value scoring with diversity sampling
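The selection method is only summarized here. As a toy illustration of score-then-diversify sampling (not the actual CodeReality-1T pipeline; all field names below are hypothetical), one could rank candidate repositories by a research-value score and greedily pick high scorers while capping any single primary language's share of the subset:

from collections import Counter

def select_subset(candidates, target, max_share=0.35):
    """Toy greedy selection: take the highest-scoring repositories while
    capping how much any single primary language can dominate the subset."""
    picked, lang_counts = [], Counter()
    for repo in sorted(candidates, key=lambda r: r["score"], reverse=True):
        if len(picked) >= target:
            break
        lang = repo.get("language", "other")
        if lang_counts[lang] < max_share * target:
            picked.append(repo)
            lang_counts[lang] += 1
    return picked

# Example with made-up candidate records
candidates = [
    {"name": "repo-a", "score": 0.91, "language": "Python"},
    {"name": "repo-b", "score": 0.88, "language": "Python"},
    {"name": "repo-c", "score": 0.74, "language": "Java"},
]
print([r["name"] for r in select_subset(candidates, target=2, max_share=0.5)])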

Research Characteristics

| Characteristic | Count | Percentage | Details |
|---|---|---|---|
| Multi-repo files (5+ repos) | 323 | 100% | Files containing 5+ repositories each |
| Files with commit history | 2,049 | 100% | Complete git history available |
| Cross-language content | 1,495 | 73% | Repositories with multiple programming languages |
| Build system configurations | 798 | 39% | Makefile, package.json, build.gradle, pom.xml |
| Issue tracking data | 1,209 | 59% | GitHub/GitLab issues and discussions |
| Bug-fix commits | 1,845 | 90% | Commits identified as bug fixes |
| Test coverage | 1,332 | 65% | Repositories with test files |
| Documentation | 1,843 | 90% | README and documentation files |

Repository Size Distribution

| Size Range | Repositories | Average Size |
|---|---|---|
| Small (< 10 MB) | ~800 | 3.2 MB |
| Medium (10-100 MB) | ~1,100 | 42.1 MB |
| Large (> 100 MB) | ~149 | 187.3 MB |

Language Diversity (Estimated)

  • JavaScript/TypeScript: ~35% of content
  • Python: ~20% of content
  • Java/C/C++: ~25% of content
  • Mixed/Other: ~20% of content

Structure

eval_subset/
├── data/               # 280 curated JSONL files (15.1GB)
├── docs/               # Usage examples and documentation
├── eval_metadata.json  # Complete subset metadata
└── USAGE_EXAMPLES.md   # Demonstration code examples

Usage

The subset is ready for:

  • Code completion benchmarks (Pass@k evaluation; see the sketch after this list)
  • License detection training/testing
  • Cross-language analysis
  • Bug detection studies
  • Repository classification
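For the code completion task above, a standard metric is the unbiased Pass@k estimator. The snippet below is a minimal illustration of that formula only; it assumes you already have, per problem, the number of generated samples n and the number of samples c that passed the tests.

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k failing samples: any draw of k samples contains a pass
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 generated samples for a problem, 37 of them pass the tests
print(f"Pass@10 = {pass_at_k(n=200, c=37, k=10):.4f}")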

Access

To use the evaluation subset:

import json
import os

eval_dir = "/mnt/z/CodeReality_Final/codereality-1t/eval/subset"

# Load metadata
with open(os.path.join(eval_dir, 'eval_metadata.json'), 'r') as f:
    metadata = json.load(f)

print(f"Subset: {metadata['eval_subset_info']['name']}")
print(f"Files: {metadata['subset_statistics']['total_files']}")
print(f"Size: {metadata['subset_statistics']['total_size_gb']} GB")

# Load sample data: first repository record from each of the first 5 JSONL files
data_dir = os.path.join(eval_dir, 'data')
jsonl_files = sorted(f for f in os.listdir(data_dir) if f.endswith('.jsonl'))
for filename in jsonl_files[:5]:
    file_path = os.path.join(data_dir, filename)
    with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            repo_data = json.loads(line)
            print(f"Repository: {repo_data.get('name', 'Unknown')}")
            break  # Just the first repository from each file
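To process the whole subset without holding files in memory, records can be streamed one line at a time. This is a minimal sketch that assumes each line in the data files is a standalone JSON repository record, as in the snippet above; field names other than 'name' should be checked against eval_metadata.json.

import json
import os

def iter_repositories(data_dir):
    """Yield one parsed repository record at a time from every JSONL file."""
    for filename in sorted(os.listdir(data_dir)):
        if not filename.endswith('.jsonl'):
            continue
        with open(os.path.join(data_dir, filename), 'r',
                  encoding='utf-8', errors='ignore') as f:
            for line in f:
                line = line.strip()
                if line:
                    yield json.loads(line)

# Example: count repository records across the subset
data_dir = "/mnt/z/CodeReality_Final/codereality-1t/eval/subset/data"
total = sum(1 for _ in iter_repositories(data_dir))
print(f"Repository records: {total}")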

Benchmarks

Demonstration benchmarks are available in ../benchmarks/:

  • license_detection_benchmark.py
  • code_completion_benchmark.py

See ../benchmarks/README.md for detailed usage instructions.
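To give a sense of what the license detection task involves, here is a minimal keyword-based sketch; it is not the implementation of license_detection_benchmark.py, and the phrase patterns are illustrative only (real detectors use SPDX reference texts and fuzzy matching).

import re

# Illustrative phrase patterns keyed by SPDX-style license names
LICENSE_PATTERNS = {
    "MIT": r"Permission is hereby granted, free of charge",
    "Apache-2.0": r"Apache License,?\s+Version 2\.0",
    "GPL-3.0": r"GNU GENERAL PUBLIC LICENSE\s+Version 3",
}

def detect_license(text: str) -> str:
    """Return the first license whose characteristic phrase appears in the text."""
    for name, pattern in LICENSE_PATTERNS.items():
        if re.search(pattern, text, flags=re.IGNORECASE):
            return name
    return "unknown"

print(detect_license("Permission is hereby granted, free of charge, to any person..."))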

Metadata

The complete metadata in eval_metadata.json includes:

  • Selection methodology
  • Research characteristics
  • Size distribution
  • File manifest with checksums (see the verification sketch below)
  • Recommended evaluation tasks
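The manifest layout is not documented in this file, so the following is only a sketch of how checksum verification could look; the 'file_manifest', 'filename', and 'sha256' keys are assumptions and should be adjusted to match the actual structure of eval_metadata.json.

import hashlib
import json
import os

eval_dir = "/mnt/z/CodeReality_Final/codereality-1t/eval/subset"

with open(os.path.join(eval_dir, "eval_metadata.json"), "r") as f:
    metadata = json.load(f)

# 'file_manifest', 'filename', and 'sha256' are assumed key names for illustration
for entry in metadata.get("file_manifest", []):
    path = os.path.join(eval_dir, "data", entry["filename"])
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    status = "OK" if digest.hexdigest() == entry["sha256"] else "MISMATCH"
    print(f"{entry['filename']}: {status}")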

This subset is a curated, research-ready portion of the complete CodeReality-1T dataset, optimized for standardized evaluation and benchmarking.