AI & ML interests

None defined yet.

codelionΒ 
posted an update 5 days ago
view post
Post
2296
Introducing PTS Visualizer - an interactive tool for exploring how language models reason!

Visualize pivotal tokens, thought anchors, and reasoning circuits. See which tokens and sentences significantly impact success probability, explore embedding clusters, and trace reasoning step-by-step.

Try it: codelion/pts-visualizer

Explore PTS datasets:
- Qwen3-0.6B: codelion/Qwen3-0.6B-pts
- DeepSeek-R1: codelion/DeepSeek-R1-Distill-Qwen-1.5B-pts

Or upload your own JSONL files!

GitHub: https://github.com/codelion/pts
ZennyKennyΒ 
posted an update 7 days ago
view post
Post
1920
πŸ“ One of the coolest parts about being an early Strawberry user has been the opportunity to build on the app at the ground floor.

The platform already has a ton of great integrations that let you interact with your external apps directly with tools, but I wanted to add the ability to do stuff in Slack as well.

πŸ’ͺ So I took the base Anthropic Slack MCP server, added a whole bunch of new tools, and generalized it as an HTTP-based SSE-server and deployed it in like 2 minutes with Railway so that Strawberry could make use of it (as can Claude or any other MCP client).

Now, you can Chat with your Strawberry Companion (or Claude, or whatever) and do things like:
➑️ Get caught up across all of your Slack channels after a long weekend or noisy incident without having to read 20 threads in 10 different channels
➑️ Create, read, and edit Canvases, Messages, and Channels
➑️ Take any resources or content that you're using in your Chat and inject it directly into Slack without copy / paste

😎 I'm pretty pleased with the results, and I made a short demo video showing the results of the work (link in comments). The best part is, it's available on GitHub for anyone else to use too (link in the comments, instructions in the README). The setup takes about 5-10 minutes.
  • 2 replies
Β·
davidberenstein1957Β 
posted an update 9 days ago
daqcΒ 
posted an update 12 days ago
view post
Post
4186
Check out your 2025 Hugging Face Wrapped, a small experimental recap
hf-wrapped/2025
Β·
codelionΒ 
posted an update 17 days ago
view post
Post
2557
Recently, Essential AI released a new 8B base model EssentialAI/rnj-1 they highlighted the importance of data mix for pretraning -

"In the long run, we expect our methods to automatically represent, transform, and blend data to optimize measurable abilities in pre-training. Our work on modeling data taxonomies led to new approaches for jointly clustering and mixing data distributions under data repetition penalties. Many improvements in our STEM abilities can be traced back to this. "

This resonates with the recent work we did around optimal dataset mixing for pretraining where we saw have the right mix can increase the efficiency of training -
https://huggingface.co/blog/codelion/optimal-dataset-mixing
codelionΒ 
posted an update 19 days ago
ZennyKennyΒ 
posted an update 19 days ago
view post
Post
209
What a trip. Just walked through @burtenshaw and @evalstate tutorial on adding Hugging Face Skills to your Claude Code agent so you can fine tune LLMs by chatting with AI.

These are the kinds of innovations that are going to help everyone benefit from the power of Artificial Intelligence. Well done gentlemen and thank you for sharing.
  • 1 reply
Β·
codelionΒ 
posted an update 21 days ago
view post
Post
2294
Perplexity released a dataset (BrowseSafe) and benchmark to catch and prevent malicious prompt-injection instructions in real-time.

We trained a prompt injection classifier on BrowseSafe using adaptive-classifier with ModernBERT-base embeddings.

74.9% F1 on detecting prompt injection in web content.

Model -> adaptive-classifier/browsesafe
Dataset -> perplexity-ai/browsesafe-bench
Repo -> https://github.com/codelion/adaptive-classifier
  • 1 reply
Β·
codelionΒ 
posted an update 22 days ago
view post
Post
1598
I just published Ellora - 6 production-ready LoRA recipes for enhancing LLMs with specific capabilities. Each recipe costs under $100 to run and includes complete training code, data generation, and evaluation.

The 6 Recipes:
Recipe 1: Accuracy Recovery - Recover 75% of quantization losses with self-distillation
Recipe 2: Reasoning LoRA - Add structured thinking with GRPO (0% to 60% adoption, 75% quality boost)
Recipe 3: Tool Calling - Real execution on actual codebases
Recipe 4: Context Extension - Scale from 32K to 2M tokens (61x increase)
Recipe 5: Secure Code Generation - 97% vulnerability reduction using automated Semgrep analysis
Recipe 6: Execution-Aware World Models - Teaching models runtime behavior

Why Recipes?
Ellora provides methodologies, not frameworks. Use them with your existing tools (PEFT, LoRAX, vLLM, Unsloth, HuggingFace). Each recipe uses self-supervised data generation (Magpie approach) - no expensive human labeling required.

All recipes include Jupyter notebooks you can run immediately with clear success metrics.

GitHub: https://github.com/codelion/ellora
Full Article: https://huggingface.co/blog/codelion/ellora-lora-recipes

Built something with these recipes? I'd love to see what you create!
ZennyKennyΒ 
posted an update 25 days ago
view post
Post
253
😐 I keep seeing takes on LinkedIn from American business influencers melting down about Silicon Valley startup "dependence" on open-source Chinese models.

πŸ€” Can anyone describe a credible scenario where these models can be leveraged by the Chinese government to endanger American security interests or am I right to believe that this is just Red Scare nonsense?
  • 2 replies
Β·
ZennyKennyΒ 
posted an update about 1 month ago
view post
Post
429
The #feedback channel of app early access Slack Workspaces is some of the best unintentional comedy material I have ever come across tbh.
codelionΒ 
posted an update about 1 month ago
view post
Post
1982
Introducing OpenEvolve Prompt Optimizer - a Space that automatically evolves and optimizes your prompts using OpenEvolve!

This tool uses OpenEvolve to iteratively improve prompts by testing them on real datasets and evolving better versions. No more manual prompt engineering guesswork - let OpenEvolve find the optimal prompts for you.

How it works:
- Enter your initial prompt using {input} as a placeholder for dataset inputs
- Input any HuggingFace dataset name you want to use for optimization
- Specify the dataset split and field names for your use case
- Click Optimize Prompt and the system will validate everything first
- Compare your initial prompt vs the evolved best prompt side-by-side

Try it here: algorithmicsuperintelligence/prompt-optimizer

OpenEvolve GitHub: https://github.com/algorithmicsuperintelligence/openevolve
ZennyKennyΒ 
posted an update about 1 month ago
view post
Post
3157
πŸŽ‰ Wow. Congratulations @bfirsh and the Replicate team on the CloudFlare acquisition!

✌️ You've really built an incredible ecosystem and product offering and should be super proud.
codelionΒ 
posted an update about 1 month ago
view post
Post
2467
🎯 Introducing Chayan: A Calibrated 4-Model LLM Router Achieving 69% Accuracy on RouterArena

We're excited to share Chayan, a cost-efficient LLM router that intelligently routes queries between 4 models to maximize accuracy while minimizing cost. Chayan just submitted to the RouterArena leaderboard and achieved 69.05% accuracy on the benchmark!

πŸ”— Model: adaptive-classifier/chayan
πŸ”— Dataset: RouteWorks/RouterArena

πŸ“Š Performance Highlights

Chayan achieves impressive results on the RouterArena benchmark:
β€’ 69.05% accuracy (would rank #1 on current leaderboard)
β€’ $0.333 per 1K queries
β€’ +12.07pp improvement over all-mini baseline (56.98%)
β€’ 99% of perfect 2-model oracle performance at 57% lower cost

Compared to our previous 2-model router (61.43% accuracy), Chayan delivers +7.62pp improvement through smarter 4-model routing.

🧠 How It Works

Chayan uses an Adaptive K-NN classifier with prototype memory to route between 4 models:
β€’ openai/gpt-4o-mini (fast & cheap)
β€’ google/gemini-2.5-flash-lite (balanced)
β€’ google/gemini-2.5-flash (capable)
β€’ openai/gpt-4o (most powerful)

πŸš€ Getting Started

You can use Chayan directly from HuggingFace:

from adaptive_classifier import AdaptiveClassifier

Load Chayan
router = AdaptiveClassifier.load("adaptive-classifier/chayan")

Route a query
query = "What is the capital of France?"
predictions = router.predict(query, k=4)

Get top model recommendation
best_model = predictions[0][0]
print(f"Recommended model: {best_model}")

Built with the adaptive-classifier library: https://github.com/codelion/adaptive-classifier
codelionΒ 
posted an update about 2 months ago
view post
Post
3986
Want to experiment with pre-training dataset mixtures but don't want to process terabytes of data? We've got you covered.

We're releasing a collection of several carefully curated 1B token dataset samples specifically designed for rapid prototyping and pretraining experiments: https://huggingface.co/collections/codelion/pre-training-dataset-samples

These samples were created using reservoir sampling - an algorithm that guarantees statistically unbiased random samples from massive source datasets. This means results you get at the 1B token scale are representative of how these datasets behave at 100B+ token scales, letting you iterate quickly without the computational overhead.

The collection includes:
- finePDFs-1B: High-quality textbook-style educational content
- DCLM-baseline-1B: Filtered, diverse web content
- FineWeb-Edu-1B: Curated educational web resources

We used these exact samples to run 50+ systematic experiments on dataset mixing strategies, ultimately discovering that a 50-30-20 mixture of finePDFs + DCLM-baseline + FineWeb-Edu achieves 90%+ of GPT-2's performance with just 1/10th the training data.

Whether you're researching optimal data mixtures, testing curriculum learning strategies, or just want to quickly prototype a pretraining run, these samples give you a solid foundation to start experimenting immediately.

Read the full story of how we used these datasets to find the optimal pretraining recipe: https://huggingface.co/blog/codelion/optimal-dataset-mixing
  • 1 reply
Β·
ZennyKennyΒ 
posted an update about 2 months ago
view post
Post
334
πŸŽ‰ Novoyaz is live.

A few months ago, I built a quick POC in Hugging Face that used a fine-tuned variant of OpenAI's OSS-20B model that I trained to convert the text from pre-reform Russian-language documents into modern Russian orthography.

⚑️ This morning, I launched novoyaz.io.

This is a production app, the frontend for which I built in like two hours with Lovable, that uses that same fine-tuned model for transliteration, but now has a bunch of extra features that make using it even easier (like taking and uploading pictures with your on-device camera for example πŸ˜…).

πŸ‘‰ If you're a researcher, or know a researcher, for whom this app will improve their day-to-day workflows, please get in touch with me.
codelionΒ 
posted an update about 2 months ago
view post
Post
275
MARS Achieves Strong Results on Google DeepMind's IMO-Bench

We evaluated OptiLLM's MARS (Multi-Agent Reasoning System) approach on IMO-Bench, Google DeepMind's challenging mathematical reasoning benchmark with International Mathematical Olympiad-level problems.

What is MARS?

MARS is a multi-agent reasoning technique that works with any LLM. It uses 3 parallel reasoning agents that independently solve problems, then verifies their solutions through consensus and iterative refinement. The key advantage: it's model-agnostic and can be applied to any base model through OptiLLM's inference proxy.

Results on IMO-Bench:

AnswerBench (400 short-answer problems):
MARS: 36.0% (144/400 correct)
Baseline: 24.5% (98/400 correct)
Improvement: +11.5pp across all domains

Category breakdown:
- Algebra: 33% (vs 21% baseline)
- Combinatorics: 26% (vs 19% baseline)
- Geometry: 43% (vs 28% baseline)
- Number Theory: 42% (vs 30% baseline)

ProofBench (60 proof construction problems):
MARS: 26.7% (16/60 correct)
Baseline: 18.3% (11/60 correct)
Improvement: +8.4pp

Category breakdown:
- Number Theory: 42.9% (vs 14.3% baseline)
- Combinatorics: 37.5% (vs 31.2% baseline)
- Algebra: 18.8% (vs 25.0% baseline)
- Geometry: 7.1% (vs 0.0% baseline)

All results achieved using google/gemini-2.5-flash-lite-preview-09-2025 as the base model. The same MARS approach can enhance reasoning for any model through OptiLLM's OpenAI-compatible API.

Datasets available at:
AnswerBench: huggingface.co/datasets/Hwilner/imo-answerbench
ProofBench: huggingface.co/datasets/Hwilner/imo-proofbench

Try it yourself:

python optillm.py --approach mars --model google/gemini-2.5-flash-lite-preview-09-2025

Or via API with approach prefix:

model: "mars-google/gemini-2.5-flash-lite-preview-09-2025"

Full evaluation code and results available at: github.com/algorithmicsuperintelligence/optillm
codelionΒ 
posted an update about 2 months ago
view post
Post
3605
On this day in 2019, OpenAI released the final GPT-2 model as part of their staged release. I still remember that November well - so much was happening, but GPT-2's release felt like a watershed moment for the field. It showed us what was possible with carefully trained language models.

To recreate some of that GPT-2 magic, I recently tackled an interesting challenge: can you pretrain a language model with just 1 billion tokens - roughly 1/10th of what GPT-2 used - and still get comparable performance? After 50+ systematic experiments testing different dataset mixtures, the answer is yes.

The result is codelion/gpt-2-70m, which achieves over 90% of GPT-2's benchmark performance despite being trained on 10x less data. The key was finding the optimal dataset composition: 50% high-quality textbook PDFs, 30% filtered web content, and 20% educational resources. It even beats GPT-2 on TruthfulQA (47.31% vs 40.69%).

If you're interested in the full story of how we discovered this optimal mixture and why curriculum learning catastrophically failed, check out the complete article: https://huggingface.co/blog/codelion/optimal-dataset-mixing

Sometimes less really is more - when you mix it right.
  • 1 reply
Β·
codelionΒ 
posted an update about 2 months ago
view post
Post
390
The 1 Billion Token Challenge: Finding the Perfect Pre-training Mix

We trained a GPT-2 model to 90%+ performance using just 1/10th the training data through 50+ systematic experiments on dataset mixing strategies.

Key Finding:

A static mix of 50% finePDFs + 30% DCLM-baseline + 20% FineWeb-Edu consistently outperforms complex curriculum learning approaches. Static mixing is simpler, faster, and avoids catastrophic failures from hard distribution shifts.

Results:

Our GPT-2-70M model (70M parameters, 1B tokens) scores 38.15% on benchmarks vs GPT-2's 39.13% - only 0.98 points behind despite 10x less data and 44% fewer parameters. It even beats GPT-2 on TruthfulQA (47.31% vs 40.69%).

The takeaway: careful dataset curation matters more than total data volume.

Model: codelion/gpt-2-70m

Datasets: https://huggingface.co/collections/codelion/pre-training-dataset-samples

Full blog: https://huggingface.co/blog/codelion/optimal-dataset-mixing
ZennyKennyΒ 
posted an update about 2 months ago
view post
Post
346
Anyone got the scoop on a good OCR model that's available on inference?

Keen to make use of an endpoint (gated or not -- happy to pay for usage) for a personal project, but not so keen to pay for the GPU hosting myself.

πŸ™ˆπŸ™ˆπŸ™ˆ
  • 4 replies
Β·