AI & ML interests

CVPR Demo Track @ CVPR 2022

Recent Activity

DavidVivancos
posted an update 3 months ago
DavidVivancos
posted an update 4 months ago
DavidVivancos
posted an update 4 months ago
abidlabs
posted an update 4 months ago
Why I think local, open-source models will eventually win.

The most useful AI applications are moving toward multi-turn agentic behavior: systems that take hundreds or even thousands of iterative steps to complete a task, e.g., Claude Code, or computer-control agents that click, type, and test repeatedly.

In these cases, the power of the model lies not in how smart it is per token, but in how quickly it can interact with its environment and tools across many steps. In that regime, model quality becomes secondary to latency.

An open-source model that can call tools quickly, check that the right thing was clicked, or verify that a code change actually passes tests can easily outperform a slightly “smarter” closed model that has to make remote API calls for every move.

Eventually, the balance tips: it becomes impractical for an agent to rely on remote inference for every micro-action. Just as no one would tolerate a keyboard that required a network request per keystroke, users won’t accept agent workflows bottlenecked by latency. All devices will ship with local, open-source models that are “good enough” and the expectation will shift toward everything running locally. It’ll happen sooner than most people think.
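The per-step latency argument is easy to make concrete with back-of-the-envelope arithmetic. All the numbers below (step count, round-trip times) are illustrative assumptions, not measurements:

```python
# Cumulative latency of an agent taking many small steps.
# Illustrative assumptions: 1000 tool calls per task, ~300 ms network
# round-trip per remote API call vs ~30 ms per local inference step.

STEPS = 1000            # iterative tool calls in one agentic task (assumed)
REMOTE_RTT_S = 0.300    # assumed round-trip per remote API call
LOCAL_STEP_S = 0.030    # assumed per-step latency for a local model

remote_total = STEPS * REMOTE_RTT_S  # minutes of pure waiting
local_total = STEPS * LOCAL_STEP_S   # seconds of waiting

print(f"remote: ~{remote_total:.0f} s, local: ~{local_total:.0f} s")
```

At these assumed numbers, network round-trips alone add minutes per task, which is why the per-token quality gap can stop mattering.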
abidlabs
posted an update 6 months ago
Abhaykoul
posted an update 6 months ago
🚀 Ever dreamed of training your own Large Language Model from scratch? What if I told you it doesn't require a supercomputer or a PhD in ML? 🤯

Introducing LLM Trainer - the educational framework that makes LLM training accessible to EVERYONE! Whether you're on a CPU-only laptop or scaling to distributed GPUs, we've got you covered. 💻➡️🖥️

Why LLM Trainer? Because existing tools are either too simplistic (hiding the magic) or too complex (requiring expert knowledge). We bridge the gap with:

🎓 Educational transparency - every component built from scratch with clear code
💻 CPU-first approach - start training immediately, no GPU needed
🔧 Full customization - modify anything you want
📈 Seamless scaling - from laptop to cluster without code changes
🤝 HuggingFace integration - works with existing models & tokenizers

Key highlights:
✅ Built-in tokenizers (BPE, WordPiece, HF wrappers)
✅ Complete Transformer implementation from scratch
✅ Optimized for CPU training
✅ Advanced features: mixed precision, gradient checkpointing, multiple generation strategies
✅ Comprehensive monitoring & metrics
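To give a flavor of the "built from scratch" style: the core of a BPE tokenizer is just repeated merging of the most frequent adjacent symbol pair. This is a generic minimal sketch of that idea (not the LLM Trainer API; the toy word-frequency input is hypothetical):

```python
from collections import Counter

def bpe_merges(word_freqs, num_merges):
    """Learn BPE merges from a {word: frequency} dict (toy corpus).

    Returns the list of merged symbol pairs, in the order learned.
    """
    # Represent each word as a tuple of symbols, starting from characters.
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for syms, freq in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair wins
        merges.append(best)
        # Apply the merge everywhere it occurs.
        new_vocab = {}
        for syms, freq in vocab.items():
            out, i = [], 0
            while i < len(syms):
                if i + 1 < len(syms) and (syms[i], syms[i + 1]) == best:
                    out.append(syms[i] + syms[i + 1])
                    i += 2
                else:
                    out.append(syms[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges
```

Real tokenizers add byte-level fallback, special tokens, and fast merge application, but the learning loop is essentially this.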

Perfect for:
- Students learning transformers
- Researchers prototyping new ideas
- Developers building domain-specific models

Ready to train your first LLM? It's easier than you think!

🔗 Check it out: https://github.com/HelpingAI/llm-trainer
📚 Docs: Getting Started Guide
💬 Join the community: GitHub Discussions

#AI #MachineLearning #LLM #DeepLearning #OpenSource #Python #HuggingFace #NLP

Special thanks to the HuggingFace and PyTorch teams for the amazing ecosystem! 🙏
  • 1 reply
Β·
Abhaykoul
posted an update 8 months ago
🚀 Dhanishtha-2.0-preview-0825 Is Here

The Intermediate Thinking Model just leveled up again.

With sharper reasoning, better tool use, and expanded capabilities, Dhanishtha-2.0-preview-0825 is now live and ready to impress.

🧠 What Makes Dhanishtha Special?
Unlike typical CoT models, which think only once, Dhanishtha thinks iteratively:

> Think → Answer → Rethink → Improve → Rethink again if needed.
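The loop above can be sketched as a generic refine-until-satisfied routine. The `generate` and `critique` callables here are hypothetical stand-ins to show the control flow, not the Dhanishtha API:

```python
def iterative_answer(question, generate, critique, max_rounds=3):
    """Generic think/answer/rethink loop (illustrative sketch).

    generate(question, feedback=None) -> candidate answer
    critique(question, answer) -> feedback string, or None if satisfied
    """
    answer = generate(question)                  # Think -> Answer
    for _ in range(max_rounds):
        feedback = critique(question, answer)    # Rethink
        if feedback is None:                     # good enough: stop
            break
        answer = generate(question, feedback)    # Improve
    return answer
```

The point of the intermediate-thinking design is that the critique step runs between drafts, rather than all reasoning happening once before a single final answer.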

🔗 Try it now: HelpingAI/Dhanishtha-2.0-preview-0825

🔞 Dhanishtha NSFW Preview

For those exploring more expressive and immersive roleplay scenarios, we’re also releasing:

HelpingAI/Dhanishtha-nsfw
A specialized version tuned for adult-themed interactions and character-driven roleplay.

🔗 Explore it here: HelpingAI/Dhanishtha-nsfw

💬 You can also try all of these live at chat.helpingai.co