AI & ML interests

Exploring Extreme Quantization techniques!

Parveshiiii 
posted an update 13 days ago
AIRealNet - SoTA AI-image detection model

We’re proud to release AIRealNet — a binary image classifier built to detect whether an image is AI-generated or a real human photograph. Based on SwinV2 and fine-tuned on the AI-vs-Real dataset, this model is optimized for high-accuracy classification across diverse visual domains.

If you care about synthetic media detection or want to explore the frontier of AI vs human realism, we’d love your support. Please like the model and try it out. Every download helps us improve and expand future versions.

Model page: XenArcAI/AIRealNet
Parveshiiii 
posted an update 24 days ago
Ever wanted an open‑source deep research agent? Meet Deepresearch‑Agent 🔍🤖

1. Multi‑step reasoning: Reflects between steps, fills gaps, iterates until evidence is solid.

2. Research‑augmented: Generates queries, searches, synthesizes, and cites sources.

3. Fullstack + LLM‑friendly: React/Tailwind frontend, LangGraph/FastAPI backend; works with OpenAI/Gemini.


🔗 GitHub: https://github.com/Parveshiiii/Deepresearch-Agent
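
For a feel of the loop described above, here's a conceptual sketch in plain Python. This is NOT code from the Deepresearch-Agent repo; the helper functions are hypothetical stand-ins for the LLM and web-search calls (the real project wires these up with LangGraph and FastAPI).

def generate_queries(question, gaps):
    # the real agent asks the LLM to turn the question (and any known gaps) into search queries
    return [f"{question} {g}" for g in (gaps or ["overview"])]

def web_search(query):
    # stand-in for a search-API call; returns fake documents so the sketch runs as-is
    return [{"url": f"https://example.com/{abs(hash(query)) % 1000}", "text": f"notes on {query}"}]

def reflect(question, evidence):
    # stand-in for the reflection step: is the evidence sufficient, and what is missing?
    return [] if len(evidence) >= 3 else ["missing details"]

def synthesize(question, evidence):
    # stand-in for the final synthesis-with-citations step
    cites = ", ".join(doc["url"] for doc in evidence)
    return f"Answer to '{question}', citing: {cites}"

question, evidence, gaps = "How do inference engines autoscale?", [], None
for _ in range(3):                       # bounded number of research iterations
    for q in generate_queries(question, gaps):
        evidence.extend(web_search(q))
    gaps = reflect(question, evidence)   # reflect between steps, find gaps
    if not gaps:                         # iterate until the evidence is solid
        break
print(synthesize(question, evidence))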
Parveshiiii 
posted an update 28 days ago
🚀 Big news from XenArcAI!

We’ve just released our new dataset: **Bhagwat‑Gita‑Infinity** 🌸📖

✨ What’s inside:
- Verse‑aligned Sanskrit, Hindi, and English
- Clean, structured, and ready for ML/AI projects
- Perfect for research, education, and open‑source exploration

🔗 Hugging Face: XenArcAI/Bhagwat-Gita-Infinity

Let’s bring timeless wisdom into modern AI together 🙌
Parveshiiii 
posted an update about 1 month ago
🚀 New Release from XenArcAI
We’re excited to introduce AIRealNet — our SwinV2‑based image classifier built to distinguish between artificial and real images.

✨ Highlights:
- Backbone: SwinV2
- Input size: 256×256
- Labels: artificial vs. real
- Performance: Accuracy 0.999 | F1 0.999 | Val Loss 0.0063

This model is now live on Hugging Face:
👉 XenArcAI/AIRealNet

We built AIRealNet to push forward open‑source tools for authenticity detection, and we can’t wait to see how the community uses it.
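
If you'd like to try it locally, here's a minimal sketch using the standard 🤗 Transformers image-classification pipeline. Treat it as a sketch rather than official usage: the bundled image processor is assumed to handle the 256×256 resize, and the label names follow the post above.

from transformers import pipeline

# load the classifier from the Hub (downloads weights on first run)
classifier = pipeline("image-classification", model="XenArcAI/AIRealNet")

# path or URL to any image you want to check
result = classifier("photo.jpg")
print(result)  # expected: scores for the "artificial" and "real" labels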
Abhaykoul 
posted an update about 2 months ago
🚀 Ever dreamed of training your own Large Language Model from scratch? What if I told you it doesn't require a supercomputer or PhD in ML? 🤯

Introducing LLM Trainer - the educational framework that makes LLM training accessible to EVERYONE! Whether you're on a CPU-only laptop or scaling to distributed GPUs, we've got you covered. 💻➡️🖥️

Why LLM Trainer? Because existing tools are either too simplistic (hiding the magic) or too complex (requiring expert knowledge). We bridge the gap with:

🎓 Educational transparency - every component built from scratch with clear code
💻 CPU-first approach - start training immediately, no GPU needed
🔧 Full customization - modify anything you want
📈 Seamless scaling - from laptop to cluster without code changes
🤝 HuggingFace integration - works with existing models & tokenizers

Key highlights:
✅ Built-in tokenizers (BPE, WordPiece, HF wrappers)
✅ Complete Transformer implementation from scratch
✅ Optimized for CPU training
✅ Advanced features: mixed precision, gradient checkpointing, multiple generation strategies
✅ Comprehensive monitoring & metrics

Perfect for:
- Students learning transformers
- Researchers prototyping new ideas
- Developers building domain-specific models

Ready to train your first LLM? It's easier than you think!
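
To give a rough idea of what's involved, here's a tiny CPU-only language-model training loop in plain PyTorch. It only illustrates the concepts (embeddings, causal attention, next-token loss); it is NOT the llm-trainer API, so see the repo docs for the actual interfaces.

import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1000, 128, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))  # causal mask
        return self.head(self.encoder(self.embed(ids), mask=mask))

model = TinyLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                                  # tiny CPU-only demo loop
    batch = torch.randint(0, vocab_size, (8, seq_len))  # random ids stand in for tokenized text
    logits = model(batch[:, :-1])                       # predict the next token at each position
    loss = loss_fn(logits.reshape(-1, vocab_size), batch[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")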

🔗 Check it out: https://github.com/HelpingAI/llm-trainer
📚 Docs: Getting Started Guide
💬 Join the community: GitHub Discussions

#AI #MachineLearning #LLM #DeepLearning #OpenSource #Python #HuggingFace #NLP

Special thanks to HuggingFace and PyTorch teams for the amazing ecosystem! 🙏
Parveshiiii 
posted an update 3 months ago
🚀 Just Dropped: MathX-5M — Your Gateway to Math-Savvy GPTs

👨‍🔬 Wanna fine-tune your own GPT for math?
🧠 Building a reasoning agent that actually *thinks*?
📊 Benchmarking multi-step logic across domains?

Say hello to **MathX-5M** (XenArcAI/MathX-5M), a **5 million+ sample** dataset crafted for training and evaluating math reasoning models at scale.

Built by **XenArcAI**, it’s optimized for:
- 🔍 Step-by-step reasoning in structured formats
- 🧮 Coverage from arithmetic to advanced algebra and geometry
- 🧰 Plug-and-play with Gemma, Qwen, Mistral, and other open LLMs
- 🧵 Compatible with Harmony, Alpaca, and OpenChat-style instruction formats

Whether you're prototyping a math tutor, testing agentic workflows, or just want your GPT to solve equations like a pro—**MathX-5M is your launchpad**.

🔗 Dive in: XenArcAI/MathX-5M
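
To poke at the data before committing to a full download, here's a quick-start sketch with 🤗 Datasets. The streaming mode and the "train" split name are assumptions; check the dataset card for the exact splits and column names.

from datasets import load_dataset

# stream so you don't pull all 5M+ rows up front
ds = load_dataset("XenArcAI/MathX-5M", split="train", streaming=True)

for i, row in enumerate(ds):
    print(row)        # inspect the instruction/reasoning/answer fields of one sample
    if i == 2:
        break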

Let’s make open-source models *actually* smart at math.
#FineTuneYourGPT #MathX5M #OpenSourceAI #LLM #XenArcAI #Reasoning #Gemma #Qwen #Mistral

Abhaykoul 
posted an update 3 months ago
🚀 Dhanishtha-2.0-preview-0825 Is Here

The Intermediate Thinking Model just leveled up again.

With sharper reasoning, better tool use, and expanded capabilities, Dhanishtha-2.0-preview-0825 is now live and ready to impress.

🧠 What Makes Dhanishtha Special?
Unlike typical CoT models that think only once, Dhanishtha thinks iteratively:

> Think → Answer → Rethink → Improve → Rethink again if needed.

🔗 Try it now: HelpingAI/Dhanishtha-2.0-preview-0825

🔞 Dhanishtha NSFW Preview

For those exploring more expressive and immersive roleplay scenarios, we’re also releasing:

HelpingAI/Dhanishtha-nsfw
A specialized version tuned for adult-themed interactions and character-driven roleplay.

🔗 Explore it here: HelpingAI/Dhanishtha-nsfw

💬 You can also try all of these live at chat.helpingai.co
Parveshiiii 
posted an update 3 months ago
🚀 Launch Alert: Dev-Stack-Agents
Meet your 50-agent senior AI team — principal-level experts in engineering, AI, DevOps, security, product, and more — all bundled into one modular repo.

Code. Optimize. Scale. Secure.
Full-stack execution, Claude-powered. No human bottlenecks.


🔧 Built for Claude Code
Seamlessly plug into Claude’s dev environment:

* 🧠 Each .md file = a fully defined expert persona
* ⚙️ Claude indexes them as agents with roles, skills & strategy
* 🤖 You chat → Claude auto-routes to the right agent(s)
* ✍️ Want precision? Just call @agent-name directly
* 👥 Complex task? Mention multiple agents for team execution

Examples:

"@security-auditor please review auth flow for risks"
"@cloud-architect + @devops-troubleshooter → design a resilient multi-region setup"
"@ai-engineer + @legal-advisor → build a privacy-safe RAG pipeline"


🔗 https://github.com/Parveshiiii/Dev-Stack-Agents
MIT License | Claude-Ready | PRs Welcome

erikkaum 
posted an update 3 months ago
ZML just released a technical preview of their new Inference Engine: LLMD.

- Just a 2.4 GB container, which means fast startup times and efficient autoscaling
- Cross-platform GPU support: works on both NVIDIA and AMD GPUs
- Written in Zig

I just tried it out and deployed it on Hugging Face Inference Endpoints and wrote a quick guide 👇 You can try it in like 5 minutes!

https://huggingface.co/blog/erikkaum/test-driving-llmd-inference-engine
erikkaum 
posted an update 3 months ago
We just released native support for @SGLang and @vllm-project in Inference Endpoints 🔥

Inference Endpoints is becoming the central place where you deploy high performance Inference Engines.

Inference Endpoints provides the managed infra for them. Instead of spending weeks configuring infrastructure, managing servers, and debugging deployment issues, you can focus on what matters most: your AI model and your users 🙌
Abhaykoul 
posted an update 3 months ago
🎉 Dhanishtha-2.0-preview-0725 is Now Live

The Intermediate Thinking Model just got even better.
With the new update, Dhanishtha is sharper, smarter, and further trained on tool use.

🧠 What Makes Dhanishtha Different?
Unlike standard CoT models that give one-shot responses, Dhanishtha thinks in layers:

> Think → Answer → Rethink → Improve → Rethink again if needed.

HelpingAI/Dhanishtha-2.0-preview-0725
Parveshiiii 
posted an update 4 months ago
🧠 Glimpses of AGI — A Vision for All Humanity
What if AGI wasn’t just a distant dream—but a blueprint already unfolding?

I’ve just published a deep dive called Glimpses of AGI, exploring how scalable intelligence, synthetic reasoning, and alignment strategies are paving a new path forward. This isn’t your average tech commentary—it’s a bold vision for conscious AI systems that reason, align, and adapt beyond narrow tasks.

🔍 Read it, upvote it if it sparks something, and let’s ignite a collective conversation about the future of AGI.

https://huggingface.co/blog/Parveshiiii/glimpses-of-agi


Parveshiiii 
posted an update 4 months ago
🧠 MathX-5M by XenArcAI — Scalable Math Reasoning for Smarter LLMs

Introducing MathX-5M, a high-quality, instruction-tuned dataset built to supercharge mathematical reasoning in large language models. With 5 million rigorously filtered examples, it spans everything from basic arithmetic to advanced calculus—curated from public sources and enhanced with synthetic data.

🔍 Key Highlights:
- Step-by-step reasoning with verified answers
- Covers algebra, geometry, calculus, logic, and more
- RL-validated correctness and multi-stage filtering
- Ideal for fine-tuning, benchmarking, and educational AI

📂 XenArcAI/MathX-5M


Abhaykoul 
posted an update 4 months ago
🎉 Dhanishtha 2.0 Preview is Now Open Source!

The world's first Intermediate Thinking Model is now available to everyone!

Dhanishtha 2.0 Preview brings revolutionary intermediate thinking capabilities to the open-source community. Unlike traditional reasoning models that think once, Dhanishtha can think, answer, rethink, answer again, and continue rethinking as needed, using multiple <think> blocks between responses.

🚀 Key Features
- Intermediate thinking: Think → Answer → Rethink → Answer → Rethink if needed...
- Token efficient: Uses up to 79% fewer tokens than DeepSeek R1 on similar queries
- Transparent thinking: See the model's reasoning process in real-time
- Open source: Freely available for research and development


HelpingAI/Dhanishtha-2.0-preview
https://helpingai.co/chat
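
If you'd rather run the open-source preview locally than use the hosted chat, here's a minimal sketch using the standard 🤗 Transformers chat-template flow. It is a generic pattern, not an official HelpingAI snippet; check the model card for recommended generation settings.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HelpingAI/Dhanishtha-2.0-preview"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [{"role": "user", "content": "Is 9.11 larger than 9.9? Reason it out."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

out = model.generate(inputs, max_new_tokens=512)
# the reply may interleave <think> blocks with intermediate answers
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=False))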
Abhaykoul 
posted an update 4 months ago
Introducing Dhanishtha 2.0: World's first Intermediate Thinking Model

Dhanishtha 2.0 is the world's first LLM designed to think between responses, unlike other reasoning LLMs, which think just once.

Dhanishtha can think, rethink, self-evaluate, and refine in between responses using multiple <think> blocks.
This technique makes it highly token-efficient: it uses up to 79% fewer tokens than DeepSeek R1.
---

You can try our model from: https://helpingai.co/chat
Also, we're going to open-source Dhanishtha on July 1st.

---
For Devs:
🔑 Get your API key at https://helpingai.co/dashboard
from HelpingAI import HAI  # pip install HelpingAI==1.1.1
from rich import print

hai = HAI(api_key="hl-***********************")

response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What is the value of ∫0∞𝑥3/𝑥−1𝑑𝑥 ?"}],
    stream=True,
    hide_think=False # Hide or show models thinking
)

for chunk in response:
    print(chunk.choices[0].delta.content, end="", flush=True)