Ashish Tanwer
ashishtanwer
AI & ML interests
None yet
Recent Activity
- liked a model 10 days ago: DiffSynth-Studio/Qwen-Image-i2L
- liked a dataset 13 days ago: yutori-ai/navi-bench
- liked a model 16 days ago: zai-org/AutoGLM-Phone-9B
Organizations
RAG
DataLabelling
LLM
- AnyCoder
  Running • 3.03k • Generate code with AI
- Qwen2.5 Coder Artifacts
  Running • Featured • 274 • Generate code from natural language prompts
- QwQ-32B-Preview
  Running • Featured • 923 • QwQ-32B-Preview
- Open LLM Leaderboard
  Running on CPU Upgrade • 13.8k • Track, rank and evaluate open LLMs and chatbots
Evals
ClassicalML
Paper and resources for Classical ML
InfraML
Agents
Transformer
- sentence-transformers/all-mpnet-base-v2
  Sentence Similarity • 0.1B • Updated • 25.2M • 1.21k
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
  Paper • 1910.10683 • Published • 15
- google-t5/t5-base
  Translation • 0.2B • Updated • 2.03M • 760
- Attention Is All You Need
  Paper • 1706.03762 • Published • 106
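The first entry in this collection, all-mpnet-base-v2, is a sentence-embedding model. A minimal sketch of scoring sentence similarity with it, assuming the sentence-transformers package is installed and using illustrative example sentences:

from sentence_transformers import SentenceTransformer, util

# Load the embedding model listed above from the Hugging Face Hub.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Encode two example sentences into dense vectors.
sentences = [
    "Transformers rely on self-attention.",
    "Attention is the core mechanism of the Transformer.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two embeddings; values near 1.0 mean the
# sentences are close in meaning, values near 0.0 mean they are unrelated.
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))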
DataCleaning
Dataset
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • 2306.01116 • Published • 41
- HuggingFaceFW/fineweb
  Viewer • Updated • 52.5B • 169k • 2.56k
- tiiuae/falcon-refinedweb
  Viewer • Updated • 968M • 47k • 879
- cerebras/SlimPajama-627B
  Preview • Updated • 59.2k • 510
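These web-scale pretraining corpora are usually consumed by streaming rather than downloading in full. A minimal sketch with the datasets library, using fineweb as the example (the "text" field name follows the dataset card as I recall; verify it in the dataset viewer before relying on it):

from itertools import islice
from datasets import load_dataset

# Stream the corpus instead of materializing it on disk; records arrive lazily.
ds = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

# Inspect the first few documents; each record carries the raw page text.
for example in islice(ds, 3):
    print(example["text"][:200])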
Training
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
  Paper • 1910.10683 • Published • 15
- AutoTrain: No-code training for state-of-the-art models
  Paper • 2410.15735 • Published • 59
- LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report
  Paper • 2405.00732 • Published • 122
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 56
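The two LoRA entries describe low-rank adapter fine-tuning, which freezes the base weights and trains small rank-r update matrices in selected layers. A minimal sketch with the peft library; gpt2 and the target_modules value are illustrative assumptions, since the module names to adapt differ per architecture:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative base model; swap in the checkpoint you actually want to fine-tune.
base = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA injects trainable low-rank matrices (rank r) into the chosen projection
# layers while the original weights stay frozen.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection for GPT-2; model-dependent
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable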
Diffusion
DataCrawling