# Optimum.Neuron

## Docs

- [EC2 Setup](https://huggingface.co/docs/optimum.neuron/main/ec2-setup.md)
- [Supported architectures](https://huggingface.co/docs/optimum.neuron/main/supported_architectures.md)
- [Optimum Neuron Container](https://huggingface.co/docs/optimum.neuron/main/containers.md)
- [Quickstart](https://huggingface.co/docs/optimum.neuron/main/quickstart.md)
- [🤗 Optimum Neuron](https://huggingface.co/docs/optimum.neuron/main/index.md)
- [Llama-3.3-70b performance on AWS Inferentia2 (Latency & Throughput)](https://huggingface.co/docs/optimum.neuron/main/benchmarks/inferentia-llama3.3-70b.md)
- [Llama-3.1-8b performance on AWS Inferentia2 (Latency & Throughput)](https://huggingface.co/docs/optimum.neuron/main/benchmarks/inferentia-llama3.1-8b.md)
- [Model Weight Transformation Specs](https://huggingface.co/docs/optimum.neuron/main/training_api/transformations.md)
- [LoRA for Neuron](https://huggingface.co/docs/optimum.neuron/main/training_api/lora.md)
- [Neuron TRL Trainers](https://huggingface.co/docs/optimum.neuron/main/training_api/trl_trainers.md)
- [NeuronTrainer](https://huggingface.co/docs/optimum.neuron/main/training_api/trainer.md)
- [Setting up your development environment](https://huggingface.co/docs/optimum.neuron/main/contribute/dev_environment.md)
- [Contributing Custom Models for Training](https://huggingface.co/docs/optimum.neuron/main/contribute/contribute_for_training.md)
- [Adding support for new architectures](https://huggingface.co/docs/optimum.neuron/main/contribute/contribute_for_inference.md)
- [Export a model to Neuron](https://huggingface.co/docs/optimum.neuron/main/guides/export_model.md)
- [Benchmark guide](https://huggingface.co/docs/optimum.neuron/main/guides/benchmark.md)
- [Neuron Model Cache](https://huggingface.co/docs/optimum.neuron/main/guides/cache_system.md)
- [optimum-neuron plugin for vLLM](https://huggingface.co/docs/optimum.neuron/main/guides/vllm_plugin.md)
- [Inference pipelines with AWS Neuron (Inf2/Trn1)](https://huggingface.co/docs/optimum.neuron/main/guides/pipelines.md)
- [Distributed Training with `optimum-neuron`](https://huggingface.co/docs/optimum.neuron/main/guides/distributed_training.md)
- [NeuronX Text-generation-inference for AWS Inferentia2](https://huggingface.co/docs/optimum.neuron/main/guides/neuronx_tgi.md)
- [🚀 Tutorials: How To Fine-tune & Run LLMs](https://huggingface.co/docs/optimum.neuron/main/training_tutorials/finetune_llms_overview.md)
- [Getting started with AWS Trainium and Hugging Face Transformers](https://huggingface.co/docs/optimum.neuron/main/training_tutorials/fine_tune_bert.md)
- [🚀 Continuous Pretraining of Llama 3.2 1B on SageMaker Hyperpod with Pre-built Containers](https://huggingface.co/docs/optimum.neuron/main/training_tutorials/pretraining_hyperpod_llm.md)
- [🚀 Instruction Fine-Tuning of Llama 3.1 8B with LoRA](https://huggingface.co/docs/optimum.neuron/main/training_tutorials/finetune_llama.md)
- [🚀 Fine-Tune Qwen3 8B with LoRA](https://huggingface.co/docs/optimum.neuron/main/training_tutorials/finetune_qwen3.md)
- [Models](https://huggingface.co/docs/optimum.neuron/main/model_doc/modeling_auto.md)
- [IP-Adapter](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/ip_adapter.md)
- [PixArt-Σ](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/pixart_sigma.md)
- [Load adapters](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/lora.md)
- [PixArt-α](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/pixart_alpha.md)
- [Latent Consistency Models](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/lcm.md)
- [InstructPix2Pix](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/pix2pix.md)
- [Flux](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/flux.md)
- [Stable Diffusion XL Turbo](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/sdxl_turbo.md)
- [Stable Diffusion](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/stable_diffusion.md)
- [ControlNet](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/controlnet.md)
- [Stable Diffusion XL](https://huggingface.co/docs/optimum.neuron/main/model_doc/diffusers/stable_diffusion_xl.md)
- [YOLOS](https://huggingface.co/docs/optimum.neuron/main/model_doc/transformers/yolos.md)
- [BERT](https://huggingface.co/docs/optimum.neuron/main/model_doc/transformers/bert.md)
- [Whisper](https://huggingface.co/docs/optimum.neuron/main/model_doc/transformers/whisper.md)
- [CLIP](https://huggingface.co/docs/optimum.neuron/main/model_doc/transformers/clip.md)
- [Sentence Transformers 🤗](https://huggingface.co/docs/optimum.neuron/main/model_doc/sentence_transformers/overview.md)
- [Create your own chatbot with llama-2-13B on AWS Inferentia](https://huggingface.co/docs/optimum.neuron/main/inference_tutorials/llama2-13b-chatbot.md)
- [Deploy Llama 3.3 70B on AWS Inferentia2](https://huggingface.co/docs/optimum.neuron/main/inference_tutorials/deploy-llama-3-3-70b.md)
- [Sentence Transformers on AWS Inferentia with Optimum Neuron](https://huggingface.co/docs/optimum.neuron/main/inference_tutorials/sentence_transformers.md)
- [Notebooks](https://huggingface.co/docs/optimum.neuron/main/inference_tutorials/notebooks.md)
- [Deploy Mixtral 8x7B on AWS Inferentia2](https://huggingface.co/docs/optimum.neuron/main/inference_tutorials/deploy-mixtral-8x7b.md)
