---
license: mit
base_model:
- inclusionAI/Ling-mini-2.0
---

## Introduction

Use the fork at https://github.com/im0qianqian/llama.cpp to quantize this model. For inference, download a release package from https://github.com/im0qianqian/llama.cpp/releases.

## Quick start

```bash
# Run inference on a local model file
llama-cli -m my_model.gguf

# Launch an OpenAI-compatible API server
llama-server -m my_model.gguf
```

## Demo

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/MC4h9G33YjvpboRA4LPfO.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/0YcuTJFLs6k9K4Sgzd-UD.png)

## PR

Let's look forward to the following PRs being merged upstream:

- [#16063 model : add BailingMoeV2 support](https://github.com/ggml-org/llama.cpp/pull/16063)
- [#16028 Add support for Ling v2](https://github.com/ggml-org/llama.cpp/pull/16028)
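Once `llama-server` is running, it can be queried over its OpenAI-compatible `/v1/chat/completions` endpoint. Below is a minimal Python sketch using only the standard library; the host, port (`llama-server` defaults to `127.0.0.1:8080`), and model name `my_model` are assumptions and should be adjusted to your setup.

```python
import json
from urllib import request


def build_chat_request(prompt: str, model: str = "my_model") -> dict:
    """Build an OpenAI-compatible chat completion payload.

    The model name is a placeholder; llama-server serves whatever
    GGUF file it was launched with.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str, base_url: str = "http://127.0.0.1:8080") -> str:
    """POST the payload to llama-server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Call `chat("Hello!")` while the server is running to get a completion back; the same payload shape works with `curl` or any OpenAI client library pointed at the server's base URL.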