This model was converted to ONNX format from grammarly/coedit-xl using ONNX Runtime. Refer to the original model card for more details on the model.
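
The exact conversion path is not documented here; a minimal sketch of one way to reproduce the export, assuming the Hugging Face Optimum wrapper around ONNX Runtime (the `coedit-xl-onnx` output directory is illustrative):

```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM

# Load the original PyTorch checkpoint and export it to ONNX on the fly.
model = ORTModelForSeq2SeqLM.from_pretrained("grammarly/coedit-xl", export=True)

# Writes the encoder/decoder ONNX graphs plus the model config.
model.save_pretrained("coedit-xl-onnx")
```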

It was then quantized to 8-bit.
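
The quantization settings used for this repo are not documented; a minimal sketch using ONNX Runtime's dynamic quantization, assuming 8-bit weights and illustrative file names (check the repo's file list for the actual ones):

```python
import os

from onnxruntime.quantization import QuantType, quantize_dynamic

os.makedirs("coedit-xl-onnx-8bit", exist_ok=True)

# Quantize each exported seq2seq graph to 8-bit weights.
for name in ["encoder_model", "decoder_model", "decoder_with_past_model"]:
    quantize_dynamic(
        model_input=f"coedit-xl-onnx/{name}.onnx",
        model_output=f"coedit-xl-onnx-8bit/{name}.onnx",
        weight_type=QuantType.QUInt8,  # assumption: unsigned 8-bit weights
    )
```

Dynamic quantization converts the weights to 8-bit ahead of time and quantizes activations at runtime, so no calibration dataset is needed.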

By A Cool student
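
To run the model, the Optimum integration can load an ONNX seq2seq repo directly. A sketch, assuming the repo's file layout matches what `ORTModelForSeq2SeqLM` expects; the prompt format follows the original grammarly/coedit-xl card:

```python
from transformers import AutoTokenizer

from optimum.onnxruntime import ORTModelForSeq2SeqLM

model_id = "IDK100boysaj/coedit-xl-onnx-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSeq2SeqLM.from_pretrained(model_id)

# CoEdIT takes an edit instruction followed by the text to revise.
text = "Fix grammatical errors in this sentence: New kinds of vehicles will be invented with new technology than today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```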
