An alternative to launching the evaluation locally is to serve the model on a TGI-compatible server/container and then run the evaluation by sending requests to the server. The command is the same as before, except that you specify a path to a YAML configuration file (detailed below):

```bash
lighteval endpoint {tgi,inference-endpoint} \
    "/path/to/config/file" \
    <task_parameters>
```
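The available endpoint backends and their options can differ between lighteval versions. Assuming the CLI exposes the usual `--help` flag (standard for argparse- and Typer-based tools), you can check what your installation supports before writing a configuration file:

```bash
# Inspect the available endpoint backends and their arguments
# (assumes the standard --help flag; output varies by lighteval version)
lighteval endpoint --help
lighteval endpoint inference-endpoint --help
lighteval endpoint tgi --help
```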
There are two types of configuration files that can be provided for running on the server:

To launch a model using Hugging Face’s Inference Endpoints, you need to provide
the following file: `endpoint_model.yaml`. Lighteval will automatically deploy
the endpoint, run the evaluation, and finally delete the endpoint (unless you
specify an endpoint that was already launched, in which case the endpoint won’t
be deleted afterwards).
```yaml
model_parameters:
  reuse_existing: false # If true, ignore all params in instance, and don't delete the endpoint after evaluation
  # endpoint_name: "llama-2-7B-lighteval" # Needs to be lowercase without special characters
  model_name: "meta-llama/Llama-2-7b-hf"
  revision: "main" # Defaults to "main"
  dtype: "float16" # Can be any of "awq", "eetq", "gptq", "4bit" or "8bit" (will use bitsandbytes), "bfloat16" or "float16"
  accelerator: "gpu"
  region: "eu-west-1"
  vendor: "aws"
  instance_type: "nvidia-a10g"
  instance_size: "x1"
  framework: "pytorch"
  endpoint_type: "protected"
  namespace: null # The namespace under which to launch the endpoint. Defaults to the current user's namespace
  image_url: null # Optionally specify the Docker image to use when launching the endpoint model, e.g. a later release of the TGI container with support for newer models
  env_vars: null # Optional environment variables to include when launching the endpoint, e.g. `MAX_INPUT_LENGTH: 2048`
```

To use a model already deployed on a TGI server (for example, on Hugging Face’s serverless inference), provide a file like the following:
```yaml
model_parameters:
  inference_server_address: ""
  inference_server_auth: null
  model_id: null # Optional, only required if the TGI container was launched with model_id pointing to a local directory
```

The parameters in these configuration files are:

Inference Endpoint parameters:

- `model_name`: The Hugging Face model ID to deploy
- `revision`: Model revision (defaults to "main")
- `dtype`: Data type for the model weights ("float16", "bfloat16", "4bit", "8bit", etc.)
- `framework`: Framework to use ("pytorch", "tensorflow")
- `accelerator`: Hardware accelerator ("gpu", "cpu")
- `region`: AWS region for deployment
- `vendor`: Cloud vendor ("aws", "azure", "gcp")
- `instance_type`: Instance type (e.g., "nvidia-a10g", "nvidia-t4")
- `instance_size`: Instance size ("x1", "x2", etc.)
- `endpoint_type`: Endpoint access level ("public", "protected", "private")
- `namespace`: Organization namespace for deployment
- `reuse_existing`: Whether to reuse an existing endpoint
- `endpoint_name`: Custom endpoint name (lowercase, no special characters)
- `image_url`: Custom Docker image URL
- `env_vars`: Environment variables for the endpoint

TGI server parameters:

- `inference_server_address`: URL of the TGI server
- `inference_server_auth`: Authentication credentials
- `model_id`: Model identifier (if using a local model directory)
"lighteval|gsm8k|0"lighteval endpoint tgi \
"configs/tgi_server.yaml" \
"lighteval|gsm8k|0"model_parameters:
To reuse an endpoint that is already deployed, set `reuse_existing: true` and give its name:

```yaml
model_parameters:
  reuse_existing: true
  endpoint_name: "my-existing-endpoint"
  # Other parameters will be ignored when reuse_existing is true
```

In this case the endpoint is not deleted after the evaluation (because `reuse_existing: true`).

Common error messages and solutions:
- The endpoint name already exists: set `reuse_existing: true` to evaluate on the running endpoint, or choose a different `endpoint_name`.

For more detailed information about Hugging Face Inference Endpoints, see the official documentation.