---
license: mit
configs:
- config_name: default
  data_files:
  - split: latency
    path: datasets/latency.jsonl
  - split: resource_util
    path: datasets/resource_util.jsonl
  - split: runtime_efficiency
    path: datasets/runtime_efficiency.jsonl
  - split: maintainability
    path: datasets/maintainability.jsonl
  - split: security
    path: datasets/security.jsonl
  - split: humanevalclassify
    path: datasets/humanevalclassify.jsonl
---
# NoFunEval: Funny How Code LMs Falter on Requirements Beyond Functional Correctness

[Published at COLM'24]

**Abstract:**
Existing evaluation benchmarks of language models of code (code LMs) focus almost exclusively on whether the LMs can generate functionally-correct code. In real-world software engineering, developers think beyond functional correctness. They have requirements on "how" a functionality should be implemented to meet overall system design objectives like efficiency, security, and maintainability. They would also trust the code LMs more if the LMs demonstrate robust understanding of such requirements.
We propose a new benchmark, NoFunEval, to evaluate code LMs on non-functional requirements and simple classification instances for both functional and non-functional requirements. We propose a prompting method, Coding Concepts (CoCo), as a way for a developer to communicate the domain knowledge to the LMs. We conduct an extensive evaluation of twenty-two code LMs. Our finding is that they generally falter when tested on our benchmark, hinting at fundamental blindspots in their training setups. Surprisingly, even the classification accuracy on functional-correctness instances derived from the popular HumanEval benchmark is low, calling into question the depth of their comprehension and the source of their success in generating functionally-correct code in the first place.
arXiv: https://arxiv.org/pdf/2401.15963.pdf

GitHub: http://aka.ms/nofuneval
## Generation

### Environment Setup

Create a virtual environment.

```bash
bash setup.sh
```
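If you prefer to see the steps explicitly, the sketch below is a minimal manual equivalent: create and activate a virtual environment, then install the project's dependencies. It assumes a `requirements.txt` at the repository root, which may not match what the actual `setup.sh` does.

```bash
# Minimal manual setup sketch (assumes a requirements.txt at the repo root;
# the repository's setup.sh may install different or pinned dependencies).
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```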
### NoFunEdit

```bash
python3 src/nofunedit_generation.py \
  --data_subset <subset from nofunedit: eg-latency> \
  --model_path <model name from HF: eg-WizardLM/WizardCoder-15B-V1.0> \
  --temperature <temperature to be set for model generation: eg-0> \
  --max_new_tokens <maximum number of new tokens to be generated: eg-5192> \
  --prompt <type of prompt to use from our dataset: eg-base_prompt> \
  --num_samples <number of samples to be generated: eg-1> \
  --precision <floating point format: eg-fp16> \
  --batch_size <number of examples to send to llm engine at once: eg-1>
```
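For example, using the illustrative values from the placeholders above, a greedy run on the `latency` subset looks like this (adjust the model, prompt, and sampling settings to your setup):

```bash
# Greedy NoFunEdit generation on the latency subset with the base prompt.
python3 src/nofunedit_generation.py \
  --data_subset latency \
  --model_path WizardLM/WizardCoder-15B-V1.0 \
  --temperature 0 \
  --max_new_tokens 5192 \
  --prompt base_prompt \
  --num_samples 1 \
  --precision fp16 \
  --batch_size 1
```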
### Classification

```bash
python3 src/classification_generation.py \
  --data_subset <subset from non_func or humanevalclassify: eg-latency> \
  --model <model name from HF: eg-WizardLM/WizardCoder-15B-V1.0> \
  --temperature <temperature to be set for model generation: eg-0> \
  --max_new_tokens <maximum number of new tokens to be generated: eg-5192> \
  --prompt <type of prompt to use from our dataset: eg-base_prompt> \
  --precision <floating point format: eg-fp16> \
  --batch_size <number of examples to send to llm engine at once: eg-1>
```
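For example, a greedy classification run on `humanevalclassify` with the 4-token generation budget described in the parameter table below might look like:

```bash
# Greedy classification on humanevalclassify; max_new_tokens of 4 follows the
# setting described for classification in the Parameters section.
python3 src/classification_generation.py \
  --data_subset humanevalclassify \
  --model WizardLM/WizardCoder-15B-V1.0 \
  --temperature 0 \
  --max_new_tokens 4 \
  --prompt base_prompt \
  --precision fp16 \
  --batch_size 1
```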
## Evaluation Scripts

### Evaluation

```bash
python3 src/evaluation.py \
  --data_subset <subset from nofunedit: eg-latency> \
  --model_path <model name from HF: eg-WizardLM/WizardCoder-15B-V1.0> \
  --prompt <type of prompt to use from our dataset: eg-base_prompt> \
  --num_samples <number of samples to be generated: eg-1> \
  --score_k <K values for score@k: eg-1,5,10,20> \
  --metric <eval_metric to be used: eg-diffbleu>
```
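For example, to score the greedy `latency` generations from the NoFunEdit command above with DiffBLEU (note that `score_k` values must not exceed `num_samples`):

```bash
# Score@1 with the diffbleu metric for the greedy latency generations.
python3 src/evaluation.py \
  --data_subset latency \
  --model_path WizardLM/WizardCoder-15B-V1.0 \
  --prompt base_prompt \
  --num_samples 1 \
  --score_k 1 \
  --metric diffbleu
```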
### Example eval script (for maintainability)

```bash
bash evaluation_example_script.sh
```
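As a rough sketch, a script like this chains generation and evaluation for a single subset. The block below is illustrative only and assumes greedy decoding with the base prompt; the repository's `evaluation_example_script.sh` may use different settings.

```bash
# Illustrative generation + evaluation pipeline for the maintainability subset.
# The actual evaluation_example_script.sh may differ in prompts, sampling, or metrics.
python3 src/nofunedit_generation.py \
  --data_subset maintainability \
  --model_path WizardLM/WizardCoder-15B-V1.0 \
  --temperature 0 \
  --max_new_tokens 5192 \
  --prompt base_prompt \
  --num_samples 1 \
  --precision fp16 \
  --batch_size 1

python3 src/evaluation.py \
  --data_subset maintainability \
  --model_path WizardLM/WizardCoder-15B-V1.0 \
  --prompt base_prompt \
  --num_samples 1 \
  --score_k 1 \
  --metric diffbleu
```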
## Parameters
| Parameter | Description |
|---|---|
| `data_subset` | The subset of data to use. Options for NoFunEdit: `latency`, `resource_util`, `maintainability`, `security`, `runtime_efficiency`. Classification additionally supports `humanevalclassify`. |
| `model_path` | The HF path of the model. Example: `WizardLM/WizardCoder-15B-V1.0` |
| `prompt` | Prompt to use. Options: `base_prompt`, `one-shot`, `chain_of_thought`, `coding_concepts` |
| `num_samples` | Number of samples to generate. Example: `1` (we used 1 for greedy decoding and 20 for higher temperature). [NoFunEdit generation only] |
| `max_new_tokens` | Budget for new token generation by the model. Example: `1200` (NoFunEdit: we used 1200 for `runtime_efficiency` and `security` for all prompts except CoT, where 1500 was used; for the other subsets we used 5192 or the maximum possible limit. Classification: we used 4 for all generations.) |
| `temperature` | Temperature for model generation. Example: `0` (we used 0 for greedy decoding and 0.8 when drawing more samples) |
| `score_k` | Comma-separated K values for Score@K. Example: `1,5,10,20` (no value should exceed `num_samples`). [Eval only] |
| `metric` | Metric to use for evaluation. Options: `diffbleu`, `codeql`, `codeql-diffbleu` (run after the first two metrics have been computed), `classification`, `runtime`. [Eval only] |
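Since `codeql-diffbleu` can only be computed once the `codeql` and `diffbleu` results exist, a security evaluation is naturally run as a short sequence. The loop below is a sketch using illustrative model and prompt values:

```bash
# Run the three security metrics in dependency order:
# codeql and diffbleu first, then the combined codeql-diffbleu score.
for METRIC in codeql diffbleu codeql-diffbleu; do
  python3 src/evaluation.py \
    --data_subset security \
    --model_path WizardLM/WizardCoder-15B-V1.0 \
    --prompt base_prompt \
    --num_samples 1 \
    --score_k 1 \
    --metric "$METRIC"
done
```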
## vLLM Parameters (for generation)
| Parameter | Description |
|---|---|
| `batch_size` | Number of examples to send to the LLM engine at once. Default: `1` |
| `precision` | Floating-point format. Default: `fp16` |
| `tensor_parallel_size` | Number of GPUs to use for tensor-parallel inference. Default: `1` |
| `swap_space` | Size (GiB) of CPU memory per GPU to use as swap space. Default: `4` |
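Assuming the generation scripts forward these options to vLLM under the flag names shown above (worth verifying against `src/nofunedit_generation.py`), a two-GPU run with the security token budget might look like:

```bash
# Sketch: NoFunEdit generation sharded across 2 GPUs with 4 GiB of CPU swap per GPU.
# The --tensor_parallel_size and --swap_space flag names are assumed from the table above.
python3 src/nofunedit_generation.py \
  --data_subset security \
  --model_path WizardLM/WizardCoder-15B-V1.0 \
  --temperature 0 \
  --max_new_tokens 1200 \
  --prompt base_prompt \
  --num_samples 1 \
  --precision fp16 \
  --batch_size 1 \
  --tensor_parallel_size 2 \
  --swap_space 4
```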