ManavSinghal157 committed (verified)
Commit 6a91027 · Parent(s): d4d0605

Adding generation and evaluation code along with setup
.gitattributes CHANGED
@@ -53,3 +53,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+src/evaluation/tree-sitter/build/my-languages_c++.so filter=lfs diff=lfs merge=lfs -text
+src/evaluation/tree-sitter/build/my-languages_kotlin.so filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -23,4 +23,39 @@ We propose a new benchmark NoFunEval to evaluate code LMs on non-functional requ
 
 Arxiv Link: https://arxiv.org/pdf/2401.15963.pdf
 
-Work on code release is under progress.
+[Work on code release is under progress.]
+
+# Generation
+### NoFunEdit
+```console
+python3 src/nofunedit_generation.py --data_subset <subset from nofunedit: eg-latency> --model_path <model name from HF: eg-WizardLM/WizardCoder-15B-V1.0> --temperature <temperature to be set for model generation: eg-0> --max_new_tokens <maximum number of new tokens to be generated: eg-5192> --prompt <type of prompt to use from our dataset: eg-base_prompt> --num_samples <number of samples to be generated: eg-1> --precision <floating point format: eg-fp16> --batch_size <number of examples to send to the llm engine at once: eg-1>
+```
+### Classification
+```console
+python3 src/classification_generation.py --data_subset <subset from non_func or humanevalclassify: eg-latency> --model_path <model name from HF: eg-WizardLM/WizardCoder-15B-V1.0> --temperature <temperature to be set for model generation: eg-0> --max_new_tokens <maximum number of new tokens to be generated: eg-5192> --prompt <type of prompt to use from our dataset: eg-base_prompt> --precision <floating point format: eg-fp16> --batch_size <number of examples to send to the llm engine at once: eg-1>
+```
+# Evaluation
+```console
+python3 src/evaluation.py --data_subset <subset from nofunedit: eg-latency> --model_path <model name from HF: eg-WizardLM/WizardCoder-15B-V1.0> --prompt <type of prompt used: eg-base_prompt> --num_samples <number of samples generated: eg-1> --score_k <K values for score@k: eg-1,5,10,20> --metric <eval metric to be used: eg-diffbleu>
+```
+
+## Parameters
+
+| Parameter | Description |
+| ----------------------------- | ---------------------------------------- |
+| `data_subset` | The subset of data to use. Options: `latency`, `resource_util`, `maintainability`, `security`, `runtime_efficiency` for NoFunEdit; additionally `humanevalclassify` for classification. |
+| `model_path` | The path of the model on HF. Example: `WizardLM/WizardCoder-15B-V1.0`. |
+| `prompt` | Prompt to use. Options: `base_prompt`, `one_shot`, `chain_of_thought`, `coding_concepts`. |
+| `num_samples` | Number of samples to generate. Example: `1` (we used `1` for greedy decoding and `20` for higher-temperature sampling). **[NoFunEdit generation only]** |
+| `max_new_tokens` | Budget for newly generated tokens. Example: `1200` (NoFunEdit: we used `1200` for `runtime_efficiency` and `security` with all prompts other than chain-of-thought, where `1500` was used; for the other subsets we used `5192` or the maximum possible limit. Classification: we used `4` for all generations.) |
+| `temperature` | Temperature for model generation. Example: `0` (we used `0` for greedy decoding and `0.8` when drawing multiple samples). |
+| `score_k` | K values for Score@k, comma-separated. Example: `1,5,10,20` (no value should exceed `num_samples`; see the sketch after this diff). **[Eval only]** |
+| `metric` | Metric to use for evaluation. Options: `diffbleu`, `codeql`, `codeql-diffbleu` (run after both `diffbleu` and `codeql` have been run), `classification`, `runtime`. **[Eval only]** |
+
+#### vLLM Parameters (for generation)
+| Parameter | Description |
+| ----------------------------- | ---------------------------------------- |
+| `batch_size` | Number of examples to send to the LLM engine at once. Default: `1` |
+| `precision` | Floating-point format. Default: `fp16` |
+| `tensor_parallel_size` | Number of GPUs to shard the model across. Default: `1` |
+| `swap_space` | The size (GiB) of CPU memory per GPU to use as swap space. Default: `4` |
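Note that `src/evaluation.py` reads generations from `generations/edit/<data_subset>/<model>/<prompt>/<num_samples>_samples/generated_outputs.jsonl` (or `generations/classification/...` for the `classification` metric), matching where `src/classification_generation.py` writes its outputs (and, presumably, `src/nofunedit_generation.py` as well). Score@k itself is aggregated with `pass_at_k_continuous_vals` from `src/utils.py`, whose definition is not shown in this diff; as a hedged sketch of what such a number means, assuming the standard pass@k estimator generalized to continuous per-sample scores (the repository's implementation may differ), the expected best score over a random size-k subset of the n generations is:

```python
from math import comb

def score_at_k(scores, k):
    """Expected best per-sample score over a uniformly random subset of k of the
    n generations; reduces to the usual pass@k when the scores are 0/1.
    Illustrative sketch only, not the repository's pass_at_k_continuous_vals."""
    n = len(scores)
    assert 1 <= k <= n, "k must not exceed num_samples"
    ordered = sorted(scores, reverse=True)
    # P(the i-th best generation is the max of a random k-subset) = C(n-1-i, k-1) / C(n, k)
    return sum(s * comb(n - 1 - i, k - 1) for i, s in enumerate(ordered)) / comb(n, k)

# e.g. DiffBLEU scores of 20 samples for one problem, reported at k = 1, 5, 10, 20
scores = [0.62, 0.55, 0.51, 0.48] + [0.30] * 16
print({k: round(score_at_k(scores, k), 3) for k in (1, 5, 10, 20)})
```

For 0/1 scores this is 1 - C(n-c, k)/C(n, k) with c passing samples, which is why no `score_k` value should exceed `num_samples`.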
eval_setup.sh CHANGED
@@ -1,18 +1,28 @@
 #!/bin/bash
-mkdir $HOME/codeql-home
+mkdir codeql-home
 
-wget https://github.com/github/codeql-cli-binaries/releases/download/v2.5.0/codeql.zip -P $HOME/codeql-home/
-unzip $HOME/codeql-home/codeql.zip -d $HOME/codeql-home/
+wget https://github.com/github/codeql-cli-binaries/releases/download/v2.5.0/codeql.zip -P codeql-home/
+unzip codeql-home/codeql.zip -d codeql-home/
 
-git clone https://github.com/github/codeql.git $HOME/codeql-home/codeql-repo
-cd $HOME/codeql-home/codeql-repo
+git clone https://github.com/github/codeql.git codeql-home/codeql-repo
+cd codeql-home/codeql-repo
 git checkout 20416ae0342c66aa05bc099af8e5a020b018a978
 
-echo 'export PATH="$HOME/codeql-home/codeql:$PATH"' >> ~/.bashrc
-source ~/.bashrc
-
-codeql resolve languages
-codeql resolve qlpacks
+codeql-home/codeql/codeql resolve languages
+codeql-home/codeql/codeql resolve qlpacks
+
+cd ../../
 
-mv -v ~/src/evaluation/qls_for_security/cpp/* ~/codeql-home/codeql-repo/cpp/ql/src/
-mv -v ~/src/evaluation/qls_for_security/python/* ~/codeql-home/codeql-repo/python/ql/src/
+mv -v src/evaluation/qls_for_security/cpp/* codeql-home/codeql-repo/cpp/ql/src/
+mv -v src/evaluation/qls_for_security/python/* codeql-home/codeql-repo/python/ql/src/
+
+cd src/evaluation/tree-sitter/
 
+git clone https://github.com/tree-sitter/tree-sitter-cpp.git
+git clone https://github.com/tree-sitter/tree-sitter-c.git
+git clone https://github.com/tree-sitter/tree-sitter-python.git
+git clone https://github.com/tree-sitter/tree-sitter-java.git
+git clone https://github.com/tree-sitter/tree-sitter-javascript.git
+git clone https://github.com/tree-sitter/tree-sitter-scala.git
+git clone https://github.com/jiyee/tree-sitter-objc.git
+git clone https://github.com/fwcd/tree-sitter-kotlin.git
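The tree-sitter grammars cloned at the end of the new setup script are what `check_syntax` in `src/utils.py` compiles into the `src/evaluation/tree-sitter/build/my-languages_*.so` libraries (two of which also ship via Git LFS in this commit). A minimal sketch of that build step, run from the repository root and assuming a py-tree-sitter version that still provides `Language.build_library` (the same call `src/utils.py` makes); the helper below is illustrative, not part of the commit:

```python
from tree_sitter import Language  # py-tree-sitter <= 0.21 exposes build_library

# Grammar paths match the clones made by eval_setup.sh; extend with the other languages as needed.
GRAMMARS = {
    "python": "src/evaluation/tree-sitter/tree-sitter-python",
    "c++": "src/evaluation/tree-sitter/tree-sitter-cpp",
    "kotlin": "src/evaluation/tree-sitter/tree-sitter-kotlin",
}

for lang, repo_path in GRAMMARS.items():
    # Same output naming scheme that check_syntax in src/utils.py expects.
    Language.build_library(
        f"src/evaluation/tree-sitter/build/my-languages_{lang}.so",
        [repo_path],
    )
```

Running this ahead of time only pre-builds the shared objects that `check_syntax` would otherwise build on first use.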
requirements.txt CHANGED
@@ -1,23 +1,15 @@
-os
-sys
-re
-json
-pandas
+datasets
 argparse
-csv
-tqdm
 vllm
 nltk
 scipy
 transformers
 jinja2
-datasets
 datetime
 jsonlines
 statistics
-tempfile
-subprocess
 tree_sitter
+tiktoken
 joblib == 1.1.0
 numpy == 1.23.1
 pandas == 1.4.4
src/classification_generation.py CHANGED
@@ -6,12 +6,12 @@ from transformers import AutoTokenizer
6
  import jsonlines
7
  from tqdm import tqdm
8
  from vllm import LLM, SamplingParams
9
-
10
  #Input all the arguments
11
  parser = argparse.ArgumentParser()
12
  parser.add_argument("--data_subset", type = str, default = "latency", help = "type of non-func requirement")
13
  parser.add_argument("--temperature", type = float, default = 0.0, help = "temperature")
14
- parser.add_argument("--max_new_tokens", type = int, default = 5192, help = "max length of tokens")
15
  parser.add_argument("--top_p", type = float, default = 0.95, help = "top_p")
16
  parser.add_argument("--prompt", type = str, default = "base_prompt", help = "type of prompt")
17
  parser.add_argument("--num_samples", type = int, default = 1, help = "number of samples")
@@ -24,75 +24,72 @@ parser.add_argument("--swap_space", type = int, default = 4, help = "The size (G
24
  parser.add_argument("--batch_size", type = int, default = 1, help = "Number of examples to send to llm engine at once.")
25
  args = parser.parse_args()
26
  argsdict = vars(args)
27
-
28
  def extract_single_predictions(input_string):
29
-
30
  if input_string.strip().split()[0].lower() == "A".lower():
31
  return "A"
32
  elif input_string.strip().split()[0].lower() == "B".lower():
33
  return "B"
34
  return None
35
-
36
- def model_query(all_messages, batch_size = 1):
37
-
38
- llm_tokenizer = AutoTokenizer.from_pretrained(
39
  args.model_path,
40
- truncation_side = "left",
41
- padding_side = "right", # padding on the right is needed to cut off padding in `complete_code`
 
42
  )
43
- if args.num_samples == 1:
44
- GREEDY = True
45
- else:
46
- GREEDY = False
47
- assert args.num_samples % batch_size == 0, "num_samples must be divisible by batch_size"
48
- sampling_params = SamplingParams(
49
- n=batch_size, # for multisamples we sample multiple times
50
- temperature=args.temperature if not GREEDY else 0.0,
51
- top_p=args.top_p if not GREEDY else 1.0,
52
- top_k=50 if not GREEDY else -1,
53
- max_tokens=args.max_new_tokens,
54
- stop_token_ids=[llm_tokenizer.eos_token_id])
55
- llm = LLM(model=args.model_path,
56
- tensor_parallel_size=args.tensor_parallel_size,
57
- swap_space=args.swap_space,
58
- trust_remote_code=True)
59
- # tokenizer="hf-internal-testing/llama-tokenizer" if 'llama' in args.checkpoint_path.lower() else None,)
60
- llm_outputs = llm.generate(left_prompts, sampling_params)
61
- predictions = [extract_single_predictions(output.outputs[0].text) for output in llm_outputs]
62
- return predictions
63
-
64
  dataset_path = os.path.join("datasets",f"{args.data_subset}.jsonl")
65
-
 
 
66
  max_tokens=[]
67
  generations=[]
68
  left_prompts = []
69
  right_prompts = []
70
  data=[]
71
-
72
  with jsonlines.open(dataset_path) as data_file:
73
  for data_item in data_file:
74
  data.append(data_item)
75
  left_prompts.append(data_item["classification_left_prompt"])
76
  right_prompts.append(data_item["classification_right_prompt"])
77
-
78
  print("Starting model inference...")
79
- left_predictions = model_query(all_messages=left_prompts, batch_size=args.batch_size)
80
- right_predictions = model_query(all_messages=right_prompts, batch_size=args.batch_size)
81
-
 
 
82
  generations = []
83
  for i, data_item in tqdm(enumerate(left_predictions)):
84
  #Model Inference
85
  curr_sample = data[i]
86
- curr_sample["left_output"] = left_predictions[i]
87
- curr_sample["right_output"] = right_predictions[i]
88
  for prompt in ["base_prompt", "coding_concepts","chain_of_thought","one_shot","classification_left_prompt","classification_right_prompt"]:
89
  if(prompt in curr_sample):
90
  del curr_sample[prompt]
91
- generations.append(curr_sample)
92
-
93
  generations = pd.DataFrame(generations)
94
- path = os.path.join("generations","classification",os.path.split(args.model_path)[1],args.data_subset,args.prompt,f"{args.num_samples}_samples")
95
  if not os.path.exists(path):
96
  os.makedirs(path)
97
  path=os.path.join(path, "generated_outputs.jsonl")
98
- generations.to_json(path, orient="records", lines=True)
 
6
  import jsonlines
7
  from tqdm import tqdm
8
  from vllm import LLM, SamplingParams
9
+
10
  #Input all the arguments
11
  parser = argparse.ArgumentParser()
12
  parser.add_argument("--data_subset", type = str, default = "latency", help = "type of non-func requirement")
13
  parser.add_argument("--temperature", type = float, default = 0.0, help = "temperature")
14
+ parser.add_argument("--max_new_tokens", type = int, default = 8, help = "max length of tokens")
15
  parser.add_argument("--top_p", type = float, default = 0.95, help = "top_p")
16
  parser.add_argument("--prompt", type = str, default = "base_prompt", help = "type of prompt")
17
  parser.add_argument("--num_samples", type = int, default = 1, help = "number of samples")
 
24
  parser.add_argument("--batch_size", type = int, default = 1, help = "Number of examples to send to llm engine at once.")
25
  args = parser.parse_args()
26
  argsdict = vars(args)
27
+
28
  def extract_single_predictions(input_string):
29
+
30
  if input_string.strip().split()[0].lower() == "A".lower():
31
  return "A"
32
  elif input_string.strip().split()[0].lower() == "B".lower():
33
  return "B"
34
  return None
35
+
36
+ model_basename = args.model_path.split("/")[-1]
37
+
38
+ llm_tokenizer = AutoTokenizer.from_pretrained(
39
  args.model_path,
40
+ truncation_side="left",
41
+ padding_side="right", # padding on the right is needed to cut off padding in `complete_code`
42
+ trust_remote_code=True,
43
  )
44
+ GREEDY = True
45
+ sampling_params = SamplingParams(
46
+ n=1, # for multisamples we sample multiple times
47
+ temperature=args.temperature if not GREEDY else 0.0,
48
+ top_p=args.top_p if not GREEDY else 1.0,
49
+ top_k=50 if not GREEDY else -1,
50
+ max_tokens=args.max_new_tokens,
51
+ stop_token_ids=[llm_tokenizer.eos_token_id])
52
+ llm = LLM(model=args.model_path,
53
+ tensor_parallel_size=args.tensor_parallel_size,
54
+ swap_space=args.swap_space,
55
+ trust_remote_code=True)
56
+
57
  dataset_path = os.path.join("datasets",f"{args.data_subset}.jsonl")
58
+
59
+ args.num_samples = 1
60
+
61
  max_tokens=[]
62
  generations=[]
63
  left_prompts = []
64
  right_prompts = []
65
  data=[]
66
+
67
  with jsonlines.open(dataset_path) as data_file:
68
  for data_item in data_file:
69
  data.append(data_item)
70
  left_prompts.append(data_item["classification_left_prompt"])
71
  right_prompts.append(data_item["classification_right_prompt"])
72
+
73
  print("Starting model inference...")
74
+ left_llm_outputs = llm.generate(left_prompts, sampling_params)
75
+ left_predictions = [extract_single_predictions(output.outputs[0].text) for output in left_llm_outputs]
76
+ right_llm_outputs = llm.generate(right_prompts, sampling_params)
77
+ right_predictions = [extract_single_predictions(output.outputs[0].text) for output in right_llm_outputs]
78
+
79
  generations = []
80
  for i, data_item in tqdm(enumerate(left_predictions)):
81
  #Model Inference
82
  curr_sample = data[i]
83
+ curr_sample["left_output"] = left_predictions[i]
84
+ curr_sample["right_output"] = right_predictions[i]
85
  for prompt in ["base_prompt", "coding_concepts","chain_of_thought","one_shot","classification_left_prompt","classification_right_prompt"]:
86
  if(prompt in curr_sample):
87
  del curr_sample[prompt]
88
+ generations.append(curr_sample)
89
+
90
  generations = pd.DataFrame(generations)
91
+ path = os.path.join("generations","classification",args.data_subset,os.path.split(args.model_path)[1],args.prompt,f"{args.num_samples}_samples")
92
  if not os.path.exists(path):
93
  os.makedirs(path)
94
  path=os.path.join(path, "generated_outputs.jsonl")
95
+ generations.to_json(path, orient="records", lines=True)
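For reference, `extract_single_predictions` above keeps only the first whitespace-delimited token of the model's completion and maps it to a label; anything else yields `None`, which `src/evaluation.py` later counts against consistency. A few illustrative calls against an equivalent restatement of the committed function (the example inputs are made up):

```python
def extract_single_predictions(input_string):
    # Equivalent to the function committed in src/classification_generation.py:
    # only the first whitespace-delimited token of the completion is inspected.
    first_token = input_string.strip().split()[0].lower()
    if first_token == "a":
        return "A"
    if first_token == "b":
        return "B"
    return None

assert extract_single_predictions("A") == "A"
assert extract_single_predictions("  b is the more efficient snippet") == "B"
assert extract_single_predictions("Both snippets look equivalent") is None
assert extract_single_predictions("A. The left snippet") is None  # attached punctuation is not stripped
```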
src/evaluation.py CHANGED
@@ -4,12 +4,9 @@ import jsonlines
4
  import pandas as pd
5
  import time
6
  from jinja2 import Environment, FileSystemLoader
7
- import json
8
  import csv
9
  from statistics import mean
10
  from utils import pass_at_k_continuous_vals, diff_bleu, post_process_generations, statistical_significance_test, remove_comments, remove_blank_lines,get_files_with_syntax_errors
11
- from datasets import load_dataset
12
-
13
 
14
  parser = argparse.ArgumentParser()
15
  parser.add_argument("--data_subset", type=str, default="latency", help="latency/resource_util/runtime_efficiency/maintenance/security")
@@ -21,12 +18,52 @@ parser.add_argument("--score_k", type=str, default="1,5,10,20", help="K value fo
21
  parser.add_argument("--metric", type=str, default="runtime", help="runtime/diffbleu/codeql-diffbleu")
22
  args = parser.parse_args()
23
 
24
- generations_path = os.path.join("generations",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples","generated_outputs.jsonl")
25
-
26
  args.model = args.model_path.split("/")[-1]
27
 
28
  # To calculate runtimes(Applicable for non-func runtime_efficiency)
29
- if args.metric == "runtime":
30
 
31
  start_time = time.time()
32
 
@@ -40,21 +77,21 @@ if args.metric == "runtime":
40
 
41
  for l in range(args.num_samples):
42
 
43
- generated_answers = post_process_generations(generated_answers=generation['generated_answers'][l],model = args.model,prompt = args.prompt,pl = "Python")[1]
44
  parsed_generations.append(generated_answers)
45
 
46
- samples.append(dict(problem_id = generation['problem_id'],submission_id_v0 = generation['submission_id_v0'],cpu_time_v0 = generation['cpu_time_v0'],cpu_time_v1 = generation['cpu_time_v1'],input=generation['input'],target=generation['target'],
47
  generated_answers=parsed_generations, inference_time=generation['inference_time']))
48
 
49
  samples = pd.DataFrame(samples)
50
- path = os.path.join("evaluation","pie-perf","generated_outputs.jsonl")
51
  samples.to_json(path, orient="records", lines = True)
52
 
53
- env = Environment(loader = FileSystemLoader(os.path.join("evaluation","pie-perf","data","sample")))
54
  template = env.get_template('sample_eval_config_template.yaml')
55
- output_path = os.path.join("evaluation_results",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples","generated_outputs.report")
56
  rendered_yaml = template.render(output_path = output_path)
57
- config_file_path = os.path.join("evaluation","pie-perf","data","sample","sample_eval_config.yaml")
58
  f=open(config_file_path,"w")
59
  f.write(rendered_yaml)
60
  f.close()
@@ -63,7 +100,7 @@ if args.metric == "runtime":
63
  if not os.path.exists(path):
64
  os.makedirs(path)
65
 
66
- run_file = os.path.join("evaluation","pie-perf","src","codenet_eval","run_eval.py")
67
 
68
  os.system(f'python3 {run_file} --eval_config {config_file_path}')
69
  k_values = list(map(int,args.score_k.split(",")))
@@ -79,11 +116,10 @@ if args.metric == "runtime":
79
 
80
  samples = pd.DataFrame([results])
81
  samples.to_json(os.path.join(path,"results.jsonl"), orient="records", lines=True)
82
- print(",".join(list(map(str,scores))))
83
 
84
  # To calculate diffbleu(Applicable for all splits)
85
  elif args.metric=="diffbleu":
86
- generations_path="final_generations/android/starcoder/base_prompt/1_samples/2023-11-06/generated_outputs.jsonl"
87
 
88
  k_values = list(map(int,args.score_k.split(",")))
89
  overall_score={}
@@ -103,9 +139,9 @@ elif args.metric=="diffbleu":
103
 
104
  for l in range(args.num_samples):
105
 
106
- generated_answers = post_process_generations(generated_answers = generation['generated_answers'][l],model = args.model,prompt = args.prompt,pl = "Python")
107
  passed += generated_answers[0]
108
- diff_score_bleu = diff_bleu(source_code = generation['source_code'],target = generation['target'],generated_answers = generated_answers[1],pl = "Python")
109
  scores.append(diff_score_bleu)
110
 
111
  scores.sort(reverse = True)
@@ -125,15 +161,14 @@ elif args.metric=="diffbleu":
125
 
126
  results["Passed"] = (passed*100)/(count*args.num_samples)
127
  samples = pd.DataFrame([results])
128
- path = os.path.join("evaluation_results",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples")
129
  if not os.path.exists(path):
130
  os.makedirs(path)
131
- samples.to_json(os.path.join(path,"results.jsonl"), orient="records", lines=True)
132
  print("Pass Rate: {}, DiffBleu Score: {}".format(scores[0],scores[1]))
133
 
134
  # To run codeql(Applicable for security and maintenance)
135
  elif args.metric=="codeql":
136
- generations_path="final_generations/security/starcoder/base_prompt/1_samples/2023-11-06/generated_outputs.jsonl"
137
 
138
  all_check_paths={}
139
  query_lang = {}
@@ -144,12 +179,12 @@ elif args.metric=="codeql":
144
 
145
  query = generation['codeql_check'].split("/")[-1].split(".ql")[0]
146
 
147
- try:
148
- all_check_paths[query].append(generation['codeql_check'])
149
- except:
150
- all_check_paths[query]=generation['codeql_check']
151
 
152
- code_path="evaluation_results/{}/{}/{}/{}_samples/generated_code/{}/".format(args.data_subset,args.model,args.prompt,args.num_samples,query)
153
 
154
  if not os.path.exists(code_path):
155
  os.makedirs(code_path)
@@ -165,7 +200,7 @@ elif args.metric=="codeql":
165
 
166
  for index in range(len(generation['generated_answers'])):
167
 
168
- code_path_indexed = code_path + "{}_{}{}".format(generation['code_file_path'].split("/")[-2]+"_"+generation['code_file_path'].split("/")[-1].split(ext)[0],index,ext)
169
 
170
  f=open(code_path_indexed,"w+")
171
 
@@ -188,13 +223,13 @@ elif args.metric=="codeql":
188
 
189
  for query in all_check_paths.keys():
190
 
191
- code_path_generations="evaluation_results/{}/{}/{}/{}_samples/generated_code/".format(args.data_subset,args.model,args.prompt,args.num_samples)
192
 
193
- code_path_db="evaluation_results/{}/{}/{}/{}_samples/generated_code_db/".format(args.data_subset,args.model,args.prompt,args.num_samples)
194
  if not os.path.exists(code_path_db):
195
  os.makedirs(code_path_db)
196
 
197
- code_path_results="evaluation_results/{}/{}/{}/{}_samples/generated_code_results/".format(args.data_subset,args.model,args.prompt,args.num_samples)
198
  if not os.path.exists(code_path_results):
199
  os.makedirs(code_path_results)
200
 
@@ -217,11 +252,11 @@ elif args.metric=="codeql":
217
  for generation in reader:
218
  query = generation['codeql_check'].split("/")[-1].split(".ql")[0]
219
 
220
- code_path="evaluation_results/{}/{}/{}/{}_samples/generated_code/{}/".format(args.data_subset,args.model,args.prompt,args.num_samples,query)
221
  scores=[]
222
- code_path_results="evaluation_results/{}/{}/{}/{}_samples/generated_code_results/{}.csv".format(args.data_subset,args.model,args.prompt,args.num_samples,query)
223
- code_path_generations="evaluation_results/{}/{}/{}/{}_samples/generated_code/{}/".format(args.data_subset,args.model,args.prompt,args.num_samples,query)
224
- code_path_db="evaluation_results/{}/{}/{}/{}_samples/generated_code_db/".format(args.data_subset,args.model,args.prompt,args.num_samples)
225
 
226
  errors=[]
227
 
@@ -253,7 +288,7 @@ elif args.metric=="codeql":
253
  ext=".c"
254
  pl="C"
255
 
256
- filename = "{}_{}{}".format(generation['code_file_path'].split("/")[-2]+"_"+generation['code_file_path'].split("/")[-1].split(ext)[0],index,ext)
257
 
258
  index+=1
259
 
@@ -267,10 +302,10 @@ elif args.metric=="codeql":
267
  for k in k_values:
268
 
269
  overall_score[k].append(pass_at_k_continuous_vals(n = args.num_samples,k = k,vals = scores))
270
- print(scores)
271
  scores_dump.append(scores)
272
  scores=[]
273
- path = os.path.join("evaluation_results",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples")
274
  f = open(os.path.join(path,"results.txt"),'w')
275
  f.write(str(scores_dump))
276
  f.close()
@@ -283,9 +318,9 @@ elif args.metric=="codeql":
283
  results["syntax_errors"] = syntax_errors
284
  results["no_of_syntax"] = len(syn_errors)
285
  samples = pd.DataFrame([results])
286
- path = os.path.join("evaluation_results",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples")
287
- samples.to_json(os.path.join(path,"results.jsonl"), orient="records", lines=True)
288
- print(",".join(list(map(str,scores))))
289
 
290
 
291
 
@@ -295,23 +330,24 @@ elif args.metric == "codeql-diffbleu":
295
  overall_score={}
296
  for k in k_values:
297
  overall_score[k]=[]
298
- generations_path = os.path.join("generations",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples","generated_outputs.jsonl")
299
  passed = 0
300
  count = 0
301
  with jsonlines.open(generations_path) as reader:
302
- res_path = os.path.split(args.file)[0].split('/')
 
303
  res_path.insert(1,"evaluation_results")
304
- res_path = os.path.join("/".join(res_path),"results.txt")
305
  codeql_results = eval(open(res_path).read())
306
  for generation,res in zip(reader,codeql_results):
307
  scores=[]
308
-
309
 
310
  for l in range(len(generation['generated_answers'])):
311
  generated_answers=post_process_generations(generated_answers=generation['generated_answers'][l],model=args.model,prompt=args.prompt,pl=generation['pl'])
312
- count += generated_answers[0]
313
 
314
- diff_score_bleu=res[l]*diff_bleu(source_code=generation['source_code'],target=generation['target'],generated_answers=generated_answers[1],pl=generation['pl'])
315
 
316
  scores.append(diff_score_bleu)
317
 
@@ -327,6 +363,6 @@ elif args.metric == "codeql-diffbleu":
327
  results["Passed"] = (passed*100)/(count*args.num_samples)
328
  scores.append((passed*100)/(count*args.num_samples))
329
  samples = pd.DataFrame([results])
330
- path = os.path.join("evaluation_results",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples")
331
- samples.to_json(os.path.join(path,"results.jsonl"), orient="records", lines=True)
332
- print(",".join(list(map(str,scores))))
 
4
  import pandas as pd
5
  import time
6
  from jinja2 import Environment, FileSystemLoader
 
7
  import csv
8
  from statistics import mean
9
  from utils import pass_at_k_continuous_vals, diff_bleu, post_process_generations, statistical_significance_test, remove_comments, remove_blank_lines,get_files_with_syntax_errors
 
 
10
 
11
  parser = argparse.ArgumentParser()
12
  parser.add_argument("--data_subset", type=str, default="latency", help="latency/resource_util/runtime_efficiency/maintenance/security")
 
18
  parser.add_argument("--metric", type=str, default="runtime", help="runtime/diffbleu/codeql-diffbleu")
19
  args = parser.parse_args()
20
 
 
 
21
  args.model = args.model_path.split("/")[-1]
22
 
23
+ generations_path = os.path.join("generations","edit",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples","generated_outputs.jsonl")
24
+
25
+ if(args.metric == "classification"):
26
+
27
+ generations_path = os.path.join("generations","classification",args.data_subset,os.path.split(args.model_path)[1],args.prompt,f"{args.num_samples}_samples","generated_outputs.jsonl")
28
+
29
+ left_predictions=[]
30
+ right_predictions=[]
31
+ left_labels=[]
32
+ right_labels=[]
33
+
34
+ with jsonlines.open(generations_path) as reader:
35
+ for generation in reader:
36
+ left_predictions.append(generation['left_output'])
37
+ left_labels.append(generation['classification_left_label'])
38
+ right_predictions.append(generation['right_output'])
39
+ right_labels.append(generation['classification_right_label'])
40
+
41
+ left_accuracy = sum([1 if prediction == label else 0 for prediction, label in zip(left_predictions, left_labels)]) / len(left_labels)
42
+ left_consistency = sum([1 if prediction is not None else 0 for prediction in left_predictions]) / len(left_predictions)
43
+ right_accuracy = sum([1 if prediction == label else 0 for prediction, label in zip(right_predictions, right_labels)]) / len(right_labels)
44
+ right_consistency = sum([1 if prediction is not None else 0 for prediction in right_predictions]) / len(right_predictions)
45
+
46
+ joint_accuracy = [1 if left_prediction == left_label and
47
+ right_prediction == right_label else 0
48
+ for left_prediction, left_label, right_prediction, right_label
49
+ in zip(left_predictions, left_labels, right_predictions, right_labels)]
50
+
51
+ joint_accuracy = sum(joint_accuracy) / len(joint_accuracy)
52
+
53
+ result_string = {"Model":args.model, "left_accuracy":round((left_accuracy*100),1), "right_accuracy":round((right_accuracy*100),1), "joint_accuracy":round((joint_accuracy*100),1), "left_consistency":round((left_consistency*100),1), "right_consistency":round((right_consistency*100),1)}
54
+
55
+ output_path = os.path.join("evaluation_results","classification",args.data_subset,os.path.split(args.model_path)[1],args.prompt,f"{args.num_samples}_samples")
56
+
57
+ if not os.path.exists(output_path):
58
+ os.makedirs(output_path)
59
+
60
+ samples = pd.DataFrame([result_string])
61
+ samples.to_json(os.path.join(output_path,"results.jsonl"), orient="records", lines=True)
62
+ print("{}".format(result_string))
63
+
64
+
65
  # To calculate runtimes(Applicable for non-func runtime_efficiency)
66
+ elif args.metric == "runtime":
67
 
68
  start_time = time.time()
69
 
 
77
 
78
  for l in range(args.num_samples):
79
 
80
+ generated_answers = post_process_generations(generated_answers=generation['generated_answers'][l],model = args.model,prompt = args.prompt,pl = generation['pl'])[1]
81
  parsed_generations.append(generated_answers)
82
 
83
+ samples.append(dict(problem_id = generation['problem_id'],submission_id_v0 = generation['submission_id_v0'],cpu_time_v0 = generation['cpu_time_v0'],cpu_time_v1 = generation['cpu_time_v1'],input=generation['source_code'],target=generation['target_code'],
84
  generated_answers=parsed_generations, inference_time=generation['inference_time']))
85
 
86
  samples = pd.DataFrame(samples)
87
+ path = os.path.join("src","evaluation","pie-perf","generated_outputs.jsonl")
88
  samples.to_json(path, orient="records", lines = True)
89
 
90
+ env = Environment(loader = FileSystemLoader(os.path.join("src","evaluation","pie-perf","data","sample")))
91
  template = env.get_template('sample_eval_config_template.yaml')
92
+ output_path = os.path.join("evaluation_results","edit",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples","generated_outputs.report")
93
  rendered_yaml = template.render(output_path = output_path)
94
+ config_file_path = os.path.join("src","evaluation","pie-perf","data","sample","sample_eval_config.yaml")
95
  f=open(config_file_path,"w")
96
  f.write(rendered_yaml)
97
  f.close()
 
100
  if not os.path.exists(path):
101
  os.makedirs(path)
102
 
103
+ run_file = os.path.join("src","evaluation","pie-perf","src","codenet_eval","run_eval.py")
104
 
105
  os.system(f'python3 {run_file} --eval_config {config_file_path}')
106
  k_values = list(map(int,args.score_k.split(",")))
 
116
 
117
  samples = pd.DataFrame([results])
118
  samples.to_json(os.path.join(path,"results.jsonl"), orient="records", lines=True)
119
+ print("{}".format(results))
120
 
121
  # To calculate diffbleu(Applicable for all splits)
122
  elif args.metric=="diffbleu":
 
123
 
124
  k_values = list(map(int,args.score_k.split(",")))
125
  overall_score={}
 
139
 
140
  for l in range(args.num_samples):
141
 
142
+ generated_answers = post_process_generations(generated_answers = generation['generated_answers'][l],model = args.model,prompt = args.prompt,pl = generation['pl'])
143
  passed += generated_answers[0]
144
+ diff_score_bleu = diff_bleu(source_code = generation['source_code'],target = generation['target_code'],generated_answers = generated_answers[1],pl = generation['pl'])
145
  scores.append(diff_score_bleu)
146
 
147
  scores.sort(reverse = True)
 
161
 
162
  results["Passed"] = (passed*100)/(count*args.num_samples)
163
  samples = pd.DataFrame([results])
164
+ path = os.path.join("evaluation_results","edit",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples")
165
  if not os.path.exists(path):
166
  os.makedirs(path)
167
+ samples.to_json(os.path.join(path,"results_{}.jsonl".format(args.metric)), orient="records", lines=True)
168
  print("Pass Rate: {}, DiffBleu Score: {}".format(scores[0],scores[1]))
169
 
170
  # To run codeql(Applicable for security and maintenance)
171
  elif args.metric=="codeql":
 
172
 
173
  all_check_paths={}
174
  query_lang = {}
 
179
 
180
  query = generation['codeql_check'].split("/")[-1].split(".ql")[0]
181
 
182
+ # try:
183
+ # all_check_paths[query].append(generation['codeql_check'])
184
+ # except:
185
+ all_check_paths[query]=generation['codeql_check']
186
 
187
+ code_path="evaluation_results/edit/{}/{}/{}/{}_samples/generated_code/{}/".format(args.data_subset,args.model,args.prompt,args.num_samples,query)
188
 
189
  if not os.path.exists(code_path):
190
  os.makedirs(code_path)
 
200
 
201
  for index in range(len(generation['generated_answers'])):
202
 
203
+ code_path_indexed = code_path + "{}_{}{}".format(generation['file_path'].split("/")[-2]+"_"+generation['file_path'].split("/")[-1].split(ext)[0],index,ext)
204
 
205
  f=open(code_path_indexed,"w+")
206
 
 
223
 
224
  for query in all_check_paths.keys():
225
 
226
+ code_path_generations="evaluation_results/edit/{}/{}/{}/{}_samples/generated_code/".format(args.data_subset,args.model,args.prompt,args.num_samples)
227
 
228
+ code_path_db="evaluation_results/edit/{}/{}/{}/{}_samples/generated_code_db/".format(args.data_subset,args.model,args.prompt,args.num_samples)
229
  if not os.path.exists(code_path_db):
230
  os.makedirs(code_path_db)
231
 
232
+ code_path_results="evaluation_results/edit/{}/{}/{}/{}_samples/generated_code_results/".format(args.data_subset,args.model,args.prompt,args.num_samples)
233
  if not os.path.exists(code_path_results):
234
  os.makedirs(code_path_results)
235
 
 
252
  for generation in reader:
253
  query = generation['codeql_check'].split("/")[-1].split(".ql")[0]
254
 
255
+ code_path="evaluation_results/edit/{}/{}/{}/{}_samples/generated_code/{}/".format(args.data_subset,args.model,args.prompt,args.num_samples,query)
256
  scores=[]
257
+ code_path_results="evaluation_results/edit/{}/{}/{}/{}_samples/generated_code_results/{}.csv".format(args.data_subset,args.model,args.prompt,args.num_samples,query)
258
+ code_path_generations="evaluation_results/edit/{}/{}/{}/{}_samples/generated_code/{}/".format(args.data_subset,args.model,args.prompt,args.num_samples,query)
259
+ code_path_db="evaluation_results/edit/{}/{}/{}/{}_samples/generated_code_db/".format(args.data_subset,args.model,args.prompt,args.num_samples)
260
 
261
  errors=[]
262
 
 
288
  ext=".c"
289
  pl="C"
290
 
291
+ filename = "{}_{}{}".format(generation['file_path'].split("/")[-2]+"_"+generation['file_path'].split("/")[-1].split(ext)[0],index,ext)
292
 
293
  index+=1
294
 
 
302
  for k in k_values:
303
 
304
  overall_score[k].append(pass_at_k_continuous_vals(n = args.num_samples,k = k,vals = scores))
305
+
306
  scores_dump.append(scores)
307
  scores=[]
308
+ path = os.path.join("evaluation_results","edit",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples")
309
  f = open(os.path.join(path,"results.txt"),'w')
310
  f.write(str(scores_dump))
311
  f.close()
 
318
  results["syntax_errors"] = syntax_errors
319
  results["no_of_syntax"] = len(syn_errors)
320
  samples = pd.DataFrame([results])
321
+ path = os.path.join("evaluation_results","edit",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples")
322
+ samples.to_json(os.path.join(path,"results_{}.jsonl".format(args.metric)), orient="records", lines=True)
323
+ print("{}".format(results))
324
 
325
 
326
 
 
330
  overall_score={}
331
  for k in k_values:
332
  overall_score[k]=[]
333
+ generations_path = os.path.join("generations","edit",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples","generated_outputs.jsonl")
334
  passed = 0
335
  count = 0
336
  with jsonlines.open(generations_path) as reader:
337
+
338
+ res_path = os.path.split(generations_path)[0].split('/')
339
  res_path.insert(1,"evaluation_results")
340
+ res_path = os.path.join("/".join(res_path[1:]),"results.txt")
341
  codeql_results = eval(open(res_path).read())
342
  for generation,res in zip(reader,codeql_results):
343
  scores=[]
344
+ count += 1
345
 
346
  for l in range(len(generation['generated_answers'])):
347
  generated_answers=post_process_generations(generated_answers=generation['generated_answers'][l],model=args.model,prompt=args.prompt,pl=generation['pl'])
348
+ passed += generated_answers[0]
349
 
350
+ diff_score_bleu=res[l]*diff_bleu(source_code=generation['source_code'],target=generation['target_code'],generated_answers=generated_answers[1],pl=generation['pl'])
351
 
352
  scores.append(diff_score_bleu)
353
 
 
363
  results["Passed"] = (passed*100)/(count*args.num_samples)
364
  scores.append((passed*100)/(count*args.num_samples))
365
  samples = pd.DataFrame([results])
366
+ path = os.path.join("evaluation_results","edit",args.data_subset,args.model,args.prompt,f"{args.num_samples}_samples")
367
+ samples.to_json(os.path.join(path,"results_{}.jsonl".format(args.metric)), orient="records", lines=True)
368
+ print("{}".format(results))
src/evaluation/qls_for_security/cpp/cwe_125_mitre-eg-1.ql ADDED
@@ -0,0 +1,26 @@
1
+ /**
2
+ * @name Check for CWE-125 in mitre-eg-1
3
+ * @description
4
+ * @kind problem
5
+ * @problem.severity warning
6
+ * @id custom-cpp/cwe-125-mitre-eg-1
7
+ */
8
+
9
+ import cpp
10
+ import semmle.code.cpp.controlflow.Guards
11
+ import semmle.code.cpp.rangeanalysis.SimpleRangeAnalysis
12
+
13
+ from ArrayExpr ae
14
+
15
+ where
16
+
17
+ //choose expressions that represent index and size
18
+ ae.getArrayOffset().toString() = "index"
19
+
20
+ and not ( exists( GuardCondition gc, Expr e, Expr expr |
21
+ e.toString() = "index"
22
+ and expr.toString() = "size"
23
+ and gc.ensuresLt(e, expr, 0, ae.getBasicBlock(), true) )
24
+ and lowerBound(ae.getArrayOffset()) >= 0 )
25
+
26
+ select ae, "cwe_125 found in"+ae.getFile().toString()
src/evaluation/tree-sitter/build/my-languages_c++.so ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f34864d8b75a065ec3ba7de234be6881e02c40c19d06e5ca8da55dcf6de69a6
3
+ size 3340728
src/evaluation/tree-sitter/build/my-languages_c.so ADDED
Binary file (701 kB).
 
src/evaluation/tree-sitter/build/my-languages_java.so ADDED
Binary file (492 kB).
 
src/evaluation/tree-sitter/build/my-languages_javascript.so ADDED
Binary file (580 kB).
 
src/evaluation/tree-sitter/build/my-languages_kotlin.so ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:858db301394b79240bc75b136ec70c3dc2c96e0d6faae2d6d844183506eaddde
3
+ size 12724728
src/evaluation/tree-sitter/build/my-languages_objectivec.so ADDED
Binary file (543 kB).
 
src/evaluation/tree-sitter/build/my-languages_python.so ADDED
Binary file (543 kB).
 
src/evaluation/tree-sitter/build/my-languages_scala.so ADDED
Binary file (543 kB).
 
src/utils.py CHANGED
@@ -255,47 +255,47 @@ def check_syntax(code,language):
255
  code += '\n'
256
 
257
  if(language.lower() == "java"):
258
- path = 'tree-sitter/tree-sitter-java'
259
  elif(language.lower() == "python"):
260
- path = 'tree-sitter/tree-sitter-python'
261
  elif(language.lower() == "scala"):
262
- path = 'tree-sitter/tree-sitter-scala'
263
  elif(language.lower() == "c"):
264
- path = 'tree-sitter/tree-sitter-c'
265
  elif(language.lower() == "c++"):
266
- path = 'tree-sitter/tree-sitter-cpp'
267
  elif(language.lower() == "objectivec"):
268
- path = 'tree-sitter/tree-sitter-objc'
269
  elif(language.lower() == "javascript"):
270
- path = 'tree-sitter/tree-sitter-javascript'
271
  elif(language.lower() == "kotlin"):
272
- path = 'tree-sitter/tree-sitter-kotlin'
273
  else:
274
  return(False)
275
 
276
  Language.build_library(
277
- 'build/my-languages_{}.so'.format(language.lower()),
278
  [
279
  path
280
  ]
281
  )
282
 
283
  if(language.lower() == "java"):
284
- LANGUAGE = Language('build/my-languages_java.so', 'java')
285
  elif(language.lower() == "python"):
286
- LANGUAGE = Language('build/my-languages_python.so', 'python')
287
  elif(language.lower() == "scala"):
288
- LANGUAGE = Language('build/my-languages_scala.so', 'scala')
289
  elif(language.lower() == "c"):
290
- LANGUAGE = Language('build/my-languages_c.so', 'c')
291
  elif(language.lower() == "c++"):
292
- LANGUAGE = Language('build/my-languages_c++.so', 'cpp')
293
  elif(language.lower() == "objectivec"):
294
- LANGUAGE = Language('build/my-languages_objectivec.so', 'objc')
295
  elif(language.lower() == "javascript"):
296
- LANGUAGE = Language('build/my-languages_javascript.so', 'javascript')
297
  elif(language.lower() == "kotlin"):
298
- LANGUAGE = Language('build/my-languages_kotlin.so', 'kotlin')
299
 
300
  parser = Parser()
301
  parser.set_language(LANGUAGE)
 
255
  code += '\n'
256
 
257
  if(language.lower() == "java"):
258
+ path = 'src/evaluation/tree-sitter/tree-sitter-java'
259
  elif(language.lower() == "python"):
260
+ path = 'src/evaluation/tree-sitter/tree-sitter-python'
261
  elif(language.lower() == "scala"):
262
+ path = 'src/evaluation/tree-sitter/tree-sitter-scala'
263
  elif(language.lower() == "c"):
264
+ path = 'src/evaluation/tree-sitter/tree-sitter-c'
265
  elif(language.lower() == "c++"):
266
+ path = 'src/evaluation/tree-sitter/tree-sitter-cpp'
267
  elif(language.lower() == "objectivec"):
268
+ path = 'src/evaluation/tree-sitter/tree-sitter-objc'
269
  elif(language.lower() == "javascript"):
270
+ path = 'src/evaluation/tree-sitter/tree-sitter-javascript'
271
  elif(language.lower() == "kotlin"):
272
+ path = 'src/evaluation/tree-sitter/tree-sitter-kotlin'
273
  else:
274
  return(False)
275
 
276
  Language.build_library(
277
+ 'src/evaluation/tree-sitter/build/my-languages_{}.so'.format(language.lower()),
278
  [
279
  path
280
  ]
281
  )
282
 
283
  if(language.lower() == "java"):
284
+ LANGUAGE = Language('src/evaluation/tree-sitter/build/my-languages_java.so', 'java')
285
  elif(language.lower() == "python"):
286
+ LANGUAGE = Language('src/evaluation/tree-sitter/build/my-languages_python.so', 'python')
287
  elif(language.lower() == "scala"):
288
+ LANGUAGE = Language('src/evaluation/tree-sitter/build/my-languages_scala.so', 'scala')
289
  elif(language.lower() == "c"):
290
+ LANGUAGE = Language('src/evaluation/tree-sitter/build/my-languages_c.so', 'c')
291
  elif(language.lower() == "c++"):
292
+ LANGUAGE = Language('src/evaluation/tree-sitter/build/my-languages_c++.so', 'cpp')
293
  elif(language.lower() == "objectivec"):
294
+ LANGUAGE = Language('src/evaluation/tree-sitter/build/my-languages_objectivec.so', 'objc')
295
  elif(language.lower() == "javascript"):
296
+ LANGUAGE = Language('src/evaluation/tree-sitter/build/my-languages_javascript.so', 'javascript')
297
  elif(language.lower() == "kotlin"):
298
+ LANGUAGE = Language('src/evaluation/tree-sitter/build/my-languages_kotlin.so', 'kotlin')
299
 
300
  parser = Parser()
301
  parser.set_language(LANGUAGE)