From 5fe541d0323c883ee0df8bff0bb9a1699ecc5439 Mon Sep 17 00:00:00 2001
From: nmd2k
Date: Sun, 13 Oct 2024 11:04:13 +0700
Subject: [PATCH 1/2] Minor update

---
 leaderboards/codemmlu/index.html   | 6 +++---
 leaderboards/codemmlu/results.json | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/leaderboards/codemmlu/index.html b/leaderboards/codemmlu/index.html
index 9fc39c4..2fe6ac3 100644
--- a/leaderboards/codemmlu/index.html
+++ b/leaderboards/codemmlu/index.html
@@ -141,7 +141,7 @@

CodeMMLU Leaderboard

-

CodeMMLU: A Multi-Task Benchmark for Assessing Code Understanding Capabilities of CodeLLMs

+

A Multi-Task Benchmark for Assessing Code Understanding Capabilities of CodeLLMs


@@ -151,7 +151,7 @@

alt="blog" class="img-fluid" /> - leaderboard id="Complete" checked /> - +

diff --git a/leaderboards/codemmlu/results.json b/leaderboards/codemmlu/results.json
index 6e06758..d7e5816 100644
--- a/leaderboards/codemmlu/results.json
+++ b/leaderboards/codemmlu/results.json
@@ -9,7 +9,7 @@
     "realtask_accuracy": 38.26,
     "syntactic_accuracy": 67.22,
     "semantic_accuracy": 66.08,
-    "prompted": false,
+    "prompted": true,
     "size": null,
     "direct_complete": false,
     "lazy": false,
@@ -25,7 +25,7 @@
     "realtask_accuracy": 77.18,
     "syntactic_accuracy": 60.41,
     "semantic_accuracy": 57.81,
-    "prompted": false,
+    "prompted": true,
     "size": null,
     "direct_complete": false,
     "lazy": false,
@@ -41,7 +41,7 @@
     "realtask_accuracy": 45.26,
     "syntactic_accuracy": 61.68,
     "semantic_accuracy": 53.65,
-    "prompted": false,
+    "prompted": true,
     "size": null,
     "direct_complete": false,
     "lazy": false,

From f8f0cff3a082877dbac043d7928d5cc17aa3a6b2 Mon Sep 17 00:00:00 2001
From: nmd2k
Date: Mon, 21 Apr 2025 11:22:41 +0700
Subject: [PATCH 2/2] Update codemmlu leaderboard

---
 codemmlu/index.html                     |  815 +++++++-----
 codemmlu/scripts/convert_csv_to_json.py |   53 +
 .../static/data/_results.json           |    0
 codemmlu/static/data/results.json       | 1106 +++++++++++++++++
 leaderboards/codemmlu/_results.json     |  301 -----
 .../codemmlu/images/codemmlu-logo.png   |  Bin 33299 -> 0 bytes
 leaderboards/codemmlu/index.html        |  766 +-----------
 7 files changed, 1702 insertions(+), 1339 deletions(-)
 create mode 100644 codemmlu/scripts/convert_csv_to_json.py
 rename leaderboards/codemmlu/results.json => codemmlu/static/data/_results.json (100%)
 create mode 100644 codemmlu/static/data/results.json
 delete mode 100644 leaderboards/codemmlu/_results.json
 delete mode 100644 leaderboards/codemmlu/images/codemmlu-logo.png

diff --git a/codemmlu/index.html b/codemmlu/index.html
index 9f15ec4..4914a91 100644
--- a/codemmlu/index.html
+++ b/codemmlu/index.html
@@ -43,6 +43,11 @@
+
+
+
+
+
@@ -140,7 +247,7 @@

CodeMMLU: A Multi-Task Benchmark for Assessing Code Understanding Capabilities of CodeLLMs
@@ -227,7 +340,7 @@

Abstract

- Recent advancements in Code Large Language Models (CodeLLMs) have predominantly focused on open-ended code generation tasks, often neglecting the critical aspect of code understanding and comprehension. To bridge this gap, we present CodeMMLU, a comprehensive multiple-choice question-answer benchmark designed to evaluate the depth of software and code understanding in LLMs. CodeMMLU includes over 10,000 questions sourced from diverse domains, encompassing tasks such as code analysis, defect detection, and software engineering principles across multiple programming languages. Unlike traditional benchmarks, CodeMMLU assesses models’ ability to reason about code rather than merely generate it, providing deeper insights into their grasp of complex software concepts and systems. Our extensive evaluation reveals that even state-of-the-art models face significant challenges with CodeMMLU, highlighting deficiencies in comprehension beyond code generation. By underscoring the crucial relationship between code understanding and effective generation, CodeMMLU serves as a vital resource for advancing AI-assisted software development, ultimately aiming to create more reliable and capable coding assistants. + Recent advancements in Code Large Language Models (CodeLLMs) have predominantly focused on open-ended code generation tasks, often neglecting the critical aspect of code understanding and comprehension. To bridge this gap, we present CodeMMLU, a comprehensive multiple-choice question-answer benchmark designed to evaluate the depth of software and code understanding in LLMs. CodeMMLU includes over 10,000 questions sourced from diverse domains, encompassing tasks such as code analysis, defect detection, and software engineering principles across multiple programming languages. Unlike traditional benchmarks, CodeMMLU assesses models' ability to reason about code rather than merely generate it, providing deeper insights into their grasp of complex software concepts and systems. Our extensive evaluation reveals that even state-of-the-art models face significant challenges with CodeMMLU, highlighting deficiencies in comprehension beyond code generation. By underscoring the crucial relationship between code understanding and effective generation, CodeMMLU serves as a vital resource for advancing AI-assisted software development, ultimately aiming to create more reliable and capable coding assistants.

@@ -253,7 +366,7 @@

Overview

- CodeMMLU enables us to assess LLMs’ capabilities in coding and software tasks from a novel perspective, extending beyond traditional code generation and completion. Our analysis reveals several notable findings: (1) previously unexplored bias issues in CodeLLMs, aligning with those observed in natural language MCQA tasks; (2) GPT-4 consistently achieving the highest average performance among closed-source models, while (3) the Meta-Llama family demonstrated the greatest accuracy among open-source models; (4) scaling laws related to model size were partially observed within the same model family but not across different families, suggesting the significant influence of pretraining datasets, methodologies, and model architectures; (5) advanced prompting techniques, such as Chain-of-Thought (CoT), consistently degraded performance, raising concerns about CodeLLMs’ reasoning abilities on complex, step-by-step tasks; and (6) benchmarks like HumanEval, when converted from open-ended code generation to MCQA format, show that LLMs perform worse on MCQA, raising concerns about their real capability to understand and comprehend code. These findings highlight the current shortcomings of CodeLLMs and the intricate relationship between model architecture, training data quality, and evaluation methods in determining performance on software-related tasks. + CodeMMLU enables us to assess LLMs' capabilities in coding and software tasks from a novel perspective, extending beyond traditional code generation and completion. Our analysis reveals several notable findings: (1) previously unexplored bias issues in CodeLLMs, aligning with those observed in natural language MCQA tasks; (2) GPT-4 consistently achieving the highest average performance among closed-source models, while (3) the Meta-Llama family demonstrated the greatest accuracy among open-source models; (4) scaling laws related to model size were partially observed within the same model family but not across different families, suggesting the significant influence of pretraining datasets, methodologies, and model architectures; (5) advanced prompting techniques, such as Chain-of-Thought (CoT), consistently degraded performance, raising concerns about CodeLLMs' reasoning abilities on complex, step-by-step tasks; and (6) benchmarks like HumanEval, when converted from open-ended code generation to MCQA format, show that LLMs perform worse on MCQA, raising concerns about their real capability to understand and comprehend code. These findings highlight the current shortcomings of CodeLLMs and the intricate relationship between model architecture, training data quality, and evaluation methods in determining performance on software-related tasks.
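To make finding (6) concrete, the sketch below shows one way an open-ended completion task could be recast as a multiple-choice item. This is only an illustration under assumed conventions (a four-option format, a hypothetical `to_mcqa` helper, and a toy problem); it is not CodeMMLU's actual construction pipeline.

```python
import random

def to_mcqa(prompt: str, canonical_solution: str, distractors: list[str], seed: int = 0):
    """Recast an open-ended completion task as a four-option multiple-choice question.

    Returns the formatted question text and the gold answer label.
    """
    options = list(distractors[:3]) + [canonical_solution]
    random.Random(seed).shuffle(options)  # deterministic shuffle of the answer options
    labels = "ABCD"
    lines = [
        "Which completion is correct for the function below?",
        "",
        prompt.rstrip(),
        "",
    ]
    for label, option in zip(labels, options):
        lines.append(f"({label}) {option.strip()}")
    gold = labels[options.index(canonical_solution)]
    return "\n".join(lines), gold

# Toy example (not a real benchmark item).
question, gold = to_mcqa(
    prompt='def add(a, b):\n    """Return the sum of a and b."""',
    canonical_solution="return a + b",
    distractors=["return a - b", "return a * b", "return abs(a - b)"],
)
print(question)
print("Gold answer:", gold)
```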

@@ -282,251 +395,396 @@

Overview

-

Evaluation Results

+

Evaluation Results

CodeMMLU revealed significant performance differences across models, as shown in the table below. OpenAI's GPT-4o outperformed all models on CodeMMLU, demonstrating its strength across diverse tasks. Notably, despite not being the latest model, the instruction-tuned version of Meta-Llama-3-70B achieved the highest score among open-source models from 8 families. While LLMs perform well on knowledge-based tasks, they struggle with real-world problems, particularly in defect detection tasks.

| Model name                                 | Size (B) | Syntactic knowledge | Semantic knowledge | Real-world tasks | CodeMMLU |
|--------------------------------------------|----------|---------------------|--------------------|------------------|----------|
| Closed-source models                       |          |                     |                    |                  |          |
| Anthropic / Claude-3-sonnet@20240229       | -        | 67.22               | 66.08              | 38.26            | 53.97    |
| OpenAI / GPT-4o-2024-05-13                 | -        | 60.41               | 57.82              | 77.18            | 67.0     |
| OpenAI / GPT-3.5-turbo-0613                | -        | 61.68               | 53.64              | 45.26            | 51.7     |
| Open-source models                         |          |                     |                    |                  |          |
| Meta Llama / CodeLlama-34b-Instruct-hf     | 34       | 56.81               | 46.93              | 23.55            | 38.73    |
| Meta Llama / Meta-Llama-3-70B              | 70       | 63.38               | 57.64              | 35.29            | 48.98    |
| Meta Llama / Meta-Llama-3-70B-Instruct     | 70       | 64.90               | 62.96              | 60.84            | 62.45    |
| Meta Llama / Meta-Llama-3.1-70B            | 70       | 64.09               | 59.00              | 8.22             | 37.56    |
| Meta Llama / Meta-Llama-3.1-70B-Instruct   | 70       | 64.42               | 62.25              | 56.11            | 60       |
| Mistral / Mistral-7B-Instruct-v0.3         | 7        | 54.42               | 51.25              | 31.85            | 43.33    |
| Mistral / Mixtral-8x7B-Instruct-v0.1       | 46.7     | 61.17               | 54.89              | 24.90            | 42.96    |
| Mistral / Codestral-22B-v0.1               | 22       | 60.34               | 52.11              | 37.86            | 47.6     |
| Phi / Phi-3-medium-128k-instruct           | 14       | 58.54               | 54.56              | 37.89            | 48.03    |
| Phi / Phi-3-mini-128k-instruct             | 3.8      | 53.01               | 48.65              | 22.36            | 37.93    |
| Qwen / Qwen2-57B-A14B-Instruct             | 57       | 61.34               | 57.48              | 30.48            | 46.34    |
| Qwen / CodeQwen1.5-7B-Chat                 | 7        | 49.66               | 46.58              | 56.37            | 49.82    |
| Yi / Yi-1.5-34B-Chat                       | 34       | 58.32               | 55.59              | 40.27            | 49.39    |
| Yi / Yi-1.5-9B-Chat                        | 9        | 55.64               | 55.06              | 37.15            | 47.23    |
| DeepSeek / DeepSeek-coder-7b-instruct-v1.5 | 7        | 56.67               | 47.90              | 28.46            | 41.21    |
| DeepSeek / DeepSeek-coder-33b-instruct     | 33       | 53.65               | 46.11              | 21.47            | 36.6     |
| DeepSeek / DeepSeek-moe-16b-chat           | 16.4     | 31.74               | 35.43              | 27.33            | 31.01    |
| DeepSeek / DeepSeek-Coder-V2-Lite-Instruct | 16       | 59.91               | 54.76              | 33.62            | 46.51    |
| InternLM / InternLM2-5-20b-chat            | 20       | 57.85               | 55.51              | 30.44            | 44.89    |
| StarCoder2 / StarCoder2-15b-instruct-v0.1  | 15       | 56.58               | 49.07              | 42.79            | 47.94    |

Summary performance of LLM families on CodeMMLU. The evaluation results (accuracy %) of different language models across CodeMMLU tasks.

For more benchmark details, please check 👉 HERE 👈

Figure: CodeMMLU accuracy by task on LLMs. While knowledge tasks follow the scaling law, real-world tasks pose greater challenges to LLMs, which highlights the effect of instruction tuning and data quality when evaluating on CodeMMLU.
-
+ + +
@@ -534,53 +792,24 @@

Evaluation Results

- +
-
-
-

Enhancing Functional Correctness and Dependency Invocation abilities

Two approaches are investigated to enhance the performance of generated code in terms of both functional correctness and dependency invocation.

  • Multi-round Debugging: Leveraging test execution outputs and incorporating self-refinement through multiple rounds can dramatically boost a model's performance in generating accurate code and effectively utilizing dependencies.

    Figure 2: Improvement of the performance of several models on RepoExec after a 3-round debugging process.

  • Instruction tuning: RepoExec also comes with a valuable instruction-tuning training dataset. The experimental results, highlighted in the table below, clearly demonstrate the effectiveness of this approach with just a single round of generation.

    Table 2: Improvement of the performance of several models on RepoExec after instruction tuning.

πŸ“ Notes

+
+ +
  • Evaluated using CodeMMLU
  • +
  • Models are ranked according to Accuracy using greedy decoding.
  • +
  • "Size" here is the amount of activated model weight during inference.
  • +
    +
    -
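As a reference for the ranking criterion in the notes above, here is a minimal sketch of how the leaderboard data could be loaded and ordered by overall CodeMMLU accuracy. It assumes the results.json schema added later in this patch (codemmlu/static/data/results.json), where the overall score sits under "pass@1" -> "complete"; it is not an official scoring script.

```python
import json

# Path as used by the updated leaderboard page (see the patch below).
with open("codemmlu/static/data/results.json") as f:
    results = json.load(f)

# Rank models by overall CodeMMLU accuracy, mirroring the
# "ranked according to Accuracy" note above.
ranking = sorted(
    results.items(),
    key=lambda item: item[1]["pass@1"]["complete"],
    reverse=True,
)

for rank, (model, entry) in enumerate(ranking, start=1):
    size = entry["size"] if entry["size"] is not None else "-"
    print(f"{rank:3d}. {model:<35} size={size!s:>6} CodeMMLU={entry['pass@1']['complete']:.2f}")
```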
@@ -606,13 +835,15 @@

    BibTeX

Contact us
- 🌐: fpt-aicenter
- Ⓜ️: support.ailab@fpt.com
+ 🌐 fpt-aicenter
+ Ⓜ️ support.ailab@fpt.com

    +
    +

    -

Acknowledgements
- This page was built using the Academic Project Page Template which was adopted from the Nerfies project page.
+ This page was built using the Academic Project Page Template which was adopted from the Nerfies project page.
+ You are free to borrow the source code of this website, we just ask that you link back to this page in the footer.
This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

    diff --git a/codemmlu/scripts/convert_csv_to_json.py b/codemmlu/scripts/convert_csv_to_json.py new file mode 100644 index 0000000..24a7bd9 --- /dev/null +++ b/codemmlu/scripts/convert_csv_to_json.py @@ -0,0 +1,53 @@ +import csv +import json +import os + +def convert_csv_to_json(csv_path, json_path): + # Dictionary to store the results + results = {} + + # Read CSV file + with open(csv_path, 'r') as csv_file: + csv_reader = csv.DictReader(csv_file) + + for row in csv_reader: + model_name = row['model'].strip() + + # Create model entry + model_entry = { + "link": model_name, + "open-data": "None", + "pass@1": { + "instruct": None, + "complete": float(row['codemmlu']) + }, + "realtask_accuracy": float(row['fundamental']), + "syntactic_accuracy": float(row['syntatic']), + "semantic_accuracy": float(row['semantic']), + "prompted": True, # Instruction models + "size": float(row['size']) if row['size'] else None, + "direct_complete": False, + "lazy": False, + "elo_mle": 874 + } + + results[model_name] = model_entry + + # Write JSON file + with open(json_path, 'w') as json_file: + json.dump(results, json_file, indent=4) + +def main(): + # Get the absolute path to the script's directory + script_dir = os.path.dirname(os.path.abspath(__file__)) + + # Construct paths relative to the script directory + csv_path = os.path.join(script_dir, '..', 'static', 'data', 'CodeMMLU_update_res.csv') + json_path = os.path.join(script_dir, '..', 'static', 'data', 'updated_results.json') + + # Convert CSV to JSON + convert_csv_to_json(csv_path, json_path) + print(f"Conversion complete. JSON file saved to: {json_path}") + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/leaderboards/codemmlu/results.json b/codemmlu/static/data/_results.json similarity index 100% rename from leaderboards/codemmlu/results.json rename to codemmlu/static/data/_results.json diff --git a/codemmlu/static/data/results.json b/codemmlu/static/data/results.json new file mode 100644 index 0000000..d620484 --- /dev/null +++ b/codemmlu/static/data/results.json @@ -0,0 +1,1106 @@ +{ + "Claude 3.7 Sonnet": { + "link": "Claude 3.7 Sonnet", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 61.65 + }, + "realtask_accuracy": 60.92, + "syntactic_accuracy": 52.78, + "semantic_accuracy": 76.26, + "prompted": true, + "size": null, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Claude 3.5 Sonnet": { + "link": "Claude 3.5 Sonnet", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 59.81 + }, + "realtask_accuracy": 58.56, + "syntactic_accuracy": 52.23, + "semantic_accuracy": 73.45, + "prompted": true, + "size": null, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Cluade 3.5 Haiku": { + "link": "Cluade 3.5 Haiku", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 57.25 + }, + "realtask_accuracy": 57.83, + "syntactic_accuracy": 49.24, + "semantic_accuracy": 68.2, + "prompted": true, + "size": null, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Claude 3 Sonnet": { + "link": "Claude 3 Sonnet", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 53.97 + }, + "realtask_accuracy": 38.26, + "syntactic_accuracy": 67.22, + "semantic_accuracy": 66.08, + "prompted": true, + "size": null, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "GPT o3-mini": { + "link": "GPT o3-mini", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 62.36 + }, + 
"realtask_accuracy": 62.77, + "syntactic_accuracy": 53.08, + "semantic_accuracy": 75.5, + "prompted": true, + "size": null, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "GPT 4o": { + "link": "GPT 4o", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 67.0 + }, + "realtask_accuracy": 77.18, + "syntactic_accuracy": 60.41, + "semantic_accuracy": 57.82, + "prompted": true, + "size": null, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "GPT 4o-mini": { + "link": "GPT 4o-mini", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 38.43 + }, + "realtask_accuracy": 20.33, + "syntactic_accuracy": 48.66, + "semantic_accuracy": 55.9, + "prompted": true, + "size": null, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "GPT 3.5-turbo": { + "link": "GPT 3.5-turbo", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 51.7 + }, + "realtask_accuracy": 58.52, + "syntactic_accuracy": 61.68, + "semantic_accuracy": 84.88, + "prompted": true, + "size": null, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3.3 70B Inst": { + "link": "Meta Llama3.3 70B Inst", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 40.66 + }, + "realtask_accuracy": 30.96, + "syntactic_accuracy": 44.31, + "semantic_accuracy": 52.76, + "prompted": true, + "size": 70.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3.1 405B Inst": { + "link": "Meta Llama3.1 405B Inst", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 58.23 + }, + "realtask_accuracy": 57.1, + "syntactic_accuracy": 50.82, + "semantic_accuracy": 71.41, + "prompted": true, + "size": 405.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3.1 70B": { + "link": "Meta Llama3.1 70B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 37.56 + }, + "realtask_accuracy": 65.65, + "syntactic_accuracy": 64.09, + "semantic_accuracy": 87.02, + "prompted": true, + "size": 70.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3 70B": { + "link": "Meta Llama3 70B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 48.98 + }, + "realtask_accuracy": 63.1, + "syntactic_accuracy": 63.38, + "semantic_accuracy": 86.02, + "prompted": true, + "size": 70.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "CodeLlama34B Inst": { + "link": "CodeLlama34B Inst", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 38.73 + }, + "realtask_accuracy": 23.55, + "syntactic_accuracy": 56.81, + "semantic_accuracy": 46.93, + "prompted": true, + "size": 34.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeek R1": { + "link": "DeepSeek R1", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 43.91 + }, + "realtask_accuracy": 38.08, + "syntactic_accuracy": 42.39, + "semantic_accuracy": 56.77, + "prompted": true, + "size": 671.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeek V3": { + "link": "DeepSeek V3", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 49.08 + }, + "realtask_accuracy": 45.06, + "syntactic_accuracy": 48.3, + "semantic_accuracy": 57.53, + "prompted": true, + "size": 685.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekCoder 7B_v1.5 Inst": { + "link": "DeepSeekCoder 7B_v1.5 Inst", + "open-data": 
"None", + "pass@1": { + "instruct": null, + "complete": 41.21 + }, + "realtask_accuracy": 28.46, + "syntactic_accuracy": 56.67, + "semantic_accuracy": 47.9, + "prompted": true, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekCoder 33B Inst": { + "link": "DeepSeekCoder 33B Inst", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 36.6 + }, + "realtask_accuracy": 52.16, + "syntactic_accuracy": 53.65, + "semantic_accuracy": 74.89, + "prompted": true, + "size": 33.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekMoE 16B Chat": { + "link": "DeepSeekMoE 16B Chat", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 31.01 + }, + "realtask_accuracy": 41.48, + "syntactic_accuracy": 31.74, + "semantic_accuracy": 41.94, + "prompted": true, + "size": 16.4, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Mixtral 8x7B Inst": { + "link": "Mixtral 8x7B Inst", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 42.96 + }, + "realtask_accuracy": 61.83, + "syntactic_accuracy": 61.17, + "semantic_accuracy": 85.02, + "prompted": true, + "size": 46.7, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Phi4": { + "link": "Phi4", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 49.19 + }, + "realtask_accuracy": 47.82, + "syntactic_accuracy": 45.34, + "semantic_accuracy": 57.46, + "prompted": true, + "size": 14.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Phi4 Mini Inst": { + "link": "Phi4 Mini Inst", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 34.85 + }, + "realtask_accuracy": 19.75, + "syntactic_accuracy": 41.94, + "semantic_accuracy": 51.59, + "prompted": true, + "size": 12.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Phi3 Medium Inst (128k)": { + "link": "Phi3 Medium Inst (128k)", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 48.03 + }, + "realtask_accuracy": 37.89, + "syntactic_accuracy": 58.54, + "semantic_accuracy": 54.56, + "prompted": true, + "size": 14.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Qwen2.5 14B Inst": { + "link": "Qwen2.5 14B Inst", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 51.38 + }, + "realtask_accuracy": 51.49, + "syntactic_accuracy": 46.38, + "semantic_accuracy": 58.7, + "prompted": true, + "size": 14.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "QwQ 38B Preview": { + "link": "QwQ 38B Preview", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 46.34 + }, + "realtask_accuracy": 30.48, + "syntactic_accuracy": 61.34, + "semantic_accuracy": 57.48, + "prompted": true, + "size": 57.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "QwenCoder2.5 14B Inst": { + "link": "QwenCoder2.5 14B Inst", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 47.65 + }, + "realtask_accuracy": 43.27, + "syntactic_accuracy": 46.22, + "semantic_accuracy": 57.74, + "prompted": true, + "size": 14.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "QwenCoder2.5 32B Inst": { + "link": "QwenCoder2.5 32B Inst", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 56.4 + }, + "realtask_accuracy": 53.89, + "syntactic_accuracy": 50.63, + "semantic_accuracy": 69.61, + "prompted": true, + "size": 32.0, + "direct_complete": false, + "lazy": 
false, + "elo_mle": 874 + }, + "CodeQwen1.5 7B Chat": { + "link": "CodeQwen1.5 7B Chat", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 49.82 + }, + "realtask_accuracy": 46.06, + "syntactic_accuracy": 49.66, + "semantic_accuracy": 67.62, + "prompted": true, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Yi1.5 34B Chat": { + "link": "Yi1.5 34B Chat", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 49.39 + }, + "realtask_accuracy": 40.27, + "syntactic_accuracy": 58.32, + "semantic_accuracy": 55.59, + "prompted": true, + "size": 34.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Yi1.5 9B Chat": { + "link": "Yi1.5 9B Chat", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 47.23 + }, + "realtask_accuracy": 60.05, + "syntactic_accuracy": 55.64, + "semantic_accuracy": 75.46, + "prompted": true, + "size": 9.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "InternLM2.5 20B Chat": { + "link": "InternLM2.5 20B Chat", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 44.89 + }, + "realtask_accuracy": 30.44, + "syntactic_accuracy": 57.85, + "semantic_accuracy": 55.51, + "prompted": true, + "size": 20.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "CodeLlama-7b-Instruct-hf": { + "link": "CodeLlama-7b-Instruct-hf", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 27.01 + }, + "realtask_accuracy": 4.78, + "syntactic_accuracy": 50.14, + "semantic_accuracy": 41.22, + "prompted": true, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "CodeLlama-7b-Python-hf": { + "link": "CodeLlama-7b-Python-hf", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 29.49 + }, + "realtask_accuracy": 19.36, + "syntactic_accuracy": 38.7, + "semantic_accuracy": 36.87, + "prompted": false, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "CodeLlama-13b-Instruct-hf": { + "link": "CodeLlama-13b-Instruct-hf", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 30.25 + }, + "realtask_accuracy": 10.53, + "syntactic_accuracy": 50.58, + "semantic_accuracy": 43.0, + "prompted": true, + "size": 13.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "CodeLlama-13b-Python-hf": { + "link": "CodeLlama-13b-Python-hf", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 29.82 + }, + "realtask_accuracy": 56.98, + "syntactic_accuracy": 12.89, + "semantic_accuracy": 4.88, + "prompted": false, + "size": 13.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "CodeLlama-13b-hf": { + "link": "CodeLlama-13b-hf", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 28.51 + }, + "realtask_accuracy": 6.65, + "syntactic_accuracy": 50.58, + "semantic_accuracy": 42.95, + "prompted": false, + "size": 13.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "CodeLlama-34b-Python-hf": { + "link": "CodeLlama-34b-Python-hf", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 9.4 + }, + "realtask_accuracy": 9.37, + "syntactic_accuracy": 15.57, + "semantic_accuracy": 5.34, + "prompted": false, + "size": 34.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3 8B": { + "link": "Meta Llama3 8B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 51.89 + }, + "realtask_accuracy": 
53.84, + "syntactic_accuracy": 54.14, + "semantic_accuracy": 47.8, + "prompted": false, + "size": 8.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3 8B Instruct": { + "link": "Meta Llama3 8B Instruct", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 46.04 + }, + "realtask_accuracy": 38.38, + "syntactic_accuracy": 58.1, + "semantic_accuracy": 48.21, + "prompted": true, + "size": 8.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3.1 70B": { + "link": "Meta Llama3.1 70B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 37.56 + }, + "realtask_accuracy": 8.22, + "syntactic_accuracy": 64.09, + "semantic_accuracy": 59.0, + "prompted": false, + "size": 70.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3.1 70B Instruct": { + "link": "Meta Llama3.1 70B Instruct", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 60.0 + }, + "realtask_accuracy": 56.11, + "syntactic_accuracy": 64.41, + "semantic_accuracy": 62.25, + "prompted": true, + "size": 70.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3.1 8B": { + "link": "Meta Llama3.1 8B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 42.06 + }, + "realtask_accuracy": 31.58, + "syntactic_accuracy": 53.95, + "semantic_accuracy": 48.09, + "prompted": false, + "size": 8.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Meta Llama3.1 8B Instruct": { + "link": "Meta Llama3.1 8B Instruct", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 45.22 + }, + "realtask_accuracy": 35.7, + "syntactic_accuracy": 56.54, + "semantic_accuracy": 50.36, + "prompted": true, + "size": 8.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Mistral 7B_v0.1 Instruct": { + "link": "Mistral-7B-Instruct-v0.1", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 45.55 + }, + "realtask_accuracy": 41.49, + "syntactic_accuracy": 52.74, + "semantic_accuracy": 46.16, + "prompted": true, + "size": 6.7, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Mistral 7B_v0.2 Instruct": { + "link": "Mistral-7B-Instruct-v0.2", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 39.14 + }, + "realtask_accuracy": 26.01, + "syntactic_accuracy": 52.14, + "semantic_accuracy": 47.97, + "prompted": true, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Mistral 7B_v0.3 Instruct": { + "link": "Mistral-7B-Instruct-v0.3", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 43.33 + }, + "realtask_accuracy": 31.85, + "syntactic_accuracy": 54.42, + "semantic_accuracy": 51.25, + "prompted": true, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Mixtral 8x7B_v0.1 Instruct": { + "link": "Mixtral-8x7B-Instruct-v0.1", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 40.93 + }, + "realtask_accuracy": 13.49, + "syntactic_accuracy": 61.17, + "semantic_accuracy": 54.89, + "prompted": true, + "size": 46.7, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Codestral 22B_v0.1": { + "link": "Codestral 22B_v0.1", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 47.6 + }, + "realtask_accuracy": 37.86, + "syntactic_accuracy": 60.34, + "semantic_accuracy": 52.11, + "prompted": false, + "size": 22.0, + "direct_complete": false, + 
"lazy": false, + "elo_mle": 874 + }, + "Phi3 medium Instruct (4k)": { + "link": "Phi3 medium Instruct (4k)", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 50.95 + }, + "realtask_accuracy": 43.17, + "syntactic_accuracy": 58.42, + "semantic_accuracy": 56.34, + "prompted": true, + "size": 14.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Phi3 mini Instruct (128k)": { + "link": "Phi3 mini Instruct (128k)", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 37.93 + }, + "realtask_accuracy": 22.36, + "syntactic_accuracy": 53.01, + "semantic_accuracy": 48.65, + "prompted": true, + "size": 3.8, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Phi3 mini Instruct (4k)": { + "link": "Phi3 mini Instruct (4k)", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 39.99 + }, + "realtask_accuracy": 27.63, + "syntactic_accuracy": 54.73, + "semantic_accuracy": 46.65, + "prompted": true, + "size": 3.8, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Phi3 small Instruct (8k)": { + "link": "Phi3 small Instruct (8k)", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 43.69 + }, + "realtask_accuracy": 26.81, + "syntactic_accuracy": 57.6, + "semantic_accuracy": 56.92, + "prompted": true, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Phind CodeLlama 34B_v2": { + "link": "Phind CodeLlama 34B_v2", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 39.96 + }, + "realtask_accuracy": 25.51, + "syntactic_accuracy": 57.57, + "semantic_accuracy": 47.47, + "prompted": false, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Qwen2 0.5B Instruct": { + "link": "Qwen2 0.5B Instruct", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 34.21 + }, + "realtask_accuracy": 29.55, + "syntactic_accuracy": 38.58, + "semantic_accuracy": 37.53, + "prompted": true, + "size": 0.5, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Qwen2 1.5B Instruct": { + "link": "Qwen2 1.5B Instruct", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 34.03 + }, + "realtask_accuracy": 15.18, + "syntactic_accuracy": 51.54, + "semantic_accuracy": 47.5, + "prompted": true, + "size": 1.5, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Qwen2 7B": { + "link": "Qwen2 7B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 53.28 + }, + "realtask_accuracy": 49.3, + "syntactic_accuracy": 58.31, + "semantic_accuracy": 55.23, + "prompted": false, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Qwen2 7B Instruct": { + "link": "Qwen2 7B Instruct", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 51.3 + }, + "realtask_accuracy": 42.66, + "syntactic_accuracy": 59.9, + "semantic_accuracy": 57.08, + "prompted": true, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "CodeQwen1.5 7B": { + "link": "CodeQwen1.5 7B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 42.56 + }, + "realtask_accuracy": 36.76, + "syntactic_accuracy": 52.51, + "semantic_accuracy": 43.65, + "prompted": false, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "CodeQwen1.5 7B Chat": { + "link": "CodeQwen1.5 7B Chat", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 49.82 + }, + 
"realtask_accuracy": 56.37, + "syntactic_accuracy": 49.66, + "semantic_accuracy": 41.18, + "prompted": false, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "Yi 1.5 6B Chat": { + "link": "Yi 1.5 6B Chat", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 44.13 + }, + "realtask_accuracy": 33.57, + "syntactic_accuracy": 55.1, + "semantic_accuracy": 50.91, + "prompted": false, + "size": 6.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekCoder 33B": { + "link": "DeepSeekCoder 33B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 6.69 + }, + "realtask_accuracy": 11.05, + "syntactic_accuracy": 0.0, + "semantic_accuracy": 5.33, + "prompted": false, + "size": 33.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekCoder 6.7B": { + "link": "DeepSeekCoder 6.7B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 27.06 + }, + "realtask_accuracy": 4.8, + "syntactic_accuracy": 49.45, + "semantic_accuracy": 41.81, + "prompted": false, + "size": 6.7, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekCoder 6.7B Instruct": { + "link": "DeepSeekCoder 6.7B Instruct", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 29.4 + }, + "realtask_accuracy": 8.54, + "syntactic_accuracy": 50.8, + "semantic_accuracy": 42.94, + "prompted": true, + "size": 6.7, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekCoder 7B": { + "link": "DeepSeekCoder 7B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 37.48 + }, + "realtask_accuracy": 17.19, + "syntactic_accuracy": 58.79, + "semantic_accuracy": 50.35, + "prompted": false, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekMoE 16B": { + "link": "DeepSeekMoE 16B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 29.31 + }, + "realtask_accuracy": 18.53, + "syntactic_accuracy": 39.98, + "semantic_accuracy": 36.56, + "prompted": false, + "size": 16.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekCoder V2-Lite": { + "link": "DeepSeekCoder V2 Lite", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 40.88 + }, + "realtask_accuracy": 23.47, + "syntactic_accuracy": 59.44, + "semantic_accuracy": 51.71, + "prompted": false, + "size": 16.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "DeepSeekCoder V2-Lite Instruct": { + "link": "DeepSeek-Coder-V2-Lite-Instruct", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 46.51 + }, + "realtask_accuracy": 33.62, + "syntactic_accuracy": 59.91, + "semantic_accuracy": 54.75, + "prompted": true, + "size": 16.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "InternLM2.5 7B Chat": { + "link": "internlm2_5-7b-chat", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 42.64 + }, + "realtask_accuracy": 27.43, + "syntactic_accuracy": 57.32, + "semantic_accuracy": 53.13, + "prompted": true, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + }, + "StarCoder2 15B Instruct": { + "link": "StarCoder2 15B Instruct", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 47.94 + }, + "realtask_accuracy": 42.78, + "syntactic_accuracy": 56.57, + "semantic_accuracy": 49.07, + "prompted": true, + "size": 15.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 
874 + }, + "StarCoder2 7B": { + "link": "StarCoder2 7B", + "open-data": "None", + "pass@1": { + "instruct": null, + "complete": 35.64 + }, + "realtask_accuracy": 27.42, + "syntactic_accuracy": 45.87, + "semantic_accuracy": 39.77, + "prompted": false, + "size": 7.0, + "direct_complete": false, + "lazy": false, + "elo_mle": 874 + } +} \ No newline at end of file diff --git a/leaderboards/codemmlu/_results.json b/leaderboards/codemmlu/_results.json deleted file mode 100644 index 12ae803..0000000 --- a/leaderboards/codemmlu/_results.json +++ /dev/null @@ -1,301 +0,0 @@ -{ - "CodeLlama-34B-Instruct": { - "link": "https://huggingface.co/codellama/CodeLlama-34b-hf", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 38.73 - }, - "prompted": true, - "size": 34, - "direct_complete": false, - "lazy": false, - "elo_mle": 942 - }, - "Meta-Llama-3-70B": { - "link": "https://huggingface.co/meta-llama/Meta-Llama-3-70B", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 48.98 - }, - "prompted": false, - "size": 70, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Meta-Llama-3-70B-Instruct": { - "link": "https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 62.45 - }, - "prompted": true, - "size": 70, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Meta-Llama-3.1-70B-Instruct": { - "link": "https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 60 - }, - "prompted": true, - "size": 70, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Meta-Llama-3.1-70B": { - "link": "https://huggingface.co/meta-llama/Llama-3.1-70B", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 37.56 - }, - "prompted": false, - "size": 70, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Mistral-7B-Instruct-v0.3": { - "link": "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 43.33 - }, - "prompted": true, - "size": 7, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Mixtral-8x7B-Instruct-v0.1": { - "link": "https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 42.96 - }, - "prompted": true, - "size": 7, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Codestral-22B-v0.1": { - "link": "https://huggingface.co/mistralai/Codestral-22B-v0.1", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 47.6 - }, - "prompted": true, - "size": 22, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Phi-3-medium-128k-instruct": { - "link": "https://huggingface.co/microsoft/Phi-3-medium-128k-instruct", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 48.03 - }, - "prompted": true, - "size": 14, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Phi-3-mini-128k-instruct": { - "link": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 37.93 - }, - "prompted": true, - "size": 3.8, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Qwen2-57B-A14B-Instruct": { - "link": "https://huggingface.co/Qwen/Qwen2-57B-A14B-Instruct", - "open-data": "None", - "pass@1": { - "instruct": null, - 
"complete": 46.34 - }, - "prompted": true, - "size": 57, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "CodeQwen1.5-7B-Chat": { - "link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 49.82 - }, - "prompted": true, - "size": 7, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Yi-1.5-34B-Chat": { - "link": "https://huggingface.co/01-ai/Yi-1.5-34B-Chat", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 49.39 - }, - "prompted": true, - "size": 34, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Yi-1.5-9B-Chat": { - "link": "https://huggingface.co/01-ai/Yi-1.5-9B-Chat", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 47.23 - }, - "prompted": true, - "size": 9, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "DeepSeek-coder-7b-instruct-v1.5": { - "link": "https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 41.21 - }, - "prompted": true, - "size": 7, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "DeepSeek-coder-33b-instruct": { - "link": "https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 36.6 - }, - "prompted": true, - "size": 33, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "DeepSeek-moe-16b-chat": { - "link": "https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 31.01 - }, - "prompted": true, - "size": 16.4, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "DeepSeek-Coder-V2-Lite-Instruct": { - "link": "https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 46.51 - }, - "prompted": true, - "size": 16, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "InternLM2-5-20b-chat": { - "link": "https://huggingface.co/internlm/internlm2_5-20b-chat", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 44.89 - }, - "prompted": true, - "size": 20, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "StarCoder2-15b-instruct-v0.1": { - "link": "https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 47.94 - }, - "prompted": true, - "size": 15, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "Claude-3-sonnet@20240229": { - "link": "", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 53.97 - }, - "prompted": true, - "size": null, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "GPT-4o-2024-05-13": { - "link": "", - "open-data": "None", - "pass@1": { - "instruct": null, - "complete": 67 - }, - "prompted": true, - "size": null, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - }, - "GPT-3.5-turbo-0613": { - "link": "", - "open-data": null, - "pass@1": { - "instruct": null, - "complete": 51.7 - }, - "prompted": true, - "size": null, - "direct_complete": false, - "lazy": false, - "elo_mle": 874 - } -} \ No newline at end of file diff --git a/leaderboards/codemmlu/images/codemmlu-logo.png b/leaderboards/codemmlu/images/codemmlu-logo.png deleted file mode 100644 index 
235d66f5be19f0828b85e78b0afe44d16344cd21..0000000000000000000000000000000000000000
GIT binary patch
[binary image data omitted]
z$0Y1j9tXzDxJM-2u`4lr;1l1@Z0* z!uHLZHRk$sIC9ly742ozlo%gSW$X}51PDKoM2_@opy7$UEXD~d=2_B7uxzl1JQb5v zp=Jb550z>vl};aLGidW6wt^oLw=caI`O<$gFWjjPcJ}&GS{%IVf{w?&pw&cB1T?U+ zr12b-MegA_eixZsOnMQFFZfm!`rHd0q?hOxeDa{GJ9{DGl}1`Cp_`mvG)}>bm7LMT zX<}CU8a-lYkhK5hlwYA-6K@t2*FrF!X|yMUj08?kyR8oILeTtR)RMXOKsO(;)*a}s zDuTbu6KRL<#|@%8wp26APGNeR{J>MFc9sF_d)Y$}P)sxRSbJn6I>Lu)CDa)}lo1M?i;(+G#njB)aO=q35dyUOO`^&<62hh$$oQ zu7Jib=@3m66t`8$Q;_#w{#(f5uA+KB`&+y10@N?6)YX--&N%=Ceo%O9!0FmGknQO0@7-30irYaj!!8-a{g!!Qg~Z2LOOE1&+dD*L+X* zBOxAAYTW=}S<^_tI1i^m`2&u(5o!eC8$8=q$hYW~M&<020$S9-sW5{VaqCpbFF{E~ ziA&h(9bitg5GtznTbadrxZ6x_4Hty}hac}~#B`t@dKas>7&I`EttMvrojrt?`TQUD z+?6>sq*q2@NM}uY-ni%Gn}E0;<%@3Gms86v03j|`4-5Ey_16f*y*J-V6gP?Zq)fncRmTqk z4iK|KwyI%JBuXgLVqY144|1W3oauC=vyfge~*MDvuX*c zhUtFfg~3(%oikvsbjRAzuKWaB--A@{>^GZ#`JW$z63ZH@DyIPHxakHp)=q&_+<*%x zT-Ft;_9`UBdJ@nfbkL`{N?4U1nCLADDbRV7 zNK1ZFioNhMDv4yNC~cD+y{^`Aa<$-B{b1rhXRRwqjYij1j>vwE=PnfXGlCbdc>TU2 zz9i#o7`?=+FUf*JP5G?P43+(~YX7O;f)xf{y#O?~kVH&EH&J+W+l1A(D`fa{Qi=DLYcdyhFjNWMwNd1PDckqzz-#l=#6Tb%7v=J_B};Q><% z(Y^P8ZDPDNw37z4FV@QXd&7Xo2Q;xdu?C zPs?ByI+dF-o;1aS3N0vEO^AQ<@_ZTu@dX{ZfCC**8pN2Ea~f^!|B)ij=kte=RuZey z-K0T%Fx!~r0$rk5;t=xXz-Gn%#fu1=m}9LfQef~ioxu0%n;$_B6_>{Ea}woObjt*- zpiJKNjB%j84(?Z>m=&V3cRyJHsH{L}l2=zpLhf-1ZFZsi#$fVN zNBDh!lOYrgid2I`&6_~oD5Db>fL`6o!ZD(btfC;|U+_2o23EIIFh<)(31jIEgQtZa zcF)#bk~#b0EGNm_M0#uzkTNp=$z7tz;Q9NL2y`&B{VDpV_4a0@NHfDJJo$Q~jE47$ zcj7J=Xidlnshqw>8|ecz!h<^K9lWb$$mk-Zd4=o_?&;QO&kW?^L7B20eS(L=W%3;8 zGU$Eg`3Joch=W@Eulm3zVh07yALssLjL!r!#XZXij@b$52zjk$AT6gOW7vaOL z*kj>A`KNj4l@t9P_#m!=dSw0d^L%D@b|VIG`ZKNQa9X2%`GB0MSB<#Bc&s3KTc`lQ z05^V~R_LjQjg^G>PnO`i$bb2BES!|*-FuRI6bUaBg2~RhET}v?s5d~;72%WCc?v%5 z)aT(y^bSY@Ws*Aq{=^(l%srDmb|!d_If>a-BOth6=0OvsHsFaqZ4zS8AMrK*Pmvzr zynNVeDmMkwN|GgT`v743@o0DFBQXm?#TkWVc+jh_uQ|4L`$TokQu{?4gZG+alJ}|b zt{yw+)Bc~I;>ibY6WHXcu|JT>5)+XdDT;r{X32`D>H-?3O=wXk51@|Sj{gXsmFAqV zT^l9iB(^As%YYz3l?+SfdP{o^kQHiJlku|Btbza3`*#rP5Ni%{@(%<2G2)SLmvwiK zKoHn`N4y2hoFpC%_f8X%I7}+mHnTdPAyk^JC0Uby5`%sm!FxJrton$5P;pqTtz)G zF>z_4-AMOI1oZn+GY#zH!Up_nia zyt2dOEJ_?ZHrd&sxYv4y1-SE6S?Dt`2^%X^MrSFzj69uYTL&%|wf(@dMRF8&*FOxd z>a5p`uJhhKS(1+2rc9mzliPE{a~?f$Xt02yCWoMbUFrQd@HwhGL)C&~g2a;)=SkT# zDji00UBi-|m5l*+t5S)I9dx)i1ZBGF$7RoC0`1JwpE@dZMf4@Ogbx>})N?e@&g2%c z^g@F__;dngN2t+v`LPVZ6-q%ng7%jt8(0dn3Y%tq0?*5Vg{%ATiMY5taXgDs0QPh}r^<)@2dk@vnp#?QIWliv zJ&Dpzmlc$dm|b4Gv63<)CUo5JpNHTS-tSCanwc@FuX@Sz?sV)4{6<^OwH&9d0WX%j zqldxuF14U#&XLxt^{vT86bQenE3=y}J8bavhw z-ly(X%x>VlygWHrU?L0sa1+Uyf)Rv=b?E2f@8A4w#OI`}_12CB-&(|ej?LW2$jjyK z`v#gADBU`B!Qu6oWjL4LSy?;=CYBmjK|#U%@|?E0KikSN7|g*zW%HZcmahKsE4HB~ z|EoG)&1!Lf=uSJxmnz@fJHNz*p6_5C`n0dF+eNcZUrO$=4JqPu_YZx`+EYrqG?cx-pUB3 zSRHIktPU8Id!TdN*lCZJ@B5i&XH9zpNer$)6@KQ*yFjLY)=SSJIM7WcrtJBwtgPDc z@$tQ$^OHr%XV3cUUoL)So!awFUVfSVW~k>h#$`yNP#OzjktrLhtC2fBKmVtl|JCOR z)8cfMOcuM)fQ!E$GU&yHh1XtfN_{WK*$+CLMPFNi$vL^(5nn<=0{P}m>-FWy$N7Z? z>8ml~z}G*Ex_GGbK;ne3WUnR0a@`;U%}*#FouI%ako z+BLDc);pOFMFM=xuFi*nCyf;G0cF zYSG=(^UYJ({Z-!iGlLe~*AZ^2(cqVRCq^JDczz&~+Tad6t{R0_af1j_1S${Bre z?E|C62%^K3OIvwChQ1RA7ZXmjh}Javk|-7?9Cx}{dGwu>?n11FyE!w^mvn>t-%rWJ zGlQ@tMPD*5kg4BQ2oHTHGID6>|9|-Z|K|T+Pn+M2;aQ%de6KnGVSxYCl(ZDf<(~%s EFB{C19{>OV diff --git a/leaderboards/codemmlu/index.html b/leaderboards/codemmlu/index.html index 2fe6ac3..38b8eae 100644 --- a/leaderboards/codemmlu/index.html +++ b/leaderboards/codemmlu/index.html @@ -1,747 +1,21 @@ - - - - - - - - - - - - - - CodeMMLU Leaderboard - - - - - - - - - -
    -

    CodeMMLU Leaderboard

    -

    - -

    A Multi-Task Benchmark for Assessing Code Understanding Capabilities of CodeLLMs

    -
    -

    -
    - blog - leaderboard - data -
    -
    - github - paper -
    -
    - - - -
    -
    - - -
    -
    -
    -
    -
    -
    -
    -
    -

    📝 Notes

    -
    -
      -
    1. - Evaluated using - CodeMMLU -
    2. -
    3. - Models are ranked according to Accuracy using greedy decoding (see the minimal ranking sketch after these notes). -
    4. - - - -
    5. - "Size" here is the number of model parameters activated during inference. -
    6. -
    -
    -
    -
    -
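    For reference, the ordering rule in note 3 can be reproduced offline along the lines below. This is a minimal sketch only: the results.json path and the "name", "accuracy", and "size" field names are assumptions made for this example, not the leaderboard's actual schema.

    import json

    # Minimal sketch of the ranking rule in note 3: sort entries by Accuracy,
    # highest first. The file path and the "name"/"accuracy"/"size" keys are
    # illustrative assumptions, not the real results schema.
    with open("results.json") as f:
        entries = json.load(f)  # assumed: a list of per-model dicts

    ranked = sorted(entries, key=lambda e: e.get("accuracy", float("-inf")), reverse=True)

    for rank, entry in enumerate(ranked, start=1):
        size = entry.get("size")  # activated parameters; may be null for API-only models
        size_str = f"{size}B" if size is not None else "N/A"
        print(f"{rank:>3}. {entry.get('name', 'unknown'):<40} acc={entry.get('accuracy')} size={size_str}")

    The sketch simply pushes unscored entries to the bottom; ties would follow whatever convention the live page applies.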

    🤗 More Leaderboards

    - In addition to the CodeMMLU leaderboard, we recommend building a comprehensive picture of LLM coding ability from a diverse set of benchmarks and leaderboards, such as: -
    - -
    -

    🙏 Acknowledgements

    -
    -
      -
    • - We thank the EvalPlus and BigCode teams for providing the leaderboard template. -
    • -
    -
    -
    +
    +

    CodeMMLU Leaderboard

    + + + + 🏆 Leaderboard + + +
    +
    +
    +
    - - - - \ No newline at end of file +
    \ No newline at end of file