CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution • arXiv:2401.03065 • Published Jan 5, 2024
Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation • arXiv:2305.01210 • Published May 2, 2023
AGIBench: A Multi-granularity, Multimodal, Human-referenced, Auto-scoring Benchmark for Large Language Models • arXiv:2309.06495 • Published Sep 5, 2023
MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI • arXiv:2311.16502 • Published Nov 27, 2023
PromptBench: A Unified Library for Evaluation of Large Language Models • arXiv:2312.07910 • Published Dec 13, 2023
Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting • arXiv:2310.11324 • Published Oct 17, 2023
When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards • arXiv:2402.01781 • Published Feb 1, 2024
VBench: Comprehensive Benchmark Suite for Video Generative Models • arXiv:2311.17982 • Published Nov 29, 2023
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal • arXiv:2402.04249 • Published Feb 6, 2024
OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models • arXiv:2402.06044 • Published Feb 8, 2024
G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment • arXiv:2303.16634 • Published Mar 29, 2023
LLM Comparator: Visual Analytics for Side-by-Side Evaluation of Large Language Models • arXiv:2402.10524 • Published Feb 16, 2024
Mind Your Format: Towards Consistent Evaluation of In-Context Learning Improvements • arXiv:2401.06766 • Published Jan 12, 2024
Beyond Probabilities: Unveiling the Misalignment in Evaluating Large Language Models • arXiv:2402.13887 • Published Feb 21, 2024
Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap • arXiv:2402.19450 • Published Feb 29, 2024
Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference • arXiv:2403.04132 • Published Mar 7, 2024
LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code • arXiv:2403.07974 • Published Mar 12, 2024
Replacing Judges with Juries: Evaluating LLM Generations with a Panel of Diverse Models • arXiv:2404.18796 • Published Apr 29, 2024
WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild • arXiv:2406.04770 • Published Jun 7, 2024
JP-TL-Bench: Anchored Pairwise LLM Evaluation for Bidirectional Japanese-English Translation • arXiv:2601.00223 • Published 5 days ago