SWE-bench

Can Language Models Resolve Real-World GitHub Issues?

ICLR 2024

Carlos E. Jimenez*, John Yang*,
Alexander Wettig, Shunyu Yao, Kexin Pei,
Ofir Press, Karthik Narasimhan

🎉 Check out our latest work, SWE-agent, which achieves a state-of-the-art 12.47% resolve rate on SWE-bench!

Leaderboard

| Model | % Resolved | Date | Logs | Trajs | Verified? |
|---|---|---|---|---|---|
| SWE-agent + GPT 4 | 12.47 | 2024-04-02 | 🔗 | 🔗 | ✓ |
| RAG + Claude 3 Opus | 3.79 | 2024-04-02 | 🔗 | - | ✓ |
| RAG + Claude 2 | 1.96 | 2023-10-10 | 🔗 | - | ✓ |
| RAG + GPT 4 | 1.31 | 2024-04-02 | 🔗 | - | ✓ |
| RAG + SWE-Llama 13B | 0.70 | 2023-10-10 | 🔗 | - | ✓ |
| RAG + SWE-Llama 7B | 0.70 | 2023-10-10 | 🔗 | - | ✓ |
| RAG + ChatGPT 3.5 | 0.17 | 2023-10-10 | 🔗 | - | ✓ |

The % Resolved metric is the percentage of the 2,294 SWE-bench task instances that the model resolved.
Submissions: Please follow the instructions and add your results to the SWE-bench/experiments repository for consideration.

Leaderboard (Lite)

SWE-bench Lite is a subset of SWE-bench that's been curated to make evaluation less costly and more accessible. If you'd like to learn more, please read our blog post.

| Model | % Resolved | Date | Logs | Trajs | Verified? |
|---|---|---|---|---|---|
| SWE-agent + GPT 4 | 18.00 | 2024-04-02 | 🔗 | 🔗 | ✓ |
| SWE-agent + Claude 3 Opus | 11.67 | 2024-04-02 | 🔗 | 🔗 | ✓ |
| RAG + Claude 3 Opus | 4.33 | 2024-04-02 | 🔗 | - | ✓ |
| RAG + Claude 2 | 3.00 | 2023-10-10 | 🔗 | - | ✓ |
| RAG + GPT 4 | 2.67 | 2024-04-02 | 🔗 | - | ✓ |
| RAG + SWE-Llama 7B | 1.33 | 2023-10-10 | 🔗 | - | ✓ |
| RAG + SWE-Llama 13B | 1.00 | 2023-10-10 | 🔗 | - | ✓ |
| RAG + ChatGPT 3.5 | 0.33 | 2023-10-10 | 🔗 | - | ✓ |

Resources

You can download the SWE-bench task instances from HuggingFace or directly as a JSON file (development, test sets). For convenience, we also provide five pre-processed datasets at different retrieval settings ("Oracle", 13K, 27K, 40K, and 50K "Llama") for fine-tuning your own model for evaluation on SWE-bench. We recommend the 13K, 27K, or 40K datasets for evaluation; the 50K "Llama" dataset is provided for reproducing the results of the SWE-bench paper.

SWE-bench Lite is also available for download from HuggingFace.
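As a minimal sketch, the datasets can be loaded with the HuggingFace `datasets` library. The dataset identifiers below are assumptions inferred from the links above; check the HuggingFace hub for the exact names.

```python
# Minimal sketch: loading SWE-bench with the HuggingFace `datasets` library.
# The dataset identifiers below are assumptions inferred from the links above;
# check the HuggingFace hub for the exact names.
from datasets import load_dataset

# Full benchmark: the test split holds the 2,294 evaluation task instances.
swe_bench = load_dataset("princeton-nlp/SWE-bench", split="test")
print(len(swe_bench), swe_bench.column_names)

# SWE-bench Lite: the curated, cheaper-to-evaluate subset.
swe_bench_lite = load_dataset("princeton-nlp/SWE-bench_Lite", split="test")

# A pre-processed retrieval setting (e.g. 13K-token BM25 contexts) for fine-tuning.
swe_bench_13k = load_dataset("princeton-nlp/SWE-bench_bm25_13K", split="test")
```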

We also provide the full SWE-Llama model weights in 13B and 7B parameter sizes, along with their PEFT LoRA weights.
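A rough sketch of how the weights might be loaded with `transformers` and `peft` is shown below; the repository identifiers and the CodeLlama base used here are assumptions, so follow the links above for the exact names.

```python
# Sketch only: the repository identifiers below are assumptions; check the
# linked HuggingFace model cards for the exact names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Option A: load the full, already-merged SWE-Llama weights.
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/SWE-Llama-13b")
model = AutoModelForCausalLM.from_pretrained(
    "princeton-nlp/SWE-Llama-13b",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Option B: apply the released PEFT LoRA adapter on top of a CodeLlama base
# (both identifiers are assumptions; check the model cards).
base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-13b-Python-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "princeton-nlp/SWE-Llama-13b-peft")
```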

About

SWE-bench is a dataset that tests systems' ability to automatically resolve real-world GitHub issues. It comprises 2,294 issue-pull request pairs collected from 12 popular Python repositories. Evaluation is performed by unit test verification, using post-PR behavior as the reference solution. Read more about SWE-bench in our paper!
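To make the setup concrete, here is a minimal sketch of what a single task instance contains and how resolution is judged. The field names (problem_statement, patch, test_patch, FAIL_TO_PASS, PASS_TO_PASS) follow the released dataset schema and should be treated as assumptions; consult the dataset card if they differ.

```python
# Minimal sketch: inspecting one SWE-bench task instance and the unit-test
# criterion used for evaluation. Field names are assumptions based on the
# released dataset schema; consult the dataset card if they differ.
import json
from datasets import load_dataset

instance = load_dataset("princeton-nlp/SWE-bench", split="test")[0]

print(instance["repo"])               # source repository of the issue
print(instance["problem_statement"])  # the GitHub issue text shown to the model
# instance["patch"]      -> the reference fix from the merged pull request
# instance["test_patch"] -> the unit tests added/updated by that pull request

def as_list(field):
    # FAIL_TO_PASS / PASS_TO_PASS are JSON-encoded lists of test identifiers
    # in the released dataset; handle either representation defensively.
    return field if isinstance(field, list) else json.loads(field)

# A generated patch "resolves" the instance if, once applied, every
# FAIL_TO_PASS test now passes and every PASS_TO_PASS test still passes.
fail_to_pass = as_list(instance["FAIL_TO_PASS"])
pass_to_pass = as_list(instance["PASS_TO_PASS"])
print(f"{len(fail_to_pass)} tests must flip to passing; {len(pass_to_pass)} must keep passing")
```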

Citation:

@inproceedings{
    jimenez2024swebench,
    title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
    author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=VTF8yNQM66}
}

Disclaimer: SWE-bench is for research purposes only. Models trained and evaluated on SWE-bench can produce unexpected results. We are not responsible for any damages caused by the use of SWE-bench, including but not limited to, any loss of profit, data, or use of data.

Correspondence to: {carlosej,jy1682}@princeton.edu