A Scalable and Extensible Approach to Benchmarking NL2Code for 18 Programming Languages

Federico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna Phipps-Costin, Donald Pinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane Anderson, Molly Q Feldman, Arjun Guha, Michael Greenberg, Abhinav Jangda
2022

Large language models have demonstrated the ability to condition on and generate both natural language and programming language text. Such models open up the possibility of multi-language code generation: could code generation models generalize knowledge from one language to another? Although contemporary code generation models can generate semantically correct Python code, little is known about their abilities with other languages. We facilitate the exploration of this topic by proposing MultiPL-E, the first multi-language parallel benchmark for natural-language-to-code generation.

MultiPL-E extends the HumanEval benchmark (Chen et al., 2021) to support 18 more programming languages, encompassing a range of programming paradigms and popularity. We evaluate two state-of-the-art code generation models on MultiPL-E: Codex and InCoder. We find that, on several languages, Codex matches or even exceeds its performance on Python. The range of programming languages represented in MultiPL-E allows us to explore the impact of language frequency and language features on model performance. Finally, the MultiPL-E approach of compiling code generation benchmarks to new programming languages is both scalable and extensible. We describe a general approach for easily adding support for new benchmarks and languages to MultiPL-E.

PDF available on arXiv
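To give a flavor of what "compiling" a benchmark to a new language involves, here is a minimal illustrative sketch (not MultiPL-E's actual implementation): a Python-typed function signature from a HumanEval-style prompt is mechanically rendered as a function header in another language, here TypeScript. The type mapping and prompt shape are assumptions for illustration only.

```python
# Hypothetical sketch of benchmark-to-language compilation: map a
# Python-typed signature to a TypeScript function header that a code
# generation model could then complete. The type table is an assumption.
PY_TO_TS_TYPES = {"int": "number", "str": "string", "bool": "boolean"}

def translate_signature(name, params, ret):
    """Render a Python-typed signature as a TypeScript function header.

    params is a list of (parameter_name, python_type) pairs; ret is the
    Python return type. All types must appear in PY_TO_TS_TYPES.
    """
    args = ", ".join(f"{p}: {PY_TO_TS_TYPES[t]}" for p, t in params)
    return f"function {name}({args}): {PY_TO_TS_TYPES[ret]} {{"

# Example: the Python signature `def add(x: int, y: int) -> int:` becomes
# a TypeScript prompt header.
print(translate_signature("add", [("x", "int"), ("y", "int")], "int"))
# function add(x: number, y: number): number {
```

A real translator must also handle compound types, doctest-style examples, and per-language test harnesses; the paper describes a compiler-based pipeline for doing this systematically across all supported languages.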

  url = {https://arxiv.org/abs/2208.08227},
  author = {Cassano, Federico and Gouwar, John and Nguyen, Daniel and Nguyen, Sydney and
            Phipps-Costin, Luna and Pinckney, Donald and Yee, Ming-Ho and Zi, Yangtian and
            Anderson, Carolyn Jane and Feldman, Molly Q and Guha, Arjun and
            Greenberg, Michael and Jangda, Abhinav},
  title = {A Scalable and Extensible Approach to Benchmarking NL2Code for 18 Programming Languages},
  publisher = {arXiv},
  year = {2022},