/Alexander_Shypula/

PhD Student @ UPenn

 
My name is Alex and I’m a researcher interested in applications of machine learning to software engineering and program synthesis. Most of my work to date has focused on AI for program optimization, leveraging tools from program analysis, compilers, and architecture research.

I’m currently a second-year PhD student at the University of Pennsylvania, where I’m advised by Osbert Bastani. Before that, I spent a year working at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) with Yoon Kim (MIT CSAIL) and Jie Chen (MIT-IBM Watson AI Lab). Before that, I was a Master’s student in the Artificial Intelligence and Innovation program at Carnegie Mellon University’s (CMU) School of Computer Science, where I was a member of NeuLab, advised by Professor Graham Neubig.


~/Publications

Shypula A*, Madaan A*, Zeng Y, Alon U, Gardner J, Hashemi M, Neubig G, Ranganathan P, Bastani O, Yazdanbakhsh A. “Learning Performance-Improving Code Edits.” Under Review, ICLR 2024.

Shypula A, Yin P, Lacomis J, Le Goues C, Schwartz E, Neubig G. “Learning to Superoptimize Real-world Programs.” Best Paper, ICLR 2022 Deep Learning for Code Workshop.

*Equal contribution

~/CV and Contact

You can reach me at shypula 👨‍💻 seas ☕ upenn 🚴 edu.

~/Background

I’ve been interested in improving the performance of deep learning models for code since the first systems course I took. AI for programming interests me because of the unique feedback loops it offers, as well as the ability to analyze and execute code, which simply isn’t available in natural language. Because of this, I hope research focuses not only on how to imitate human programmers, but on how AI models can teach us to program and reason better, much as AlphaGo did with move 37.

I’ve been inspired by work on program superoptimization, program analysis, and all the exciting work on RL over the years. With regard to exciting directions in program synthesis, I love this paper for pointing out the potential for progress in this direction. The general problems I think are important to work on now fall into two buckets: Progress (better AI programming assistants, writing unit tests, writing integration tests, software testing, program optimization, understanding generated code, debugging programs, etc.) and Safety (reducing the risks of LLMs being used by malign actors, reducing the risks of producing unsafe code, etc.).

~/Other

I am currently really into specialty coffee, and I’m always looking to connect with people over it; hit me up if you want to chat about coffee! I’ve even recently begun roasting coffee to learn more about the process.