/Alexander_Shypuλa/

Researcher @ MIT CSAIL

My name is Alex and I’m a researcher interested in neural program synthesis, applications of machine learning to software engineering, and related areas like neurosymbolic AI and natural language processing.

I recently finished my Master’s at Carnegie Mellon University’s (CMU) School of Computer Science in the Artificial Intelligence and Innovation program, where I was a member of Neulab, advised by Professor Graham Neubig.

I am currently working at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) with Yoon Kim (MIT CSAIL) and Jie Chen (MIT-IBM Watson AI Lab).


~/Publications

Shypula A, Yin P, Lacomis J, Le Goues C, Schwartz E, Neubig G. “Learning to Superoptimize Real-world Programs.” arXiv preprint arXiv:2109.13498 (2021).

~/CV and Contact

If you would like the most recent copy of my CV, you can locate it here.

You can reach me at shypula 👨‍💻 mit 🏔️ edu.

~/Background

The spark for me came when I was writing a malloc implementation for CMU’s introductory systems course. It was highly error-prone, it required careful programming skill, and its performance relied on increasingly complex data structures, from implicit lists to linked lists to binned linked lists. Nevertheless, we had a large test suite that would estimate the performance of our storage allocator and provide feedback, much like an RL environment. I wondered: could an artificially intelligent agent learn to write code like this from scratch, rediscover classic data structures, and teach us something new? I found this compelling because the agent would have to write programs that explain its choices, making it more interpretable than, say, a black-box neural network.

The path to superintelligent programming machines is challenging, because the discrete search space of programs is huge and sparse (due to the brittleness of syntax and the sensitivity of semantics). On the path there, however, there are plenty of fascinating and useful problems to explore in neural program synthesis, neural program modeling, and software engineering.

Before my research career, I majored in Business at NYU’s Stern School of Business and studied Chinese. I loved learning Chinese because, for some reason, as a kid I thought it required some kind of superintelligence to learn: it taught me that discipline, a love of learning, and time are truly the ingredients to master anything, not some kind of innate intelligence. It was while working at IBM, seeing the potential of deep learning to change our daily lives, from how we write programs and solve problems to how we run drive-thrus, that I made the leap to devote my life to AI research and its applications. It has taken a lot of work, but I haven’t looked back once.

~/Other

Besides CS research, I like walking up big, tall, snowy things: some mountains on my list are (in increasing altitude) Mt. Rainier, Denali, Aconcagua, and Cho Oyu. Here are some photos of me doing a (very) stochastic gradient ascent on a practice rescue line in a crevasse on Mt. Baker.

I also absolutely love coffee from all over the world, especially Panama and East Africa.