MIT Debugging Program Recycles Code For A Good Cause

An MIT program that can find and repair vulnerabilities by looking for more secure code and borrowing from it holds a lot of promise. But will we forget how to code in the process?

David Wagner, Executive Editor, Community & IT Life

July 8, 2015

4 Min Read
(Image: Tom Morris via Wikimedia Commons)


MIT has presented a new system, called CodePhage, that finds and repairs vulnerabilities in a piece of software. CodePhage promises faster development, more secure software, and a lot of convenience. But does that convenience come at a cost?

CodePhage starts with a known bug in a piece of software and looks at how other programs have already handled the same problem. The donor program doesn't have to be written in the same language, and CodePhage doesn't need to see the donor's source code to make the change. Instead, it observes the donor's execution during a security check and sees how the donor handles the very input the original code couldn't.

CodePhage does this using two inputs: a "safe" input the original program handles correctly, and a "bug" input that triggers the flaw. First it feeds the safe input to the donor program and, by watching the execution, records a set of logical constraints describing the checks the donor performs. Then it feeds in the bug input. If the donor can handle it, CodePhage analyzes the difference between the two executions and derives a second set of constraints. Finally, it returns to the recipient code, finds where those constraints aren't enforced, and writes new code from scratch to enforce them. That new code may look nothing like the donor's, and it doesn't even have to be in the same language. CodePhage then re-tests the bug input; if the program still fails, it goes back and looks for constraints it missed.
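To make the steps above concrete, here is a hypothetical sketch of that repair loop in Python. It is an illustration of the idea, not MIT's actual implementation: a "program" is modeled as a set of named guard predicates plus a body, `trace_guards` stands in for CodePhage's constraint extraction, and the grafting step simply copies the distinguishing guards into the recipient.

```python
# Hypothetical model: a program is a dict of named guard checks plus a body.
# All names here are illustrative; CodePhage itself works on binaries.

def trace_guards(program, value):
    """Return the names of the guards that reject this input
    (a stand-in for recording logical constraints during execution)."""
    return {name for name, guard in program["guards"] if not guard(value)}

def runs_safely(program, value):
    """A program is 'safe' on an input if it either rejects it cleanly
    or executes the body without crashing."""
    if trace_guards(program, value):
        return True                 # input rejected cleanly: no crash
    try:
        program["body"](value)
        return True
    except Exception:
        return False                # the bug input got through and crashed

def codephage_repair(recipient, donor, safe_input, bug_input):
    """Compare the donor's behavior on the safe and bug inputs, and graft
    the distinguishing checks into the recipient."""
    assert runs_safely(donor, bug_input), "donor must handle the bug input"
    # Guards that fire on the bug input but not the safe input are the
    # checks that let the donor survive it.
    distinguishing = trace_guards(donor, bug_input) - trace_guards(donor, safe_input)
    grafted = [g for g in donor["guards"] if g[0] in distinguishing]
    patched = {"guards": recipient["guards"] + grafted,
               "body": recipient["body"]}
    return patched if runs_safely(patched, bug_input) else None

buf = [10, 20, 30]
recipient = {"guards": [], "body": lambda i: buf[i]}   # no bounds check
donor = {"guards": [("in_bounds", lambda i: 0 <= i < len(buf))],
         "body": lambda i: buf[i]}

patched = codephage_repair(recipient, donor, safe_input=1, bug_input=7)
print(runs_safely(recipient, 7))   # False: original crashes on index 7
print(runs_safely(patched, 7))     # True: grafted check rejects it
```

The real system, of course, extracts symbolic constraints from binary execution traces rather than copying predicates between dictionaries, but the shape of the loop is the same: observe, diff, graft, re-test.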

The amazing part (as if the first part weren't amazing enough) is that the whole process takes only between two and ten minutes.

Security checks can account for as much as 80% of the code in some programs, so finding and fixing vulnerabilities is a huge part of any development process. CodePhage aims to eliminate that "grunt work" for developers.
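That "grunt work" is the defensive scaffolding wrapped around the actual logic. A hypothetical example (the function and its checks are illustrative, not from any real codebase) of how the guard code can dwarf the work a function actually does:

```python
def read_record(buf: bytes, offset: int, length: int) -> bytes:
    """Extract a record from a buffer - mostly defensive checks,
    the kind of repetitive guard code CodePhage aims to supply."""
    if not isinstance(buf, (bytes, bytearray)):
        raise TypeError("buf must be bytes")
    if offset < 0 or length < 0:
        raise ValueError("offset and length must be non-negative")
    if offset + length > len(buf):
        raise ValueError("record extends past end of buffer")
    # The actual work: one line.
    return buf[offset:offset + length]

print(read_record(b"headerpayload", 6, 7))   # b'payload'
```

Leave out any one of those guards and a malformed input slips through; multiply that across every function that touches untrusted data, and the 80% figure starts to look plausible.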

"The longer-term vision is that you never have to write a piece of code that somebody else has written before," MIT professor Martin Rinard said in an article for MIT News. "The system finds that piece of code and automatically puts it together with whatever pieces of code you need to make your program work."

Sounds great. It sounds like a way to get rid of zero-day vulnerabilities, make coding faster, save money, and let everyone live a more secure and happy existence.

[Now imagine debugging a ship in space. Read Rosetta Mission: Debugging a Comet Landing.]

One can't help but question the long-term vision of never having to write code someone has already written before. There are two issues I can't reconcile.

The first is that even though CodePhage writes its own code, I can't help feeling this is stealing. Yes, it seeks out open source code, so it isn't stealing in a monetary sense. But one wonders whether programming a system to seek out existing solutions to coding problems isn't a bit like an artist taking the smile from the Mona Lisa and the soup can from Andy Warhol and the pose of the statue of David, sticking them together, and calling it art. Is there a point where an elegant solution, even an open source one, and even with CodePhage writing its own code, deserves more credit than that?

I'm no programmer, so I'll leave that one to the developers to decide.

But here is my other problem. Isn't this sort of like erasing our memory of how we got here? If I can write a program with dozens of vulnerabilities I don't know how to patch, then run CodePhage and have it patch them for me, what did I learn? How will I learn to patch vulnerabilities no one has ever patched before if I never learn to patch the ones I've already got? It almost becomes a sort of developer's groupthink. By letting CodePhage do the "hard part," we never learn to do the hard part. What happens years from now, when the only people who know how to do the hard part are the ones hunting for vulnerabilities we haven't learned to fix yet?

It is similar to the way very few humans know how to make fire anymore. That's no problem in a world of matches, electric light, and ovens, but the second you are in the wilderness, you are done for.

CodePhage is clearly a winning idea, and as it matures it can make the world more secure. We only need to be sure we run it not because we are lazy, but because we've already put in the work. Used that way, it promises a world that's a little more secure, and that's a good thing. If we get lazy, we may find in the long run that we are a lot less secure, and that is, obviously, a bad thing.

About the Author(s)

David Wagner

Executive Editor, Community & IT Life

David has been writing on business and technology for over 10 years and was most recently a Managing Editor. Before that he was an Assistant Editor at MIT Sloan Management Review, where he covered a wide range of business topics including IT, leadership, and innovation. He has also been a freelance writer for many top consulting firms and academics in the business and technology sectors. Born in Silver Spring, Md., he grew up doodling on the back of used punch cards from the data center his father ran for over 25 years. In his spare time, he loses golf balls (and occasionally puts one in a hole), posts too often on Facebook, and teaches his two kids to take the zombie apocalypse just a little too seriously.
