Monthly Archives: October 2014

Is POPL really one research community?

I was recently in Princeton for the program committee meeting of the POPL conference. It was a lot of fun. David Walker, the program chair, offered excellent leadership, and I am excited about the program that we ended up selecting. I look forward to seeing many of you at the conference (Mumbai, January 2015).

POPL is a broad conference, and you really feel this when you attend its PC meeting. You inevitably discuss papers with fellow PC members whose backgrounds are very different from your own, and many of the papers under discussion use techniques about which you have only rudimentary knowledge.

One thing I kept wondering at the meeting was: is POPL really one research community? Or is it instead a union of disjoint sets of researchers who work on different themes within POPL, such as types, denotational semantics, or abstract interpretation? Perhaps researchers in these sub-communities don’t really work with each other, even if they share a vision of reliable software and productive programming.

The question was bugging me enough that I decided to try to answer it through an analysis of actual data. The results I found were intriguing. The takeaway seems to be that POPL is indeed one family, but not a particularly close one.


Built, Broken, Fixed: BIBIFI Security Contest Report

Earlier in the summer I discussed a security-oriented programming contest we were planning to run, called Build-it, Break-it, Fix-it (BIBIFI). The contest concluded about a week ago, and the winners are now posted on the contest site, https://builditbreakit.org.

Here I present a preliminary report of how the contest went. In short: well!

Of the 20 or so teams that made an attempt, nearly a dozen produced qualifying submissions, written in a variety of languages: the winners programmed in Python and Haskell, while other submissions were in C/C++, Go, and Java (with one non-qualifying submission in Ruby). Scoring was based on security, correctness, and performance (as in the real world!), and in the end the first two mattered most: teams found many bugs in the qualifying submissions, and at least one team scored near the top until other teams discovered that its program paid little attention to security.

We still have much data analysis to do to understand more about what happened and why. If, after reading this report, you have scientific questions you think we should investigate, I’d love to hear them. In the end, I think the contest succeeded in emphasizing that security is not just about breaking things, but also about building them correctly.
