[This is the second part of a two-post series. The first part is here.]
Recently, I talked about using program verifiers to teach proof techniques in undergraduate CS, and about my own experience using the Dafny verifier in an algorithms class. One thing missing from that post was a student perspective. So I asked three of my students — Julie Eisenberg, Rebecca Smith, and Matthew Dunn-Rankin — what they thought of Dafny. In this post, I summarize their insightful feedback and ponder its implications.
The takeaway message for me is that verification tools such as Dafny are an excellent way of introducing students to formal proofs. At the same time, some care is needed as we integrate these tools into our pedagogy. In particular, Dafny's automated reasoning capabilities aren't an unadulterated blessing, and we must make sure that students also understand how to carry out formal proofs fully manually. We ought also to invest more effort in tools that make the proof process more productive, for example by giving users more feedback on why an attempt at a formal proof failed. Continue reading
This post continues our ongoing series on young PL researchers who are about to start independent research positions in academia and research labs. This week, we feature Ravi Chugh, who is starting as an assistant professor at the University of Chicago in the Fall. Continue reading
I am in the process of putting together a MOOC on software security, which goes live in October. At the moment I’m finishing up material on buffer overflows, format string attacks, and other sorts of vulnerabilities in C. After presenting this material, I plan to step back and say, “What do these errors have in common? They are violations of memory safety.” Then I’ll state the definition of memory safety, say why these vulnerabilities are violations of memory safety, and conversely say why memory safety, e.g., as ensured by languages like Java, prevents them.
No problem, right? Memory safety is a common technical term, so I expected its definition would be easy to find (or derive). But it’s much trickier than I thought.
My goal with this post is to work out a definition of memory safety for C that is semantically clean, rules out code that seems intuitively unsafe, but does not rule out code that seems reasonable. The simpler, and more complete, the definition, the better. My final definition is based on the notion of defined/undefined memory and the use of pointers as capabilities. If you have better ideas, I’d love to know them!
[This is the first part of a two-post series. The second part is here.]
In previous posts on this blog, I have talked about how research on programming languages and tools can contribute to K-16 education. In this post, I will share with you some of my experiences while trying to use a PL tool — specifically, a semi-automated program verifier — in an undergraduate algorithms class. The experience was a success in many ways. A program verifier gives students a hands-on understanding of how algorithms are mathematical objects, and verification tools are now mature enough for productive classroom use. I think every CS program would benefit students by introducing them to these tools at some point. At the same time, the experience exposed me to some serious limitations of current-day program verification, and opened up a number of directions for new research in this area. Continue reading
Our “New Scientists on the Block” series features up-and-coming PL researchers who are about to begin independent research careers in universities or research laboratories. This week, we interview Aws Albarghouthi, who is starting as an assistant professor at the University of Wisconsin, Madison, in January 2015.
In a funny coincidence following my last post on language adoption, the IEEE Spectrum in my mailbox today contained a pointer to IEEE’s version of the top N languages.
They use a variety of data sources, some of which overlap with the SocioPLT's, e.g., counting projects on GitHub. Some sources are different, e.g., using Google and Twitter to count occurrences of “X programming” for a language X. The site lets you create your own rankings by weighting the various data sources differently, e.g., focusing only on GitHub projects or increasing the weight of Google Trends.
Comparing the IEEE top twenty with the SocioPLT top twenty shows some interesting differences.
In my last article I discussed how the failure to find the Heartbleed bug sooner was in some sense a failure to refine or deploy what is otherwise effective technology for static analysis. In particular, commercial static analysis tools purposely ignore potential bugs to avoid reporting too many false alarms, i.e., favoring completeness over soundness. The companies that make these tools aim to provide a profitable service to a broad market, and their own investigations indicate soundness is not important for sales. Instead, to be viable, tools must help developers find real, important bugs efficiently, and not necessarily every bug. A challenge to researchers is to find ways to push the business proposition back toward soundness while retaining efficiency (and other desirable criteria); Andy Chou’s POPL’14 keynote outlines other useful challenges.
While my last article used Heartbleed to discuss the adoption and improvement of static analysis, in this article I explore the related question of fostering the adoption of programming languages. I summarize impressive research by Leo Meyerovich and Ariel Rabkin on adoption research questions and adoption practices that appeared at OOPSLA’12 and OOPSLA’13, respectively. I think there are some interesting results here, with implications for improving the adoption of languages. Their results also raise new questions for further research (but too late for yesterday’s POPL deadline — good luck to all submitters!).
In this post, I’ll continue our ongoing discussion of applications of PL research in computer-assisted education. Specifically, I’ll summarize a talk that Loris D’Antoni of Penn gave at this year’s Workshop on Programming Language Technology for Massive Open Online Courses (PLOOC). I was intrigued by this work, and I think a lot of you may be too.
It was recently reported that the Heartbleed OpenSSL bug has still not been patched on over 300,000 servers. This is down from the 600,000 discovered two months ago, when the bug was first publicized. Add to this delay the nearly two years that the bug went undetected, and that’s a lot of exposure.
In this blog post, I wonder about how programming languages and tools might have played a role in finding Heartbleed before it got into production code. Then I explore possible approaches to improving this state of affairs. I welcome your thoughts and ideas! Continue reading