What is soundness (in static analysis)?

The PLUM reading group recently discussed the paper DR. CHECKER: A Soundy Analysis for Linux Kernel Drivers, which appeared at USENIX Security'17. This paper presents an automatic program analysis (a static analysis) for Linux device drivers that aims to discover instances of a class of security-relevant bugs. The paper is insistent that a big reason for DR. CHECKER's success (it finds a number of new bugs, several of which have been acknowledged to be true vulnerabilities) is that the analysis is soundy, as opposed to sound. Many of the reading group students wondered: What do these terms mean, and why might soundiness be better than soundness?

To answer this question, we need to step back and talk about various other terms also used to describe a static analysis, such as completeness, precision, recall, and more. These terms are not always used in a consistent way, and they can be confusing. The value of an analysis being sound, complete, or soundy is also debatable. This post presents my understanding of the definitions of these terms, and considers how they may (or may not) be useful for characterizing a static analysis. One interesting conclusion to draw from understanding the terms is that we need good benchmark suites for evaluating static analyses; my impression is that, as of now, there are few good options.
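
To make these terms concrete, here is a minimal sketch of my own, in Python, of a "soundy" taint checker; it illustrates the idea and is not DR. CHECKER's algorithm. It reasons soundly about the statements it models but deliberately assumes nothing bad about the ones it does not, trading possible missed bugs (false negatives) for fewer false alarms (false positives).

```python
# A toy "soundy" taint checker (illustrative only). It handles simple
# assignments and dereferences soundly, but deliberately punts on
# constructs it cannot model: calls to unknown functions.

TAINT_SOURCES = {"read_user_input"}  # functions returning attacker data

def analyze(stmts):
    """stmts: ('assign', dst, src), ('call', dst, fn), or ('deref', var)."""
    tainted, warnings = set(), []
    for stmt in stmts:
        if stmt[0] == "assign":
            _, dst, src = stmt
            if src in tainted:
                tainted.add(dst)
            else:
                tainted.discard(dst)
        elif stmt[0] == "call":
            _, dst, fn = stmt
            if fn in TAINT_SOURCES:
                tainted.add(dst)
            else:
                # Soundy choice: assume unknown calls return clean data.
                # A sound analysis would conservatively mark dst tainted.
                tainted.discard(dst)
        elif stmt[0] == "deref" and stmt[1] in tainted:
            warnings.append("tainted dereference of " + stmt[1])
    return warnings

print(analyze([
    ("call", "p", "read_user_input"),  # p holds attacker-controlled data
    ("deref", "p"),                    # reported: a true positive
    ("call", "q", "mystery_fn"),       # unknown call, assumed clean
    ("deref", "q"),                    # missed: a possible false negative
]))
```

On this input the checker flags the first dereference and silently misses the second; a sound analysis would report both (at the cost of more false positives on other programs), which is exactly the trade-off the terms above try to capture.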


What does the SIGPLAN EC do?

I’ve served as Chair of ACM SIGPLAN for the last two years. It’s been a pleasure and a privilege to support the programming languages community, working with my fellow members on the SIGPLAN Executive Committee (EC). The current SIGPLAN EC is entering its third and final year of service. Elections for the next EC will be held in early 2018, and the newly elected members will begin serving in July of that year. Who will they be?

In this blog post I describe, in Q&A format, the activities and responsibilities of the SIGPLAN EC and its officers. My hope is that this post will inform possible volunteers about what they can expect to do if elected to the EC, and will help voters match candidates' aptitudes to each position's responsibilities. This post will also highlight some of the accomplishments of the current and past ECs, hopefully giving the community an idea of what we've been up to on their behalf.


Jean Sammet, a Remembrance

This past weekend, trailblazing computer scientist Jean Sammet passed away at the age of 88. I learned this news from emeritus colleague Marv Zelkowitz, who lives in the same retirement community where Jean lived. He saw the notice of her passing over the weekend on a community bulletin board. (There's an interesting story about this; see the end of this post!)

Jean Sammet visiting UMD in the late 1970s. Photo by Ben Shneiderman


Measuring Single vs. Double-blind Reviewing

My colleague Emery Berger recently pointed me to the paper Single versus Double Blind Reviewing at WSDM 2017. This paper describes the results of a controlled experiment to test the impact of hiding authors’ identities during parts of the peer review process. The authors of the experiment—PC Chairs of the 2017 Web Search and Data Mining (WSDM’17) conference—examined the reviewing behavior of two sets of reviewers for the same papers submitted to the conference. They found that author identities were highly significant factors in a reviewer’s decision to recommend the paper be accepted to the conference. Both the fame of an author and the author’s affiliation were influential. Interestingly, whether the paper had a female author or not was not significant in recommendation decisions. [Update: a different look at the data found a penalty for female authors; see addendum to this post.]

Blinded scales of justice: fairness is blind

I find this study very interesting, and incredibly useful. Many people I have talked to have suggested that we scientifically compare single- with double-blind reviewing (SBR vs. DBR, for short). A common idea is to run one version of a conference as DBR and compare its outcomes to a past version of the conference that used SBR. The problem with this approach is that both the papers under review and the people reviewing them would change between conference iterations. These are potentially huge confounding factors. While the WSDM’17 study is not perfect, it gets past some of these big issues.

In the rest of the post I will summarize the details of the WSDM'17 study and offer some thoughts about its strengths and weaknesses. I think we should attempt more studies like this for other conferences.


POPL’17 in Paris: Some highlights

Last week I attended the 44th ACM SIGPLAN Symposium on Principles of Programming Languages (POPL 2017). The event was hosted at Paris 6, which is part of the Sorbonne, University of Paris. It was one of the best POPLs I can remember. The papers were both interesting and informative (you can read them all, for free, from the Open TOC), and the talks I attended were generally of very high quality. (Videos of the talks will be available soon; I will add links to this post.) Also, attendance hit an all-time high: more than 720 people registered for POPL and/or one of its co-located events.

In this blog post I will highlight a few of my favorite papers at this POPL, as well as the co-located N40AI event, which celebrated 40 years of abstract interpretation. Disclaimer: I do not have time to describe all of the great stuff I saw, and I could only see a fraction of the whole event. So just because I don’t mention something here doesn’t mean it isn’t equally great.[ref]I also attended PLMW just before POPL, and gave a talk. I may discuss that in another blog post.[/ref]


Gold Open Access publication: Is now the time?

While scientific papers were once available only to those willing to pay expensive fees to journal publishers, papers are now increasingly made available for free, as they enjoy some form of open access (OA). Not all forms of open access are the same, however. While the ACM SIGPLAN Executive Committee (of which I am the Chair) is generally happy with the OA rights SIGPLAN[ref]ACM SIGPLAN is the Association for Computing Machinery's Special Interest Group on Programming Languages.[/ref] authors currently enjoy, it may be time to push for even stronger rights. Reasonable people may disagree about the costs and benefits of such rights. As such, we would like your feedback (even if you are not a SIGPLAN member).

Please consider filling out this open access survey. It should take 5-10 minutes.

The remainder of this blog post discusses the issues in more depth. We invite your feedback!


PRNG entropy loss and noninterference

I just got back from CCS (ACM’s conference on Computer and Communications Security) in Vienna, Austria. I really enjoyed the conference and the city.

At the conference we presented our Build it, Break it, Fix it paper. The same session included a presentation of a really interesting paper, Practical Detection of Entropy Loss in Pseudo-Random Number Generators, by Felix Dörre and Vladimir Klebanov.[ref]You can download the paper for free from the conference's OpenTOC.[/ref] This paper presents a program analysis that aims to find entropy-compromising flaws in PRNG implementations; such flaws could make the PRNG's outputs more predictable ("entropy" is basically a synonym for "uncertainty"). When a PRNG is used to generate secrets like encryption keys, less entropy means those secrets can be more easily guessed by an attacker.

A PRNG can be viewed as a deterministic function from an actually-random seed to a pseudo-random output. The key insight of the work is that this function should not “lose entropy” — i.e., its outputs should contain all of the randomness present in its seed. Another way of saying this is that the PRNG must be injective. The injectivity of a function can be checked by an automatic program analysis.
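
As a toy illustration of that insight (a brute-force sketch of my own; the paper's analysis checks real PRNG implementations rather than enumerating seeds), non-injectivity can be witnessed by two seeds that map to the same output:

```python
# Entropy loss as non-injectivity (illustrative). The generator below
# is hypothetical and buggy on purpose: masking with 0xFF00 discards
# the seed's low byte, so distinct seeds collide.

def toy_prng(seed: int) -> int:
    return ((seed & 0xFF00) * 2654435761) & 0xFFFF

def injectivity_counterexample(f, seed_bits: int):
    seen = {}
    for s in range(1 << seed_bits):
        out = f(s)
        if out in seen:
            return seen[out], s  # two seeds, one output: entropy lost
        seen[out] = s
    return None  # injective over this seed space

print(injectivity_counterexample(toy_prng, 16))  # (0, 1): a collision
```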

Using the tool they developed, the paper's authors were able to find known bugs, as well as an important, previously unknown bug in libgcrypt's PRNG. Their approach was essentially a variation of prior work on checking noninterference; I found the connection to this property surprising and insightful. As discussed in a prior post, security properties can often be challenging to state and verify; in this case the property is pleasingly simple to state and practical to verify.

Both injectivity and noninterference are examples of K-safety properties, which involve relations between multiple (i.e., K) program executions. New technology to check such properties is being actively researched, e.g., in DARPA’s STAC program. This paper is another example of the utility of that general thrust of work, and of using programming languages techniques (automated program analysis) for proving security properties.
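
For intuition about why multiple executions are needed, here is a minimal sketch of my own of noninterference as a 2-safety check: fix the public input, vary the secret, and require that the public output not change.

```python
# Brute-force 2-safety check (illustrative): a program is
# noninterferent if no pair of runs agreeing on the public input
# but differing on the secret produces different public outputs.

def noninterferent(prog, public_inputs, secret_inputs) -> bool:
    for pub in public_inputs:
        outputs = {prog(pub, sec) for sec in secret_inputs}
        if len(outputs) > 1:
            return False  # the secret influenced the public output
    return True

leaky = lambda pub, sec: pub + (sec % 2)  # leaks the secret's low bit
safe = lambda pub, sec: pub * 2           # ignores the secret entirely

print(noninterferent(leaky, range(4), range(4)))  # False
print(noninterferent(safe, range(4), range(4)))   # True
```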


Expressing Security Policies

[This blog post was conceived by Steve Chong, at Harvard, and co-authored with Michael Hicks.]

Enforcing information security is increasingly important as more of our sensitive data is managed by computer systems. We would like our medical records, personal financial information, social network data, etc. to be “private,” which is to say we don’t want the wrong people looking at it. But while we might have an intuitive idea about who the “wrong people” are, if we are to build computer systems that enforce the confidentiality of our private information, we have to turn this intuition into an actionable policy.

Defining what exactly it means to “handle private information correctly” can be subtle and tricky. This is where programming language techniques can help us, by providing formal semantic models of computer systems within which we can define security policies for private information. That is, we can use formal semantics to precisely characterize what it means for a computer system to handle information in accordance with the security policies associated with sensitive information. Defining security policies is still a difficult task, but using formal semantics allows us to give security policies an unambiguous interpretation and to explicate the subtleties involved in “handling private information correctly.”
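
As a tiny taste of what a precise policy can look like (a sketch of my own, not drawn from the papers linked below), confidentiality is often phrased as a rule over a lattice of security labels: information may only flow from lower labels to higher ones.

```python
# A two-point information-flow lattice (illustrative). The labels and
# the flow rule are the standard textbook formulation, not a policy
# from any particular system.

LEVELS = {"PUBLIC": 0, "SECRET": 1}

def flow_allowed(src_label: str, sink_label: str) -> bool:
    # Information may only flow "up" the lattice: sending PUBLIC data
    # to a SECRET sink is fine; the reverse would leak.
    return LEVELS[src_label] <= LEVELS[sink_label]

assert flow_allowed("PUBLIC", "SECRET")      # allowed
assert not flow_allowed("SECRET", "PUBLIC")  # a confidentiality violation
```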

In this blog post, we’ll focus on the intuition behind some security policies for private information. We won’t dig deeply into formal semantics for policies, but we provide links to relevant technical papers at the end of the post. Later, we also briefly mention how PL techniques can help enforce policies of the sort we discuss here.


Unblinding Double-blind Reviewing

Peer review is at the heart of the scientific process. As I have written about before, scientific results are deemed publishable by top journals and conferences only once they are given a stamp of approval by a panel of expert reviewers (“peers”). These reviewers act as a critical quality control, rejecting bogus or uninteresting results.

But peer review involves human judgment and as such it is subject to bias. One source of bias is a scientific paper’s authorship: reviewers may judge the work of unknown or minority authors more negatively, or judge the work of famous authors more positively, independent of the merits of the work itself.

Double-blind: Authors are blind to their reviewers, who are blind to authors

The double-blind review process aims to mitigate authorship bias by withholding the identity of authors from reviewers. Unfortunately, simply removing author names from the paper (along with other straightforward prescriptions) may not be enough to prevent the reviewers from guessing who the authors are. If reviewers often guess and are correct, the benefits of blinding may not be worth the costs.

While I am a believer in double-blind reviewing, I have often wondered about its efficacy. So as part of the review process of CSF'16, I carried out an experiment:[ref]The structure of this experiment was inspired by the process Emery Berger put in place for PLDI'16, following a suggestion by Kathryn McKinley.[/ref] I asked reviewers to indicate, after reviewing a paper, whether they had a good guess about the authors of the paper, and if so to name the author(s). This post presents the results. In sum, reviewers often (2/3 of the time) had no good guess about authorship, but when they did, they were often correct (4/5 of the time); that is, only about a quarter (1/3 × 4/5 ≈ 27%) of reviews came with a correct authorship guess. I think these results support using a double-blind process, as I discuss at the end.


Carbon Footprint of Conference Travel

Conferences are the heart of the PL research community. The best PL research is published at conferences, following a rigorous peer review process on par with, or better than, that of high-quality journals. Conferences are also where science gets done. As the community gathers to learn about the latest results, its members network and interact, developing collaborations or carrying on projects that could produce the next breakthroughs. At conferences, students and young professors can rub elbows with luminaries, and researchers can develop problems and exchange ideas with practitioners.

Air travel warms the planet disproportionately

One drawback of conferences (compared to other forms of research publication) is their cost. The monetary cost is obvious: at least one of a paper’s authors must pay to attend the conference, a cost that includes registration, travel, and hotel. Another cost that is less often considered is the environmental cost. In particular, I’m thinking of the impact that travel to/from the conference has on global warming. Most conference attendees travel great distances, and so travel by airplane. But air travel is particularly bad for global warming. So I wondered: what is the cost of conference travel, in terms of carbon footprint?

To get some idea, I decided to estimate the carbon footprint of the PLDI’16 program committee (PC) meeting, held just before and at the same venue as POPL’16. The result directly sheds some light on the carbon footprint of in-person PC meetings, and by treating it as a sample of the PL community overall, sheds light on the carbon footprint of PL conferences. In this blog post I present the results of my analysis and conclude with thoughts about possible actions to mitigate environmental cost.
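
To give a flavor of the kind of estimate involved, here is a back-of-the-envelope sketch of my own; both constants are assumptions for illustration, not numbers from the analysis below.

```python
# Rough per-passenger flight footprint (illustrative). Published
# per-km emission factors vary by aircraft and seating class, and the
# multiplier for non-CO2 warming at altitude is itself debated.

KG_CO2_PER_PASSENGER_KM = 0.115  # assumed long-haul economy factor
RF_MULTIPLIER = 1.9              # assumed high-altitude warming multiplier

def flight_footprint_kg(round_trip_km: float) -> float:
    """CO2-equivalent kilograms for one passenger's round trip."""
    return round_trip_km * KG_CO2_PER_PASSENGER_KM * RF_MULTIPLIER

# One committee member flying a 12,000 km round trip:
print(round(flight_footprint_kg(12_000)))  # ~2622 kg, about 2.6 tonnes
```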
