Interview with Facebook’s Peter O’Hearn

In this post, I interview Peter O’Hearn, programming languages professor, researcher, and evangelist. Peter now works at Facebook on the Infer static analyzer, which was publicly released back in June 2015. In this interview we take a brief tour of Peter’s background (including his favorite papers) and the path that led him and Infer to Facebook. We discuss how Infer is impacting mobile application development at Facebook, and what Peter hopes it can achieve next. Peter also shares some lessons he’s learned at Facebook regarding PL research and the sometimes surprising impact PL researchers can and are having on industrial software development.

What is your academic background?

I got a BSc in Computer Science from Dalhousie University, followed by an MSc and PhD at Queen’s (both universities in Canada). For my PhD I was fortunate to be supervised by Bob Tennent, an early proponent of denotational semantics who emphasized strongly how semantics can influence language design. After that I held academic positions at Syracuse University, Queen Mary University of London, and University College London (where I maintain a professor position).

Can you say a little about your research?

I have worked on denotational semantics, mathematical logic, program logic, and automated program verification/analysis. My work started off rather theoretical, involving category-theoretic models and the like, and progressively moved towards practice. A main career goal has been to advance reasoning about programs in a way that takes it closer to real world problems, and this has involved jumping between theoretical and practical work.

A key moment was when John Reynolds and I developed Separation Logic. This opened up new avenues to a number of previously unapproachable problems, particularly concurrency and automated verification for programs that mutate the heap. It spurred work with Calcagno, Distefano, Berdine, Yang and others on automatic program analysis, culminating in program analyses based on Separation Logic being applied to industrial code bases.
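
(Ed: for readers unfamiliar with Separation Logic, the local reasoning it supports is usually summarized by its frame rule, sketched below in standard notation: a specification proved for a command C over only the portion of heap it touches can be extended with any disjoint resource R.)

    \[
      \frac{\{P\}\; C\; \{Q\}}{\{P * R\}\; C\; \{Q * R\}}
      \qquad \text{provided no variable modified by } C \text{ occurs free in } R
    \]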

In 2013 you started working at Facebook. How did that come about, and what have you been up to since?

Since 2013 I have been an Engineering Manager at Facebook, following the acquisition of the startup company I co-founded, called Monoidics Ltd, which was putting some of our program analysis research ideas into practice. In particular, Monoidics developed the Infer static analyzer, which Facebook has now made publicly available. It is technically a shape analysis, which is a kind of analysis that goes deep into the program heap to, for instance, distinguish cyclic from acyclic pointer structures.  More importantly, though, Infer behaves in a compositional way, where the analysis results of a composite program are computed from the results of its parts, and this has been crucial for its deployment inside Facebook. (Ed: links to some relevant papers appear below.)

Infer is wired into the development process for our mobile apps, including the main Facebook apps for Android and iOS, Facebook Messenger, and Instagram, among others. It participates in the code review process by writing comments flagging potential bugs on code modifications (diffs) when they are submitted for review by our engineers, before these diffs are committed to production. Other participants in review include human reviewers and tests run by the continuous integration system. Because Infer is compositional, it can operate incrementally, quickly producing results on a code diff even when that diff is part of a codebase in the millions of lines.
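
(Ed: to illustrate the kind of issue Infer comments on at diff time, here is a small, hypothetical Java example; the class and method names are made up, but the pattern, dereferencing a possibly-null return value, is one that Infer reports as a potential null dereference.)

    import java.util.HashMap;
    import java.util.Map;

    class Inventory {
      private final Map<String, Item> items = new HashMap<>();

      // Map.get returns null when the key is absent.
      Item getItem(String name) {
        return items.get(name);
      }

      // The result of getItem is dereferenced without a null check,
      // so an analyzer like Infer can flag a potential null dereference here.
      int itemWeight(String name) {
        Item item = getItem(name);
        return item.weight();
      }
    }

    class Item {
      int weight() { return 1; }
    }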

This deployment model has been hugely important for the success of the analyzer. Infer makes comments in the natural workflow of the engineers, who have been acting on the analysis reports to the tune of a fix rate hovering around 80% in recent months. Reporting at code review time avoids the expensive human context switch that results from long-running batch-mode analysis that produces bug lists (say, in overnight runs); we have found it considerably more difficult to have success with batch-mode deployment.

Infer is open source, as are several other Facebook projects. Why release the code?

We at Facebook are interested in technology that will enable us to develop better software, faster.

Program verification is an area whose potential is only beginning to be tapped; there are lots of great ideas in the area, but I feel there is much more practical value to be unlocked for programmers. In the case of Infer itself, although we have had some degree of success deploying it, we don’t have all the answers. The tool can improve by paying attention to the problems of people outside Facebook who use it. And by putting it out there, we can get feedback and contributions from the research community, giving us a two-way flow of ideas between people out in the world and our technical team. This will then directly impact Facebook engineers and products, as well as hopefully play a part in the advancement of verification technology generally.

In general, in order to fulfil our mission of making the world more open and connected, a lot of software needs to be built, and we can’t build it all ourselves. Our goal at Facebook is to open source as much of our technology as possible, in particular the technology we feel would be valuable for the broader engineering community at large. (Ed: We’ve also heard from Avik Chaudhuri of FB about Flow, and about another language, Hack, both of which are publicly available.) If we have something that fundamentally improves our ability to build valuable software, we’re eager to share it with the world so we can get feedback on that work and also so that other engineers can leverage the work we’ve done, allowing us all to move fast and innovate.

How does your job at Facebook differ from your routine as a professor?

One big difference is the kinds of questions I consider. As an academic I can see now that I was mostly concerned with novelty: how can I find new techniques to push the frontier of knowledge forward? In industry I ask myself: how can we help programmers do their jobs better? Whether the answer involves novelty is not as important as whether we can actually help people. I personally like both kinds of question, but it is a real difference of perspective, and I am now of the opinion that academic computer science perhaps values novelty a little too highly. From an industrial perspective we would have more to go on if there were more corroboration and less uncorroborated novelty.

Another big difference is how, working at FB, we are judged on clear impact: how do our techniques affect the code in apps used by over a billion people? This is very different from considering potential impact that may be many years out. We can and do consider long-range questions, but questions about immediate impact are always there, as is only right in this setting. This makes the work exhilarating, and challenging.

Finally, a real eye-opener has been how evidence-based Facebook engineering is. We measure as much as we can to inform our engineering judgments. For Infer we use a number of performance measures, involving both the CPU time of the analysis and the resources it consumes within the continuous integration ecosystem. We also measure the rate at which developers respond to Infer’s comments (the fix rate), which is a much easier quantity to measure than the false positive rate (unless you redefine the false positive concept in terms of fix rate).

Do you have any lessons learned from your time at Facebook to share with programming languages researchers?

I would send a positive message to PL researchers concerning how well the field has been doing. PL research has provided languages that people in industry actually use, such as OCaml and Haskell, but even more importantly, concepts from the PL research community are showing up in recent industrial languages like Rust, Swift, Go, Hack, and Flow. It is really a great time for seeing PL research impact. And beyond the languages themselves, there are programming idioms. For example, I have seen functional programming idioms show up in industrial code bases for languages that are not usually thought of as functional; the programming ideas themselves can carry over into many languages. This sort of idea-flow does not require academics to make industrial-grade implementations.
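
(Ed: a small, hypothetical illustration of the kind of functional idiom Peter describes, written in Java, a language not usually thought of as functional; the pipeline below filters and transforms a list without mutating shared state.)

    import java.util.List;
    import java.util.stream.Collectors;

    class ActiveUsers {
      // Functional-style pipeline: filter, map, and collect instead of
      // an explicit loop that mutates an accumulator.
      static List<String> activeUserNames(List<User> users) {
        return users.stream()
            .filter(User::isActive)
            .map(User::getName)
            .collect(Collectors.toList());
      }
    }

    class User {
      private final String name;
      private final boolean active;

      User(String name, boolean active) {
        this.name = name;
        this.active = active;
      }

      boolean isActive() { return active; }
      String getName() { return name; }
    }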

In fact, I am personally of the opinion that construction of prototypes is often fine for the research community to focus on. To make a more robust implementation usually costs much more than a research team can or should afford. Yes, there are some important counterexamples (e.g., OCaml), but I would not advise the academic community to necessarily always shoot for “reliable, usable, working artifacts”, at least as I understand those terms in the context of a tool that users can rely on. Requiring people to try to make industrial-grade tools could have the detrimental effect of slowing things down and stifling the development of fresh ideas.

Note that this is not a comment against the drive towards repeatability (which I support) or corroboration (which would be even better). But we should be careful not to require academics to do what would take many person-years to do well in industry.

What papers have you written that you are most proud of (that PL Enthusiast readers might check out)?

Let me mention three.

    1. Local Reasoning about Programs that Alter Data Structures. Peter W. O’Hearn, John C. Reynolds, Hongseok Yang: CSL 2001: 1-19.
    2. Resources, Concurrency and Local Reasoning. Peter O’Hearn: Theor. Comput. Sci. 375(1-3): 271-307 (2007).
    3. Compositional Shape Analysis by Means of Bi-Abduction. Cristiano Calcagno, Dino Distefano, Peter W. O’Hearn, Hongseok Yang: J. ACM 58(6): 26 (2011).

The first is probably the best paper I have written; it is where years of thought related to program logic, AI, and other areas came together, and it includes the “principle of local reasoning” that underpins the efficiency of proof tools including Infer and others. Also, I tried to write the paper in a simple way, where one did not need to be a dyed-in-the-wool theorist to read it; the reader can judge whether it succeeds in that!

The second is written in a tutorial style concentrating on fluency rather than metatheory, to try to get ideas across. Steve Brookes wrote a companion paper sorting out the foundations.

The last is the basis of Monoidics/Facebook Infer. It matters more for compositionality and the use of abduction than for the shape analysis itself.

What research are you working on now?

I still want to understand concurrency, scalably. I would like analyses that can be deployed with high speed and low friction (e.g., without copious annotations or proof hints) and that produce high-quality reports useful to programmers writing concurrent programs, without disturbing their workflow too much. Then they could scale to many programmers and many programs. Maybe I am asking for too much, but that is what I would like to find.
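
(Ed: an illustrative, hypothetical Java example of the kind of concurrency defect such an analysis would aim to report without requiring annotations: the write below is protected by the object’s lock, but the read is not, so the two accesses can race.)

    class Counter {
      private int count = 0;

      // The write is performed while holding the object's monitor.
      synchronized void increment() {
        count++;
      }

      // This read is unsynchronized: it can race with increment(), which is
      // exactly the kind of unprotected access a race analysis should flag.
      int get() {
        return count;
      }
    }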

Any closing thoughts?

One thing I would like to say is that moving to industry has made me more convinced than ever of the huge value of the education work that PL researchers engage in. I have met people who have taken programming courses from PL researchers, and the undoubtedly strongly-held and coherently argued viewpoints they see in these courses end up impacting our codebases and even some of the directions people take internally. And I am talking about people inside Facebook trained as engineers, not researchers. The PL community needs to go on teaching the principles of programming and programming languages in a deep and uncompromising way, and we need to teach about verification and how it can be used; a lot of work is needed on the latter, especially. We should do this at the undergraduate as well as the graduate level. These activities can have a big impact on future software.

Follow Peter’s activities and other news about Infer at @fbinfer.
