Increasing the Impact of PL Research

[This article is cross-posted on PL Perspectives, the SIGPLAN blog.]

Programming languages research has been going on since (at least) the first general-purpose language compilers were developed in the 1950s, and it still has a lot to offer to today’s pressing problems. Indeed, we might right now be in another golden age of programming language design, what with Rust poised to become a major systems programming language; with TypeScript reimagining yet co-existing with JavaScript; with WebAssembly trending towards a high-performance yet safe mobile code platform; and even classic languages like C++ undergoing major design revisions. Meanwhile, libraries like TensorFlow and PyTorch and languages like Julia are turning machine learning specialists into language designers and compiler writers.

Even in this vibrant environment, my sense is that the size of the PL research community, and the impact of its research, are lower than they should be. My social network tells me that grad school applications signaling PL interest are declining, and while many researchers have won Turing Awards for PL ideas, these are becoming fewer and farther between. Compare this state of affairs to that in the machine learning and security communities, which are growing rapidly in size and stature. Is there anything the PL community can do to increase the impact of its great work?

My recommendation is a concentrated effort to diversify PL research enthusiasts, and through them broaden the impact of PL-minded work. 

We in PL can expand our tent. Education and outreach can help others to see that PL—its problems, methods, and ethos—is different and more exciting than they realized. We can lower the barrier to entry by engaging in a little housecleaning around expectations of core knowledge. We can also venture outside our tent, taking our knowledge and ideas to join other communities and address their problems. All of these steps will follow naturally from a focus on collaborative efforts attacking substantial problems, such as deployable AI or a quantum programming stack, whose solutions involve PL techniques alongside many others.

Background: What is an area?

PL is an area of computer science research. What does this mean? I want to answer this question because the conception of an area nudges fledgling researchers (i.e., potential graduate students) toward or away from considering it, and likewise guides organizations’ hiring decisions.

[Figure: CSrankings’ area list, with PL expanded]

In my thinking, a research area is a combination of two things. First, it is a collection of foundational mathematics, notation, vocabulary, and techniques. Second, it is a collection of hard problems and ambitious applications around which those techniques have developed. 

An area—both of these things together—has a corresponding community of researchers and enthusiasts. This community runs (organizes and gate-keeps) the area’s journals and conferences, and establishes its general ethos. We can think of an area as roughly corresponding to an ACM Special Interest Group (SIG) and its flagship conferences; this is what CSrankings does. PL as an area is thus represented by SIGPLAN, which administers the POPL, PLDI, ICFP and OOPSLA conferences (among others).

That an area covers both (big) problems and a set of techniques and ideas can lead to confusion about what an area is, and I think that PL is hurt by this confusion in particular.

Impact Boost #1: Expand the Tent

The first opportunity to increase PL impact is to get more people interested in the PL area and able to contribute to it. We can do that by showing that PL is more than they probably thought it was, and also by lowering the barrier to entry.

PL as an area began with computer science researchers tackling problems of programming language design and implementation, and integrally, understanding what programs mean (semantics). For the first POPL (in 1973), researchers focused on the topics of parsing, language feature design, (proved-correct!) compilation, optimization, formalization of language types and semantics, and automated program reasoning. 

PL researchers still work on these problems but approach them in new ways. For example, as we have written about many times on this blog, PL researchers (Swarat Chaudhuri, Eran Yahav, Mayur Naik, and others) are bringing techniques from machine learning to the problems of automated reasoning, optimization, and program repair. 

Conversely, PL researchers are regularly applying the area’s broad palette of 60 years’ worth of design principles, techniques, and formalisms to solve pressing problems first identified by other areas. For example, Wassermann and Su’s approach to stopping SQL injection, a security problem, applies techniques for parsing, a PL problem. Işıl Dillig’s group’s recent work on managing database schema evolution adapts general programming languages research—synthesis, program verification—to the specific setting of SQL and database schema migration. My prior work on general-purpose authenticated data structures (ADSs) uses standard PL techniques—type systems and operational semantics—to generalize an idea developed in the crypto community—recursive hashing, sketched below—to general computations. (See my prior blog post for more examples!)
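For readers outside the crypto community, "recursive hashing" is the Merkle-tree idea: each node's digest is computed from its children's digests, so a single root digest authenticates an entire structure. Here is a minimal OCaml sketch of that recursion; the hash function (e.g., SHA-256) and the serializer are parameters here, since the point is the shape of the computation rather than any particular library.

    (* Merkle-style recursive hashing of a binary tree: a node's digest is
       the hash of its children's digests, so the root digest commits to
       the whole tree. [hash] and [show] are supplied by the caller. *)
    type 'a tree =
      | Leaf of 'a
      | Node of 'a tree * 'a tree

    let rec merkle_hash (hash : string -> string) (show : 'a -> string)
        (t : 'a tree) : string =
      match t with
      | Leaf x -> hash ("leaf:" ^ show x)
      | Node (l, r) ->
          hash ("node:" ^ merkle_hash hash show l ^ merkle_hash hash show r)

The ADS work mentioned above generalizes this pattern from fixed data structures to arbitrary computations over them, using types and an operational semantics to pin down what the prover must ship and what the verifier must check.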

Education and Outreach

To increase the impact of PL research, I suggest we strengthen connections to other areas through education and outreach. In particular, we can try to educate those in other areas about PL problems and their traditional solutions. Doing so may help seasoned researchers in those areas see how their techniques apply to PL problems, and conversely how PL techniques might apply to their problems. The interaction may attract young researchers to PL, as well as potential collaborators.

What specific steps can we take? One thing is to contribute to blogs like this one, aiming to inform non-PL researchers of PL results and ideas, and share posts with colleagues. Another is to broadly publicize events like Programming Languages Mentoring Workshops (PLMW) which show aspiring researchers what PL is about, and events like the Oregon Programming Languages Summer School (OPLSS), which provide in-depth education.

At the same time, to do any of this successfully we need to lower barriers to understanding PL results. Martin Vechev once observed to me that PL methods present a very high wall for those outside the community, and that’s why they do not read our papers. What can we do?

Essentially all CS undergraduates have seen calculus and statistics, which makes many machine learning concepts readily digestible. Far fewer have seen formal logic, which is the basis for many PL techniques and (importantly) PL notation. One solution to this problem is to integrate more of this material into standard curricula (as I have done in our required PL course at UMD). Another, related step is to standardize notation on something simpler and/or more broadly understood. For example, when the reduction relation is actually a function, why not write it as one (rather than as rules of inference), as sketched below? And why continue to employ slightly varying notation for standard concepts (e.g., capture-avoiding substitution may be written e[v/x] or e[x/v] or e[x ↦ v])? These things sap the reader’s mental energy. Maybe an effort like https://distill.pub/, but for PL papers, would help move the field toward more understandable presentations.
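To make the notational point concrete, here is a minimal sketch (in OCaml, over a made-up two-constructor arithmetic language) of a deterministic small-step reduction written directly as a partial function. The same content is conventionally presented as three inference rules; when evaluation is deterministic, nothing is lost by writing the function instead.

    (* A deterministic small-step reduction relation e -> e', written as a
       partial function rather than as inference rules. Tiny arithmetic
       language; evaluation proceeds left to right. *)
    type exp =
      | Num of int
      | Add of exp * exp

    let rec step (e : exp) : exp option =
      match e with
      | Num _ -> None                                 (* values do not step *)
      | Add (Num n1, Num n2) -> Some (Num (n1 + n2))  (* reduce a finished sum *)
      | Add (Num n1, e2) ->                           (* right operand steps *)
          (match step e2 with
           | Some e2' -> Some (Add (Num n1, e2'))
           | None -> None)
      | Add (e1, e2) ->                               (* left operand steps *)
          (match step e1 with
           | Some e1' -> Some (Add (e1', e2))
           | None -> None)

A reader who has never seen an inference rule can follow this function; a reader who has can reconstruct the rules from it.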

Impact Boost #2: Publish by Problem, not Technique

As I mentioned above, PL researchers are regularly tackling problems identified by other areas. When doing so, I suggest we publish results in those other areas’ conferences. Conversely, publish solutions to PL problems in PL conferences, no matter the technique used.

Why do this? Consider the case of applying a PL technique to a non-PL problem. Publishing your result in a PL venue makes sense because that community knows the techniques, vocabulary, etc., and the publication builds social capital within the community. The three works I mentioned above (Wassermann and Su’s, Dillig’s, and my own) were all published at POPL or PLDI.

But this approach risks lowering the work’s overall impact. The PL venue’s reviewers may be less able to spot incorrect claims about the problem itself, and less aware of related work done outside the PL community. Those most interested in the application area are not regular attendees of the PL conference, and so may miss (or discount) papers published there.

The same argument applies to non-traditional (e.g., machine learning) approaches to solving PL problems. Mayur Naik observed to me that publishing ML solutions to PL problems in ML venues exhibits the above problems, but in reverse—PL people don’t know about the work and if they do they may not understand it, since it’s not directed at them. (Publishing at both venues is hard because of the need for novelty.)

Publishing at the venue associated with the paper’s problem or application area ought to enhance relevance and impact, which should be the ultimate goal. Consider the Penrose project, which aims to generate visual diagrams from mathematical notation, a problem PL audiences would recognize as a form of synthesis (but see the comment from Jonathan Aldrich, a contributor to the project, below). The team published their work at SIGGRAPH and got a lot more coverage and interest because the SIGGRAPH community really values this work.

Doing this raises the bar for the authors. The paper risks rejection by reviewers not familiar with the paper’s vocabulary or methods. Even if the paper is published, its impact may be reduced for the same reason, i.e., because readers cannot understand it. Submitting to a venue outside of PL may seem too difficult/risky, despite the impact benefits. That’s the conclusion we came to with our ADS paper (mentioned above); maybe we made the wrong choice. 

Favoring Accessibility

Addressing this concern comes back to communication and education. How can we present the use of PL techniques in a way that is more accessible to those outside the community? Enhancing general education and standardizing notation will help. What else?

One idea is to avoid PL-specific formalism when possible. For example, can a solution be characterized not as a language, but rather as a library or API, as sketched below? This would have been possible for our ADS paper, for instance. Alternatively, the main result could be published in a PL venue and then another version could be published where those from the problem area will see it and can digest it. This could be a blog post, a magazine article, a video-recorded talk, etc.
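As a purely hypothetical illustration of the library-not-language framing, here is what an authenticated-computation interface might look like as an ordinary OCaml module signature rather than as new syntax and typing rules. The names and types below are mine, for illustration only; they are not the interface of any published artifact.

    (* A hypothetical library interface for authenticated computation:
       expose [auth]/[unauth] as ordinary functions over an abstract type
       of authenticated values, instead of extending the language. *)
    module type AUTH = sig
      type 'a auth
      (* Commit to a value, given a serializer used for hashing. *)
      val auth : ('a -> string) -> 'a -> 'a auth
      (* Open an authenticated value; a verifier instance would check the
         accompanying proof here, while a prover instance would record it. *)
      val unauth : ('a -> string) -> 'a auth -> 'a
    end

The same client code could then be linked against a prover implementation (which records proofs) or a verifier implementation (which checks them), repackaging the essential idea of the language-based design in a form a non-PL reader can pick up from a README.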

Another idea is to develop short monographs that explain PL techniques, ideas, and problems for those in other, relevant areas: PL for security people; PL for ML people; PL for DB people, etc. These could be written by PL researchers who have already started to cross the chasm (or those who have gone the other way). 

Impact Boost #3: Go Big (and Collaborative)

My third suggestion is easy to say and hard to do: Join or organize collaborations to tackle big, pressing problems for which PL offers ideas for part of, but not all of, the solution. Doing so will have direct impact, but should also help address the problems of education and inclusiveness I’ve discussed above. Here are a few examples.

DARPA’s CRASH/SAFE project aimed to reimagine a secure computing infrastructure, “a combined effort that looks at hardware, programming languages, operating systems, and theorems all at the same time.” As part of this project, PL researchers at UPenn and Harvard worked together to develop foundational techniques on verification and testing, and developed new languages and hardware mechanisms. This was done in collaboration with those working on theorem proving (e.g., at Cornell and UT Austin) and on building the hardware (e.g., at Draper).

MIT’s Center for Deployable Machine Learning works “towards creating AI systems that are robust, reliable and safe for real-world deployment.” It brings together researchers in Algorithms, Theory, Graphics/Vision, AI, PL, and other areas to wrestle with challenging, cross-cutting problems. Armando Solar-Lezama is a PL researcher and a member of the Center; he leads a new multi-institution NSF Expeditions Award on neurosymbolic programming, which Swarat Chaudhuri wrote about on this blog (I notice that CVPR had a day-long tutorial on the topic). Beyond MIT’s center, I’m seeing a lot of interest in applying formal methods and PL techniques to problems of AI fairness and robustness (to defend against adversarial inputs); see, for example, this CAV 2020 keynote by Pushmeet Kohli.

Project Everest brings together researchers in PL, Systems, and Cryptography, with the goal of producing highly efficient, fully verified components of the HTTPS stack. The project grew out of the earlier Ironclad project, in which systems researchers like Jay Lorch and Bryan Parno worked with PL researchers like Rustan Leino and Chris Hawblitzel to build formally verified, low-level distributed systems services. Researchers in applied cryptography and foundational security, including Cedric Fournet, Karthikeyan Bhargavan, Markulf Kohlweiss, Antoine Delignat-Lavaud, and Santiago Zanella-Beguelin, helped set a focus on building verified, secure communication components, while guiding the formally verified cryptography implementation. Meanwhile, the underlying programming language and verification technology was driven by the F* team, which includes PL researchers like Nikhil Swamy, Aseem Rastogi, Jonathan Protzenko, Tahina Ramananandro, and Catalin Hritcu. All of this work relies heavily on automated theorem provers, notably Z3, developed by Leonardo de Moura and Nikolaj Bjorner. At this point, verified code developed by the project is running in Firefox, Linux, Microsoft Azure, and several other industrial products.

Concluding Remarks

To close, I’ll quote my friend and collaborator David Darais, who, as I’ve done above, distinguishes an area’s “problem domains” from its “solution domains.” Right now, other areas’ problem domains, like those in machine learning and security, are at the forefront. David says:

Undergraduates get hooked on these areas’ problems by taking courses, solving toy problems, and loving it. For any given problem, PL has its own powerful solution that can’t be replicated in other disciplines. But it’s hard to get excited about chasing solutions, and then later find problems to use them on. We should be reframing PL as a toolbox that applies to everything, including medicine, bio, art, music, graphics, systems, machine learning, databases, etc. Become an expert in the PL toolbox and have a secret weapon nobody else has in the problem space you care about. That’s what we should be pitching to the younger generation to think about PL as their research path.

The prospects and potential are immense; let’s get to work!

Thanks to Emery Berger, David Darais, Sankha Guria, Robert Rand, Nikhil Swamy, and Ian Sweet for suggestions and comments on drafts of this post.

Comments

  1. Jonathan Aldrich

    Thanks for the shout-out to Penrose–indeed, we hope that publishing in SIGGRAPH will highlight what PL techniques can do in this domain! Interestingly, Penrose doesn’t really use PL synthesis techniques–our diagram synthesis primarily uses constrained numerical optimization. However, as you would expect, we do leverage PL techniques related to language design, extensible parsing, typing, modularity, extensibility, pattern matching, and semantics. In our current work we are also using automatic differentiation along with code generation to make optimization in the back end go faster. So many useful PL applications here!
