Expressing Security Policies

[This blog post was conceived by Steve Chong, a professor at Harvard, and co-authored with Michael Hicks.]

Enforcing information security is increasingly important as more of our sensitive data is managed by computer systems. We would like our medical records, personal financial information, social network data, etc. to be “private,” which is to say we don’t want the wrong people looking at it. But while we might have an intuitive idea about who the “wrong people” are, if we are to build computer systems that enforce the confidentiality of our private information, we have to turn this intuition into an actionable policy.

Defining what exactly it means to “handle private information correctly” can be subtle and tricky. This is where programming language techniques can help us, by providing formal semantic models of computer systems within which we can define security policies for private information. That is, we can use formal semantics to precisely characterize what it means for a computer system to handle information in accordance with the security policies associated with sensitive information. Defining security policies is still a difficult task, but using formal semantics allows us to give security policies an unambiguous interpretation and to explicate the subtleties involved in “handling private information correctly.”

In this blog post, we’ll focus on the intuition behind some security policies for private information. We won’t dig deeply into formal semantics for policies, but we provide links to relevant technical papers at the end of the post. Later, we also briefly mention how PL techniques can help enforce policies of the sort we discuss here.
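
To make this concrete, here is a minimal sketch, in Python, of one way a system might enforce such a policy at run time. The two-point label lattice and the `Labeled` wrapper are illustrative inventions for this post, not any particular tool’s API; real enforcement mechanisms are considerably more sophisticated.

```python
# A minimal sketch of dynamic information-flow tracking (illustrative
# names, not a real tool's API). Labels form a two-point lattice:
# PUBLIC data may flow anywhere; SECRET data may not reach public sinks.
PUBLIC, SECRET = 0, 1

class Labeled:
    """A value tagged with a confidentiality label."""
    def __init__(self, value, label):
        self.value, self.label = value, label

    def __add__(self, other):
        # Combining values yields a result as secret as the most secret
        # input: the join (max) of the two labels.
        return Labeled(self.value + other.value, max(self.label, other.label))

def public_output(lv):
    # The policy check: nothing labeled SECRET may reach a public sink.
    if lv.label == SECRET:
        raise RuntimeError("policy violation: secret flows to public output")
    print(lv.value)

salary  = Labeled(95000, SECRET)   # private information
zipcode = Labeled(20742, PUBLIC)   # public information

public_output(zipcode)             # fine: public data, public sink
public_output(salary + zipcode)    # rejected: the sum depends on the secret
```

This toy monitor catches only explicit flows. Secrets can also leak implicitly, e.g., by branching on a secret and then writing to a public channel, which is exactly the kind of subtlety a formal semantics helps pin down.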

Filed under Semantics, Software Security

Unblinding Double-blind Reviewing

Peer review is at the heart of the scientific process. As I have written about before, scientific results are deemed publishable by top journals and conferences only once they are given a stamp of approval by a panel of expert reviewers (“peers”). These reviewers act as a critical quality control, rejecting bogus or uninteresting results.

But peer review involves human judgment and as such it is subject to bias. One source of bias is a scientific paper’s authorship: reviewers may judge the work of unknown or minority authors more negatively, or judge the work of famous authors more positively, independent of the merits of the work itself.

Double-blind: Authors are blind to their reviewers, who are blind to authors

The double-blind review process aims to mitigate authorship bias by withholding the identity of authors from reviewers. Unfortunately, simply removing author names from the paper (along with other straightforward prescriptions) may not be enough to prevent the reviewers from guessing who the authors are. If reviewers often guess and are correct, the benefits of blinding may not be worth the costs.

While I am a believer in double-blind reviewing, I have often wondered about its efficacy. So as part of the CSF’16 review process, I carried out an experiment [1]: I asked reviewers to indicate, after reviewing a paper, whether they had a good guess about the authors of the paper and, if so, to name the author(s). This post presents the results. In sum, reviewers often (2/3 of the time) had no good guess about authorship, but when they did, they were often correct (4/5 of the time). I think these results support using a double-blind process, as I discuss at the end.
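
Taken together, those two fractions suggest that only about a quarter of reviews were effectively unblinded. A quick back-of-the-envelope sketch (illustrative arithmetic only; the full post reports the exact counts):

```python
# Back-of-the-envelope arithmetic from the fractions reported above
# (illustrative only; the full post gives the exact counts).
p_guess = 1 / 3                # reviews that came with a confident guess
p_correct_given_guess = 4 / 5  # confident guesses that were right

p_unblinded = p_guess * p_correct_given_guess
print(f"effectively unblinded: {p_unblinded:.0%}")                       # 27%
print(f"still blind (no guess, or a wrong one): {1 - p_unblinded:.0%}")  # 73%
```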

Notes:

  1. The structure of this experiment was inspired by the process Emery Berger put in place for PLDI’16, following a suggestion by Kathryn McKinley.

Filed under Process, Science

Carbon Footprint of Conference Travel

Conferences are the heart of the PL research community. The best PL research is published at conferences, following a rigorous peer review process on par with, or better than, that of high-quality journals. Conferences are also where science gets done. As the community gathers to learn about the latest results, its members network and interact, developing collaborations or carrying on projects that could produce the next breakthroughs. At conferences, students and young professors can rub elbows with luminaries, and researchers can develop problems and exchange ideas with practitioners.

Air travel warms the planet disproportionately

One drawback of conferences (compared to other forms of research publication) is their cost. The monetary cost is obvious: at least one of a paper’s authors must pay to attend, a cost that includes registration, travel, and hotel. Less often considered is the environmental cost; in particular, the impact that travel to and from the conference has on global warming. Most conference attendees travel great distances, and so travel by airplane, and air travel is particularly bad for global warming: emissions at high altitude have a disproportionately strong warming effect. So I wondered: what is the cost of conference travel, in terms of carbon footprint?

To get some idea, I decided to estimate the carbon footprint of the PLDI’16 program committee (PC) meeting, held just before, and at the same venue as, POPL’16. The result directly sheds light on the carbon footprint of in-person PC meetings and, treating the PC as a sample of the PL community, also approximates the footprint of PL conferences. In this blog post I present the results of my analysis and conclude with thoughts about possible actions to mitigate the environmental cost.
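
To give a feel for the sort of estimate involved, here is a minimal sketch in Python. The great-circle (haversine) distance formula is standard; the per-kilometer emissions factor and the multiplier for high-altitude effects are illustrative assumptions on my part (published factors vary by aircraft, load, and methodology), not the figures behind the analysis in the post.

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # mean Earth radius ~6371 km

def flight_co2e_kg(dist_km, kg_co2_per_km=0.115, altitude_multiplier=1.9):
    """Rough per-passenger CO2-equivalent for a one-way flight.

    Both defaults are assumed, illustrative values: ~0.115 kg CO2 per
    passenger-km for long-haul flights, times ~1.9 for the extra warming
    effect of emissions at altitude.
    """
    return dist_km * kg_co2_per_km * altitude_multiplier

# Example: a round trip from San Francisco (SFO) to St. Petersburg, FL
# (the POPL'16 venue), using approximate airport coordinates.
dist = great_circle_km(37.62, -122.38, 27.77, -82.64)
print(f"{dist:.0f} km one way, ~{2 * flight_co2e_kg(dist):.0f} kg CO2e round trip")
```

Summing such per-attendee estimates over everyone’s itinerary gives the kind of total the analysis reports.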

Filed under Policy, Process, Science

Rise of the Robots: Review and Reflection

I recently read Martin Ford’s Rise of the Robots with the UMD CS faculty book club. The book considers the impact of the growth of information technology (IT) on the human labor market, and how the trend towards greater automation could eventually eliminate a substantial number of jobs. The result could be a radical, and disruptive, reshaping of the global economy.

I would recommend the book. I found it well-written and thought provoking. Ford capably argues from past economic and technology trends and also digs into particular problems, products, and research in order to extrapolate future impact. Of the ten faculty who discussed the book, nine of us (including me) were convinced that future automation will be increasingly disruptive to human labor markets.

While reading the book, I found myself wondering about my own role, and that of my field, in addressing this situation we’ve contributed to. Many computer scientists have high-minded ideals and wish to help society through IT innovation. What can we do to ensure that those ideals are realized, rather than perverted into the dystopian future that Ford is warning us about?

Filed under Algorithms, Book Reviews, Policy, Software engineering

SecDev: Bringing Security Innovation Into Design & Development

The IEEE Cybersecurity Development (SecDev) Conference is a new conference focused on designing and building systems to be secure. It will be offered for the first time in Boston, MA, on November 3-4, 2016. This event was conceived, and is being organized, by Rob Cunningham; I’m pleased to be the PC Chair.

As stated in the call for papers, this first iteration of the conference is seeking short (5-page) papers, extended (1-page) abstracts, and tutorial proposals. The submission deadline is June 21, 2016 — if you have new results, old results you’d like to repackage, a tool, a process, a vision, or an idea you’d like to share with those working to make systems more secure, please consider submitting a paper!

This blog post explains why I think we need this conference, what I expect the first year to look like, and what sort of papers we hope to get, in question & answer format.

Filed under Process, Research, Software Security

Confluences in Programming Languages Research

[This guest post is by David Walker, a professor at Princeton, and recent winner of the SIGPLAN Robin Milner Young Researcher Award. –Mike]

Every once in a while it is useful to take a step back and consider where fruitful new research directions come from. One such place is the confluence of two independent streams of thought. This is an idea that I picked up from George Varghese, who gave a wonderful talk on the topic at ACM SIGCOMM 2014 and summarized the ideas in a short paper for CCR [1]. This blog post considers confluences in the context of programming languages research, reflects upon the role such confluences have played in my own research, and suggests some things we might learn from the process. My keynote talk from POPL 2016 [2] touches on many of these same themes.

Notes:

  1. George Varghese. Life in the Fast Lane: Viewed from the Confluence Lens. ACM SIGCOMM Computer Communication Review 45(1), pp. 19-25, January 2015. (link)
  2. David Walker. Confluences in Programming Languages Research (Keynote). ACM SIGPLAN Symposium on Principles of Programming Languages, p. 4, January 2016. (abstract, video, slides)

April 11, 2016 · 1:00 pm

Interview with Matt Might, Part 2

Matt at the White House, Jan 2015

This post is the second part of my March 10th interview with Matt Might, a PL researcher and Associate Professor in the School of Computing at the University of Utah.

In Part I, we talked about Matt’s academic background, his PL research (including his favorite among the papers he’s written), and his work on understanding and treating rare disease, which began with the quest to diagnose his son Bertrand and has led to a role in the President’s Precision Medicine Initiative.

In this post, our conversation continues, covering the topics of blogging, privacy, managing a crazy schedule, and looking ahead to promising PL research directions.

Filed under Bioinformatics, Interviews, Language adoption, Probabilistic programming, Program Analysis, Scientists, Software Security, Types

Interview with Matt Might

This post presents an interview I did on March 10th, 2015, with Matt Might, a PL researcher who is an Associate Professor in the School of Computing at the University of Utah.

Matt Might

Matt has made strong scientific contributions to the field of programming languages, and he has done much more. He maintains an incredibly popular blog (13 million pageviews since 2009) on topics ranging from abstract interpretation to how to lose weight to how to be more productive. He has also become deeply committed to supporting people with rare diseases, including his own son, Bertrand, who was the first person diagnosed with NGLY1 deficiency. His work on rare disease propelled him to the White House: he met the President on January 31st, 2015, and on March 21st he took a position in the Executive Office of the President to accelerate the implementation of the Precision Medicine Initiative.

We had an engaging conversation covering all of these topics. It is too long for one post, so this post is the first of two.

Filed under Abstract interpretation, Bioinformatics, Dynamic languages, Interviews, Program Analysis, Science, Scientists

DARPA STAC: Challenge-driven Cybersecurity Research

Last week I attended a multi-day meeting for the DARPA STAC program; I am the PI of a UMD-led team. STAC supports research to develop “Space/time Analysis for Cybersecurity.” More precisely, the goal is to develop tools that can analyze software to find exploitable side channels or denial-of-service attacks involving space usage or running time.
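
To give a flavor of the “time” half of that goal, here is a classic illustration in Python (a generic textbook example, not one of the STAC challenge problems): a comparison that exits at the first mismatch, so its running time reveals how long a prefix of the secret a caller has guessed correctly.

```python
def check_password(guess: str, secret: str) -> bool:
    # Early exit at the first mismatch: running time grows with the
    # length of the correctly guessed prefix, so an attacker who can
    # time many attempts can recover the secret character by character.
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False
    return True

def check_password_constant_time(guess: str, secret: str) -> bool:
    # Examines every character regardless of mismatches, so (for inputs
    # of equal length) running time is independent of how close the
    # guess is.
    if len(guess) != len(secret):
        return False
    diff = 0
    for g, s in zip(guess, secret):
        diff |= ord(g) ^ ord(s)
    return diff == 0
```

Finding leaks like the first function automatically, in programs vastly larger than a two-line loop, is the kind of capability STAC is after.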

In general, DARPA programs focus on a very specific problem, and so are different from the NSF style of funded research that I’m used to, in which the problem, solution, and evaluation approach are proposed by each investigator. One of STAC’s noteworthy features is its use of engagements, during which research teams use their tools to find vulnerabilities in challenge problems produced by an independent red team. Our first engagement was last week, and I found the experience very compelling. I think that both the NSF style and the DARPA style have benefits, and it’s great that both styles are available.

This post talks about my experience with STAC so far. I discuss the interesting PL research challenges the program presents, the use of engagements, and the opportunities STAC’s organizational structure offers, when done right.

Filed under Process, Program Analysis, Research, Science, Software Security

Software Security Ideas Ahead of Their Time

[This post was conceived and co-authored by Andrew Ruef, Ph.D. student at the University of Maryland, working with me. –Mike]

As researchers, we are often asked to look into a crystal ball. We try to anticipate future problems so that work we begin now will help address those problems before they become acute. Sometimes, a researcher guesses the problem and its possible solution, but chooses not to pursue it. In a sense, she has found, and discarded, an idea ahead of its time.

Recently, a friend of Andrew’s pointed him to a 20-year-old email exchange on the “firewalls” mailing list that blithely suggests, and discards, problems and solutions that are now quite relevant, and on the cutting edge of software security research. The situation is both entertaining and instructive, especially in that the ideas are quite squarely in the domain of programming languages research, but were not considered by PL researchers at the time (as far as we know).

Filed under PL in practice, Research, Research directions, Software Security