The IEEE Cybersecurity Development (SecDev) Conference is a new conference focused on designing and building systems to be secure. It will be held for the first time in Boston, MA, on November 3-4, 2016. This event was conceived, and is being organized, by Rob Cunningham; I’m pleased to be the PC Chair.
As stated in the call for papers, this first iteration of the conference is seeking short (5-page) papers, extended (1-page) abstracts, and tutorial proposals. The submission deadline is June 21, 2016 — if you have new results, old results you’d like to repackage, a tool, a process, a vision, or an idea you’d like to share with those working to make systems more secure, please consider submitting a paper!
This blog post explains why I think we need this conference, what I expect the first year to look like, and what sort of papers we hope to get, in question & answer format.
[This guest post is by David Walker, a professor at Princeton, and recent winner of the SIGPLAN Robin Milner Young Researcher Award. –Mike]
Every once in a while it is useful to take a step back and consider where fruitful new research directions come from. One such place is from the confluence of two independent streams of thought. This is an idea that I picked up from George Varghese, who gave a wonderful talk on the topic at ACM SIGCOMM 2014 and summarized the ideas in a short paper for CCR.[ref]George Varghese. Life in the Fast Lane: Viewed from the Confluence Lens. ACM SIGCOMM Computer Communication Review 45 (1), pp 19-25, January 2015. (link)[/ref] This blog post considers confluences in the context of programming languages research, reflects upon the role such confluences have played in my own research, and suggests some things we might learn from the process. My keynote talk from POPL 2016[ref]David Walker. Confluences in Programming Languages Research (Keynote). ACM SIGPLAN Symposium on Principles of Programming Languages. pp. 4-4, January 2016. (abstract, video, slides)[/ref] touches on many of these same themes.
Last week I attended a multi-day meeting for the DARPA STAC program; I am the PI of a UMD-led team. STAC supports research to develop “Space/time Analysis for Cybersecurity.” More precisely, the goal is to develop tools that can analyze software to find exploitable side channels or denial-of-service attacks involving space usage or running time.
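To give a flavor of the kind of vulnerability involved, here is a toy Python example of a timing side channel (my own illustration, not one of the program’s challenge problems): a password check whose early exit leaks information through its running time, alongside a constant-time variant.

```python
# Toy example of a timing side channel: this comparison returns as soon
# as a character differs, so its running time reveals how long a
# matching prefix of the guess is, letting an attacker recover the
# secret one character at a time.
def check_password(guess: str, secret: str) -> bool:
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False  # early exit: time leaks the match length
    return True

# A constant-time variant scans the whole string regardless of where
# the first mismatch occurs (the length check still reveals the length).
def constant_time_check(guess: str, secret: str) -> bool:
    if len(guess) != len(secret):
        return False
    diff = 0
    for g, s in zip(guess, secret):
        diff |= ord(g) ^ ord(s)
    return diff == 0
```

Tools developed under the program aim to find leaks like the first function automatically, in programs far larger than this.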
In general, DARPA programs focus on a very specific problem, and so are different from the NSF style of funded research that I’m used to, in which the problem, solution, and evaluation approach are proposed by each investigator. One of STAC’s noteworthy features is its use of engagements, during which research teams use their tools to find vulnerabilities in challenge problems produced by an independent red team. Our first engagement was last week, and I found the experience very compelling. I think that both the NSF style and the DARPA style have benefits, and it’s great that both styles are available.
This post talks about my experience with STAC so far. I discuss the interesting PL research challenges the program presents, the use of engagements, and the opportunities that STAC’s organizational structure, when done right, can offer.
Consider this claim:
Quality is more important than quantity
I expect few people would disagree with it, and yet we do not always act as if it were true. In academia, when considering candidates to hire or promote, we count their papers, their citations, their funding, their software download rates, their graduated students, the number of their committee memberships or journal editorships, and more.
Researchers are getting the message: quantity matters. Ugo Bardi explores the economic underpinnings of this apparent trend, cleverly arguing that scientific papers are currency, subject to phenomena like inflation (more papers!), assaying (peer review validates papers, which support funding proposals, which finance more papers), and counterfeiting (papers published without review by unscrupulous publishers). Moshe Vardi, in a recent blog post, concurs that “we have slid down the slippery path of using quantity as a proxy for quality” and that “the inflationary pressure to publish more and more encourages speed and brevity, rather than careful scholarship.”[ref]Update 8/21/2016: As more evidence of the problem, here’s a great retrospective by the editor of a top journal in sociology that points to quantity greatly devaluing quality.[/ref]
In this post I consider the problem of incentivizing, and assessing, research quality, starting with a recent set of guidelines put out by the CRA. I conclude with a set of questions—I hope you will share your opinion.
As I have written previously, academic computer science differs from other scientific disciplines in its heavy use of peer-reviewed conference publications.
Since other disciplines’ conferences typically do not employ peer review, results published at highly selective computer science conferences may not be given the credit they deserve, i.e., the same credit they would receive if published in a similarly selective journal.
The main remedy has simply been to explain the situation to the possibly confused party, be it a dean or provost or a colleague from another department. But this remedy is sometimes ineffective: At some institutions, scientists risk a poor evaluation if they publish too few journal articles, but they risk muting the influence of their work in their own community if they publish too few articles at top conferences.
The ACM publications board has recently put forth a proposal that tackles this problem head-on by formally recognizing conference publications as equal in quality to journal publications. How? By collecting them in a special journal series called the Proceedings of the ACM (PACM).
In this post I briefly summarize the motivation and substance of the ACM proposal and provide some thoughts about it. In the end, I support it, but with some caveats. You have the opportunity to voice your own opinion via survey. You can also read other opinions for (by Kathryn McKinley) and against (by David S. Rosenblum) the proposal (if you can get past the ACM paywall, but that’s a topic for another day…).
Update: PACM has been approved, as has a new journal series called PACM PL that will collect papers accepted at major SIGPLAN conferences. It will debut in late 2017.
Anyone familiar with American academia will tell you that the US News rankings of academic programs play an outsized role. Among other things, US News ranks graduate programs of computer science, by their strength in the field at large as well as in certain specialties. One of these specialties is Programming Languages, the focus of this blog.
The US News rankings are based solely on surveys. Department heads and directors of graduate studies at a couple of hundred universities are asked to assign numerical scores to graduate programs. Departments are ranked by the average score that they receive.
It’s easy to see that much can go wrong with such a methodology. A reputation-based ranking system is an election, and elections are meaningful only when their voters are well-informed. The worry here is that respondents are not necessarily qualified to rank programs in research areas that are not their own. Also, it is plausible that respondents would give higher scores to departments that have high overall prestige or that they are personally familiar with.
In this post, I propose using publication metrics as an input to a well-informed ranking process. Rather than propose a one-size-fits-all ranking, I provide a web application to allow users to compute their own rankings. This approach has limitations, which I discuss in detail, but I believe it’s a reasonable start to a better system.
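To make the idea concrete, here is a minimal sketch of a metrics-based ranking in Python; it is only an illustration of the approach, not the actual web application, and the department names, venues, weights, and publication counts below are all made up.

```python
# A minimal sketch of a metrics-based ranking: departments are scored
# by a weighted count of their publications at user-chosen venues.
# All names and numbers are hypothetical; real counts would come from
# a bibliographic source such as DBLP.
pub_counts = {
    "Dept A": {"POPL": 12, "PLDI": 9},
    "Dept B": {"POPL": 4, "PLDI": 15},
    "Dept C": {"POPL": 7, "PLDI": 6},
}

# The user decides which venues count, and how much.
weights = {"POPL": 1.0, "PLDI": 1.0}

def score(counts):
    """Weighted publication count over the selected venues."""
    return sum(weights.get(venue, 0.0) * n for venue, n in counts.items())

ranking = sorted(pub_counts, key=lambda d: score(pub_counts[d]), reverse=True)
for rank, dept in enumerate(ranking, start=1):
    print(rank, dept, score(pub_counts[dept]))
```

Letting users set the weights is the point: rather than one fixed ranking, each user gets the ranking implied by the venues they care about.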
The Summit on Advances in Programming Languages (SNAPL) is a new kind of PL conference, focused on big-picture questions rather than concrete technical results. The conference will be held for the first time in Asilomar, CA, from May 3 to 6, 2015.
The submission deadline is January 9, 2015 — if you have a long-term vision about where the field of PL should go, you ought to submit a paper.
Here we post an interview with Shriram Krishnamurthi, who is a professor at Brown University and one of the organizers of the conference.[ref]My co-blogger, Mike Hicks, is on the SNAPL program committee.[/ref]
A few weeks ago, I posted about an analysis of collaboration in the POPL community. In that post, I promised similar analyses for a few other conference-defined communities as well. Well, here they are. In this post, I will report on an analysis of community structure in two other premier SIGPLAN conferences: PLDI and OOPSLA.
The methodology for the analysis was similar to that in my earlier post on POPL. The questions I asked were:
- Who works with whom in the community defined by a conference X?
- Are there prominent clusters of researchers who frequently publish papers with each other?
- Which papers/researchers are at the center of the community (that is, who are the Kevin Bacons of community X)?
To answer these questions, I used data from the DBLP database to construct, for each conference, an overlap graph: a graph whose nodes represent papers with more than one author published in the period 2005-2014, and whose edges connect pairs of papers that have at least one author in common. For each graph, I generated the set of connected components (which correspond to disjoint subcommunities) and ran some further analyses on the largest component.
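For concreteness, here is a minimal sketch of this construction in Python using the networkx library; the paper records below are hypothetical stand-ins for data extracted from DBLP, and closeness centrality is just one plausible way to identify the “Kevin Bacons” of the largest component.

```python
# Minimal sketch of the overlap-graph construction. The `papers` list
# is a hypothetical stand-in for records extracted from DBLP.
import itertools
import networkx as nx

papers = [
    {"id": "paper1", "authors": ["A. Smith", "B. Jones"]},
    {"id": "paper2", "authors": ["B. Jones", "C. Lee"]},
    {"id": "paper3", "authors": ["D. Kim", "E. Chen"]},
]

# Keep only papers with more than one author.
multi = [p for p in papers if len(p["authors"]) > 1]

G = nx.Graph()
G.add_nodes_from(p["id"] for p in multi)

# Connect two papers if they share at least one author.
for p, q in itertools.combinations(multi, 2):
    if set(p["authors"]) & set(q["authors"]):
        G.add_edge(p["id"], q["id"])

# Connected components correspond to disjoint subcommunities.
components = sorted(nx.connected_components(G), key=len, reverse=True)
largest = G.subgraph(components[0])

# Approximate the community's "center" via closeness centrality
# within the largest component.
most_central = max(nx.closeness_centrality(largest).items(),
                   key=lambda kv: kv[1])
print(f"{len(components)} components; most central: {most_central}")
```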
I was recently in Princeton for the program committee meeting of the POPL conference. It was a lot of fun. David Walker, the program chair, offered excellent leadership, and I am excited about the program that we ended up selecting. I look forward to seeing many of you at the conference (Mumbai, January 2015).
POPL is a broad conference, and you really feel this when you attend its PC meeting. You inevitably discuss papers with fellow PC members whose backgrounds are very different from your own, and many of the papers under discussion use techniques about which you have only rudimentary knowledge.
One thing I kept wondering at the meeting was: is POPL really one research community? Or is it instead a union of disjoint sets of researchers who work on different themes within POPL, such as types, denotational semantics, or abstract interpretation? Perhaps researchers in these sub-communities don’t really work with each other, even if they share a vision of reliable software and productive programming.
The question was bugging me enough that I decided to try to answer it through an analysis of actual data. The results I found were intriguing. The takeaway seems to be that POPL is indeed one family, but not a particularly close one.