This post is the introduction of a series that discusses research bringing together ideas from the Programming Languages and Cryptography (Crypto) communities. The combination of ideas aims to produce **secure computation** technologies that are easier to use efficiently in a general-purpose setting, and easier to reason about, than when applying the Crypto ideas directly. My next post will be on work I’ve recently published in this area in collaboration with cryptographer colleagues. Subsequent posts will consider other areas of interesting research (mostly not mine).

**Secure Computation**

Most people think of cryptography as securing **communications**. For example, Alice wants to send a message that only Bob can read, which is to say that Charlie should not be able to make sense of the message if he intercepts it. To achieve this goal of *confidentiality*, Alice can encrypt the message using a scheme involving a secret key shared with Bob, such that without the key, Charlie cannot hope to decrypt the message except with unreasonably plentiful computational resources. Bob might similarly like to establish that a message he receives purporting to be from Alice is truly the message that Alice sent; this property is referred to as *authenticity*, or *integrity*. Once again, key-based cryptography can play a role.
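To make these two goals concrete, here is a deliberately toy Python sketch (my own illustration, not any deployed scheme) that combines a one-time pad for confidentiality with an HMAC tag for integrity. The pad key must be as long as the message and never reused, so this is illustrative only:

```python
import hashlib
import hmac
import secrets

def encrypt_then_mac(key_enc: bytes, key_mac: bytes, message: bytes) -> bytes:
    # Confidentiality: XOR the message with a one-time pad (toy choice;
    # the pad must be message-length and used exactly once).
    ciphertext = bytes(m ^ k for m, k in zip(message, key_enc))
    # Integrity: tag the ciphertext so tampering is detectable.
    tag = hmac.new(key_mac, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def decrypt_and_verify(key_enc: bytes, key_mac: bytes, packet: bytes) -> bytes:
    ciphertext, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key_mac, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: message was tampered with")
    return bytes(c ^ k for c, k in zip(ciphertext, key_enc))

# Alice and Bob share the keys in advance; Charlie sees only the packet.
message = b"meet at noon"
key_enc = secrets.token_bytes(len(message))
key_mac = secrets.token_bytes(32)
packet = encrypt_then_mac(key_enc, key_mac, message)
assert decrypt_and_verify(key_enc, key_mac, packet) == message
```

Without `key_enc` the ciphertext is statistically independent of the message, and any bit Charlie flips makes the HMAC check fail on Bob's end.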

Modern cryptography is interested not just in securing communications, but also **computation**, often with security goals similar to those of secure communication.

For example, Alice might have limited computational resources, and thus might ask Bob to compute *F(X)* for her. She might like to ensure *confidentiality*: Bob can compute and return *F(X)* but should learn nothing about *X*, about *F*, or about both, depending on what must be kept secret. We call this **privacy-preserving computation** (among other terms).
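One standard ingredient for this setting is additively homomorphic encryption, which lets Bob compute sums over Alice's data while seeing only ciphertexts. The Python sketch below implements the Paillier cryptosystem with tiny, wildly insecure parameters, purely to illustrate the homomorphic property:

```python
import math
import secrets

# Toy Paillier cryptosystem. The primes are far too small to be secure;
# this is for illustration only.
p, q = 2003, 2459
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # valid because we fix the generator g = n + 1

def encrypt(m: int) -> int:
    while True:                # pick a random unit modulo n
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

# Alice sends ciphertexts; Bob multiplies them, which adds the hidden
# plaintexts, without ever learning 20 or 22.
c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == 42
```

Because encryption is randomized, even identical inputs produce different ciphertexts, so Bob learns nothing about Alice's values beyond what she chooses to decrypt.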

Alice might like to ensure *authenticity* (aka *integrity* in this context): when she receives a result *Y* from Bob, she would like to efficiently confirm that Bob has not lied, and that truly *Y = F(X)*. We call this **verifiable computation**, or in some cases computation with **authenticated data structures**.
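A Merkle hash tree is the classic building block behind authenticated data structures. In this illustrative Python sketch (not drawn from any particular paper), Alice retains only a short digest of her data; Bob stores the data and must accompany each answer with a proof that Alice can check in logarithmic time:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash the leaves pairwise, level by level, up to a single root digest."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 1))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Alice recomputes the root from Bob's answer and proof: O(log n) work."""
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

# Alice keeps only the root; Bob answers queries and attaches proofs.
leaves = [b"alpha", b"bravo", b"charlie", b"delta"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
assert verify(root, b"charlie", proof)        # honest answer checks out
assert not verify(root, b"CHARLIE", proof)    # a lie fails verification
```

If Bob returns a wrong leaf, the recomputed root will not match the digest Alice holds, so she catches the lie with overwhelming probability.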

Another scenario is when Bob and Alice want to compute *Z = F(X,Y)*, where Alice provides *X* and Bob provides *Y*; both learn the output *Z*, but neither “sees” the other’s input. This scenario could be implemented with a trusted third party, call her Zelda, but cryptographic protocols have been developed that provide similar guarantees without the need for Zelda. This idea is referred to as **secure multiparty computation** (MPC, or sometimes SMC), and it aims to ensure privacy (of the inputs *X* and *Y*) and integrity (of *Z*).
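As a taste of how such protocols dispense with Zelda, here is a deliberately simplified Python sketch of additive secret sharing, a building block of many MPC protocols. It handles only addition, where the idea is plainest: each input is split into two shares, each of which alone is uniformly random and reveals nothing:

```python
import secrets

P = 2**61 - 1   # a Mersenne prime; all arithmetic is modulo P

def share(value: int):
    """Split a secret into two additive shares; each alone is uniformly random."""
    s1 = secrets.randbelow(P)
    s2 = (value - s1) % P
    return s1, s2

# Alice holds X; Bob holds Y.
X, Y = 30, 12
xa, xb = share(X)   # Alice keeps xa and sends xb to Bob
ya, yb = share(Y)   # Bob keeps yb and sends ya to Alice

# Each party locally adds the shares it holds; neither sees the other's input.
alice_partial = (xa + ya) % P
bob_partial = (xb + yb) % P

# Exchanging the two partial sums reveals only Z = X + Y, nothing more.
Z = (alice_partial + bob_partial) % P
assert Z == (X + Y) % P
```

Real MPC protocols extend this idea to multiplication and general circuits, which is where most of the cryptographic machinery (and cost) lives.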

**How Programming Languages Can Play a Role**

In all of these cases, we must reason about a **general computation** *F*, and about **properties of its execution**. Further, these properties are dependent on the observer (Alice, Bob, Charlie or Zelda). While cryptographers have developed techniques and mathematics for performing such reasoning, mathematical techniques from the programming languages community can be leveraged to significant advantage.

For example, how do we define privacy of MPC? Clearly, for most interesting functions *F*, knowing the result *Z* and one input will reveal some information about the other input. Cryptographers often accept such “semantic” leakage as a given, and instead focus on possible leakage *added* by using MPC instead of a trusted Zelda. But ultimately, all sources of leakage, whether semantic or protocol-related, are relevant in deciding the privacy of the MPC. In the extreme, for some functions, knowing the output can reveal the other input entirely! With the addition function *F(X,Y) = X + Y*, for example, anyone who knows the result and one input can recover the other input exactly, since *Y = Z − X*.

Ideas from the programming languages community, particularly **program analysis and verification**, can be used to reason about privacy. For example, probabilistic program analysis (viewing a function as a kind of random variable that transforms distributions) can be used to quantify a function’s information leakage by measuring how uncertainty about the input is reduced by knowing a particular output. Program verification and theorem proving techniques can be used to establish what facts can be proved from a set of axioms and hypotheses, e.g., that knowledge of *Y* is a consequence of knowledge of *X* and *Z* when *F* is the addition function. These facts can be used to reason about what new information an adversary can learn, which has security implications, but also performance implications: if we deem some leakage acceptable, we may relax the way a computation is implemented to improve performance.
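To illustrate the probabilistic view, the Python sketch below (my own toy formulation, not a tool from the literature) measures leakage as mutual information: the expected drop in Shannon entropy about *X* once the output *Z* is observed, assuming independent inputs uniform over a small domain:

```python
from collections import defaultdict
from itertools import product
from math import log2

def leakage(F, domain):
    """Mutual information I(X; Z) in bits, for Z = F(X, Y) with X and Y
    independent and uniform over `domain`: how much observing Z is
    expected to reduce our uncertainty about X."""
    prior = log2(len(domain))                 # H(X) for uniform X
    by_output = defaultdict(list)
    for x, y in product(domain, repeat=2):
        by_output[F(x, y)].append(x)
    total = len(domain) ** 2
    posterior = 0.0                           # H(X | Z)
    for xs in by_output.values():
        p_z = len(xs) / total
        probs = [xs.count(x) / len(xs) for x in set(xs)]
        posterior += p_z * -sum(p * log2(p) for p in probs)
    return prior - posterior

dom = range(8)
print(leakage(lambda x, y: (x + y) % 8, dom))  # 0.0: mod-sum reveals nothing about X
print(leakage(lambda x, y: x + y, dom))        # positive: integer sum leaks a little
print(leakage(lambda x, y: x, dom))            # 3.0: output reveals X entirely
```

The contrast between modular and integer addition already shows why this analysis matters: the same "addition" at two types leaks very different amounts about a secret input.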

In addition, programming language researchers can shed light on methods of secure computation by integrating well-chosen abstractions into (new or existing) programming languages. That is to say, PL research can synthesize new programming methodologies, rather than merely explain existing ones that resist understanding, e.g., because they lack a focused, general underlying concept that can be reflected directly in a language.

The combination of PL and crypto is an area of great promise, and one we’ll be exploring in depth at an upcoming Dagstuhl seminar. I’ll present more research opportunities and details of current results in an upcoming series of posts.
