I just got back from CCS (ACM’s conference on Computer and Communications Security) in Vienna, Austria. I really enjoyed the conference and the city.

At the conference we presented our Build it, Break it, Fix it paper. In the same session was the presentation on a really interesting paper called **Practical Detection of Entropy Loss in Pseudo-Random Number Generators**, by Felix Dörre and Vladimir Klebanov.[ref]You can download the paper for free from the conference’s OpenTOC.[/ref] This paper presents a program analysis that aims to find entropy-compromising flaws in PRNG implementations; such flaws could make the PRNG’s outputs more predictable (“entropy” is basically a synonym for “uncertainty”). When a PRNG is used to generate secrets like encryption keys, less entropy means those secrets can be more easily guessed by an attacker.

A PRNG can be viewed as a deterministic function from an actually-random seed to a pseudo-random output. The key insight of the work is that this function should not “lose entropy” — i.e., its outputs should contain all of the randomness present in its seed. Another way of saying this is that the PRNG must be *injective*. The injectivity of a function can be checked by an automatic program analysis.

Using the tool they developed, the paper’s authors were able to find known bugs, as well as an important previously unknown bug in libgcrypt’s PRNG. Their approach was essentially a variation of prior work on checking *noninterference*; I found the connection to this property surprising and insightful. As discussed in a prior post, security properties can often be challenging to state and verify; in this case the property is pleasingly simple to state and practical to verify.

Both injectivity and noninterference are examples of *K-safety properties*, which involve relations between multiple (i.e., *K*) program executions. New technology to check such properties is being actively researched, e.g., in DARPA’s STAC program. This paper is another example of the utility of that general thrust of work, and of using programming languages techniques (automated program analysis) for proving security properties.

## What is a PRNG?

A PRNG produces a stream of numbers that appear random, starting from an (actually) random seed. For example, using the OpenSSL API, you could first do

`RAND_seed(in, M);`

to initialize the seed using buffer `in` having size `M` bytes. Then, to generate a pseudo-random output, you could call

`RAND_bytes(out, N);`

where the generated random bytes are stored in buffer `out`, having size `N`. The way that `RAND_bytes` works internally is to keep some *state* that is initialized with the seed, and is updated with each random byte that is generated. You might also call a function like `RAND_add` later to update the state with additional actual randomness (this is called *reseeding*).

The effectiveness of the PRNG owes in part to the *entropy* of — i.e., the attacker’s lack of certainty about — the state. (We assume that the code of the PRNG is known to the attacker.) If the state could be predicted, then the output of the PRNG could be as well. This would be really bad if the PRNG output was being used to, for example, generate an encryption key.

## What is the proper functioning of a PRNG?

One important way of making sure the output of a PRNG is unpredictable is to ensure that it is *non-invertible*. That is, we want it to be hard to predict the state from the output of a function like `RAND_bytes`. To achieve this, PRNGs typically compute the output and the next state deterministically, but then apply a cryptographic one-way function to both. By “one way,” we mean that the input of the function is hard to predict from its output; a typical example is a cryptographic hash function like SHA-256.
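As a sketch, one PRNG step might look like the following toy Python code (my own illustration, not code from the paper or any real library), using SHA-256 as the one-way function:

```python
import hashlib

def step(state: bytes) -> tuple[bytes, bytes]:
    """One toy PRNG step: derive an output block and the next state from the
    current state, passing both through a one-way function (SHA-256).
    The b"out" / b"next" suffixes keep the two derivations distinct."""
    output = hashlib.sha256(state + b"out").digest()
    next_state = hashlib.sha256(state + b"next").digest()
    return output, next_state

# Initialize the state from a (supposedly high-entropy) seed, then generate.
state = hashlib.sha256(b"initial high-entropy seed").digest()
out1, state = step(state)
out2, state = step(state)
assert out1 != out2  # successive outputs differ
```

Because SHA-256 is one-way, seeing `out1` and `out2` does not let an attacker recover the state that produced them.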

However, this technique is not sufficient; we also need a high-entropy seed. If the seed was one of a few predictable values, an adversary could just try them out and see if the stream of random numbers they produce match what is actually observed.

Even if the seed has high entropy, the PRNG itself could fail to utilize that entropy effectively. For example, and most egregiously, a function like `RAND_seed(x)` could fail to use its argument, meaning the subsequent output has no dependence on it. Or, `RAND_seed` or `RAND_bytes` could fail to use all the bits of the seed when making the random number or updating the state. For example, the state could be stored in a byte array but then `RAND_bytes` overwrites part of the array with a constant (perhaps due to a bug), meaning the next invocation of `RAND_bytes` will use a state with less entropy.
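A toy illustration of this last failure mode (hypothetical code of my own, not from any real library): if the update clobbers most of the state with a constant, only a few bits of the seed’s entropy survive, and distinct seeds start colliding:

```python
import hashlib

def buggy_next_state(state: bytes) -> bytes:
    # BUG: overwrite all but the first 2 bytes of the state with zeros,
    # so only 16 bits of the seed's entropy survive the update.
    return state[:2] + b"\x00" * (len(state) - 2)

def buggy_output(seed: bytes) -> bytes:
    state = hashlib.sha256(seed).digest()
    state = buggy_next_state(state)          # entropy is lost here
    return hashlib.sha256(state + b"out").digest()

# With only 2**16 reachable states, a birthday-style search over distinct
# seeds quickly finds two that produce identical output.
seen = {}
for i in range(100_000):
    out = buggy_output(i.to_bytes(4, "big"))
    if out in seen:
        print(f"collision: seeds {seen[out]} and {i} give the same output")
        break
    seen[out] = i
```

An attacker facing this PRNG only has to guess 16 bits of state, no matter how much entropy went into the seed.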

## Formalizing PRNG correctness

The task the paper takes on is verifying that a PRNG properly retains all of the entropy it derives from the initial seed. To do this, the paper boils down the definition of a PRNG to a function *g*: {0,1}^{m} → {0,1}^{n}. The argument of *g* is an m-bit seed, and the output is an n-bit random number. Internally, *g* will repeatedly generate pseudo-random bits and update its state as discussed above.

Given this definition, we can state the desired property of a correct PRNG: *injectivity*. In particular, we would like to prove the following property, call it *P*:

for all s1 and s2, s1 ≠ s2 ⟹ g(s1) ≠ g(s2)

That is, no two inputs share the same output.[ref]The paper proves, but I will not elaborate on, a precise connection between injectivity and min-entropy.[/ref] Interestingly, property *P* is the dual of the following property, *noninterference*, which I covered in a previous blog post:

for all s1 and s2, s1 ≠ s2 ⟹ g(s1) = g(s2)

That is, no matter the (secret) input, the (public) output is always the same. To me, this was the **surprising, and cool, insight of the paper: PRNG correctness is a kind of information flow**. But rather than trying to *restrict* information flow, as with noninterference, the PRNG strives to *ensure* it. While a noninterfering program *reveals nothing* in its visible output about its secret input, an injective PRNG does the opposite: it *hides nothing* in its output about its input.
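For intuition, both properties can be checked by brute force on a toy *g* with a tiny seed space (my own illustration; real PRNGs have far too many seeds for enumeration, which is why the paper uses symbolic analysis instead):

```python
from itertools import combinations
import hashlib

def g(seed: int) -> bytes:
    """Toy PRNG viewed as a deterministic function from an 8-bit seed
    to an output."""
    return hashlib.sha256(seed.to_bytes(1, "big")).digest()

seeds = range(256)

# Injectivity (property P): distinct seeds give distinct outputs --
# what a PRNG wants.
injective = all(g(s1) != g(s2) for s1, s2 in combinations(seeds, 2))

# Noninterference: every seed gives the same output -- what a program
# with a secret input wants for its public output. Clearly false for g.
noninterfering = all(g(s1) == g(s2) for s1, s2 in combinations(seeds, 2))

print(injective, noninterfering)  # prints: True False
```

The same quantifier shape (“for all s1, s2 …”) underlies both checks; only the conclusion flips.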

## Static analysis for PRNG correctness

With the key property *P* identified, building a tool to verify it can draw heavily on related, previous work. The key adjustment is dealing with cryptographic functions.

The basic approach is straightforward: Use the CBMC model checker to generate a formula that defines the output value of a call to g with some seed s1; make a copy of this formula, replacing s1 with variable s2 and renaming other introduced variables; use the original and the copy to construct the converse of correctness property *P*, call it *notP*. Now use a theorem prover to see if *notP* is satisfiable; if so, then the correctness property *P* is false and there may be an entropy problem.
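In miniature, the “two copies” construction looks like this: run two renamed copies of *g* and search for an assignment satisfying *notP*, i.e., s1 ≠ s2 with g(s1) = g(s2). (In this sketch of mine, brute-force search stands in for CBMC plus a satisfiability solver; it is a toy analogue of the paper’s pipeline, not its actual tooling.)

```python
from itertools import combinations

def g(seed: int) -> int:
    """A deliberately flawed toy 'PRNG': it drops the seed's low bit
    before mixing, so it is not injective."""
    state = seed >> 1                       # entropy of the low bit is lost
    return (state * 2654435761) & 0xFFFFFFFF

# notP: exists s1 != s2 with g(s1) == g(s2). A model checker would hand
# this query to a solver; here the seed space is small enough to enumerate.
counterexample = next(
    ((s1, s2) for s1, s2 in combinations(range(256), 2) if g(s1) == g(s2)),
    None,
)
print(counterexample)  # (0, 1): both seeds map to the same output
```

A satisfying assignment for *notP* is exactly a witness pair demonstrating lost entropy; if the search comes up empty, *P* holds over the modeled seed space.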

One wrinkle is that this approach cannot directly analyze calls to cryptographic hash functions when proving a PRNG’s injectivity. The workaround is to assume the injectivity of such a function, and then use this assumption in the overall proof. The tool can, in fact, *require* that such a function be used on data that is ultimately output; such a requirement is similar to the *erasure policy* mentioned in a previous blog post (also called *required release*).

The paper reports on experiments with this tool on several PRNGs written in C and Java. Several were found to be bug-free. One known to be buggy was confirmed as such by the tool. Most interestingly, the tool found a serious, and apparently 18-year-old, bug in libgcrypt, which has since been patched. The bug is discussed in detail in this short paper.

## K-safety and hyperproperties

Both injectivity and noninterference are *K-safety properties*. A K-safety property is a property that requires at least K executions to refute. Notice, above, that both injectivity and noninterference involve two executions of g, one on s1 and one on s2; as such, both are 2-safety properties. An example 3-safety property is transitivity: you need at least three executions to refute it (e.g., for a Java Comparator you could show that `compare(a,b)` and `compare(b,c)` both return negative results but that `compare(a,c)` returns a positive one).
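Sketched concretely in Python (a deliberately intransitive comparator of my own devising): refuting transitivity takes three calls, and no fewer:

```python
# A rock-paper-scissors "comparator": each item beats exactly one other,
# so no consistent total order exists.
BEATS = {"paper": "rock", "scissors": "paper", "rock": "scissors"}

def compare(x: str, y: str) -> int:
    """Returns negative/zero/positive, like Java's Comparator.compare."""
    if x == y:
        return 0
    return 1 if BEATS[x] == y else -1

a, b, c = "rock", "paper", "scissors"
# Three executions refute transitivity:
assert compare(a, b) < 0   # rock < paper
assert compare(b, c) < 0   # paper < scissors
assert compare(a, c) > 0   # but rock > scissors -- transitivity violated
```

Any one or two of these calls is consistent with a transitive order; only the triple together constitutes a refutation, which is what makes transitivity 3-safety.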

K-safety is a kind of *hyperproperty*, a concept introduced by Clarkson and Schneider. They observed that while a rich set of techniques exists for automatically verifying *properties* (which can be refuted by a single execution), these techniques cannot verify many security properties which, like noninterference, relate multiple executions. Since Clarkson and Schneider’s paper, additional important hyperproperties have been identified, and new techniques for verifying them are being developed. For example, some of my previous work with Aseem Rastogi, Piotr Mardziel, and Matt Hammer showed that one can optimize secure multiparty computations with no loss of privacy by checking something akin to injectivity, but involving the original and an optimized version of a program. Recently, Marcelo Sousa and Isil Dillig proposed Cartesian Hoare Logic for verifying K-safety properties, and developed an analyzer to check various K-safety properties of Java components. Many teams in the DARPA STAC program are effectively developing 2-safety checkers for noninterference involving space and timing channels.

## Conclusion

A practical challenge in verifying the security of important systems is stating a security property that is easy to understand and admits automated analysis. Dörre and Klebanov realized that a key property of PRNGs could be easily stated as an information flow property, in the style of noninterference, and then verified by leveraging existing (and emerging) techniques for K-safety property checking. This is a really cool insight and result! It’s yet one more piece of evidence of the promise of formal, PL-based tools for checking security properties of key software components, toward making critical systems more secure.

This actually seems a little strange to me for several reasons.

First, injectivity is neither necessary nor sufficient for a good PRNG. But I guess it’s sort of reasonable as a (heuristic) desideratum.

But I don’t see why it is reasonable to assume that the cryptographic hash being used is injective. For one thing, it’s almost surely not injective, nor was it designed to be. (Collisions in the hash are supposed to be hard to find, but they may certainly exist.) Second, it seems like you are assuming something rather close to what you are trying to prove.

Finally, and maybe I’m missing something here, it seems that most flaws in usage of PRNGs arise due to poor entropy of the initial seed, not a bad implementation of the PRNG itself (which should, in general, be a standardized one). But I guess not, given the bug in the libgcrypt PRNG (which I will have to look at in more detail).

Assuming the crypto hash is injective is simply a device for helping the verifier confirm that the rest of the PRNG is. As you say, the hash function is at best “computationally injective” in practice.

Right, I guess it just further supports my argument that injectivity is not necessary for pseudorandomness.

Thanks for your comments, Jonathan. Paper author here. You can find the missing context in the paper, while here is its gist.

In theory, due to the way they are defined, injectivity and pseudo-randomness do not strictly entail each other. In practice, being as close to injectivity as possible is clearly a desirable property for a PRNG.

Assuming injectivity of the cryptographic hash function is, at first, merely a logician’s way of saying that this part of the code is somebody else’s problem. But beyond that, I think this assumption is reasonable in practice – there seem to be not too many collisions, as otherwise they would be easy to find.

If we were assuming something rather close to what we were trying to prove, then most of the code in, e.g., https://github.com/openssl/openssl/blob/master/crypto/rand/md_rand.c would be superfluous (as this is the code not covered by the assumption). Yet, this is the code running on your device. This is the kind of code that had several spectacular failures. This is the kind of code that has only been assured by manual review so far. We give you machine-checked evidence of presence/absence of bugs in this code.

Possibility of seeding flaws does not make this evidence less valuable. These are orthogonal problems that must both be addressed (by very different means).

Thanks! I didn’t mean to cast any aspersions on your paper (which I haven’t yet read), I was just trying to understand it better.