VDFs, or “Proof-of-Time”
For anyone with a technical background but no blockchain knowledge.

TL;DR: A Verifiable Delay Function (VDF) is a function whose output on any input takes $N$ sequential operations to compute, but only $n$ operations to verify, for fixed $N$ and $n$ where $N \gg n$. The output of a VDF $f$ can be seen as a “proof-of-time”: if a challenge input $x$ is revealed publicly at time $T$, then revelation of a value $y=f(x)$ at any point can serve as a guarantee that some known amount of time (up to a constant factor) has elapsed since the revelation of $x$.

Another framing: a function that takes a long (fixed) time to compute, no matter how many processors you have.

# Example: A Simple “Proto-VDF”

Consider the function $f(x) \equiv x^{(p+1)/4} ~(\text{mod }p)$, for some prime $p \equiv 3 ~(\text{mod }4)$, defined over quadratic residues $x$.

Modular exponentiation is costly (in the sense that if the exponent is $e$, exponentiation by repeated squaring takes about $\log_2{e}$ sequential multiplications) and can’t be parallelized. However, verification of the exponentiation in this case is easy. Note that if $y = f(x)$, then $y^2 \equiv x^{(p+1)/2} \equiv x \cdot x^{(p-1)/2} \equiv x ~(\text{mod }p)$, where the last step follows from Fermat’s little theorem since $x$ is a quadratic residue. Therefore, by repeatedly composing $f$ with itself*, we can create a proto-VDF which takes a long (but fixed) time to compute in one direction, but which is also easy to verify. For this function, the VDF property comes from the fact that $f$ is expensive to compute but cheap to invert.
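To make the asymmetry concrete, here is a minimal Python sketch of a single application of $f$, using a small toy modulus chosen for illustration (a real deployment would use a much larger prime). Computing $f(x)$ costs roughly $\log_2{p}$ modular multiplications; verifying costs a single squaring.

```python
p = 1000003          # a toy prime with p % 4 == 3 (illustrative only)
e = (p + 1) // 4     # exponent for the modular square root

def f(x):
    # "Slow" direction: ~log2(e) sequential modular multiplications.
    return pow(x, e, p)

def is_residue(x):
    # Euler's criterion: x is a quadratic residue iff x^((p-1)/2) == 1 (mod p).
    return pow(x, (p - 1) // 2, p) == 1

x = 123456
if not is_residue(x):
    x = p - x        # for p == 3 (mod 4), exactly one of x and p - x is a residue

y = f(x)
assert pow(y, 2, p) == x   # "fast" verification: one squaring
```

The gap here is only a factor of ~20, but it grows with the size of $p$, and chaining many such steps (per the footnote below) stretches the slow direction arbitrarily.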

Designing VDFs with other security or convenience properties is an open problem. Here are some examples of properties we might care about (from this paper):
• Decodable: Input $x$ can be quickly recovered from output $y$.
• Incremental: The hardness of the VDF can be tuned (there are different ways a VDF may or may not be incremental).

*note that after each application of $f$ we have to slightly modify the output in some invertible way (for example, flipping the last bit). Otherwise it is easy to compute $f^k(x) \equiv x^{((p+1)/4)^k}$.

# VDFs vs. Proof-of-Work

The “one-way”-ness of VDFs may remind you of the spirit of proof-of-work. It may be helpful to think about the VDF primitive from the perspective of the PoW primitive, which many people are more familiar with.

| Proof-of-Work | VDFs |
| --- | --- |
| The number of hashes computed before a block is found follows an exponential distribution. | The number of operations needed to produce the output is fixed and known ahead of time. |
| The expected speed at which you can find a satisfactory hash input is directly proportional to your computational power—parallelism is very effective. This enables ASICs to be effective. | No reasonable (polynomial) amount of parallelization enables you to compute the VDF output non-negligibly faster. ASICs can’t really help you here—at least, not to the same degree that they can break PoW schemes. |
| Having A% of the hashpower gives you an A% chance of finding the next satisfactory hash input. | The entity with the fastest processor will always compute the VDF output first, but at regular intervals known to everyone. |
VDFs are definitely not a “new mining primitive.” Rather, they serve as a proof that some time has elapsed.
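To make the contrast concrete, here is a toy proof-of-work search (hypothetical parameters and message): the number of attempts before success is random, and the search parallelizes trivially across nonce ranges, in contrast to a VDF's fixed-length sequential computation.

```python
import hashlib

def pow_search(data: bytes, difficulty_bits: int) -> int:
    # Find a nonce such that SHA-256(data || nonce) falls below a target;
    # expected number of attempts is about 2**difficulty_bits, but any
    # single run may take far fewer or far more.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = pow_search(b"block-header", 12)   # ~4096 expected attempts
```

A second machine could search odd nonces while the first searches even ones and halve the expected time; no analogous split speeds up a VDF.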

# Example Application: Randomness Beacon

One open problem in blockchain is how to introduce a safe, shared source of randomness to a network (a “randomness beacon”). In particular, we’d like to ensure that this randomness can’t be manipulated or biased by any participant to the randomness-generation protocol. This is particularly important in designing Proof-of-Stake systems, where all staking parties must collectively agree on a fair and random way to select the next block proposer.

Below are two naive proposals for randomness, an attack on each, and the VDF-based fix for each. Again summarized from this paper, with some direct quotes.

## Naive Proposal 1

Proof-of-work solutions associated with blocks are believed to have high min-entropy, and blocks are released at regular intervals. Use these hashes as a randomness beacon!

## An Attack

Consider a miner with a large share of the network’s computational power. Such a miner could manipulate the beacon by computing the beacon output upon mining a block, and then only publishing the block if the corresponding beacon output is favorable. Otherwise, they discard the block.

## A VDF-enabled Solution

“This attack is only feasible if the beacon can be computed quickly, as each block is fixed to a specific predecessor and will become “stale” if not published. If a VDF with a suitably long delay is used to compute the beacon, miners will not be able to determine the beacon output from a given block before it becomes stale.”

## Naive Proposal 2

$n$ parties submit commitments $c(r_1), ..., c(r_n)$ to random values $r_i$ in an initial phase and subsequently reveal their commitments, at which point the beacon output is computed as $r=\bigoplus r_i$.
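A minimal sketch of this commit-reveal flow (all helper names are hypothetical). The hash commitment here is hiding only because each $r_i$ is high-entropy; a real scheme would bind a fresh nonce into each commitment.

```python
import hashlib, secrets

def commit(r: bytes) -> bytes:
    # Hash commitment c(r); published in phase 1, before any r is revealed.
    return hashlib.sha256(r).digest()

def beacon_output(revealed):
    # Beacon output: XOR of all revealed values.
    out = 0
    for r in revealed:
        out ^= int.from_bytes(r, "big")
    return out

values = [secrets.token_bytes(16) for _ in range(4)]   # each party's r_i
commitments = [commit(r) for r in values]              # commit phase
# Reveal phase: check every reveal against its commitment, then combine.
assert all(commit(r) == c for r, c in zip(values, commitments))
r = beacon_output(values)
# Note: the last party to reveal can compute r before revealing and
# withhold their value if r is unfavorable -- the attack described below.
```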

## An Attack

An attacker provides a commitment in the commit phase. In the reveal phase, the attacker waits until all other parties have revealed, computes the hypothetical beacon output, and refuses to reveal if that output would be unfavorable, forcing a protocol restart.

## A VDF Solution

All parties first directly submit their values $r_i$, and these are aggregated into a seed $H(r_1, ..., r_n)$ which is then passed into a VDF for the final beacon output. “With a sufficiently long delay parameter (longer than the time period during which values may be submitted), even the last party to publish their $r_i$ cannot predict what its impact will be on the final beacon outcome.”
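The flow can be sketched as follows (hypothetical helper names). Note that the iterated SHA-256 below is only a stand-in for a real VDF: it is sequential, but unlike a true VDF it is not fast to verify, so it illustrates the protocol shape rather than a deployable beacon.

```python
import hashlib

def seed_from_contributions(contributions):
    # Aggregate all published r_i into the seed H(r_1, ..., r_n).
    h = hashlib.sha256()
    for r in contributions:
        h.update(r)
    return h.digest()

def slow_delay(seed, iterations):
    # Stand-in for the VDF: a long, inherently sequential computation
    # whose delay parameter exceeds the submission window.
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

# Parties publish their r_i directly -- no commit phase is needed, since
# no party can predict the final output within the submission window.
contributions = [b"r1-from-alice", b"r2-from-bob", b"r3-from-carol"]
beacon = slow_delay(seed_from_contributions(contributions), 100_000)
```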