Welcome to Tari Labs University (TLU). Our mission is to be the premier destination for balanced and accessible learning material for blockchain, digital currency and digital assets.

We hope to make this a learning experience for us at TLU: as a means to grow our knowledge base and internal expertise or as a refresher. We think this will also be an excellent resource for anyone interested in the myriad disciplines required to understand blockchain technology.

We would like this platform to be a place of learning accessible to anyone, irrespective of their degree of expertise. Our aim is to cover a wide range of topics relevant to the blockchain space, starting at a beginner level and extending along a path of increasing complexity.

You are welcome to contribute to our online content by submitting a pull request or issue in GitHub. To help you get started, we've compiled a Style Guide for TLU reports. Using this Style Guide, you can help us to ensure consistency in the content and layout of TLU reports.

Errors, Comments and Contributions

We would like this collection of educational presentations and videos to be a collaborative affair. We are learning along with you. Our content may not be perfect the first time around, so we invite you to alert us to errors and issues or, better yet, if you know how, to write the correction yourself and submit it as a pull request.

Although this learning platform is called Tari Labs University and will see input from many internal contributors and external experts, we would like you to contribute new material, be it in the form of suggested topics, presentations at varying skill levels, or posts that you feel will benefit us as a growing community. In the words of Yoda, “Always pass on what you have learned.”

Guiding Principles

If you are considering contributing content to TLU, please be aware of our guiding principles:

  1. The topic researched should be potentially relevant to the Tari protocol; chat to us on #tari-research on IRC if you're not sure.
  2. The topic should be thoroughly researched.
  3. A critical approach should be taken (in the academic sense), with critiques and commentaries sought out and presented alongside the main topic. Remember that every white paper promises the world, so go and look for counterclaims.
  4. A recommendation/conclusion section should be included, providing a critical analysis of whether or not the technology/proposal would be useful to the Tari protocol.
  5. The work presented should be easy to read and understand, distilling complex topics into a form that is accessible to a technical but non-expert audience. Use your own voice.

Submission Process

This is the basic submission process we follow within TLU. We would appreciate it if you, as an external contributor, follow the same process.

  1. Get some agreement from the community that the topic is of interest.
  2. Write up your report.
  3. Push a first draft of your report as a pull request.
  4. The community will peer-review the report, much the same as we would with a code pull request.
  5. The report is merged into master.
  6. Receive the fame and acclaim that is due.

Learning Paths

In a field that is developing this rapidly, it is important to stay up to date with the latest trends, protocols and products that have incorporated this technology.

A list of resources to help you grow your knowledge in the blockchain and cryptocurrency domains is presented in the following learning paths:

  1. Blockchain Basics: Blockchain Basics provides a broad and easily understood explanation of blockchain technology, as well as the history of value itself. Having an understanding of value is especially important in grasping how blockchain and cryptocurrencies have the potential to reshape the financial industry.
  2. Mimblewimble Implementation: Mimblewimble is a blockchain protocol that focuses on privacy through the implementation of confidential transactions. It enables a greatly simplified blockchain in which all spent transactions can be pruned, resulting in a much smaller blockchain footprint and efficient base node validation. The blockchain consists only of block headers, remaining Unspent Transaction Outputs (UTXO) with their range proofs and an unprunable transaction kernel per transaction.
  3. Bulletproofs: Bulletproofs are short non-interactive zero-knowledge proofs that require no trusted setup. A bulletproof can be used to convince a verifier that an encrypted plaintext is well formed. For example, prove that an encrypted number is in a given range, without revealing anything else about the number.


Content difficulty is indicated as follows:

  • Easy-to-digest articles, reports and presentations.
  • Requires foundational knowledge and a basic understanding of the content.
  • Prior knowledge and/or mathematics skills are essential to understand these information sources.

Blockchain Basics


Blockchain Basics provides a broad and easily understood explanation of blockchain technology, as well as the history of value itself. Having an understanding of value is especially important in grasping how blockchain and cryptocurrencies have the potential to reshape the financial industry.

Learning Path Matrix

For learning purposes, we have arranged report, presentation and video topics in a matrix, in categories of difficulty, interest and format.

So you think you need a blockchain? Part I
So you think you need a Blockchain, Part II?
BFT Consensus Mechanisms
Lightning Network for Dummies
Atomic Swaps
Layer 2 Scaling Survey
Merged Mining Introduction
Introduction to SPV, Merkle Trees and Bloom Filters
Elliptic Curves 101

Mimblewimble Implementation


Mimblewimble is a blockchain protocol that focuses on privacy through the implementation of confidential transactions. It enables a greatly simplified blockchain in which all spent transactions can be pruned, resulting in a much smaller blockchain footprint and efficient base node validation. The blockchain consists only of block-headers, remaining Unspent Transaction Outputs (UTXO) with their range proofs and an unprunable transaction kernel per transaction.

Learning Path Matrix

Introduction to Schnorr Signatures
Introduction to Scriptless Scripts
Mimblewimble-Grin Block Chain Protocol Overview
Grin vs. BEAM; a Comparison
Grin Design Choice Criticisms - Truth or Fiction
Mimblewimble Transactions Explained
Mimblewimble Multiparty Bulletproof UTXO

Bulletproofs

Bulletproofs are short non-interactive zero-knowledge proofs that require no trusted setup. A bulletproof can be used to convince a verifier that an encrypted plaintext is well formed. For example, prove that an encrypted number is in a given range, without revealing anything else about the number.

Learning Path Matrix

Bulletproofs and Mimblewimble
Building on Bulletproofs
The Bulletproof Protocols
Mimblewimble Multiparty Bulletproof UTXO

Cryptography

"The purpose of cryptography is to protect data transmitted in the likely presence of an adversary" [1].


  • From Wikipedia: Cryptography or cryptology (from Ancient Greek - κρυπτός, kryptós "hidden, secret"; and γράφειν graphein, "to write", or -λογία -logia, "study", respectively) is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages; various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography [2].

  • From SearchSecurity: Cryptography is a method of protecting information and communications through the use of codes so that only those for whom the information is intended can read and process it. The prefix "crypt" means "hidden" or "vault" and the suffix "graphy" stands for "writing" [3].


[1] L. Koved, A. Nadalin, N. Nagaratnam and M. Pistoia, "The Theory of Cryptography" [online], informIT. Available: Date accessed: 2019‑06‑07.

[2] Wikipedia: "Cryptography" [online]. Available: Date accessed: 2019‑06‑07.

[3] SearchSecurity: "Cryptography". Available: Date accessed: 2019‑06‑07.

Elliptic curves 101


Introduction to Schnorr Signatures


Private-public key pairs are the cornerstone of much of the cryptographic security underlying everything from secure web browsing to banking to cryptocurrencies. Private-public key pairs are asymmetric. This means that given one of the numbers (the private key), it's possible to derive the other one (the public key). However, doing the reverse is not feasible. It's this asymmetry that allows one to share the public key openly and be confident that no one can figure out our private key (which we keep very secret and secure).

Asymmetric key pairs are employed in two main applications:

  • in authentication, where you prove that you have knowledge of the private key; and
  • in encryption, where messages can be encoded and only the person possessing the private key can decrypt and read the message.

In this introduction to digital signatures, we'll be talking about a particular class of keys: those derived from elliptic curves. There are other asymmetric schemes, not least of which are those based on products of prime numbers, including RSA keys [1].

We're going to assume you know the basics of elliptic curve cryptography (ECC). If not, don't stress, there's a gentle introduction in a previous chapter.

Let's get Started

This is an interactive introduction to digital signatures. It uses Rust code to demonstrate some of the ideas presented here, so you can see them at work. The code for this introduction uses the libsecp256k1-rs library.

That's a mouthful, but secp256k1 is the name of the elliptic curve that secures a lot of things in many cryptocurrencies' transactions, including Bitcoin.

This particular library has some nice features. We've overridden the + (addition) and * (multiplication) operators so that the Rust code looks a lot more like mathematical formulae. This makes it much easier to play with the ideas we'll be exploring.

WARNING! Don't use this library in production code. It hasn't been battle-hardened, so use this one in production instead.

Basics of Schnorr Signatures

Public and Private Keys

The first thing we'll do is create a public and private key from an elliptic curve.

On secp256k1, a private key is simply a scalar integer value between 0 and ~\(2^{256}\). That's roughly how many atoms there are in the universe, so we have a big sandbox to play in.

We have a special point on the secp256k1 curve called G, which acts as the "origin". A public key is calculated by adding G on the curve to itself, \( k_a \) times. This is the definition of multiplication by a scalar, and is written as: $$ P_a = k_a G $$ Let's take an example from this post, where it is known that the public key for 1, when written in uncompressed format, is 0479BE667...C47D08FFB10D4B8. The following code snippet demonstrates this:

extern crate libsecp256k1_rs;

use libsecp256k1_rs::{ SecretKey, PublicKey };

fn main() {
    // Create the secret key "1"
    let k = SecretKey::from_hex("0000000000000000000000000000000000000000000000000000000000000001").unwrap();
    // Generate the public key, P = k.G
    let pub_from_k = PublicKey::from_secret_key(&k);
    let known_pub = PublicKey::from_hex("0479BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8").unwrap();
    // Compare it to the known value
    assert_eq!(pub_from_k, known_pub);
}

Creating a Signature

Approach Taken

Reversing ECC scalar multiplication (i.e. "division") is pretty much infeasible when using properly chosen random values for your scalars ([5],[6]). This property is called the Discrete Log Problem, and is the principle behind many cryptographic schemes and digital signatures. A valid digital signature is evidence that the person providing the signature knows the private key corresponding to the public key with which the message is associated, or that they have solved the Discrete Log Problem.

The approach to creating signatures always follows this recipe:

  1. Generate a secret once-off number (called a nonce), $r$.
  2. Create a public key, $R$ from $r$ (where $R = r.G$).
  3. Send the following to Bob, your recipient: your message ($m$), $R$, and your public key ($P = k.G$).

The actual signature is created by hashing the combination of all the public information above to create a challenge, $e$: $$ e = H(R || P || m) $$ The hashing function is chosen so that $e$ has the same range as your private keys. In our case, we want something that returns a 256-bit number, so SHA256 is a good choice.

Now the signature is constructed using your private information: $$ s = r + ke $$ Bob can now also calculate $e$, since he already knows $m, R, P$. But he doesn't know your private key, or nonce.

Note: When you construct the signature like this, it's known as a Schnorr signature, which is discussed in a following section. There are other ways of constructing $s$, such as ECDSA [2], which is used in Bitcoin.

But see this:

$$ sG = (r + ke)G $$

Multiply out the right-hand side:

$$ sG = rG + (kG)e $$

Substitute \(R = rG \) and \(P = kG \) and we have: $$ sG = R + Pe $$

So Bob must just calculate the public key corresponding to the signature $\text{(}s.G\text{)}$ and check that it equals the right-hand side of the last equation above $\text{(}R + P.e\text{)}$, all of which Bob already knows.
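To make the sign-and-verify recipe concrete without pulling in a real curve library, here is a toy sketch in plain Rust. It uses the multiplicative group modulo a small prime (p = 2039, subgroup of prime order q = 1019, generator g = 4) in place of secp256k1, and a trivial mixing function in place of SHA256; these stand-ins are purely illustrative and offer no security, but the algebra \( sG = R + eP \) is exactly the one derived above.

```rust
// Toy Schnorr signature over the multiplicative group mod p = 2039.
// "Point multiplication" k.G becomes modular exponentiation g^k mod p.
// Illustrative only: the real scheme uses secp256k1 and SHA256.

const P: u64 = 2039; // group modulus (2*Q + 1, a safe prime)
const Q: u64 = 1019; // prime order of the subgroup generated by G
const G: u64 = 4;    // generator of the order-Q subgroup

/// Modular exponentiation: g^exp mod P (the analogue of scalar mult).
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

/// Toy stand-in for SHA256: e = H(R || P || m), reduced into the scalar field.
fn hash(inputs: &[u64]) -> u64 {
    let mut h: u64 = 0;
    for &x in inputs {
        h = (h.wrapping_mul(31).wrapping_add(x)) % Q;
    }
    h
}

fn main() {
    let k = 777;               // private key (fixed here; random in practice)
    let p_key = pow_mod(G, k); // public key P = k.G
    let r = 456;               // secret once-off nonce
    let big_r = pow_mod(G, r); // public nonce R = r.G
    let m = 42;                // "message"

    // Challenge e = H(R || P || m); signature s = r + k*e (mod Q)
    let e = hash(&[big_r, p_key, m]);
    let s = (r + k * e % Q) % Q;

    // Bob verifies: s.G == R + e.P, i.e. g^s == R * P^e (mod p)
    let lhs = pow_mod(G, s);
    let rhs = big_r * pow_mod(p_key, e) % P;
    assert_eq!(lhs, rhs);
    println!("signature verifies");
}
```

Only $m$, $R$, $P$ and $s$ ever leave the signer; the verifier reconstructs $e$ and checks the single equation at the end.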

Why do we Need the Nonce?

Why do we need a nonce in the standard signature?

Let's say we naïvely sign a message $m$ with $$ e = H(P || m) $$ and then the signature would be \(s = ek \).

Now as before, we can check that the signature is valid: $$ \begin{align} sG &= ekG \\ &= e(kG) = eP \end{align} $$ So far so good. But anyone can read your private key now because $s$ is a scalar, so \(k = {s}/{e} \) is not hard to do. With the nonce you have to solve \( k = (s - r)/e \), but $r$ is unknown, so this is not a feasible calculation as long as $r$ has been chosen randomly.

We can show that leaving off the nonce is indeed highly insecure:

extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{SecretKey, PublicKey, thread_rng, Message};
use secp256k1::schnorr::{ Challenge};

fn main() {
    // Create a random private key
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    println!("My private key: {}", k);
    let P = PublicKey::from_secret_key(&k);
    let m = Message::hash(b"Meet me at 12").unwrap();
    // Challenge, e = H(P || m)
    let e = Challenge::new(&[&P, &m]).as_scalar().unwrap();

    // Signature
    let s = e * k;

    // Verify the signature
    assert_eq!(PublicKey::from_secret_key(&s), e*P);
    println!("Signature is valid!");
    // But let's try to calculate the private key from known information
    let hacked = s * e.inv();
    assert_eq!(k, hacked);
    println!("Hacked key:     {}", hacked)
}

Elliptic Curve Diffie-Hellman (ECDH)

How do parties that want to communicate securely generate a shared secret for encrypting messages? One way is called the Elliptic Curve Diffie-Hellman exchange (ECDH), which is a simple method for doing just this.

ECDH is used in many places, including the Lightning Network during channel negotiation [3].

Here's how it works. Alice and Bob want to communicate securely. A simple way to do this is to use each other's public keys and calculate $$ \begin{align} S_a &= k_a P_b \tag{Alice} \\ S_b &= k_b P_a \tag{Bob} \\ \implies S_a = k_a k_b G &\equiv S_b = k_b k_a G \end{align} $$

extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{ SecretKey, PublicKey, thread_rng, Message };

fn main() {
    let mut rng = thread_rng();
    // Alice creates a public-private keypair
    let k_a = SecretKey::random(&mut rng);
    let P_a = PublicKey::from_secret_key(&k_a);
    // Bob creates a public-private keypair
    let k_b = SecretKey::random(&mut rng);
    let P_b = PublicKey::from_secret_key(&k_b);
    // They each calculate the shared secret based only on the other party's public information
    // Alice's version:
    let S_a = k_a * P_b;
    // Bob's version:
    let S_b = k_b * P_a;

    assert_eq!(S_a, S_b, "The shared secret is not the same!");
    println!("The shared secret is identical")
}

For security reasons, the private keys are usually chosen at random for each session (you'll see the term ephemeral keys being used), but then we have the problem of not being sure the other party is who they say they are (perhaps due to a man-in-the-middle attack [4]).

Various additional authentication steps can be employed to resolve this problem, which we won't get into here.

Schnorr Signatures

If you follow the crypto news, you'll know that the new hotness in Bitcoin is Schnorr signatures.

But in fact, they're old news! The Schnorr signature is considered the simplest digital signature scheme to be provably secure in a random oracle model. It is efficient and generates short signatures. It was covered by U.S. Patent 4,995,082, which expired in February 2008 [7].

So why all the Fuss?

What makes Schnorr signatures so interesting, and potentially dangerous, is their simplicity. Schnorr signatures are linear, so they have some nice properties.

Scalar multiplication on elliptic curves is distributive over addition. So if you have two scalars $x, y$ with corresponding points $X, Y$, the following holds: $$ (x + y)G = xG + yG = X + Y $$ Schnorr signatures are of the form \( s = r + e.k \). This construction is linear too, so it fits nicely with the linearity of elliptic curve math.

You saw this property in a previous section, when we were verifying the signature. Schnorr signatures' linearity makes them very attractive for applications such as signature aggregation.

Naïve Signature Aggregation

Let's see how the linearity property of Schnorr signatures can be used to construct a two-of-two multi-signature.

Alice and Bob want to cosign something (a Tari transaction, say) without having to trust each other; i.e. they need to be able to prove ownership of their respective keys, and the aggregate signature is only valid if both Alice and Bob provide their part of the signature.

Assume private keys are denoted \( k_i \) and public keys \( P_i \). If we ask Alice and Bob to each supply a nonce, we can try: $$ \begin{align} P_{agg} &= P_a + P_b \\ e &= H(R_a || R_b || P_a || P_b || m) \\ s_{agg} &= r_a + r_b + (k_a + k_b)e \\ &= (r_a + k_ae) + (r_b + k_be) \\ &= s_a + s_b \end{align} $$ So it looks like Alice and Bob can supply their own $R$, and anyone can construct the two-of-two signature from the sum of the $R$ values and public keys. This does work:

extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{SecretKey, PublicKey, thread_rng, Message};
use secp256k1::schnorr::{Schnorr, Challenge};

fn main() {
    // Alice generates some keys
    let (ka, Pa, ra, Ra) = get_keyset();
    // Bob generates some keys
    let (kb, Pb, rb, Rb) = get_keyset();
    let m = Message::hash(b"a multisig transaction").unwrap();
    // The challenge uses both public nonces and public keys
    // e = H(Ra || Rb || Pa || Pb || H(m))
    let e = Challenge::new(&[&Ra, &Rb, &Pa, &Pb, &m]).as_scalar().unwrap();
    // Alice calculates her signature
    let sa = ra + ka * e;
    // Bob calculates his signature
    let sb = rb + kb * e;
    // Calculate the aggregate signature
    let s_agg = sa + sb;
    // S = s_agg.G
    let S = PublicKey::from_secret_key(&s_agg);
    // This should equal Ra + Rb + e(Pa + Pb)
    assert_eq!(S, Ra + Rb + e*(Pa + Pb));
    println!("The aggregate signature is valid!")
}

fn get_keyset() -> (SecretKey, PublicKey, SecretKey, PublicKey) {
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    let P = PublicKey::from_secret_key(&k);
    let r = SecretKey::random(&mut rng);
    let R = PublicKey::from_secret_key(&r);
    (k, P, r, R)
}

But this scheme is not secure!

Key Cancellation Attack

Let's take the previous scenario again, but this time, Bob knows Alice's public key and nonce ahead of time, by waiting until she reveals them.

Now Bob lies and says that his public key is \( P_b' = P_b - P_a \) and public nonce is \( R_b' = R_b - R_a \).

Note that Bob doesn't know the private keys for these faked values, but that doesn't matter.

Everyone assumes that \(s_{agg}G = R_a + R_b' + e(P_a + P_b') \), as per the aggregation scheme.

But Bob can create this signature himself: $$ \begin{align} s_{agg}G &= R_a + R_b' + e(P_a + P_b') \\ &= R_a + (R_b - R_a) + e(P_a + P_b - P_a) \\ &= R_b + eP_b \\ &= r_bG + ek_bG \\ \therefore s_{agg} &= r_b + ek_b = s_b \end{align} $$

extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{SecretKey, PublicKey, thread_rng, Message};
use secp256k1::schnorr::{Schnorr, Challenge};

fn main() {
    // Alice generates some keys
    let (ka, Pa, ra, Ra) = get_keyset();
    // Bob generates some keys as before
    let (kb, Pb, rb, Rb) = get_keyset();
    // ..and then publishes his forged keys
    let Pf = Pb - Pa;
    let Rf = Rb - Ra;

    let m = Message::hash(b"a multisig transaction").unwrap();
    // The challenge uses both public nonces and public keys
    // e = H(Ra || Rb' || Pa || Pb' || H(m))
    let e = Challenge::new(&[&Ra, &Rf, &Pa, &Pf, &m]).as_scalar().unwrap();

    // Bob creates a forged signature
    let s_f = rb + e * kb;
    // Check if it's valid
    let sG = Ra + Rf + e*(Pa + Pf);
    assert_eq!(sG, PublicKey::from_secret_key(&s_f));
    println!("Bob successfully forged the aggregate signature!")
}

fn get_keyset() -> (SecretKey, PublicKey, SecretKey, PublicKey) {
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    let P = PublicKey::from_secret_key(&k);
    let r = SecretKey::random(&mut rng);
    let R = PublicKey::from_secret_key(&r);
    (k, P, r, R)
}

Better Approaches to Aggregation

In the Key Cancellation Attack, Bob didn't know the private keys for his published $R$ and $P$ values. We could defeat Bob by asking him to sign a message proving that he does know the private keys.

This works, but it requires another round of messaging between parties, which is not conducive to a great user experience.

A better approach would be one that incorporates one or more of the following features:

  • It must be provably secure in the plain public-key model, without having to prove knowledge of secret keys, as we might have asked Bob to do in the naïve approach.
  • It should satisfy the normal Schnorr equation, i.e. the resulting signature can be verified with an expression of the form \( R + e X \).
  • It allows for Interactive Aggregate Signatures (IAS), where the signers are required to cooperate.
  • It allows for Non-interactive Aggregate Signatures (NAS), where the aggregation can be done by anyone.
  • It allows each signer to sign the same message, $m$.
  • It allows each signer to sign their own message, \( m_i \).

MuSig

MuSig is a recently proposed ([8],[9]) simple signature aggregation scheme that satisfies all of the properties in the preceding section.

MuSig Demonstration

We'll demonstrate the interactive MuSig scheme here, where each signatory signs the same message. The scheme works as follows:

  1. Each signer has a public-private key pair, as before.
  2. Each signer shares a commitment to their public nonce (we'll skip this step in this demonstration). This step is necessary to prevent certain kinds of rogue key attacks [10].
  3. Each signer publishes the public key of their nonce, \( R_i \).
  4. Everyone calculates the same "shared public key", $X$ as follows:

$$ \begin{align} \ell &= H(X_1 || \dots || X_n) \\ a_i &= H(\ell || X_i) \\ X &= \sum a_i X_i \\ \end{align} $$

Note that in the preceding ordering of public keys, some deterministic convention should be used, such as the lexicographical order of the serialized keys.

  5. Everyone also calculates the shared nonce, \( R = \sum R_i \).
  6. The challenge, $e$, is \( H(R || X || m) \).
  7. Each signer provides their contribution to the signature as:

$$ s_i = r_i + k_i a_i e $$

Notice that the only departure here from a standard Schnorr signature is the inclusion of the factor \( a_i \).

The aggregate signature is the usual summation, \( s = \sum s_i \).

Verification is done by confirming that, as usual: $$ sG \equiv R + e X $$ Proof: $$ \begin{align} sG &= \sum s_i G \\ &= \sum (r_i + k_i a_i e)G \\ &= \sum (r_iG + k_iG a_i e) \\ &= \sum (R_i + X_i a_i e) \\ &= \sum R_i + e \sum a_i X_i \\ &= R + e X \\ \blacksquare \end{align} $$ Let's demonstrate this using a three-of-three multisig:

extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{ SecretKey, PublicKey, thread_rng, Message };
use secp256k1::schnorr::{ Challenge };

fn main() {
    let (k1, X1, r1, R1) = get_keys();
    let (k2, X2, r2, R2) = get_keys();
    let (k3, X3, r3, R3) = get_keys();

    // I'm setting the order here. In general, they'll be sorted
    let l = Challenge::new(&[&X1, &X2, &X3]);
    // ai = H(l || p)
    let a1 = Challenge::new(&[ &l, &X1 ]).as_scalar().unwrap();
    let a2 = Challenge::new(&[ &l, &X2 ]).as_scalar().unwrap();
    let a3 = Challenge::new(&[ &l, &X3 ]).as_scalar().unwrap();
    // X = sum( a_i X_i)
    let X = a1 * X1 + a2 * X2 + a3 * X3;

    let m = Message::hash(b"SomeSharedMultiSigTx").unwrap();

    // Calc shared nonce
    let R = R1 + R2 + R3;

    // e = H(R || X || m)
    let e = Challenge::new(&[&R, &X, &m]).as_scalar().unwrap();

    // Signatures
    let s1 = r1 + k1 * a1 * e;
    let s2 = r2 + k2 * a2 * e;
    let s3 = r3 + k3 * a3 * e;
    let s = s1 + s2 + s3;

    let sg = PublicKey::from_secret_key(&s);
    let check = R + e * X;
    assert_eq!(sg, check, "The signature is INVALID");
    println!("The signature is correct!")
}

fn get_keys() -> (SecretKey, PublicKey, SecretKey, PublicKey) {
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    let P = PublicKey::from_secret_key(&k);
    let r = SecretKey::random(&mut rng);
    let R = PublicKey::from_secret_key(&r);
    (k, P, r, R)
}

Security Demonstration

As a final demonstration, let's show how MuSig defeats the cancellation attack from the naïve signature scheme. Using the same idea as in the Key Cancellation Attack section, Bob has provided fake values for his nonce and public keys:

$$ \begin{align} R_f &= R_b - R_a \\ X_f &= X_b - X_a \end{align} $$

This leads to both Alice and Bob calculating the following "shared" values:

$$ \begin{align} \ell &= H(X_a || X_f) \\ a_a &= H(\ell || X_a) \\ a_f &= H(\ell || X_f) \\ X &= a_a X_a + a_f X_f \\ R &= R_a + R_f (= R_b) \\ e &= H(R || X || m) \end{align} $$

Bob then tries to construct a unilateral signature following MuSig:

$$ s_b = r_b + k_s e $$

Let's assume for now that \( k_s \) doesn't need to be Bob's private key, but that he can derive it using information he knows. For this to be a valid signature, it must verify to \( R + eX \). Therefore:

$$ \begin{align} s_b G &= R + eX \\ (r_b + k_s e)G &= R_b + e(a_a X_a + a_f X_f) & \text{The first term looks good so far}\\ &= R_b + e(a_a X_a + a_f X_b - a_f X_a) \\ &= (r_b + e a_a k_a + e a_f k_b - e a_f k_a)G & \text{The r terms cancel as before} \\ k_s e &= e a_a k_a + e a_f k_b - e a_f k_a & \text{But nothing else is going away}\\ k_s &= a_a k_a + a_f k_b - a_f k_a \end{align} $$

In the previous attack, Bob had all the information he needed on the right-hand side of the analogous calculation. In MuSig, Bob must somehow know Alice's private key and the faked private key (the terms don't cancel anymore) in order to create a unilateral signature, and so his cancellation attack is defeated.

Replay Attacks!

It's critical that a new nonce be chosen for every signing ceremony. The best way to do this is to make use of a cryptographically secure (pseudo-)random number generator (CSPRNG).

But even if this is the case, let's say an attacker can trick us into signing a new message by "rewinding" the signing ceremony to the point where partial signatures are generated. At this point, the attacker provides a different message, $m'$, to sign, yielding a new challenge \( e' = H(...||m') \). Not suspecting any foul play, each party calculates their partial signature:

$$ s'_i = r_i + a_i k_i e' $$

However, the attacker still has access to the first set of signatures: \( s_i = r_i + a_i k_i e \). He now simply subtracts them: $$ \begin{align} s'_i - s_i &= (r_i + a_i k_i e') - (r_i + a_i k_i e) \\ &= a_i k_i (e' - e) \\ \therefore k_i &= \frac{s'_i - s_i}{a_i(e' - e)} \end{align} $$ Everything on the right-hand side of the final equation is known by the attacker, so he can trivially extract everybody's private key. It's difficult to protect against this kind of attack. One way is to make it difficult (or impossible) to stop and restart signing ceremonies. If a multisig ceremony gets interrupted, you need to start again from step one. This is fairly unergonomic, but until a more robust solution comes along, it may be the best we have!
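The key-extraction arithmetic above can be checked in the same toy setting used earlier (an order-1019 subgroup standing in for the curve's scalar field; all constants are illustrative): reusing a nonce across two different challenges hands the attacker the private key.

```rust
// Toy demonstration of the replay attack: if a signer reuses the same
// nonce r (and MuSig coefficient a) for two different challenges, anyone
// holding both partial signatures recovers the private key.
// All arithmetic is in the scalar field mod the prime Q = 1019.

const Q: u64 = 1019;

/// Modular exponentiation, base^exp mod m.
fn pow_mod(mut base: u64, mut exp: u64, m: u64) -> u64 {
    let mut acc = 1;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % m;
        }
        base = base * base % m;
        exp >>= 1;
    }
    acc
}

/// Inverse mod the prime Q, via Fermat's little theorem.
fn inv_q(x: u64) -> u64 {
    pow_mod(x, Q - 2, Q)
}

fn main() {
    let k = 321; // victim's private key
    let r = 654; // the reused nonce
    let a = 99;  // MuSig key coefficient a_i (any nonzero scalar)

    // Two partial signatures s_i = r + a*k*e over different challenges
    let (e1, e2) = (111, 222);
    let s1 = (r + a * k % Q * e1 % Q) % Q;
    let s2 = (r + a * k % Q * e2 % Q) % Q;

    // Attacker computes k = (s2 - s1) / (a * (e2 - e1)) mod Q
    let num = (s2 + Q - s1) % Q;
    let den = a * ((e2 + Q - e1) % Q) % Q;
    let recovered = num * inv_q(den) % Q;

    assert_eq!(recovered, k);
    println!("recovered private key: {}", recovered);
}
```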


[1] Wikipedia: "RSA (Cryptosystem)" [online]. Available: Date accessed: 2018‑10‑11.

[2] Wikipedia: "Elliptic Curve Digital Signature Algorithm" [online]. Available: Date accessed: 2018‑10‑11.

[3] Github: "BOLT #8: Encrypted and Authenticated Transport, Lightning RFC" [online].
Available: Date accessed: 2018‑10‑11.

[4] Wikipedia: "Man in the Middle Attack" [online]. Available: Date accessed: 2018‑10‑11.

[5] StackOverflow: "How does a Cryptographically Secure Random Number Generator Work?" [online]. Available: Date accessed: 2018‑10‑11.

[6] Wikipedia: "Cryptographically Secure Pseudorandom Number Generator" [online]. Available: Date accessed: 2018‑10‑11.

[7] Wikipedia: "Schnorr Signature" [online]. Available: Date accessed: 2018‑09‑19.

[8] Blockstream: "Key Aggregation for Schnorr Signatures" [online]. Available: Date accessed: 2018‑09‑19.

[9] G. Maxwell, A. Poelstra, Y. Seurin and P. Wuille, "Simple Schnorr Multi-signatures with Applications to Bitcoin" [online]. Available: Date accessed: 2018‑09‑19.

[10] M. Drijvers, K. Edalatnejad, B. Ford, E. Kiltz, J. Loss, G. Neven and I. Stepanovs, "On the Security of Two-round Multi-signatures", Cryptology ePrint Archive, Report 2018/417 [online]. Available: Date accessed: 2019‑02‑21.


Introduction to Scriptless Scripts

Definition of Scriptless Scripts

Scriptless Scripts are a means to execute smart contracts off-chain, through the use of Schnorr signatures [1].

The concept of Scriptless Scripts was born from Mimblewimble, which is a blockchain design that, with the exception of kernels and their signatures, does not store permanent data. Fundamental properties of Mimblewimble include both privacy and scaling, both of which require the implementation of Scriptless Scripts [2].

A brief introduction is also given in Scriptless Scripts, Layer 2 Scaling Survey.

Benefits of Scriptless Scripts

The benefits of Scriptless Scripts are functionality, privacy and efficiency.


With regard to functionality, Scriptless Scripts are said to increase the range and complexity of smart contracts. Currently, as within Bitcoin Script, limitations stem from the number of OP_CODES that have been enabled by the network. Scriptless Scripts move the specification and execution of smart contracts from the network to a discussion that only involves the participants of the smart contract.


With regard to privacy, moving the specification and execution of smart contracts from on-chain to off-chain increases privacy. When on-chain, many details of the smart contract are shared with the entire network, including the number and addresses of participants, and the amounts transferred. By moving smart contracts off-chain, the network only knows that the participants agree that the terms of their contract have been satisfied and that the transaction in question is valid.


With regard to efficiency, Scriptless Scripts minimize the amount of data that requires verification and storage on-chain. By moving smart contracts off-chain, there are fewer overheads for full nodes and lower transaction fees for users [1].

List of Scriptless Scripts

In this report, various forms of scripts will be covered, including [3]:

  • Simultaneous Scriptless Scripts
  • Adaptor Signatures
  • Zero Knowledge Contingent Payments

Role of Schnorr Signatures

To begin with, the fundamentals of Schnorr signatures must be defined. The signer has a private key $x$ and random private nonce $r$. $G$ is the generator of a discrete log hard group, $R=rG$ is the public nonce and $P=xG$ is the public key [4].

The signature, $s$, for message $m$, can then be computed as a simple linear equation

$$ s = r + e \cdot x $$


where $$ e=H(P||R||m) $$ and $P=xG$ as defined above.

The challenge $e$, the position on the line that is chosen, is the hash of all the data that the signer needs to commit to in the digital signature. The verification equation multiplies each term of the signing equation by $G$, and relies on the cryptographic (discrete log) assumption that $G$ can be multiplied in but not divided out, thus preventing recovery of the secrets.

$$ sG = rG + e \cdot xG $$
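
The signing and verification equations above can be sketched in a few lines of Python. This is purely illustrative: a toy multiplicative group of small prime order stands in for the elliptic curve, so exponentiation $g^k$ plays the role of scalar multiplication $kG$. The parameters and helper names are our own assumptions, and nothing here is cryptographically secure.

```python
import hashlib

# Toy discrete-log group: order-q subgroup of Z_p* (NOT secure; illustration only).
# Exponentiation g^k here plays the role of scalar multiplication kG on a curve.
p, q, g = 2039, 1019, 4  # q divides p - 1; g generates the subgroup of order q

def H(*parts) -> int:
    """Hash-to-scalar e = H(P || R || m), reduced mod the group order q."""
    data = b"||".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x: int, r: int, m: str):
    P, R = pow(g, x, p), pow(g, r, p)        # public key and public nonce
    e = H(P, R, m)
    s = (r + e * x) % q                      # s = r + e*x
    return R, s

def verify(P: int, R: int, s: int, m: str) -> bool:
    e = H(P, R, m)
    # sG = rG + e*xG becomes g^s = R * P^e in multiplicative notation
    return pow(g, s, p) == (R * pow(P, e, p)) % p

x, r = 123, 456                              # private key and private nonce
P = pow(g, x, p)
R, s = sign(x, r, "a message")
assert verify(P, R, s, "a message")
```

The verifier never learns $x$ or $r$; it only checks that the published $(R, s)$ lies on the line committed to by the hash.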

Elliptic Curve Digital Signature Algorithm (ECDSA) signatures (used in Bitcoin) are not linear in $x$ and $r$, and are thus less useful [2].

Schnorr Multi-signatures

A multi-signature (multisig) is produced by multiple participants. Naively, every participant could produce a separate signature, with all of them concatenated to form the multisig.

With Schnorr Signatures, one can have a single public key, which is the sum of many different people's public keys. The resulting key is one against which signatures will be verifiable [5].

The formulation of a multisig involves taking the sum of all components; thus the sum of all nonces and all $s$ values results in the multisig [4]: $$ s=\sum s_{i} $$ It can therefore be seen that these signatures are essentially Scriptless Scripts. The independent public keys of several participants are joined to form a single key and signature, which, when published, do not divulge the number of participants involved or their original public keys.
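
Summing partial signatures can be sketched with the same toy group as before. This is the naive aggregation (no defence against rogue keys, which later sections discuss); all parameters and names are illustrative assumptions.

```python
import hashlib

p, q, g = 2039, 1019, 4  # toy order-q subgroup of Z_p* (illustration only)

def H(*parts) -> int:
    data = b"||".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Two signers, each holding a private key x_i and private nonce r_i
keys = [(111, 222), (333, 444)]              # (x_i, r_i) pairs

P_agg, R_agg = 1, 1
for x, r in keys:
    P_agg = (P_agg * pow(g, x, p)) % p       # aggregate key = product of the P_i
    R_agg = (R_agg * pow(g, r, p)) % p       # aggregate nonce = product of the R_i

m = "joint message"
e = H(P_agg, R_agg, m)                       # everyone signs the same challenge
s = sum(r + e * x for x, r in keys) % q      # s = sum of partial signatures s_i

# The combined (R_agg, s) verifies like an ordinary single-key Schnorr signature
assert pow(g, s, p) == (R_agg * pow(P_agg, e, p)) % p
```

An outside verifier sees one key and one signature; the number of participants is invisible.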

Adaptor Signatures

This multisig protocol can be modified to produce an adaptor signature, which serves as the building block for all Scriptless Script functions [5].

Instead of functioning as a full, valid signature on a message with a key, an adaptor signature is a promise that a signature, which the parties agree to publish, will reveal a secret.

This concept is similar to that of atomic swaps, but no scripts are implemented. Since this is elliptic curve cryptography, there is only scalar multiplication of elliptic curve points. Like a hash function, point multiplication is one-way, so an elliptic curve point ($T$) can simply be shared openly, and the secret will be its corresponding private key (discrete logarithm).

If two parties are considered, then rather than providing their nonce $R$ in the multisig protocol, a blinding factor, taken as an elliptic curve point $T$, is conceived and sent in addition to $R$ (i.e. $R+T$). It can therefore be seen that $R$ is not blinded; it has instead been offset by the point $T$, whose discrete log $t$ is the secret.

Here, the Schnorr multisig construction is modified such that the first party generates $$ T=tG, R=rG $$ where $t$ is the shared secret, $G$ is the generator of discrete log hard group and $r$ is the random nonce.

Using this information, the second party generates $$ H(P||R+T||m)x $$ where the coins to be swapped are contained within message $m$. The first party can now calculate the complete signature $s$ such that
$$ s=r+t+H(P||R+T||m)x $$ The first party then calculates and publishes the adaptor signature $s'$ to the second party (and anyone else listening) $$ s'=s-t $$ The second party can verify the adaptor signature $s'$ by asserting $s'G$ $$ s'G \overset{?}{=} R+H(P||R+T||m)P $$ However, this is not a valid signature, as the hashed nonce point is $R+T$ and not $R$.

The second party cannot produce a valid signature from $s'$ alone: recovering $s=s'+t$ would require extracting $t$ from $T$, i.e. solving the Elliptic Curve Discrete Logarithm Problem (ECDLP), which is computationally infeasible.

After the first party broadcasts $s$ to claim the coins within message $m$, the second party can calculate the secret $t$ from $$ t=s-s' $$ The above is very general. However, by attaching auxiliary proofs too, an adaptor signature can be derived that will allow the translation of correct movement of the auxiliary protocol into a valid signature.
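
The adaptor flow above can be traced numerically. The same toy multiplicative group stands in for the curve, so the point sum $R+T$ becomes the product $R \cdot T$; for brevity the sketch computes the full signature in one place rather than splitting work between the two parties, and every name and parameter is an illustrative assumption.

```python
import hashlib

p, q, g = 2039, 1019, 4  # toy order-q subgroup of Z_p* (illustration only)

def H(*parts) -> int:
    data = b"||".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x, r, t = 123, 456, 789                      # private key, nonce, and secret t
P, R, T = pow(g, x, p), pow(g, r, p), pow(g, t, p)
m = "coins to swap"

e = H(P, (R * T) % p, m)                     # challenge hashes the offset nonce R+T
s = (r + t + e * x) % q                      # full signature s = r + t + e*x
s_adapt = (s - t) % q                        # adaptor signature s' = s - t

# Second party verifies the adaptor: s'G ?= R + e*P. Note this is NOT a valid
# signature, since the hashed nonce point was R+T, not R.
assert pow(g, s_adapt, p) == (R * pow(P, e, p)) % p

# Once s is broadcast on-chain, anyone holding s' learns the secret: t = s - s'
assert (s - s_adapt) % q == t
```

The final line is the whole trick: publishing the completed signature unavoidably leaks $t$ to the adaptor's holder.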

Simultaneous Scriptless Scripts


The execution of separate transactions in an atomic fashion is achieved through preimages. If two transactions require the preimage to the same hash, once one is executed, the preimage is exposed so that the other one can be as well. Atomic swaps and Lightning channels use this construction [4].

Difference of Two Schnorr Signatures

If we consider the difference of two Schnorr signatures: $$ d=s-s'=k-k'+e \cdot x-e' \cdot x' $$ The above equation can be verified in a similar manner to that of a single Schnorr signature, by multiplying each term by $G$ and confirming algebraic correctness: $$ dG=kG-k'G+e \cdot xG-e' \cdot x'G $$ It must be noted that the difference $d$ is being verified, and not the Schnorr signature itself. $d$ functions as the translating key between two separate independent Schnorr signatures. Given $d$ and either $s$ or $s'$, the other can be computed. So possession of $d$ makes these two signatures atomic. This scheme does not link the two signatures or compromise their security.
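
The translating-key property can be checked concretely. Using the toy multiplicative group again (modular inversion plays the role of point subtraction), two independent signatures are produced and their difference $d$ is verified and used to convert one into the other; all parameters are illustrative assumptions.

```python
import hashlib

p, q, g = 2039, 1019, 4   # toy order-q subgroup of Z_p* (illustration only)

def H(*parts) -> int:
    data = b"||".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def inv(a: int) -> int:
    return pow(a, -1, p)   # modular inverse stands in for point subtraction

def schnorr(x: int, k: int, m: str):
    P, R = pow(g, x, p), pow(g, k, p)
    e = H(P, R, m)
    return P, R, e, (k + e * x) % q

# Two independent signatures under two independent keys
P1, R1, e1, s1 = schnorr(111, 222, "transaction one")
P2, R2, e2, s2 = schnorr(333, 444, "transaction two")

d = (s1 - s2) % q                      # the translating key d = s - s'

# d is publicly verifiable: dG = R - R' + e*P - e'*P' in additive notation
assert pow(g, d, p) == (R1 * inv(R2) * pow(P1, e1, p) * inv(pow(P2, e2, p))) % p

# Possession of d and either signature yields the other, making them atomic
assert (s2 + d) % q == s1
```

Note that verifying $d$ reveals nothing about either nonce or key; it only binds the two signatures together.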

For an atomic transaction, during the setup stage, someone provides the opposing party with the value $d$, and asserts it as the correct value. Once the transaction is signed, it can be adjusted to complete the other transaction. Atomicity is achieved, but can only be used by the person who possesses this $d$ value. Generally, the party that stands to lose money requires the $d$ value.

The $d$ value provides an interesting property with regard to atomicity. It is shared before signatures are public, which in turn allows the two transactions to be atomic once the transactions are published. By taking the difference of any two Schnorr signatures, one is able to construct transcripts, such as an atomic swap multisig contract.

This is a critical feature for Mimblewimble, which was previously thought to be unable to support atomic swaps or Lightning channels [4].

Atomic (Cross-chain Swaps) Example with Adaptor Signatures

Alice has a certain number of coins on a particular blockchain; Bob also has a certain number of coins on another blockchain. Alice and Bob want to engage in an atomic exchange. However, neither blockchain is aware of the other, nor are they able to verify each other's transactions.

The classical way of achieving this involves the use of the blockchain's script system to put a hash preimage challenge and then reveal the same preimage on both sides. Once Alice knows the preimage, she reveals it to take her coins. Bob then copies it off one chain to the other chain to take his coins.

Using adaptor signatures, the same result can be achieved through simpler means. In this case, both Alice and Bob put up their coins on two-of-two outputs on each blockchain. They sign the multisig protocols in parallel, and Bob gives Alice the adaptor signatures for each side, using the same value $T$. This means that for Bob to take his coins, he must reveal $t$; and for Alice to take her coins, she also needs $t$. Bob then completes one of the signatures and publishes it, taking his coins and thereby revealing $t$. Alice computes $t$ from the final signature, visible on the blockchain, and uses it to complete the other signature, giving Alice her coins.

Thus it can be seen that atomicity is achieved. One is still able to exchange information, but now there are no explicit hashes or preimages on the blockchain. No script properties are necessary and privacy is achieved [4].

Zero Knowledge Contingent Payments

Zero Knowledge Contingent Payments (ZKCP) is a transaction protocol. This protocol allows a buyer to purchase information from a seller using coins in a manner that is private, scalable, secure and, importantly, in a trustless environment. The expected information is transferred only when payment is made. The buyer and seller do not need to trust each other or depend on arbitration by a third party [6].

Mimblewimble's Core Scriptless Script

As previously stated, Mimblewimble is a blockchain design. Built similarly to Bitcoin, every transaction has inputs and outputs, each carrying a confidential transaction commitment. Confidential (Pedersen) commitments have an interesting property: in a valid, balanced transaction, one can subtract the input commitments from the output commitments, and the committed values cancel out. Taking the difference of these inputs and outputs leaves a multisig key of the owners of every output and every input in the transaction. This is referred to as the kernel excess.
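
The cancellation property can be sketched with toy Pedersen commitments over the same illustrative group, written multiplicatively: $vH + kG$ becomes $h^v g^k$. The second generator is derived here for convenience; in a real system its discrete log relative to $g$ must be unknown. All values are made-up examples.

```python
p, q, g = 2039, 1019, 4   # toy order-q subgroup of Z_p* (illustration only)
h = pow(g, 3, p)          # second generator; in practice its dlog must be unknown

def commit(v: int, k: int) -> int:
    """Pedersen commitment C = v*H + k*G, written multiplicatively as h^v * g^k."""
    return (pow(h, v, p) * pow(g, k, p)) % p

def inv(a: int) -> int:
    return pow(a, -1, p)

# One input of value 10 spent into outputs of values 6 and 4 (values balance)
C_in   = commit(10, 17)
C_out1 = commit(6, 29)
C_out2 = commit(4, 35)

# sum(outputs) - sum(inputs): the value terms cancel, leaving only blinding keys
excess = (C_out1 * C_out2 * inv(C_in)) % p
k_excess = (29 + 35 - 17) % q                # known only to the transaction owners

# The excess is a public key; a Schnorr signature with it forms the kernel
assert excess == pow(g, k_excess, p)
```

A node can compute the excess from public data alone and check the kernel signature against it, confirming the transaction balances without ever seeing the amounts.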

Mimblewimble blocks will only have a list of new inputs, new outputs and signatures that are created from the aforementioned excess value [7].

Since the values are hidden in homomorphic commitments, nodes can verify that no coins are being created or destroyed.


[1] "Crypto Innovation Spotlight 2: Scriptless Scripts" [online]. Available: Date accessed: 2018‑02‑27.

[2] A. Poelstra, "Mimblewimble and Scriptless Scripts". Presented at Real World Crypto, 2018 [online]. Available: Date accessed: 2018‑01‑11.

[3] A. Poelstra, "Scriptless Scripts". Presented at Layer 2 Summit Hosted by MIT DCI and Fidelity Labs on 18 May 2018 [online]. Available: Date Accessed: 2018‑05‑25.

[4] A. Poelstra, "Mimblewimble and Scriptless Scripts". Presented at MIT Bitcoin Expo 2017 Day 1 [online]. Available: Date accessed: 2017‑03‑04.

[5] "Flipping the Scriptless Script on Schnorr" [online]. Available: Date accessed: November 2017.

[6] "The First Successful Zero-knowledge Contingent Payment" [online]. Available: Date accessed: 2016‑02‑26.

[7] "What is Mimblewimble?" [online]. Available: Date accessed: 2018‑06‑30.


The MuSig Schnorr Signature Scheme


This report investigates Schnorr Multi-Signature Schemes (MuSig), which make use of key aggregation and are provably secure in the plain public-key model.

Signature aggregation involves mathematically combining several signatures into a single signature, without having to prove Knowledge of Secret Keys (KOSK). This is known as the plain public-key model, where the only requirement is that each potential signer has a public key. The KOSK scheme requires that users prove knowledge (or possession) of the secret key during public key registration with a certification authority. It is one way to generically prevent rogue-key attacks.

Multi-signatures are a form of technology used to add multiple participants to cryptocurrency transactions. A traditional multi-signature protocol allows a group of signers to produce a joint multi-signature on a common message.

Schnorr Signatures and their Attack Vectors

Schnorr signatures produce a smaller on-chain size, support faster validation and have better privacy. They natively allow for combining multiple signatures into one through aggregation, and they permit more complex spending policies.

Signature aggregation also has its challenges. This includes the rogue-key attack, where a participant steals funds using a specifically constructed key. Although this is easily solved for simple multi-signatures through an enrollment procedure that involves the keys signing themselves, supporting it across multiple inputs of a transaction requires plain public-key security, meaning there is no setup.

There is an additional attack, named the Russell attack after Russell O'Connor, who discovered that, for multiparty schemes, a party could claim ownership of someone else's private key and so spend the other outputs.

P. Wuille [1] addressed some of these issues and provided a solution that refines the Bellare-Neven (BN) scheme. He also discussed the performance improvements that were implemented for the scalar multiplication of the BN scheme, and how they enable batch validation on the blockchain [2].


Multi-signature protocols, introduced by [3], allow a group of signers (each possessing their own private/public key pair) to produce a single signature $ \sigma $ on a message $ m $. Verification of the given signature $ \sigma $ can be performed publicly, given the message and the set of public keys of all signers.

A simple way to change a standard signature scheme into a multi-signature scheme is to have each signer produce a stand-alone signature for $ m $ with their private key, and to then concatenate all individual signatures.

The transformation of a standard signature scheme into a multi-signature scheme needs to be useful and practical. The newly calculated multi-signature scheme must therefore produce signatures where the size is independent of the number of signers and similar to that of the original signature scheme [4].

A traditional multi-signature scheme is a combination of a signing and verification algorithm, where multiple signers (each with their own private/public key) jointly sign a single message, resulting in a combined signature. This can then be verified by anyone knowing the message and the public keys of the signers, where a trusted setup with KOSK is a requirement.

MuSig is a multi-signature scheme that is novel in combining:

  • support for key aggregation; and
  • security in the plain public-key model.

There are two versions of MuSig that are provably secure, and which differ based on the number of communication rounds:

  1. Three-round MuSig, which only relies on the Discrete Logarithm (DL) assumption, on which Elliptic Curve Digital Signature Algorithm (ECDSA) also relies.
  2. Two-round MuSig, which instead relies on the slightly stronger One-More Discrete Logarithm (OMDL) assumption.

Key Aggregation

The term key aggregation refers to multi-signatures that look like a single-key signature, but with respect to an aggregated public key that is a function of only the participants' public keys. Thus, verifiers do not require the knowledge of the original participants' public keys. They can just be given the aggregated key. In some use cases, this leads to better privacy and performance. Thus, MuSig is effectively a key aggregation scheme for Schnorr signatures.
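
MuSig's aggregated key can be sketched with the toy group used throughout: each public key is weighted by a hash of the whole key set before being combined, so no participant can choose a key as a function of the others' keys without breaking their own coefficient. The group parameters, key values and helper names are illustrative assumptions only.

```python
import hashlib

p, q, g = 2039, 1019, 4   # toy order-q subgroup of Z_p* (illustration only)

def H_agg(*parts) -> int:
    """Hash used to compute the per-key aggregation coefficients a_i."""
    data = b"||".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

secrets = [101, 202, 303]                    # the signers' private keys x_i
L = sorted(pow(g, x, p) for x in secrets)    # encoded multiset of public keys

# Aggregated key: X~ = product of X_i^(H_agg(<L>, X_i))
X_agg = 1
for X_i in L:
    X_agg = (X_agg * pow(X_i, H_agg(*L, X_i), p)) % p

# X~ behaves like a single key whose secret is sum(a_i * x_i), which no
# individual signer knows in full
x_by_key = {pow(g, x, p): x for x in secrets}
x_agg = sum(H_agg(*L, X_i) * x_by_key[X_i] for X_i in L) % q
assert X_agg == pow(g, x_agg, p)
```

A verifier given only `X_agg` checks signatures exactly as for a single-signer key, with no knowledge of $L$.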

To make the traditional approach more effective and without needing a trusted setup, a multi-signature scheme must provide sublinear signature aggregation, along with the following properties:

  • It must be provably secure in the plain public-key model.
  • It must satisfy the normal Schnorr equation, whereby the resulting signature can be written as a function of a combination of the public keys.
  • It must allow for Interactive Aggregate Signatures (IAS), where the signers are required to cooperate.
  • It must allow for Non-interactive Aggregate Signatures (NAS), where the aggregation can be done by anyone.
  • It must allow each signer to sign the same message.
  • It must allow each signer to sign their own message.

This is different to a normal multi-signature scheme, where one message is signed by all. MuSig potentially provides all of these properties.

Other multi-signature schemes already exist that provide key aggregation for Schnorr signatures. However, they come with some limitations, such as needing to verify that participants actually have the private key corresponding to the public keys that they claim to have. Security in the plain public-key model means that no such limitations exist; all that is needed from the participants is their public keys [1].

Overview of Multi-signatures

The most obvious recent use case for multi-signatures is Bitcoin, where they can function as a more efficient replacement of $ n-of-n $ multisig scripts (where the number of signatures required to spend equals the number of signatures possible) and other policies that permit a number of possible combinations of keys.

A key aggregation scheme also lets one reduce the number of public keys per input to one, as a user can send coins to the aggregate of all involved keys rather than including them all in the script. This leads to a smaller on-chain footprint, faster validation and better privacy.

Instead of creating restrictions with one signature per input, one signature can be used for the entire transaction. Traditionally, key aggregation cannot be used across multiple inputs, as the public keys are committed to by the outputs, and those can be spent independently. MuSig can be used here, with key aggregation done by the verifier.

No non-interactive aggregation scheme is known that only relies on the DL assumption, but interactive schemes are trivial to construct where a multi-signature scheme has every participant sign the concatenation of all messages. Reference [4], focused on key aggregation for Schnorr Signatures, showed that this is not always a desirable construction, and gave an IAS variant of BN with better properties instead [1].

Bitcoin $ m-of-n $ Multi-signatures

What are $ m-of-n $ Transactions?

Currently, standard transactions on the Bitcoin network can be referred to as single-signature transactions, as they require only one signature, from the owner of the private key associated with the Bitcoin address. However, the Bitcoin network supports far more complicated transactions, which can require the signatures of multiple people before the funds can be transferred. These are often referred to as $ m-of-n $ transactions, where $ m $ represents the number of signatures required to spend, while $ n $ represents the number of signatures possible [5].

Use Cases for $ m-of-n $ Multi-signatures

The use cases for $ m-of-n $ multi-signatures are as follows:

  • When $ m=1 $ and $ n>1 ​$, it is considered a shared wallet, which could be used for small group funds that do not require much security. This is the least secure multi-signature option, because it is not multifactor. Any compromised individual would jeopardize the entire group. Examples of use cases include funds for a weekend or evening event, or a shared wallet for some kind of game. Besides being convenient to spend from, the only benefit of this setup is that all but one of the backup/password pairs could be lost and all of the funds would be recoverable.

  • When $ m=n $, it is considered a partner wallet, which brings with it some nervousness, as no keys can be lost. As the number of signatures required increases, the risk also increases. This type of multi-signature can be considered to be a hard multifactor authentication.

  • When $ m<0.5n $, it is considered a buddy account, which could be used for spending from corporate group funds. The consequence for the colluding minority needs to be greater than possible benefits. It is considered less convenient than a shared wallet, but much more secure.

  • When $ m>0.5n $, it is referred to as a consensus account. The classic multi-signature wallet is a two of three, and is a special case of a consensus account. A two of three scheme has the best characteristics for creating new Bitcoin addresses, and for secure storing and spending. One compromised signatory does not compromise the funds. A single secret key can be lost and the funds can still be recovered. If done correctly, off-site backups are created during wallet setup. The way to recover funds is known by more than one party. The balance of power with a multi-signature wallet can be shifted by having one party control more keys than the other parties. If one party controls multiple keys, there is a greater risk of those keys not remaining as multiple factors.

  • When $ m=0.5n $, it is referred to as a split account. This is an interesting use case, as there would be three of six, where one person holds three keys and three people hold one key. In this way, one person could control their own money, but the funds could still be recoverable, even if the primary key holder were to disappear with all of their keys. As $ n $ increases, the level of trust in the secondary parties can decrease. A good use case might be a family savings account that would automatically become an inheritance account if the primary account holder were to die [5].

Rogue Attacks

Refer to Key cancellation attack demonstration in Introduction to Schnorr Signatures.

Rogue attacks are a significant concern when implementing multi-signature schemes. Here, a subset of corrupted signers manipulate the public keys computed as functions of the public keys of honest users, allowing them to easily produce forgeries for the set of public keys (despite them not knowing the associated secret keys).

Initial proposals from [10], [11], [12], [13], [14], [15] and [16] were thus undone before a formal model was put forward, along with a provably secure scheme, by [17]. Unfortunately, despite being provably secure, their scheme is costly and relies on an impractical interactive key generation protocol [4].

A means of generically preventing rogue-key attacks is to make it mandatory for users to prove knowledge (or possession) of the secret key during public key registration with a certification authority [18]. This registration setting is known as the KOSK assumption. The pairing-based multi-signature schemes described in [19] and [20] rely on the KOSK assumption in order to maintain security. However, according to [18] and [21], the complexity and expense of the scheme, and the unrealistic and burdensome assumptions it places on the Public-key Infrastructure (PKI), have made this solution problematic.

As it stands, [21] provides one of the most practical multi-signature schemes, based on the Schnorr signature scheme, which is provably secure without any assumption on the key setup. Since the only requirement of this scheme is that each potential signer has a public key, this setting is referred to as the plain public-key model.

Reference [17] solves the rogue-key attack using a sophisticated interactive key generation protocol.

In [22], the number of rounds was reduced from three to two, using a homomorphic commitment scheme. Unfortunately, this increases the signature size and the computational cost of signing and verification.

Reference [23] proposed a signature scheme that involved the "double hashing" technique, which sees the reduction of the signature size compared to [22], while using only two rounds.

However, neither of these two variants allows for key aggregation.

Multi-signature schemes supporting key aggregation are easier to come by in the KOSK model. In particular, [24] proposed the CoSi scheme, which can be seen as the naive Schnorr multi-signature scheme, where the cosigners are organized in a tree structure for fast signature generation.

Interactive Aggregate Signatures

In some situations, it may be useful to allow each participant to sign a different message rather than a single common one. An IAS is one where each signer has their own message $ m_{i} $ to sign, and the joint signature proves that the $ i $ -th signer has signed $ m_{i} ​$. These schemes are considered to be more general than multi-signature schemes. However, they are not as flexible as non-interactive aggregate signatures ([25], [26]) and sequential aggregate signatures [27].

According to [21], a generic way of turning any multi-signature scheme into an IAS scheme, is for the signer running the multi-signature protocol to use the tuple of all public keys/message pairs involved in the IAS protocol, as the message.

For BN's scheme and Schnorr multi-signatures, this does not increase the number of communication rounds, as messages can be sent together with shares $ R_{i} $.

Applications of Interactive Aggregate Signatures

With regard to digital currency schemes, where all participants have the ability to validate transactions, these transactions consist of outputs (which have a verification key and amount) and inputs (which are references to outputs of earlier transactions). Each input contains a signature of a modified version of the transaction to be validated with its referenced output's key. Some outputs may require multiple signatures to be spent. Transactions spending such an output are referred to as m-of-n multi-signature transactions [28], and the current implementation corresponds to the trivial way of building a multi-signature scheme by concatenating individual signatures. Additionally, a threshold policy can be enforced where only $ m $ valid signatures out of the $ n $ possible ones are needed to redeem the transaction. (Again, this is the most straightforward way to turn a multi-signature scheme into some kind of basic threshold signature scheme.)

While several multi-signature schemes could offer an improvement over the currently available method, two properties increase the possible impact:

  • The availability of key aggregation removes the need for verifiers to see all the involved keys, improving bandwidth, privacy and validation cost.
  • Security under the plain public-key model enables multi-signatures across multiple inputs of a transaction, where the choice of signers cannot be committed to in advance. This greatly increases the number of situations in which multi-signatures are beneficial.

Native Multi-signature Support

An improvement is to replace the need for implementing $ n-of-n $ multi-signatures with a constant-size, multi-signature primitive such as BN. While this is an improvement in terms of size, it still needs to contain all of the signers' public keys. Key aggregation improves upon this further, as a single-key predicate can be used instead. This is smaller and has lower computational cost for verification. Predicate encryption is an encryption paradigm that gives a master secret key owner fine-grained control over access to encrypted data [29]. It also improves privacy, as the participant keys and their count remain private to the signers.

When generalizing to the $ m-of-n $ scenario, several options exist. One is to forego key aggregation, and still include all potential signer keys in the predicates, while still only producing a single signature for the chosen combination of keys. Alternatively, a Merkle tree [30] where the leaves are permitted combinations of public keys (in aggregated form), can be employed. The predicate in this case would take as input an aggregated public key, a signature and a proof. Its validity would depend on the signature being valid with the provided key, and the proof establishing that the key is in fact one of the leaves of the Merkle tree, identified by its root hash. This approach is very generic, as it works for any subset of combinations of keys, and as a result has good privacy, as the exact policy is not visible from the proof.
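
The Merkle-tree approach above can be sketched briefly: the leaves are stand-in aggregated keys for each permitted combination, only the root would be committed on-chain, and a spender reveals one leaf plus a membership proof. The helper functions and key labels below are hypothetical illustrations, not any particular wallet's format.

```python
import hashlib
from itertools import combinations

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(nodes):
    if len(nodes) == 1:
        return nodes[0]
    if len(nodes) % 2:
        nodes = nodes + [nodes[-1]]          # duplicate last node on odd levels
    return merkle_root([sha(nodes[i] + nodes[i + 1])
                        for i in range(0, len(nodes), 2)])

def merkle_proof(nodes, idx):
    proof = []
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes = nodes + [nodes[-1]]
        proof.append((nodes[idx ^ 1], idx % 2 == 0))  # (sibling, am I left child?)
        nodes = [sha(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
        idx //= 2
    return proof

def merkle_verify(leaf, proof, root):
    h = leaf
    for sibling, i_am_left in proof:
        h = sha(h + sibling) if i_am_left else sha(sibling + h)
    return h == root

# Hypothetical 2-of-3 policy: one leaf per permitted (aggregated) key pair
agg_keys = [b"aggkey-AB", b"aggkey-AC", b"aggkey-BC"]  # stand-ins for real keys
leaves = [sha(k) for k in agg_keys]
root = merkle_root(leaves)                   # only this root is committed on-chain

# A spender reveals one aggregated key plus a proof that it is a permitted leaf
proof = merkle_proof(leaves, 1)
assert merkle_verify(leaves[1], proof, root)
```

The proof reveals only the chosen leaf and a logarithmic number of sibling hashes; the other permitted combinations, and the policy itself, stay hidden.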

Some key aggregation schemes that do not protect against rogue-key attacks can be used instead in the above cases, under the assumption that the sender is given a proof of knowledge/possession for the receivers' private keys. However, these schemes are difficult to prove secure, except by using very large proofs of knowledge. As those proofs of knowledge/possession do not need to be seen by verifiers, they are effectively certified by the sender's validation. However, passing them around to senders is inconvenient, and easy to get wrong. Using a scheme that is secure in the plain public-key model categorically avoids these concerns.

Another alternative is to use an algorithm whose key generation requires a trusted setup, e.g. in the KOSK model. While many of these schemes have been proven secure, they rely on mechanisms that are usually not implemented by certification authorities ([18], [19], [20], [21]).

Cross-input Multi-signatures

The previous sections explained how the number of signatures per input can generally be reduced to one. However, it is possible to go further and replace this with a single signature per transaction. Doing so requires a fundamental change in validation semantics, as the validity of separate inputs is no longer independent. As a result, the outputs can no longer be modeled as predicates. Instead, they are modeled as functions that return a Boolean (a data type with only two possible values) plus a set of zero or more public keys.

Overall validity requires all returned Booleans to be True and a multi-signature of the transaction with $ L $ the union of all returned keys.

With regard to Bitcoin, this can be implemented by providing an alternative to the signature checking opcode OP_CHECKSIG and related opcodes in the Script language. Instead of returning the result of an actual ECDSA verification, they always return True, but additionally add the public key with which the verification would have taken place to a transaction-wide multi-set of keys. Finally, after all inputs are verified, a multi-signature present in the transaction is verified against that multi-set. In case the transaction spends inputs from multiple owners, they will need to collaborate to produce the multi-signature, or choose to only use the original opcodes. Adding these new opcodes is possible in a backward-compatible way [4].
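The validation flow just described can be sketched end-to-end. A stand-in `checksig_collect` models the modified opcode (always returns True, records its key), and the owners then jointly produce one naive aggregate signature over the whole transaction, verified against the collected multiset. Everything here reuses the toy group and is an illustrative assumption, not Bitcoin's actual Script semantics.

```python
import hashlib

p, q, g = 2039, 1019, 4   # toy order-q subgroup of Z_p* (illustration only)

def H(*parts) -> int:
    data = b"||".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def checksig_collect(key: int, key_multiset: list) -> bool:
    """Modified OP_CHECKSIG: always returns True, but records the key."""
    key_multiset.append(key)
    return True

# Two inputs owned by different parties; each input's script 'checks' one key
x = {"alice": 111, "bob": 222}               # private keys
r = {"alice": 333, "bob": 444}               # private nonces
tx = "spend both inputs"

key_multiset = []
scripts_ok = all(
    checksig_collect(pow(g, x[w], p), key_multiset) for w in ("alice", "bob")
)

# The owners collaborate on one aggregate signature for the whole transaction
P_agg, R_agg = 1, 1
for w in ("alice", "bob"):
    P_agg = (P_agg * pow(g, x[w], p)) % p
    R_agg = (R_agg * pow(g, r[w], p)) % p
e = H(P_agg, R_agg, tx)
s = sum(r[w] + e * x[w] for w in ("alice", "bob")) % q

# Final check: all scripts passed AND the signature covers the key multiset
P_check = 1
for k in key_multiset:
    P_check = (P_check * k) % p
assert scripts_ok and pow(g, s, p) == (R_agg * pow(P_check, e, p)) % p
```

One signature now authorizes the entire transaction, regardless of how many inputs it spends.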

Protection against Rogue-key Attacks

In Bitcoin, when taking cross-input signatures into account, there is no published commitment to the set of signers, as each transaction input can independently spend an output that requires authorization from distinct participants. Restricting this functionality is undesirable, as it would interfere with fungibility improvements such as CoinJoin [31]. Due to the lack of certification, security against rogue-key attacks is of great importance.

If it is assumed that transactions use a single multi-signature that is vulnerable to rogue-key attacks, an attacker could identify an arbitrary number of outputs they want to steal, with public keys $ X_{1},...,X_{n-t} $, and then use the rogue-key attack to determine $ X_{n-t+1},...,X_{n} $ such that they can sign for the aggregated key $ \tilde{X} $. They would then send a small amount of their own money to outputs with predicates corresponding to the keys $ X_{n-t+1},...,X_{n} $. Finally, they can create a transaction that spends all of the victims' coins, together with the ones they have just created, by forging a multi-signature for the whole transaction.
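
The core of the rogue-key attack is easy to demonstrate against naive key aggregation (multiplying the keys together with no per-key coefficients). In the toy group below, the attacker subtracts the victim's key from their own, so the aggregate collapses to a key the attacker alone controls. MuSig's hash-derived coefficients $H_{agg}(\langle L \rangle, X_i)$ are precisely what blocks this; the key values are illustrative.

```python
p, q, g = 2039, 1019, 4   # toy order-q subgroup of Z_p* (illustration only)

x1 = 555                                     # honest victim's private key
X1 = pow(g, x1, p)                           # victim's published public key

x2 = 777                                     # attacker's secret
X2 = (pow(g, x2, p) * pow(X1, -1, p)) % p    # rogue key: X2 = g^x2 / X1

# Naive aggregation simply multiplies the public keys together...
X_agg = (X1 * X2) % p

# ...so the aggregate equals g^x2: the attacker can sign for it alone,
# without ever knowing the victim's key x1
assert X_agg == pow(g, x2, p)
```

Note the attacker never needs $x_1$; they only need to see $X_1$ before choosing their own key, which is exactly the freedom the plain public-key model must tolerate and MuSig must defend against.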

It can be seen that in the case of multi-signatures across inputs, theft can occur through the ability to forge a signature over a set of keys that includes at least one key which is not controlled by the attacker. According to the plain public-key model, this is considered a win for the attacker. This is in contrast to the single-input multi-signature case, where theft is only possible by forging a signature for the exact (aggregated) keys contained in an existing output. As a result, it is no longer possible to rely on proofs of knowledge/possession that are private to the signers.

Formation of MuSig

Notation Used

This section gives the general notation of mathematical expressions when specifically referenced. This information is important pre-knowledge for the remainder of the report.

  • Let $ p $ be a large prime number.
  • Let $ \mathbb{G} $ denote a cyclic group of prime order $ p $.
  • Let $ \mathbb{Z}_p $ denote the ring of integers modulo $ p $.
  • Let a generator of $ \mathbb{G} $ be denoted by $ g $. Thus, there exists a number $ g \in\mathbb{G} $ such that $ \mathbb{G} = \lbrace 1, \mspace{3mu}g, \mspace{3mu}g^2,\mspace{3mu}g^3, ..., \mspace{3mu}g^{p-1} \rbrace ​$.
  • Let $ \textrm{H} $ denote the hash function.
  • Let $ S= \lbrace (X_{1}, m_{1}),..., (X_{n}, m_{n}) \rbrace ​$ be the multi-set of all public key/message pairs of all participants, where $ X_{1}=g^{x_{1}} ​$.
  • Let $ \langle S \rangle $ denote a lexicographic encoding of the multi-set of public key/message pairs in $ S $.
  • Let $ L= \lbrace X_{1}=g^{x_{1}},...,X_{n}=g^{x_{n}} \rbrace ​$ be the multi-set of all public keys.
  • Let $ \langle L \rangle $ denote a lexicographic encoding of the multi-set of public keys $ L= \lbrace X_{1},...,X_{n} \rbrace $.
  • Let $ \textrm{H}_{com} $ denote the hash function in the commitment phase.
  • Let $ \textrm{H}_{agg} $ denote the hash function used to compute the aggregated key.
  • Let $ \textrm{H}_{sig} ​$ denote the hash function used to compute the signature.
  • Let $ X_{1} $ and $ x_{1} $ be the public and private key of a specific signer.
  • Let $ m $ be the message that will be signed.
  • Let $ X_{2},...,X_{n} $ be the public keys of other cosigners.

Recap on Schnorr Signature Scheme

The Schnorr signature scheme [6] uses group parameters $ (\mathbb{G}, p, g) $ and a hash function $ \textrm{H} $.

A private/public key pair is a pair

$$ (x,X) \in \lbrace 0,...,p-1 \rbrace \times \mathbb{G} $$

where $ X=g^{x} $

To sign a message $ m $, the signer draws a random integer $ r \in \mathbb{Z}_{p} $ and computes

$$ \begin{aligned} R &= g^{r} \\ c &= \textrm{H}(X,R,m) \\ s &= r+cx \end{aligned} $$

The signature is the pair $ (R,s) $, and its validity can be checked by verifying whether $$ g^{s} = RX^{c} $$

This scheme is referred to as the "key-prefixed" variant of the scheme, which sees the public key hashed together with $ R ​$ and $ m ​$ [7]. This variant was thought to have a better multi-user security bound than the classic variant [8]. However, in [9], the key-prefixing was seen as unnecessary to enable good multi-user security for Schnorr signatures.

For the development of the MuSig Schnorr-based multi-signature scheme [4], key-prefixing is a requirement for the security proof, despite not knowing the form of an attack. The rationale also follows the process in reality, as messages signed in Bitcoin always indirectly commit to the public key.
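The key-prefixed Schnorr scheme above can be sketched in a few lines of Python. This is a toy illustration only: the group parameters (a small subgroup of $ \mathbb{Z}_P^* $) are far too small for real use, and SHA-256 reduced modulo the group order stands in for the hash function $ \textrm{H} $.

```python
import hashlib

# Toy group parameters (illustration only -- far too small for real use):
# the order-q subgroup of Z_P^*, where P = 2q + 1 is a safe prime.
P = 2039          # modulus of the multiplicative group
q = 1019          # prime order of the subgroup generated by g
g = 4             # generator of the order-q subgroup

def H(*parts) -> int:
    """Hash arbitrary values to an integer challenge in Z_q."""
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(x: int):
    """Private key x in Z_q, public key X = g^x."""
    return x % q, pow(g, x, P)

def sign(x: int, X: int, m: str, r: int):
    """Key-prefixed Schnorr signature: c = H(X, R, m), s = r + c*x mod q."""
    R = pow(g, r, P)
    c = H(X, R, m)
    s = (r + c * x) % q
    return R, s

def verify(X: int, m: str, R: int, s: int) -> bool:
    """Accept iff g^s = R * X^c."""
    c = H(X, R, m)
    return pow(g, s, P) == (R * pow(X, c, P)) % P

x, X = keygen(123)
R, s = sign(x, X, "hello", r=77)
assert verify(X, "hello", R, s)
```

Because $ s = r + cx \bmod p $, the verification identity $ g^{s} = g^{r} g^{cx} = R X^{c} $ holds by construction.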

Design of Schnorr Multi-signature Scheme

The naive way to design a Schnorr multi-signature scheme would be as follows:

A group of $ n $ signers want to cosign a message $ m $. Each cosigner randomly generates and communicates to others a share

$$ R_i = g^{r_{i}} $$

Each of the cosigners then computes:

$$ R = \prod _{i=1}^{n} R_{i} \mspace{30mu} \mathrm{and} \mspace{30mu} c = \textrm{H} (\tilde{X},R,m) $$

where $$ \tilde{X} = \prod_{i=1}^{n}X_{i} $$

is the product of the individual public keys. The partial signature is then given by

$$ s_{i} = r_{i}+cx_{i} $$

All partial signatures are then combined into a single signature $(R,s)​$ where

$$ s = \displaystyle\sum_{i=1}^{n}s_i \mod p ​ $$

The validity of a signature $ (R,s) $ on message $ m $ for public keys $ \lbrace X_{1},...X_{n} \rbrace $ is equivalent to

$$ g^{s} = R\tilde{X}^{c} $$

where

$$ \tilde{X} = \prod_{i=1}^{n} X_{i} \mspace{30mu} \mathrm{and} \mspace{30mu} c = \textrm{H}(\tilde{X},R,m) $$

Note that this is exactly the verification equation for a traditional key-prefixed Schnorr signature with respect to public key $ \tilde{X} $, a property termed key aggregation. However, these protocols are vulnerable to a rogue-key attack ([12], [14], [15] and [17]) where a corrupted signer sets its public key to

$$ X_{1}=g^{x_{1}} (\prod_{i=2}^{n} X_{i})^{-1} $$

allowing the signer to produce signatures for public keys $ \lbrace X_{1},...X_{n} \rbrace ​$ by themselves.
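The attack can be checked numerically. In this hedged sketch (same toy subgroup of $ \mathbb{Z}_P^* $ as before, parameters illustrative only), the attacker's published key cancels out the honest keys, so the naive aggregated key collapses to one the attacker fully controls:

```python
# Demonstrates that with naive aggregation X~ = prod X_i, an attacker who
# publishes X_1 = g^{x_1} * (prod_{i>=2} X_i)^{-1} collapses the aggregate
# key to g^{x_1}, which they alone control.
P, q, g = 2039, 1019, 4   # toy subgroup of Z_P^* of prime order q

# honest cosigners publish X_2, X_3
x2, x3 = 55, 200
X2, X3 = pow(g, x2, P), pow(g, x3, P)

# attacker picks x1 and publishes the rogue key
x1 = 321
inv = pow(X2 * X3 % P, P - 2, P)        # inverse via Fermat's little theorem
X1 = pow(g, x1, P) * inv % P

# aggregate key as the naive scheme computes it
X_agg = X1 * X2 * X3 % P

# the aggregate equals g^{x1}: the attacker can sign alone for all three keys
assert X_agg == pow(g, x1, P)
```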

Bellare and Neven Signature Scheme

Bellare and Neven [21] proceeded differently in order to avoid any key setup. A group of $ n $ signers want to cosign a message $ m $. Their main idea is to have each cosigner use a distinct "challenge" when computing their partial signature

$$ s_{i} = r_{i}+c_{i}x_{i} $$

defined as

$$ c_{i} = \textrm{H}( \langle L \rangle , X_{i},R,m) $$

where

$$ R = \prod_{i=1}^{n}R_{i} $$

The equation to verify signature $ (R,s) $ on message $ m $ for the public keys $ L $ is

$$ g^s = R\prod_{i=1}^{n}X_{i}^{c_{i}} $$

A preliminary round is also added to the signature protocol, where each signer commits to its share $ R_i $ by sending $ t_i = \textrm{H}^\prime(R_i) $ to other cosigners first.

This stops any cosigner from setting $ R = \prod_{i=1}^{n}R_{i} ​$ to some maliciously chosen value, and also allows the reduction to simulate the signature oracle in the security proof.

Bellare and Neven [21] showed that this yields a multi-signature scheme provably secure in the plain public-key model under the Discrete Logarithm assumptions, modeling $ \textrm{H} $ and $ \textrm{H}^\prime $ as random oracles. However, this scheme does not allow key aggregation anymore, since the entire list of public keys is required for verification.
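A minimal sketch of the BN signing and verification equations, over a toy subgroup of $ \mathbb{Z}_P^* $ with fixed nonces for reproducibility (in a real run the nonces are random, and the commitment round of $ t_i = \textrm{H}^\prime(R_i) $ precedes the exchange of the $ R_i $):

```python
import hashlib

P, q, g = 2039, 1019, 4   # toy subgroup of Z_P^* of prime order q

def H(*parts) -> int:
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

xs = [13, 55, 200]                      # private keys of the three cosigners
Xs = [pow(g, x, P) for x in xs]         # public keys
L = tuple(sorted(Xs))                   # lexicographic encoding <L>
m = "pay to carol"

rs = [7, 31, 99]                        # per-signer nonces (fixed for the demo)
R = 1
for r in rs:
    R = R * pow(g, r, P) % P

# each signer computes its own distinct challenge and partial signature
cs = [H(L, Xi, R, m) for Xi in Xs]
s = sum(r + c * x for r, c, x in zip(rs, cs, xs)) % q

# verification: g^s = R * prod_i X_i^{c_i} -- note the whole key list is needed
rhs = R
for Xi, ci in zip(Xs, cs):
    rhs = rhs * pow(Xi, ci, P) % P
assert pow(g, s, P) == rhs
```

The verifier needs every $ X_i $ and every per-signer challenge $ c_i $, which is exactly why this scheme does not support key aggregation.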

MuSig Signature Scheme

MuSig is parameterized by group parameters $ (\mathbb{G}, p, g) $ and three hash functions $ ( \textrm{H}_{com} , \textrm{H}_{agg} , \textrm{H}_{sig} ) $ from $ \lbrace 0,1 \rbrace ^{*} $ to $ \lbrace 0,1 \rbrace ^{l} $ (constructed from a single hash function, using proper domain separation).

Round 1

A group of $ n $ signers want to cosign a message $ m $. Let $ X_1 $ and $ x_1 $ be the public and private key of a specific signer, let $ X_2 , . . . , X_n $ be the public keys of other cosigners and let $ \langle L \rangle $ be the multi-set of all public keys involved in the signing process.

For $ i \in \lbrace 1,...,n \rbrace $, the signer computes the following

$$ a_{i} = \textrm{H}_{agg}(\langle L \rangle,X_{i}) $$

as well as the "aggregated" public key

$$ \tilde{X} = \prod_{i=1}^{n}X_{i}^{a_{i}} ​ $$

Round 2

The signer generates a random private nonce $ r_{1}\leftarrow\mathbb{Z_{\mathrm{p}}} $, computes $ R_{1} = g^{r_{1}} $ (the public nonce) and commitment $ t_{1} = \textrm{H}_{com}(R_{1}) $ and sends $t_{1}​$ to all other cosigners.

When receiving the commitments $t_{2},...,t_{n}$ from the other cosigners, the signer sends $R_{1}$ to all other cosigners. This ensures that the public nonce is not exposed until all commitments have been received.

Upon receiving $R_{2},...,R_{n}​$ from other cosigners, the signer verifies that $t_{i}=\textrm{H}_{com}(R_{i})​$ for all $ i\in \lbrace 2,...,n \rbrace ​$.

The protocol is aborted if this is not the case.

Round 3

If all commitments and public nonces have been verified with $ \textrm{H}_{com} $ as described above, the following is computed:

$$ \begin{aligned} R &= \prod^{n}_{i=1}R_{i} \\ c &= \textrm{H}_{sig} (\tilde{X},R,m) \\ s_{1} &= r_{1} + ca_{1} x_{1} \mod p \end{aligned} $$

Signature $s_{1}​$ is sent to all other cosigners. When receiving $ s_{2},...s_{n} ​$ from other cosigners, the signer can compute $ s = \sum_{i=1}^{n}s_{i} \mod p​$. The signature is $ \sigma = (R,s) ​$.

In order to verify the aggregated signature $ \sigma = (R,s) ​$, given a lexicographically encoded multi-set of public keys $ \langle L \rangle ​$ and message $ m ​$, the verifier computes:

$$ \begin{aligned} a_{i} &= \textrm{H}_{agg}(\langle L \rangle,X_{i}) \mspace{9mu} \textrm {for} \mspace{9mu} i \in \lbrace 1,...,n \rbrace \\ \tilde{X} &= \prod_{i=1}^{n}X_{i}^{a_{i}} \\ c &= \textrm{H}_{sig} (\tilde{X},R,m) \end{aligned} $$

then accepts the signature if

$$ g^{s} = R\prod_{i=1}^{n}X_{i}^{a_{i}c}=R\tilde{X}^{c} $$
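The three rounds and the verification equation can be condensed into a short sketch. This is illustrative only: the group is a toy subgroup of $ \mathbb{Z}_P^* $, the nonces are fixed for reproducibility, and the commitment round is omitted for brevity.

```python
import hashlib

P, q, g = 2039, 1019, 4   # toy subgroup of Z_P^* of prime order q

def H(tag: str, *parts) -> int:
    """Domain-separated hash to Z_q; 'tag' selects H_agg or H_sig."""
    data = (tag + "|" + "|".join(str(p) for p in parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

xs = [13, 55, 200]                       # private keys of the three cosigners
Xs = [pow(g, x, P) for x in xs]          # public keys
L = tuple(sorted(Xs))                    # encoding <L> of the key multi-set
m = "pay to carol"

# round 1: key aggregation with a_i = H_agg(<L>, X_i)
a = [H("agg", L, Xi) for Xi in Xs]
X_agg = 1
for Xi, ai in zip(Xs, a):
    X_agg = X_agg * pow(Xi, ai, P) % P

# round 2: nonces (the commitments t_i = H_com(R_i) are omitted here)
rs = [7, 31, 99]
R = 1
for r in rs:
    R = R * pow(g, r, P) % P

# round 3: shared challenge and partial signatures s_i = r_i + c*a_i*x_i
c = H("sig", X_agg, R, m)
s = sum(r + c * ai * x for r, ai, x in zip(rs, a, xs)) % q

# verification needs only the single aggregated key
assert pow(g, s, P) == R * pow(X_agg, c, P) % P
```

Unlike the BN sketch, the verifier here only needs $ \tilde{X} $, $ R $ and $ s $: the equation is exactly an ordinary key-prefixed Schnorr verification against the aggregated key.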


In a previous version of [4], published on 15 January 2018, a two-round variant of MuSig was proposed, in which the initial commitment round is omitted, with a claimed security proof under the One-More Discrete Logarithm (OMDL) assumption ([32], [33]). The authors of [34] subsequently discovered a flaw in the security proof and showed, through a meta-reduction, that the initial multi-signature scheme cannot be proved secure using an algebraic black-box reduction under the DL or OMDL assumption.

In more detail, it was observed that in the two-round variant of MuSig, an adversary (controlling public keys $ X_{2},...,X_{n} $) can impose the value of $ R=\prod_{i=1}^{n}R_{i} $ used in signature protocols, since they can choose $ R_{2},...,R_{n} $ after having received $ R_{1} $ from the honest signer (controlling public key $ X_{1}=g^{x_{1}} $). This prevents the usual method of simulating the honest signer in the Random Oracle model without knowing $ x_{1} $: randomly drawing $ s_{1} $ and $ c $, computing $ R_1=g^{s_1}(X_1)^{-a_1c}$, and later programming $ \textrm{H}_{sig}(\tilde{X}, R, m) := c $. The simulation fails because the adversary might have made the random oracle query $ \textrm{H}_{sig}(\tilde{X}, R, m) $ before engaging in the corresponding signature protocol.

Despite this, there is no attack currently known against the two-round variant of MuSig and it might be secure, although this is not provable under standard assumptions from existing techniques [4].

Turning Bellare and Neven’s Scheme into an IAS Scheme

In order to change the BN multi-signature scheme into an IAS scheme, [4] proposed the following scheme, which includes a fix to make the execution of the signing algorithm dependent on the message index.

If $ X = g^{x} $ is the public key of a specific signer (with private key $ x $) and $ m $ the message they want to sign, and

$$ S^\prime = \lbrace (X^\prime_{1}, m^\prime_{1}),..., (X^\prime_{n-1}, m^\prime_{n-1}) \rbrace $$

is the set of the public key/message pairs of other signers, this specific signer merges $ (X, m) $ and $ S^\prime $ into the ordered set

$$ \langle S \rangle \mspace{6mu} \mathrm{of} \mspace{6mu} S = \lbrace (X_{1}, m_{1}),..., (X_{n}, m_{n}) \rbrace $$

and retrieves the resulting message index $ i ​$ such that

$$ (X_{i}, m_{i}) = (X, m) $$

Each signer then draws $ r_{i}\leftarrow\mathbb{Z}_{p} $, computes $ R_{i} = g^{r_{i}} $, sends commitment $ t_{i} = H^\prime(R_{i}) $ in a first round and then $ R_{i} $ in a second round, and computes

$$ R = \prod_{i=1}^{n}R_{i} $$

The signer with message index $ i ​$ then computes:

$$ \begin{aligned} c_{i} &= H(R, \langle S \rangle, i) \\ s_{i} &= r_{i} + c_{i}x_{i} \mod p \end{aligned} $$

and then sends $ s_{i} ​$ to other signers. All signers can compute

$$ s = \displaystyle\sum_{i=1}^{n}s_{i} \mod p $$

The signature is $ \sigma = (R, s) ​$.

Given an ordered set $ \langle S \rangle \mspace{6mu} \mathrm{of} \mspace{6mu} S = \lbrace (X_{1}, m_{1}),...,(X_{n}, m_{n}) \rbrace $ and a signature $ \sigma = (R, s) $ then $ \sigma $ is valid for $ S ​$ when

$$ g^s = R\prod_{i=1}^{n}X_{i} ^{H(R, \langle S \rangle, i)} $$

There is no need to include $ \langle L \rangle $ in the hash computation nor the public key $ X_i $ of the local signer, since they are already "accounted for" through ordered set $ \langle S \rangle $ and the message index $ i $.
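Under the same toy-group assumptions as the earlier sketches, the IAS verification equation can be checked numerically (nonces fixed for reproducibility; the commitment round is omitted):

```python
import hashlib

P, q, g = 2039, 1019, 4   # toy subgroup of Z_P^* of prime order q

def H(*parts) -> int:
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

xs = [13, 55]                                  # private keys
Xs = [pow(g, x, P) for x in xs]                # public keys
msgs = ["pay 1 to alice", "pay 2 to bob"]      # each signer signs its own message
S = tuple(sorted(zip(Xs, msgs)))               # ordered set <S>

rs = [7, 31]                                   # nonces (fixed for the demo)
R = 1
for r in rs:
    R = R * pow(g, r, P) % P

# each signer retrieves its message index i in <S> and signs with c_i = H(R, <S>, i)
s = 0
for x, r, Xi, mi in zip(xs, rs, Xs, msgs):
    i = S.index((Xi, mi))
    s = (s + r + H(R, S, i) * x) % q

# verification: g^s = R * prod_i X_i^{H(R, <S>, i)} over the ordered set
rhs = R
for i, (Xi, _mi) in enumerate(S):
    rhs = rhs * pow(Xi, H(R, S, i), P) % P
assert pow(g, s, P) == rhs
```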

Note: As of writing of this report, the secure IAS scheme presented here still needs to undergo a complete security analysis.

Conclusions, Observations and Recommendations

  • MuSig leads to both native and private multi-signature transactions with signature aggregation.
  • Signature data for multi-signatures can be large and cumbersome. MuSig will allow users to create more complex transactions without burdening the network and revealing compromising information.
  • The IAS case where each signer signs their own message must still be proven by a complete security analysis.


[1] P. Wuille, “Key Aggregation for Schnorr Signatures”, 2018. Available: Date accessed: 2019‑01‑20.

[2] Blockstream, “Schnorr Signatures for Bitcoin - BPASE ’18”, 2018. Available: Date accessed: 2019‑01‑20.

[3] K. Itakura, “A Public-key Cryptosystem Suitable for Digital Multisignatures”, NEC J. Res. Dev., Vol. 71, 1983. Available: Date accessed: 2019‑01‑20.

[4] G. Maxwell, A. Poelstra, Y. Seurin and P. Wuille, “Simple Schnorr Multi-signatures with Applications to Bitcoin”, pp. 1–34, 2018. Available: Date accessed: 2019‑01‑20.

[5] B. W. Contributors, “Multisignature”, 2017. Available: Date accessed: 2019‑01‑20.

[6] C. P. Schnorr, “Efficient Signature Generation by Smart Cards”, Journal of Cryptology, Vol. 4, No. 3, pp. 161‑174, 1991. Available: Date accessed: 2019‑01‑20.

[7] D. J. Bernstein, N. Duif, T. Lange, P. Schwabe and B. Yang, “High-speed High-security Signatures”, Journal of Cryptographic Engineering, Vol. 2, No. 2, pp. 77‑89, 2012. Available: Date accessed: 2019‑01‑20.

[8] D. J. Bernstein, “Multi-user Schnorr Security, Revisited”, IACR Cryptology ePrint Archive, Vol. 2015, p. 996, 2015. Available: Date accessed: 2019‑01‑20.

[9] E. Kiltz, D. Masny and J. Pan, “Optimal Security Proofs for Signatures from Identification Schemes”, Annual Cryptology Conference, pp. 33‑61, Springer, 2016. Available: Date accessed: 2019‑01‑20.

[10] C. M. Li, T. Hwang and N. Y. Lee, “Threshold-multisignature Schemes where Suspected Forgery Implies Traceability of Adversarial Shareholders”, Workshop on the Theory and Application of Cryptographic Techniques, pp. 194‑204, Springer, 1994. Available: Date accessed: 2019‑01‑20.

[11] L. Harn, “Group-oriented (t, n) Threshold Digital Signature Scheme and Digital Multisignature”, IEEE Proceedings-Computers and Digital Techniques, Vol. 141, No. 5, pp. 307‑313, 1994. Available: Date accessed: 2019‑01‑20.

[12] P. Horster, M. Michels and H. Petersen, “Meta-Multisignature Schemes Based on the Discrete Logarithm Problem”, Information Security-the Next Decade, pp. 128‑142, Springer, 1995. Available: Date accessed: 2019‑01‑20.

[13] K. Ohta and T. Okamoto, “A Digital Multisignature Scheme Based on the Fiat-Shamir Scheme,” International Conference on the Theory and Application of Cryptology, pp. 139‑148, Springer, 1991. Available: Date accessed: 2019‑01‑20.

[14] S. K. Langford, “Weaknesses in Some Threshold Cryptosystems”, Annual International Cryptology Conference, pp. 74‑82, Springer, 1996. Available: Date accessed: 2019‑01‑20.

[15] M. Michels and P. Horster, “On the Risk of Disruption in Several Multiparty Signature Schemes”, International Conference on the Theory and Application of Cryptology and Information Security, pp. 334‑345, Springer, 1996. Available: Date accessed: 2019‑01‑20.

[16] K. Ohta and T. Okamoto, “Multi-signature Schemes Secure Against Active Insider Attacks”, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. 82, No. 1, pp. 21‑31, 1999. Available: Date accessed: 2019‑01‑20.

[17] S. Micali, K. Ohta and L. Reyzin, “Accountable-subgroup Multisignatures”, Proceedings of the 8th ACM Conference on Computer and Communications Security, pp. 245‑254, ACM, 2001. Available: Date accessed: 2019‑01‑20.

[18] T. Ristenpart and S. Yilek, “The Power of Proofs-of-possession: Securing Multiparty Signatures Against Rogue-key Attacks”, Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 228‑245, Springer, 2007. Available: Date accessed: 2019‑01‑20.

[19] A. Boldyreva, “Threshold Signatures, Multisignatures and Blind Signatures Based on the Gap-Diffie-Hellman-group Signature Scheme”, International Workshop on Public Key Cryptography, pp. 31‑46, Springer, 2003. Available: Date accessed: 2019‑01‑20.

[20] S. Lu, R. Ostrovsky, A. Sahai, H. Shacham and B. Waters, “Sequential Aggregate Signatures and Multisignatures without Random Oracles,” Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 465‑485, Springer, 2006. Available: Date accessed: 2019‑01‑20.

[21] M. Bellare and G. Neven, “Multi-Signatures in the Plain Public-Key Model and a General Forking Lemma”, Acm Ccs, pp. 390‑399, 2006. Available: Date accessed: 2019‑01‑20.

[22] A. Bagherzandi, J. H. Cheon and S. Jarecki, “Multisignatures Secure under the Discrete Logarithm Assumption and a Generalized Forking Lemma”, Proceedings of the 15th ACM Conference on Computer and Communications Security - CCS '08, p. 449, 2008. Available: Date accessed: 2019‑01‑20.

[23] C. Ma, J. Weng, Y. Li and R. Deng, “Efficient Discrete Logarithm Based Multi-signature Scheme in the Plain Public Key Model”, Designs, Codes and Cryptography, Vol. 54, No. 2, pp. 121‑133, 2010. Available: Date accessed: 2019‑01‑20.

[24] E. Syta, I. Tamas, D. Visher, D. I. Wolinsky, P. Jovanovic, L. Gasser, N. Gailly, I. Khoffi and B. Ford, “Keeping Authorities 'Honest or Bust' with Decentralized Witness Cosigning”, Security and Privacy (SP), 2016 IEEE Symposium on pp. 526‑545, IEEE, 2016. Available: Date accessed: 2019‑01‑20.

[25] D. Boneh, C. Gentry, B. Lynn and H. Shacham, “Aggregate and Verifiably Encrypted Signatures from Bilinear Maps”, in International Conference on the Theory and Applications of Cryptographic Techniques, pp. 416‑432, Springer, 2003. Available: Date accessed: 2019‑01‑20.

[26] M. Bellare, C. Namprempre and G. Neven, “Unrestricted Aggregate Signatures”, International Colloquium on Automata, Languages, and Programming, pp. 411‑422, Springer, 2007. Available: Date accessed: 2019‑01‑20.

[27] A. Lysyanskaya, S. Micali, L. Reyzin and H. Shacham, “Sequential Aggregate Signatures from Trapdoor Permutations”, in International Conference on the Theory and Applications of Cryptographic Techniques, pp. 74‑90, Springer, 2004. Available: Date accessed: 2019‑01‑20.

[28] G. Andersen, “M-of-N Standard Transactions”, 2011. Available: Date accessed: 2019‑01‑20.

[29] E. Shen, E. Shi and B. Waters, “Predicate Privacy in Encryption Systems”, Theory of Cryptography Conference, pp. 457‑473, Springer, 2009. Available: Date accessed: 2019‑01‑20.

[30] R. C. Merkle, “A Digital Signature Based on a Conventional Encryption Function”, Conference on the Theory and Application of Cryptographic Techniques, pp. 369‑378, Springer, 1987. Available: Date accessed: 2019‑01‑20.

[31] G. Maxwell, “CoinJoin: Bitcoin Privacy for the Real World”, 2013. Available: Date accessed: 2019‑01‑20.

[32] M. Bellare and A. Palacio, “GQ and Schnorr Identification Schemes: Proofs of Security against Impersonation under Active and Concurrent Attacks”, Annual International Cryptology Conference, pp. 162–177, Springer, 2002. Available: Date accessed: 2019-01-20.

[33] M. Bellare, C. Namprempre, D. Pointcheval and M. Semanko, “The One-More-RSA Inversion Problems and the Security of Chaum’s Blind Signature Scheme”, Journal of Cryptology, Vol. 16, No. 3, 2003. Available: Date accessed: 2019‑01‑20.

[34] M. Drijvers, K. Edalatnejad, B. Ford, and G. Neven, “Okamoto Beats Schnorr: On the Provable Security of Multi-signatures”, Tech. Rep., 2018. Available: Date accessed: 2019‑01‑20.


Fraud Proofs and SPV (Lightweight) Clients - Easier Said than Done?


The bitcoin blockchain was, as of June 2018, approximately 173 GB in size [1]. This makes it impractical for most users to run a full bitcoin node, given the computational power, bandwidth and cost required. Such users must instead rely on Lightweight/Simplified Payment Verification (SPV) clients.

SPV clients will believe everything miners or nodes tell them, as evidenced by Peter Todd in the following screenshot of an Android client showing millions of bitcoins. The wallet was sent a transaction of 2.1 million BTC outputs [17]. Peter Todd modified the code for his node in order to deceive the bitcoin wallet, since the wallets cannot verify coin amounts [27] (code can be found in the "Quick-n-dirty hack to lie to SPV wallets" branch on his GitHub repository).

Courtesy: MIT Bitcoin Expo 2016 Day 1


In the original bitcoin whitepaper [2], Satoshi recognized the limitations described in Background, and introduced the concept of Simplified Payment Verification (SPV). This concept allows payments to be verified using a lightweight client that does not need to download the entire bitcoin blockchain. Instead, the client only downloads the block headers of the longest proof-of-work chain, and obtains the Merkle branch linking a transaction to its block [3]. The existence of the Merkle root in the chain, along with blocks added after the block containing the Merkle root, provides confirmation of the legitimacy of that chain.

Courtesy: Bitcoin: A Peer-to-Peer Electronic Cash System

In this system, full nodes would need to provide an alert (known as a fraud proof) to SPV clients when an invalid block is detected. The SPV clients would then be prompted to download the full block and alerted transactions to confirm the inconsistency [2]. An invalid block need not be the result of malicious intent; it could also stem from accidental accounting errors.

Full Node vs. SPV Client

A full bitcoin node contains the following details:

  • every block;
  • every transaction that has ever been sent;
  • all the unspent transaction outputs (UTXOs) [4].

An SPV client, however, contains:

  • a block header with transaction data relative to the client, including other transactions required to compute the Merkle root; or
  • just a block header with no transactions.

What are Fraud Proofs?

Fraud proofs are a way to improve the security of SPV clients [5] by providing a mechanism for full nodes to prove that a chain is invalid, irrespective of the amount of proof of work it has [5]. Fraud proofs could also help with the bitcoin scaling debate, as SPV clients are easier to run and could thus help with bitcoin scalability issues ([6], [18]).

Fraud Proofs Possible within Existing Bitcoin Protocol

At the time of writing (February 2019), different fraud cases in the bitcoin blockchain require different proofs. The following are the types of proofs needed for specific fraud cases within the existing bitcoin protocol [5]:

Invalid Transaction due to Stateless Criteria Violation (Correct Syntax, Input Scripts Conditions Satisfied, etc.)

In the case of an invalid transaction, the fraud proofs consist of:

  • the header of invalid block;
  • the invalid transaction;
  • an invalid block's Merkle tree containing the minimum number of nodes needed to prove the existence of the invalid transaction in the tree.

Invalid Transaction due to Input Already Spent

In this case, the fraud proof would consist of:

  • the header of the invalid block;
  • the invalid transaction;
  • proof that the invalid transaction is within the invalid block;
  • the header of the block containing the original spend transaction;
  • the original spending transaction;
  • proof showing that the spend transaction is within the header block of the spend transaction.

Invalid Transaction due to Incorrect Generation Output Value

In this case, the fraud proof consists of the block itself.

Invalid Transaction if Input does not Exist

In this case, the fraud proof consists of the entire blockchain.

Fraud Proofs Requiring Changes to Bitcoin Protocol

The following fraud proofs would require changes to the bitcoin protocol itself [5]:

Invalid Transaction if Input does not Exist in Old Blocks

In this case, the fraud proof consists of:

  • the header of the invalid block;
  • the invalid transaction;
  • proof that the header of the invalid block contains the invalid transaction;
  • proof that the header of the invalid block contains the leaf node corresponding to the non-existent input;
  • the block referenced by the leaf node, if it exists.

Missing Proof Tree Item

In this case, the fraud proof consists of:

  • the header of the invalid block;
  • the transaction of the missing proof tree node;
  • an indication as to which input from the transaction of the missing proof tree node is missing;
  • proof that the header of the invalid block contains the transition of the missing proof tree node;
  • proof that the proof tree contains two adjacent leaf nodes.

Universal Fraud Proofs (Suggested Improvement)

As can be seen, requiring different fraud proof constructions for different fraud proofs can get cumbersome. Al-Bassam, et al. [26] proposed a general, universal fraud-proof construction for most cases. Their proposition is to generalize the entire blockchain as a state transition system and represent the entire state as a Merkle root using a Sparse Merkle tree, with each transaction changing the state root of the blockchain. This can be simplified by this function:

  • transaction(state,tx) = State or Error

Courtesy: Fraud Proofs: Maximising Light Client Security and Scaling Blockchains with Dishonest Majorities

In the case of the bitcoin blockchain, representing the entire blockchain as a key-value store Sparse Merkle tree would mean:

  • Key = UTXO ID
  • Value = 1 if unspent or 0 if spent

Each transaction will change the state root of the blockchain and can be represented with this function:

  • TransitionRoot(stateRoot,tx,Witnesses) = stateRoot or Error

In this proposition, a valid fraud proof construction will consist of:

  • the transaction;
  • the pre-state root;
  • the post-state root;
  • witnesses (Merkle proofs of all the parts of the state the transaction accesses/modifies).

Also expressed as this function:

  • rootTransition(stateRoot, tx, witnesses) != stateRoot

A full node would thus send a light/SPV client this data to prove a valid fraud proof. The SPV client computes this function and, if the resulting transition root differs from the post-state root committed in the block, the block is rejected.

Courtesy: Fraud Proofs: Maximising Light Client Security and Scaling Blockchains with Dishonest Majorities

The post-state root can be excluded in order to save block space. However, this does increase the fraud proof size. This works with the assumption that the SPV client is connected to a minimum of one honest node.
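The check can be sketched with a deliberately simplified "state": a UTXO-ID to spent/unspent map whose root is a hash of the whole map, with the full map standing in for the Sparse Merkle witnesses a real system would use. All names here (`state_root`, `apply_tx`, `root_transition`) are hypothetical stand-ins for the real functions:

```python
import hashlib

def state_root(state: dict) -> str:
    """Toy 'Merkle root': a hash of the whole UTXO-ID -> spent/unspent map."""
    enc = "|".join(f"{k}:{v}" for k, v in sorted(state.items()))
    return hashlib.sha256(enc.encode()).hexdigest()

def apply_tx(state: dict, tx: dict) -> dict:
    """Spend tx['in'] (which must be unspent) and create tx['out']."""
    if state.get(tx["in"]) != 1:
        raise ValueError("input does not exist or is already spent")
    new_state = dict(state)
    new_state[tx["in"]] = 0
    new_state[tx["out"]] = 1
    return new_state

def root_transition(pre_root: str, tx: dict, witnesses: dict) -> str:
    """Light-client replay: check witnesses against pre_root, then apply tx."""
    assert state_root(witnesses) == pre_root, "witnesses do not match pre-state root"
    return state_root(apply_tx(witnesses, tx))

state = {"utxo-1": 1, "utxo-2": 1}
tx = {"in": "utxo-1", "out": "utxo-3"}
pre = state_root(state)

# honest block: the committed post-state root matches the replayed transition
post = state_root(apply_tx(state, tx))
assert root_transition(pre, tx, state) == post

# fraudulent block: a mismatching post-state root means the block is rejected
bad_post = state_root({"utxo-1": 0})
assert root_transition(pre, tx, state) != bad_post
```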

How SPV Clients Work

SPV clients make use of Bloom filters to receive only the transactions that are relevant to the user [7]. Bloom filters are probabilistic data structures used to check the existence of an element in a set quickly, by responding with a Boolean answer [9].

Courtesy: On the Privacy Provisions of Bloom Filters in Lightweight Bitcoin Clients
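A minimal Bloom filter sketch illustrates the probabilistic membership test (real BIP37 filters use MurmurHash3 with per-filter tweaks rather than SHA-256; this class is a hypothetical simplification):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash functions set k bits in a fixed-size bit field."""

    def __init__(self, size_bits: int = 64, n_hashes: int = 3):
        self.size = size_bits
        self.n = n_hashes
        self.bits = 0

    def _positions(self, item: bytes):
        # derive n independent bit positions by prefixing a counter byte
        for i in range(self.n):
            d = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(d, "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: bytes) -> bool:
        # False means definitely absent; True may be a false positive.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

f = BloomFilter()
f.add(b"my-address-1")
assert f.might_contain(b"my-address-1")
```

Because several items can set overlapping bits, `might_contain` can return false positives but never false negatives, which is the trade-off BIP37 exploits to blur which addresses a client is interested in.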

In addition to Bloom filters, SPV clients rely on Merkle trees [26] - binary structures that have a list of all the hashes between the block (apex) and the transaction (leaf). With Merkle trees, one only needs to check a small part of the block, called a Merkle root, to prove that the transaction has been accepted in the network [8].
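The Merkle branch check an SPV client performs can be sketched as follows (simplified: real bitcoin uses double SHA-256 and specific byte-order conventions not reproduced here):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold the leaf hashes pairwise up to a single root."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves, index):
    """Sibling hashes linking leaves[index] to the root."""
    level = [h(x) for x in leaves]
    branch = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        branch.append((level[sib], sib < index))   # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify_branch(tx, branch, root) -> bool:
    """What an SPV client does: hash the tx up the branch, compare with root."""
    node = h(tx)
    for sib, sib_is_left in branch:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root = merkle_root(txs)
proof = merkle_branch(txs, 2)
assert verify_branch(b"tx-c", proof, root)
assert not verify_branch(b"tx-x", proof, root)
```

The client needs only the branch (logarithmic in the number of transactions) and the header's Merkle root, never the full block.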

Fraud proofs are integral to the security of SPV clients. However, the other components in SPV clients are not without issues.

Security and Privacy Issues with SPV Clients

Weak Bloom Filters and Merkle Tree Designs

In August 2017, a weakness in the bitcoin Merkle tree design was found to reduce the security of SPV clients. This weakness could allow an attacker to simulate a payment of an arbitrary amount to a victim using an SPV wallet, and trick the victim into accepting it as valid [10]. The bitcoin Merkle tree makes no distinction between inner and leaf nodes, and could thus be manipulated by an attack that could reinterpret transactions as nodes and nodes as transactions [11]. This weakness is due to inner nodes having no format and only requiring the length to be 64 bytes.

A brute-force attack particularly affects systems that automatically accept SPV proofs, and could be carried out with an investment of approximately USD 3 million. One proposed solution is to ensure that no internal, 64-byte node is ever accepted as a valid transaction by SPV wallets/clients [11].

The BIP37 SPV [13] Bloom filters do not have relevant privacy features [7]. They leak information such as IP addresses of the user, and whether multiple addresses belong to a single owner [12] (if Tor or Virtual Private Networks (VPNs) are not used).

Furthermore, SPV clients pose a denial-of-service risk to full nodes, due to the processing load (80 GB disk reads) incurred when SPV clients sync; conversely, full nodes can deny service to SPV clients by returning NULL filter responses to requests [14]. Peter Todd [15] aptly demonstrates this risk.


To address these issues, a new concept called committed Bloom filters was introduced to improve the performance and security of SPV clients. In this concept, which can be used in lieu of BIP37 [16], a Bloom filter digest (BFD) of every block's inputs, outputs and transactions is created, with a filter that is small relative to the overall block size [14].

A second Bloom filter is created with all transactions and a binary comparison is made to determine matching transactions. This BFD allows the caching of filters by SPV clients without the need to recompute [16]. It also introduces semi-trusted oracles to improve the security and privacy of SPV clients by allowing SPV clients to download block data via any out of band method [14].

Examples of SPV Implementations

There are two well-known SPV implementations for bitcoin: bitcoinj and Electrum. The latter does SPV-level validation, comparing multiple Electrum servers against each other. It has very similar security to bitcoinj, but potentially better privacy [25], due to bitcoinj's implementation of Bloom filters [7].

Other Suggested Fraud-proof Improvements

Erasure Codes

Along with the proposed universal fraud-proof solution, erasure coding has been suggested to address the data availability problem for fraud proofs. Erasure coding allows a piece of data M chunks long to be expanded into a piece of data N chunks long (“chunks” can be of arbitrary size), such that any M of the N chunks can be used to recover the original data. Blocks are then required to commit to the Merkle root of this extended data, and light clients probabilistically check that the majority of the extended data is available [21].
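The M-of-N recovery property can be illustrated with the smallest possible example: M = 2 data chunks extended to N = 3 with one XOR parity chunk (real proposals use Reed-Solomon codes, which generalize this to arbitrary M and N):

```python
def extend(a: bytes, b: bytes):
    """Extend M = 2 equal-length data chunks to N = 3 with an XOR parity chunk."""
    assert len(a) == len(b)
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def recover(chunks):
    """Recover (a, b) from any 2 of the 3 chunks; a missing chunk is None."""
    a, b, p = chunks
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, p))   # a = b XOR parity
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, p))   # b = a XOR parity
    return a, b

a, b = b"left", b"rite"
coded = extend(a, b)
assert recover([None, coded[1], coded[2]]) == (a, b)   # parity repairs chunk a
assert recover([coded[0], None, coded[2]]) == (a, b)   # parity repairs chunk b
```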

According to the proposed solution, one of three conditions will be true for the SPV client when using erasure codes [20]:

  1. The entire extended data is available, the erasure code is constructed correctly and the block is valid.
  2. The entire extended data is available, the erasure code is constructed correctly, but the block is invalid.
  3. The entire extended data is available, but the erasure code is constructed incorrectly.

In case (1), the block is valid and the light client can accept it. In case (2), it is expected that some other node will quickly construct and relay a fraud proof. In case (3), it is also expected that some other node will quickly construct and relay a specialized kind of fraud proof that shows that the erasure code is constructed incorrectly.

Merklix Trees

Another suggested fraud-proof improvement for the bitcoin blockchain involves block sharding and validation using Merklix trees, which are essentially Merkle trees built over unordered sets [22]. This also assumes that there is at least one honest node per shard. Using Merklix proofs, the following can be proven [23]:

  1. A transaction is in the block.
  2. The transaction's inputs and outputs are or are not in the UTXO set.

In this scenario, SPV clients can be made aware of any invalidity in blocks and cannot be lied to about the UTXO set.

Payment Channels

Bitcoin is made to be resilient to denial of service (DoS) attacks. However, the same cannot be said for SPV clients. This could be an issue if malicious alerting nodes spam with false fraud proofs. A proposed solution to this is payment channels [6], due to them:

  1. operating at near instant speeds, thus allowing quick alerting of fraud proofs;
  2. facilitating micro-transactions;
  3. being robust to temporary mining failures (as they use long “custodial periods”).

In this way, the use of payment channels can help with incentivizing full nodes to issue fraud proofs.

Conclusions, Observations and Recommendations

Fraud proofs can be complex [6] and hard to implement. However, they appear to be necessary for scalability of blockchains and the security and privacy for SPV clients, since not everyone can or should want to run a full node to participate in the network. The current SPV implementations are working on improving the security and privacy of these SPV clients. Furthermore, for current blockchains, a hard or soft fork would need to be done in order to accommodate the data in the block headers.

Based on the payment channels fraud proof proposal that suggests some sort of incentive for nodes that issue alert/fraud proofs, it seems likely that some sort of fraud proof provider and consumer marketplace will have to emerge.

Where Tari is concerned, the universal fraud proof proposals, or something similar, would need to be investigated, as end-users of the protocol/network will mostly be using light clients. However, since these fraud proofs rely on the assumption of a minimum of one honest node, they will not be viable in the case of a digital issuer (which may be one or more nodes), as the digital issuer could be the sole node.


[1] "Size of the Bitcoin Blockchain from 2010 to 2018, by Quarter (in Megabytes)" [online].
Available: Date accessed: 2018‑09‑10.

[2] S. Nakamoto, "Bitcoin: A Peer-to-Peer Electronic Cash System" [online]. Available:
Date accessed: 2018‑09‑10.

[3] "Simple Payment Verification" [online]. Available: Date accessed: 2018‑09‑10.

[4] "SPV, Bloom Filters and Checkpoints" [online]. Available: Date accessed: 2018‑09‑10.

[5] "Improving the Ability of SPV Clients to Detect Invalid Chains" [online].
Available: Date accessed: 2018‑09‑10.

[6] "Meditations on Fraud Proofs" [online]. Available: Date accessed: 2018‑09‑10.

[7] A. Gervais, G. O. Karame, D. Gruber and S. Capkun, "On the Privacy Provisions of Bloom Filters in Lightweight Bitcoin Clients" [online]. Available: Date accessed: 2018‑09‑10.

[8] "SPV, Bloom Filters and Checkpoints" [online]. Available: Date accessed: 2018‑09‑10.

[9] "A Case of False Positives in Bloom Filters" [online].
Available: Date accessed: 2018‑09‑11.

[10] "The Design of Bitcoin Merkle Trees Reduces the Security of SPV Clients" [online].
Available: Date accessed: 2018‑09‑11.

[11] "Leaf-node Weakness in Bitcoin Merkle Tree Design" [online].
Available: Date accessed: 2018‑09‑11.

[12] "Privacy in Bitsquare" [online]. Available: Date accessed: 2018‑09‑11.

[13] "bip-0037.mediawiki" [online]. Available: Date accessed: 2018‑09‑11.

[14] "Committed Bloom Filters for Improved Wallet Performance and SPV Security" [online].
Available: Date accessed: 2018‑09‑11.

[15] "Bloom-io-attack" [online]. Available: Date accessed: 2018‑09‑11.

[16] "Committed Bloom Filters versus BIP37 SPV" [online].
Available: Date accessed: 2018‑09‑12.

[17] "Fraud Proofs" [online]. Available:
Date accessed: 2018‑09‑12.

[18] "New Satoshi Nakamoto E-mails Revealed" [online]. Available: Date accessed: 2018‑09‑12.

[19] J. Poon and V. Buterin, "Plasma: Scalable Autonomous Smart Contracts" [online]. Available:
Date accessed: 2018‑09‑13.

[20] "A Note on Data Availability and Erasure Coding" [online].
Available: Date accessed: 2018‑09‑13.

[21] "Vitalik Buterin and Peter Todd Go Head to Head in the Crypto Culture Wars" [online].
Available: Date accessed: 2018‑09‑14.

[22] "Introducing Merklix Tree as an Unordered Merkle Tree on Steroid" [online].
Available: Date accessed: 2018‑09‑14.

[23] "Using Merklix Tree to Shard Block Validation" [online].
Available: Date accessed: 2018‑09‑14.

[24] "Fraud Proofs" [online]. Available: Date accessed: 2018‑09‑18.

[25] "What's the Difference between an API Wallet and an SPV Wallet?" [online].
Available: Date accessed: 2018‑09‑21.

[26] M. Al-Bassam, A. Sonnino and V. Buterin, "Fraud Proofs: Maximising Light Client Security and Scaling Blockchains with Dishonest Majorities" [online]. Available: Date accessed: 2018‑10‑08.

[27] "Bitcoin Integration/Staging Tree" [online]. Available:
Date accessed: 2018‑10‑12.


Bulletproofs and Mimblewimble



Bulletproofs form part of the family of distinct Zero-knowledge Proof systems, such as Zero-Knowledge Succinct Non-Interactive ARguments of Knowledge (zk-SNARK), Succinct Transparent ARgument of Knowledge (STARK) and Zero Knowledge Prover and Verifier for Boolean Circuits (ZKBoo). Zero-knowledge proofs are designed so that a prover is able to indirectly verify that a statement is true without having to provide any information beyond the verification of the statement, e.g. to prove that a number is found that solves a cryptographic puzzle and fits the hash value without having to reveal the Nonce ([2], [4]).

The Bulletproofs technology is a Non-interactive Zero-knowledge (NIZK) proof protocol for general Arithmetic Circuits with very short proofs (Arguments of Knowledge) and without requiring a trusted setup. Bulletproofs rely on the Discrete Logarithm (DL) assumption and are made non-interactive using the Fiat-Shamir Heuristic. The name "Bulletproof" originated from a non-technical summary, by one of the original authors, of the scheme's properties: "Short like a bullet with bulletproof security assumptions" ([1], [29]).

Bulletproofs also implement a Multi-party Computation (MPC) protocol, whereby distributed proofs of multiple provers with secret committed values are aggregated into a single proof before the Fiat-Shamir challenge is calculated and sent to the verifier, thereby minimizing rounds of communication. Secret committed values will stay secret ([1], [6]).

The essence of Bulletproofs is its inner-product algorithm originally presented by Groth [13] and then further refined by Bootle et al. [12]. The latter development provided a proof (argument of knowledge) for two independent (not related) binding vector Pedersen Commitments that satisfied the given inner-product relation. Bulletproofs build on these techniques, which yield communication-efficient, zero-knowledge proofs, but offer a further replacement for the inner product argument that reduces overall communication by a factor of three ([1], [29]).


Mimblewimble is a blockchain protocol designed for confidential transactions. The essence is that a Pedersen Commitment to $ 0 $ can be viewed as an Elliptic Curve Digital Signature Algorithm (ECDSA) public key, and that for a valid confidential transaction, the difference between outputs, inputs and transaction fees must be $ 0 $. A prover constructing a confidential transaction can therefore sign the transaction with the difference of the outputs and inputs as the public key. This enables a greatly simplified blockchain in which all spent transactions can be pruned, and new nodes can efficiently validate the entire blockchain without downloading any old and spent transactions. The blockchain consists only of block-headers, remaining Unspent Transaction Outputs (UTXO) with their range proofs and an unprunable transaction kernel per transaction. Mimblewimble also allows transactions to be aggregated before being committed to the blockchain ([1], [20]).
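The balance trick can be illustrated with toy Pedersen commitments over a multiplicative group of integers, a stand-in for the elliptic-curve points actually used (in a real system, no party may know the discrete log relating the two generators; here it is known but unused):

```python
p = 2**127 - 1                          # a Mersenne prime; Z_p^* is our toy group
g, h = 3, pow(3, 0xDEADBEEF, p)         # pretend log_g(h) is unknown

def commit(v, b):
    """Toy Pedersen commitment g^v * h^b (written additively as vG + bH on a curve)."""
    return pow(g, v, p) * pow(h, b, p) % p

inputs  = [(10, 111)]                   # (value, blinding factor)
outputs = [(4, 222), (6, 333)]          # values balance: 4 + 6 = 10

c_in  = commit(*inputs[0])
c_out = commit(*outputs[0]) * commit(*outputs[1]) % p

# Because the values cancel, outputs minus inputs commits to 0: it equals
# h^excess, and the party who knows `excess` can sign with it as a private key.
excess = (222 + 333) - 111
assert c_out * pow(c_in, -1, p) % p == pow(h, excess, p)
```

This is the sense in which a commitment to zero behaves as a public key: only someone who knows the aggregate blinding factor can produce the corresponding signature.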

How do Bulletproofs Work?

The basis of confidential transactions is to replace the input and output amounts with Pedersen Commitments. It is then publicly verifiable that the transactions balance (the sum of the committed inputs is greater than the sum of the committed outputs, and all outputs are positive), while keeping the specific committed amounts hidden. This makes it a zero-knowledge transaction. The transaction amounts must be encoded as integers $ \mod q $, which can overflow, but are prevented from doing so by making use of range proofs. This is where Bulletproofs come in. The essence of Bulletproofs is their ability to calculate proofs, including range proofs, from inner products.

The prover must convince the verifier that commitment $ C(x,r) = xH + rG $ contains a number such that $ x \in [0,2^n - 1] $. If $ \mathbf {a} = (a_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} a_n) \in \{0,1\}^n $ is the vector containing the bits of $ x $, the basic idea is to hide all the bits of the amount in a single vector Pedersen Commitment. It must then be proven that each bit satisfies $ \omega(\omega-1) = 0 $, i.e. each $ \omega $ is either $ 0 $ or $ 1 $, and that they sum to $ x $. As part of the ensuing protocol, the verifier sends random linear combinations of constraints and challenges $ \in \mathbb{Z_p} $ to the prover. The prover is then able to construct a vectorized inner product relation containing the elements of $ \mathbf {a} $, the constraints and challenges $ \in \mathbb{Z_p} $, and appropriate blinding vectors $ \in \mathbb Z_p^n $.
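A minimal sketch of the bit-decomposition constraints (the commitments, random challenges and blinding vectors of the real protocol are omitted here):

```python
def bit_decompose(x, n):
    """Bits a_i of x, little-endian; the prover commits to these in one vector commitment."""
    return [(x >> i) & 1 for i in range(n)]

x, n = 37, 6
a = bit_decompose(x, n)

# Constraint 1: every entry is a bit, i.e. w * (w - 1) = 0.
assert all(w * (w - 1) == 0 for w in a)

# Constraint 2: the bits recompose to x, i.e. <a, (1, 2, 4, ..., 2^{n-1})> = x.
assert sum(w << i for i, w in enumerate(a)) == x

# Together these constraints show x lies in [0, 2^n - 1].
```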

These inner product vectors have size $ n $, which would require many expensive exponentiations. The Pedersen Commitment scheme, shown in Figure 1, allows for a vector to be cut in half, and for the two halves to be compressed together, each time calculating a new set of Pedersen Commitment generators. Applying the same trick repeatedly, $ \log_2 n $ times, produces a single value. This is applied to the inner product vectors; they are reduced interactively, over a logarithmic number of rounds between the prover and verifier, into a single multi-exponentiation of size $ 2n + 2 \log_2(n) + 1 $. This single multi-exponentiation can then be calculated much faster than $ n $ separate ones. All of this is made non-interactive using the Fiat-Shamir Heuristic.
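The halving step can be sketched with plain field arithmetic. This toy version omits the commitments and generators, and the exact challenge-weighting convention varies between write-ups; it folds an inner-product claim of size $ n $ down to a single product in $ \log_2 n $ rounds:

```python
p = 2**61 - 1                       # a prime field for the toy arithmetic

def ip(a, b):
    """Inner product <a, b> in the field."""
    return sum(x * y for x, y in zip(a, b)) % p

def fold(a, b, t, challenges):
    """Fold the claim <a, b> = t in half once per challenge x:
    a' = x*a_lo + x^{-1}*a_hi,  b' = x^{-1}*b_lo + x*b_hi,
    t' = t + x^2*<a_lo, b_hi> + x^{-2}*<a_hi, b_lo>."""
    for x in challenges:
        m = len(a) // 2
        al, ah, bl, bh = a[:m], a[m:], b[:m], b[m:]
        L, R = ip(al, bh), ip(ah, bl)          # the two cross terms sent per round
        xi = pow(x, -1, p)                     # field inverse of the challenge
        a = [(x * lo + xi * hi) % p for lo, hi in zip(al, ah)]
        b = [(xi * lo + x * hi) % p for lo, hi in zip(bl, bh)]
        t = (t + x * x * L + xi * xi * R) % p  # verifier updates the claimed value
    return a, b, t

a, b = [3, 1, 4, 1], [2, 7, 1, 8]
a, b, t = fold(a, b, ip(a, b), challenges=[5, 11])   # log2(4) = 2 rounds
assert len(a) == len(b) == 1
assert a[0] * b[0] % p == t     # one multiplication now stands in for n of them
```

In the real protocol each round's cross terms are sent as group elements, and the challenges are derived via Fiat-Shamir rather than fixed.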

Figure 1: Vector Pedersen Commitment Cut and Half ([12], [63])

Bulletproofs only rely on the discrete logarithm assumption. In practice, this means that Bulletproofs are compatible with any secure elliptic curve, making them extremely versatile. The proof sizes are short; only $ [2 \log_2(n) + 9] $ elements are required for the range proofs and $ [\log_2(n) + 13] $ elements for arithmetic circuit proofs, with $ n $ denoting the multiplicative complexity. Additionally, the logarithmic proof size enables the prover to aggregate multiple range proofs into a single short proof, as well as to aggregate multiple range proofs from different parties into one proof (refer to Figure 2) ([1], [3], [5]).

Figure 2: Logarithmic Aggregate Bulletproofs Proof Sizes [3]
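The proof-size formulas above are easy to tabulate. The helper below assumes $ n $ is a power of two, as in the standard constructions:

```python
from math import log2

def range_proof_elements(n):
    """Group elements in a single Bulletproofs range proof for [0, 2^n - 1]."""
    return 2 * int(log2(n)) + 9

def circuit_proof_elements(n):
    """Elements for an arithmetic-circuit proof of multiplicative complexity n."""
    return int(log2(n)) + 13

# A 64-bit range proof needs only 2*6 + 9 = 21 elements, independent of the
# committed value, versus linear growth for classic range proofs.
assert range_proof_elements(64) == 21
assert circuit_proof_elements(64) == 19
```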

If all Bitcoin transactions were confidential, approximately 50 million UTXOs from approximately 22 million transactions would result in roughly 160 GB of range proof data when using current/linear proof systems, assuming use of 52 bits to represent any value from 1 satoshi up to 21 million bitcoins. Aggregated Bulletproofs would reduce the data storage requirement to less than 17 GB [1].

In Mimblewimble, the blockchain grows with the size of the UTXO set. Using Bulletproofs as a drop-in replacement for range proofs in confidential transactions, the size of the blockchain would only grow with the number of transactions that have unspent outputs. This is much smaller than the size of the UTXO set [1].

The recent implementation of Bulletproofs in Monero on 18 October 2018 saw the average data size on the blockchain per payment reduce by ~73% and the average USD-based fees reduce by ~94.5% for the period 30 August 2018 to 28 November 2018 (refer to Figure 3).

Figure 3: Monero Payment, Block and Data Size Statistics

Applications for Bulletproofs

Bulletproofs were designed for range proofs. However, they also generalize to arbitrary arithmetic circuits. In practice, this means that Bulletproofs have wide application and can be efficiently used for many types of proofs. Use cases of Bulletproofs are listed in this section, but this list may not be exhaustive, as use cases for Bulletproofs continue to evolve ([1], [2], [3], [5], [6], [59]).

  1. Range proofs

    Range proofs are proofs that a secret value, which has been encrypted or committed to, lies in a certain interval. They prevent any numbers from coming near the magnitude of a large prime, say $ 2^{256} $, that could cause wraparound when adding a small number, e.g. proof that $ x \in [0,2^{52} - 1] $.

  2. Merkle proofs

    Hash preimages in a Merkle tree [7] can be leveraged to create zero-knowledge Merkle proofs using Bulletproofs, to create efficient proofs of inclusion in massive data sets.

  3. Proof of solvency

    Proofs of solvency are a specialized application of Merkle proofs; coins can be added into a giant Merkle tree. It can then be proven that some outputs are in the Merkle tree and that those outputs add up to some amount that the cryptocurrency exchange claims they have control over without revealing any private information. A Bitcoin exchange with 2 million customers needs approximately 18GB to prove solvency in a confidential manner using the Provisions protocol [58]. Using Bulletproofs and its variant protocols proposed in [1], this size could be reduced to approximately 62MB.

  4. Multi-signatures with deterministic nonces

    With Bulletproofs, every signatory can prove that their nonce was generated deterministically. A SHA-256 arithmetic circuit could be used to show that the de-randomized nonces were indeed generated deterministically. This would still work if one signatory were to leave the conversation and rejoin later, with no memory of interacting with the other parties.

  5. Scriptless scripts

    Scriptless scripts are a way to do smart contracts that exploit the linear property of Schnorr signatures, using an older form of zero-knowledge proof called a Sigma protocol. This can all be done with Bulletproofs, which could be extended to allow assets that are functions of other assets, i.e. crypto derivatives.

  6. Smart contracts and crypto derivatives

    Traditionally, a new trusted setup is needed for each smart contract when verifying privacy-preserving smart contracts, but with Bulletproofs, no trusted setup is needed. Verification time, however, is linear, and it might be too complex to prove every step in a smart contract. The Refereed Delegation Model [33] has been proposed as an efficient protocol to verify smart contracts with public verifiability in the off-line stage, by making use of a specific verification circuit linked to a smart contract.

    A challenger will input the proof to the verification circuit and get a binary response as to the validity of the proof. The challenger can then complain to the smart contract, claim the proof is invalid and send the proof, together with the output from a chosen gate in the verification circuit, to the smart contract. Interactive binary searches are then used to identify the gate where the proof turns invalid. Hence the smart contract must only check a single gate in the verification procedure to decide whether the challenger or prover was correct. The cost is logarithmic in the number of rounds and amount of communications, with the smart contract only doing one computation. A Bulletproof can be calculated as a short proof for the arbitrary computation in the smart contract, thereby creating privacy-preserving smart contracts (refer to Figure 4).

Figure 4: Bulletproofs for Refereed Delegation Model [5]
  7. Verifiable shuffles

    Alice has some computation and wants to prove to Bob that she has done it correctly and has some secret inputs to this computation. It is possible to create a complex function that either evaluates to 1 if all secret inputs are correct and to 0 otherwise. Such a function can be encoded in an arithmetic circuit and can be implemented with Bulletproofs to prove that the transaction is valid.

    When a proof is needed that one list of values $ [x_1, ... , x_n] $ is a permutation of a second list of values $ [y_1, ... , y_n] $, it is called a verifiable shuffle. It has many applications, e.g. voting, blind signatures for untraceable payments and solvency proofs. Currently, the most efficient shuffle has size $ O(\sqrt{n}) $. Bulletproofs can be used very efficiently to prove verifiable shuffles of size $ O(\log n) $, as shown in Figure 5.

    Another potential use case is to verify that two nodes executed the same list of independent instructions $ [x_1, x_4, x_3, x_2] $ and $ [x_1, x_2, x_3, x_4] $, which may be in different order, to arrive at the same next state $ N $. The nodes do not need to share the actual instructions with a verifier, but the verifier can show that they executed the same set without having knowledge of the instructions.

Figure 5: Bulletproofs for Verifiable Shuffles [5]
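The core relation behind a verifiable shuffle, multiset equality, can be sketched with the standard grand-product polynomial check. This illustrates the relation being proven, not the Bulletproofs circuit itself:

```python
import secrets

p = 2**61 - 1                       # toy prime field

def grand_product(values, r):
    """Evaluate prod(r - v_i) mod p, i.e. the polynomial prod(X - v_i) at X = r."""
    acc = 1
    for v in values:
        acc = acc * ((r - v) % p) % p
    return acc

def looks_like_permutation(xs, ys, trials=3):
    """The polynomials prod(X - x_i) and prod(X - y_i) are identical iff ys is a
    permutation (as a multiset) of xs, so agreement at random points implies a
    permutation except with negligible probability."""
    return all(grand_product(xs, r) == grand_product(ys, r)
               for r in (secrets.randbelow(p) for _ in range(trials)))

assert looks_like_permutation([1, 4, 3, 2], [1, 2, 3, 4])
assert not looks_like_permutation([1, 4, 3, 2], [1, 2, 3, 5])
```

In a zero-knowledge shuffle proof, this product check is carried out inside an arithmetic circuit over committed values rather than in the clear.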
  8. Batch verifications

    Batch verifications can be done using one of the Bulletproofs derivative protocols. This has application where the verifier needs to verify multiple (separate) range proofs at once, e.g. a blockchain full node receiving a block of transactions needs to verify all transactions as well as range proofs. This batch verification is then implemented as one large multi-exponentiation; it is applied to reduce the number of expensive exponentiations.
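The random-linear-combination idea behind batch verification can be sketched in a toy discrete-log group (real implementations batch elliptic-curve multi-exponentiations instead):

```python
import secrets

p = 2**127 - 1                      # a Mersenne prime; Z_p^* is our toy group
g = 3

def batch_verify(claims):
    """Check g^{s_i} = y_i for every (s_i, y_i) with one combined equation:
    g^{sum(r_i * s_i)} = prod(y_i^{r_i}) for random nonzero weights r_i.
    A single bad claim makes the two sides differ except with tiny probability."""
    rs = [1 + secrets.randbelow(2**64) for _ in claims]
    lhs = pow(g, sum(r * s for r, (s, _) in zip(rs, claims)), p)
    rhs = 1
    for r, (_, y) in zip(rs, claims):
        rhs = rhs * pow(y, r, p) % p
    return lhs == rhs

good = [(s, pow(g, s, p)) for s in (17, 99, 123456)]
assert batch_verify(good)                              # all claims hold
assert not batch_verify(good + [(5, pow(g, 6, p))])    # one bad claim breaks the batch
```

The combined check costs roughly one large multi-exponentiation instead of one exponentiation per claim, which is the saving a full node gets when verifying a whole block of range proofs.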

Comparison to other Zero-knowledge Proof Systems

Table 1 ([2], [5]) shows a high-level comparison between Sigma protocols (i.e. interactive public-coin protocols) and the different Zero-knowledge proof systems mentioned in this report. (The most desirable outcomes for each measurement are shown in bold italics.) The aim will be to have a proof system that is not interactive, has short proof sizes, has linear prover runtime scalability, has efficient (sub-linear) verifier runtime scalability, has no trusted setup, is practical and is at least DL secure. Bulletproofs are unique in that they are not interactive, have a short proof size, do not require a trusted setup, have very fast execution times and are practical to implement. These attributes make Bulletproofs extremely desirable to use as range proofs in cryptocurrencies.

Table 1: Comparison to other Zero-knowledge Proof Systems
| Proof System | Sigma Protocols | zk-SNARK | STARK | ZKBoo | Bulletproofs |
| --- | --- | --- | --- | --- | --- |
| Proof Size | long | short | shortish | long | short |
| Prover Runtime Scalability | linear | quasilinear | quasilinear (big memory requirement) | linear | linear |
| Verifier Runtime Scalability | linear | efficient | efficient (poly-logarithmically) | efficient | linear |
| Trusted Setup | no | required | no | no | no |
| Practical | yes | yes | not quite | somewhat | yes |
| Security Assumptions | DL | non-falsifiable, but not on par with DL | quantum-secure One-way Function (OWF) [50], which is better than DL | similar to STARKs | DL |

Interesting Bulletproofs Implementation Snippets

Bulletproofs development is currently still evolving, as can be seen when following the different community development projects. Different implementations of Bulletproofs also offer different levels of efficiency, security and functionality. This section describes some of these aspects.

Current and Past Efforts

The initial prototype Bulletproofs implementation was done by Benedikt Bünz in Java, located at GitHub:bbuenz/BulletProofLib [27].

The initial work that provided cryptographic support for a Mimblewimble implementation was mainly done by Pieter Wuille, Gregory Maxwell and Andrew Poelstra in C, located at GitHub:ElementsProject/secp256k1-zkp [25]. This effort was forked as GitHub:apoelstra/secp256k1-mw [26], with the main contributors being Andrew Poelstra, Pieter Wuille and Gregory Maxwell, where Mimblewimble primitives and support for many of the Bulletproof protocols (e.g. zero-knowledge proofs, range proofs and arithmetic circuits) were added. Current effort also involves MuSig [48] support.

The Grin project (an open source Mimblewimble implementation in Rust) subsequently forked GitHub:ElementsProject/secp256k1-zkp [25] as GitHub:mimblewimble/secp256k1-zkp [30] and has added Rust wrappers to it as mimblewimble/rust-secp256k1-zkp [45] for use in its blockchain. The Beam project (another open source Mimblewimble implementation in C++) links directly to GitHub:ElementsProject/secp256k1-zkp [25] as its cryptographic sub-module. Refer to Mimblewimble-Grin Blockchain Protocol Overview and Grin vs. BEAM, a Comparison for more information about the Mimblewimble implementation of Grin and Beam.

An independent implementation for Bulletproof range proofs was done for the Monero project (an open source CryptoNote implementation in C++) by Sarang Noether [49] in Java as the precursor and moneromooo-monero [46] in C++ as the final implementation. Its implementation supports single and aggregate range proofs.

Adjoint, Inc. has also done an independent open source implementation of Bulletproofs in Haskell at GitHub: adjoint-io/bulletproofs [29]. It has an open source implementation of a private permissioned blockchain with multiparty workflow aimed at the financial industry.

Chain/Interstellar has done another independent open source implementation of Bulletproofs in Rust from the ground up at GitHub:dalek-cryptography/bulletproofs [28]. It has implemented parallel Edwards formulas [39] using Intel® Advanced Vector Extensions 2 (AVX2) to accelerate curve operations. Initial testing suggests approximately a twofold speedup over the original libsecp256k1-based Bulletproofs implementation.

Security Considerations

Real-world implementation of Elliptic-curve Cryptography (ECC) is largely based on official standards that govern the selection of curves in order to try and make the Elliptic-curve Discrete-logarithm Problem (ECDLP) hard to solve, i.e. finding an ECC user's secret key given the user's public key. Many attacks break real-world ECC without solving ECDLP due to problems in ECC security, where implementations can produce incorrect results and also leak secret data. Some implementation considerations also favor efficiency over security. Secure implementations of the standards-based curves are theoretically possible, but highly unlikely ([14], [32]).

Grin, Beam and Adjoint use ECC curve secp256k1 [24] for their Bulletproofs implementation, which fails one out of the four ECDLP security criteria and three out of the four ECC security criteria. Monero and Chain/Interstellar use the ECC curve Curve25519 [38] for their Bulletproofs implementation, which passes all ECDLP and ECC security criteria [32].

Chain/Interstellar goes one step further with its use of Ristretto, a technique for constructing prime order elliptic curve groups with non-malleable encodings, which allows an existing Curve25519 library to implement a prime-order group with only a thin abstraction layer. This makes it possible for systems using Ed25519 signatures to be safely extended with zero-knowledge protocols, with no additional cryptographic assumptions and minimal code changes [31].

The Monero project has also had security audits done on its Bulletproofs' implementation, which resulted in a number of serious and critical bug fixes as well as some other code improvements ([8], [9], [11]).

Wallet Reconstruction and Switch Commitment - Grin

Grin implemented a switch commitment [43] as part of a transaction output, as a defense mechanism for the age of quantum adversaries. The original implementation was discarded (completely removed) because it was complex, used a lot of space in the blockchain and allowed inclusion of arbitrary data. Grin also employed a complex scheme to embed the transaction amount inside a Bulletproof range proof for wallet reconstruction, which was linked to the original switch commitment hash implementation. The latest implementation improved on all those aspects and uses a much simpler method to recover the transaction amount from a Bulletproof range proof.

Initial Implementation

The initial Grin implementation ([21], [34], [35], [54]) hides two things in the Bulletproof range proof: a transaction amount for wallet reconstruction and an optional switch commitment hash to make the transaction perfectly binding later on, as opposed to currently being perfectly hiding. Perfect in this sense means that a quantum adversary (an attacker with infinite computing power) cannot tell what amount has been committed to and is also unable to produce fake commitments. Computational means that no efficient algorithm running in a practical amount of time can reveal the commitment amount or produce fake commitments, except with small probability. The Bulletproof range proofs are stored in the transaction kernel and will thus remain persistent in the blockchain.

In this implementation, a Grin transaction output contains the original (Elliptic Curve) Pedersen Commitment as well as the optional switch commitment hash. The switch commitment hash takes the resultant blinding factor $ b $, a third cyclic group random generator $ J $ and a wallet-seed derived random value $ r $ as input. The transaction output has the following form:

$$ (vG + bH \mspace{3mu} , \mspace{3mu} \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r)) \tag{1} $$

where $ \mathrm{H_{B2}} $ is the BLAKE2 hash function [44] and $ \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) $ the switch commitment hash. In order for such an amount to be spent, the owner needs to reveal $ b , r $ so that the verifier can check the opening of $ \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) $ by confirming that it matches the value stored in the switch commitment hash portion of the transaction output. Grin implemented the BLAKE2 hash function, which outperforms all mainstream hash function implementations in terms of hashing speed with similar security to the latest Secure Hash Algorithm 3 (SHA-3) standard [44].

In the event of quantum adversaries, the owner of an output can choose to stay anonymous and not claim ownership or reveal $ bJ $ and $ r $, whereupon the amount can be moved to the then hopefully forked quantum resistant blockchain.

In the Bulletproof range proof protocol, two 32-byte scalar nonces $ \tau_1 , \alpha $ (not important to know what they are) are generated with a secure random number generator. If the seed for the random number generator is known, the scalar values $ \tau_1 , \alpha $ can be recalculated when needed. Sixty-four (64) bytes worth of message space (out of 674 bytes worth of range proof) are made available by embedding a message into those variables using a logic $ \mathrm{XOR} $ gate. This message space is used for the transaction amount for wallet reconstruction.

To ensure that the transaction amount of the output cannot be spent by only opening the (Elliptic Curve) Pedersen Commitment $ vG + bH $, the switch commitment hash and embedded message are woven into the Bulletproof range proof calculation. The initial part is done by seeding the random number generator used to calculate $ \tau_1 , \alpha $ with the output from a seed function $ \mathrm S $ that uses as input a nonce $ \eta $ (which may be equal to the original blinding factor $ b $), the (Elliptic Curve) Pedersen Commitment $ P $ and the switch commitment hash

$$ \mathrm S (\eta \mspace{3mu} , \mspace{3mu} P \mspace{3mu} , \mspace{3mu} \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) ) = \eta \mspace{3mu} \Vert \mspace{3mu} \mathrm{H_{S256}} (P \mspace{3mu} \Vert \mspace{3mu} \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) ) \tag{2} $$

where $ \mathrm{H_{S256}}$ is the SHA256 hash function. The Bulletproof range proof is then calculated with an adapted pair $ \tilde{\alpha} , \tilde{\tau_1} $, using the original $ \tau_1 , \alpha $ and two
32-byte words $ m_{w1} $ and $m_{w2} $ that make up the 64-byte embedded message as follows:

$$ \tilde{\alpha} = \mathrm {XOR} ( \alpha \mspace{3mu} , \mspace{3mu} m_{w1}) \mspace{12mu} \mathrm{and} \mspace{12mu} \tilde{\tau_1} = \mathrm {XOR} ( \tau_1 \mspace{3mu} , \mspace{3mu} m_{w2} ) \tag{3} $$

To retrieve the embedded message, the process is simply inverted. Note that the owner of an output needs to keep record of the blinding factor $ b $, the nonce $ \eta $ if not equal to the blinding factor $ b $, as well as the wallet-seed derived random value $ r $ to be able to claim such an output.
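The XOR embedding and recovery of equations (2) and (3) can be modelled in a few lines. This is a toy reconstruction, not the secp256k1-zkp code; the domain-separation labels and hash choices are assumptions made for illustration:

```python
import hashlib

def nonce_pair(seed: bytes):
    """Deterministically derive the two 32-byte scalar nonces (alpha, tau_1)
    from a seed, standing in for the seeded RNG in the Grin scheme."""
    alpha = hashlib.sha256(seed + b"alpha").digest()
    tau1 = hashlib.sha256(seed + b"tau1").digest()
    return alpha, tau1

def xor32(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def embed(seed, m1, m2):
    """XOR two 32-byte message words into the nonces, as in equation (3)."""
    alpha, tau1 = nonce_pair(seed)
    return xor32(alpha, m1), xor32(tau1, m2)

def unwind(seed, alpha_t, tau1_t):
    """Re-derive the nonces from the seed and XOR the message back out."""
    alpha, tau1 = nonce_pair(seed)
    return xor32(alpha, alpha_t), xor32(tau1, tau1_t)

# The seed stands in for S(eta, P, H_B2(bJ, r)) of equation (2).
seed = hashlib.sha256(b"nonce eta || commitment || switch hash").digest()
amount = (42).to_bytes(32, "big")             # transaction amount, padded to 32 bytes
extra = b"wallet metadata".ljust(32, b"\0")
assert unwind(seed, *embed(seed, amount, extra)) == (amount, extra)
```

Because XOR is an involution, anyone holding the seed inputs can always invert the embedding, while a party without the seed recovers only gibberish.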

Improved Implementation

The latter Grin implementation ([56], [57]) uses Bulletproof range proof rewinding so that wallets can recognize their own transaction outputs. This negated the requirement to remember the wallet-seed derived random value $ r $, nonce $ \eta $ for the seed function $ \mathrm S $ and use of the adapted pair $ \tilde{\alpha} , \tilde{\tau_1} $ in the Bulletproof range proof calculation.

In this implementation, it is not necessary to remember a hash of the switch commitment as part of the transaction output set and for it to be passed around during a transaction. The switch commitment looks exactly like the original (Elliptic Curve) Pedersen Commitment $ vG + bH $, but in this instance the blinding factor $ b $ is tweaked to be

$$ b = b^\prime + \mathrm{H_{B2}} ( vG + b^\prime H \mspace{3mu} , \mspace{3mu} b^\prime J ) \tag{4} $$

with $ b^\prime $ being the user generated blinding factor. The (Elliptic Curve) Pedersen Commitment then becomes

$$ vG + b^\prime H + \mathrm{H_{B2}} ( vG + b^\prime H \mspace{3mu} , \mspace{3mu} b^\prime J ) H \tag{5} $$

After activation of the switch commitment in the age of quantum adversaries, users can reveal $ ( vG + b^\prime H \mspace{3mu} , \mspace{3mu} b^\prime J ) $, and verifiers can check if it is computed correctly and use it as if it were the ElGamal Commitment $ ( vG + b H \mspace{3mu} , \mspace{3mu} b J ) $.
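The tweak of equations (4) and (5) can be checked with toy multiplicative-group commitments (the generators and the hash-to-scalar function are illustrative assumptions, standing in for the curve points and BLAKE2-based $ \mathrm{H_{B2}} $):

```python
import hashlib

p = 2**127 - 1                                  # toy group Z_p^*
g, h, j = 3, pow(3, 101, p), pow(3, 202, p)     # pretend mutual logs are unknown

def H(*elems):
    """Hash group elements to a scalar (stand-in for the BLAKE2-based H_B2)."""
    data = b"".join(e.to_bytes(16, "big") for e in elems)
    return int.from_bytes(hashlib.blake2b(data).digest(), "big") % (p - 1)

def switch_commit(v, b_prime):
    """Equations (4)/(5): tweak b = b' + H(vG + b'H, b'J) and commit with it."""
    inner = pow(g, v, p) * pow(h, b_prime, p) % p            # vG + b'H
    b = (b_prime + H(inner, pow(j, b_prime, p))) % (p - 1)   # tweaked blinding factor
    return pow(g, v, p) * pow(h, b, p) % p, (inner, pow(j, b_prime, p))

commitment, reveal = switch_commit(v=5, b_prime=12345)

# After the "switch", the owner reveals (vG + b'H, b'J); anyone can recompute
# the tweak and check it against the published commitment.
inner, bj = reveal
assert commitment == inner * pow(h, H(inner, bj), p) % p
```

Before the switch, the tweaked commitment is indistinguishable from an ordinary Pedersen Commitment, which is why it adds no storage cost to the output.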

GitHub Extracts

The following extracts of discussions depict the initial and improved implementations of the switch commitment and retrieving transactions amounts from Bulletproofs for wallet reconstruction.

Bulletproofs #273 [35]

{yeastplume} "The only thing I think we're missing here from being able to use this implementation is the ability to store an amount within the range proof (for wallet reconstruction). From conversations with @apoelstra earlier, I believe it's possible to store 64 bytes worth of 'message' (not nearly as much as the current range proofs)."

{apoelstra} "Ok, I can get you 64 bytes without much trouble (xoring them into* tau_1 and alpha which are easy to extract from tau_x and mu if you know the original seed used to produce the randomness). I think it's possible to get another 32 bytes into t but that's way more involved since t is a big inner-product*."

Message hiding in Bulletproofs #721 [21]

"Breaking out from #273, we need the wind a message into a bulletproof similarly to how it could be done in 'Rangeproof Classic'. This is an absolute requirement as we need to embed an output's SwitchCommitHash (which is otherwise not committed to) and embed an output amount for wallet reconstruction. We should be able to embed up to 64 bytes of message without too much difficulty, and another 32 with more difficulty (see original issue). 64 should be enough for the time being."

Switch Commits/Bulletproofs - Status #734 [34]

"The prove function takes a value, a secret key (blinding factor in our case), a nonce, optional extra_data and a generator and produces a 674 byte proof. I've also modified it to optionally take a message (more about this in a bit). It creates the Pedersen commitment it works upon internally with these values."

"The verify function takes a proof, a Pedersen commitment and optional extra_data and returns true if proof demonstrates that the value within the Pedersen commitment is in the range [0..2^64] (and the extra_data is correct)."

"Additionally, I've added an unwind function which takes a proof, a Pedersen commitment, optional extra_data and a 32 bit nonce (which needs to be the same as the original nonce used in order to return the same message) and returns the hidden message."

"If you have the correct Pedersen commitment and proof and extra_data, and attempt to unwind a message out using the wrong nonce, the attempt won't fail, you'll get out gibberish or just wildly incorrect values as you parse back the bytes."

"The SwitchCommitHash is currently a field of an output, and while it is stored in the Txo set and passed around during a transaction, it is not currently included in the output's hash. It is passed in as the extra_data field above, meaning that anyone validating the range proof also needs to have the correct switch commit in order to validate the range proof."

Removed all switch commitment usages, including restore #841 [55]

{ignopeverell} "After some discussion with @antiochp, @yeastplume and @tromp, we decided switch commitments weren't worth the cost of maintaining them and their drawbacks. Removing them."

{ignopeverell} "For reference, switch commitments were found to:

  • add a lot of complexity and assumptions
  • take additional space for little benefit right now
  • allow the inclusion of arbitrary data, potentially for the worst
  • provide little to no advantage in case of quantamageddon (as range proofs are still a weakness)"

{apoelstra} "After chatting with @yeastplume on IRC, I realize that we can actually use rangeproof rewinding for wallets to recognize their own outputs, which even avoids the "gap" problem of just scanning for pre-generated keys. With that in mind, it's true that the benefit of switch commitments for MW are not spectacular."

Switch commitment discussion #998 [56]

{antiochp} "Sounds like there is a "zero cost" way of getting switch commitments in as part of the commitment itself, so we would not need to store and maintain a separate "switch commitment" on each output. I saw that switch commitments have been removed for various reasons."

"Let me suggest a variant (idea suggested by Pieter Wuille initially): The switch commitment is (vG + bH), where b = b' + hash(vG + b'H,b'J). (So this "tweaks" the commitment, in a pay-to-contract / taproot style). Before the switch, this is just used like a normal Pedersen Commitment vG + bH. After the switch, users can reveal (vG + b'H, b'J), and verifiers check if it's computed correctly and use as if it were the ElGamal commitment (vG + bH, bJ)."

{@ignopeverell} modified the milestones: Beta / testnet3, Mainnet on 11 Jul.

{@ignopeverell} added the must-have label on 24 Aug.

Conclusions, Observations and Recommendations

  • Bulletproofs are not Bulletproofs are not Bulletproofs. This is evident when comparing the functionality, security and performance of the various current Bulletproof implementations, as well as from the evolving nature of Bulletproofs.

  • The security audit instigated by the Monero project on their Bulletproofs implementation, as well as the resulting findings and corrective actions, demonstrates that every implementation of Bulletproofs carries potential risk. This risk is due to the nature of confidential transactions: transacted values and token owners are not public.

  • The growing number of open source Bulletproof implementations should strengthen the development of a new confidential blockchain protocol such as Tari.

  • In the pure implementation of Bulletproof range proofs, a discrete-log attacker (e.g. a bad actor employing a quantum computer) would be able to exploit Bulletproofs to silently inflate any currency that used them. Bulletproofs are perfectly hiding (i.e. confidential), but only computationally binding (i.e. not quantum resistant). Unconditional soundness is lost due to the data compression being employed ([1], [5], [6] and [10]).

  • Bulletproofs are not only about range proofs. All the different Bulletproof use cases have a potential implementation in a new confidential blockchain protocol such as Tari; in the base layer as well as in the probable second layer.


[1] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More", Blockchain Protocol Analysis and Security Engineering 2018 [online]. Available: Date accessed: 2018‑09‑18.

[2] A. Poelstra, "Bulletproofs" (Transcript), Bitcoin Milan Meetup 2018‑02‑02 [online]. Available: Date accessed: 2018‑09‑10.

[3] A. Poelstra, "Bulletproofs" (Slides), Bitcoin Milan Meetup 2018‑02‑02 [online]. Available: Date accessed: 2018‑09‑10.

[4] B. Feng, "Decoding zk-SNARKs" [online]. Available: Date accessed: 2018‑09‑17.

[5] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More" (Slides) [online]. Available: Date accessed: 2018‑09‑18.

[6] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More (Transcripts)" [online]. Available: Date accessed: 2018‑09‑18.

[7] "Merkle Root and Merkle Proofs" [online]. Available: Date accessed: 2018‑10‑10.

[8] "Bulletproofs Audit: Fundraising" [online]. Available: Date accessed: 2018‑10‑23.

[9] "The QuarksLab and Kudelski Security Audits of Monero Bulletproofs are Complete" [online]. Available: Date accessed: 2018‑10‑23.

[10] A. Poelstra, "Bulletproofs Presentation at Feb 2 Milan Meetup, Reddit" [online]. Available: Date accessed: 2018‑09‑10.

[11] "The OSTIF and QuarksLab Audit of Monero Bulletproofs is Complete – Critical Bug Patched [online]". Available: Date accessed: 2018-10-23.

[12] J. Bootle, A. Cerulli, P. Chaidos, J. Groth and C. Petit, "Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting", Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 327-357. Springer, 2016 [online]. Available: Date accessed: 2018‑09‑21.

[13] J. Groth, "Linear Algebra with Sub-linear Zero-knowledge Arguments" [online]. Available: Date accessed: 2018‑09‑21.

[14] T. Perrin, "The XEdDSA and VXEdDSA Signature Schemes", 2016-10-20 [online]. Available: Date accessed: 2018‑10‑23.

[15] A. Poelstra, A. Back, M. Friedenbach, G. Maxwell and P. Wuille, "Confidential Assets", Blockstream [online]. Available: Date accessed: 2018‑09‑25.

[16] Wikipedia: "Zero-knowledge Proof" [online]. Available: Date accessed: 2018‑09‑18.

[17] Wikipedia: "Discrete Logarithm" [online]. Available: Date accessed: 2018‑09‑20.

[18] A. Fiat and A. Shamir, "How to Prove Yourself: Practical Solutions to Identification and Signature Problems". CRYPTO 1986: pp. 186‑194 [online]. Available: Date accessed: 2018‑09‑20.

[19] D. Bernhard, O. Pereira and B. Warinschi, "How not to Prove Yourself: Pitfalls of the Fiat-Shamir Heuristic and Applications to Helios" [online]. Available: Date accessed: 2018‑09‑20.

[20] "Mimblewimble Explained" [online]. Available: Date accessed: 2018‑09‑10.

[21] GitHub: "Message Hiding in Bulletproofs #721" [online]. Available: Date accessed: 2018‑09‑10.

[22] Pedersen-commitment: "An Implementation of Pedersen Commitment Schemes" [online]. Available: Date accessed: 2018-09-25.

[23] "Zero Knowledge Proof Standardization - An Open Industry/Academic Initiative" [online]. Available: Date accessed: 2018‑09‑26.

[24] "SEC 2: Recommended Elliptic Curve Domain Parameters, Standards for Efficient Cryptography", 20 September 2000 [online]. Available: Date accessed: 2018‑09‑26.

[25] GitHub: "ElementsProject/secp256k1-zkp, Experimental Fork of libsecp256k1 with Support for Pedersen Commitments and Range Proofs" [online]. Available: Date accessed: 2018‑09‑18.

[26] GitHub: "apoelstra/secp256k1-mw, Fork of libsecp-zkp d78f12b to Add Support for Mimblewimble Primitives" [online]. Available: Date accessed: 2018‑09‑18.

[27] GitHub: "bbuenz/BulletProofLib, Library for Generating Non-interactive Zero Knowledge Proofs without Trusted Setup (Bulletproofs)" [online]. Available: Date accessed: 2018‑09‑18.

[28] GitHub: "dalek-cryptography/Bulletproofs, A Pure-Rust Implementation of Bulletproofs using Ristretto" [online]. Available: Date accessed: 2018-09-18.

[29] GitHub: "adjoint-io/Bulletproofs, Bulletproofs are Short Non-interactive Zero-knowledge Proofs that Require no Trusted Setup" [online]. Available: Date accessed: 2018‑09‑10.

[30] GitHub: "mimblewimble/secp256k1-zkp, Fork of secp256k1-zkp for the Grin/MimbleWimble project" [online]. Available: Date accessed: 2018-09-18.

[31] "The Ristretto Group" [online]. Available: Date accessed: 2018‑10‑23.

[32] "SafeCurves: Choosing Safe Curves for Elliptic-curve Cryptography" [online]. Available: Date accessed: 2018‑10‑23.

[33] R. Canetti, B. Riva and G. N. Rothblum, "Two 1-Round Protocols for Delegation of Computation" [online]. Available: Date accessed: 2018‑10‑11.

[34] GitHub: "mimblewimble/grin, Switch Commits / Bulletproofs - Status #734" [online]. Available: Date accessed: 2018-09-10.

[35] GitHub: "mimblewimble/grin, Bulletproofs #273" [online]. Available: Date accessed: 2018‑09‑10.

[36] Wikipedia: "Commitment Scheme" [online]. Available: Date accessed: 2018‑09‑26.

[37] Cryptography Wikia: "Commitment Scheme" [online]. Available: Date accessed: 2018‑09‑26.

[38] D. J. Bernstein, "Curve25519: New Diffie-Hellman Speed Records" [online]. Available: Date accessed: 2018‑09‑26.

[39] H. Hisil, K. Koon-Ho Wong, G. Carter and E. Dawson, "Twisted Edwards Curves Revisited", Information Security Institute, Queensland University of Technology [online]. Available: Date accessed: 2018‑09‑26.

[40] A. Sadeghi and M. Steiner, "Assumptions Related to Discrete Logarithms: Why Subtleties Make a Real Difference" [online]. Available: Date accessed: 2018-09-24.

[41] Crypto Wiki: "Cryptographic Nonce" [online]. Available: Date accessed: 2018‑10‑08.

[42] Wikipedia: "Cryptographic Nonce" [online]. Available: Date accessed: 2018-10-08.

[43] T. Ruffing and G. Malavolta, "Switch Commitments: A Safety Switch for Confidential Transactions", Saarland University [online]. Available: Date accessed: 2018‑10‑08.

[44] "BLAKE2 — Fast Secure Hashing" [online]. Available: Date accessed: 2018‑10‑08.

[45] GitHub: "mimblewimble/rust-secp256k1-zkp" [online]. Available: Date accessed: 2018‑11‑16.

[46] GitHub: "monero-project/monero [online]". Available: Date accessed: 2018‑11‑16.

[47] Wikipedia: "Arithmetic Circuit Complexity [online]". Available: Date accessed: 2018‑11‑08.

[48] G. Maxwell, A. Poelstra, Y. Seurin and P. Wuille, "Simple Schnorr Multi-signatures with Applications to Bitcoin", 20 May 2018 [online]. Available: Date accessed: 2018‑07‑24.

[49] GitHub: "b-g-goodell/research-lab" [online]. Available: Date accessed: 2018‑11‑16.

[50] Wikipedia: "One-way Function" [online]. Available: Date accessed: 2018‑11‑27.

[51] P. Sharma, A. K. Gupta and S. Sharma, "Intensified ElGamal Cryptosystem (IEC)", International Journal of Advances in Engineering & Technology, January 2012 [online]. Available: Date accessed: 2018‑10‑09.

[52] Y. Tsiounis and M. Yung, "On the Security of ElGamal Based Encryption" [online]. Available: Date accessed: 2018‑10‑09.

[53] Wikipedia: "Decisional Diffie–Hellman Assumption" [online]. Available: Date accessed: 2018‑10‑09.

[54] GitHub: "mimblewimble/grin, Bulletproof messages #730" [online]. Available: Date accessed: 2018‑11‑29.

[55] GitHub: "mimblewimble/grin, Removed all switch commitment usages, including restore #841" [online]. Available: Date accessed: 2018‑11‑29.

[56] GitHub: "mimblewimble/grin, switch commitment discussion #998" [online]. Available: Date accessed: 2018‑11‑29.

[57] GitHub: "mimblewimble/grin, [DNM] Switch commitments #2007" [online]. Available: Date accessed: 2018‑11‑29.

[58] G. G. Dagher, B. Bünz, J. Bonneau, J. Clark and D. Boneh, "Provisions: Privacy-preserving Proofs of Solvency for Bitcoin Exchanges", October 2015 [online]. Available: Date accessed: 2018‑11‑29.

[59] A. Poelstra, "Bulletproofs: Faster Rangeproofs and Much More", February 2018 [online]. Available: Date accessed: 2018‑11‑30.

[60] B. Franca, "Homomorphic Mini-blockchain Scheme", April 2015 [online]. Available: Date accessed: 2018‑11‑22.

[61] C. Franck and J. Großschädl, "Efficient Implementation of Pedersen Commitments Using Twisted Edwards Curves", University of Luxembourg [online]. Available: Date accessed: 2018‑11‑22.

[62] A. Gibson, "An Investigation into Confidential Transactions", July 2018 [online]. Available: Date accessed: 2018‑11‑22.

[63] J. Bootle, "How to do Zero-knowledge from Discrete-Logs in under 7kB", October 2016 [online]. Available: Date accessed: 2019‑01‑18.


Appendix A: Definition of Terms

Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.

  • Arithmetic Circuits: An arithmetic circuit $ C $ over a field $ F $ and variables $ (x_1, ..., x_n) $ is a directed acyclic graph whose vertices are called gates. Arithmetic circuits can alternatively be described as a list of addition and multiplication gates with a collection of linear consistency equations relating the inputs and outputs of the gates. The size of an arithmetic circuit is the number of gates in it, with the depth being the length of the longest directed path. Upper bounding the complexity of a polynomial $ f $ is to find any arithmetic circuit that can calculate $ f $, whereas lower bounding is to find the smallest arithmetic circuit that can calculate $ f $. An example of a simple arithmetic circuit with size six and depth two that calculates a polynomial is shown below ([29], [47]).

  • Argument of Knowledge System: Proof systems with computational soundness, such as Bulletproofs, are sometimes called argument systems. In this report, the terms proof and argument of knowledge are used interchangeably [29].
  • Commitment Scheme: A commitment scheme in a Zero-knowledge Proof is a cryptographic primitive that allows a prover to commit to only a single chosen value/statement from a finite set without the ability to change it later (binding property) while keeping it hidden from a verifier (hiding property). Both binding and hiding properties are then further classified in increasing levels of security to be computational, statistical or perfect. No commitment scheme can at the same time be perfectly binding and perfectly hiding ([36], [37]).
  • Discrete Logarithm/Discrete Logarithm Problem (DLP): In the mathematics of real numbers, the logarithm $ \log_b a $ is a number $ x $ such that $ b^x=a $, for given numbers $ a $ and $ b $. Analogously, in any group $ G $, powers $ b^k $ can be defined for all integers $ k $, and the discrete logarithm $ \log_ba $ is an integer $ k $ such that $ b^k=a $. Algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen cyclic finite groups and cyclic subgroups of elliptic curves over finite fields has no efficient solution ([17], [40]).
  • Elliptic Curve Pedersen Commitment: An efficient implementation of the Pedersen Commitment ([15], [22]) will use secure Elliptic Curve Cryptography (ECC), which is based on the algebraic structure of elliptic curves over finite (prime) fields. Elliptic curve points are used as basic mathematical objects, instead of numbers. Note that traditionally in elliptic curve arithmetic lower case letters are used for ordinary numbers (integers) and upper case letters for curve points ([60], [61], [62]).
    • The generalized Elliptic Curve Pedersen Commitment definition follows (refer to Appendix B: Notation Used):

      • Let $ \mathbb F_p $ be the group of elliptic curve points, where $ p $ is a large prime.
      • Let $ G \in \mathbb F_p $ be a random generator point (base point) and let $ H \in \mathbb F_p $ be specially chosen so that the value $ x_H $ satisfying $ H = x_H G $ cannot be found unless the Elliptic Curve DLP (ECDLP) is solved.
      • Let $ r $ (the blinding factor) be a random value and element of $ \mathbb Z_p $.
      • The commitment to value $ x \in \mathbb Z_p $ is then determined by calculating $ C(x,r) = rH + xG $, which is called the Elliptic Curve Pedersen Commitment.
    • Elliptic curve point addition is analogous to multiplication in the originally defined Pedersen Commitment. Thus $ g^x $, the number $ g $ multiplied by itself $ x $ times, is analogous to $ xG $, the elliptic curve point $ G $ added to itself $ x $ times. In this context $ xG $ is also a point in $ \mathbb F_p $.

    • In the Elliptic Curve context $ C(x,r) = rH + xG $ is then analogous to $ C(x,r) = h^r g^x $.

    • The number $ H $ is what is known as a Nothing Up My Sleeve (NUMS) number. With secp256k1, the value of $ H $ is the SHA256 hash of a simple encoding of the pre-specified generator point $ G $.

    • Similar to Pedersen Commitments, the Elliptic Curve Pedersen Commitments are also additively homomorphic, such that for messages $ x $, $ x_0 $ and $ x_1 $, blinding factors $ r $, $ r_0 $ and $ r_1 $ and scalar $ k $ the following relation holds: $ C(x_0,r_0) + C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $ and $ C(k \cdot x, k \cdot r) = k \cdot C(x, r) $.

    • In secure implementations of ECC, it is as hard to guess $ x $ from $ xG $ as it is to guess $ x $ from $g^x $. This is called the Elliptic Curve DLP (ECDLP).

    • Practical implementations usually consist of three algorithms: Setup() to set up the commitment parameters; Commit() to commit to the message using the commitment parameters; and Open() to open and verify the commitment.

  • ElGamal Commitment/Encryption: An ElGamal commitment is a Pedersen Commitment ([15], [22]) with an additional commitment $ g^r $ to the randomness used. The ElGamal encryption scheme is based on the Decisional Diffie-Hellman (DDH) assumption and the difficulty of the DLP for finite fields. The DDH assumption states that it is infeasible for a Probabilistic Polynomial-time (PPT) adversary to solve the DDH problem. Note: The ElGamal encryption scheme should not be confused with the ElGamal signature scheme ([1], [51], [52], [53]).
  • Fiat‑Shamir Heuristic/Transformation: The Fiat‑Shamir heuristic is a technique in cryptography to convert an interactive public-coin protocol (Sigma protocol) between a prover and a verifier into a one-message (non-interactive) protocol using a cryptographic hash function ([18], [19]).
    • The prover will use a Prove() algorithm to calculate a commitment $ A $ with a statement $ Y $ that is shared with the verifier and a secret witness value $ w $ as inputs. The commitment $ A $ is then hashed to obtain the challenge $ c $, which is further processed with the Prove() algorithm to calculate the response $ f $. The single message sent to the verifier then contains the challenge $ c $ and response $ f $.

    • The verifier is then able to compute the commitment $ A $ from the shared statement $ Y $, challenge $ c $ and response $ f $. The verifier will then use a Verify() algorithm to verify the combination of shared statement $ Y $, commitment $ A $, challenge $ c $ and response $ f $.

    • A weak Fiat‑Shamir transformation can be turned into a strong Fiat‑Shamir transformation if the hashing function is applied to the commitment $ A $ and shared statement $ Y $ to obtain the challenge $ c $, as opposed to only the commitment $ A $.

  • Nonce: In security engineering, nonce is an abbreviation of number used once. In cryptography, a nonce is an arbitrary number that can be used just once. It is often a random or pseudo-random number issued in an authentication protocol to ensure that old communications cannot be reused in replay attacks ([41], [42]).
  • Zero-knowledge Proof/Protocol: In cryptography, a zero-knowledge proof/protocol is a method by which one party (the prover) can convince another party (the verifier) that a statement $ Y $ is true, without conveying any information apart from the fact that the prover knows the value of $ Y $. The proof system must be complete, sound and zero-knowledge ([16], [23]):
    • Complete - if the statement is true and both prover and verifier follow the protocol, the verifier will accept.

    • Sound - if the statement is false, and the verifier follows the protocol, the verifier will not be convinced.

    • Zero-knowledge - if the statement is true and the prover follows the protocol, the verifier will not learn any confidential information from the interaction with the prover, apart from the fact that the statement is true.
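As a worked example of the Fiat‑Shamir transformation defined above, the following sketch turns an interactive Schnorr proof of knowledge of a discrete logarithm into a non-interactive one. The group parameters are toy values chosen for illustration only:

```python
# Minimal Fiat-Shamir'd Schnorr proof of knowledge of a discrete log,
# following the Prove()/Verify() flow described above. Toy parameters
# (p = 2039, q = 1019) are far too small for real security.
import hashlib
import secrets

p, q, g = 2039, 1019, 4   # p = 2q + 1; g generates the order-q subgroup

def challenge(*values: int) -> int:
    # Strong Fiat-Shamir: hash the statement Y together with commitment A.
    data = b"".join(v.to_bytes(8, "big") for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(w: int, Y: int):
    k = secrets.randbelow(q)      # nonce
    A = pow(g, k, p)              # commitment
    c = challenge(Y, A)           # hash replaces the verifier's challenge
    f = (k + c * w) % q           # response
    return c, f

def verify(Y: int, c: int, f: int) -> bool:
    # Recompute A = g^f * Y^(-c) from public values and recheck the challenge.
    A = (pow(g, f, p) * pow(Y, -c, p)) % p
    return c == challenge(Y, A)

w = secrets.randbelow(q)          # secret witness
Y = pow(g, w, p)                  # shared statement
assert verify(Y, *prove(w, Y))
```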

Appendix B: Notation Used

The general notation of mathematical expressions when specifically referenced is given here, based on [1].

  • Let $ p $ and $ q $ be large prime numbers.
  • Let $ \mathbb G $ and $ \mathbb Q $ denote cyclic groups of prime order $ p $ and $ q $ respectively.
  • Let $ \mathbb Z_p $ and $ \mathbb Z_q $ denote the ring of integers $ modulo \mspace{4mu} p $ and $ modulo \mspace{4mu} q $ respectively.
  • Let generators of $ \mathbb G $ be denoted by $ g, h, v, u \in \mathbb G $. In other words, there exists a number $ g \in \mathbb G $ such that $ \mathbb G = \lbrace 1 \mspace{3mu} , \mspace{3mu} g \mspace{3mu} , \mspace{3mu} g^2 \mspace{3mu} , \mspace{3mu} g^3 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g^{p-1} \rbrace \equiv \mathbb Z_p $. Note that not every element of $ \mathbb Z_p $ is a generator of $ \mathbb G $.
  • Let $ \mathbb Z_p^* $ denote $ \mathbb Z_p \setminus \lbrace 0 \rbrace $ and $ \mathbb Z_q^* $ denote $ \mathbb Z_q \setminus \lbrace 0 \rbrace $, that is all invertible elements of $ \mathbb Z_p $ and $ \mathbb Z_q $ respectively. This excludes the element $ 0 $ which is not invertible.


Building on Bulletproofs

    Cathie Yun


In this post Cathie explains the basics of the Bulletproofs zero-knowledge proof protocol. She then goes further to explain specific applications built on top of Bulletproofs, which can be used for range proofs or constraint system proofs. The constraint system proof protocol was used to build a confidential assets scheme called Cloak and a confidential smart contract language called ZkVM.

The Bulletproof Protocols


The overview of Bulletproofs given in Bulletproofs and Mimblewimble was largely based on the original work done by Bünz et al. [1]. They documented a number of different Bulletproof protocols, but not all of them in an obvious manner. This report summarizes and explains the different Bulletproof protocols as simply as possible. It also simplifies the logic and explains the base mathematical concepts in more detail where prior knowledge was assumed. The report concludes with a discussion on an improved Bulletproof zero-knowledge proof protocol provided by some community members following an evolutionary approach.


Notation Used

This section gives the general notation of mathematical expressions when specifically referenced, based on [1]. This notation provides important pre-knowledge for the remainder of the report.

  • Let $ p $ and $ q ​$ be large prime numbers.
  • Let $ \mathbb G ​$ and $ \mathbb Q ​$ denote cyclic groups of prime order $ p ​$ and $ q ​$ respectively.
  • Let $ \mathbb Z_p $ and $ \mathbb Z_q $ denote the ring of integers $ modulo \mspace{4mu} p $ and $ modulo \mspace{4mu} q $ respectively.
  • Let generators of $ \mathbb G $ be denoted by $ g, h, v, u \in \mathbb G $, i.e. there exists a number $ g \in \mathbb G $ such that $ \mathbb G = \lbrace 1 \mspace{3mu} , \mspace{3mu} g \mspace{3mu} , \mspace{3mu} g^2 \mspace{3mu} , \mspace{3mu} g^3 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g^{p-1} \rbrace \equiv \mathbb Z_p $. Note that not every element of $ \mathbb Z_p $ is a generator of $ \mathbb G $.
  • Let $ \mathbb Z_p^* $ denote $ \mathbb Z_p \setminus \lbrace 0 \rbrace $ and $ \mathbb Z_q^* $ denote $ \mathbb Z_q \setminus \lbrace 0 \rbrace $, i.e. all invertible elements of $ \mathbb Z_p $ and $ \mathbb Z_q $ respectively. This excludes the element $ 0 ​$, which is not invertible.
  • Let $ \mathbb G^n $ and $ \mathbb Z^n_p $ be vector spaces of dimension $ n $ over $ \mathbb G $ and $ \mathbb Z_p $ respectively.
  • Let $ h^r \mathbf g^\mathbf x = h^r \prod_i g_i^{x_i} \in \mathbb G $ be the vector Pedersen Commitment with $ \mathbf {g} = (g_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g_n) \in \mathbb G^n $ and $ \mathbf {x} = (x_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} x_n) \in \mathbb Z_p^n $.
  • Let $ \mathbf {a} \in \mathbb F^n ​$ be a vector with elements $ a_1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_n \in \mathbb F ​$.
  • Let $ \langle \mathbf {a}, \mathbf {b} \rangle = \sum _{i=1}^n {a_i \cdot b_i} ​$ denote the inner-product between two vectors $ \mathbf {a}, \mathbf {b} \in \mathbb F^n ​$.
  • Let $ \mathbf {a} \circ \mathbf {b} = (a_1 \cdot b_1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_n \cdot b_n) \in \mathbb F^n $ denote the entry-wise multiplication of two vectors $ \mathbf {a}, \mathbf {b} \in \mathbb F^n $.
  • Let $ \mathbf {A} \circ \mathbf {B} = (a_{11} \cdot b_{11} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{1m} \cdot b_{1m} \mspace{6mu} ; \mspace{6mu} . . . \mspace{6mu} ; \mspace{6mu} a_{n1} \cdot b_{n1} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{nm} \cdot b_{nm} ) $ denote the entry-wise multiplication of two matrices, also known as the Hadamard Product.
  • Let $ \mathbf {a} \parallel \mathbf {b} ​$ denote the concatenation of two vectors; if $ \mathbf {a} \in \mathbb Z_p^n ​$ and $ \mathbf {b} \in \mathbb Z_p^m ​$ then $ \mathbf {a} \parallel \mathbf {b} \in \mathbb Z_p^{n+m} ​$.
  • Let $ p(X) = \sum _{i=0}^d { \mathbf {p_i} \cdot X^i} \in \mathbb Z_p^n [X] $ be a vector polynomial where each coefficient $ \mathbf {p_i} $ is a vector in $ \mathbb Z_p^n $.
  • Let $ \langle l(X),r(X) \rangle = \sum _{i=0}^d { \sum _{j=0}^i { \langle l_i,r_j \rangle \cdot X^{i+j}}} \in \mathbb Z_p [X] $ denote the inner-product between two vector polynomials $ l(X),r(X) $.
  • Let $ t(X)=\langle l(X),r(X) \rangle $, then the inner-product is defined such that $ t(x)=\langle l(x),r(x) \rangle $ holds for all $ x \in \mathbb{Z_p} $.
  • Let $ C=\mathbf g^{\mathbf a} = \prod _{i=1}^n g_i^{a_i} \in \mathbb{G} $ be a binding (but not hiding) commitment to the vector $ \mathbf {a} \in \mathbb Z_p^n $ where $ \mathbf {g} = (g_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g_n) \in \mathbb G^n $. Given a vector $ \mathbf {b} \in \mathbb Z_p^n $ with non-zero entries, $ C $ can be treated as a new commitment to $ \mathbf {a} \circ \mathbf {b} $. For this, let $ g_i^\backprime =g_i^{(b_i^{-1})} $ such that $ C= \prod _{i=1}^n (g_i^\backprime)^{a_i \cdot b_i} $. The binding property of this new commitment is inherited from the old commitment.
  • Let $ \mathbf a _{[:l]} = ( a_1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_l ) \in \mathbb F ^ l$ and $ \mathbf a _{[l:]} = ( a_{l+1} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_n ) \in \mathbb F ^ {n-l} $ be slices of vectors for $ 0 \le l \le n $ (using Python notation).
  • Let $ \mathbf {k}^n $ denote the vector containing the first $ n $ powers of $ k \in \mathbb Z_p^* $ such that $ \mathbf {k}^n = (1,k,k^2, \mspace{3mu} ... \mspace{3mu} ,k^{n-1}) \in (\mathbb Z_p^*)^n $.
  • Let $ \mathcal{P} $ and $ \mathcal{V} $ denote the prover and verifier respectively.
  • Let $ \mathcal{P_{IP}} $ and $ \mathcal{V_{IP}} ​$ denote the prover and verifier in relation to inner-product calculations respectively.
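As a quick numeric sanity check of the vector-polynomial notation above, the following sketch (over an arbitrary toy field $ \mathbb Z_{101} $) builds $ t(X) = \langle l(X), r(X) \rangle $ coefficient by coefficient and confirms that $ t(x) = \langle l(x), r(x) \rangle $ holds for every scalar $ x $:

```python
# Numeric check: for vector polynomials l(X), r(X), the polynomial
# t(X) = <l(X), r(X)> satisfies t(x) = <l(x), r(x)> for all scalars x.
p = 101  # toy field modulus

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % p

# l(X) and r(X): lists of vector coefficients l_i, r_j in Z_p^2.
l = [[1, 2], [3, 4]]          # l(X) = l_0 + l_1 * X
r = [[5, 6], [7, 8]]          # r(X) = r_0 + r_1 * X

# Coefficients of t(X): t_k = sum over i + j = k of <l_i, r_j>.
t = [0] * (len(l) + len(r) - 1)
for i, li in enumerate(l):
    for j, rj in enumerate(r):
        t[i + j] = (t[i + j] + inner(li, rj)) % p

def eval_vec(coeffs, x):
    # Evaluate a vector polynomial at scalar x, entry-wise.
    n = len(coeffs[0])
    return [sum(c[e] * pow(x, k, p) for k, c in enumerate(coeffs)) % p
            for e in range(n)]

def eval_scalar(coeffs, x):
    return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

for x in range(p):
    assert eval_scalar(t, x) == inner(eval_vec(l, x), eval_vec(r, x))
```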

Pedersen Commitments and Elliptic Curve Pedersen Commitments

The basis of confidential transactions is the Pedersen Commitment scheme defined in [15].

A commitment scheme in a Zero-knowledge Proof is a cryptographic primitive that allows a prover $ \mathcal{P} $ to commit to only a single chosen value/statement from a finite set without the ability to change it later (binding property), while keeping it hidden from a verifier $ \mathcal{V} $ (hiding property). Both binding and hiding properties are then further classified in increasing levels of security to be computational, statistical or perfect:

  • Computational means that no efficient algorithm running in a practical amount of time can reveal the commitment amount or produce fake commitments, except with a small probability.
  • Statistical means that the probability of an adversary doing the same in a finite amount of time is negligible.
  • Perfect means that a quantum adversary (an attacker with infinite computing power) cannot tell what amount has been committed to, and is unable to produce fake commitments.

No commitment scheme can simultaneously be perfectly binding and perfectly hiding ([12], [13]).

Two variations of the Pedersen Commitment scheme share the same security attributes:

  • Pedersen Commitment: The Pedersen Commitment is a system for making a blinded, non-interactive commitment to a value ([1], [3], [8], [14], [15]).
    • The generalized Pedersen Commitment definition follows (refer to Notation Used):

      • Let $ q $ be a large prime and $ p $ be a large safe prime such that $ p = 2q + 1 $.
      • Let $ h $ be a random generator of cyclic group $ \mathbb G $ such that $ h $ is an element of $ \mathbb Z_q^* $.
      • Let $ a $ be a random value and element of $ \mathbb Z_q^* $ and calculate $ g $ such that $ g = h^a $.
      • Let $ r $ (the blinding factor) be a random value and element of $ \mathbb Z_p^* $.
      • The commitment to value $ x $ is then determined by calculating $ C(x,r) = h^r g^x $, which is called the Pedersen Commitment.
      • The generator $ h $ and resulting number $ g $ are known as the commitment bases and should be shared along with $ C(x,r) $, with whomever wishes to open the value.
    • Pedersen Commitments are also additively homomorphic, such that for messages $ x_0 $ and $ x_1 $ and blinding factors $ r_0 $ and $ r_1 $, we have $ C(x_0,r_0) \cdot C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $.

  • Elliptic Curve Pedersen Commitment: An efficient implementation of the Pedersen Commitment will use secure Elliptic Curve Cryptography (ECC), which is based on the algebraic structure of elliptic curves over finite (prime) fields. Elliptic curve points are used as basic mathematical objects, instead of numbers. Note that traditionally in elliptic curve arithmetic, lower-case letters are used for ordinary numbers (integers) and upper-case letters for curve points ([26], [27], [28]).
    • The generalized Elliptic Curve Pedersen Commitment definition follows (refer to Notation Used):

      • Let $ \mathbb F_p $ be the group of elliptic curve points, where $ p $ is a large prime.
      • Let $ G \in \mathbb F_p $ be a random generator point (base point) and let $ H \in \mathbb F_p $ be specially chosen so that the value $ x_H $ satisfying $ H = x_H G $ cannot be found unless the Elliptic Curve DLP (ECDLP) is solved.
      • Let $ r $ (the blinding factor) be a random value and element of $ \mathbb Z_p $.
      • The commitment to value $ x \in \mathbb Z_p $ is then determined by calculating $ C(x,r) = rH + xG ​$, which is called the Elliptic Curve Pedersen Commitment.
    • Elliptic curve point addition is analogous to multiplication in the originally defined Pedersen Commitment. Thus $ g^x $, the number $ g $ multiplied by itself $ x $ times, is analogous to $ xG $, the elliptic curve point $ G $ added to itself $ x $ times. In this context, $ xG $ is also a point in $ \mathbb F_p $.

    • In the Elliptic Curve context, $ C(x,r) = rH + xG $ is then analogous to $ C(x,r) = h^r g^x $.

    • The number $ H $ is what is known as a Nothing Up My Sleeve (NUMS) number. With secp256k1, the value of $ H $ is the SHA256 hash of a simple encoding of the prespecified generator point $ G $.

    • Similar to Pedersen Commitments, the Elliptic Curve Pedersen Commitments are also additively homomorphic, such that for messages $ x $, $ x_0 $ and $ x_1 $, blinding factors $ r $, $ r_0 $ and $ r_1 $ and scalar $ k $, the following relation holds: $ C(x_0,r_0) + C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $ and $ C(k \cdot x, k \cdot r) = k \cdot C(x, r) $.

    • In secure implementations of ECC, it is as hard to guess $ x $ from $ xG $ as it is to guess $ x $ from $g^x $. This is called the Elliptic Curve DLP (ECDLP).

    • Practical implementations usually consist of three algorithms: Setup() to set up the commitment parameters; Commit() to commit to the message using the commitment parameters; and Open() to open and verify the commitment.
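The additively homomorphic property of the Pedersen Commitment defined above can be confirmed numerically. The sketch below uses the toy safe prime $ p = 2q + 1 = 2039 $, far too small for real use:

```python
# Numeric sketch of the Pedersen Commitment C(x, r) = h^r g^x and its
# additive homomorphism, over a toy safe-prime group.
import secrets

q = 1019
p = 2 * q + 1               # safe prime: p = 2q + 1
h = 4                       # generator of the order-q subgroup
a = secrets.randbelow(q - 1) + 1
g = pow(h, a, p)            # second base g = h^a

def commit(x: int, r: int) -> int:
    return (pow(h, r, p) * pow(g, x, p)) % p

x0, r0 = 300, secrets.randbelow(q)
x1, r1 = 450, secrets.randbelow(q)

# C(x0, r0) * C(x1, r1) = C(x0 + x1, r0 + r1)
lhs = (commit(x0, r0) * commit(x1, r1)) % p
rhs = commit(x0 + x1, r0 + r1)
assert lhs == rhs
```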

Security Aspects of (Elliptic Curve) Pedersen Commitments

By virtue of their definition, Pedersen Commitments are only computationally binding but perfectly hiding. A simplified explanation follows.

Suppose Alice wants to commit to a value $ x $ and later prove to Bob that $ x $ is the value that was used to generate the commitment $ C $. Alice selects a random blinding factor $ r $, calculates $ C(x,r) = h^r g^x $ and sends it to Bob. Later, Alice reveals $ x $ and $ r $; Bob redoes the calculation and checks that the commitment $ C $ it produces is the same as the one Alice sent earlier.

Perhaps Alice or Bob has managed to build a computer that can solve the DLP and, given a public key, could find alternative solutions to $ C(x,r) = h^r g^x $ in a reasonable time, i.e. $ C(x,r) = h^{r^\prime} g^{x^\prime} $. Even though Bob could then find values for $ r^\prime $ and $ x^\prime $ that produce $ C $, he cannot know whether those are the specific $ x $ and $ r $ that Alice chose, because many such pairs produce the same $ C $. Pedersen Commitments are thus perfectly hiding.

Although the Pedersen Commitment is perfectly hiding, its binding property relies on the fact that Alice has NOT cracked the DLP and is therefore unable to calculate other pairs of input values that open the commitment to another value when challenged. The Pedersen Commitment is thus only computationally binding.
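The hiding property can be made concrete in a toy group small enough to brute-force: every candidate message has exactly one blinding factor that opens a given commitment, so the commitment alone is consistent with every possible message. The parameters below are the same illustrative toy values as before, not a secure instantiation.

```python
# Brute-force illustration of perfect hiding (tiny toy group only):
# for a commitment C, every candidate message x' has exactly one blinding
# factor r' with C = H^r' * G^x', so C is consistent with every message.
P, Q, G, H = 227, 113, 4, 9          # illustrative toy parameters

def openings(c):
    """Return every (x, r) pair with c = H^r * G^x (mod P)."""
    return [(x, r) for x in range(Q) for r in range(Q)
            if pow(H, r, P) * pow(G, x, P) % P == c]

c = pow(H, 5, P) * pow(G, 42, P) % P   # commitment to x = 42 with r = 5
all_pairs = openings(c)
```

Enumerating `all_pairs` shows one opening per candidate message: an unbounded adversary learns nothing about which $ x $ was committed.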

Bulletproof Protocols

Bulletproof protocols have multiple applications, most of which are discussed in Bulletproofs and Mimblewimble. The following list links these use cases to the different Bulletproof protocols:

  • Bulletproofs provide short, non-interactive, zero-knowledge proofs without a trusted setup. The small size of Bulletproofs reduces overall cost. This has applicability in distributed systems where proofs are transmitted over a network or stored for a long time.

  • Range Proof Protocol with Logarithmic Size provides short, single and aggregatable range proofs, and can be used with multiple blockchain protocols including Mimblewimble. These can be applied to normal transactions or to some smart contract, where committed values need to be proven to be in a specific range without revealing the values.

  • The protocol presented in Aggregating Logarithmic Proofs can be used for Merkle proofs and proof of solvency without revealing any additional information.

  • Range proofs can be compiled for a multi-party, single, joined confidential transaction for their known outputs using MPC Protocol for Bulletproofs. Users do not have to reveal their secret transaction values.

  • Verifiable shuffles and multi-signatures with deterministic nonces can be implemented with Protocol 3.

  • Bulletproofs present an efficient and short zero-knowledge proof for arbitrary Arithmetic Circuitsdef using Zero-knowledge Proof for Arithmetic Circuits.

  • Various Bulletproof protocols can be applied to scriptless scripts, to make them non-interactive and not have to use Sigma protocols.

  • Batch verifications can be done using Optimized Verifier using Multi-exponentiation and Batch Verification, e.g. a blockchain full node receiving a block of transactions needs to verify all transactions as well as range proofs.

A detailed mathematical discussion of the different Bulletproof protocols follows. Protocols 1, 2 and 3 are numbered consistently with [1]. Refer to Notation Used.

Note: Full mathematical definitions and terms not defined are available in [1].

Inner-product Argument (Protocol 1)

The first and most important building block of the Bulletproofs is its efficient algorithm to calculate an inner-product argument for two independent (not related) binding vector Pedersen Commitmentsdef. Protocol 1 is an argument of knowledge that the prover $ \mathcal{P} $ knows the openings of two binding Pedersen vector commitments that satisfy a given inner product relation. Let inputs to the inner-product argument be independent generators $ g,h \in \mathbb G^n $, a scalar $ c \in \mathbb Z_p $ and $ P \in \mathbb G $. The argument lets the prover $ \mathcal{P} $ convince a verifier $ \mathcal{V} $ that the prover $ \mathcal{P} $ knows two vectors $ \mathbf a, \mathbf b \in \mathbb Z^n_p ​$ such that $$ P =\mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \mspace{30mu} \mathrm{and} \mspace{30mu} c = \langle \mathbf {a} \mspace{3mu}, \mspace{3mu} \mathbf {b} \rangle $$

$ P ​$ is referred to as the binding vector commitment to $ \mathbf a, \mathbf b ​$. The inner product argument is an efficient proof system for the following relation:

$$ { (\mathbf {g},\mathbf {h} \in \mathbb G^n , \mspace{12mu} P \in \mathbb G , \mspace{12mu} c \in \mathbb Z_p ; \mspace{12mu} \mathbf {a}, \mathbf {b} \in \mathbb Z^n_p ) \mspace{3mu} : \mspace{15mu} P = \mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \mspace{3mu} \wedge \mspace{3mu} c = \langle \mathbf {a} \mspace{3mu}, \mspace{3mu} \mathbf {b} \rangle } \tag{1} $$

Relation (1) requires sending $ 2n $ elements to the verifier $ \mathcal{V} $. The inner product $ c = \langle \mathbf {a} \mspace{3mu} , \mspace{3mu} \mathbf {b} \rangle \ $ can be made part of the vector commitment $ P \in \mathbb G $. This will enable sending only $ 2 \log_2 (n) $ elements to the verifier $ \mathcal{V} $ (explained in the next section). For a given $ P \in \mathbb G $, the prover $ \mathcal{P} $ proves that it has vectors $ \mathbf {a}, \mathbf {b} \in \mathbb Z^n_p $ for which $ P =\mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \cdot u^{ \langle \mathbf {a}, \mathbf {b} \rangle } $. Here $ u \in \mathbb G $ is a fixed group element with an unknown discrete-log relative to $ \mathbf {g},\mathbf {h} \in \mathbb G^n $.

$$ { (\mathbf {g},\mathbf {h} \in \mathbb G^n , \mspace{12mu} u,P \in \mathbb G ; \mspace{12mu} \mathbf {a}, \mathbf {b} \in \mathbb Z^n_p ) : \mspace{15mu} P = \mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \cdot u^{ \langle \mathbf {a}, \mathbf {b} \rangle } } \tag{2} $$

A proof system for relation (2) gives a proof system for (1) with the same complexity, thus only a proof system for relation (2) is required.
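The shape of the commitment $ P = \mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \cdot u^{ \langle \mathbf {a}, \mathbf {b} \rangle } $ in relation (2) can be sketched in a toy multiplicative group of prime order. The generator values are arbitrary squares mod 227 and are not cryptographically independent; the sketch only illustrates the computation, not a secure instantiation.

```python
# Toy version of the commitment P = g^a * h^b * u^<a,b> from relation (2),
# in a multiplicative group of prime order Q (illustrative parameters).
P_MOD, Q = 227, 113
GS = [4, 9, 16, 25]      # generator vector g
HS = [36, 49, 64, 81]    # generator vector h
U = 100                  # fixed group element u

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % Q

def vector_commit(a, b):
    """P = prod(g_i^a_i) * prod(h_i^b_i) * u^<a,b> (mod P_MOD)."""
    acc = 1
    for base, exp in zip(GS + HS, a + b):
        acc = acc * pow(base, exp, P_MOD) % P_MOD
    return acc * pow(U, inner(a, b), P_MOD) % P_MOD

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
P_commit = vector_commit(a, b)
```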

Protocol 1 is then defined as the proof system for relation (2) as shown in Figure 1. The element $ u $ is raised to a random power $ x $ (the challenge) chosen by the verifier $ \mathcal{V} $ to ensure that the extracted vectors $ \mathbf {a}, \mathbf {b} $ from Protocol 2 satisfy $ c = \langle \mathbf {a} , \mathbf {b} \rangle $.

Figure 1: Bulletproofs Protocol 1 [1]

The argument presented in Protocol 1 has the following Commitment Scheme properties:

  • Perfect completeness (hiding): Every validity/truth is provable. Also refer to Definition 9 in [1].
  • Statistical witness extended emulation (binding): Robust against either extracting a non-trivial discrete logarithm relation between $ \mathbf {g} , \mathbf {h} , u $ or extracting a valid witness $ \mathbf {a}, \mathbf {b} $.

How Proof System for Protocol 1 Works, Shrinking by Recursion

Protocol 1 uses an inner product argument of two vectors $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ of size $ n $. The Pedersen Commitment scheme allows a vector to be cut in half and the two halves to then be compressed together. Let $ \mathrm H : \mathbb Z^{2n+1}_p \to \mathbb G $ be a hash function for commitment $ P $, with $ P = \mathrm H(\mathbf a , \mathbf b, \langle \mathbf a, \mathbf b \rangle) $. Note that the hash function $ \mathrm H $, and thus the commitment $ P $, is additively homomorphic; therefore, sliced vectors of $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ can be hashed together with inner product $ c = \langle \mathbf a , \mathbf b \rangle \in \mathbb Z_p$. If $ n ^\prime = n/2 $, starting with relation (2), then

$$ \begin{aligned} \mathrm H(\mathbf a \mspace{3mu} , \mspace{3mu} \mathbf b \mspace{3mu} , \mspace{3mu} \langle \mathbf a , \mathbf b \rangle) &= \mathbf{g} ^\mathbf{a} \mathbf{h} ^\mathbf{b} \cdot u^{ \langle \mathbf a, \mathbf b \rangle} \mspace{20mu} \in \mathbb G \\ \mathrm H(\mathbf a_{[: n ^\prime]} \mspace{3mu} , \mspace{3mu} \mathbf a_{[n ^\prime :]} \mspace{3mu} , \mspace{3mu} \mathbf b_{[: n ^\prime]} \mspace{3mu} , \mspace{3mu} \mathbf b_{[n ^\prime :]} \mspace{3mu} , \mspace{3mu} \langle \mathbf {a}, \mathbf {b} \rangle) &= \mathbf g ^ {\mathbf a_{[: n ^\prime]}} _{[: n ^\prime]} \cdot \mathbf g ^ {\mathbf a_{[n ^\prime :]}} _{[n ^\prime :]} \cdot \mathbf h ^ {\mathbf b_{[: n ^\prime]}} _{[: n ^\prime]} \cdot \mathbf h ^ {\mathbf b_{[n ^\prime :]}} _{[n ^\prime :]} \cdot u^{\langle \mathbf {a}, \mathbf {b} \rangle} \mspace{20mu} \in \mathbb G \end{aligned} $$

Commitment $ P $ can then further be sliced into $ L $ and $ R $ as follows:

$$ \begin{aligned} P &= \mathrm H(\mspace{3mu} \mathbf a_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \mathbf a_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} \mathbf b_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \mathbf b_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} \langle \mathbf {a}, \mathbf {b} \rangle \mspace{49mu}) \mspace{20mu} \in \mathbb G \\ L &= \mathrm H(\mspace{3mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} \mathbf a_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \mathbf b_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} \langle \mathbf {a_{[: n ^\prime]}} , \mathbf {b_{[n ^\prime :]}} \rangle \mspace{3mu}) \mspace{20mu} \in \mathbb G \\ R &= \mathrm H(\mspace{3mu} \mathbf a_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} \mathbf b_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \langle \mathbf {a_{[n ^\prime :]}} , \mathbf {b_{[: n ^\prime]}} \rangle \mspace{3mu}) \mspace{20mu} \in \mathbb G \end{aligned} $$

The first reduction step is as follows:

  • The prover $ \mathcal{P} $ calculates $ L,R \in \mathbb G $ and sends it to the verifier $ \mathcal{V} $.
  • The verifier $ \mathcal{V} $ chooses a random $ x \overset{\$}{\gets} \mathbb Z _p $ and sends it to the prover $ \mathcal{P} $.
  • The prover $ \mathcal{P} ​$ calculates $ \mathbf a^\prime , \mathbf b^\prime \in \mathbb Z^{n^\prime}_p ​$ and sends it to the verifier $ \mathcal{V} ​$:

$$ \begin{aligned} \mathbf a ^\prime &= x\mathbf a _{[: n ^\prime]} + x^{-1} \mathbf a _{[n ^\prime :]} \in \mathbb Z^{n^\prime}_p \\ \mathbf b ^\prime &= x^{-1}\mathbf b _{[: n ^\prime]} + x \mathbf b _{[n ^\prime :]} \in \mathbb Z^{n^\prime}_p \end{aligned} $$

  • The verifier $ \mathcal{V} ​$ calculates $ P^\prime = L^{x^2} \cdot P \cdot R^{x^{-2}} ​$ and accepts (verify true) if

$$ P^\prime = \mathrm H ( x^{-1} \mathbf a^\prime \mspace{3mu} , \mspace{3mu} x \mathbf a^\prime \mspace{6mu} , \mspace{6mu} x \mathbf b^\prime \mspace{3mu} , \mspace{3mu} x^{-1} \mathbf b^\prime \mspace{3mu} , \mspace{3mu} \langle \mathbf a^\prime , \mathbf b^\prime \rangle ) \tag{3} $$

So far, the prover $ \mathcal{P} $ only sent $ n + 2 $ elements to the verifier $ \mathcal{V} $, i.e. the four tuple $ ( L , R , \mathbf a^\prime , \mathbf b^\prime ) $, approximately half the length compared to sending the complete $ \mathbf a, \mathbf b \in \mathbb Z^n_p $. The test in relation (3) is the same as testing that $$ P^\prime = (\mathbf g ^ {x^{-1}} _{[: n ^\prime]} \circ \mathbf g ^ x _{[n ^\prime :]})^{\mathbf a^\prime} \cdot (\mathbf h ^ x _{[: n ^\prime]} \circ \mathbf h ^ {x^{-1}} _{[n ^\prime :]})^{\mathbf b^\prime} \cdot u^{\langle \mathbf a^\prime , \mathbf b^\prime \rangle} \tag{4} $$

Thus, the prover $ \mathcal{P} ​$ and verifier $ \mathcal{V} ​$ can recursively engage in an inner-product argument for $ P^\prime ​$ with respect to generators

$$ (\mathbf g ^ {x^{-1}} _{[: n ^\prime]} \circ \mathbf g ^ x _{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} \mathbf h ^ x _{[: n ^\prime]} \circ \mathbf h ^ {x^{-1}} _{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} u ) $$

which will result in a $ \log _2 n $ round protocol with $ 2 \log _2 n $ elements in $ \mathbb G $ and $ 2 $ elements in $ \mathbb Z _p $. The prover $ \mathcal{P} $ ends up sending the following terms to the verifier $ \mathcal{V} ​$: $$ (L_1 , R_1) \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} (L_{\log _2 n} , R _{\log _2 n}) \mspace{3mu} , \mspace{3mu} (a , b) $$

where $ a,b \in \mathbb Z _p ​$ are only sent right at the end. This protocol can be made non-interactive using the Fiat-Shamirdef heuristic.
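The scalar side of one folding round can be checked directly: the folded vectors halve in length, and their inner product picks up exactly the cross-terms that $ L $ and $ R $ carry. The field size and example vectors below are arbitrary toy choices.

```python
# One folding round of Protocol 1, checked on the scalar side: with
# a' = x*a_lo + x^-1*a_hi and b' = x^-1*b_lo + x*b_hi, the new inner
# product satisfies <a',b'> = x^2*<a_lo,b_hi> + <a,b> + x^-2*<a_hi,b_lo>,
# i.e. exactly the cross-terms absorbed by P' = L^(x^2) * P * R^(x^-2).
Q = 113                      # toy prime field (illustrative)

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % Q

def fold(vec, w_lo, w_hi):
    n2 = len(vec) // 2
    return [(w_lo * lo + w_hi * hi) % Q
            for lo, hi in zip(vec[:n2], vec[n2:])]

a, b = [3, 1, 4, 1], [5, 9, 2, 6]
x = 7
x_inv = pow(x, -1, Q)        # modular inverse of the challenge

a_prime = fold(a, x, x_inv)  # a' = x*a_lo + x^-1*a_hi
b_prime = fold(b, x_inv, x)  # b' = x^-1*b_lo + x*b_hi
```

Repeating the step $ \log_2 n $ times shrinks the vectors down to single scalars $ a, b $.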

Inner-product Verification through Multi-exponentiation (Protocol 2)

The inner product argument to be calculated is that of two vectors $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ of size $ n $. Protocol 2 has a logarithmic number of rounds. In each round, the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ calculate a new set of generators $ ( \mathbf g ^\prime , \mathbf h ^\prime ) $, which would require a total of $ 4n $ computationally expensive exponentiations. Multi-exponentiation is a technique to reduce the number of exponentiations for a given calculation. In Protocol 2, the number of exponentiations is reduced to a single multi-exponentiation by delaying all the exponentiations until the last round. It can also be made non-interactive using the Fiat-Shamirdef heuristic, providing a further speedup.

Let $ g $ and $ h $ be the generators used in the final round of the protocol and $ x_j $ be the challenge from the $ j $th round. In the last round, the verifier $ \mathcal{V} $ checks that $ g^a h^b u ^{a \cdot b} = P $, where $ a,b \in \mathbb Z_p $ are given by the prover $ \mathcal{P} $. The final $ g $ and $ h $ can be expressed in terms of the input generators $ \mathbf {g},\mathbf {h} \in \mathbb G^n $ as:

$$ g = \prod _{i=1}^n g_i^{s_i} \in \mathbb{G}, \mspace{21mu} h=\prod _{i=1}^n h_i^{1/s_i} \in \mathbb{G} $$

where $ \mathbf {s} = (s_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} s_n) \in \mathbb Z_p^n $ only depends on the challenges $ (x_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} x_{\log_2(n)}) \in \mathbb Z_p^{\log_2(n)} $. The scalars of $ \mathbf {s} $ are calculated as follows:

$$ s_i = \prod ^{\log _2 (n)} _{j=1} x ^{b(i,j)} _j \mspace{15mu} \mathrm {for} \mspace{15mu} i = 1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} n $$


where $$ b(i,j) = \begin{cases} \mspace{12mu} 1 \mspace{18mu} \text{if the} \mspace{4mu} j \text{th bit of} \mspace{4mu} i-1 \mspace{4mu} \text{is} \mspace{4mu} 1 \\ -1 \mspace{15mu} \text{otherwise} \end{cases} $$
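A sketch of how these scalars could be derived from the round challenges follows. The bit-indexing convention (the first-round challenge matching the most significant bit, consistent with the halving in Protocol 1) is an assumption of this toy sketch.

```python
# Deriving the verifier's scalars s_i from the round challenges x_j
# (toy field). Challenge x_j appears with exponent +1 when the j-th bit
# of i-1 is set, else -1; the first-round challenge is assumed to match
# the most significant bit, consistent with the halving in Protocol 1.
Q = 113

def s_vector(challenges, n):
    k = len(challenges)              # k = log2(n) rounds
    s = []
    for i in range(1, n + 1):        # 1-based index i as in the paper
        acc = 1
        for j, x_j in enumerate(challenges):
            bit = (i - 1) >> (k - 1 - j) & 1
            acc = acc * (x_j if bit else pow(x_j, -1, Q)) % Q
        s.append(acc)
    return s
```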

The entire verification check in the protocol reduces to a single multi-exponentiation of size $ 2n + 2 \log_2(n) + 1 ​$:

$$ \mathbf g^{a \cdot \mathbf{s}} \cdot \mathbf h^{b \cdot\mathbf{s^{-1}}} \cdot u^{a \cdot b} \mspace{12mu} \overset{?}{=} \mspace{12mu} P \cdot \prod _{j=1}^{\log_2(n)} L_j^{x_j^2} \cdot R_j^{x_j^{-2}} \tag{5} $$

Figure 2 shows Protocol 2:

Figure 2: Bulletproofs Protocol 2 [1]

Range Proof Protocol with Logarithmic Size

This protocol provides short and aggregatable range proofs, using the improved inner product argument from Protocol 1. It is built up in five parts:

  • how to construct a range proof that requires the verifier $ \mathcal{V} $ to check an inner product between two vectors;
  • how to replace the inner product argument with an efficient inner-product argument;
  • how to efficiently aggregate $ m $ range proofs into one short proof;
  • how to make interactive public coin protocols non-interactive by using the Fiat-Shamirdef heuristic; and
  • how to allow multiple parties to construct a single aggregate range proof.

Figure 3 gives a diagrammatic overview of a range proof protocol implementation using Elliptic Curve Pedersen Commitmentsdef:

Figure 3: Range Proof Protocol Implementation Example [22]

Inner-product Range Proof

This protocol provides the ability to construct a range proof that requires the verifier $ \mathcal{V} ​$ to check an inner product between two vectors. The range proof is constructed by exploiting the fact that a Pedersen Commitment $ V ​$ is an element in the same group $ \mathbb G ​$ that is used to perform the inner product argument. Let $ v \in \mathbb Z_p ​$ and let $ V \in \mathbb G ​$ be a Pedersen Commitment to $ v ​$ using randomness $ \gamma ​$. The proof system will convince the verifier $ \mathcal{V} ​$ that commitment $ V ​$ contains a number $ v \in [0,2^n - 1] ​$ such that

$$ { (g,h \in \mathbb{G} \mspace{3mu} , \mspace{6mu} V \in \mathbb{G} \mspace{3mu} , \mspace{6mu} n \mspace{3mu} ; \mspace{12mu} v, \gamma \in \mathbb{Z_p} ) \mspace{3mu} : \mspace{3mu} V =h^\gamma g^v \mspace{5mu} \wedge \mspace{5mu} v \in [0,2^n - 1] } $$

without revealing $ v $. Let $ \mathbf {a}_L = (a_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} a_n) \in \{0,1\}^n $ be the vector containing the bits of $ v, $ so that $ \langle \mathbf {a}_L, \mathbf {2}^n \rangle = v $. The prover $ \mathcal{P} $ commits to $ \mathbf {a}_L $ using a constant size vector commitment $ A \in \mathbb{G} $. It will convince the verifier $ \mathcal{V} $ that $ v $ is in $ [0,2^n - 1] $ by proving that it knows an opening $ \mathbf {a}_L \in \mathbb Z_p^n $ of $ A $ and $ v, \gamma \in \mathbb{Z_p} $ such that $ V =h^\gamma g^v $ and

$$ \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {2}^n \rangle = v \mspace{20mu} \mathrm{and} \mspace{20mu} \mathbf {a}_R = \mathbf {a}_L - \mathbf {1}^n \mspace{20mu} \mathrm{and} \mspace{20mu} \mathbf {a}_L \circ \mathbf {a}_R = \mathbf{0}^n \mspace{20mu} \tag{6} $$

This proves that $ a_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} a_n $ are all in $ \{0,1\} $ and that $ \mathbf {a}_L $ is composed of the bits of $ v $. However, the $ 2n + 1 $ constraints need to be expressed as a single inner-product constraint so that Protocol 1 can be used, by letting the verifier $ \mathcal{V} $ choose a random linear combination of the constraints. To prove that a committed vector $ \mathbf {b} \in \mathbb Z_p^n $ satisfies $ \mathbf {b} = \mathbf{0}^n $, it suffices for the verifier $ \mathcal{V} $ to send a random $ y \in \mathbb{Z_p} $ to the prover $ \mathcal{P} $ and for the prover $ \mathcal{P} $ to prove that $ \langle \mathbf {b}, \mathbf {y}^n \rangle = 0 $, which will convince the verifier $ \mathcal{V} $ that $ \mathbf {b} = \mathbf{0}^n $. The prover $ \mathcal{P} $ can thus prove relation (6) by proving that $$ \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {2}^n \rangle = v \mspace{20mu} \mathrm{and} \mspace{20mu} \langle \mathbf {a}_L - \mathbf {1}^n - \mathbf {a}_R \mspace{3mu} , \mspace{3mu} \mathbf {y}^n \rangle=0 \mspace{20mu} \mathrm{and} \mspace{20mu} \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {a}_R \circ \mathbf {y}^n \rangle = 0 \mspace{20mu} $$
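For a concrete value of $ v $, the constraints of relation (6) can be checked directly before any blinding is applied. All parameters below are arbitrary toy choices.

```python
# The prover's side of relation (6) for a concrete value, before blinding:
# a_L holds the bits of v, a_R = a_L - 1^n, and all three constraints hold
# simultaneously (toy parameters, little-endian bit order).
n = 8
v = 181                                    # must lie in [0, 2^n - 1]

a_L = [v >> i & 1 for i in range(n)]       # bits of v
a_R = [bit - 1 for bit in a_L]             # a_R = a_L - 1^n
two_n = [1 << i for i in range(n)]         # the vector (1, 2, 4, ..., 2^(n-1))

assert sum(l * t for l, t in zip(a_L, two_n)) == v   # <a_L, 2^n> = v
assert all(l * r == 0 for l, r in zip(a_L, a_R))     # a_L o a_R = 0^n
assert all(l - r == 1 for l, r in zip(a_L, a_R))     # a_L - a_R = 1^n
```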

Building on this, the verifier $ \mathcal{V} $ chooses a random $ z \in \mathbb{Z_p} $ and lets the prover $ \mathcal{P} $ prove that

$$ z^2 \cdot \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {2}^n \rangle + z \cdot \langle \mathbf {a}_L - \mathbf {1}^n - \mathbf {a}_R \mspace{3mu} , \mspace{3mu} \mathbf {y}^n \rangle + \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {a}_R \circ \mathbf {y}^n \rangle = z^2 \cdot v \mspace{20mu} \tag{7} $$

Relation (7) can be rewritten as

$$ \langle \mathbf {a}_L - z \cdot \mathbf {1}^n \mspace{3mu} , \mspace{3mu} \mathbf {y}^n \circ (\mathbf {a}_R + z \cdot \mathbf {1}^n) +z^2 \cdot \mathbf {2}^n \rangle = z^2 \cdot v + \delta (y,z) \tag{8} $$


where $$ \delta (y,z) = (z-z^2) \cdot \langle \mathbf {1}^n \mspace{3mu} , \mspace{3mu} \mathbf {y}^n\rangle -z^3 \cdot \langle \mathbf {1}^n \mspace{3mu} , \mspace{3mu} \mathbf {2}^n\rangle \in \mathbb{Z_p} $$

can be easily calculated by the verifier $ \mathcal{V} ​$. The proof that relation (6) holds was thus reduced to a single inner-product identity.
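This reduction can be checked numerically for small toy parameters: the single inner product of relation (8) equals $ z^2 \cdot v + \delta(y,z) $ whenever $ \mathbf a_L $ holds the bits of $ v $. All values below are arbitrary illustrative choices.

```python
# Numerical check (toy field, arbitrary values) that folding the three
# constraints into the single inner product of relation (8) leaves exactly
# delta(y,z) = (z - z^2)*<1^n, y^n> - z^3*<1^n, 2^n> on the verifier's side.
Q, n, v = 113, 4, 11
y, z = 5, 7

a_L = [v >> i & 1 for i in range(n)]            # bits of v
a_R = [(bit - 1) % Q for bit in a_L]            # a_L - 1^n, reduced mod Q
yn = [pow(y, i, Q) for i in range(n)]           # the vector y^n
twon = [pow(2, i, Q) for i in range(n)]         # the vector 2^n

def inner(a, b):
    return sum(s * t for s, t in zip(a, b)) % Q

delta = ((z - z * z) * inner([1] * n, yn)
         - pow(z, 3, Q) * inner([1] * n, twon)) % Q

lhs = inner([(l - z) % Q for l in a_L],
            [(yi * (r + z) + pow(z, 2, Q) * ti) % Q
             for yi, r, ti in zip(yn, a_R, twon)])
rhs = (pow(z, 2, Q) * v + delta) % Q
```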

Relation (8) cannot be used in its current form without revealing information about $ \mathbf {a}_L $. Two additional blinding vectors $ \mathbf {s}_L , \mathbf {s}_R \in \mathbb Z_p^n $ are introduced with the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ engaging in the zero-knowledge protocol shown in Figure 4:

Figure 4: Bulletproofs Inner-product Range Proof Part A [1]

Two linear vector polynomials $ l(X), r(X) \in \mathbb Z^n_p[X] $ are defined as the inner-product terms for relation (8), also containing the blinding vectors $ \mathbf {s}_L $ and $ \mathbf {s}_R $. A quadratic polynomial $ t(X) \in \mathbb Z_p[X] $ is then defined as the inner product between the two vector polynomials $ l(X), r(X) $ such that

$$ t(X) = \langle l(X) \mspace{3mu} , \mspace{3mu} r(X) \rangle = t_0 + t_1 \cdot X + t_2 \cdot X^2 \mspace{10mu} \in \mathbb {Z}_p[X] $$
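A quick numerical illustration of this identity, with arbitrary stand-in vectors for the coefficients of $ l(X) $ and $ r(X) $ in a toy field:

```python
# t(X) = <l(X), r(X)> for linear vector polynomials l(X) = l0 + l1*X and
# r(X) = r0 + r1*X: the coefficients t0, t1, t2 follow directly, and
# evaluating t at any point agrees with the inner product of l and r there.
# (Toy field; l0, l1, r0, r1 are arbitrary stand-ins for the protocol's vectors.)
Q = 113

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % Q

l0, l1 = [3, 1, 4], [1, 5, 9]
r0, r1 = [2, 6, 5], [3, 5, 8]

t0 = inner(l0, r0)
t1 = (inner(l0, r1) + inner(l1, r0)) % Q
t2 = inner(l1, r1)

def t_at(x):
    return (t0 + t1 * x + t2 * x * x) % Q

def lr_at(x):
    l = [(c0 + c1 * x) % Q for c0, c1 in zip(l0, l1)]
    r = [(c0 + c1 * x) % Q for c0, c1 in zip(r0, r1)]
    return inner(l, r)
```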

The blinding vectors $ \mathbf {s}_L ​$ and $ \mathbf {s}_R ​$ ensure that the prover $ \mathcal{P} ​$ can publish $ l(x) ​$ and $ r(x) ​$ for one $ x \in \mathbb Z_p^* ​$ without revealing any information about $ \mathbf {a}_L ​$ and $ \mathbf {a}_R ​$. The constant term $ t_0 ​$ of the quadratic polynomial $ t(X) ​$ is then the result of the inner product in relation (8), and the prover $ \mathcal{P} ​$ needs to convince the verifier $ \mathcal{V} ​$ that

$$ t_0 = z^2 \cdot v + \delta (y,z) $$

In order to do so, the prover $ \mathcal{P} $ convinces the verifier $ \mathcal{V} $ that it has a commitment to the remaining coefficients of $ t(X) $, namely $ t_1,t_2 \in \mathbb Z_p $, by checking the value of $ t(X) $ at a random point $ x \in \mathbb Z_p^* $. This is illustrated in Figure 5:

Figure 5: Bulletproofs Inner-product Range Proof Part B [1]

The verifier $ \mathcal{V} $ now needs to check that $ l $ and $ r $ are in fact $ l(x) $ and $ r(x) $, and that $ t(x) = \langle l \mspace{3mu} , \mspace{3mu} r \rangle $. A commitment to $ \mathbf {a}_R \circ \mathbf {y}^n $ is needed; to achieve this, the commitment generators are switched from $ \mathbf h \in \mathbb G^n $ to $ \mathbf h ^\backprime = \mathbf h^{(\mathbf {y}^{-1})}$. Thus $ A $ and $ S $ now become vector commitments to $ ( \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {a}_R \circ \mathbf {y}^n ) $ and $ ( \mathbf {s}_L \mspace{3mu} , \mspace{3mu} \mathbf {s}_R \circ \mathbf {y}^n ) $ respectively, with respect to the new generators $ (g, \mathbf h ^\backprime, h) $. This is illustrated in Figure 6.

Figure 6: Bulletproofs Inner-product Range Proof Part C [1]

The range proof presented here has the following Commitment Scheme properties:

  • Perfect completeness (hiding). Every validity/truth is provable. Also refer to Definition 9 in [1].
  • Perfect special Honest Verifier Zero-knowledge (HVZK). The verifier $ \mathcal{V} $ behaves according to the protocol. Also refer to Definition 12 in [1].
  • Computational witness extended emulation (binding). A witness can be computed in time closely related to time spent by the prover $ \mathcal{P} $. Also refer to Definition 10 in [1].

Logarithmic Range Proof

This protocol replaces the inner product argument with an efficient inner-product argument. In step (63) in Figure 5, the prover $ \mathcal{P} $ transmits $ \mathbf {l} $ and $ \mathbf {r} $ to the verifier $ \mathcal{V} $, but their size is linear in $ n $. To make this efficient, a proof size that is logarithmic in $ n $ is needed. The transfer of $ \mathbf {l} $ and $ \mathbf {r} $ can be eliminated with an inner-product argument. Checking correctness of $ \mathbf {l} $ and $ \mathbf {r} $ (step 67 in Figure 6) and $ \hat {t} $ (step 68 in Figure 6) is the same as verifying that the witness $ \mathbf {l} , \mathbf {r} $ satisfies the inner product of relation (2) on public input $ (\mathbf {g} , \mathbf {h} ^ \backprime , P \cdot h^{-\mu}, \hat t) $. Transmission of vectors $ \mathbf {l} $ and $ \mathbf {r} $ to the verifier $ \mathcal{V} $ (step 63 in Figure 5) can then be eliminated and transfer of information limited to the scalars $ ( \tau _x , \mu , \hat t ) $ alone, thereby achieving a proof size that is logarithmic in $ n $.

Aggregating Logarithmic Proofs

This protocol efficiently aggregates $ m $ range proofs into one short proof with a slight modification to the protocol presented in Inner-product Range Proof. For aggregate range proofs, the inputs of one range proof do not affect the output of another range proof. Aggregating logarithmic range proofs is especially helpful if a single prover $ \mathcal{P} $ needs to perform multiple range proofs at the same time.

A proof system must be presented for the following relation:

$$ { (g,h \in \mathbb{G} \mspace{3mu} , \mspace{9mu} \mathbf {V} \in \mathbb{G}^m \mspace{3mu} ; \mspace{9mu} \mathbf {v}, \mathbf {\gamma} \in \mathbb Z_p^m ) \mspace{6mu} : \mspace{6mu} V_j = h^{\gamma_j} g^{v_j} \mspace{6mu} \wedge \mspace{6mu} v_j \in [0,2^n - 1] \mspace{15mu} \forall \mspace{15mu} j \in [1,m] } \tag{9} $$

The prover $ \mathcal{P} $ should now compute $ \mspace{3mu} \mathbf a_L \in \mathbb Z_p^{n \cdot m} $ as the concatenation of all of the bits for every $ v_j $ such that

$$ \langle \mathbf{2}^n \mspace{3mu} , \mspace{3mu} \mathbf a_L[(j-1) \cdot n : j \cdot n-1] \rangle = v_j \mspace{9mu} \forall \mspace{9mu} j \in [1,m] \mspace{3mu} $$

The quantity $ \delta (y,z) $ is adjusted to incorporate more cross-terms $ n \cdot m $ , the linear vector polynomials $ l(X), r(X) $ are adjusted to be in $ \mathbb Z^{n \cdot m}_p[X] $ and the blinding factor $ \tau_x $ for the inner product $ \hat{t} $ (step 61 in Figure 5) is adjusted for the randomness of each commitment $ V_j $. The verification check (step 65 in Figure 6) is updated to include all $ V_j $ commitments and the definition of $ P $ (step 66 in Figure 6) is changed to be a commitment to the new $ r $.

This aggregated range proof that makes use of the inner product argument only uses $ 2 \cdot \lceil \log _2 (n \cdot m) \rceil + 4 $ group elements and $ 5 $ elements in $ \mathbb Z_p $. The growth in size is limited to an additive term $ 2 \cdot \lceil \log _2 (m) \rceil $ as opposed to a multiplicative factor $ m $ for $ m $ independent range proofs.
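Counting group elements only (each proof also carries a constant number of $ \mathbb Z_p $ elements, ignored here), the saving can be computed directly from these formulas:

```python
# Size comparison (group elements only): one aggregated proof covering
# m range proofs of n bits each, versus m separate proofs.
from math import ceil, log2

def aggregated_size(n, m):
    """Group elements in one aggregated proof of m ranges of n bits."""
    return 2 * ceil(log2(n * m)) + 4

def separate_size(n, m):
    """Group elements in m independent n-bit range proofs."""
    return m * (2 * ceil(log2(n)) + 4)
```

For example, aggregating sixteen 64-bit range proofs needs 24 group elements rather than 256.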

The aggregate range proof presented here has the following Commitment Scheme properties:

  • Perfect completeness (hiding): Every validity/truth is provable. Also refer to Definition 9 in [1].
  • Perfect special HVZK: The verifier $ \mathcal{V} $ behaves according to the protocol. Also refer to Definition 12 in [1].
  • Computational witness extended emulation (binding): A witness can be computed in time closely related to time spent by the prover $ \mathcal{P} $. Also refer to Definition 10 in [1].

Non-interactive Proof through Fiat-Shamir Heuristic

So far, the verifier $ \mathcal{V} ​$ behaves as an honest verifier and all messages are random elements from $ \mathbb Z_p^* ​$. These are the prerequisites needed to convert the protocol presented so far into a non-interactive protocol that is secure and has full zero-knowledge in the random oracle model (thus without a trusted setup) using the Fiat-Shamir Heuristicdef.

MPC Protocol for Bulletproofs

The Multi-party Computation (MPC) protocol for Bulletproofs allows multiple parties to construct a single, simple, efficient, aggregate range proof designed for Bulletproofs. This is valuable when multiple parties want to create a single joined confidential transaction, where each party knows some of the inputs and outputs, and needs to create range proofs for their known outputs. In Bulletproofs, $ m $ parties, each having a Pedersen Commitment $ (V_k)_{k=1}^m $, can generate a single Bulletproof proving that each $ V_k $ commits to a value in some fixed range.

Let a superscript $ (k) $ denote the $ k $th party's share; thus $ A^{(k)} $ is generated using only inputs of party $ k $. A set of distinct generators $ (g^{(k)}, h^{(k)})^m_{k=1} $ is assigned to each party, and $ \mathbf g,\mathbf h $ is defined as the interleaved concatenation of all $ g^{(k)} , h^{(k)} $ such that

$$ g_i=g_{\lceil {i \over{m}} \rceil}^{((i-1) \mod m+1)} \mspace{15mu} \mathrm{and} \mspace{15mu} h_i=h_{\lceil {i \over{m}} \rceil}^{((i-1) \mod m+1)} $$
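The interleaved concatenation can be sketched as an index mapping (1-based indices as in the formula above; the placeholder strings stand in for actual group elements):

```python
# Interleaved concatenation of the per-party generators, following
# g_i = g^(((i-1) mod m) + 1)_ceil(i/m) with 1-based indices.
from math import ceil

def interleave(per_party, m):
    """per_party[k-1] is party k's generator list; returns the combined vector."""
    n = m * len(per_party[0])
    return [per_party[(i - 1) % m][ceil(i / m) - 1] for i in range(1, n + 1)]
```

With two parties holding `["a1", "a2"]` and `["b1", "b2"]`, the combined vector alternates between the parties' generators.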

The protocol either uses three rounds with linear communication in both $ m $ and the binary encoding of the range, or it uses a logarithmic number of rounds and communication that is only linear in $ m $. For the linear communication case, the protocol in Inner-product Range Proof is followed with the difference that each party generates its part of the proof using its own inputs and generators, i.e.

$$ A^{(k)} , S^{(k)}; \mspace{15mu} T_1^{(k)} , T_2^{(k)}; \mspace{15mu} \tau_x^{(k)} , \mu^{(k)} , \hat{t}^{(k)} , \mathbf{l}^{(k)} , \mathbf{r}^{(k)} $$

These shares are sent to a dealer (this could be anyone, even one of the parties) who adds them homomorphically to generate the respective proof components, e.g.

$$ A = \prod^m_{k=1} A^{(k)} \mspace{15mu} \mathrm{and} \mspace{15mu} \tau_x = \prod^m_{k=1} \tau_x^{(k)} $$

In each round, the dealer generates the challenges using the Fiat-Shamirdef heuristic and the combined proof components, and sends them to each party. In the end, each party sends $ \mathbf{l}^{(k)},\mathbf{r}^{(k)} $ to the dealer, who computes $ \mathbf{l},\mathbf{r} $ as the interleaved concatenation of all shares. The dealer runs the inner product argument (Protocol 1) to generate the final proof. Each proof component is the (homomorphic) sum of each party's proof components and each share constitutes part of a separate zero-knowledge proof. Figure 7 shows an example of the MPC protocol implementation using three rounds with linear communication [29]:

Figure 7: MPC Implementation Example [29]

The communication can be reduced by running a second MPC protocol for the inner product argument, reducing the rounds to $ \log_2(l) $. Up to the last $ \log_2(l) $ round, each party's witnesses are independent, and the overall witness is the interleaved concatenation of the parties' witnesses. The parties compute $ L^{(k)}, R^{(k)} $ in each round and the dealer computes $ L, R $ as the homomorphic sum of the shares. In the final round, the dealer generates the final challenge and sends it to each party, who in turn send their witness to the dealer, who completes Protocol 2.

MPC Protocol Security Discussion

With the standard MPC protocol implementation as depicted in Figure 7, there's no guarantee that the dealer behaves honestly according to the specified protocol and generates challenges honestly. Since the Bulletproofs protocol is special HVZK ([30], [31]) only, a secure MPC protocol requires all parties to receive partial proof elements and independently compute aggregated challenges, in order to avoid the alternative case where a single dealer maliciously generates them. A single dealer can, however, assign index positions to all parties at the start of the protocol, and may complete the inner product compression, since it does not rely on partial proof data.

HVZK implies witness-indistinguishability [32], but it isn't clear what the implications of this would be on the MPC protocol in practice. It could be that there are no practical attacks possible from a malicious dealer and that witness-indistinguishability is sufficient.

Zero-knowledge Proof for Arithmetic Circuits

Bulletproofs present an efficient zero-knowledge argument for arbitrary Arithmetic Circuitsdef with a proof size of $ 2 \cdot \lceil \log _2 (n) \rceil + 13 $ elements, with $ n $ denoting the multiplicative complexity (number of multiplication gates) of the circuit.

Bootle et al. [2] showed how an arbitrary arithmetic circuit with $ n $ multiplication gates can be converted into a relation containing a Hadamard Productdef relation with additional linear consistency constraints. The communication cost of the addition gates in the argument was removed by providing a technique that can directly handle a set of Hadamard products and linear relations together. For a two-input multiplication gate, let $ \mathbf a_L , \mathbf a_R $ be the left and right input vectors respectively, then $ \mathbf a_O = \mathbf a_L \circ \mathbf a_R $ is the vector of outputs. Let $ Q \leqslant 2 \cdot n $ be the number of linear consistency constraints, $ \mathbf W_{L,q} \mspace{3mu} , \mathbf W_{R,q} \mspace{3mu}, \mathbf W_{O,q} \in \mathbb Z_p^n $ be the gate weights and $ c_q \in \mathbb Z_p $ for all $ q \in [1,Q] $, then the linear consistency constraints have the form $$ \langle \mathbf W_{L,q}, \mathbf a_L \rangle + \langle \mathbf W_{R,q}, \mathbf a_R \rangle +\langle \mathbf W_{O,q}, \mathbf a_O \rangle = c_q $$
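A concrete toy instance of a multiplication gate with one linear consistency constraint (the field size, weights and constant are arbitrary illustrative choices):

```python
# A toy instance of the converted circuit relation: a_O = a_L o a_R for
# two multiplication gates, plus one linear consistency constraint with
# arbitrary illustrative weights W_L, W_R, W_O and constant c.
Q = 113

a_L = [3, 5]                                     # left inputs
a_R = [4, 6]                                     # right inputs
a_O = [l * r % Q for l, r in zip(a_L, a_R)]      # Hadamard product of gates

W_L, W_R, W_O = [1, 0], [0, 1], [2, 1]           # weights of one constraint

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % Q

# the constraint <W_L, a_L> + <W_R, a_R> + <W_O, a_O> = c is satisfied with:
c = (inner(W_L, a_L) + inner(W_R, a_R) + inner(W_O, a_O)) % Q
```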

The high-level idea of this protocol is to convert the Hadamard Product relation along with the linear consistency constraints into a single inner product relation. Pedersen Commitments $ V_j $ are also included as input wires to the arithmetic circuit, which is an important refinement, otherwise the arithmetic circuit would need to implement a commitment algorithm. The linear constraints also include openings $ v_j $ of $ V_j $.

Inner-product Proof for Arithmetic Circuits (Protocol 3)

Similar to Inner-product Range Proof, the prover $ \mathcal{P} $ produces a random linear combination of the Hadamard Product and linear constraints to form a single inner product constraint. If the combination is chosen randomly by the verifier $ \mathcal{V} $, then with overwhelming probability, the inner-product constraint implies the other constraints. A proof system must be presented for relation (10) below:

$$ \begin{aligned} \mspace{3mu} (g,h \in \mathbb{G} \mspace{3mu} ; \mspace{3mu} \mathbf g,\mathbf h \in \mathbb{G}^n \mspace{3mu} ; \mspace{3mu} \mathbf V \in \mathbb{G}^m \mspace{3mu} ; \mspace{3mu} \mathbf W_{L} , \mathbf W_{R} , \mathbf W_{O} \in \mathbb Z_p^{Q \times n} \mspace{3mu} ; \\ \mathbf W_{V} \in \mathbb Z_p^{Q \times m} \mspace{3mu} ; \mspace{3mu} \mathbf{c} \in \mathbb Z_p^{Q} \mspace{3mu} ; \mspace{3mu} \mathbf a_L , \mathbf a_R , \mathbf a_O \in \mathbb Z_p^{n} \mspace{3mu} ; \mspace{3mu} \mathbf v , \boldsymbol \gamma \in \mathbb Z_p^{m}) \mspace{3mu} : \mspace{15mu} \\ V_j =h^{\gamma_j} g^{v_j} \mspace{6mu} \forall \mspace{6mu} j \in [1,m] \mspace{6mu} \wedge \mspace{6mu} \mathbf a_L \circ \mathbf a_R = \mathbf a_O \mspace{6mu} \wedge \mspace{50mu} \\ \mathbf W_L \cdot \mathbf a_L + \mathbf W_R \cdot \mathbf a_R + \mathbf W_O \cdot \mathbf a_O = \mathbf W_V \cdot \mathbf v + \mathbf c \mspace{50mu} \end{aligned} \tag{10} $$

Let $ \mathbf W_V \in \mathbb Z_p^{Q \times m} $ be the weights for a commitment $ V_j $. Relation (10) is restricted to the case where $ \mathbf W_{V} $ has rank $ m $, i.e. where the columns of the matrix are all linearly independent.

Figure 8 presents Part 1 of the protocol, where the prover $ \mathcal{P} $ commits to $ l(X),r(X),t(X) $:

Figure 8: Bulletproofs' Protocol 3 (Part 1) [1]

Figure 9 presents Part 2 of the protocol, where the prover $ \mathcal{P} $ convinces the verifier $ \mathcal{V} $ that the polynomials are well formed and that $ \langle l(X),r(X) \rangle = t(X) $:

Figure 9: Bulletproofs' Protocol 3 (Part 2) [1]

The proof system presented here has the following Commitment Scheme properties:

  • Perfect completeness (hiding): Every true statement is provable. Also refer to Definition 9 in [1].
  • Perfect HVZK: Assuming the verifier $ \mathcal{V} $ behaves according to the protocol, the proof reveals nothing beyond the truth of the statement. Also refer to Definition 12 in [1].
  • Computational witness extended emulation (binding): A witness can be computed in time closely related to the time spent by the prover $ \mathcal{P} $. Also refer to Definition 10 in [1].

Logarithmic-sized Non-interactive Protocol for Arithmetic Circuits

Similar to Logarithmic Range Proof, the communication cost of Protocol 3 can be reduced by using the efficient inner product argument. Transmission of the vectors $ \mathbf {l} $ and $ \mathbf {r} $ to the verifier $ \mathcal{V} $ (step 82 in Figure 9) can be eliminated, and the transfer of information limited to the scalars $ ( \tau _x , \mu , \hat t ) $ alone. The prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ engage in an inner product argument on public input $ (\mathbf {g} , \mathbf {h} ' , P \cdot h^{-\mu}, \hat t) $ to check correctness of $ \mathbf {l} $ and $ \mathbf {r} $ (step 92 in Figure 9) and $ \hat {t} $ (step 88 in Figure 9); this is the same as verifying that the witness $ \mathbf {l} , \mathbf {r} $ satisfies the inner product relation. Communication is now reduced to $ 2 \cdot \lceil \log_2(n) \rceil + 8 $ group elements and $ 5 $ elements in $ \mathbb Z_p $ instead of $ 2 \cdot n $ elements, thereby achieving a proof size that is logarithmic in $ n $.

Similar to Non-interactive Proof through Fiat-Shamir-Heuristic, the protocol presented so far can be turned into an efficient, non-interactive proof that is secure and full zero-knowledge in the random oracle model (thus without a trusted setup) using the Fiat–Shamir Heuristic.

The proof system presented here has the following Commitment Scheme properties:

  • Perfect completeness (hiding): Every true statement is provable. Also refer to Definition 9 in [1].
  • Statistical zero-knowledge: Assuming the verifier $ \mathcal{V} $ behaves according to the protocol, $ \mathbf {l} , \mathbf {r} $ can be efficiently simulated.
  • Computational soundness (binding): If the generators $ \mathbf {g} , \mathbf {h} , g , h $ are independently generated, then finding a discrete logarithm relation between them is as hard as breaking the Discrete Log Problem.

Optimized Verifier using Multi-exponentiation and Batch Verification

In many of the Bulletproofs' Use Cases, the verifier's runtime is of particular interest. This protocol presents optimizations for a single range proof that is also extendable to aggregate range proofs and the arithmetic circuit protocol.


In Protocol 2, verification of the inner-product is reduced to a single multi-exponentiation. This can be extended to verify the whole range proof using a single multi-exponentiation of size $ 2n + \log_2(n) + 7 $. In Protocol 2, the Bulletproof verifier $ \mathcal{V} $ only performs two checks: step 68 in Figure 6 and step 16 in Figure 2.

In the protocol presented in Figure 10, which is processed by the verifier $ \mathcal{V} $, $ x_u $ is the challenge from Protocol 1, $ x_j $ the challenge from round $ j $ of Protocol 2, and $ L_j , R_j $ the $ L , R $ values from round $ j $ of Protocol 2.

Figure 10: Bulletproofs Optimized Verifier using Multi-exponentiation and Batch Verification [1]

A further idea is that the multi-exponentiations (steps 98 and 105 in Figure 10) be delayed until those checks are performed, and that they are combined into a single check using a random value $ c \xleftarrow{\$} \mathbb Z_p $. This follows from the fact that if $ A^cB = 1 $ for a random $ c $, then with high probability, $ A = 1 \mspace{3mu} \wedge \mspace{3mu} B = 1 $. Various algorithms are known to compute the multi-exponentiations and scalar quantities (steps 101 and 102 in Figure 10) efficiently (sub-linearly), thereby further improving the speed and efficiency of the protocol.

Batch Verification

A further important optimization concerns the verification of multiple proofs. The essence of the verification is to calculate a large multi-exponentiation. Batch verification is applied in order to reduce the number of expensive exponentiations. It is based on the observation that $ g^x = 1 \mspace{3mu} \wedge \mspace{3mu} g^y = 1 $ can be checked by drawing a random scalar $ \alpha $ from a large enough domain and checking that $ g^{\alpha x + y} = 1 $. With high probability, the latter equation implies the first two. When applied to multi-exponentiations, $ 2n $ exponentiations can be saved per additional proof. Verifying $ m $ distinct range proofs of size $ n $ only requires a single multi-exponentiation of size $ 2n+2+m \cdot (2 \cdot \log_2 (n) + 5 ) $ along with $ O ( m \cdot n ) $ scalar operations.
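The random-linear-combination idea can be sketched in a toy multiplicative group. The prime modulus and generator below are arbitrary choices for illustration (Bulletproofs batch multi-exponentiations in an elliptic-curve group, not $ \mathbb Z_p^* $):

```python
# Sketch of batch verification: checking g^x = 1 and g^y = 1 together as
# g^(alpha*x + y) = 1 for a random alpha (toy parameters, hypothetical).
import secrets

p = 2**127 - 1   # a Mersenne prime, chosen for the sketch
g = 3
q = p - 1        # exponents are reduced mod p - 1

def single_checks(x, y):
    return pow(g, x, p) == 1 and pow(g, y, p) == 1

def batched_check(x, y):
    alpha = secrets.randbelow(q - 1) + 1   # random non-zero scalar
    return pow(g, (alpha * x + y) % q, p) == 1

# Exponents that are multiples of the group order pass both forms.
assert single_checks(0, q) and batched_check(0, q)
# A bad pair fails the batched check with overwhelming probability.
assert not batched_check(1, 2)
```

One random exponentiation thus replaces two, and the same trick scales to combining many proofs into one multi-exponentiation.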

Evolving Bulletproof Protocols

Interstellar [24] recently introduced the Programmable Constraint Systems for Bulletproofs [23], an evolution of Zero-knowledge Proof for Arithmetic Circuits, extending it to support proving arbitrary statements in zero-knowledge using a constraint system, bypassing arithmetic circuits altogether. They provide an Application Programming Interface (API) for building a constraint system directly, without the need to construct arithmetic expressions and then transform them into constraints. The Bulletproof constraint system proofs are then used as building blocks for a confidential assets protocol called Cloak.

The constraint system has three kinds of variables:

  • High-level witness variables
    • known only to the prover $ \mathcal{P} $, as external inputs to the constraint system;
    • represented as individual Pedersen Commitments to the external variables in Bulletproofs.
  • Low-level witness variables
    • known only to the prover $ \mathcal{P} $, as internal to the constraint system;
    • representing the inputs and outputs of the multiplication gates.
  • Instance variables
    • known to both the prover $ \mathcal{P} $ and the verifier $ \mathcal{V} $, as public parameters;
    • represented as a family of constraint systems parameterized by public inputs (compatible with Bulletproofs);
    • folding all instance variables into a single constant parameter internally.

Instance variables can select the constraint system out of a family for each proof. The constraint system becomes a challenge from a verifier $ \mathcal{V} $ to a prover $ \mathcal{P} $, where some constraints are generated randomly in response to the commitments of the prover $ \mathcal{P} $. Using challenges to parameterize the constraint system makes the resulting proof smaller, requiring only $ O(n) $ multiplications instead of $ O(n^2) $ in the case of verifiable shuffles, when compared to a static constraint system.

Merlin transcripts [25] employing the Fiat–Shamir Heuristic are used to generate the challenges. The challenges are bound to the high-level witness variables (the external inputs to the constraint system), which are added to the transcript before any of the constraints are created. The prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ can then compute weights for some constraints with the use of the challenges.

Because the challenges are not bound to low-level witness variables, the resulting construction can be unsafe. Interstellar are working on an improvement to the protocol that would allow challenges to be bound to a subset of the low-level witness variables, and have a safer API using features of Rust’s type system.

The resulting API provides a single code path used by both the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ to allocate variables and define constraints. This is organized into a hierarchy of task-specific gadgets, which manage allocation, assignment and constraints on the variables, ensuring that all variables are constrained. Gadgets interact with mutable constraint system objects, which are specific to the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $. They also receive secret variables and public parameters and generate challenges.

The Bulletproofs library [22] does not provide any standard gadgets, but only an API for the constraint system. Each protocol built on top of the Bulletproofs library must create its own collection of gadgets to enable building a complete constraint system out of them. Figure 11 shows the Interstellar Bulletproof zero-knowledge proof protocol built with their programmable constraint system:

Figure 11: Interstellar Bulletproof Zero-knowledge Proof Protocol [23]

Conclusions, Observations and Recommendations

  • Bulletproofs have many potential use cases or applications, but are still under development. A new confidential blockchain protocol such as Tari should carefully consider expanded use of Bulletproofs to maximally leverage functionality of the code base.

  • Bulletproofs are not done yet, as illustrated in Evolving Bulletproof Protocols, and their further development and efficient implementation have a lot of traction in the community.

  • Bünz et al. [1] proposed that the switch commitment scheme defined by Ruffing et al. [10] can be used for Bulletproofs if doubts in the underlying cryptographic hardness (discrete log) assumption arise in the future. The switch commitment scheme allows for a blockchain with proofs that are currently only computationally binding to later switch to a proof system that is perfectly binding and secure against quantum adversaries; this will weaken the perfectly hiding property as a drawback and slow down all proof calculations. In the Bünz et al. [1] proposal, all Pedersen Commitments will be replaced with ElGamal Commitments to move from computationally binding to perfectly binding. They also gave further ideas about how the ElGamal commitments can possibly be enhanced to improve the hiding property to be statistical or perfect. (Refer to the Grin project's implementation here.)

  • It is important that developers understand more about the fundamental underlying mathematics when implementing something like Bulletproofs, even if they just reuse libraries developed by someone else.

  • With the standard MPC protocol implementation, there is no guarantee that the dealer behaves honestly according to the protocol and generates challenges honestly. The protocol can be extended to be more secure if all parties receive partial proof elements and independently compute aggregated challenges.


[1] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More", Blockchain Protocol Analysis and Security Engineering 2018 [online]. Available: Date accessed: 2018‑09‑18.

[2] J. Bootle, A. Cerulli, P. Chaidos, J. Groth and C. Petit, "Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting", Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 327‑357. Springer, 2016 [online]. Available: Date accessed: 2018‑09‑21.

[3] A. Poelstra, A. Back, M. Friedenbach, G. Maxwell and P. Wuille, "Confidential Assets", Blockstream [online]. Available: Date accessed: 2018‑09‑25.

[4] Wikipedia: "Zero-knowledge Proof" [online]. Available: Date accessed: 2018‑09‑18.

[5] Wikipedia: "Discrete Logarithm" [online]. Available: Date accessed: 2018‑09‑20.

[6] A. Fiat and A. Shamir, "How to Prove Yourself: Practical Solutions to Identification and Signature Problems", CRYPTO 1986: pp. 186‑194 [online]. Available: Date accessed: 2018‑09‑20.

[7] D. Bernhard, O. Pereira and B. Warinschi, "How not to Prove Yourself: Pitfalls of the Fiat-Shamir Heuristic and Applications to Helios" [online]. Available: Date accessed: 2018‑09‑20.

[8] "Pedersen-commitment: An Implementation of Pedersen Commitment Schemes" [online]. Available: Date accessed: 2018‑09‑25.

[9] "Zero Knowledge Proof Standardization - An Open Industry/Academic Initiative" [online]. Available: Date accessed: 2018‑09‑26.

[10] T. Ruffing and G. Malavolta, "Switch Commitments: A Safety Switch for Confidential Transactions" [online]. Available: Date accessed: 2018‑09‑26.

[11] GitHub: "adjoint-io/Bulletproofs, Bulletproofs are Short Non-interactive Zero-knowledge Proofs that Require no Trusted Setup" [online]. Available: Date accessed: 2018‑09‑10.

[12] Wikipedia: "Commitment Scheme" [online]. Available: Date accessed: 2018‑09‑26.

[13] Cryptography Wikia: "Commitment Scheme" [online]. Available: Date accessed: 2018‑09‑26.

[14] Adjoint Inc. Documentation: "Pedersen Commitment Scheme" [online]. Available: Date accessed: 2018‑09‑27.

[15] T. Pedersen. "Non-interactive and Information-theoretic Secure Verifiable Secret Sharing" [online]. Available: Date accessed: 2018‑09‑27.

[16] A. Sadeghi and M. Steiner, "Assumptions Related to Discrete Logarithms: Why Subtleties Make a Real Difference" [online]. Available: Date accessed: 2018‑09‑24.

[17] P. Sharma, A. Gupta and S. Sharma, "Intensified ElGamal Cryptosystem (IEC)", International Journal of Advances in Engineering & Technology, January 2012 [online]. Available: Date accessed: 2018‑10‑09.

[18] Y. Tsiounis and M. Yung, "On the Security of ElGamal Based Encryption" [online]. Available: Date accessed: 2018‑10‑09.

[19] Wikipedia: "Decisional Diffie–Hellman Assumption" [online]. Available: Date accessed: 2018‑10‑09.

[20] Wikipedia: "Arithmetic Circuit Complexity" [online]. Available: Date accessed: 2018‑11‑08.

[21] Wikipedia: "Hadamard Product (Matrices)" [online]. Available: Date accessed: 2018‑11‑12.

[22] Dalek Cryptography - Crate Bulletproofs [online]. Available: Date accessed: 2018‑11‑12.

[23] "Programmable Constraint Systems for Bulletproofs" [online]. Available: Date accessed: 2018‑11‑22.

[24] Inter/stellar website [online]. Available: Date accessed: 2018‑11‑22.

[25] "Dalek Cryptography - Crate Merlin" [online]. Available: Date accessed: 2018‑11‑22.

[26] B. Franca, "Homomorphic Mini-blockchain Scheme", April 2015 [online]. Available: Date accessed: 2018‑11‑22.

[27] C. Franck and J. Großschädl, "Efficient Implementation of Pedersen Commitments Using Twisted Edwards Curves", University of Luxembourg [online]. Available: Date accessed: 2018‑11‑22.

[28] A. Gibson, "An Investigation into Confidential Transactions", July 2018 [online]. Available: Date accessed: 2018‑11‑22.

[29] "Dalek Cryptography - Module bulletproofs::range_proof_mpc" [online]. Available: Date accessed: 2019‑07‑10.

[30] "What is the difference between honest verifier zero knowledge and zero knowledge?" [Online]. Available: Date accessed: 2019‑11‑12.

[31] "600.641 Special Topics in Theoretical Cryptography - Lecture 11: Honest Verifier ZK and Fiat-Shamir" [online]. Available: Date accessed: 2019‑11‑12.

[32] Wikipedia: "Witness-indistinguishable proof" [online]. Available: Date accessed: 2019‑11‑12.


Appendix A: Definition of Terms

Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.

  • Arithmetic Circuits: An arithmetic circuit $ C $ over a field $ F $ and variables $ (x_1, ..., x_n) $ is a directed acyclic graph whose vertices are called gates. Arithmetic circuits can alternatively be described as a list of addition and multiplication gates with a collection of linear consistency equations relating the inputs and outputs of the gates. The size of an arithmetic circuit is the number of gates in it, with the depth being the length of the longest directed path. Upper bounding the complexity of a polynomial $ f $ is to find any arithmetic circuit that can calculate $ f $, whereas lower bounding is to find the smallest arithmetic circuit that can calculate $ f $. An example of a simple arithmetic circuit with size six and depth two that calculates a polynomial is shown here ([11], [20]):

  • Discrete Logarithm/Discrete Logarithm Problem (DLP): In the mathematics of real numbers, the logarithm $ \log_b a $ is a number $ x $ such that $ b^x=a $, for given numbers $ a $ and $ b $. Analogously, in any group $ G $, powers $ b^k $ can be defined for all integers $ k $, and the discrete logarithm $ \log_b a $ is an integer $ k $ such that $ b^k=a $. Algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen cyclic finite groups and cyclic subgroups of elliptic curves over finite fields has no efficient solution ([5], [16]).
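At toy sizes the discrete logarithm can be found by exhaustive search, which is exactly what becomes infeasible at cryptographic sizes. A minimal sketch (the modulus and generator are illustrative choices):

```python
# Discrete logarithm in the small cyclic group Z_101*: find k such that
# b**k == a (mod p). Brute force is O(p) here, but hopeless at real sizes.
p = 101   # small prime, so Z_p* is cyclic of order 100
b = 2     # a primitive root mod 101

def discrete_log(a):
    for k in range(p - 1):          # exhaustive search over exponents
        if pow(b, k, p) == a:
            return k
    raise ValueError("no solution")

k = discrete_log(pow(b, 57, p))
assert k == 57
```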
  • ElGamal Commitment/Encryption: An ElGamal commitment is a Pedersen Commitment with an additional commitment $ g^r $ to the randomness used. The ElGamal encryption scheme is based on the Decisional Diffie–Hellman (DDH) assumption and the difficulty of the DLP for finite fields. The DDH assumption states that it is infeasible for a Probabilistic Polynomial-time (PPT) adversary to solve the DDH problem ([1], [17], [18], [19]). Note: The ElGamal encryption scheme should not be confused with the ElGamal signature scheme.
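The relationship between the two commitments can be sketched in a toy multiplicative group. The modulus and the generators $ g, h $ below are illustrative; in a real scheme they are group elements with no known discrete-log relation between them:

```python
# Sketch: a Pedersen commitment h^gamma * g^v, and the ElGamal variant
# that additionally publishes g^gamma (toy parameters, hypothetical).
import secrets

p = 2**127 - 1   # toy prime modulus
g, h = 3, 5      # assumed independent generators (illustrative only)

def pedersen_commit(v, gamma):
    return (pow(h, gamma, p) * pow(g, v, p)) % p

def elgamal_commit(v, gamma):
    # ElGamal commitment = Pedersen commitment plus g^gamma
    return pedersen_commit(v, gamma), pow(g, gamma, p)

gamma = secrets.randbelow(p - 1)
C, extra = elgamal_commit(42, gamma)
assert C == pedersen_commit(42, gamma)
assert extra == pow(g, gamma, p)
```

Publishing $ g^\gamma $ is what makes the commitment perfectly binding at the cost of weakening the hiding property, as discussed under switch commitments above.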
  • Fiat–Shamir Heuristic/Transformation: The Fiat–Shamir heuristic is a technique in cryptography to convert an interactive public-coin protocol (Sigma protocol) between a prover $ \mathcal{P} $ and a verifier $ \mathcal{V} $ into a one-message (non-interactive) protocol using a cryptographic hash function ([6], [7]).
    • The prover $ \mathcal{P} $ will use a Prove() algorithm to calculate a commitment $ A $ with a statement $ Y $ that is shared with the verifier $ \mathcal{V} $ and a secret witness value $ w $ as inputs. The commitment $ A $ is then hashed to obtain the challenge $ c $, which is further processed with the Prove() algorithm to calculate the response $ f $. The single message sent to the verifier $ \mathcal{V} $ then contains the challenge $ c $ and response $ f $.

    • The verifier $ \mathcal{V} $ is then able to compute the commitment $ A $ from the shared statement $ Y $, challenge $ c $ and response $ f $. The verifier $ \mathcal{V} $ will then use a Verify() algorithm to verify the combination of shared statement $ Y $, commitment $ A $, challenge $ c $ and response $ f $.

    • A weak Fiat–Shamir transformation can be turned into a strong Fiat–Shamir transformation if the hashing function is applied to the commitment $ A $ and shared statement $ Y $ to obtain the challenge $ c $ as opposed to only the commitment $ A $.
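These steps can be sketched with a Schnorr-style Sigma protocol for knowledge of $ w $ such that $ Y = g^w $, made non-interactive with the strong transformation (hashing both $ A $ and $ Y $). The group parameters and hash construction are toy choices for illustration:

```python
# Strong Fiat-Shamir applied to a Schnorr proof of knowledge of w with
# Y = g^w, in Z_p* (toy parameters, hypothetical).
import hashlib
import secrets

p = 2**127 - 1   # toy prime modulus
q = p - 1        # exponents are reduced mod p - 1
g = 3

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(w):
    Y = pow(g, w, p)
    r = secrets.randbelow(q)
    A = pow(g, r, p)        # commitment
    c = H(A, Y)             # strong variant: hash commitment AND statement
    f = (r + c * w) % q     # response
    return Y, c, f

def verify(Y, c, f):
    A = (pow(g, f, p) * pow(Y, (-c) % q, p)) % p   # recompute g^f * Y^(-c)
    return c == H(A, Y)

Y, c, f = prove(123456789)
assert verify(Y, c, f)
```

The verifier recomputes $ A = g^f Y^{-c} = g^{r + cw} g^{-cw} = g^r $ and checks that it hashes back to the received challenge.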

  • Hadamard Product: In mathematics, the Hadamard product is a binary operation that takes two matrices $ \mathbf {A} , \mathbf {B} $ of the same dimensions, and produces another matrix of the same dimensions where each element $ i,j $ is the product of elements $ i,j $ of the original two matrices. The Hadamard product $ \mathbf {A} \circ \mathbf {B} $ is different from normal matrix multiplication, most notably in that it is commutative $ [ \mathbf {A} \circ \mathbf {B} = \mathbf {B} \circ \mathbf {A} ] $ along with being associative $ [ \mathbf {A} \circ ( \mathbf {B} \circ \mathbf {C} ) = ( \mathbf {A} \circ \mathbf {B} ) \circ \mathbf {C} ] $ and distributive over addition $ [ \mathbf {A} \circ ( \mathbf {B} + \mathbf {C} ) = \mathbf {A} \circ \mathbf {B} + \mathbf {A} \circ \mathbf {C} ] $ ([21]).

$$ \mathbf {A} \circ \mathbf {B} = \mathbf {C} = (a_{11} \cdot b_{11} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{1m} \cdot b_{1m} \mspace{6mu} ; \mspace{6mu} . . . \mspace{6mu} ; \mspace{6mu} a_{n1} \cdot b_{n1} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{nm} \cdot b_{nm} ) $$
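The entrywise definition and the commutativity property can be checked directly; a minimal sketch with plain Python lists of lists:

```python
# Hadamard (entrywise) product of two 2x2 matrices, plus a check of the
# commutativity property noted above (plain lists; illustrative only).
def hadamard(A, B):
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

assert hadamard(A, B) == [[5, 12], [21, 32]]
assert hadamard(A, B) == hadamard(B, A)   # commutative, unlike matrix product
```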

  • Zero-knowledge Proof/Protocol: In cryptography, a zero-knowledge proof/protocol is a method by which one party (the prover $ \mathcal{P} $) can convince another party (the verifier $ \mathcal{V} $) that a statement $ Y $ is true, without conveying any information apart from the fact that the prover $ \mathcal{P} $ knows the value of $ Y $. The proof system must be complete, sound and zero-knowledge ([4], [9]).
    • Complete: If the statement is true, and both the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ follow the protocol, the verifier will accept.

    • Sound: If the statement is false, and the verifier $ \mathcal{V} $ follows the protocol, the verifier $ \mathcal{V} $ will not be convinced.

    • Zero-knowledge: If the statement is true, and the prover $ \mathcal{P} $ follows the protocol, the verifier $ \mathcal{V} $ will not learn any confidential information from the interaction with the prover $ \mathcal{P} $ apart from the fact that the statement is true.


Pure-Rust Elliptic Curve Cryptography

    Isis Agora Lovecruft


Fast, Safe, Pure-Rust Elliptic Curve Cryptography by Isis Lovecruft & Henry De Valence

This talk discusses the design and implementation of curve25519-dalek, a pure-Rust implementation of operations on the elliptic curve known as Curve25519.

They discuss the goals of the library and give a brief overview of the implementation strategy.

They also discuss features of the Rust language that allow them to achieve competitive performance without sacrificing safety or readability, and future features that could allow them to achieve more safety and more performance.

Finally, they will discuss how the dalek library makes it easy to implement complex cryptographic primitives, such as zero-knowledge proofs.

The content in this video is not complicated, but it is an advanced topic, since it dives deep into the internals of how cryptographic operations are implemented on computers.





Zero-knowledge proof protocols have gained much attention in the past decade, due to the popularity of cryptocurrencies. A zero-knowledge Succinct Non-interactive Argument of Knowledge (zk-SNARK), referred to here as an argument of knowledge, is a special kind of a zero-knowledge proof. The difference between a proof of knowledge and an argument of knowledge is rather technical for the intended audience of this report. The distinction lies in the difference between statistical soundness and computational soundness. The technical reader is referred to [1] or [2].

zk-SNARKs have found many applications in zero-knowledge proving systems, libraries of proving systems such as libsnark and bellman and general-purpose compilers from high-level languages such as Pinocchio. "DIZK: A Distributed Zero Knowledge Proof System", by Wu et al. [3], is one of the best papers on zk-SNARKs, at least from a cryptographer's point of view. Coins such as Zerocoin and Zcash use zk-SNARKs. The content reflected in this curated content report, although not all inclusive, covers all the necessary basics one needs to understand zk-SNARKs and their implementations.

What is ZKP? A Complete Guide to Zero-knowledge Proof

    Hasib Anwar
    Just a born geek


A zero-knowledge proof is a technique one uses to prove to a verifier that one has knowledge of some secret information, without disclosing the information. This is a powerful tool in the blockchain world, particularly in cryptocurrencies. The aim is to achieve a trustless network, i.e. anyone in the network should be able to verify information recorded in a block.


In this post, Anwar provides an excellent zero-knowledge infographic that summarizes what a zero-knowledge proof is, its main properties (completeness, soundness and zero-knowledge), as well as its use cases such as authentication, secure data sharing and file-system control. Find what Anwar calls a complete guide to zero-knowledge proofs, here.

Introduction to zk-SNARKs

    Dr Christian Reitwiessner
    Ethereum Foundation


A typical zero-knowledge proof protocol involves at least two participants: the Verifier and the Prover. The Verifier sends a challenge to the Prover in the form of a computational problem. The Prover has to solve the computational problem and, without revealing the solution, send proof of the correct solution to the Verifier.


zk-SNARKs are important in blockchains for at least two reasons:

  • Blockchains are by nature not scalable. They thus benefit in that zk-SNARKs allow a verifier to verify a given proof of a computation without having to actually carry out the computation.
  • Blockchains are public and need to be trustless, as explained earlier. The zero-knowledge property of zk‑SNARKs, as well as the possibility of putting in place a so-called trusted setup, makes this almost possible.

Reitwiessner uses a mini 4 x 4 Sudoku challenge as an example of an interactive zero-knowledge proof. He explains how it would fall short of the zero-knowledge property without the use of homomorphic encryption as well as putting in place a trusted setup. Reitwiessner proceeds to explain how computations involving polynomials are better suited as challenges posed by the Verifier to the Prover.


The slides from the talk can be found here.

Comparing General-purpose zk-SNARKs

    Ronald Mannak
    Open-source Blockchain Developer


Recently, zk-SNARK constructs such as Supersonic [4] and Halo [5] were created mainly for efficiency of proofs. The following article by Mannak gives a quick survey of the most recent developments, comparing general-purpose zk-SNARKs. The article is easy to read. It provides the technical reader with relevant references to scholarly research papers.


The main drawback of zk-SNARKs is their reliance on a common reference string that is created using a trusted setup. In this post, Mannak mentions three issues with reference strings or having a trusted setup:

  • A leaked reference string can be used to create undetectable fake proofs.
  • One setup is only applicable to one computation, thus making smart contracts impossible.
  • Reference strings are not upgradeable. This means that a whole new ceremony is required, even for minor bug fixes in crypto coins.

After classifying zk-SNARKs according to the type of trusted setup they use, Mannak compares their proof and verification sizes as well as performance.

Quadratic Arithmetic Programs - from Zero to Hero

    Vitalik Buterin
    Co-founder of Ethereum


The zk-SNARK end-to-end journey is to create a function or a protocol that takes the proof, given by the Prover, and checks its veracity. In a zk-SNARK proof, a computation is verified step by step. To do so, the computation is first turned into an arithmetic circuit. Each of its wires is then assigned a value that results from feeding specific inputs to the circuit. Next, each computing node of the arithmetic circuit (called a “gate”, an analogy to the nomenclature of electronic circuits) is transformed into a constraint that verifies the output wire has the value it should have for the values assigned to the input wires. This process involves transforming statements or computational problems into various formats on which a zk-SNARK proof can be performed. The following seven steps depict the process for achieving a zk-SNARK:

Computational Problem —> Arithmetic Circuit —> R1CS —> QAP —> Linear PCP —> Linear Interactive Proof —> zk-SNARK


In this post, Buterin explains how zk-SNARKs work, using an example that focuses on the first three steps of the process given above. Buterin explains how a computational problem can be written as an arithmetic circuit, converted into a rank-1 constraint system (R1CS), and the R1CS ultimately transformed into a quadratic arithmetic program.
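Buterin's post works through the statement $x^3 + x + 5 = 35$ with witness $x = 3$. As an illustrative sketch (the matrix layout below is one way of writing the flattening he describes, not a copy of his exact code), an R1CS checker verifies $\langle A_i, s\rangle \cdot \langle B_i, s\rangle = \langle C_i, s\rangle$ for every constraint:

```python
# R1CS satisfaction check for x**3 + x + 5 == 35 with witness x = 3.
# Flattening: sym_1 = x*x;  y = sym_1*x;  sym_2 = y + x;  out = sym_2 + 5.
def dot(row, s):
    return sum(r * v for r, v in zip(row, s))

def r1cs_satisfied(A, B, C, s):
    return all(dot(a, s) * dot(b, s) == dot(c, s)
               for a, b, c in zip(A, B, C))

# Solution vector s = [one, x, out, sym_1, y, sym_2]
s = [1, 3, 35, 9, 27, 30]
A = [[0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 1, 0], [5, 0, 0, 0, 0, 1]]
B = [[0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0]]
C = [[0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1], [0, 0, 1, 0, 0, 0]]

assert r1cs_satisfied(A, B, C, s)
assert not r1cs_satisfied(A, B, C, [1, 4, 35, 16, 64, 68])  # wrong witness
```

Each row is one multiplication gate; the wrong-witness vector ($x = 4$) satisfies the first three gates but fails the final output constraint.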

Explaining SNARKs Series: Part I to Part VII

    Ariel Gabizon
    Engineer at Zcash


The explanation of zk-SNARKs given by Buterin above, and similar explanations by Pinto ([6], [7]), although excellent in clarifying the R1CS and Quadratic Arithmetic Program (QAP) concepts, do not explain how zero-knowledge is achieved in zk-SNARKs. For a step-by-step mathematical explanation of how this is achieved, as used in Zcash, refer to the seven-part series listed here.

[Part I: Homomorphic Hidings]

This post explains how zk-SNARKs use homomorphic hiding or homomorphic encryption in order to achieve zero-knowledge proofs. Gabizon dives into the mathematics that underpins the cryptographic security of homomorphic encryption afforded by the difficulty of solving discrete log problems in a finite group of a large prime order.

[Part II: Blind Evaluation of Polynomials]

This post explains how the power of the homomorphic property of these types of hidings is seen in how it easily extends to linear combinations. Since any polynomial evaluated at a specific value $x = \bf{s} $ is a weighted linear combination of powers of $\bf{s}​$, this property allows sophisticated zero-knowledge proofs to be set up.

For example, two parties can set up a zero-knowledge proof where the Verifier can request the Prover to prove knowledge of the "right" polynomial $P(x)$, without revealing $P(x)$ to the Verifier. All that the Verifier requests is for the Prover to evaluate $P(x)$ at a secret point $\bf{s}$, without learning what $\bf{s}$ is. Thus, instead of sending $\bf{s}$ in the open, the Verifier sends homomorphic hidings of the necessary powers of $\bf{s}$. The Prover therefore simply evaluates the right linear combination of the hidings, as dictated by the polynomial $P(x)$. This is how the Prover performs what is called a blind evaluation of the polynomial $P(x)$ at a secret point $\bf{s}$ known only by the Verifier.
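This can be sketched with the simple homomorphic hiding $E(x) = g^x \bmod p$. The parameters and the polynomial below are toy choices (real systems such as Zcash use elliptic-curve pairings, not $ \mathbb Z_p^* $):

```python
# Blind evaluation sketch: the Verifier sends hidings E(1), E(s), E(s^2)
# of powers of a secret point s; the Prover evaluates P at s "in the
# exponent" without ever learning s (toy parameters, hypothetical).
p = 2**127 - 1   # toy prime modulus
q = p - 1        # exponents are reduced mod p - 1
g = 3

def E(x):
    return pow(g, x % q, p)

s = 987654321          # Verifier's secret point
coeffs = [5, 0, 1]     # Prover's polynomial P(x) = x**2 + 5 (low order first)

# Verifier's message: hidings of the powers of s.
hidings = [E(pow(s, i, q)) for i in range(len(coeffs))]

# Prover: combine the hidings coefficient-wise to get E(P(s)).
blind_eval = 1
for coeff, hid in zip(coeffs, hidings):
    blind_eval = (blind_eval * pow(hid, coeff, p)) % p

P_at_s = sum(c * pow(s, i, q) for i, c in enumerate(coeffs)) % q
assert blind_eval == E(P_at_s)
```

The Prover's product of hidings equals the hiding of $P(\bf{s})$ precisely because $E$ maps addition of exponents to multiplication of group elements.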

[Part III: The Knowledge of Coefficient Test and Assumption]

In this post, Gabizon notes that it is necessary to force the Prover to comply with the rules of the protocol. Although this is covered in the next part of the series, in this post he considers the Knowledge of Coefficient (KC) Test, as well as its KC assumption.

The KC Test is in fact a request for a proof in the form of a challenge that a Verifier poses to a Prover. The Verifier sends a pair $( a, b )$ of elements of a prime field, where $a$ is such that $b = \alpha \cdot a$, to the Prover. The Verifier challenges the Prover to produce a similar pair $( a', b' )$, where $b' = \alpha \cdot a' $ for the same scalar $\alpha$. The KC assumption is that if the Prover succeeds with a non-negligible probability, then the Prover knows the ratio between $a$ and $a'$. Gabizon explains how these two concepts can be formalized using an extractor of the Prover.
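In the additive notation above the pair is $(a, \alpha \cdot a)$; the sketch below translates this to the multiplicative group $ \mathbb Z_p^* $, where the challenge pair is $(a, a^\alpha)$ and an honest Prover answers by raising both elements to a known $\gamma$ (toy parameters, hypothetical):

```python
# Knowledge-of-Coefficient test sketch in Z_p*: the Verifier sends an
# alpha-pair (a, b = a^alpha); a Prover who knows gamma responds with
# (a^gamma, b^gamma), which is again an alpha-pair for the same alpha.
import secrets

p = 2**127 - 1   # toy prime modulus
q = p - 1

# Verifier's challenge pair, for a secret random alpha.
alpha = secrets.randbelow(q - 1) + 1
a = pow(3, secrets.randbelow(q - 1) + 1, p)
b = pow(a, alpha, p)

# Prover's response: apply the same known gamma to both elements.
gamma = secrets.randbelow(q - 1) + 1
a_prime = pow(a, gamma, p)
b_prime = pow(b, gamma, p)

# The response is a valid alpha-pair: b' = (a')^alpha.
assert b_prime == pow(a_prime, alpha, p)
```

The KC assumption says that any Prover who reliably produces such a pair must "know" the exponent $\gamma$ relating $a'$ to $a$, which the extractor formalizes.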

[Part IV: How to make Blind Evaluation of Polynomials Verifiable]

In this part of the series, Gabizon explains how to make the blind evaluation of polynomials of Part II above, verifiable. This requires an extension of the Knowledge of Coefficient Assumption considered in Part III. Due to the homomorphic property of the used homomorphic hiding function, the Prover is able to receive several hidings of $\alpha​$-pairs from the Verifier, evaluate the polynomial $P(x)​$ on a particular linear combination of these hidings of $\alpha​$-pairs and send the resulting pair to the Verifier. Now, according to the extended Knowledge of Coefficient Assumption of degree $d​$, the Verifier can know, with a high probability, that the Prover knows the "right" polynomial, $P(x)​$, without disclosing it.

[Part V: From Computations to Polynomials]

This post aims to translate statements that are to be proved and verified into the language of polynomials. Gabizon explains the same process discussed above by Buterin, of transforming a computational problem into an arithmetic circuit and ultimately into a QAP. However, unlike Buterin, Gabizon does not mention constraint systems.

[Part VI: The Pinocchio Protocol]

The Pinocchio protocol is used as an example of how the QAP computed in the previous parts of this series can be used between both the Prover and the Verifier to achieve a zero-knowledge proof with negligible probability that the Verifier would accept a wrong polynomial as correct. The low probability is guaranteed by a well-known theorem that "two different polynomials of degree at most $2d$ can agree on at most $2d$ points in the given prime field". Gabizon further discusses how to restrict the Prover to choose their polynomials according to the assignment $\bf{s}$ given by the Verifier, and how the Prover can use randomly chosen field elements to blind all the information they send to the Verifier.

[Part VII: Pairings of Elliptic Curves]

This part of the series aims to set up a Common Reference String (CRS) model that can be used to convert the verifiable blind evaluation of the polynomial of Part IV into a non-interactive proof system. A homomorphic hiding that supports both addition and multiplication is needed for this purpose. Such a homomorphic hiding is created from what is known as Tate pairings. Since Tate pairings emanate from Elliptic Curve Groups, Gabizon starts by defining these groups.

The Pinocchio SNARK, however, uses a relaxed notion of a non-interactive proof, in which the use of a CRS is allowed. The CRS is created before any proofs are constructed, according to a certain randomized process, and is broadcast to all parties. The assumption here is that the randomness used in creating the CRS is not known to any of the parties.

The intended non-interactive evaluation protocol has three parts: Setup, Proof and Verification. In the Setup, the CRS and a random field element $\bf{s}$ are used to calculate the Verifier's challenge (i.e. the set of $\alpha$-pairs, as in Part IV).

zk-SHARKs: Combining Succinct Verification and Public Coin Setup

    Madars Virza
    Scientist, MIT


Most of the research done on zero-knowledge proofs has been about the efficiency of these types of proofs and making them more practical, especially in cryptocurrencies. One of the most recent innovations is that of the so-called zk-SHARKs (short for zero-knowledge Succinct Hybrid ARguments of Knowledge) designed by Mariana Raykova, Eran Tromer and Madars Virza.


Virza starts with a concise survey of the best zk-SNARK protocols and their applications, while giving an assessment of the efficiency of zero-knowledge proof implementations in the context of blockchains. He mentions that although zero-knowledge proofs have found practical applications in privacy-preserving cryptocurrencies, privacy-preserving smart contracts, proof of regulatory compliance and blockchain-based sovereign identity, they still have a few shortcomings. While QAP-based zero-knowledge proofs can execute fast verifications, they still require a trusted setup. Also, in Probabilistically Checkable Proof (PCP)-based zk-SNARKs, the speed of verification decays with increasing statement size.

Virza mentions that the danger of slow verification can tempt miners to skip validation of transactions. This can cause forks such as the July 2015 Bitcoin fork. He uses the Bitcoin fork example and slow verification as motivation for a zero-knowledge protocol that allows multiple verification modes. This will allow miners to carry out an optimistic verification without losing much time and later check the validity of transactions by using prudent verification. The zk-SHARK is introduced as one such zero-knowledge protocol that implements these two types of verification modes. It is a hybrid, as it incorporates a Non-interactive Zero-knowledge (NIZK) proof inside a SNARK. The internal design of the NIZK verifier is algebraic in nature, using a new compilation technique for linear PCPs. The special-purpose SNARK, which constitutes the main part of the zk-SHARK, is dedicated to verifying an encoded polynomial delegation problem.

The design of zk-SHARKs is ingenious, and aims to avoid unnecessary blockchain forks.


The slides from the talk can be found here.


[1] G. Brassard, D. Chaum and C. Crepeau, "Minimum Disclosure Proofs of Knowledge" [online]. Available: Date accessed: 2019‑12‑07.

[2] M. Nguyen, S. J. Ong and S. Vadhan, "Statistical Zero-knowledge Arguments for NP from any One-way Function (Extended Abstract)" [online]. Available: Date accessed: 2019‑12‑07.

[3] H. Wu, W. Zheng, A. Chiesa, R. A. Popa and I. Stoica, "DIZK: A Distributed Zero Knowledge Proof System", UC Berkeley [online]. Available: Date accessed: 2019‑12‑07.

[4] B. Bünz, B. Fisch and A. Szepieniec, "Transparent SNARKs from DARK Compilers" [online]. Available: Date accessed: 2019‑12‑07.

[5] S. Bowe, J. Grigg and D. Hopwood, "Halo: Recursive Proof Composition without a Trusted Setup" [online]. Available: Date accessed: 2019‑12‑07.

[6] A. Pinto, "Constraint Systems for ZK SNARKs" [online]. Available: Date accessed: 2019‑03‑06.

[7] A. Pinto, "The Vanishing Polynomial for QAPs", 23 March 2019 [online]. Available: Date accessed: 2019‑10‑10.

Rank-1 Constraint System with Application to Bulletproofs


This report explains the technical underpinnings of Rank-1 Constraint Systems (R1CSs) as applied to Bulletproofs.

The literature on the use of R1CSs in zero-knowledge (ZK) proofs, for example in zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk‑SNARKs), shows that this mathematical tool is used simply as one part of many in a complex process towards achieving the proof [1]. Not much attention is given to it, not even in explaining what "rank-1" actually means. Although the terminology is similar to the traditional rank of a matrix in linear algebra, examples on the Internet do not yield a reduced matrix with only one non-zero row or column.

R1CSs became more prominent for Bulletproofs due to research work done by Cathie Yun and her colleagues at Interstellar, where the constraint system, in particular R1CS, is used as an add-on to Bulletproof protocols. The title of Yun's article, "Building on Bulletproofs" [2], suggests as much. One of the Interstellar team's goals is to use the constraint system in their Confidential Asset Protocol called the Cloak, and in their envisaged Spacesuit. Although their work on R1CS is still research in progress, their detailed notes on constraint systems and their implementation in Rust are available in [3].

The aim of this report is to:

  • highlight the connection between arithmetic circuits and R1CSs;
  • clarify the difference R1CSs make in Bulletproofs and in range proofs;
  • compare ZK proofs for arithmetic circuits and programmable constraint systems.

Arithmetic Circuits


Many problems in Symbolic Computation and cryptography can be expressed as the task of computing some polynomials. Arithmetic circuits are the most standard model for studying the complexity of such computations. ZK proofs form a core building block of many cryptographic protocols. Of special interest are ZK proof systems capable of proving the correct computation of arbitrary arithmetic circuits ([6], [7]).

Definition of Arithmetic Circuit

An arithmetic circuit $\mathcal{A}$ over the field $\mathcal{F}$ and the set of variables $X = \lbrace {x_1,\dots,x_n} \rbrace$ is a directed acyclic graph in which the vertices of $\mathcal{A}$ are called gates, while the edges are called wires [7]:

  • A gate is of in-degree $l$ if it has $l$ incoming wires, and similarly, of out-degree $k$ if it has $k$ outgoing wires.
  • Every gate in $\mathcal{A}$ of in-degree 0 is labeled with either a variable ${x_i}$ or some field element from $\mathcal{F}$. Such a gate is called an input gate. Every gate of out-degree 0 is called an output gate or the root.
  • Every other gate in $\mathcal{A}$ is labeled with either $\otimes$ or $\oplus$, and called a multiplication gate or addition gate, respectively.
  • An arithmetic circuit is called a formula if it is a directed tree whose edges are directed from the leaves to the root.
  • The depth of a node is the number of edges in the longest directed path between the node and the root.
  • The size of $\mathcal{A}$, denoted $|\mathcal{A}|$, is the number of wires, i.e. edges, in $\mathcal{A}$.

Arithmetic circuits of interest and most applicable to this report are those with gates of in-degree 2 and out-degree 1. A typical multiplication gate has a left input $a_L$, a right input $a_R$ and an output $a_O$ (shown in Figure 1). Note that $ a_L \cdot a_R - a_O = 0 $.

Figure 1: Typical Multiplication Gate

In cases where the inputs and outputs are all vectors of size $n$, i.e. $\mathbf{a_L} = ( a_{L, 1}, a_{L, 2}, \dots, a_{L, n})$, $\mathbf{a_R} = ( a_{R, 1}, a_{R, 2}, \dots, a_{R, n})$ and $\mathbf{a_O} = ( a_{O, 1}, a_{O, 2}, \dots, a_{O, n})$, then multiplication of $ \mathbf{a_L} $ and $ \mathbf{a_R} $ is defined as an entry-wise product called the Hadamard product:

$$ \mathbf{a_L}\circ \mathbf{a_R} = \big(( a_{L, 1} \cdot a_{R, 1} ), ( a_{L, 2} \cdot a_{R, 2} ), \dots, ( a_{L, n} \cdot a_{R, n} ) \big) = \mathbf{a_O} $$

The equation ${ \mathbf{a_L}\circ \mathbf{a_R} = \mathbf{a_O} }$ is referred to as a multiplicative constraint, but is also known as the Hadamard relation [4].
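The Hadamard relation is simple to express in code; the sketch below checks it entry-wise for arbitrary example vectors:

```rust
// Entry-wise (Hadamard) product of two input vectors. The multiplicative
// constraint a_L ∘ a_R = a_O holds when this equals the output vector.
fn hadamard(a_l: &[i64], a_r: &[i64]) -> Vec<i64> {
    assert_eq!(a_l.len(), a_r.len()); // defined only for equal-length vectors
    a_l.iter().zip(a_r).map(|(l, r)| l * r).collect()
}

fn main() {
    let a_l = [3, 1, 2];
    let a_r = [4, 5, 6];
    let a_o = hadamard(&a_l, &a_r);
    // a_L ∘ a_R = a_O is satisfied for this witness.
    assert_eq!(a_o, vec![12, 5, 12]);
}
```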

Example of Arithmetic Circuit

An arithmetic circuit computes a polynomial in a natural way, as shown in this example. Consider the following arithmetic circuit $\mathcal{A}$ with inputs $\lbrace x_1, x_2, 1 \rbrace$ over some field $\mathcal{F}​$:

Figure 2: Arithmetic Circuit

The output of $\mathcal{A}​$ above is the polynomial $x^2_1 \cdot x_2 + x_1 + 1 ​$ of total degree three. A typical computational problem would involve finding the solution to, let's say, $x^2_1 \cdot x_2 + x_1 + 1 = 22​$. Or, in a proof of knowledge scenario, the prover has to prove to the verifier that they have the correct solution to such an equation. Following the wires in the example shows that an arithmetic circuit actually breaks down the given computation into smaller equations corresponding to each gate:

$$ u = x_1 \cdot x_1 \text{,} \ \ v = u \cdot x_2 \text{,} \ \ y = x_1 + 1 \ \text {and} \ z = v + y $$

The variables $u, v​$ and $ y ​$ are called auxiliary variables or low-level variables, while ${ z }​$ is the output of $ \mathcal{A} ​$. Thus, in addition to computing polynomials naturally, an arithmetic circuit helps in reducing a computation to a low-level language involving only two variables, one operation and an output.
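The gate equations above can be checked directly in code; a sketch using, for example, the satisfying assignment $x_1 = 3, x_2 = 2$:

```rust
// Gate-by-gate evaluation of the example circuit for z = x1^2 * x2 + x1 + 1.
// Each auxiliary variable corresponds to one gate of the circuit.
fn evaluate_circuit(x1: i64, x2: i64) -> (i64, i64, i64, i64) {
    let u = x1 * x1; // multiplication gate
    let v = u * x2;  // multiplication gate
    let y = x1 + 1;  // addition gate
    let z = v + y;   // addition gate: the circuit output
    (u, v, y, z)
}

fn main() {
    // The assignment x1 = 3, x2 = 2 satisfies x1^2 * x2 + x1 + 1 = 22.
    assert_eq!(evaluate_circuit(3, 2), (9, 18, 4, 22));
}
```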

ZK proofs in general require that statements to be proved are expressed in their simplest terms for efficiency. A ZK proof's end-to-end journey is to create a function to write proofs about, yet such a function needs to work with specific constructs. In making ZK proofs more efficient: "these functions have to be specified as sequences of very simple terms, namely, additions and multiplications of only two terms in a particular field" [8]. This is where arithmetic circuits come in.

In verifying a ZK proof, the verifier needs to carry out a step-by-step check of the computations. When these computations are expressed in terms of arithmetic circuits, the process translates to checking whether the output $ a_O $ of each gate is correct with respect to the given inputs $ a_L $ and $ a_R $. That is, testing if $ a_L \cdot a_R - a_O = 0 $ for each multiplication gate.

Rank-1 Constraint Systems


Although a computational problem is typically expressed in terms of a high-level programming language, a ZK proof requires it to be expressed in terms of a set of quadratic constraints, which are closely related to circuits of logical gates.

Other than in Bulletproof constructions, R1CS-type constraint systems have featured in several constructions of zk‑SNARKs, where at times they were simply referred to as quadratic constraints or quadratic equations; refer to [6], [9] and [10].

Definition of Constraint System

A constraint system was originally defined by Bootle et al. in [5], who first expressed arithmetic circuit satisfiability in terms of the Hadamard relation and linear constraints:

$$ \mathbf{W_L\cdot { a_L} + W_R\cdot { a_R} + W_O\cdot { a_O } = c } $$

where $\mathbf{c}$ is a vector of constant terms used in the linear constraints, and $\mathbf{W_L, W_R}$ and $\mathbf{W_O}$ are weights applied to the respective input vectors and output vectors. Bünz et al. [4] incorporated a vector $\mathbf{v}$ and a vector of weights $\mathbf{W_V}$ into the Bootle et al. definition: $$ \mathbf{W_L\cdot { a_L} + W_R\cdot { a_R} + W_O\cdot { a_O } = W_V\cdot { v + c} } $$

where $\mathbf{v} = {(v_1, v_2, \dots, v_m )}$ is a secret vector of openings ${v_i}$ of the Pedersen Commitments $V_i$, $ i \in \lbrace 1, 2, \dots, m \rbrace $, and $\mathbf{W_V}$ is a vector of weights for all commitments $V_i$. This incorporation of the secret vector $\mathbf{v}$ and the corresponding vector of weights $\mathbf{W_V}$ into the linear consistency constraints enabled them to provide a protocol for a more general setting [4, page 24].

The Dalek team give a more general definition of a constraint system [3]. A constraint system is a collection of arithmetic constraints over a set of variables. There are two kinds of variables in the constraint system:

  • ${m}$ high-level variables, the input secrets ${ \mathbf{v}}$;
  • $ n$ low-level variables, the internal input vectors ${ \mathbf{a}_L}$ and ${ \mathbf{a}_R},$ and output vectors ${ \mathbf{a}_O } $ of the multiplication gates.

Specifically, an R1CS is a system that consists of two sets of constraints [3]:

  • ${ n}$ multiplicative constraints, $ \mathbf{ a_L \circ a_R = a_O } $; and
  • ${ q}$ linear constraints, $\mathbf{W_L\cdot { a_L} + W_R\cdot { a_R} + W_O\cdot { a_O } = W_V\cdot { v + c} } $.
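A minimal sketch of these two constraint sets, for a single multiplication gate and one linear constraint (the witness, weights and committed value below are illustrative, and the arithmetic is over plain integers rather than a prime field):

```rust
// Minimal check of the two R1CS constraint sets: a multiplicative constraint
// a_L ∘ a_R = a_O, and one linear constraint
// W_L·a_L + W_R·a_R + W_O·a_O = W_V·v + c (all values illustrative).
fn dot(w: &[i64], x: &[i64]) -> i64 {
    w.iter().zip(x).map(|(a, b)| a * b).sum()
}

fn main() {
    // Witness: one gate computing 3 * 4 = 12, and one committed value v = 7.
    let (a_l, a_r, a_o) = (vec![3i64], vec![4i64], vec![12i64]);
    let v = vec![7i64];

    // Multiplicative constraint: a_L ∘ a_R = a_O, checked entry-wise.
    assert!(a_l.iter().zip(&a_r).zip(&a_o).all(|((l, r), o)| l * r == *o));

    // Linear constraint encoding a_L + a_R = v (i.e. 3 + 4 = 7):
    let (w_l, w_r, w_o, w_v, c) = (vec![1i64], vec![1i64], vec![0i64], vec![1i64], 0i64);
    assert_eq!(dot(&w_l, &a_l) + dot(&w_r, &a_r) + dot(&w_o, &a_o),
               dot(&w_v, &v) + c);
}
```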

R1CS Definition of zk‑SNARKs

Arithmetic circuits and R1CS are more naturally applied to zk‑SNARKs, and these are implemented in cryptocurrencies such as Zerocoin and Zcash; refer to [1]. To illustrate the simplicity of this concept, a definition (from [11]) of an R1CS as it applies to zk‑SNARKs is provided in this paragraph.

An R1CS is a sequence of groups of three vectors ${ \bf{a_L}}, { \bf{a_R}}, { \bf{a_O}},$ and the solution to an R1CS is a vector ${ \bf{s}}$ that satisfies the equation:

$$ { \langle {\mathbf{a_L}, \mathbf{s}} \rangle \cdot \langle {\mathbf{a_R}, \mathbf{s}} \rangle - \langle {\mathbf{a_O}, \mathbf{s}} \rangle = 0 } $$


where

$$ \langle {\mathbf{a_L}, \mathbf{s}} \rangle = a_{L, 1} \cdot s_1 + a_{L, 2} \cdot s_2 + \cdots + a_{L, n} \cdot s_n $$

is the inner product of the vectors $ \mathbf{a_{L}} = (a_{L,1}, a_{L,2}, \dots, a_{L,n} )$ and $ {\mathbf{s}} = (s_1, s_2, \dots, s_n) $.

Example of Rank-1 Constraint System

One solution to the equation ${x^2_1 x_2 + x_1 + 1 = 22}$, from the preceding example of an arithmetic circuit, is ${ x_1 = 3}$ and ${ { x_2 = 2 }}$ belonging to the appropriate field ${ \mathcal{F}}$. Thus the solution vector ${ { s = ( const, x_1, x_2, z, u, v, y )}}$ becomes ${ { s = ( 1, 3, 2, 22, 9, 18, 4 )}}$.

It is easy to check that the R1CS for the computation problem in the preceding example is as follows (one need only test if ${ \langle {\mathbf{a_L}, \mathbf{s}} \rangle \cdot \langle {\mathbf{a_R}, \mathbf{s}} \rangle - \langle {\mathbf{a_O}, \mathbf{s}} \rangle = 0}​$ for each equation).

Table 1: Equations and Rank-1 Constraint System Vectors
| Equation | Rank-1 Constraint System Vectors |
|----------|----------------------------------|
| ${ u = x_1\cdot x_1}$ | ${\bf{a_L}} = ( 0, 1, 0, 0, 0, 0, 0 ), \ \ {\bf{a_R}} = ( 0, 1, 0, 0, 0, 0, 0 ),\ \ {\bf{a_O}} = ( 0, 0, 0, 0, 1, 0, 0 )$ |
| ${ v = u\cdot x_2 }$ | ${\bf{a_L}} = ( 0, 0, 0, 0, 1, 0, 0 ),\ \ {\bf{a_R}} = ( 0, 0, 1, 0, 0, 0, 0 ),\ \ {\bf{a_O}} = ( 0, 0, 0, 0, 0, 1, 0 )$ |
| ${ y = 1\cdot( x_1 + 1 ) }$ | ${\bf{a_L}} = ( 1, 1, 0, 0, 0, 0, 0 ),\ \ {\bf{a_R}} = ( 1, 0, 0, 0, 0, 0, 0 ),\ \ {\bf{a_O}} = ( 0, 0, 0, 0, 0, 0, 1 )$ |
| ${ z = 1\cdot( v + y )}$ | ${\bf{a_L}} = ( 0, 0, 0, 0, 0, 1, 1 ),\ \ {\bf{a_R}} = ( 1, 0, 0, 0, 0, 0, 0 ),\ \ {\bf{a_O}} = ( 0, 0, 0, 1, 0, 0, 0 )$ |

In a more formal definition, an R1CS is a set of three matrices ${\bf{ A_L, A_R }}$ and ${\bf A_O}$, where the rows of each matrix are formed by the corresponding vectors $ {\bf{a_L }}$, ${ \bf{a_R }}$ and ${ \bf{a_O}} $, respectively, as shown in Table 1:

$$ \bf{A_L} = \bf{\begin{bmatrix} 0&1&0&0&0&0&0 \\ 0&0&0&0&1&0&0 \\ 1&1&0&0&0&0&0 \\ 0&0&0&0&0&1&1 \end{bmatrix}} \text{,} \quad \bf{A_R} = \bf{\begin{bmatrix} 0&1&0&0&0&0&0 \\ 0&0&1&0&0&0&0 \\ 1&0&0&0&0&0&0 \\ 1&0&0&0&0&0&0 \end{bmatrix}} \text{,} \quad \bf{A_O} = \bf{\begin{bmatrix} 0&0&0&0&1&0&0 \\ 0&0&0&0&0&1&0 \\ 0&0&0&0&0&0&1 \\ 0&0&0&1&0&0&0 \end{bmatrix}} $$

Observe that ${ \bf{ (A_L * s^T) \circ (A_R * s^T ) - (A_O * s^T)} = \mathbf{0} }$, where "$ * $" is matrix multiplication, "$\circ$" is the entry-wise (Hadamard) product and ${ \bf s^T}$ is the transpose of the solution vector ${ \bf{s}}$ [8].
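This matrix identity can be checked mechanically. The sketch below multiplies each matrix by the solution vector and tests the Hadamard relation row by row:

```rust
// Row-by-row check of (A_L s) ∘ (A_R s) - (A_O s) = 0 for the example R1CS,
// with the solution vector s = (1, 3, 2, 22, 9, 18, 4).
fn mat_vec(m: &[[i64; 7]], s: &[i64; 7]) -> Vec<i64> {
    m.iter().map(|row| row.iter().zip(s).map(|(a, b)| a * b).sum()).collect()
}

fn main() {
    let a_l = [
        [0, 1, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 0, 0],
        [1, 1, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 1, 1],
    ];
    let a_r = [
        [0, 1, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 0, 0],
    ];
    let a_o = [
        [0, 0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 0, 1],
        [0, 0, 0, 1, 0, 0, 0],
    ];
    let s = [1i64, 3, 2, 22, 9, 18, 4]; // (const, x1, x2, z, u, v, y)

    let (l, r, o) = (mat_vec(&a_l, &s), mat_vec(&a_r, &s), mat_vec(&a_o, &s));
    for i in 0..4 {
        // Every multiplication gate is satisfied by the witness s.
        assert_eq!(l[i] * r[i] - o[i], 0);
    }
}
```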

From Arithmetic Circuits to Programmable Constraint Systems for Bulletproofs

Interstellar's "Programmable Constraint Systems for Bulletproofs" [12] is an extension of "Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting" by Bootle et al. [5], enabling protocols that support proving of arbitrary statements in ZK using constraint systems. Although the focus here is on the two research works [5] and [12], the Bulletproofs paper by Bünz et al. [4] is recognized as a bridge between the two. Table 2 compares these three research documents.

All these ZK proofs are based on the difficulty of the discrete logarithm problem.

Table 2: Comparison of Three Research Documents on ZK Proofs
| No. | Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting [5] (2016) | Bulletproofs: Short Proofs for Confidential Transactions and More [4] (2017) | Programmable Constraint Systems [12] (2018) |
|---|---|---|---|
| 1. | Introduces the Hadamard relation and linear constraints. | Turns the Hadamard relation and linear constraints into a single linear constraint; these are in fact the R1CS. | Generalizes constraint systems and uses so-called gadgets as building blocks for constraint systems. |
| 2. | Improves on Groth's work [13] on ZK proofs, reducing a $\sqrt{N}$ complexity to $6\log_2(N) + 13$, where $N$ is the circuit size. | Improves on Bootle et al.'s work [5], reducing a $6\log_2(N) + 13$ complexity to $2\log_2(N) + 13$, where $N$ is the circuit size. | Adds constraint systems to Bünz et al.'s work on Bulletproofs, which are short proofs. The complexity advantage is seen in proving several statements at once. |
| 3. | Introduces logarithm-sized inner-product ZK proofs. | Introduces Bulletproofs, extending proofs to proofs of arbitrary statements. The halving method is used on the inner products, resulting in the above reduction in complexity. | Introduces gadgets, which are in effect add-ons to an ordinary ZK proof. A range proof is an example of a gadget. |
| 4. | Uses the Fiat-Shamir heuristic in order to achieve non-interactive ZK proofs. | Bulletproofs also use the Fiat-Shamir heuristic to achieve non-interaction. | Merlin transcripts are specifically used for a Fiat-Shamir transformation to achieve non-interaction. |
| 5. | Pedersen commitments are used in order to achieve the ZK property. | Eliminates the need for a commitment algorithm by including Pedersen commitments among the inputs to the verification proof. | Low-level variables, representing inputs and outputs to multiplication gates, are computed per proof and committed using a single vector Pedersen commitment. |
| 6. | The ZK proof involves conversion of the arithmetic circuit into an R1CS. | The mathematical expression of a Hadamard relation is closely related to an arithmetic circuit. The use of this relation plus linear constraints as a single constraint amounts to using a constraint system. | Although arithmetic circuits are not explicitly used here, the Hadamard relation remains the same as first seen in Bulletproofs, more so in the inner-product proof. |

Interstellar is building an Application Programming Interface (API) that allows developers to choose their own collection of gadgets suitable for the protocol they wish to develop, as discussed in the next section.

Interstellar's Bulletproof Constraint System


The Interstellar team paved the way for the implementation of several cryptographic primitives in the Rust language, including Ristretto [14], a construction of a prime-order group using a cofactor-8 curve known as Curve25519. They reported on their implementation of Bulletproofs in Henry de Valence's article, "Bulletproofs Pre-release" [15]. An update on their progress in extending the implementation of Bulletproofs [16] to a constraint system API, which enables ZK proofs of arbitrary statements, was given in Cathie Yun's article, "Programmable Constraint Systems for Bulletproofs" [12]. The Hadamard relation and linear constraints together form the constraint system as formalized by the Interstellar team. Most of the mathematical background of these constraints and Bulletproofs is contained in Bünz et al.'s paper [4].

Dalek's constraint system, as defined earlier in Definition of Constraint System, is a collection of two types of arithmetic constraints: multiplicative constraints and linear constraints, over a set of high-level and low-level variables.

Easy-to-build Constraint Systems

In this Bulletproofs framework, a prover can build a constraint system in two steps:

  • Firstly, committing to secret inputs and allocating high-level variables corresponding to the inputs.
  • Secondly, selecting a suitable combination of multiplicative constraints and linear constraints, as well as requesting a random scalar in response to the high-level variables already committed [3].

Reference [17] gives an excellent outline of ZK proofs that use Bulletproofs:

  1. The prover commits to a value(s) that they want to prove knowledge of.
  2. The prover generates the proof by enforcing the constraints over the committed values and any additional public values. The constraints might require the prover to commit to some additional variables.
  3. The prover sends the verifier all the commitments made in step 1 and step 2, along with the proof from step 2.
  4. The verifier now verifies the proof by enforcing the same constraints over the commitments, plus any public values.

About Gadgets

Consider a verifiable shuffle: given two lists of committed values ${ x_1, x_2, . . ., x_n}$ and ${ y_1, y_2, . . ., y_n}$, prove that the second list is a permutation of the first. Bünz et al. ([4, page 5]) mention that the use of Bulletproofs reduces the proof size of such a verifiable shuffle to $\mathcal{O}(\log(n))$, compared to previous implementations. Although not referred to as a gadget in the paper, this is in fact a shuffle gadget. The term gadget was used and popularized by the Interstellar team, who introduced gadgets as building blocks of constraint systems; refer to [2].

A shuffle gadget (Figure 3) is any function whose outputs are a permutation of its inputs. By definition of a permutation, the number of inputs to a shuffle gadget is always the same as the number of outputs.

Figure 3: Simple Shuffle Gadgets with Two Inputs [2]
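For the two-input case in Figure 3, the permutation condition can be reduced to a single polynomial identity, $(x_1 - z)(x_2 - z) = (y_1 - z)(y_2 - z)$ for a random challenge scalar $z$, as described in [2]. The sketch below checks that identity in the clear, over plain integers and outside any commitment scheme (the challenge value is illustrative):

```rust
// Cleartext sketch of the two-input shuffle constraint: the products
// (x1 - z)(x2 - z) and (y1 - z)(y2 - z) agree for a random challenge z
// exactly when {y1, y2} is a permutation of {x1, x2} (with high probability).
fn shuffle_constraint_holds(x: (i64, i64), y: (i64, i64), z: i64) -> bool {
    (x.0 - z) * (x.1 - z) == (y.0 - z) * (y.1 - z)
}

fn main() {
    let z = 87; // stand-in for a Fiat-Shamir challenge scalar
    assert!(shuffle_constraint_holds((3, 5), (5, 3), z));  // a valid shuffle
    assert!(!shuffle_constraint_holds((3, 5), (4, 4), z)); // not a permutation
}
```

In the actual gadget, the two sides of the identity are computed inside the constraint system over committed variables, so neither list is revealed.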

The Interstellar team mentions other gadgets: "merge", "split" and a "range proof", which are implemented in their Confidential Assets scheme called the Cloak. Just as a shuffle gadget creates constraints that prove that two sets of variables are equal up to a permutation, a range-proof gadget checks that a given value lies in the interval $ [ 0, 2^n ) $, where $ n $ is the bit size of the value [3].

Gadgets in their simplest form merely receive some variables as inputs and produce corresponding output values. However, they may allocate more variables, sometimes called auxiliary variables, for internal use, and produce constraints involving all these variables. The main advantage of gadgets is that they are composable, thus a more complex gadget can always be created from a number of single gadgets. Interstellar's Bulletproofs API allows developers to choose their own collection of gadgets suitable for the protocol they wish to develop.

Interstellar's Concluding Remarks

Cathie Yun reports in [12] that their "work on Cloak and Spacesuit is far from complete" and mentions that they still have two goals to achieve:

  • Firstly, in order to "ensure that challenge-based variables cannot be inspected" and prevent the user from accidentally breaking soundness of their gadgets, the Bulletproofs protocol needs to be slightly extended, enabling it to commit "to a portion of low-level variables using a single vector Pedersen commitment without an overhead of additional individual high-level Pedersen commitments" [12].
  • Secondly, to "improve privacy in Cloak" by enabling "multi-party proving of a single constraint system". That is, "building a joint proof of a single constraint system by multiple parties, without sharing their secret inputs with each other".

All in all, Yun regards constraint systems as "very powerful because they can represent any efficiently verifiable program" [2].

R1CS Factorization Example for Bulletproofs

In [17], Harchandani explores the Dalek Bulletproofs API using various examples. Of interest is the factorization problem, one of the six R1CS Bulletproof examples discussed in the article. The computational challenge is to "prove knowledge of factors p and q of a given number r without revealing the factors".

Table 3 gives an outline of the description and the code lines of the example. Harchandani's complete code of this example can be found in [18]. Important to note is that the verifier must have an exact copy of the R1CS circuit used by the prover.

Table 3: Example of Bulletproof Constraint
  1. Create two pairs of generators: one pair for the Pedersen commitments and the other for the Bulletproofs.

     ```rust
     let pc_gens = PedersenGens::default();
     let bp_gens = BulletproofGens::new(128, 1);
     ```

  2. Instantiate the prover using the commitment and Bulletproofs generators of Step 1, to produce the prover's transcript.

     ```rust
     let mut prover_transcript = Transcript::new(b"Factors");
     let mut prover = Prover::new(&bp_gens, &pc_gens, &mut prover_transcript);
     ```

  3. The prover commits to the variables using Pedersen commitments, creates variables corresponding to each commitment, and adds the variables to the constraint system.

     ```rust
     let x1 = Scalar::random(&mut rng);
     let (com_p, var_p) = prover.commit(p.into(), x1);
     let x2 = Scalar::random(&mut rng);
     let (com_q, var_q) = prover.commit(q.into(), x2);
     ```

  4. The prover constrains the variables in two steps: first, multiplying the variables of Step 3 and capturing the product in the "output" variable `o`; second, ensuring that the difference between the product `o` and `r` is zero.

     ```rust
     let (_, _, o) = prover.multiply(var_p.into(), var_q.into());
     let r_lc: LinearCombination = vec![(Variable::One(), r.into())].iter().collect();
     prover.constrain(o - r_lc);
     ```

  5. The prover creates the proof.

     ```rust
     let proof = prover.prove().unwrap();
     ```

  6. Instantiate the verifier using the Pedersen commitments and Bulletproof generators. The verifier creates its own transcript.

     ```rust
     let mut verifier_transcript = Transcript::new(b"Factors");
     let mut verifier = Verifier::new(&bp_gens, &pc_gens, &mut verifier_transcript);
     ```

  7. The verifier records the commitments for `p` and `q` sent by the prover in the transcript, and creates variables for them similar to the prover's.

     ```rust
     let var_p = verifier.commit(commitments.0);
     let var_q = verifier.commit(commitments.1);
     ```

  8. The verifier constrains the variables corresponding to the commitments.

     ```rust
     let (_, _, o) = verifier.multiply(var_p.into(), var_q.into());
     let r_lc: LinearCombination = vec![(Variable::One(), r.into())].iter().collect();
     verifier.constrain(o - r_lc);
     ```

  9. The verifier verifies the proof.

     ```rust
     verifier.verify(&proof)
     ```

Conclusions, Observations and Recommendations

Constraint systems form a natural language for most computational problems expressed as arithmetic circuits, and have found ample application in both zk‑SNARKs and Bulletproofs. The use of ZK proofs for arithmetic circuits and programmable constraint systems has evolved considerably as illustrated in Table 2. Although much work still needs to be done, Bulletproofs with constraint systems built on them promise to be powerful tools for efficient handling of verifiable programs. The leverage that developers have, in choosing whatever gadgets they wish to implement, leaves enough room to build proof systems that have some degree of modularity. Proof system examples by both Dalek and Harchandani are valuable examples of what can be achieved with Bulletproofs and R1CSs. This framework provides great opportunities in building blockchain-enabled confidential digital asset schemes.


[1] A. Gabizon, "Explaining SNARKs Part V: From Computations to Polynomials" [online]. Available: Date accessed: 2020‑01‑03.

[2] C. Yun, "Building on Bulletproofs" [online]. Available: Date accessed: 2020‑01‑03.

[3] Dalek's R1CS documents, "Module Bulletproofs::r1cs_proof" [online]. Available: Date accessed: 2020‑01‑07.

[4] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More" [online], Blockchain Protocol Analysis and Security Engineering 2018 . Available: Date accessed: 2019‑11‑21.

[5] J. Bootle, A. Cerulli, P. Chaidos, J. Groth and C. Petit, "Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting" [online], Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 327‑357. Springer, 2016 . Available: Date accessed: 2019‑12‑21.

[6] A. Szepieniec and B. Preneel, "Generic Zero-knowledge and Multivariate Quadratic Systems" [online]. Available: Date accessed: 2019‑12‑31.

[7] A. Shpilka and A. Yehudayoff, "Arithmetic Circuits: A Survey of Recent Results and Open Questions" [online], Technion-Israel Institute of Technology, Haifa, Israel, 2010 . Available: Date accessed: 2019‑12‑21.

[8] A. Pinto, "Constraint Systems for ZK SNARKs" [online]. Available: Date accessed: 2019‑12‑23.

[9] H. Wu, W. Zheng, A. Chiesa, R. Ada Popa, and I. Stoica, "DIZK: A Distributed Zero Knowledge Proof System" [online], Proceedings of the 27th USENIX Security Symposium, August 15–17, 2018. Available: Date accessed: 2019‑12‑14.

[10] E. Ben-Sasson, A. Chiesa, D. Genkin, E. Tromer and M. Virza, "SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge (extended version)" [online], October 2013. Available: Date accessed: 2019‑12‑17.

[11] V. Buterin, "Quadratic Arithmetic Programs: from Zero to Hero" [online], 12 December 2016. Available: Date accessed: 2019‑12‑19.

[12] C. Yun, "Programmable Constraint Systems for Bulletproofs" [online]. Available: Date accessed: 2019‑12‑04.

[13] J. Groth, "Linear Algebra with Sub-linear Zero-knowledge Arguments" [online], Advances in Cryptology – CRYPTO 2009, pages 192–208, 2009. Available: Date accessed: 2019‑12‑04.

[14] Dalek, "Ristretto" [online]. Available: Date accessed: 2019‑10‑17.

[15] H. Valence, "Bulletproofs Pre-release" [online]. Available: Date accessed: 2019‑11‑21.

[16] Dalek, "Bulletproofs Implementation" [online]. Available: Date accessed: 2019‑10‑02.

[17] L. Harchandani, "Zero Knowledge Proofs using Bulletproofs" [online]. Available: Date accessed: 2020‑01‑03.

[18] L. Harchandani, "Factors R1CS Bulletproofs Example" [online]. Available: Date accessed: 2019‑10‑02.

[19] "Computer Algebra" [online]. Available: Date accessed: 2020‑02‑05.

[20] Wikipedia: "Binary Operation" [online]. Available: Date accessed: 2020‑02‑06.

[21] WolframMathWorld, "Field Theory" [online]. Available: Date accessed: 2020‑02‑06.

[22] Wikipedia: "Finite Field" [online]. Available: Date accessed: 2020‑02‑06.


Appendix A: Definition of Terms

Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.

  • Symbolic Computation: The study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects [19].
  • Binary Operation: An operation $ * $ or a calculation that combines two elements $ \mathcal{a} $ and $ \mathcal{b}$, called operands, to produce another element $ \mathcal{a} * \mathcal{b} $ [20].

  • Field: Any set $ \mathcal{F} $ of elements, together with Binary Operations $ + $ and $ \cdot $ (called addition and multiplication, respectively), is a field if, for any three elements $ \mathcal{a}, \mathcal{b} $ and $ \mathcal{c} $ in $ \mathcal{F} $, the field axioms given in the following table are satisfied. A Finite field is any field $ \mathcal{F} $ that contains a finite number of elements ([21], [22]).

Table A.1: Axioms of a Field

| Axiom          | Addition                | Multiplication                        |
|----------------|-------------------------|---------------------------------------|
| Commutativity  | $a+b=b+a$               | $ab=ba$                               |
| Associativity  | $(a+b)+c=a+(b+c)$       | $(ab)c=a(bc)$                         |
| Distributivity | $a(b+c)=ab+ac$          | $(a+b)c=ac+bc$                        |
| Identity       | $a+0=a=0+a$             | $a \cdot 1=a=1 \cdot a$               |
| Inverses       | $a+(-a)=0=(-a)+a$       | $aa^{-1}=1=a^{-1}a$ if $a \neq 0$     |

Appendix B: Notation Used

  • Let $ \mathcal{F} $ be a field.

  • Let $ \mathbf{a} = (a_1, a_2, \dots, a_n ) $ be a vector with $ n $ components $ a_1, a_2, \dots, a_n $, which are elements of some field $ \mathcal{F} $.

  • Let $ \mathbf{s}^T $ be the transpose of a vector $ \mathbf{s} = ( s_1, s_2, \dots, s_n ) $ of size $ n $ such that $ \mathbf{s}^T = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{bmatrix} $


Consensus Mechanisms


Consensus mechanisms "are crucial for a blockchain in order to function correctly. They make sure everyone uses the same blockchain" [1].


  • From Investopedia: A consensus mechanism is a fault-tolerant mechanism that is used in computer and blockchain systems to achieve the necessary agreement on a single data value or a single state of the network among distributed processes or multi-agent systems [2].

  • From KPMG: Consensus mechanism - A method of authenticating and validating a value or transaction on a Blockchain or a distributed ledger without the need to trust or rely on a central authority. Consensus mechanisms are central to the functioning of any blockchain or distributed ledger [3].


[1] "Different Blockchain Consensus Mechanisms", Hacker Noon [online]. Available: Date accessed: 2019‑06‑07.

[2] Investopedia: "Consensus Mechanism (Cryptocurrency)" [online]. Available: Date accessed: 2019‑06‑07.

[3] KPMG: "Consensus - Immutable Agreement for the Internet of Value" [online]. Available: Date accessed: 2019‑06‑07.

Understanding Byzantine Fault-tolerant Consensus


When considering the concept of consensus in cryptocurrency and cryptographic protocols, the Byzantine Generals Problem is often referenced, and a protocol is described as being Byzantine Fault Tolerant (BFT) if it can withstand the failures that problem describes. The problem stems from an analogy used as a means to understand the problem of distributed consensus.

To classify Byzantine failure: if a node in a system exhibits Byzantine failure, it is called a traitor node. The traitor (a flaky or malicious node) sends conflicting messages to different parts of the system, leading to an incorrect result of the computation that the distributed system is trying to perform.

Randomized Gossip Methods

    Dr. Dahlia Malkhi
    Ph.D. Computer Science


"Randomized Gossip Methods" by Dahlia Malkhi, PWL Conference, September 2016.

As an introduction, gossip-based protocols are simple, robust, efficient and fault tolerant. This talk provides insight into gossip protocols and how they function, as well as the reasoning behind the instances in which they do not function. It touches on three protocols from randomized gossip methods: Rumor Mongering, which spreads gossip in each communication; Name Dropper, which pushes new neighbors in each communication; and Scalable Weakly-consistent Infection-style Process Group Membership (SWIM), which pulls a heartbeat in each communication.


Note: Transcripts are available here.


Byzantine Fault Tolerance, Blockchain and Beyond

    Dr. Ittai Abraham
    Ph.D. Computer Science


"BFT, Blockchain and Beyond" by Ittai Abraham, Israel Institute for Advanced Studies, 2018.

This talk provides an overview of blockchain and decentralized trust, with the focus on Byzantine Fault Tolerance (BFT). Traditional BFT protocols are assessed and compared with the modern Nakamoto Consensus.

The presentation looks at a hybrid solution that combines traditional and modern consensus mechanisms. The talk delves into the timing assumptions behind consensus: asynchrony, synchrony and partial synchrony, and provides a brief history of all three, as well as their recent implementations and responsiveness.

In addition, the fundamental trade-off of decentralized trust is assessed, and performance, decentralization and privacy are compared.


Part 1

Part 2

BFT Consensus Mechanisms


Introduction to Applications of Byzantine Consensus Mechanisms


When considering how Tari will potentially build its second layer, an analysis of the most promising Byzantine Consensus Mechanisms and their applications was sought.

Important to consider is the "scalability trilemma". This phrase, referred to in [1], takes into account the potential trade-offs regarding decentralization, security and scalability:

  • Decentralization. A core principle on which the majority of the systems are built, taking into account censorship-resistance and ensuring that everyone, without prejudice, is permitted to participate in the decentralized system.

  • Scalability. Encompasses the ability of the network to process transactions. Thus, if a public blockchain is deemed to be efficient, effective and usable, it should be designed to handle millions of users on the network.

  • Security. Refers to the immutability of the ledger and takes into account threats of 51% attacks, Sybil attacks, Distributed Denial of Service (DDoS) attacks, etc.

Through the recent development of this ecosystem, most blockchains have focused on two of the three factors, namely decentralization and security, at the expense of scalability. The primary reason for this is that nodes must reach consensus before transactions can be processed [1].

This report examines proposals considering Byzantine Fault-tolerant (BFT) consensus mechanisms, and considers their feasibility and efficiency in meeting the characteristics of scalability, decentralization and security. In each instance, the following are assessed:

  • protocol assumptions;
  • reference implementations; and
  • discernment regarding whether the protocol may be used for Tari as a means to maintain the distributed asset state.

Appendix A contains terminology related to consensus mechanisms, including definitions of Consensus; Binary Consensus; Byzantine Fault Tolerance; Practical Byzantine Fault-tolerant Variants; Deterministic and Non-deterministic Protocols; and Scalability-Performance Trade-off.

Appendix B discusses timing assumptions, including degrees of synchrony, which range from Synchrony and Partial Synchrony to Weak Synchrony, Random Synchrony and Asynchrony; as well as the problem with timing assumptions, including Denial of Service (DoS) Attack, FLP Impossibility and Randomized Agreement.

Brief Survey of Byzantine Fault-tolerant Consensus Mechanisms

Many peer-to-peer, online, real-time strategy games use a modified lockstep protocol as a consensus protocol in order to manage the game state between players in a game. Each game action results in a game state delta broadcast to all other players in the game, along with a hash of the total game state. Each player validates the change by applying the delta to their own game state and comparing the game state hashes. If the hashes do not agree, then a vote is cast, and those players whose game states are in the minority are disconnected and removed from the game (known as a desync) [2].
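As a toy illustration of the lockstep scheme above, the following sketch hashes each player's state after applying a delta and votes off the minority. All state fields and function names here are invented for the example:

```python
import hashlib
import json
from collections import Counter

def state_hash(state: dict) -> str:
    """Deterministic hash of a game state (canonical JSON)."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_delta(state: dict, delta: dict) -> dict:
    """Apply a game-action delta to a player's local copy of the state."""
    new_state = dict(state)
    new_state.update(delta)
    return new_state

def desync_vote(hashes: dict) -> set:
    """Players whose state hash is in the minority are disconnected (a desync)."""
    majority_hash, _ = Counter(hashes.values()).most_common(1)[0]
    return {player for player, h in hashes.items() if h != majority_hash}

# Three honest players apply the broadcast delta; one player's state diverged.
base = {"tick": 41, "gold": 100}
delta = {"tick": 42, "gold": 90}
states = {p: apply_delta(base, delta) for p in ("alice", "bob", "carol")}
states["mallory"] = apply_delta(base, {"tick": 42, "gold": 9000})  # diverged

hashes = {p: state_hash(s) for p, s in states.items()}
print(desync_vote(hashes))  # the diverged player is voted out
```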

Permissioned Byzantine Fault-tolerant Protocols


Byzantine agreement schemes are considered well suited for permissioned blockchains, where the identity of the participants is known. Examples include Hyperledger and Tendermint. Here, the Federated Consensus Algorithm is implemented [3].


Hyperledger Fabric

Hyperledger Fabric (HLF) began as a project under the Linux Foundation in early 2016 [4]. The aim was to create an open-source, cross-industry, standard platform for distributed ledgers. HLF is an implementation of a distributed ledger platform for running smart contracts, leveraging familiar and proven technologies. It has a modular architecture, allowing pluggable implementations of various functions. The distributed ledger protocol of the fabric is run on the peers. The fabric distinguishes peers as:

  • validating peers (they run the consensus algorithm, thus validating the transactions); and
  • non-validating peers (they act as a proxy that helps in connecting clients to validating peers).

The validating peers run a BFT consensus protocol for executing a replicated state machine that accepts deploy, invoke and query transactions as operations [5].

The blockchain's hash chain is computed based on the executed transactions and resulting persistent state. The replicated execution of chaincode (the transaction that involves accepting the code of the smart contract to be deployed) is used for validating the transactions. It is assumed that among $n$ validating peers, at most $f < n/3$ (where $f$ is the number of faulty nodes and $n$ is the number of nodes present in the network) may behave arbitrarily, while the others will execute correctly, thus satisfying the BFT assumption. Since HLF proposes to follow a Practical Byzantine Fault-tolerant (PBFT) consensus protocol, the chaincode transactions must be deterministic in nature, otherwise different peers might have different persistent states. The SIEVE protocol is used to filter out the non-deterministic transactions, thus assuring a unique persistent state among peers [5].
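The $f < n/3$ bound above can be made concrete with a small helper. This is not part of HLF itself; it is just arithmetic on the classical PBFT-style fault bound:

```python
def max_byzantine_faults(n: int) -> int:
    """Largest f with f < n/3: the most arbitrarily-faulty validating peers
    an n-peer PBFT-style deployment can tolerate."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Votes needed (n - f) so that any two quorums overlap in more than f peers."""
    return n - max_byzantine_faults(n)

# The minimal BFT cluster of 4 peers tolerates 1 fault; 3 matching votes decide.
print(max_byzantine_faults(4), quorum_size(4))  # 1 3
```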

While being redesigned for its v1.0 release, the framework's goal was extensibility. This version allowed modules such as membership services and the consensus mechanism to be exchanged. Being permissioned, this consensus mechanism is mainly responsible for receiving the transaction requests from the clients and establishing a total execution order. So far, these pluggable consensus modules include a centralized, single ordering service for testing purposes, and a crash-tolerant ordering service based on Apache Kafka [3].



Tendermint

Tendermint Core is a BFT Proof-of-Stake (PoS) protocol, which is composed of two protocols in one: a consensus algorithm and a peer-to-peer networking protocol. Inspired by the design goal behind Raft, the authors of [6] specified Tendermint as an easy-to-understand, developer-friendly algorithm, while doing algorithmically complex systems engineering.

Tendermint is modeled as a deterministic protocol, live under partial synchrony, which achieves throughput within the bounds of the latency of the network and individual processes themselves.

Tendermint rotates through the validator set, in a weighted round-robin fashion. The higher the stake (i.e. voting power) that a validator possesses, the greater its weighting; and the more times, proportionally, it will be elected as a leader. Thus, if one validator has the same amount of voting power as another validator, they will both be elected by the protocol an equal amount of times [6].
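The weighted rotation can be sketched as follows. This mirrors the spirit of Tendermint's proposer-priority scheme, although the real implementation differs in detail; the validator names and stakes are invented:

```python
def weighted_round_robin(validators: dict, rounds: int) -> list:
    """Weighted round-robin sketch: each round, every validator's priority
    grows by its stake; the highest-priority validator proposes and then
    "pays back" the total stake. Election frequency ends up proportional
    to voting power."""
    priority = {v: 0 for v in validators}
    total = sum(validators.values())
    schedule = []
    for _ in range(rounds):
        for v, stake in validators.items():
            priority[v] += stake
        # Deterministic tie-break by name, purely for reproducibility here.
        proposer = max(priority, key=lambda v: (priority[v], v))
        priority[proposer] -= total
        schedule.append(proposer)
    return schedule

sched = weighted_round_robin({"a": 1, "b": 1, "c": 2}, 8)
print(sched)
```

With stakes 1, 1 and 2 over eight rounds, `c` is elected four times and `a` and `b` twice each, i.e. in proportion to voting power.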

Critics have argued that Tendermint is not decentralized, and that one can distinguish and target its leadership, launching a DDoS attack against it and stifling the progression of the chain. Although Sentry Architecture (containing sentry nodes, refer to Sentry Nodes) has been implemented in Tendermint, the degree of decentralization it achieves is still debated.

Sentry Nodes

Sentry nodes are guardians of a validator node and provide validator nodes with access to the rest of the network. They are well connected to other full nodes on the network. Sentry nodes may be dynamic, but should maintain persistent connections to some evolving random subset of each other. They should always expect to have direct incoming connections from the validator node and its backup(s). They do not report the validator node's address in the Peer Exchange Reactor (PEX) and may be more strict about the quality of peers they keep.

Sentry nodes belonging to validators that trust each other may wish to maintain persistent connections via Virtual Private Network (VPN) with one another, but only report each other sparingly in the PEX [7].

Permissionless Byzantine Fault-tolerant Protocols


BFT protocols face several limitations when utilized in permissionless blockchains. They do not scale well with the number of participants, resulting in performance deterioration for the targeted network sizes. In addition, they are not well established in this setting, thus they are prone to security issues such as Sybil attacks. Currently, there are approaches that attempt to circumvent or solve this problem [3].

Protocols and Algorithms


Paxos

The Paxos family of protocols includes a spectrum of trade-offs between the number of processors, the number of message delays before learning the agreed value, the activity level of individual participants, the number of messages sent and the types of failures. Although the FLP theorem (named after the authors Michael J. Fischer, Nancy Lynch and Mike Paterson) states that there is no deterministic fault-tolerant consensus protocol that can guarantee progress in an asynchronous network, Paxos guarantees safety (consistency), and the conditions that could prevent it from making progress are difficult to provoke [8].

Paxos achieves consensus as long as there are f failures, where f < (n-1)/2. These failures cannot be Byzantine, otherwise the BFT proof would be violated. Thus it is assumed that messages are never corrupted, and that nodes do not collude to subvert the system.

Paxos proceeds through a set of negotiation rounds, with one node having "Leadership" status. Progress will stall if the leader becomes unreliable, until a new leader is elected, or if an old leader suddenly comes back online and a dispute between two leader nodes arises.
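The reason majority quorums keep Paxos safe is that any two majorities of $n$ nodes intersect, so two conflicting values can never both be chosen. That property can be checked exhaustively for small $n$ (an illustration of the quorum property only, not of Paxos itself):

```python
from itertools import combinations

def majority(n: int) -> int:
    """Smallest majority quorum size for n nodes."""
    return n // 2 + 1

def majorities_intersect(n: int) -> bool:
    """Exhaustively verify that every pair of majority quorums of n nodes
    shares at least one node -- the property underlying Paxos safety."""
    quorums = list(combinations(range(n), majority(n)))
    return all(set(a) & set(b) for a in quorums for b in quorums)

print(all(majorities_intersect(n) for n in range(1, 8)))  # True
```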


Chandra–Toueg

The Chandra–Toueg consensus algorithm [9], published in 1996, relies on a special node that acts as a failure detector. In essence, it pings other nodes to make sure they are still responsive. This implies that the detector stays online and that the detector must continuously be made aware when new nodes join the network.

The algorithm itself is similar to the Paxos algorithm, which also relies on failure detectors and requires f<n/2, where n is the total number of processes.
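The failure-detector idea can be sketched as a heartbeat table with a timeout. The class and names below are invented for illustration:

```python
class HeartbeatFailureDetector:
    """A node is suspected once its last heartbeat is older than `timeout`.
    Real detectors of the Chandra-Toueg kind are only eventually accurate:
    suspicions may be wrong and are revised when a heartbeat arrives."""

    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node: str, now: float) -> None:
        """Record that `node` responded to a ping at time `now`."""
        self.last_seen[node] = now

    def suspects(self, now: float) -> set:
        """Nodes whose most recent heartbeat is stale."""
        return {n for n, t in self.last_seen.items() if now - t > self.timeout}

fd = HeartbeatFailureDetector(timeout=2.0)
fd.heartbeat("p1", now=0.0)
fd.heartbeat("p2", now=1.5)
print(fd.suspects(now=3.0))  # p1 is suspected; p2 is not
```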


Raft

Raft is a consensus algorithm designed as an alternative to Paxos. It was meant to be more understandable than Paxos by means of separation of logic, but it is also formally proven safe and offers some additional features [10].

Raft achieves consensus via an elected leader. Each follower has a timeout in which it expects the heartbeat from the leader. It is thus a synchronous protocol. If the leader fails, an election is held to find a new leader. This entails nodes nominating themselves on a first-come, first-served basis. Hung votes require the election to be scrapped and restarted. This suggests that a high degree of cooperation is required by nodes, and that malicious nodes could easily collude to disrupt a leader and then prevent a new leader from being elected. Raft is a simple algorithm, but is clearly unsuitable for consensus in cryptocurrency applications.
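A minimal sketch of the election mechanics just described, with randomized timeouts and a majority check. This is a toy model, not a full Raft implementation; the 150-300 ms timeout range follows the Raft paper's suggestion, and the clash window is invented to model near-simultaneous timeouts:

```python
import random

def elect_leader(nodes, rng, clash_window=10.0):
    """Raft-style election sketch: followers pick random timeouts (150-300 ms);
    nodes whose timeouts expire within `clash_window` ms of the earliest all
    become candidates. Each voter grants its single vote to a random candidate
    (modelling first-come, first-served arrival of RequestVote messages).
    A hung vote scraps the election and restarts with fresh timeouts."""
    while True:
        timeouts = {n: rng.uniform(150.0, 300.0) for n in nodes}
        earliest = min(timeouts.values())
        candidates = [n for n in nodes if timeouts[n] - earliest < clash_window]
        votes = {c: 0 for c in candidates}
        for _ in nodes:
            votes[rng.choice(candidates)] += 1
        winner, count = max(votes.items(), key=lambda kv: kv[1])
        if count > len(nodes) // 2:
            return winner  # majority reached
        # split vote: retry with new random timeouts

nodes = ["n1", "n2", "n3", "n4", "n5"]
leader = elect_leader(nodes, random.Random(3))
print(leader)
```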

While Paxos, Raft and many other well-known protocols tolerate crash faults, BFT protocols, beginning with PBFT, tolerate even arbitrary corrupted nodes. Many subsequent protocols offer improved performance, often through optimistic execution that provides excellent performance when there are no faults; when clients do not contend much; and when the network is well behaved, and there is at least some progress.

In general, BFT systems are evaluated in deployment scenarios where latency and the central processing unit (CPU) are the bottleneck. Thus the most effective protocols reduce the number of rounds and minimize expensive cryptographic operations.

In a recent line of work, [11] advocated improving worst-case performance, providing service-quality guarantees even when the system is under attack, even if this comes at the expense of performance in the optimistic case. However, although the "Robust BFT protocols in this vein gracefully tolerate compromised nodes, they still rely on timing assumptions about the underlying network" [28]. Thus, focus shifted to asynchronous networks.



Hashgraph

The Hashgraph consensus algorithm [12] was released in 2016. It claims Byzantine Fault Tolerance (BFT) under complete asynchrony assumptions, no leaders, no round robin, no Proof-of-Work (PoW), eventual consensus with probability one, and high speed in the absence of faults.

Hashgraph is based on the gossip protocol, which is a fairly efficient distribution strategy that entails nodes randomly sharing information with each other, similar to how human beings gossip.

Nodes jointly build a Hashgraph reflecting all of the gossip events. This allows Byzantine agreement to be achieved through virtual voting. Alice does not send Bob a vote over the Internet. Instead, Bob calculates what vote Alice would have sent, based on his knowledge of what Alice knows.

Hashgraph uses digital signatures to prevent undetectable changes to transmitted messages. It does not violate the FLP theorem, since it is non-deterministic.

The Hashgraph has some similarities to a blockchain. To quote the white paper: "The Hashgraph consensus algorithm is equivalent to a blockchain in which the 'chain' is constantly branching, without any pruning, where no blocks are ever stale, and where each miner is allowed to mine many new blocks per second, without proof-of-work" [12].

Because each node keeps track of the Hashgraph, there is no need to have voting rounds in Hashgraph; each node already knows what all of its peers will vote for, and thus consensus is reached purely by analyzing the graph.

Gossip Protocol

The gossip protocol works like this:

  • Alice selects a random peer node, say Bob, and sends him everything she knows. She then selects another random node and repeats the process indefinitely.
  • Bob, on receiving Alice's information, marks this as a gossip event and fills in any gaps in his knowledge from Alice's information. Once done, he continues gossiping, using his updated information.

The basic idea behind the gossip protocol is the following: A node wants to share some information with the other nodes in the network. Then, periodically, it randomly selects a node from the set of nodes and exchanges the information. The node that receives the information then randomly selects a node from the set of nodes and exchanges the information, and so on. The information is periodically sent to N targets, where N is the fanout [13].

The cycle is the number of rounds to spread the information. The fanout is the number of nodes a node gossips with in each cycle.

With a fanout of 1, $O(\log N)$ cycles are necessary for the update to reach all the nodes. In this way, information spreads throughout the network in an exponential fashion [12].
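The exponential spread is easy to reproduce in a few lines. The following is a simple push-gossip simulation; the node count and seed are invented for the example:

```python
import math
import random

def gossip_cycles(n: int, fanout: int, rng) -> int:
    """Push gossip: each cycle, every informed node sends the rumour to
    `fanout` uniformly random peers. Returns cycles until all n nodes know."""
    informed = {0}
    cycles = 0
    while len(informed) < n:
        # Snapshot the number of senders at the start of the cycle.
        for _ in range(len(informed) * fanout):
            informed.add(rng.randrange(n))
        cycles += 1
    return cycles

n = 1024
cycles = gossip_cycles(n, fanout=1, rng=random.Random(42))
# The informed set can at most double per cycle, so at least log2(1024) = 10
# cycles are needed; the simulation finishes within that order of magnitude.
print(cycles, math.ceil(math.log2(n)))
```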

The gossip history can be represented as a directed graph, as shown in Figure 1.

Figure 1: Gossip Protocol Directed Graph

Hashgraph introduces a few important concepts that are used repeatedly in later BFT consensus algorithms: famous witnesses and strongly seeing.


Ancestors

If an event x1 comes before another event x2, and they are connected by a line, then x1 (the older event) is an ancestor of x2. If both events were created by the same node, then x1 is a self-ancestor of x2.

Note: The gossip protocol defines an event as being a (self-)ancestor of itself!


Seeing

If an event x1 is an ancestor of x2, then we say that x2 sees x1, as long as the node is not aware of any forks from x1. So in the absence of forks, all events will see all of their ancestors.

     +-----> y
x +--+
     +-----> z

In the preceding example, x is an ancestor of both y and z. However, because there is no ancestor relationship between y and z, the two constitute a fork and the seeing condition fails: y cannot see x, and z cannot see x.

It may be the case that it takes time before nodes in the protocol detect the fork. For instance, Bob may create z and y, but share z with Alice and y with Charlie. Both Alice and Charlie will eventually learn about the deception, but until that point, Alice will believe that y sees x, and Charlie will believe that z sees x. This is where the concept of strongly seeing comes in.
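The ancestor and fork relationships in this example can be modelled directly. This is a toy graph, with event and node names following the example above (Bob as the forking creator); the full Hashgraph "seeing" definition also depends on which node's local graph is examined:

```python
from itertools import combinations

# Each event records its parent events and its creator.
events = {
    "x": {"parents": (), "creator": "bob"},
    "y": {"parents": ("x",), "creator": "bob"},  # shared with Alice
    "z": {"parents": ("x",), "creator": "bob"},  # shared with Charlie
}

def is_ancestor(a: str, b: str) -> bool:
    """True if a is an ancestor of b (every event is an ancestor of itself)."""
    if a == b:
        return True
    return any(is_ancestor(a, p) for p in events[b]["parents"])

def forks() -> list:
    """Pairs of events by the same creator where neither is the other's
    ancestor -- exactly the condition that breaks 'seeing'."""
    return [(a, b) for a, b in combinations(events, 2)
            if events[a]["creator"] == events[b]["creator"]
            and not is_ancestor(a, b) and not is_ancestor(b, a)]

print(is_ancestor("x", "y"))  # True: x is an ancestor of y
print(forks())                # [('y', 'z')]: y and z fork, so neither sees x
```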

Strongly Seeing

If a node examines its Hashgraph and notices that an event z sees an event x, and not only that, but it can draw an ancestor relationship (usually via multiple routes) through a super-majority of peer nodes, and that a different event from each node also sees x, then it is said that according to this node, z strongly sees x. The example in Figure 2 comes from [12]:

Figure 2: Illustration of Strongly-seeing

Construct of Gossiping

The main consensus algorithm loop consists of every node (Alice), selecting a random peer node (Bob) and sharing their graph history. Now Alice and Bob have the same graph history.

Alice and Bob both create a new event with the new knowledge they have just learnt from their peer. Alice repeats this process continuously.

Internal Consensus

After a sync, a node will determine the order for as many events as possible, using three procedures. The algorithm uses constant n (the number of nodes) and a small constant value c>2.

  • Firstly, we have the Swirlds Hashgraph consensus algorithm. Each member runs this in parallel. Each sync brings in new events, which are then added to the Hashgraph. All known events are then divided into rounds. Then the first events in each round are decided on as being famous or not (through purely local Byzantine agreement with virtual voting). Then the total order is found on those events for which enough information is available. If two members independently assign a position in history to an event, they are guaranteed to assign the same position, and guaranteed to never change it, even as more information comes in. Furthermore, each event is eventually assigned such a position, with a probability of one [12].
in parallel:

      loop
        sync all known events to a random member
      end loop

      loop
        receive a sync
        create a new event
        call divideRounds
        call decideFame
        call findOrder
      end loop
  • Secondly, we have the divideRounds procedure. As soon as an event x is known, it is assigned a round number x.round, and the Boolean value x.witness is calculated, indicating whether it is the first event that a member created in that round [12].
      for each event x
        r ← max round of parents of x ( or 1 if none exist )
        if x can strongly see more than 2*n/3 round r witnesses
          x.round ← r + 1
        else
          x.round ← r
        x.witness ← ( x has no self parent ) || ( x.round > x.selfParent.round )
  • Thirdly, we have the decideFame procedure. For each witness event (i.e. an event x where x.witness is true), decide whether it is famous (i.e. assign a Boolean to x.famous). This decision is done by a Byzantine agreement protocol based on virtual voting. Each member runs it locally, on their own copy of the Hashgraph, with no additional communication. The protocol treats the events in the Hashgraph as if they were sending votes to each other, although the calculation is purely local to a member’s computer. The member assigns votes to the witnesses of each round, for several rounds, until more than two-thirds of the population agrees [12].
      for each event x in order from earlier rounds to later
        x.famous ← UNDECIDED
        for each event y in order from earlier rounds to later
          if x.witness and y.witness and y.round > x.round
            d ← y.round - x.round
            s ← the set of witness events in round y.round-1 that y can strongly see
            v ← majority vote in s ( is TRUE for a tie )
            t ← number of events in s with a vote of v
            if d = 1 // first round of the election
              y.vote ← can y see x ?
            else if d mod c > 0 // this is a normal round
              if t > 2*n/3 // if supermajority, then decide
                x.famous ← v
                y.vote ← v
                break // y loop
              else // else, just vote
                y.vote ← v
            else if t > 2*n/3 // this is a coin round
              y.vote ← v
            else // else flip a coin
              y.vote ← middle bit of y.signature

An attempt to address some of the following criticisms has been presented [14]:

  • The Hashgraph protocol is patented and is not open source.
  • In addition, the Hashgraph white paper assumes that n, the number of nodes in the network, is constant. In practice, n can increase, but performance likely degrades badly as n becomes large [15].
  • Hashgraph is not as "fair" as claimed in their paper, with at least one attack being proposed [16].


SINTRA

SINTRA is a Secure INtrusion-Tolerant Replication Architecture used for coordination in asynchronous networks subject to Byzantine faults. It consists of a collection of protocols and is implemented in Java, providing secure replication and coordination among a group of servers connected by a wide-area network, such as the Internet. For a group consisting of $n$ servers, it tolerates up to $t<n/3$ servers failing in arbitrary, malicious ways, which is optimal for the given model. The servers are connected only by asynchronous point-to-point communication links. Thus, SINTRA automatically tolerates timing failures as well as attacks that exploit timing. The SINTRA group model is static. This means that failed servers must be recovered by mechanisms outside of SINTRA, and the group must be initialized by a trusted process.

The protocols exploit randomization, which is needed to solve Byzantine agreement in such asynchronous distributed systems. Randomization is provided by a threshold-cryptographic pseudorandom generator, a coin-tossing protocol based on the Diffie-Hellman problem.

Threshold cryptography is a fundamental concept in SINTRA, as it allows the group to perform a common cryptographic operation for which the secret key is shared among the servers, such that no single server or small coalition of corrupted servers can obtain useful information about the key. SINTRA provides threshold-cryptographic schemes for digital signatures, public-key encryption and unpredictable pseudorandom number generation (coin-tossing).

SINTRA also contains broadcast primitives for reliable and consistent broadcasts, which provide agreement on individual messages sent by distinguished senders. However, these primitives cannot guarantee a total order for a stream of multiple messages delivered by the system, which is needed to build fault-tolerant services using the state machine replication paradigm. This is the problem of atomic broadcast, and it requires more expensive protocols based on Byzantine agreement. SINTRA provides multiple randomized Byzantine agreement protocols, for binary and multi-valued agreement, and implements an atomic broadcast channel on top of agreement. An atomic broadcast that also maintains a causal order in the presence of Byzantine faults is provided by the secure causal atomic broadcast channel [17].

Figure 3 illustrates SINTRA's modular design. Modularity greatly simplifies the construction and analysis of the complex protocols needed to tolerate Byzantine faults.

Figure 3: Design of SINTRA


HoneyBadgerBFT

HoneyBadgerBFT was released in November 2016 and is seen as the first practical asynchronous BFT consensus algorithm. It was designed with cryptocurrencies in mind, where bandwidth is considered scarce, but an abundance of CPU power is available. Thus, the protocol implements public-private key encryption to increase the efficiency of the establishment of consensus. The protocol works with a fixed set of servers to run the consensus. However, this leads to centralization and allows an attacker to specifically target these servers [3].

In its threshold encryption scheme, any one party can encrypt a message using a master public key, and it requires f+1 correct nodes to compute and reveal decryption shares for a ciphertext before the plaintext can be recovered.
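HoneyBadgerBFT's scheme is threshold public-key encryption, but the $f+1$ reconstruction idea can be illustrated with plain Shamir secret sharing over a prime field. This is only a sketch of the threshold mechanism; the field size and parameters are chosen for the example:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for this toy field

def make_shares(secret: int, f: int, n: int, rng) -> list:
    """Split `secret` with a random degree-f polynomial over GF(PRIME);
    any f+1 of the n shares suffice to reconstruct it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(f)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares: list) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

secret = 123456789
shares = make_shares(secret, f=1, n=4, rng=random.Random(1))  # n = 3f + 1 = 4
assert recover(shares[:2]) == secret   # any f + 1 = 2 shares reconstruct
assert recover(shares[2:]) == secret
```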

The work of HoneyBadgerBFT is closely related to SINTRA, which, as mentioned earlier, is a system implementation based on the asynchronous atomic broadcast protocol from [18]. This protocol consists of a reduction from Atomic Broadcast Channel (ABC) to Asynchronous Common Subset (ACS), as well as a reduction from ACS to Multi-valued Validated Byzantine Agreement (MVBA).

HoneyBadger offers a novel reduction from ABC to ACS that provides better efficiency (by an $O(N)$ factor) through batching, while using threshold encryption to preserve censorship resilience. Better efficiency is also obtained by cherry-picking improved instantiations of subcomponents. For example, the expensive MVBA is circumvented by using an alternative ACS along with an efficient reliable broadcast (RBC) [28].

Stellar Consensus Protocol

Stellar Consensus Protocol (SCP) is a proposed asynchronous protocol. It is considered to be a global consensus protocol, consisting of a nomination protocol and a ballot protocol. SCP is said to be BFT, bringing with it the concepts of quorum slices and federated BFT [5].

Each participant forms a quorum of other users, thus creating a trust hierarchy, which requires complex trust decisions [3]. Initially, the nomination protocol is run. During this, new values called candidate values are proposed for agreement. Each node receiving these values votes for a single value among them. Eventually, this results in unanimously selected values for that slot.

After successful execution of the nomination protocol, the nodes deploy the ballot protocol. This involves federated voting to either commit or abort the values resulting from the nomination protocol, which results in externalizing the ballot for the current slot. The aborted ballots are now declared irrelevant. However, there can be stuck states, where nodes cannot reach a conclusion on whether to abort or commit a value. This situation is avoided by moving the ballot to a higher-valued one and considering it in a new ballot protocol execution; this helps in the case where a node believes that the stuck ballot was committed. Thus SCP assures avoidance and management of stuck states, and provides liveness.

The concept of quorum slices in case of SCP provides asymptotic security and flexible trust, making it more acceptable than other earlier consensus algorithms utilizing Federated BFT, such as the Ripple protocol consensus algorithm [29]. Here, the user is provided more independence in deciding whom to trust [30].

The SCP protocol claims to be free of blocked states, and provides decentralized control, asymptotic security, flexible trust and low latency. However, it does not guarantee safety at all times. If a user node chooses an inefficient quorum slice, security is not guaranteed. In the event of partition or misbehaving nodes, SCP halts progress of the network until consensus can be reached.
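The quorum-slice idea can be illustrated with a small check: a set of nodes Q is a quorum when every member of Q has at least one of its slices entirely inside Q. The node names and slices below are invented for the example:

```python
# Each node lists the slices (sets of nodes) it is willing to trust.
slices = {
    "a": [{"a", "b", "c"}],
    "b": [{"b", "c", "d"}],
    "c": [{"a", "b", "c"}],
    "d": [{"b", "c", "d"}],
}

def is_quorum(q: set) -> bool:
    """Q is a quorum iff every member has some slice contained in Q."""
    return all(any(s <= q for s in slices[v]) for v in q)

print(is_quorum({"a", "b", "c", "d"}))  # True: every member's slice fits
print(is_quorum({"a", "b"}))            # False: a's slice needs c as well
```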


LinBFT

LinBFT is a BFT protocol for blockchain systems. It achieves an amortized communication volume per block of O(n) under reasonable conditions (where n is the number of participants), while satisfying deterministic guarantees on safety and liveness. It satisfies liveness in a partially synchronous network.

LinBFT cuts the O(n⁴) complexity down to O(n) by combining linear view change, threshold signatures and verifiable random functions. This is optimal, in that disseminating a block already takes O(n) transmissions.

LinBFT is designed to be implemented for permissionless, public blockchain systems and takes into account anonymous participants without a public key infrastructure, PoS, a rotating leader and a dynamic participant set [31]. For instance, participants can be anonymous and without a centralized public key infrastructure (PKI); they establish public keys among themselves and participate in a distributed key generation (DKG) protocol required to create threshold signatures, both of which are communication-heavy processes.

LinBFT is compatible with PoS, which counters Sybil attacks and deters dishonest behavior through slashing [31].


Algorand

The Algorand white paper was released in May 2017. Algorand is a synchronous BFT consensus mechanism in which blocks are added at a minimum rate [19]. It allows participants to privately check whether they have been chosen for consensus participation, and requires only one message per user, thus limiting possible attacks [3].

Algorand scales up to 500,000 users by employing verifiable random functions, which are pseudorandom functions able to provide verifiable proofs that their output is correct [3]. It introduces the concept of a concrete coin. Most of these BFT algorithms require some type of randomness oracle, and all nodes need to see the same value when the oracle is consulted. This had previously been achieved through a common coin idea. The concrete coin uses a much simpler approach, but only returns a binary value [19].


Thunderella

Thunderella implements an asynchronous strategy, with a synchronous strategy used as a fallback in the event of a malfunction [20]; it thus achieves both robustness and speed. It can be applied in permissionless networks using PoW. Network robustness and "instant confirmations" require 75% of the network to be honest, as well as the presence of a leader node.

Snowflake to Avalanche

This consensus protocol was first seen in [39]. The paper outlines four protocols that are building blocks forming a protocol family. These leaderless BFT protocols, published by Team Rocket, are built on a metastable mechanism. They are called Slush, Snowflake, Snowball and Avalanche.

The protocols differ from the traditional consensus protocols and the Nakamoto consensus protocols by not requiring an elected leader. Instead, the protocol simply guides all the nodes to consensus.

These four protocols are described as a new family of protocols due to this concept of metastability: a means to establish consensus by guiding all nodes towards an emerging consensus without requiring leaders, while still maintaining the same level of security and inducing a speed that exceeds current protocols.

This is achieved through the formation of "sub-quorums", which are small, randomized samples from nodes on the network. This allows for greater throughputs and sees parallel consensuses running before they merge to form the overarching consensus, which can be seen as similar in nature to the gossip protocol.

With regard to safety, throughput (the number of transactions per second) and scalability (the number of people supported by the network), Slush, Snowflake, Snowball and Avalanche seem to be able to achieve all three. They impart a probabilistic safety guarantee in the presence of Byzantine adversaries and achieve a high throughput and scalability due to their concurrent nature. A synchronous network is assumed.

The current problem facing the design of BFT protocols is that a system can be very fast when a small number of nodes are active, as there are fewer decisions to make. However, when there are many users and an increase in transactions, the system's performance cannot be maintained.

Unlike the PoW implementation, which requires constant active participation from the miners, Avalanche can function even when nodes are dormant.

While traditional consensus protocols require O(n²) communication, the communication complexity of these protocols ranges from O(kn log n) to O(kn) for some small security parameter k << n. Team Rocket highlight that this is less intensive than O(n²) communication, making these protocols faster and more scalable.

To backtrack a bit, Big O notation is used in computer science to describe the performance or complexity of an algorithm. It describes the worst-case scenario, and can be used to describe the execution time required by an algorithm [21]. In the case of consensus algorithms, O describes a finite expected number of steps or operations [22]. For example, O(1) describes an algorithm that will always execute in the same time, regardless of the size of the input data set; O(n) describes an algorithm whose performance grows linearly, in direct proportion to the size of the input data set; and O(n²) represents an algorithm whose performance is directly proportional to the square of the size of the input data set.
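
A toy sketch makes these growth rates concrete by counting the operations each class performs (illustrative functions only, unrelated to the protocols above):

```python
# Toy operation counts for O(1), O(n) and O(n^2) behaviour.

def constant_lookup(items):
    """O(1): one operation regardless of input size."""
    return items[0], 1  # (value, operations performed)

def linear_scan(items):
    """O(n): touches every element once."""
    ops = 0
    for _ in items:
        ops += 1
    return ops

def all_pairs(items):
    """O(n^2): compares every element with every other element."""
    ops = 0
    for a in items:
        for b in items:
            ops += 1
    return ops

data = list(range(100))
print(constant_lookup(data)[1])  # 1 operation, however long the list
print(linear_scan(data))         # 100 operations for 100 elements
print(all_pairs(data))           # 10000 operations for 100 elements
```

Doubling the input leaves the first count unchanged, doubles the second, and quadruples the third, which is exactly the distinction drawn in the paragraph above.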

The reason for this is that O(n²) means the rate of growth of the function is determined by n², where n is the number of people on the network. Thus, the addition of a person increases the time taken to disseminate the information on the network quadratically, since traditional consensus protocols require everyone to communicate with one another, making it a laborious process [32].

Despite assuming a synchronous network, which is susceptible to DoS attacks, this new family of protocols "reaches a level of security that is simply good enough while surging forward with other advancements" [32].


PARSEC

PARSEC is a BFT consensus algorithm possessing weak synchrony assumptions (highly asynchronous, assuming random delays with finite expected value). Similar to Hashgraph, it has no leaders, no round robin and no PoW, and it reaches eventual consensus with probability one. It differs from Hashgraph in that it provides high speed in both the absence and the presence of faults. Thus, it avoids the structures of delegated PoS (DPoS), which requires a trusted set of leaders, and does not have a round robin (where a permissioned set of miners sign each block).

PARSEC is fully open, unlike Hashgraph, which is patented and closed source. The reference implementation of PARSEC, written in Rust, was released a few weeks after the white paper ([33], [23]).

The general problem in reaching Byzantine agreement on any value is reduced to the simple problem of reaching binary Byzantine agreement on the nodes participating in each decision. This has allowed PARSEC to reuse the binary Byzantine agreement protocol (Signature-free Asynchronous Byzantine Consensus) after adapting it to the gossip protocol [34].

Similar to HoneyBadgerBFT, this protocol is composed by combining interesting ideas presented in the literature. Similar to Hashgraph and Avalanche, a gossip protocol is used to allow efficient communication between nodes [33].

Finally, the need for a trusted leader or a trusted setup phase implied in [27] is removed by porting the key ideas to an asynchronous setting [35].

The network consists of N instances of the algorithm communicating via randomly synchronous connections. Due to random synchrony, all users can eventually reach agreement on what is going on. There is no guarantee for nodes on the timing of when they should receive messages, and up to t Byzantine (arbitrary) failures are allowed, where 3t < N. The instances where no failures have occurred are deemed correct or honest, while the failed instances are termed faulty or malicious. Since a Byzantine failure model allows for malicious behavior, any set of instances containing more than 2N/3 of them is referred to as a supermajority.
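
The failure bound and the supermajority threshold reduce to simple arithmetic, sketched below (an illustration of the stated bounds, not code from the PARSEC implementation):

```python
# Byzantine failure bound used above: up to t failures with 3t < N,
# and a supermajority is any set of more than 2/3 of the N instances.

def max_faulty(n):
    """Largest t such that 3t < n."""
    return (n - 1) // 3

def is_supermajority(count, n):
    """True if `count` instances are strictly more than 2/3 of n."""
    return 3 * count > 2 * n

print(max_faulty(10))            # 3 faulty instances tolerated out of 10
print(is_supermajority(7, 10))   # True: 7 > 2/3 of 10
print(is_supermajority(6, 10))   # False: 6 is not strictly more than 2/3 of 10
```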

When a node receives a gossip request, it creates a new event and sends a response back (in Hashgraph, the response was optional). Each gossip event contains [24]:

  1. The data being transmitted.
  2. The self-parent (the hash of another gossip event created by the same node).
  3. The other-parent (a hash of another gossip event created by a different node).
  4. The Cause for creation, which can be a Request for information, a Response to another node’s request, or an Observation. An observation is when a node creates a gossip event to record an observation that the node made itself.
  5. The creator ID (public key).
  6. The signature – signing the preceding information.

The self-parent and other-parent prevent tampering because they are signed and related to other gossip events [24].
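
This hash-chaining can be illustrated with a simplified sketch (plain SHA-256 hashes only; the real protocol also signs each event with the creator's key):

```python
import hashlib
import json

# Simplified PARSEC-style gossip event: the event's identity is a hash over
# its payload and both parent hashes, so altering any ancestor changes every
# descendant's hash. (Illustrative sketch only; real events are also signed.)

def event_hash(data, self_parent, other_parent, creator_id):
    payload = json.dumps(
        [data, self_parent, other_parent, creator_id], sort_keys=True
    )
    return hashlib.sha256(payload.encode()).hexdigest()

genesis = event_hash("genesis", None, None, "node-A")
reply = event_hash("response", genesis, None, "node-A")

# Tampering with the ancestor yields a different hash, so existing events
# that reference `genesis` as a parent no longer link to the forged event.
forged = event_hash("forged genesis", None, None, "node-A")
print(forged != genesis)  # True: descendants expose the change
```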

As with Hashgraph, it is difficult for adversaries to interfere with the consensus algorithm, because all voting is virtual and done without sharing details of votes cast. Each node figures out what other nodes would have voted, based on their copy of the gossip graph.

PARSEC also uses the concept of a concrete coin, from Algorand. This is used to break ties, particularly in cases where an adversary is carefully managing communication between nodes in order to maintain a deadlock on votes.

In step 1, nodes try to converge on a "true" result for a set of results. If this is not achieved, they move on to step 2, which tries to converge on a "false" result. If consensus still cannot be reached, a coin flip is made and they go back to step 1 in another voting round.
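
The stepping structure can be sketched as a toy simulation (a simplified, single-view illustration using an ordinary pseudorandom coin, not PARSEC's actual binary agreement):

```python
import random

# Toy illustration of the two-step-then-coin-flip structure described above.
# A decision requires a supermajority (more than 2/3) of the votes; failing
# that, a coin is flipped and some nodes adopt the flipped value before the
# next round. (Not PARSEC's actual protocol; a single-process view of votes.)

def binary_agreement(votes, rng):
    """votes: list of booleans, one per node. Returns the decided value."""
    n = len(votes)
    while True:
        # Step 1: try to converge on "true".
        if 3 * sum(votes) > 2 * n:
            return True
        # Step 2: try to converge on "false".
        if 3 * (n - sum(votes)) > 2 * n:
            return False
        # No supermajority either way: flip a coin, revote, and retry.
        flip = rng.random() < 0.5
        votes = [flip if rng.random() < 0.5 else v for v in votes]

rng = random.Random(42)
print(binary_agreement([True] * 5 + [False] * 5, rng))  # terminates on True or False
```

Because the coin eventually pushes a supermajority onto one value, the loop terminates with probability one, mirroring the eventual-consensus claim above.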

Democratic BFT

This is a deterministic Byzantine consensus algorithm that relies on a new weak coordinator. The protocol is implemented in the Red Belly blockchain and is said to achieve 30,000 transactions per second in Amazon Cloud trials [25]. By coupling the weak coordinator with an optimized variant of the reduction of multivalued to binary consensus from [26], the Democratic BFT (DBFT) consensus algorithm was generated. It terminates in four message delays in the good case, when all non-faulty processes propose the same value [36].

The term weak coordinator describes the ability of the algorithm to terminate in the presence of a faulty or slow coordinator, unlike previous algorithms, which cannot terminate in such cases. The fundamental idea is to allow processes to complete asynchronous rounds as soon as they receive a threshold of messages, instead of waiting for a message from a coordinator that may be slow. The resulting algorithm assumes partial synchrony; is resilience-optimal and time-optimal; and does not require signatures.

Moving away from the impossibility of solving consensus in asynchronous message systems, where processes can be faulty or Byzantine, the technique of randomization or additional synchrony is adopted.

Randomized algorithms can use per-process "local" coins or a shared common coin to solve consensus probabilistically among n processes despite t < n/3 Byzantine processes. When based on local coins, the existing algorithms converge in O(n^2.5) expected time.

A recent randomized algorithm that does not use signatures solves binary consensus in O(1) expected time under a fair scheduler.

To solve the consensus problem deterministically and avoid the use of a common coin, researchers have assumed partial or eventual synchrony. These solutions require a unique coordinator process, referred to as the leader, that must remain non-faulty.

There are advantages and disadvantages to this technique:

  • The advantage is that if the coordinator is non-faulty, and if the messages are delivered in a timely manner in an asynchronous round, then the coordinator broadcasts its proposal to all processes and this value is decided after a constant number of message delays.

  • The disadvantage is that a faulty coordinator can dramatically impact the algorithm performance by leveraging the power it has in a round, and imposing its value on all. Non-faulty processes thus have no other choice but to decide nothing in this round.

This protocol sees the use of a weak coordinator, which allows for the introduction of a new deterministic Byzantine consensus algorithm that is time optimal, resilience optimal and does not require the use of signatures. Unlike the classic, strong coordinator, the weak coordinator does not impose its value. It allows non-faulty processes to decide a value quickly, without the need of the coordinator, while helping the algorithm to terminate if non-faulty processes know that they proposed distinct values that might all be decided. In addition, the presence of a weak coordinator allows rounds to be executed optimistically without waiting for a specific message. This is unlike classic BFT algorithms that have to wait for a particular message from their coordinator, and occasionally have to recover from a slow network or faulty coordinator.

With regard to the problem of a slow or Byzantine coordinator, the weak coordinator helps agreement by contributing a value while still allowing termination in a constant number of message delays. It is thus unlike the classic coordinator or the eventual leader, which cannot be implemented in the Binary Byzantine Consensus Algorithm, BAMP_{n,t}[t < n/3].

The validation of the protocol was conducted similarly to that of the HoneyBadger blockchain, using "Coin", the randomization algorithm from [27]. Using 100 Amazon Virtual Machines located in five data centers on different continents, it was shown that the DBFT algorithm outperforms "Coin", which is known to terminate in O(1) rounds in expectation. In addition, since Byzantine behaviors have been seen to severely affect the performance of strong coordinator-based consensus, four different Byzantine attacks were tested in the validation.

Summary of Findings

Table 1 highlights the characteristics of the above-mentioned BFT Protocols. Asymptotic Security, Permissionless Blockchain, Timing Assumptions, Decentralized Control, Low Latency and Flexible Trust form part of the value system.

  • Asymptotic Security - this depends only on digital signatures (and hash functions) for security.
  • Permissionless Protocol - this allows anybody to create an address and begin interacting with the protocol.
  • Timing Assumptions - refer to Forms of Timing Assumptions - Degrees of Synchrony.
  • Decentralized Control - consensus is achieved without a designated leader node, so that no single participant controls the process.
  • Low Latency - this describes a computer network that is optimized to process a very high volume of data messages with minimal delay.
  • Flexible Trust - where users have the freedom to trust any combinations of parties they see fit.

Table 1: Characteristics of BFT Protocols

| Protocol | Timing Assumptions |
| --- | --- |
| Hyperledger Fabric (HLF) | Partially synchronous |
| Tendermint | Partially synchronous |
| Paxos | Partially synchronous |
| Chandra-Toueg | Partially synchronous |
| Raft | Weakly synchronous |
| Stellar Consensus Protocol | Asynchronous |
| LinBFT | Partially synchronous |
| PARSEC | Weakly synchronous |
| Democratic BFT | Partially synchronous |

BFT consensus protocols have been considered as a means to disseminate and validate information. The question of whether Schnorr multisignatures can perform the same function in validating information through the action of signing will form part of the next review.


[1] B. Asolo, "Breaking down the Blockchain Scalability Trilemma" [online]. Available: Date accessed: 2018‑10‑01.

[2] Wikipedia: "Consensus Mechanisms" [online]. Available: Date accessed: 2018‑10‑01.

[3] S. Rusch, "High-performance Consensus Mechanisms for Blockchains" [online]. Available: Date accessed: 2018‑08‑30.

[4] C. Cachin "Architecture of the Hyperledger Blockchain Fabric" [online]. Available: Date accessed: 2018‑09‑16.

[5] L. S. Sankar, M. Sindhu and M. Sethumadhavan, "Survey of Consensus Protocols on Blockchain Applications" [online]. Available: Date accessed: 2018‑08‑30.

[6] "Tendermint Explained - Bringing BFT-based PoS to the Public Blockchain Domain" [online]. Available: Date accessed: 2018‑09‑30.

[7] "Tendermint Peer Discovery" [online]. Available: Date accessed: 2018‑10‑22.

[8] Wikipedia: "Paxos" [online]. Available: Date accessed: 2018‑10‑01.

[9] Wikipedia: "Chandra-Toueg Consensus Algorithm" [online]. Available: Date accessed: 2018‑09‑13.

[10] Wikipedia: "Raft" [online]. Available: Date accessed: 2018‑09‑13.

[11] A. Clement, E. Wong, L. Alvisi, M. Dahlin and M. Marchetti, "Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults" [online]. Available: Date accessed: 2018‑10‑22.

[12] L. Baird, "The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance" [online]. Available: Date accessed: 2018‑09‑30.

[13] "Just My Thoughts: Introduction to Gossip" [online]. Available: Date accessed: 2018‑10‑22.

[14] L. Baird, "Swirlds and Sybil Attacks" [online]. Available: Date accessed: 2018‑09‑30.

[15] Y. Jia, "Demystifying Hashgraph: Benefits and Challenges" [online]. Available: Date accessed: 2018‑09‑30.

[16] M. Graczyk, "Hashgraph: A Whitepaper Review" [online]. Available: Date accessed: 2018‑09‑30.

[17] C. Cachin and J. A. Poritz, "Secure Intrusion-tolerant Replication on the Internet" [online]. Available: Date accessed: 2018‑10‑22.

[18] C. Cachin, K. Kursawe, F. Petzold and V. Shoup, "Secure and Efficient Asynchronous Broadcast Protocols" [online]. Available: Date accessed: 2018‑10‑22.

[19] J. Chen and S. Micali, "Algorand", White Paper [online]. Available: Date accessed: 2018‑09‑13.

[20] R. Pass and E. Shi, "Thunderella: Blockchains with Optimistic Instant Confirmation" White Paper [online]. Available: Date accessed: 2018‑09‑13.

[21] "A Beginner's Guide to Big O Notation" [online]. Available: Date accessed: 2018‑10‑22.

[22] J. Aspnes and M. Herlihy, "Fast Randomized Consensus using Shared Memory" [online]. Available: Date accessed: 2018‑10‑22.

[23] "Protocol for Asynchronous, Reliable, Secure and Efficient Consensus" [online]. Available: Date accessed: 2018‑10‑22.

[24] "Project Spotlight: Maidsafe and PARSEC Part 2" [online]. Available: Date accessed: 2018‑09‑18.

[25] "Red Belly Blockchain" [online]. Available: Date accessed: 2018‑10‑10.

[26] M. Ben-Or, B. Kelmer and T. Rabin, "Asynchronous Secure Computations with Optimal Resilience" [online]. Available: Date accessed: 2018‑10‑22.

[27] A. Mostefaoui, H. Moumen and M. Raynal, "Signature-free Asynchronous Byzantine Consensus with t < n/3 and O(n²) Messages" [online]. Available: Date accessed: 2018‑10‑22.

[28] A. Miller, Y. Xia, K. Croman, E. Shi and D. Song, "The Honey Badger of BFT Protocols", White Paper [online]. Available: Date accessed: 2018‑08‑30.

[29] D. Schwartz, N. Youngs and A. Britto, "The Ripple Protocol Consensus Algorithm" [online]. Available: Date accessed: 2018‑09‑13.

[30] J. Kwon, "TenderMint: Consensus without Mining" [online]. Available: Date accessed: 2018‑09‑20.

[31] Y. Yang, "LinBFT: Linear-Communication Byzantine Fault Tolerance for Public Blockchains" [online]. Available: Date accessed: 2018‑09‑20.

[32] "Protocol Spotlight: Avalanche Part 1" [online]. Available: Date accessed: 2018‑09‑09.

[33] P. Chevalier, B. Kaminski, F. Hutchison, Q. Ma and S. Sharma, "Protocol for Asynchronous, Reliable, Secure and Efficient Consensus (PARSEC)". White Paper [online]. Available: Date accessed: 2018‑08‑30.

[34] "Project Spotlight: Maidsafe and PARSEC Part 1" [online]. Available: Date accessed: 2018‑08‑30.

[35] S. Micali, "Byzantine Agreement Made Trivial" [online]. Available: Date accessed: 2018‑08‑30.

[36] T. Crain, V. Gramoli, M. Larrea and M. Raynal, "DBFT: Efficient Byzantine Consensus with a Weak Coordinator and its Application to Consortium Blockchains" [online]. Available: Date accessed: 2018‑09‑30.

[37] Wikipedia: "Byzantine Fault Tolerance" [online]. Available: Date accessed: 2018‑09‑30.

[38] Wikipedia: "Liveness" [online]. Available: Date accessed: 2018‑09‑30.

[39] Team Rocket, "Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies" [online]. Available: Date accessed: 2018‑09‑30.


Appendix A: Terminology

In order to gain a full understanding of the field of consensus mechanisms, specifically BFT consensus mechanisms, certain terms and concepts need to be defined and fleshed out.


Consensus

Distributed agents (these could be computers, generals coordinating an attack, or sensors in a nuclear plant) that communicate via a network (be it digital, courier or mechanical) need to agree on facts in order to act as a coordinated whole.

When all non-faulty agents agree on a given fact, we say that the network is in consensus.

A consensus protocol may adhere to a host of formal requirements, including:

  • Agreement - where all correct processes agree on the same fact.
  • Weak Validity - where, for all correct processes, the output must have been the input of some correct process.
  • Strong Validity - where, if all correct processes receive the same input value, they must all output that value.
  • Termination - where all processes must eventually decide on an output value [2].

Binary Consensus

There is a special case of the consensus problem, referred to as binary consensus, which restricts the input, and hence the output domain, to a single binary digit {0,1}.

When the input domain is large, relative to the number of processes, e.g. an input set of all the natural numbers, it can be shown that consensus is impossible in a synchronous message passing model [2].

Byzantine Fault Tolerance

Byzantine failures are considered the most general and most difficult class of failures among the failure modes. The so-called fail-stop failure mode occupies the simplest end of the spectrum. Whereas fail-stop failure mode simply means that the only way to fail is a node crash, detected by other nodes, Byzantine failures imply no restrictions, which means that the failed node can generate arbitrary data, pretending to be a correct one. Thus, Byzantine failures can confuse failure detection systems, which makes fault tolerance difficult [37].

Several papers in the literature contextualize the problem using generals at different camps, situated outside the enemy castle, who need to decide whether or not to attack. A failing consensus algorithm might see one general attack while all the others stay back, leaving the first general vulnerable.

One key property of a blockchain system is that the nodes do not trust each other, meaning that some may behave in a Byzantine manner. The consensus protocol must therefore tolerate Byzantine failures.

A network is Byzantine Fault Tolerant (BFT) when it can provide service and reach consensus despite faults or failures of the system. The processes use a protocol for consensus or atomic broadcast (a broadcast where all correct processes in a system of multiple processes receive the same set of messages in the same order, i.e. the same sequence of messages) to agree on a common sequence of operations to execute [20].

The literature on distributed consensus is vast, and there are many variants of previously proposed protocols being developed for blockchains. They can be largely classified along a spectrum:

  • One extreme consists of purely computation-based protocols, which use proof of computation to randomly select a node that single-handedly decides the next operation.

  • The other extreme is purely communication-based protocols, in which nodes have equal votes and go through multiple rounds of communication to reach consensus. Practical Byzantine Fault Tolerance (PBFT), a replication algorithm designed to be BFT, is the prime example [10].

For systems with n nodes, of which f are Byzantine, it has been shown that no algorithm exists that solves the consensus problem for f ≥ n/3 [21].
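
In practice this bound fixes the minimum group size: tolerating f Byzantine nodes requires n ≥ 3f + 1. A quick sketch of the arithmetic:

```python
# Classic BFT bound: tolerating f Byzantine nodes requires n >= 3f + 1.

def min_nodes(f):
    """Smallest group size that tolerates f Byzantine nodes."""
    return 3 * f + 1

def tolerable_faults(n):
    """Largest f with n >= 3f + 1."""
    return (n - 1) // 3

print(min_nodes(1))          # 4 nodes are needed to survive a single traitor
print(tolerable_faults(4))   # 1
print(tolerable_faults(20))  # 6: a typical small BFT group
```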

So how then does the Bitcoin protocol get away with only needing 51% honest nodes to reach consensus? Strictly speaking, Bitcoin is not a BFT consensus mechanism, because there is never absolute finality in the Bitcoin ledger; there is always a chance (however small) that someone could 51%-attack the network and rewrite the entire history. Bitcoin offers probabilistic, rather than deterministic, consensus.

Practical Byzantine Fault-tolerant Variants

PoW suffers from non-finality, i.e. a block appended to a blockchain is not confirmed until it is extended by many other blocks. Even then, its existence in the blockchain is only probabilistic. For example, eclipse attacks on Bitcoin exploit this probabilistic guarantee to allow double spending. In contrast, the original PBFT protocol is deterministic [10].

Deterministic and Non-deterministic Protocols

Deterministic, bounded Byzantine agreement relies on consensus being finalized for each epoch before moving to the next one, ensuring that there is some safety about a consensus reference point prior to continuing. If, instead, you allow an unbounded number of consensus agreements within the same epoch, then there is no overall consensus reference point with which to declare finality, and thus safety is compromised [8].

For non-deterministic or probabilistic protocols, the probability that an honest node is undecided after r rounds approaches zero as r approaches infinity.
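
For intuition, if each round independently reaches a decision with probability p, the chance of remaining undecided after r rounds is (1 - p)^r (an illustrative model, not tied to any particular protocol):

```python
# Probability an honest node remains undecided after r rounds, assuming an
# independent per-round decision probability p (illustrative model only).

def undecided_after(p, r):
    return (1 - p) ** r

for r in (1, 10, 100):
    print(r, undecided_after(0.5, r))
# The probability shrinks geometrically, approaching zero as r grows.
```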

Non-deterministic protocols that solve consensus under the purely asynchronous case potentially rely on random oracles and generally incur high message complexity overhead, as they depend on reliable broadcasting for all communication.

Protocols such as HoneyBadgerBFT fall into this class of nondeterministic protocols under asynchrony. Normally, they require three instances of reliable broadcast for a single round of communication [6].

Scalability-Performance Trade-off

As briefly mentioned in the Introduction, the scalability of BFT protocols considering the number of participants is highly limited and the performance of most protocols deteriorates as the number of involved replicas increases. This effect is especially problematic for BFT deployment in permissionless blockchains [7].

The problem of BFT scalability is twofold: a high throughput, as well as a large consensus group with good reconfigurability that can tolerate a high number of failures are both desirable properties in BFT protocols. However, they are often in direct conflict.

Bitcoin mining, for example, supports thousands of participants, offers good reconfigurability, i.e. nodes can join or leave the network at any time, and can tolerate a high number of failures. However, it can only process a severely limited number of transactions per second. Most BFT protocols achieve a significantly higher throughput, but are limited to small groups of fewer than 20 participants, and group reconfiguration is not easily achievable.

Several approaches have been employed to remedy these problems, e.g. threshold cryptography, creating new consensus groups for every round and limiting the number of necessary messages to reach consensus [3].

Appendix B: Timing Assumptions

Forms of Timing Assumptions - Degrees of Synchrony


Synchrony

Here, the time for nodes to wait and receive information is predefined. If a node has not received an input within the predefined time structure, there is a problem [5].

In synchronous systems, it is assumed that all communications proceed in rounds. In one round, a process may send all the messages it requires, while receiving all messages from other processes. In this manner, no message from one round may influence any messages sent within the same round [21].

A ΔT-synchronous network guarantees that every message sent is delivered after, at most, a delay of ΔT (where ΔT is a measure of real time) [6]. Synchronous protocols come to a consensus after ΔT [5].

Partial Synchrony

Here, the network retains some form of a predefined timing structure. However, the protocol can operate without knowing how fast nodes can exchange messages over the network. Instead of pushing out a block every x seconds, a partially synchronous blockchain would gauge the limit, with messages always being sent and received within an unknown but bounded deadline.

Partially synchronous protocols come to a consensus in an unknown, but finite period [5].

Unknown-ΔT Model

The protocol is unable to use the delay bound as a parameter [6].

Eventually Synchronous

The message delay bound ΔT is only guaranteed to hold after some unknown instant, called the "Global Stabilization Time" [6].

Weak Synchrony

Most existing BFT systems, even those called "robust", assume some variation of weak synchrony, where messages are guaranteed to be delivered after a certain bound ΔT, but ΔT may be time-varying or unknown to the protocol designer.

However, the liveness properties of weakly synchronous protocols can fail completely when the expected timing assumptions are violated, e.g. due to a malicious network adversary. In general, liveness refers to a set of properties of concurrent systems, that require a system to make progress despite the fact that its concurrently executing components may have to "take turns" in critical sections. These are parts of the program that cannot be simultaneously run by multiple components [38].

Even when the weak synchrony assumptions are satisfied in practice, weakly synchronous protocols degrade significantly in throughput when the underlying network is unpredictable. Unfortunately, weakly synchronous protocols require timeout parameters that are difficult to tune, especially in cryptocurrency application settings; and when the chosen timeout values are either too long or too short, throughput can be hampered.

In terms of feasibility, both weak and partially synchronous protocols are equivalent. A protocol that succeeds in one setting can be systematically adapted for another. In terms of concrete performance, however, adjusting for weak synchrony means gradually increasing the timeout parameter over time, e.g. by an exponential back-off policy. This results in delays when recovering from transient network partition. Protocols typically manifest these assumptions in the form of a timeout event. For example, if parties detect that no progress has been made within a certain interval, then they take a corrective action such as electing a new leader. Asynchronous protocols do not rely on timers, and make progress whenever messages are delivered, regardless of actual clock time [6].

Random Synchrony

Messages are delivered with random delays, such that the average delay is finite. There may be periods of arbitrarily long delays (this is a weaker assumption than weak synchrony, and only a bit stronger than full asynchrony, where the only guarantee is that messages are eventually delivered). It is impossible to tell whether an instance has failed by completely stopping, or whether there is just a delay in message delivery [1].

Asynchronous Networks and Protocols

In an asynchronous network, the adversary can deliver messages in any order and at any time. However, the message must eventually be delivered between correct nodes. Nodes in an asynchronous network effectively have no use for real-time clocks, and can only take actions based on the ordering of messages they receive [6]. The speed is determined by the speed at which the network communicates, instead of a fixed limit of x seconds.

An asynchronous protocol requires a different means to decide when all nodes are able to come to a consensus.

As will be discussed in FLP Impossibility, the FLP result rules out the possibility of deterministic asynchronous protocols for atomic broadcast and many other tasks. A deterministic protocol must therefore make some stronger timing assumptions [6].

Counting Rounds in Asynchronous Networks

Although the guarantee of eventual delivery is decoupled from notions of "real time", it is nonetheless desirable to characterize the running time of asynchronous protocols. The standard approach is for the adversary to assign each message a virtual round number, subject to the condition that every (r-1) message between correct nodes must be delivered before any (r+1) message is sent.
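
As a hedged illustration, the round-numbering condition can be expressed as a small check over a message schedule. The event format and function name below are hypothetical; the check implements the stated rule that every round (r-1) message between correct nodes is delivered before any round (r+1) message is sent.

```python
# Hypothetical sketch: verify that a message schedule respects virtual
# round numbers, i.e. no round-R message is sent while a message of
# round R-2 or earlier (between correct nodes) is still undelivered.

def respects_rounds(events):
    """events: (kind, round) tuples in global time order;
    kind is 'send' or 'deliver'."""
    for i, (kind, r) in enumerate(events):
        if kind == "send":
            # a deliver event at or after this send means the message
            # was still in flight when this send happened
            if any(k == "deliver" and r2 <= r - 2 for k, r2 in events[i:]):
                return False
    return True
```

For example, a schedule that sends a round-3 message while a round-1 message is still undelivered fails the check, while one that delivers all round-1 and round-2 messages first passes.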

Problem with Timing Assumptions

The problem with both synchronous and partially synchronous assumptions is that "the protocols based on timing assumptions are unsuitable for decentralized, cryptocurrency settings, where network links can be unreliable, network speeds change rapidly, and network delays may even be adversarially induced" [6].

Denial of Service Attack

Basing a protocol on timing exposes the network to Denial of Service (DoS) attacks. A synchronous protocol becomes unsafe if a DoS attack slows down the network sufficiently. A partially synchronous protocol would remain safe, but would be unable to make progress while its messages are subject to interference.

An asynchronous protocol would be able to function under a DoS attack. However, reaching consensus is difficult, as it is impossible to know whether the network is under attack or whether a particular message is merely delayed.

FLP Impossibility

Reference [22] maps out what it is possible to achieve with distributed processes in an asynchronous environment.

The result, referred to as the FLP result, concerns the problem of consensus, i.e. getting a distributed network of processors to agree on a common value. This problem was known to be solvable in a synchronous setting, where processes proceed in simultaneous steps. The synchronous solution is resilient to faults, where processors crash and take no further part in the computation. Synchronous models allow failures to be detected by waiting one entire step length for a reply from a processor, and presuming that it has crashed if no reply is received.

This kind of failure detection is not possible in an asynchronous setting, as there are no bounds on the amount of time a processor might take to complete its work and then respond. The FLP result shows that in an asynchronous setting, where only one processor might crash, there is no distributed algorithm that solves the consensus problem [23].

Randomized Agreement

The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. Every protocol for this problem has the possibility of nontermination [22]. While the vast majority of practical BFT protocols steer clear of this impossibility result by making timing assumptions, randomness (and, in particular, cryptography) provides an alternative route. Asynchronous BFT protocols have been used for a variety of tasks, such as asynchronous binary agreement (ABA), reliable broadcast (RBC) and more [6].



What is the Blockchain Scalability Problem?

The blockchain scalability problem refers to the discussion concerning the limits on the transaction throughput a blockchain network can process. It is related to the fact that records (known as blocks) in the bitcoin blockchain are limited in size and frequency [1].


  • From Bitcoin StackExchange: The term "on-chain scaling" is frequently used to exclusively refer to increasing the blockchain capacity by means of bigger blocks. However, in the literal sense of the term, it should refer to any sort of protocol change that improves the network's capacity at the blockchain layer. These approaches tend to provide at most a linear capacity increase, although some are also scalability improvements [2].


    • blocksize/blockweight increase;
    • witness discount of segregated witness;
    • smaller size of Schnorr signatures;
    • Bellare-Neven signature aggregation;
    • key aggregation.
  • From Tari Labs: Analogous to the OSI layers for communication, in blockchain technology, decentralized Layer 2 protocols, also commonly referred to as Layer 2 scaling, refer to transaction throughput scaling solutions. Decentralized Layer 2 protocols run on top of the main blockchain (off-chain), while preserving the attributes of the main blockchain (e.g. crypto-economic consensus). Instead of each transaction, only the result of a number of transactions is embedded on-chain [3].

References

[1] Wikipedia: "Bitcoin Scalability Problem" [online]. Available: Date accessed: 2019‑06‑11.

[2] Bitcoin StackExchange: "What is the Difference between On-chain Scaling and Off-chain Scaling?" [online]. Available: Date accessed: 2019‑06‑10.

[3] Tari Labs: "Layer 2 Scaling Survey - What is Layer 2 Scaling?" [online]. Available: layer2scaling-landscape/layer2scaling-survey.html#what-is-layer-2-scaling. Date accessed: 2019‑06‑10.

Layer 2 Scaling Survey

What is Layer 2 Scaling?

In the blockchain and cryptocurrency world, scaling transaction processing is a tough problem to solve. Throughput is limited by the average block creation time, the block size limit, and the number of newer blocks needed to confirm a transaction (the confirmation time). These factors make "over the counter"-type transactions, similar to those of Mastercard or Visa, nearly impossible if done on the main blockchain (on-chain).

Let's postulate that blockchain and cryptocurrency "take over the world" and are responsible for all global non-cash transactions performed, i.e. 433.1 billion in 2014 to 2015 [24]. This means 13,734 transactions per second (tx/s) on average! (To put this into perspective, VisaNet currently processes 160 billion transactions per year [25] and is capable of handling more than 65,000 transaction messages per second [26].) This means that if all of those were simple single-input-single-output non-cash transactions and performed on:

  • SegWit-enabled Bitcoin "like" blockchains that can theoretically handle ~21.31tx/s, we would need ~644 parallel versions, and with a SegWit transaction size of 190 bytes [27], the combined blockchain growth would be ~210GB per day!

  • Ethereum "like" blockchains, and taking current gas prices into account, Ethereum can theoretically process ~25.4tx/s, then ~541 parallel versions would be needed and, with a transaction size of 109 bytes ([28], [29]), the combined blockchain growth would be ~120GB per day!

This is why we need a proper scaling solution that would not bloat the blockchain.
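
The arithmetic behind these estimates can be reproduced in a few lines (figures taken from the text above; daily growth expressed in GiB):

```python
# Back-of-envelope reproduction of the figures above.
TX_PER_YEAR = 433.1e9                                  # global non-cash transactions
tx_per_sec = TX_PER_YEAR / (365 * 24 * 3600)           # ~13,734 tx/s on average

def parallel_chains(chain_tps):
    """How many chains of the given throughput would be needed."""
    return tx_per_sec / chain_tps

def growth_gib_per_day(tx_size_bytes):
    """Combined blockchain growth if every transaction is this size."""
    return tx_per_sec * tx_size_bytes * 86_400 / 2**30

print(round(parallel_chains(21.31)))     # SegWit-Bitcoin-like: ~644 chains
print(round(growth_gib_per_day(190)))    # ~210 GiB/day at 190 B/tx
print(round(parallel_chains(25.4)))      # Ethereum-like: ~541 chains
print(round(growth_gib_per_day(109)))    # ~120 GiB/day at 109 B/tx
```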

The Open Systems Interconnection (OSI) model defines seven layers for communication functions of a computing system. Layer 1 refers to the physical layer and Layer 2 to the data link layer. Layer 1 is never concerned with functions of Layer 2 and up; it just delivers transmission and reception of raw data. In turn, Layer 2 only knows about Layer 1 and defines the protocols that deliver node-to-node data transfer [1].

Analogous to the OSI layers for communication, in blockchain technology, decentralized Layer 2 protocols, also commonly referred to as Layer 2 scaling, refer to transaction throughput scaling solutions. Decentralized Layer 2 protocols run on top of the main blockchain (off-chain), while preserving the attributes of the main blockchain (e.g. crypto-economic consensus). Instead of each transaction, only the result of a number of transactions is embedded on-chain [2].

When considering Layer 2 scaling solutions, two important questions arise:

  • Does every transaction need every parent blockchain node in the world to verify it?

  • Would I be willing to have (temporary) lower security guarantees for most of my day-to-day transactions if I could get them validated (whatever we take that to mean) near-instantly?

If you can answer "no" and "yes", then you're looking for a Layer 2 scaling solution.

How will this be Applicable to Tari?

Tari is a high-throughput protocol that will need to handle real-world transaction volumes. For example, Big Neon, the initial business application to be built on top of the Tari blockchain, requires high-volume transactions in a short space of time, especially when ticket sales open and when tickets are redeemed at an event. Imagine filling an 85,000-seat stadium with 72 entrance queues on match days. Serialized real-world scanning boils down to approximately 500 tickets in four minutes per queue, or approximately two spectators allowed access per second per queue.
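
The queue arithmetic above can be checked quickly (figures from the text):

```python
# Quick check of the entrance-queue arithmetic above.
seats, queues = 85_000, 72
rate_per_queue = 500 / (4 * 60)            # 500 tickets in 4 minutes per queue
total_rate = rate_per_queue * queues       # stadium-wide redemptions per second
minutes_to_fill = seats / total_rate / 60

print(round(rate_per_queue, 2))            # ~2.08 spectators/s per queue
print(round(total_rate))                   # 150 redemptions/s across all queues
print(round(minutes_to_fill))              # stadium fills in ~9 minutes
```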

This would be impossible to do with parent blockchain scaling solutions.

Layer 2 Scaling Current Initiatives

Micropayment Channels

What are they?

A micropayment channel is a class of techniques designed to allow users to make multiple Bitcoin transactions without committing all of the transactions to the Bitcoin blockchain. In a typical payment channel, only two transactions are added to the blockchain, but an unlimited or nearly unlimited number of payments can be made between the participants [10].

Several channel designs have been proposed or implemented over the years, including:

  • Nakamoto high-frequency transactions;
  • Spillman-style payment channels;
  • CLTV-style payment channels;
  • Poon-Dryja payment channels;
  • Decker-Wattenhofer duplex payment channels;
  • Decker-Russell-Osuntokun eltoo channels;
  • Hashed Time-Locked Contracts (HTLCs).

With specific focus on Hashed Time-Locked Contracts: This technique can allow payments to be securely routed across multiple payment channels. HTLCs are integral to the design of more advanced payment channels such as those used by the Lightning Network.
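
To make the hashlock/timelock idea behind HTLCs concrete, here is a minimal, illustrative sketch (this is not actual Bitcoin script, and the function names are hypothetical): funds are claimable with the hash preimage before a deadline, and refundable to the payer afterwards.

```python
import hashlib
import time

def make_htlc(preimage: bytes, timeout_s: float) -> dict:
    """Lock funds behind the SHA-256 hash of a secret, with a refund deadline."""
    return {"hashlock": hashlib.sha256(preimage).hexdigest(),
            "expiry": time.time() + timeout_s}

def claim(htlc: dict, candidate: bytes, now: float = None) -> str:
    now = time.time() if now is None else now
    if now >= htlc["expiry"]:
        return "refund-to-payer"                       # timelock branch
    if hashlib.sha256(candidate).hexdigest() == htlc["hashlock"]:
        return "pay-to-payee"                          # hashlock branch
    return "invalid"
```

Because only knowledge of the preimage releases the payment, the same secret can safely unlock a chain of such contracts across several channels, which is what makes multi-hop routing possible.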

The Lightning Network is a second-layer payment protocol that operates on top of a blockchain. It enables instant transactions between participating nodes. The Lightning Network features a peer-to-peer system for making micropayments of digital cryptocurrency through a network of bidirectional payment channels, without delegating custody of funds and with minimal trust in third parties [11].

Normal use of the Lightning Network consists of opening a payment channel by committing a funding transaction to the relevant blockchain; making any number of Lightning transactions that update the tentative distribution of the channel's funds without broadcasting them to the blockchain; and optionally closing the payment channel by broadcasting the final version of the settlement transaction to distribute the channel's funds.

Who uses them?

The Lightning Network is spreading across the cryptocurrency landscape. It was originally designed for Bitcoin. However, Litecoin, Zcash, Ethereum and Ripple are just a few of the many cryptocurrencies planning to implement or test some form of the network [12].

Advantages

  • Micropayment channels are one of the leading solutions that have been presented to scale Bitcoin, which do not require a change to the underlying protocol.
  • Transactions are processed instantly, the account balances of the nodes are updated, and the money is immediately accessible to the new owner.
  • Transaction fees are a fraction of the transaction cost [13].

Disadvantages

  • Micropayment channels are not suitable for making bulk payments, as the intermediate nodes in the multichannel payment network may not be loaded with enough money to move the funds along.
  • Recipients cannot receive money unless their node is connected and online at the time of the transaction. At the time of writing (July 2018), channels were only bilateral.

Opportunities for Tari

Opportunities are fewer than expected, as Tari's ticketing use case requires many fast transactions with many parties, and not many fast transactions with a single party. Non-fungible assets must be "broadcast", but payment channels are private between two parties.

State Channels

What are they?

State channels are the more general form of micropayment channels. They can be used not only for payments, but for any arbitrary "state update" on a blockchain, such as changes inside a smart contract [16].

State channels allow multiple transactions to be made within off-chain agreements with very fast processing, and final settlement on-chain. They keep the operation mode of the blockchain protocol, but change the way it is used in order to deal with the challenge of scalability.

Any change of state within a state channel requires explicit cryptographic consent from all parties designated as "interested" in that part of the state [19].
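
A hedged sketch of that consent rule follows: a new channel state is accepted only if it carries a fresh version number and a valid signature from every interested party. HMAC stands in for real digital signatures here, and all names are hypothetical.

```python
import hashlib
import hmac

# Illustrative sketch: a state update counts only if every interested
# party has "signed" it (HMAC stands in for real signatures) and its
# nonce is strictly newer than the last accepted state.

def sign(key: bytes, state: bytes) -> str:
    return hmac.new(key, state, hashlib.sha256).hexdigest()

def accept_update(channel: dict, state: bytes, nonce: int,
                  sigs: dict, keys: dict) -> bool:
    if nonce <= channel["nonce"]:
        return False                                   # stale update
    for party, key in keys.items():                    # all parties must consent
        if not hmac.compare_digest(sigs.get(party, ""), sign(key, state)):
            return False
    channel.update(state=state, nonce=nonce)
    return True
```

The strictly increasing nonce is what lets either party settle on-chain with the latest co-signed state and prevents replay of an older, more favourable balance.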

Who uses them?

On Ethereum:

  • Raiden Network ([16], [21])

    • Researches state channel technology, defines protocols and develops reference implementations.
    • State channels work with any ERC20-compatible token.
    • State updates between two parties are done via digitally signed and hash-locked transfers as the consensus mechanism, called balance proofs, which are also secured by a time-out. These can be settled on the Ethereum blockchain at any time. Raiden Network uses HTLCs in exactly the same manner as the Lightning Network.

  • Counterfactual ([16], [19], [31])

    • Uses state channels as a generalized framework for the integration of native state channels into Ethereum-based decentralized applications.
    • A generalized state channel framework is one where state is deposited once, and is then used afterwards by any application or set of applications.
    • Counterfactual instantiation means to instantiate a contract without actually deploying it on-chain. It is achieved by making users sign and share commitments to the multisig wallet.
    • When a contract is counterfactually instantiated, all parties in the channel act as though it has been deployed, even though it has not.
    • A global registry is introduced. This is an on-chain contract that maps unique deterministic addresses for any Counterfactual contract to actual on-chain deployed addresses. The hashing function used to produce the deterministic address can be any function that takes into account the bytecode, its owner (i.e. the multisig wallet address), and a unique identifier.
    • A typical Counterfactual state channel is composed of counterfactually instantiated objects.

  • Funfair ([16], [23], [32])

    • Uses state channels in a decentralized slot machine gambling platform, although random number generation is still centralized and server-based.
    • Instantiates a normal "Raiden-like" state channel (called fate channel) between the player and the casino. Final states are submitted to blockchain after the betting game is concluded.
    • Investigating the use of threshold cryptography such as Boneh-Lynn-Shacham (BLS) signature schemes to enable truly secure random number generation by a group of participants.
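
The deterministic-address idea described under Counterfactual above can be sketched as follows. SHA-256 and the 20-byte truncation are illustrative choices; the actual hashing function may differ, as the text notes that any function over the bytecode, owner and unique identifier will do.

```python
import hashlib

# Sketch of a counterfactual address (illustrative): a hash over the
# contract bytecode, its owner (the multisig wallet) and a unique salt
# yields the same deterministic address for every channel participant,
# without deploying anything on-chain.

def counterfactual_address(bytecode: bytes, owner: bytes, salt: bytes) -> str:
    digest = hashlib.sha256(bytecode + owner + salt).hexdigest()
    return "0x" + digest[:40]          # 20-byte, Ethereum-style address

# On-chain registry: deterministic address -> actually deployed address.
registry = {}

def register(bytecode: bytes, owner: bytes, salt: bytes, deployed_addr: str):
    registry[counterfactual_address(bytecode, owner, salt)] = deployed_addr
```

Parties can reference the deterministic address in signed commitments long before (or instead of) deployment; the registry only matters if a dispute forces the contract on-chain.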

On NEO:

Trinity ([3], [17], [18])

  • Trinity is an open-source network protocol based on NEP-5 smart contracts.
  • Trinity is to NEO what the Raiden Network is to Ethereum.
  • Trinity uses the same consensus mechanism as the Raiden network.
  • A new token, TNC, has been introduced to fund the Trinity network, but NEO, NEP-5 and TNC tokens are supported.

Advantages

  • Allows payments and changes to smart contracts.
  • State channels have strong privacy properties. Everything is happening "inside" a channel between participants.
  • State channels have instant finality. As soon as both parties sign a state update, it can be considered final.

Disadvantages

State channels rely on availability; both parties must be online.

Opportunities for Tari

Opportunities are fewer than expected, as Tari's ticketing use case requires many fast transactions with many parties, and not many fast transactions with a single party. Non-fungible assets must be "broadcasted", but state channels are private between two parties.

Off-chain Matching Engines

What are they?

Orders are matched off-chain in a matching engine and fulfilled on-chain. This allows complex orders, supports cross-chain transfers, and maintains a public record of orders as well as a deterministic specification of behavior. Off-chain matching engines make use of a token representation smart contract that converts global assets into smart contract tokens and vice versa [5].

Who uses them?

  • Neon Exchange (NEX) ( [5], [35])

    • NEX uses a NEO decentralized application (dApp) with tokens.
    • Initial support is planned for NEO, ETH, NEP5 and ERC20 tokens.
    • Cross-chain support is planned for trading BTC, LTC and RPX on NEX.
    • The NEX off-chain matching engine will be scalable, distributed, fault-tolerant, and function continuously without downtime.
    • Consensus is achieved using cryptographically signed requests; publicly specified deterministic off-chain matching engine algorithms; and public ledgers of transactions and reward for foul play. The trade method of the exchange smart contract will only accept orders signed by a private key held by the matching engine.
    • The matching engine matches the orders and submits them to the respective blockchain smart contract for execution.
    • A single invocation transaction on NEO can contain many smart contract calls. Batch commit of matched orders in one on-chain transaction is possible.

  • 0x ([33], [34])

    • An Ethereum ERC20-based smart contract token (ZRX).
    • Provides an open-source protocol to exchange ERC20-compliant tokens on the Ethereum blockchain using off-chain matching engines in the form of dApps (Relayers) that facilitate transactions between Makers and Takers.
    • Off-chain order relay + on-chain settlement.
    • Maker chooses Relayer, specifies token exchange rate, expiration time, fees to satisfy Relayer's fee schedule, and signs order with a private key.
    • Consensus is governed with the publicly available DEX smart contract: addresses, token balances, token exchange, fees, signatures, order status and final transfer.

Advantages

  • Performance {NEX, 0x}:

    • off-chain request/order;
    • off-chain matching.
  • NEX-specific:

    • batched on-chain commits;
    • cross-chain transfers;
    • support of national currencies;
    • public JavaScript Object Notation (JSON) Application Programming Interface (API) and web extension API for third-party applications to trade tokens;
    • development environment - Elixir on top of Erlang to enable scalable, distributed, and fault-tolerant matching engine;
    • Cure53 full security audit of the web extension; NEX tokens will be regulated as registered European securities.
  • 0x-specific:

    • open-source protocol to enable creation of independent off-chain dApp matching engines (Relayers);
    • totally transparent matching of orders with no single point of control
      • maker's order only enters a Relayer's order book if fee schedule is adhered to,
      • exchange can only happen if a Taker is willing to accept;
    • consensus and settlement governed by the publicly available DEX smart contract.

Disadvantages

  • At the time of writing (July 2018), both NEX and 0x were still in development.

  • NEX-specific:

    • a certain level of trust is required, similar to a traditional exchange;
    • closed liquidity pool.
  • 0x-specific:

    • a trusted Token Registry will be required to verify ERC20 token addresses and exchange rates;
    • front-running transactions and transaction collisions possible, more development needed ([36], [37]);
    • batch processing ability unknown.

Opportunities for Tari

Matching engines in general have opportunities for Tari; the specific scheme is to be investigated further.

Masternodes

What are they?

A masternode is a server on a decentralized network. It is utilized to complete unique functions in ways ordinary mining nodes cannot, e.g. features such as direct send, instant transactions and private transactions. Because of their increased capabilities, masternodes typically require an investment in order to run. Masternode operators are incentivized and are rewarded by earning portions of block rewards in the cryptocurrency they are facilitating. Masternodes will get the standard return on their stakes, but will also be entitled to a portion of the transaction fees, allowing for a greater return on investment (ROI) ([7], [9]).

  • Dash Example [30]. Dash was the first cryptocurrency to implement the masternode model in its protocol. Under what Dash calls its proof of service algorithm, a second-tier network of masternodes exists alongside a first-tier network of miners to achieve distributed consensus on the blockchain. This two-tiered system ensures that proof of service and proof of work perform symbiotic maintenance of Dash's network. Dash masternodes also enable a decentralized governance system that allows node operators to vote on important developments within the blockchain. A masternode for Dash requires a stake of 1,000 DASH. Masternode operators and miners each receive 45% of the block reward; the other 10% goes to the blockchain's treasury fund. Operators are in charge of voting on proposals for how these funds will be allocated to improve the network.
  • Dash Deterministic Ordering. A special deterministic algorithm is used to create a pseudorandom ordering of the masternodes. By using the hash from the proof-of-work for each block, security of this functionality is provided by the mining network.
  • Dash Trustless Quorums. The Dash masternode network is trustless, where no single entity can control the outcome. N pseudorandom masternodes (Quorum A) are selected from the total pool to act as an oracle for N pseudorandom masternodes (Quorum B) that are selected to perform the actual task. Quorum A nodes are the nodes that are the closest, mathematically, to the current block hash, while Quorum B nodes are the furthest. This process is repeated for each new block in the blockchain.
  • Dash Proof of Service. Bad actors could also run masternodes. To reduce the possibility of bad acting, nodes must ping the rest of the network to ensure they remain active. All masternode verification is done randomly via the Quorum system by the masternode network itself. Approximately 1% of the network is verified each block. This results in the entire masternode network being verified approximately six times per day. Six consecutive violations result in the deactivation of a masternode.
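
The quorum selection described above can be sketched as ranking masternodes by a hash distance to the current block hash. XOR distance and SHA-256 are illustrative stand-ins here for Dash's actual metric; the point is that the ranking is deterministic and pseudorandom per block.

```python
import hashlib

# Illustrative sketch: masternodes are ranked by the distance between a
# hash of their ID and the current block hash; the N closest form
# Quorum A (the oracle), the N furthest form Quorum B (the workers).

def select_quorums(masternodes, block_hash: bytes, n: int):
    def distance(node_id: str) -> int:
        h = int.from_bytes(hashlib.sha256(node_id.encode()).digest(), "big")
        b = int.from_bytes(hashlib.sha256(block_hash).digest(), "big")
        return h ^ b                   # XOR distance, Kademlia-style

    ranked = sorted(masternodes, key=distance)
    return ranked[:n], ranked[-n:]     # (Quorum A: closest, Quorum B: furthest)
```

Because every node can recompute the same ranking from the proof-of-work block hash, no coordination is needed to agree on who serves in each quorum for a given block.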

Who uses them?

Block, Bata, Crown, Chaincoin, Dash, Diamond, ION, Monetary Unit, Neutron, PIVX, Vcash and XtraBytes [8].

Advantages

Masternodes:

  • help to sustain and take care of the ecosystem and can protect blockchains from network attacks;
  • can perform decentralized governance of miners by having the power to reject or orphan blocks if required ([22], [30]);
  • can support decentralized exchanges by overseeing transactions and offering fiat currency gateways;
  • can be used to facilitate smart contracts such as instant transactions, anonymous transactions and decentralized payment processors;
  • can facilitate a decentralized marketplace such as the blockchain equivalent of peer-run commerce sites such as eBay [22];
  • compensate for Proof of Work's limitations - they avoid mining centralization and consume less energy [22];
  • promise enhanced stability and network loyalty, as larger dividends and high initial investment costs make it less likely that operators will abandon their position in the network [22].

Disadvantages

  • Maintaining masternodes can be a long and arduous process.
  • ROI is not guaranteed and is inconsistent. In some applications, masternodes are only rewarded if they mine a block and are randomly chosen to be paid.
  • In general, a masternode's IP address is publicized and thus open to attacks.

Opportunities for Tari

  • Masternodes do not have a specific standard or protocol; many different implementations exist. If the Tari protocol employs masternodes, they can be used to facilitate smart contracts off-chain and to enhance the security of the primary blockchain.
  • Masternodes increase the incentives for people to be involved with Tari.

Plasma

What is it?

Plasma blockchains are a chain within a blockchain, with state transitions enforced by bonded (time to exit) fraud proofs (block header hashes) submitted on the root chain. Plasma enables management of a tiered blockchain without a full persistent record of the ledger on the root blockchain, and without giving custodial trust to any third party. The fraud proofs enforce an interactive protocol of rapid fund withdrawals in case of foul play such as block withholding, and in cases where bad actors in a lower-level tier want to commit blocks to the root chain without broadcasting this to the higher-level tiers [4].

Plasma is a framework for incentivized and enforced execution of smart contracts, scalable to a significant amount of state updates per second, enabling the root blockchain to be able to represent a significant amount of dApps, each employing its own blockchain in a tree format [4].

Plasma relies on two key parts, namely reframing all blockchain computations into a set of MapReduce functions, and an optional method to do Proof of Stake (PoS) token bonding on top of existing blockchains (enforced in an on-chain smart contract). Nakamoto Consensus incentives discourage block withholding or other Byzantine behavior. If a chain is Byzantine, it has the option of going to any of its parents (including the root blockchain) to continue operation or exit with the current committed state [4].

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key [38]. The Plasma MapReduce includes commitments on data to computation as input in the map phase, and includes a merkleized proof-of-state transition in the reduce step when returning the result [4].
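
The MapReduce model just described can be illustrated with the canonical word-count example (a generic sketch, not Plasma's merkleized variant):

```python
from collections import defaultdict

# Minimal MapReduce sketch: a map function emits intermediate key/value
# pairs; a reduce function merges all values that share a key.

def map_reduce(records, mapper, reducer):
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):                   # map phase
            groups[key].append(value)
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce phase

counts = map_reduce(["a b a", "b c"],
                    mapper=lambda line: [(w, 1) for w in line.split()],
                    reducer=lambda k, vs: sum(vs))
print(counts)   # {'a': 2, 'b': 2, 'c': 1}
```

In Plasma's variant, the map phase additionally commits to its input data and the reduce step returns a merkleized proof of the state transition along with the result.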

Who uses it?

  • Loom Network, using Delegated Proof of Stake (DPoS) consensus and validation, enabling scalable Application Specific Side Chains (DAppChains), running on top of Ethereum ([4], [15]).
  • OMG Network (OmiseGO), using PoS consensus and validation, a Plasma blockchain scaling solution for finance running on top of Ethereum ([6], [14]).

Advantages

  • Not all participants need to be online to update state.
  • Participants do not need a record of entry on the parent blockchain to enable their participation in a Plasma blockchain.
  • Minimal data is needed on the parent blockchain to confirm transactions when constructing Plasma blockchains in a tree format.
  • Private blockchain networks can be constructed, enforced by the root blockchain. Transactions may occur on a local private blockchain and have financial activity bonded by a public parent blockchain.
  • Rapid exit strategies in case of foul play.

Disadvantages

At the time of writing (July 2018), Plasma still needed to be proven on other networks apart from Ethereum.

Opportunities for Tari

  • Has opportunities for Tari as a Layer 2 scaling solution.
  • Possibility to create a Tari ticketing Plasma dAppChain running off Monero without creating a Tari-specific root blockchain? Note: this would make the Tari blockchain dependent on another blockchain.
  • The Loom Network's Software Development Kit (SDK) makes it extremely easy for anyone to create a new Plasma blockchain. In less than a year, a number of successful and diverse dAppChains have launched. The next one could easily be for ticket sales...

TrueBit

What is it?

TrueBit is a protocol that provides security and scalability by enabling trustless smart contracts to perform and offload complex computations. This makes it different from state channels and Plasma, which are more useful for increasing the total transaction throughput of the Ethereum blockchain. TrueBit relies on solvers (akin to miners), who have to stake their deposits in a smart contract, solve computation and, if correct, get their deposit back. If the computation is incorrect, the solver loses their deposit. TrueBit uses an economic mechanism called the "verification game", where an incentive is created for other parties, called challengers, to check the solvers' work ([16], [40], [43]).
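
The verification game can be illustrated with a bisection search over an execution trace: solver and challenger agree on the starting state but disagree on the result, and binary search pins down the first disputed step, the only one the blockchain must re-execute. This is a simplified sketch with hypothetical names, not TrueBit's on-chain protocol.

```python
# Illustrative sketch of a verification game: bisection over the
# execution trace finds the first step where the solver's and the
# challenger's states diverge. Assumes the traces agree at step 0
# and differ at the final step.

def first_divergence(solver_trace, challenger_trace):
    lo, hi = 0, len(solver_trace) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if solver_trace[mid] == challenger_trace[mid]:
            lo = mid                    # still agree: dispute lies later
        else:
            hi = mid                    # already differ: dispute lies earlier
    return hi                           # first disputed step
```

Only O(log n) of the n execution steps are ever compared, and just one step is re-executed on-chain, which is what keeps dispute resolution cheap.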

Who uses it?

Golem cites TrueBit as a verification mechanism for its forthcoming outsourced computation network, and LivePeer, a video streaming platform, also makes use of TrueBit ([39], [41], [42]).

Advantages

  • Outsourced computation - anyone in the world can post a computational task, and anyone else can receive a reward for completing it [40].
  • Scalable - by decoupling verification for miners into a separate protocol, TrueBit can achieve high transaction throughput without facing a Verifier's Dilemma [40].

Disadvantages

At the time of writing (July 2018), TrueBit had not yet been fully tested.

Opportunities for Tari

Nothing at the moment, as Tari would not be doing heavy/complex computation, at least not in the short term.

TumbleBit

What is it?

The TumbleBit protocol was invented at Boston University. It is a unidirectional, unlinkable payment hub that is fully compatible with the Bitcoin protocol. TumbleBit allows parties to make fast, anonymous, off-chain payments through an untrusted intermediary called the Tumbler. No one, not even the Tumbler, can tell which payer paid which payee during a TumbleBit epoch, i.e. a time period of significance.

Two modes of operation are supported:

  • a classic mixing/tumbling/washing mode; and
  • a fully-fledged payment hub mode.

TumbleBit consists of two interleaved fair-exchange protocols that rely on the Rivest–Shamir–Adleman (RSA) cryptosystem's blinding properties to prevent bad acting from either users or Tumblers, and to ensure anonymity. These protocols are:

  • RSA-Puzzle-Solver Protocol; and
  • Puzzle-Promise Protocol.

TumbleBit also supports anonymizing through Tor to ensure that the Tumbler server can operate as a hidden service ([87], [88], [94], [95], [96]).

TumbleBit combines off-chain cryptographic computations with standard on-chain Bitcoin scripting functionalities to realize smart contracts [97] that are not dependent on Segwit. The most important Bitcoin functionality used here includes hashing conditions, signing conditions, conditional execution, 2‑of‑2 multi-signatures and timelocking [88].

Who does it?

Boston University provided a proof-of-concept and reference implementation alongside the white paper [90]. NTumbleBit [91] is being developed as a C# production implementation of the TumbleBit protocol, which at the time of writing (July 2018) was being deployed by Stratis with its Breeze implementation [92], at alpha/experimental release level on TestNet.

NTumbleBit will be a cross-platform framework, server and client for the TumbleBit payment scheme. TumbleBit is separated into two modes, tumbler mode and payment hub mode. The tumbler mode improves transaction fungibility and offers risk-free unlinkable transactions. Payment hub mode is a way of making off-chain payments possible without requiring implementations like Segwit or the Lightning Network [89].

Advantages

  • Anonymity properties. TumbleBit provides unlinkability without the need to trust the Tumbler service, i.e. untrusted intermediary [88].
  • Denial of Service (DoS) and Sybil protection. "TumbleBit uses transaction fees to resist DoS and Sybil attacks." [88]
  • Balance. "The system should not be exploited to print new money or steal money, even when parties collude." [88]
  • As a classic tumbler. TumbleBit can also be used as a classic Bitcoin tumbler [88].
  • Bitcoin compatibility. TumbleBit is fully compatible with the Bitcoin protocol [88].
  • Scalability. Each TumbleBit user only needs to interact with the Tumbler and the corresponding transaction party; this lack of coordination between all TumbleBit users makes scalability possible for the tumbler mode [88].
  • Batch processing. TumbleBit supports one-to-one, many-to-one, one-to-many and many-to-many transactions in payment hub mode [88].
  • Masternode compatibility. The TumbleBit protocol can be fully implemented as a service in a Masternode. "The Breeze Wallet is now fully capable of providing enhanced privacy to bitcoin transactions through a secure connection. Utilizing Breeze Servers that are preregistered on the network using a secure, trustless registration mechanism that is resistant to manipulation and censorship." ([92], [93], [98])
  • Nearly production ready. The NTumbleBit and Breeze implementations have gained TestNet status.

Weaknesses

  • Privacy is not 100% proven. Payees have better privacy than payers, and in theory the payees could collude with the Tumbler to discover the identity of the payer [99].
  • The Tumbler service is not distributed. More work needs to be done to ensure a persistent transaction state in case a Tumbler server goes down.
  • Equal denominations are required. The TumbleBit protocol can only support a common denominator unit value [88].

Opportunities for Tari

TumbleBit has benefits for Tari as a trustless Masternode matching/batch processing engine with strong privacy features.

Counterparty

What is it?

Counterparty is NOT a blockchain. Counterparty is a token protocol released in January 2014 that operates on Bitcoin. It has a fully functional Decentralized Exchange (DEX), as well as several hardcoded smart contracts, including contracts for difference and binary options ("bets"). To operate, Counterparty utilizes "embedded consensus", which means that a Counterparty transaction is created and embedded into a Bitcoin transaction, using an encoding such as 1-of-3 multi-signature (multisig), Pay to Script Hash (P2SH) or Pay to Public Key Hash (P2PKH). Counterparty nodes, i.e. nodes that run both the bitcoind and counterparty-server applications, receive Bitcoin transactions as normal (from bitcoind). The counterparty-server then scans each transaction, decoding and parsing any embedded Counterparty transactions it finds. In effect, Counterparty is a ledger within the larger Bitcoin ledger, and the functioning of embedded consensus can be thought of as similar to the fitting of one Russian stacking doll inside another ([73], [74], [75]).
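The scanning step can be sketched as below, assuming a simplified world where embedded data sits in plain output blobs prefixed with Counterparty's `CNTRPRTY` marker (real Counterparty data is additionally obfuscated and may be spread across multisig outputs):

```python
# Toy illustration of "embedded consensus": a Counterparty-style parser
# scans ordinary Bitcoin transaction outputs for embedded protocol data.
PREFIX = b"CNTRPRTY"  # marker identifying embedded Counterparty data

def extract_embedded(tx_outputs):
    """Return embedded payloads found in a list of output data blobs."""
    found = []
    for data in tx_outputs:
        if data.startswith(PREFIX):
            found.append(data[len(PREFIX):])
    return found

# Hypothetical block: one ordinary output, one carrying embedded data.
block_txs = [
    [b"\x76\xa9\x14..."],                         # normal-looking output
    [b"CNTRPRTY" + b"\x00\x00\x00\x14toy-send"],  # embedded payload
]
payloads = [p for tx in block_txs for p in extract_embedded(tx)]
print(payloads)  # the second-layer ledger is rebuilt from these payloads
```

Every node running the same parser over the same Bitcoin blocks derives the same Counterparty ledger, which is why no separate peer-to-peer network is needed.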

Embedded consensus also means that the nodes maintain identical ledgers without using a separate peer-to-peer network, solely using the Bitcoin blockchain for all communication (i.e. timestamping, transaction ordering and transaction propagation). Unlike Bitcoin, which has the concept of both a soft fork and a hard fork, a change to the protocol or "consensus" code of Counterparty always has the potential to create a hard fork. In practice, this means that each Counterparty node must run the same version of counterparty-server (or at least the same minor version, e.g. the "3" in 2.3.0) so that the protocol code matches up across all nodes ([99], [100]).

Unlike Bitcoin's UTXO model, the Counterparty token protocol utilizes an accounts system where each Bitcoin address is an account, and Counterparty credit and debit transactions for a specific token type affect account balances of that token for that given address. The decentralized exchange allows for low-friction exchanging of different tokens between addresses, utilizing the concept of "orders", which are individual transactions made by Counterparty clients, and "order matches", which the Counterparty protocol itself will generate as it parses new orders that overlap existing active orders in the system. It is the Counterparty protocol code itself that manages the escrow of tokens when an order is created, the exchange of tokens between two addresses that have overlapping orders, and the release of those assets from escrow post-exchange.
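The account model can be sketched as a simple per-address, per-token balance map; the function and address names below are hypothetical, not Counterparty API names:

```python
# Minimal sketch of an account-based token ledger: each address holds
# per-token balances, and credits/debits (rather than UTXOs) move value.
from collections import defaultdict

balances = defaultdict(lambda: defaultdict(int))  # address -> token -> amount

def credit(address, token, amount):
    balances[address][token] += amount

def debit(address, token, amount):
    if balances[address][token] < amount:
        raise ValueError("insufficient balance")
    balances[address][token] -= amount

def transfer(src, dst, token, amount):
    # The protocol code itself, not a script, enforces the debit/credit.
    debit(src, token, amount)
    credit(dst, token, amount)

credit("addr_alice", "XCP", 100)
transfer("addr_alice", "addr_bob", "XCP", 40)
print(balances["addr_alice"]["XCP"], balances["addr_bob"]["XCP"])  # 60 40
```

Escrow for DEX orders works the same way: the protocol debits the maker's balance into an escrow account when an order is created and credits the counterparties when orders match.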

Counterparty uses its own token, XCP, which was created through a "proof of burn" process during January 2014 [101]. In that month, over 2,000 bitcoins were destroyed by various individuals sending them to an unspendable address on the Bitcoin network (1CounterpartyXXXXXXXXXXXXXXXUWLpVr), which caused the Counterparty protocol to award the sending address with a corresponding amount of XCP. XCP is used for payment of asset creation fees, collateral for contracts for difference/binary options, and often as a base token in decentralized exchange transactions (largely due to the complexities of using Bitcoin (BTC) in such trades).

Support for the Ethereum Virtual Machine (EVM) was implemented, but never deployed on MainNet [73]. With the Counterparty EVM implementation, all published Counterparty smart contracts “live” at Bitcoin addresses that start with a C. Counterparty is used to broadcast an execute transaction to call a specific function or method in the smart contract code. Once an execution transaction is confirmed by a Bitcoin miner, the Counterparty federated nodes will receive the request and execute that method. The contract state is modified as the smart contract code executes and is stored in the Counterparty database [99].

General consensus is that a federated network is a distributed network of centralized networks. The Ripple blockchain implements a Federated Byzantine Agreement (FBA) consensus mechanism. Federated sidechains implement a security protocol using a trusted federation of mutually distrusting functionaries/notaries. Counterparty utilizes a "full stack" packaging system for its components and all dependencies, called the "federated node" system. However, in this case "federated" carries its general meaning, i.e. "set up as a single centralized unit within which each state or division keeps some internal autonomy" ([97], [98], [71]).

Who uses it?

The most notable projects built on top of Counterparty are Age of Chains, Age of Rust, Augmentors, Authparty, Bitcorns, Blockfreight™, Blocksafe, CoinDaddy, COVAL, FoldingCoin, FootballCoin, GetGems, IndieBoard, LTBCoin, Mafia Wars, NVO, Proof of Visit, SaruTobi Island, Spells of Genesis, Takara, The Scarab Experiment, Token.FM, Tokenly, TopCoin and XCP DEX [75]. In the past, projects such as Storj and SWARM also built on Counterparty.

COVAL is being developed with the primary purpose of moving value using “off-chain” methods. It uses its own set of node runners to manage various "off-chain" distributed ledgers and ledger-assigned wallets to implement an extended transaction value system, whereby tokens, as well as containers of tokens, can be securely transacted. Scaling within the COVAL ecosystem is thus achievable, because it does not rely solely on the Counterparty federated nodes to execute smart contracts [76].

Strengths

  • Counterparty provides a simple way to add "Layer 2" functionality, i.e. hard-coded smart contracts, to an already existing blockchain implementation that supports basic data embedding.
  • Counterparty's embedded consensus model utilizes "permissionless innovation", meaning that even the Bitcoin core developers could not stop the use of the protocol layer without seriously crippling the network.

Weaknesses

  • Embedded consensus requires lockstep upgrades from network nodes to avoid forks.
  • Embedded consensus imposes limitations on the ability of the secondary layer to interact with the primary layer's token. Counterparty was not able to manipulate BTC balances or otherwise directly utilize BTC.
  • With embedded consensus, nodes maintain identical ledgers without using a peer-to-peer network. One could claim that this hampers the flexibility of the protocol. It also limits the speed of the protocol to the speed of the underlying blockchain.

Opportunities for Tari

  • Nodes can implement improved consensus models such as Federated Byzantine Agreement [98].
  • Refer to Scriptless Scripts.

2-way Pegged Secondary Blockchains

What are they?

A 2-way peg (2WP) allows the "transfer" of BTC from the main Bitcoin blockchain to a secondary blockchain, and vice versa, at a fixed rate by making use of an appropriate security protocol. The "transfer" actually involves BTC being locked on the main Bitcoin blockchain and unlocked/made available on the secondary blockchain. The 2WP promise is concluded when an equivalent number of tokens are locked on the secondary blockchain so that the original bitcoins can be unlocked ([65], [71]).

  • Sidechain: When the security protocol is implemented using Simplified Payment Verification (SPV) proofs (blockchain transaction verification without downloading the entire blockchain), the secondary blockchain is referred to as a Sidechain [65].
  • Drivechain: When the security protocol is implemented by giving custody of the BTC to miners (miners vote on when to unlock the BTC and where to send them), the secondary blockchain is referred to as a Drivechain. In this scheme, the miners will sign the block header using a Dynamic Membership Multiparty Signature (DMMS) ([65], [71]).
  • Federated Peg/Sidechain: When the security protocol is implemented by having a trusted federation of mutually distrusting functionaries/notaries, the secondary blockchain is referred to as a Federated Peg/Sidechain. In this scheme, the DMMS is replaced with a traditional multi-signature scheme ([65], [71]).
  • Hybrid Sidechain-Drivechain-Federated Peg: When the security protocol is implemented by SPV proofs going to the secondary blockchain and dynamic mixture of miner DMMS and functionaries/notaries multi-signatures going back to the main Bitcoin blockchain, the secondary blockchain is referred to as a Hybrid Sidechain-Drivechain-Federated Peg ([65], [71], [72]).

The following figure shows an example of a 2WP Bitcoin secondary blockchain using a Hybrid Sidechain-Drivechain-Federated Peg security protocol [65]:

BTC on the main Bitcoin blockchain are locked by using a P2SH transaction, where BTC can be sent to a script hash instead of a public key hash. To unlock the BTC in the P2SH transaction, the recipient must provide a script matching the script hash and data, which makes the script evaluate to true [66].
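The lock/unlock logic described above can be sketched as follows, using plain SHA-256 in place of Bitcoin's actual HASH160 (RIPEMD160 of SHA-256) for portability, and a hypothetical federation redeem script:

```python
# Simplified sketch of P2SH locking as used in a 2WP: coins are sent to a
# script hash, and spending requires revealing a script with that hash
# which then evaluates to true. SHA-256 stands in for Bitcoin's HASH160.
import hashlib

def script_hash(script: bytes) -> bytes:
    return hashlib.sha256(script).digest()

# Hypothetical federation redeem script (a real one would be a multisig
# script over the functionaries' public keys).
redeem_script = b"2-of-3 multisig over federation keys"
locked_to = script_hash(redeem_script)   # what the P2SH output commits to

def can_spend(candidate_script: bytes, script_evaluates_true: bool) -> bool:
    # The spender must supply the matching script AND satisfy it.
    return script_hash(candidate_script) == locked_to and script_evaluates_true

print(can_spend(redeem_script, True))    # True
print(can_spend(b"wrong script", True))  # False
```

The point of the construction is that the locking transaction commits only to a hash, so the unlock conditions stay hidden until the coins are spent.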

Who does them?

  • RSK (formerly Rootstock) is a 2WP Bitcoin secondary blockchain using a hybrid sidechain-drivechain security protocol. RSK is scalable up to 100 transactions per second (Tx/s) and provides a second-layer scaling solution for Bitcoin by relieving the load of on-chain Bitcoin transactions ([100], [101], [102]).
  • Hivemind (formerly Truthcoin) is implementing a Peer-to-Peer Oracle Protocol that absorbs accurate data into a blockchain so that Bitcoin users can speculate in Prediction Markets [67].
  • Blockstream is implementing a Federated Sidechain called Liquid, with the functionaries/notaries being made up of participating exchanges and Bitcoin businesses [72].

Strengths

  • Permissionless innovation: Anyone can create a new blockchain project that uses the underlying strengths of the main Bitcoin blockchain, using real BTC as the currency [63].
  • New features: Sidechains/Drivechains can be used to test or implement new features without risk to the main Bitcoin blockchain or without having to change its protocol, such as Schnorr signatures and zero-knowledge proofs ([63], [68]).
  • Chains-as-a-Service (CaaS): It is possible to create a CaaS with data storage 2WP secondary blockchains [68].
  • Smart Contracts: 2WP secondary blockchains make it easier to implement smart contracts [68].
  • Scalability: 2WP secondary blockchains can support larger block sizes and more transactions per second, thus scaling the Bitcoin main blockchain [68].

Weaknesses

  • Security: Transferring BTC back into the main Bitcoin blockchain is not secure enough and can be manipulated because Bitcoin does not support SPV from 2WP secondary blockchains [64].
  • 51% attacks: 2WP secondary blockchains are hugely dependent on merged mining. Mining power centralization and 51% attacks are thus a real threat, as demonstrated for Namecoin and Huntercoin (refer to Merged Mining Introduction).
  • The DMMS provided by mining is not very secure for small systems, while the trust of the federation/notaries is riskier for large systems [71].

Opportunities for Tari

2WP secondary blockchains may present interesting opportunities to scale multiple payments that would be associated with multiple non-fungible assets living on a secondary layer. However, take care with privacy and security of funds as well as with transferring funds into and out of the 2WP secondary blockchains.

Lumino Transaction Compression Protocol (LTCP)

What is it?

Lumino Transaction Compression Protocol (LTCP) is a technique for transaction compression that allows a higher volume of transactions to be processed while storing much less information. The Lumino Network is a Lightning-like extension of the RSK platform that uses LTCP. Delta (difference) compression of selected fields of transactions from the same owner is achieved by aggregate signing of previous transactions, so that previous signatures can be disposed of [103].

Each transaction contains a set of persistent fields called the Persistent Transaction Information (PTI) and a compound record of user transaction data called the SigRec. A Lumino block stores two Merkle trees: one containing all PTIs and the other containing all transaction IDs (hash of the signed SigRec). This second Merkle tree is conceptually similar to the SegWit witness tree, thus forming the witness part. Docking is the process whereby SigRec and signature data can be pruned from the blockchain if valid linked PTI information exists [103].
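The delta-compression idea can be sketched as follows; the field names are hypothetical, and real LTCP operates on PTI/SigRec structures rather than plain dictionaries:

```python
# Toy sketch of LTCP-style delta compression: for successive transactions
# from the same owner, store only the fields that changed relative to the
# previous transaction.
def delta_encode(prev_tx: dict, tx: dict) -> dict:
    return {k: v for k, v in tx.items() if prev_tx.get(k) != v}

def delta_decode(prev_tx: dict, delta: dict) -> dict:
    restored = dict(prev_tx)
    restored.update(delta)
    return restored

tx1 = {"owner": "addr_a", "nonce": 1, "to": "addr_b", "amount": 10, "fee": 1}
tx2 = {"owner": "addr_a", "nonce": 2, "to": "addr_b", "amount": 12, "fee": 1}

delta = delta_encode(tx1, tx2)
print(delta)   # only the changed fields: {'nonce': 2, 'amount': 12}
```

For an owner making many similar payments, most fields repeat, so each stored delta is a small fraction of a full transaction.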

Who does it?

RSK, which launched on MainNet in January 2018. The Lumino Network has yet to be launched on TestNet ([61], [62]).

Strengths

The Lumino Network promises high efficiency in pruning the RSK blockchain.

Weaknesses

  • The Lumino Network has not yet been released.
  • Details of how the Lumino Network will handle payment channels were not conclusive in the white paper [103].

Opportunities for Tari

LTCP pruning may be beneficial to Tari.

Scriptless Scripts

What is it?

The term Scriptless Scripts was coined by its inventor, mathematician Andrew Poelstra. It entails implementing smart contracts by offering scripting functionality without actual scripts on the blockchain. At the time of writing (July 2018), it only works on the Mimblewimble blockchain and makes use of a specific Schnorr signature scheme [81] that allows for signature aggregation (mathematically combining several signatures into a single signature) without having to prove Knowledge of Secret Keys (KOSK). This is known as the plain public-key model, where the only requirement is that each potential signer has a public key. The KOSK scheme requires that users prove knowledge (or possession) of the secret key during public key registration with a certification authority, and is one way to generically prevent rogue-key attacks ([78], [79]).

Signature aggregation properties sought here are ([78], [79]):

  • must be provably secure in the plain public-key model;
  • must satisfy the normal Schnorr equation, whereby the resulting signature can be written as a function of a combination of the public keys;
  • must allow for Interactive Aggregate Signatures (IAS), where the signers are required to cooperate;
  • must allow for Non-interactive Aggregate Signatures (NAS), where the aggregation can be done by anyone;
  • must allow each signer to sign the same message;
  • must allow each signer to sign their own message.

This is different to a normal multi-signature scheme where one message is signed by all.

Let's say Alice and Bob each need to provide half a Schnorr signature for a transaction, whereby Alice promises to reveal a secret to Bob in exchange for one crypto coin. Alice can calculate the difference between her half Schnorr signature and the Schnorr signature of the secret (the adaptor signature) and hand it over to Bob. Bob can verify the correctness of the adaptor signature without knowing the original signatures. Bob then provides his half Schnorr signature to Alice so that she can broadcast the full Schnorr signature to claim the crypto coin. Once the full Schnorr signature is broadcast, Bob can see Alice's half Schnorr signature and, since he already knows the adaptor signature, can calculate the Schnorr signature of the secret, thereby claiming his prize. This is also known as Zero-Knowledge Contingent Payments ([77], [80]).
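The adaptor-signature arithmetic can be sketched in a toy group, where a small multiplicative group stands in for the elliptic-curve group used in practice; the parameters are insecure and for illustration only:

```python
# Toy adaptor-signature flow (NOT a real elliptic-curve Schnorr
# implementation). g generates a subgroup of prime order q in Z_p*.
import hashlib

p, q, g = 2039, 1019, 4

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Alice's signing key x, nonce k, and the secret t she promises to reveal.
x, k, t = 123, 456, 789
X, R, T = pow(g, x, p), pow(g, k, p), pow(g, t, p)
e = H(R, X, "pay 1 coin")

s_full = (k + e * x) % q      # ordinary Schnorr signature value
s_adapt = (s_full - t) % q    # adaptor signature Alice hands to Bob

# Bob verifies the adaptor against T without learning t:
assert (pow(g, s_adapt, p) * T) % p == (R * pow(X, e, p)) % p

# Once the full signature is broadcast, Bob extracts the secret:
t_learned = (s_full - s_adapt) % q
print("secret revealed:", t_learned)
```

The verification works because `g^(s_adapt) * T = g^(k + e*x)`, which is exactly the ordinary Schnorr verification equation; publishing `s_full` therefore leaks precisely `t` and nothing else.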

Who does it?

Andrew Poelstra cites Mimblewimble as the ultimate Scriptless Script [80].

Strengths

  • Data savings. Signature aggregation provides data compression on the blockchain.
  • Privacy. Nothing about the Scriptless Script smart contract, other than the settlement transaction, is ever recorded on the blockchain. No one will ever know that an underlying smart contract was executed.
  • Multiplicity. Multiple digital assets can be transferred between two parties in a single settlement transaction.
  • Implicit scalability. Scalability on the blockchain is achieved by virtue of compressing multiple transactions into a single settlement transaction. Transactions are only broadcast to the blockchain once all preconditions are met.

Weaknesses

Recent work by Maxwell et al. ([78], [79]) showed that a naive implementation of Schnorr multi-signatures that satisfies key aggregation is not secure, and that the Bellare and Neven (BN) Schnorr signature scheme loses the key aggregation property in order to gain security in the plain public-key model. They proposed a new Schnorr-based multi-signature scheme with key aggregation called MuSig, which is provably secure in the plain public-key model. It has the same key and signature size as standard Schnorr signatures. The joint signature can be verified in exactly the same way as a standard Schnorr signature with respect to a single “aggregated” public-key, which can be computed from the individual public keys of the signers. Note that the case of interactive signature aggregation where each signer signs their own message must still be proven by a complete security analysis.
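The key-aggregation step of MuSig can be sketched in the same toy-group style; the parameters are insecure and the nonce-commitment round of the real scheme is omitted:

```python
# Toy sketch of MuSig-style key aggregation over a small multiplicative
# group (illustration only; real MuSig works over secp256k1).
import hashlib

p, q, g = 2039, 1019, 4

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

xs = [11, 22, 33]                    # signers' private keys
Xs = [pow(g, x, p) for x in xs]      # public keys
L = tuple(sorted(Xs))                # encoding of the full key set

# Per-signer coefficients a_i = H(L, X_i) defeat rogue-key attacks.
a = [H(L, X) for X in Xs]
X_agg = 1
for Xi, ai in zip(Xs, a):
    X_agg = (X_agg * pow(Xi, ai, p)) % p

ks = [5, 6, 7]                       # signers' nonces
R = 1
for ki in ks:
    R = (R * pow(g, ki, p)) % p
e = H(X_agg, R, "message")

# Each signer contributes s_i = k_i + a_i * e * x_i; the sum is the sig.
s = sum(ki + ai * e * xi for ki, ai, xi in zip(ks, a, xs)) % q

# The joint signature verifies like a single-key Schnorr signature.
assert pow(g, s, p) == (R * pow(X_agg, e, p)) % p
print("aggregated signature verifies against the aggregated key")
```

A verifier only ever sees `(R, s)` and the single aggregated key `X_agg`, which is what gives MuSig the same key and signature size as standard Schnorr.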

Opportunities for Tari

Tari plans to implement the Mimblewimble blockchain and should implement Scriptless Scripts together with the MuSig Schnorr signature scheme. However, this in itself will not provide the Layer 2 scaling performance that will be required. Big Neon, the initial business application to be built on top of the Tari blockchain, needs to "facilitate 500 tickets in 4 minutes", i.e. approximately two spectators allowed access every second, with negligible latency.

The Mimblewimble Scriptless Scripts could be combined with a federated node (or specialized masternode), similar to that being developed by Counterparty. The secrets that are revealed by virtue of the MuSig Schnorr signatures can instantiate normal smart contracts inside the federated node, with the final consolidated state update being written back to the blockchain after the event.

Directed Acyclic Graph (DAG) Derivative Protocols

What is it?

In mathematics and computer science, a Directed Acyclic Graph (DAG) is a finite directed graph with no directed cycles. A directed graph is acyclic if and only if it has a topological ordering, i.e. for every directed edge uv from vertex u to vertex v, u comes before v in the ordering (age) [85].
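The equivalence between acyclicity and the existence of a topological ordering can be checked with Kahn's algorithm; the block IDs below are hypothetical:

```python
# A directed graph is a DAG iff it admits a topological ordering.
# Kahn's algorithm produces one, or detects a directed cycle.
from collections import deque

def topological_order(edges, nodes):
    """edges: iterable of (u, v) meaning u -> v (u comes before v)."""
    indegree = {n: 0 for n in nodes}
    succ = {n: [] for n in nodes}
    for u, v in edges:
        succ[u].append(v)
        indegree[v] += 1
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in succ[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(nodes):
        raise ValueError("graph has a directed cycle - not a DAG")
    return order

# Genesis points to sibling blocks A and B; both are referenced by C.
nodes = ["genesis", "A", "B", "C"]
edges = [("genesis", "A"), ("genesis", "B"), ("A", "C"), ("B", "C")]
print(topological_order(edges, nodes))   # ['genesis', 'A', 'B', 'C']
```

In block-DAG terms, the ordering is what lets a node replay all blocks (including former "off-chain" siblings) into a single consistent ledger.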

DAGs in blockchain were first proposed as the GHOST protocol ([87], [88]), a version of which is implemented in Ethereum as the Ethash PoW algorithm (based on Dagger-Hashimoto ([102], [103])). Braiding ([83], [84]), Jute [86], SPECTRE [89] and PHANTOM [95] followed. The principle of DAG in blockchain is to present a way to include traditional off-chain blocks into the ledger, governed by mathematical rules. A parent that is simultaneously an ancestor of another parent is disallowed.

The main problems to be solved by the DAG derivative protocols are:

  1. inclusion of orphaned blocks (decrease the negative effect of slow propagation); and
  2. mitigation against selfish mining attacks.

The underlying concept is still in the research and exploration phase [82].

In most DAG derivative protocols, blocks containing conflicting transactions, i.e. conflicting blocks, are not orphaned. A subsequent block is built on top of both of the conflicting blocks, but the conflicting transactions themselves are thrown out while processing the chain. SPECTRE, for one, provides a scheme whereby blocks vote to decide which transactions are robustly accepted, robustly rejected or stay in an indefinite “pending” state in case of conflicts. Both conflicting blocks become part of the shared history, and both conflicting blocks earn their respective miners a block reward ([82], [93], [94]).
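A toy sketch of this conflict handling, assuming blocks are already in topological order and that two transactions conflict when they spend the same input:

```python
# Inclusive (DAG) protocols keep conflicting blocks but drop conflicting
# transactions while processing the chain: blocks are traversed in
# topological order, and of two transactions spending the same input,
# only the first encountered is accepted.
blocks_in_topological_order = [
    ("A", [("input1", "pay_bob")]),
    ("B", [("input1", "pay_carol")]),   # conflicts with the tx in A
    ("C", [("input2", "pay_dave")]),
]

spent, accepted, rejected = set(), [], []
for block_id, txs in blocks_in_topological_order:
    for tx_input, tx in txs:
        if tx_input in spent:
            rejected.append(tx)   # conflicting transaction thrown out...
        else:
            spent.add(tx_input)
            accepted.append(tx)
    # ...but block B itself stays in the shared history and its miner
    # still earns a block reward.

print(accepted)   # ['pay_bob', 'pay_dave']
print(rejected)   # ['pay_carol']
```

SPECTRE generalizes this first-seen rule into pairwise votes between blocks, so that conflict resolution does not depend on a single linear order.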

Note: Braiding requires that parents and siblings may not contain conflicting transactions.

Inclusive (DAG derivative) protocols that integrate the contents of traditional off-chain blocks into the ledger result in incentives for behavior changes by the nodes, which leads to an increased throughput, and a better payoff for weak miners [88].

DAG derivative protocols are not Layer 2 Scaling solutions, but they offer significant scaling of the primary blockchain.

Who does it?

  • The School of Engineering and Computer Science, The Hebrew University of Jerusalem ([87], [88], [89], [93], [94])

  • DAGlabs [96] (Note: this is the commercial development chapter.)

    • SPECTRE provides high throughput and fast confirmation times. Its DAG structure represents an abstract vote regarding the order between each pair of blocks, but this pairwise ordering may not be extendable to a full linear ordering due to possible Condorcet cycles.
    • PHANTOM provides a linear ordering over the blocks of the DAG and can support consensus regarding any general computation (smart contracts), which SPECTRE cannot. In order for a computation or contract to be processed correctly and consistently, the full order of events in the ledger is required, particularly the order of inputs to the contract. However, PHANTOM’s confirmation times are much slower than those of SPECTRE.
  • Ethereum, as its Ethash PoW algorithm was adapted from GHOST

  • Dr. Bob McElrath ([83], [84])

    • Braiding
  • David Vorick [86]

    • Jute
  • Cryptocurrencies:

    • IOTA [90]
    • Nano [91]
    • Byteball [92]

Strengths

  • Layer 1 scaling: increased transaction throughput on the main blockchain.
  • Fairness: better payoff for weak miners.
  • Decentralization mitigation: weaker miners also get profits.
  • Transaction confirmation times: confirmation times of several seconds (SPECTRE).
  • Smart contracts: support smart contracts (PHANTOM).

Weaknesses

  • Not yet fully proven; development is continuing.
  • The DAG derivative protocols differ on important aspects such as miner payment schemes, security models, support for smart contracts, and confirmation times. Thus, not all DAG derivative protocols are created equal - beware!

Opportunities for Tari

Opportunities exist for Tari in applying the basic DAG principles to make a 51% attack harder, by virtue of fairness and resistance to miner centralization. Choosing the correct DAG derivative protocol can also significantly improve Layer 1 scaling.


  • Further investigation into the more promising Layer 2 scaling solutions and technologies is required to verify alignment, applicability and usability.
  • Although not all technologies covered here are Layer 2 Scaling solutions, the strengths should be considered as building blocks for the Tari protocol.

References

[1] "OSI Mode" [online]. Available: Date accessed: 2018‑06‑07.

[2] "Decentralized Digital Identities and Block Chain - The Future as We See It" [online]. Available: Date accessed: 2018‑06‑07.

[3] "Trinity Protocol: The Scaling Solution of the Future?" [online]. Available: Date accessed: 2018‑06‑08.

[4] J. Poon and V. Buterin, "Plasma: Scalable Autonomous Smart Contracts" [online]. Available: Date accessed: 2018‑06‑14.

[5] "NEX: A High Performance Decentralized Trade and Payment Platform" [online]. Available: Date accessed: 2018‑06‑11.

[6] J. Poon and OmiseGO Team, "OmiseGO: Decentralized Exchange and Payments Platform" [online]. Available: Date accessed: 2018‑06‑14.

[7] "The Rise of Masternodes Might Soon be Followed by the Creation of Servicenodes" [online]. Available: Date accessed: 2018‑06‑13.

[8] "What are Masternodes?- Beginner's Guide" [online]. Available: Date accessed: 2018‑06‑14.

[9] "What the Heck is a DASH Masternode and How Do I get One" [online]. Available: Date accessed: 2018‑06‑14.

[10] "Payment Channels" [online]. Available: Date accessed: 2018‑06‑14.

[11] "Lightning Network" [online]. Available: Date accessed: 2018‑06‑14.

[12] "Bitcoin Isn't the Only Crypto Adding Lightning Tech Now" [online]. Available: Date accessed: 2018‑06‑14.

[13] "What is Bitcoin Lightning Network? And How Does it Work?" [online]. Available: Date accessed: 2018‑06‑14.

[14] "OmiseGO" [online]. Available: Date accessed: 2018‑06‑14.

[15] "Everything You Need to Know About Loom Network, All In One Place (Updated Regularly)" [online]. Available: Date accessed: 2018‑06‑14.

[16] "Making Sense of Ethereum's Layer 2 Scaling Solutions: State Channels, Plasma, and Truebit" [online]. Available: Date accessed: 2018‑06‑14.

[17] "Trinity: Universal Off-chain Scaling Solution" [online]. Available: Date accessed: 2018‑06‑14.

[18] "Trinity White Paper: Universal Off-chain Scaling Solution" [online]. Available: Date accessed: 2018‑06‑14.

[19] J. Coleman, L. Horne, and L. Xuanji, "Counterfactual: Generalized State Channels" [online]. Available at: and Date accessed: 2018‑06‑15.

[20] "The Raiden Network" [online]. Available: Date accessed: 2018‑06‑15.

[21] "What is the Raiden Network?" [online]. Available: Date accessed: 2018‑06‑15.

[22] "What are Masternodes? An Introduction and Guide" [online]. Available: Date accessed: 2018‑06‑15.

[23] "State Channels in Disguise?" [online]. Available: Date accessed: 2018‑06‑15.

[24] "World Payments Report 2017, © 2017 Capgemini and BNP Paribas" [online]. Available: Date accessed: 2018‑06‑20.

[25] "VISA" [online]. Available: Date accessed: 2018‑06‑20.

[26] "VisaNet Fact Sheet 2017 Q4" [online]. Available: Date accessed: 2018‑06‑20.

[27] "With 100% segwit transactions, what would be the max number of transaction confirmation possible on a block?" [online]. Available: Date accessed: 2018‑06‑21.

[28] "A Gentle Introduction to Ethereum" [online]. Available: Date accessed: 2018‑06‑21.

[29] "What is the size (bytes) of a simple Ethereum transaction versus a Bitcoin transaction?" [online]. Available: Date accessed: 2018‑06‑21.

[30] "What is a Masternode?" [online]. Available: Date accessed: 2018‑06‑14.

[31] "Counterfactual: Generalized State Channels on Ethereum" [online]. Available: Date accessed: 2018‑06‑26.

[32] J. Longley and O. Hopton, "FunFair Technology Roadmap and Discussion" [online]. Available: Date accessed: 2018‑06‑27.

[33] "0x Protocol Website" [online]. Available: Date accessed: 2018‑06‑28.

[34] "0x: An open protocol for decentralized exchange on the Ethereum blockchain" [online]. Available: Date accessed: 2018‑06‑28.

[35] "NEX/Nash website" [online]. Available: Date accessed: 2018‑06‑28.

[36] "Front-running, Griefing and the Perils of Virtual Settlement (Part 1)" [online]. Available: Date accessed: 2018‑06‑29.

[37] "Front-running, Griefing and the Perils of Virtual Settlement (Part 2)" [online]. Available: Date accessed: 2018‑06‑29.

[38] "MapReduce: Simplified Data Processing on Large Clusters" [online]. Available: Date accessed: 2018‑07‑02.

[39] "The Golem WhitePaper (Future Integrations)" [online]. Available: Date accessed: 2018‑06‑22.

[40] J. Teutsch, and C. Reitwiessner, "A Scalable Verification Solution for Block Chains" [online]. Available: Date accessed: 2018‑06‑22.

[41] "Livepeer's Path to Decentralization" [online]. Available: Date accessed: 2018‑06‑22.

[42] "Golem Website" [online]. Available: Date accessed: 2018‑06‑22.

[43] "TruBit Website" [online]. Available: Date accessed: 2018‑06‑22.

[44] "TumbleBit: An Untrusted Bitcoin-compatible Anonymous Payment Hub" [online]. Available: Date accessed: 2018‑07‑12.

[45] E. Heilman, L. AlShenibr, F. Baldimtsi, A. Scafuro and S. Goldberg, "TumbleBit: An Untrusted Bitcoin-compatible Anonymous Payment Hub" [online]. Available: Date accessed: 2018‑07‑08.

[46] "Anonymous Transactions Coming to Stratis" [online]. Available: Date accessed: 2018‑07‑08.

[47] "TumbleBit Proof of Concept GitHub Repository" [online]. Available: Date accessed: 2018‑07‑08.

[48] "NTumbleBit GitHub Repository" [online]. Available: Date accessed: 2018‑07‑12.

[49] "Breeze Tumblebit Server Experimental Release" [online]. Available: Date accessed: 2018‑07‑12.

[50] "Breeze Wallet with Breeze Privacy Protocol (Dev. Update)" [online]. Available: Date accessed: 2018‑07‑12.

[51] E. Heilman, F. Baldimtsi and S. Goldberg, "Blindly Signed Contracts - Anonymous On-chain and Off-chain Bitcoin Transactions" [online]. Available: Date accessed: 2018‑07‑12.

[52] E. Heilman and L. AlShenibr, "TumbleBit: An Untrusted Bitcoin-compatible Anonymous Payment Hub - 08 October 2016", in Conference: Scaling Bitcoin 2016 Milan. Available: Date accessed: 2018‑07‑13.

[53] "Better Bitcoin Privacy, Scalability: Developers Making TumbleBit a Reality" [online]. Available: Date accessed: 2018‑07‑13.

[54] Bitcoinwiki: "Contract" [online]. Available: Date accessed: 2018‑07‑13.

[55] "Bitcoin Privacy is a Breeze: TumbleBit Successfully Integrated Into Breeze" [online]. Available: Date accessed: 2018-07‑13.

[56] "TumbleBit Wallet Reaches One Step Forward" [online]. Available: Date accessed: 2018‑07‑13.

[57] "A Survey of Second Layer Solutions for Blockchain Scaling Part 1" [online]. Available: Date accessed: 2018‑07‑16.

[58] "Second-layer Scaling" [online]. Available: Date accessed: 2018‑07‑16.

[59] "RSK Website" [online]. Available: Date accessed: 2018‑07‑16.

[60] S. D. Lerner, "Lumino Transaction Compression Protocol (LTCP)" [online]. Available: Date accessed: 2018‑07‑16.

[61] "Bitcoin-based Ethereum Rival RSK Set to Launch Next Month" [online]. Available: Date accessed: 2018‑07‑16.

[62] "RSK Blog Website" [online]. Available: Date accessed: 2018‑07‑16.

[63] "Drivechain: Enabling Bitcoin Sidechain" [online]. Available: Date accessed: 2018‑07‑17.

[64] "Drivechain - The Simple Two Way Peg" [online]. Available: Date accessed: 2018‑07‑17.

[65] "Sidechains, Drivechains, and RSK 2-Way Peg Design" [online]. Available: or Date accessed: 2018‑07‑18.

[66] "Pay to Script Hash" [online]. Available: Date accessed: 2018‑07‑18.

[67] Hivemind Website [online]. Available: Date accessed: 2018‑07‑18.

[68] "Drivechains: What do they Enable? Cloud 3.0 Services Smart Contracts and Scalability" [online]. Available: Date accessed: 2018‑07‑19.

[69] "Bloq’s Paul Sztorc on the 4 Main Benefits of Sidechains" [online]. Available: Date accessed: 2018‑07‑19.

[70] Blockstream Website [online]. Available: Date accessed: 2018‑07‑19.

[71] A. Back, M. Corallo, L. Dashjr, M. Friedenbach, G. Maxwell, A. Miller, A. Poelstra, J. Timón and P. Wuille, 2014-10-22. "Enabling Blockchain Innovations with Pegged Sidechains" [online]. Available: Date accessed: 2018‑07‑19.

[72] J. Dilley, A. Poelstra, J. Wilkins, M. Piekarska, B. Gorlick and M. Friedenbachet, "Strong Federations: An Interoperable Blockchain Solution to Centralized Third Party Risks" [online]. Available: Date accessed: 2018‑07‑19.

[73] "CounterpartyXCP/Documentation/Smart Contracts/EVM FAQ" [online]. Available: Date accessed: 2018‑07‑23.

[74] "Counterparty Development 101" [online]. Available: Date accessed: 2018‑07‑23.

[75] "Counterparty Website" [online]. Available: Date accessed: 2018‑07‑24.

[76] "COVAL Website" [online]. Available: Date accessed: 2018‑07‑24.

[77] "Scriptless Scripts: How Bitcoin Can Support Smart Contracts Without Smart Contracts" [online]. Available: Date accessed: 2018‑07-24.

[78] "Key Aggregation for Schnorr Signatures" [online]. Available: Date accessed: 2018‑07‑24.

[79] G. Maxwell, A. Poelstra, Y. Seurin and P. Wuille, "Simple Schnorr Multi-signatures with Applications to Bitcoin", 20 May 2018 [online]. Available: Date accessed: 2018‑07‑24.

[80] A. Poelstra, 4 March 2017, "Scriptless Scripts" [online]. Available: Date accessed: 2018‑07‑24.

[81] "bip-schnorr.mediawiki" [online]. Available: Date accessed: 2018‑07‑26.

[82] "If There is an Answer to Selfish Mining, Braiding could be It" [online]. Available: Date accessed: 2018‑07‑27.

[83] B. McElrath, "Braiding the Blockchain", in Conference: Scaling Bitcoin, Hong Kong, 7 Dec 2015 [online]. Available: Date accessed: 2018‑07‑27.

[84] "Braid Examples" [online]. Available: Date accessed: 2018‑07‑27.

[85] "Directed Acyclic Graph" [online]. Available: Date accessed: 2018‑07‑30.

[86] "Braiding Techniques to Improve Security and Scaling" [online]. Available: Date accessed: 2018‑07‑30.

[87] Y. Sompolinsky and A. Zohar, "GHOST: Secure High-rate Transaction Processing in Bitcoin" [online]. Available: Date accessed: 2018‑07‑30.

[88] Y. Lewenberg, Y. Sompolinsky and A. Zohar, "Inclusive Blockchain Protocols" [online]. Available: Date accessed: 2018‑07‑30.

[89] Y. Sompolinsky, Y. Lewenberg and A. Zohar, "SPECTRE: A Fast and Scalable Cryptocurrency Protocol" [online]. Available: Date accessed: 2018‑07‑30.

[90] "IOTA Website" [online]. Available: Date accessed: 2018‑07‑30.

[91] NANO: "Digital Currency for the Real World – the fast and free way to pay for everything in life" [online]. Available: Date accessed: 2018‑07‑30.

[92] "Byteball" [online]. Available: Date accessed: 2018‑07‑30.

[93] "SPECTRE: Serialization of Proof-of-work Events, Confirming Transactions via Recursive Elections" [online]. Available: Date accessed: 2018‑07‑30.

[94] Y. Sompolinsky, Y. Lewenberg and A. Zohar, "SPECTRE: Serialization of Proof-of-work Events: Confirming Transactions via Recursive Elections" [online]. Available: Date accessed: 2018‑07‑30.

[95] Y. Sompolinsky and A. Zohar, "PHANTOM: A Scalable BlockDAG Protocol" [online]. Available: Date accessed: 2018‑07‑30.

[96] "DAGLabs Website" [online]. Available: Date accessed: 2018‑07‑30.

[97] "Beyond Distributed and Decentralized: what is a federated network?" [online]. Available: Date accessed: 2018‑08‑13.

[98] "Federated Byzantine Agreement" [online]. Available: Date accessed: 2018‑08‑13.

[99] "Counterparty Documentation: Frequently Asked Questions" [online]. Available: Date accessed: 2018‑09‑14.

[100] "Counterparty Documentation: Protocol Specification" [online]. Available: Date accessed: 2018‑09‑14.

[101] "Counterparty News: Why Proof-of-Burn, March 23, 2014" [online]. Available: Date accessed: 2018‑09‑14.

[102] V. Buterin, "Dagger: A Memory-Hard to Compute, Memory-Easy to Verify Scrypt Alternative" [online]. Available: Date accessed: 2019‑02‑12.

[103] T. Dryja, "Hashimoto: I/O Bound Proof of Work" [online]. Available: Date accessed: 2019‑02‑12.


Layer 2 Scaling - Executive Summary


Directed Acyclic Graphs


The principle of Directed Acyclic Graphs (DAGs) in blockchain is to present a way to include traditional off-chain blocks into the ledger, which is governed by mathematical rules. The main problems to be solved by the DAG derivative protocols are:

  1. inclusion of orphaned blocks (decrease the negative effect of slow propagation); and
  2. mitigation against selfish mining attacks.

Braiding the Blockchain

    Dr. Bob McElrath
    Ph.D. Theoretical Physics


"Braiding the Blockchain" by Dr. Bob McElrath, Scaling Bitcoin, Hong Kong, December 2015.

This talk discusses the motivations for using Directed Acyclic Graphs (DAGs): orphaned blocks, throughput and more inclusive mining. New terms are defined to make DAGs applicable to blockchain, as more specific terminology is needed: Braid vs. DAG, Bead vs. block, Sibling and Incest.

The Braid approach:

  • incentivizes miners to quickly transmit beads;
  • prohibits parents from containing conflicting transactions (unlike GHOST or SPECTRE);
  • constructs beads to be valid Bitcoin blocks if they meet the difficulty target.


Note: Transcripts are available here.



    Yonatan Sompolinsky
    Founding Scientist, Daglabs


"The GHOST-DAG Protocol" by Yonatan Sompolinsky, Scaling Bitcoin, Tokyo, October 2018.

This talk discusses the goal, in moving from a chain to a DAG, of obtaining an ordering of the blocks that does not change as the time between blocks approaches the block propagation time, together with security that does not break at higher throughputs. Terms introduced here are Anticone (the set of blocks that lie in neither the past nor the future of a given block) and $k$-cluster (a set of blocks in which each block's anticone, within the set, has size at most $k$). The protocol also makes use of a greedy algorithm to find the optimal $k$-cluster at each step, as it attempts to find the overall optimal $k$-cluster.



  • Start watching at 30min or open the video in a separate tab here.
  • Transcripts are available here.
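The anticone notion above follows directly from parent links. The sketch below is illustrative Python (the block names and DAG topology are invented for this example): a block's past is everything reachable through parent links, its future is every block that can reach it, and its anticone is everything else.

```python
# Illustrative sketch: computing a block's anticone in a toy blockDAG.

def ancestors(block, parents):
    """All blocks reachable by following parent links (the block's past)."""
    seen = set()
    stack = list(parents[block])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(parents[b])
    return seen

def anticone(block, parents):
    """Blocks that are in neither `block`'s past nor its future."""
    past = ancestors(block, parents)
    future = {b for b in parents if block in ancestors(b, parents)}
    return set(parents) - past - future - {block}

# "B" and "C" are created at roughly the same time, so neither references
# the other, and each lies in the other's anticone.
parents = {
    "G": [],            # genesis
    "A": ["G"],
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],    # D references all tips, merging the DAG again
}

print(anticone("B", parents))  # {'C'}
```

Note that the whole DAG above is a $1$-cluster: no block's anticone within it is larger than one.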



    Aviv Zohar
    Prof. at The Hebrew University and Chief Scientist @ QED-it


"Scalability II - GHOST/SPECTRE" by Dr. Aviv Zohar, Technion Summer School on Decentralized Cryptocurrencies and Blockchains, 2017.

This talk discusses the application of DAGs in the SPECTRE protocol. Three insights into the Bitcoin protocol are shared: DAGs are more powerful; Bitcoin is related to voting; and amplification. These are discussed in relation to SPECTRE, while properties sought are consistency, safety and weak liveness. Voting outcomes are strongly rejected, strongly accepted or pending.



    Yonatan Sompolinsky
    Founding Scientist, Daglabs


"BlockDAG Protocols - SPECTRE, PHANTOM" by Yonatan Sompolinsky, Blockchain Protocol Analysis and Security Engineering, Stanford, 2018.

This talk introduces the BlockDAG protocol PHANTOM, and motivates it by virtue of blockchains not being able to scale and BlockDAGs being a generalization of blockchains. The mining protocol references all tips in the DAG (as opposed to the tip of the longest chain) and also publishes all mined blocks as soon as possible (similar to Bitcoin). Blocks honestly created (i.e. honest blocks) will only be unconnected if they were created at approximately the same time. PHANTOM's goal is to recognize honest $k$-clusters, order the blocks within and disregard the rest.
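PHANTOM's $k$-cluster property can be checked mechanically: within a candidate set of blocks, every block's anticone restricted to that set must have size at most $k$. The following toy sketch (the DAG and block names are invented for illustration) expresses that check:

```python
# Illustrative check of PHANTOM's k-cluster property on a toy blockDAG.

def past(block, parents):
    """All blocks reachable by following parent links."""
    seen, stack = set(), list(parents[block])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(parents[b])
    return seen

def is_k_cluster(blocks, parents, k):
    """True if every block's anticone, restricted to `blocks`, has size <= k."""
    for b in blocks:
        anticone = {x for x in blocks - {b}
                    if x not in past(b, parents) and b not in past(x, parents)}
        if len(anticone) > k:
            return False
    return True

parents = {"G": [], "A": ["G"], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

print(is_k_cluster({"G", "A", "B", "D"}, parents, k=0))  # True: a chain-like subset
print(is_k_cluster(set(parents), parents, k=1))          # True: B and C tolerate anticone size 1
print(is_k_cluster(set(parents), parents, k=0))          # False
```

PHANTOM's harder problem, which the greedy algorithm approximates, is finding the largest such set rather than merely verifying one.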


Laser Beam


Proof-of-Work (PoW) blockchains are notoriously slow, as transactions need to be a number of blocks in the past to be confirmed, and have poor scalability properties. A payment channel is a class of techniques designed to allow two or more parties to make multiple blockchain transactions, without committing all of the transactions to the blockchain. Resulting funds can be committed back to the blockchain. Payment channels allow multiple transactions to be made within off-chain agreements. They keep the operation mode of the blockchain protocol, but change the way in which it is used in order to deal with the challenge of scalability [1].
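The payment-channel pattern above can be sketched in a few lines. The toy `Channel` class below is invented for illustration (it tracks balances only, with none of the cryptographic machinery): many off-chain balance updates are bracketed by one funding transaction and one settlement transaction on-chain.

```python
# Toy illustration of a payment channel: only the opening and closing
# states touch the chain; everything in between is an off-chain agreement.

class Channel:
    def __init__(self, alice: int, bob: int):
        self.on_chain_txs = 1            # the funding transaction
        self.alice, self.bob = alice, bob

    def pay(self, amount: int):
        """Alice pays Bob off-chain; no blockchain transaction is made."""
        assert 0 < amount <= self.alice
        self.alice -= amount
        self.bob += amount

    def close(self):
        self.on_chain_txs += 1           # the settlement transaction
        return self.alice, self.bob

ch = Channel(alice=50, bob=50)
for _ in range(40):                      # forty off-chain payments of 1
    ch.pay(1)
final = ch.close()
print(final, ch.on_chain_txs)            # (10, 90) 2
```

Forty payments thus settle with only two on-chain transactions, which is the scaling argument for channels in a nutshell.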

The Lightning Network is a second-layer payment protocol that was originally designed for Bitcoin, and which enables instant transactions between participating nodes. It features a peer-to-peer system for making micropayments through a network of bidirectional payment channels. The Lightning Network's dispute mechanism requires all users to constantly watch the blockchain for fraud. Various mainnet implementations that support Bitcoin exist and, with small tweaks, some of them are also able to support Litecoin ([2], [3], [4], [10]).

Laser Beam is an adaptation of the Lightning Network for the Mimblewimble protocol, to be implemented for Beam ([5], [6], [7]). At the time of writing this report (November 2019), the specifications were well advanced, but still a work in progress. Beam has a working demonstration in its mainnet repository, which at this stage demonstrates off-chain transactions in a single channel between two parties [8]. According to the Request for Comment (RFC) documents, Beam plans to implement routing across different payment channels in the Lightning Network style.

Detail Scheme

Beam's version of a multisignature (MultiSig) is actually a $2\text{-of-}2$ multiparty Unspent Transaction Output (UTXO), where each party keeps its share of the blinding factor of the Pedersen commitment, $C(v,k_{1}+k_{2})=\Big(vH+(k_{1}+k_{2})G\Big)$, secret. (Refer to Appendix A for notation used.) The multiparty commitment is accompanied by a single multiparty Bulletproof range proof, similar to that employed by Grin, where the individual shares of the blinding factor are used to create the combined range proof [9].
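As a toy illustration of the multiparty commitment (integer arithmetic modulo a prime stands in for elliptic-curve points; all constants below are invented and carry no security), each party contributes only $k_{i}G$, so neither learns the other's blinding-factor share:

```python
# Toy sketch of a 2-of-2 multiparty Pedersen commitment,
# C(v, k1 + k2) = vH + (k1 + k2)G. Modular integers stand in for EC points.

p = 2**31 - 1          # toy group modulus (a Mersenne prime), illustrative only
G = 7                  # toy "generator"
H = 11                 # toy second generator (in a real system, its discrete
                       # log with respect to G must be unknown)

def commit(v, k):
    return (v * H + k * G) % p

v = 100                # value locked in the multiparty UTXO
k_a = 123456           # Alice's secret blinding-factor share
k_b = 654321           # Bob's secret blinding-factor share

# Each party shares only its partial term k_i * G, never k_i itself:
partial_a = (k_a * G) % p
partial_b = (k_b * G) % p
multisig = (v * H + partial_a + partial_b) % p

print(multisig == commit(v, k_a + k_b))  # True
```

The homomorphic structure is what makes the rest of the refund arithmetic below work: commitments add, so blinding-factor shares add.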

In the equations that follow Alice's and Bob's contributions are denoted by subscripts $_{a}$ and $_{b}$ respectively; $f$ is the fee and $\mathcal{X}$ is the excess. Note that blinding factors denoted by $\hat{k}$, $k^{\prime}$ and $k^{\prime\prime}$, and values denoted by $v^{\prime}$ and $v^{\prime\prime}$, have a special purpose, discussed later, so that $\hat{k}_{N_{a}} \neq k_{N_{a}} \neq k^{\prime} \neq k^{\prime\prime}_{N_{a}}$ and $v_{N_{a}} \neq v^{\prime}_{N_{a}} \neq v^{\prime\prime}_{N_{a}}$.

Funding Transaction

The parties collaborate to create the multiparty UTXO (i.e. commitment and associated multiparty range proof), combined on‑chain funding transaction (Figure 1) and an initial refund transaction for each party (off‑chain). All refund transactions have a relative time lock in their kernel, referencing the kernel of the original combined funding transaction, which has to be confirmed on the blockchain.

Figure 1: Laser Beam Funding for Round (N)

The initial funding transaction between Alice and Bob is depicted in (1). The lock height in the signature challenge corresponds to the current blockchain height. Input commitment values and blinding factors are identified by superscript $^{\prime\prime}$.

$$ \begin{aligned} -\text{Inputs}(0)+\text{MultiSig}(0)+\text{fee} &= \text{Excess}(0) \\ -\Big((v^{\prime\prime}_{0_{a}}H+k^{\prime\prime}_{0_{a}}G)+(v^{\prime\prime}_{0_{b}}H+k^{\prime\prime}_{0_{b}}G)\Big)+\Big(v_{0}H+(k_{0_{a}}+k_{0_{b}})G\Big)+fH &= \mathcal{X}_{0} \end{aligned} \tag{1} $$
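The balance in (1) can be checked numerically. In the sketch below (toy modular arithmetic in place of curve points; all values are invented), the $H$ terms cancel because the input values equal the MultiSig value plus the fee, leaving an excess that commits to zero value, i.e. a pure blinding-factor term:

```python
# Numeric check of the funding equation (1):
#   -Inputs(0) + MultiSig(0) + fee*H = Excess(0)

p = 2**31 - 1
G, H = 7, 11                       # toy generators, illustrative only

f = 1                              # fee
v_in_a, v_in_b = 60, 41            # Alice's and Bob's input values
k_in_a, k_in_b = 111, 222          # input blinding factors
v0 = v_in_a + v_in_b - f           # MultiSig value: inputs minus fee
k0_a, k0_b = 333, 444              # MultiSig blinding-factor shares

inputs = (v_in_a * H + k_in_a * G + v_in_b * H + k_in_b * G) % p
multisig = (v0 * H + (k0_a + k0_b) * G) % p

excess = (-inputs + multisig + f * H) % p

# The H terms cancel, so the excess carries only blinding factors:
print(excess == ((k0_a + k0_b - k_in_a - k_in_b) * G) % p)  # True
```

If the values did not balance, the excess would retain an $H$ component and the transaction would fail verification.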

Alice and Bob also need to set up their own respective refund transactions so they can be compensated should the channel never be used; this is performed via a refund procedure. A refund procedure (off‑chain) consists of four parts, whereby each user creates two transactions: one kept partially secret (discussed below) and the other shared. Each partially secret transaction creates a different intermediate multiparty UTXO, which is then used as the input to the corresponding shared transaction; both shared transactions spend to the same set of outputs, one for each participant.

All consecutive refund procedures work in exactly the same way. In the equations that follow, double subscripts $_{AA}$, $_{AB}$, $_{BA}$ and $_{BB}$ have the following meaning: the first letter and the second letter indicate who controls the transaction and who created the value, respectively. Capitalized use of $R$ and $P$ denotes public nonce and public blinding factor respectively. The $N_{\text{th}}$ refund procedure is as follows:

Refund Procedure

Alice - Part 1

Figure 2: Laser Beam Refund Part 1 for Round (N)

Alice and Bob set up Alice's intermediate MultiSig funding transaction (Figure 2), spending the original funding MultiSig UTXO. The lock height $h_{N}$ corresponds to the current blockchain height.

$$ \begin{aligned} -\text{MultiSig}(0)+\text{MultiSig}(N)_{A}+\text{fee} & =\text{Excess}(N)_{A1} \\ -\Big(v_{0}H+(k_{0_{a}}+k_{0_{b}})G\Big) + \Big((v_{0}-f)H+(\hat{k}_{N_{a}}+k_{N_{b}})G\Big) + fH &= \mathcal{X}_{N_{A1}} \end{aligned} \tag{2} $$

They collaborate to create $\text{MultiSig}(N)_{A}$, its Bulletproof range proof, the signature challenge and Bob's portion of the signature, $s_{N_{AB1}}$. Alice does not share the final kernel and thus keeps her part of the aggregated signature, $s_{N_{AA1}}$, hidden.

$$ \begin{aligned} \mathcal{X}_{N_{A1}} &= (-k_{0_{a}}+\hat{k}_{N_{a}})G+(-k_{0_{b}}+k_{N_{b}})G \\ &= P_{N_{AA1}}+P_{N_{AB1}} \end{aligned} \tag{3} $$

$$ \begin{aligned} \text{Challenge:}\quad\mathcal{H}(R_{N_{AA1}}+R_{N_{AB1}}\parallel P_{N_{AA1}}+P_{N_{AB1}}\parallel f\parallel h_{N}) \end{aligned} \tag{4} $$

$$ \begin{aligned} \text{Final signature tuple, kept secret:}\quad(s_{N_{AA1}}+s_{N_{AB1}},R_{N_{AA1}}+R_{N_{AB1}}) \end{aligned} \tag{5} $$

$$ \begin{aligned} \text{Kernel of this transaction:}\quad\mathcal{K}_{N_{AA1}} \end{aligned} \tag{6} $$

Bob - Part 1

Alice and Bob set up Bob's intermediate MultiSig funding transaction (Figure 2), also spending the original funding MultiSig UTXO. The lock height $h_{N}$ again corresponds to the current blockchain height.

$$ \begin{aligned} -\text{MultiSig}(0)+\text{MultiSig}(N)_{B}+\text{fee} & =\text{Excess}(N)_{B1} \\ -\Big(v_{0}H+(k_{0_{a}}+k_{0_{b}})G\Big) + \Big((v_{0}-f)H+(k_{N_{a}}+\hat{k}_{N_{b}})G\Big) + fH &= \mathcal{X}_{N_{B1}} \end{aligned} \tag{7} $$

They collaborate to create $\text{MultiSig}(N)_{B}$, its Bulletproof range proof, the signature challenge and Alice's portion of the signature, $s_{N_{BA1}}$. Bob does not share the final kernel and thus keeps his part of the aggregated signature, $s_{N_{BB1}}$, hidden.

$$ \begin{aligned} \mathcal{X}_{N_{B1}} &= (-k_{0_{a}}+k_{N_{a}})G+(-k_{0_{b}}+\hat{k}_{N_{b}})G \\ &= P_{N_{BA1}}+P_{N_{BB1}} \end{aligned} \tag{8} $$

$$ \begin{aligned} \text{Challenge:}\quad\mathcal{H}(R_{N_{BA1}}+R_{N_{BB1}}\parallel P_{N_{BA1}}+P_{N_{BB1}}\parallel f\parallel h_{N}) \end{aligned} \tag{9} $$

$$ \begin{aligned} \text{Final signature tuple, kept secret:}\quad(s_{N_{BA1}}+s_{N_{BB1}},R_{N_{BA1}}+R_{N_{BB1}}) \end{aligned} \tag{10} $$

$$ \begin{aligned} \text{Kernel of this transaction:}\quad\mathcal{K}_{N_{BB1}} \end{aligned} \tag{11} $$

Alice - Part 2

Figure 3: Laser Beam Refund Part 2 for Round (N)

Alice and Bob set up a refund transaction (Figure 3), which Alice controls, with a relative time lock $h_{rel}$ to the intermediate funding transaction's kernel, $\mathcal{K}_{N_{AA1}}$. Output commitment values and blinding factors are identified by superscript $^{\prime}$.

$$ \begin{aligned} -\text{MultiSig}(N)_{A}+\text{Outputs}(N)+\text{fee} & =\text{Excess}(N)_{A2} \\ -\Big((v_{0}-f)H+(\hat{k}_{N_{a}}+k_{N_{b}})G\Big)+\Big((v_{N_{a}}^{\prime}H+k_{N_{a}}^{\prime}G)+(v_{N_{b}}^ {\prime}H+k_{N_{b}}^{\prime}G)\Big)+fH &=\mathcal{X}_{N_{A2}} \\ \end{aligned} \tag{12} $$

They collaborate to create the challenge and the aggregated signature. Because the final kernel $\mathcal{K}_{N_{AA1}}$ is kept secret, only its hash is shared. Alice shares this transaction's final kernel with Bob. The signature challenge is determined as follows:

$$ \begin{aligned} \mathcal{X}_{N_{A2}} &= (-\hat{k}_{N_{a}}+k_{N_{a}}^{\prime})G + (-k_{N_{b}}+k_{N_{b}}^{\prime})G \\ &= P_{N_{AA2}}+P_{N_{AB2}} \\ \end{aligned} \tag{13} $$

$$ \begin{aligned} \text{Challenge:}\quad\mathcal{H}(R_{N_{AA2}}+R_{N_{AB2}}\parallel P_{N_{AA2}}+P_{N_{AB2}}\parallel f \parallel\mathcal{H}(\mathcal{K}_{N_{AA1}})\parallel h_{rel}) \end{aligned} \tag{14} $$
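The role of the hashed kernel in challenges like (14) can be sketched as follows. The byte encodings, field order and lengths below are illustrative assumptions, not Beam's actual serialization: the point is that the secret kernel enters the challenge only through its hash, so the counterparty can recompute the challenge without ever seeing the kernel itself.

```python
# Sketch of assembling a signature challenge in the shape of (14).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def challenge(R_sum: bytes, P_sum: bytes, fee: int,
              kernel_hash: bytes, h_rel: int) -> bytes:
    # H(R || P || f || H(kernel) || h_rel), with invented encodings
    return h(R_sum + P_sum
             + fee.to_bytes(8, "big")
             + kernel_hash
             + h_rel.to_bytes(8, "big"))

secret_kernel = b"kernel K_N_AA1 (never shared)"   # Alice keeps this private
kernel_hash = h(secret_kernel)                     # only the hash is shared

R_sum, P_sum = b"\x01" * 32, b"\x02" * 32          # placeholder aggregated points
e_alice = challenge(R_sum, P_sum, 1, kernel_hash, 144)
e_bob = challenge(R_sum, P_sum, 1, kernel_hash, 144)
print(e_alice == e_bob)  # True: both parties derive the same challenge
```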

Bob - Part 2

Alice and Bob set up a refund transaction (Figure 3), which Bob controls, with a relative time lock $h_{rel}$ to the intermediate funding transaction's kernel, $\mathcal{K}_{N_{BB1}}$. Output commitment values and blinding factors are identified by superscript $^{\prime}$.

$$ \begin{aligned} -\text{MultiSig}(N)_{B}+\text{Outputs}(N)+\text{fee} & =\text{Excess}(N)_{B2} \\ -\Big((v_{0}-f)H+(k_{N_{a}}+\hat{k}_{N_{b}})G\Big)+\Big((v_{N_{a}}^{\prime}H+k_{N_{a}}^{\prime}G)+(v_{N_{b}}^ {\prime}H+k_{N_{b}}^{\prime}G)\Big)+fH &=\mathcal{X}_{N_{B2}} \\ \end{aligned} \tag{15} $$

They collaborate to create the challenge and the aggregated signature. Because the final kernel $\mathcal{K}_{N_{BB1}}$ is kept secret, only its hash is shared. Bob shares this transaction's final kernel with Alice. The signature challenge is determined as follows:

$$ \begin{aligned} \mathcal{X}_{N_{B2}} &= (-k_{N_{a}}+k_{N_{a}}^{\prime})G + (-\hat{k}_{N_{b}}+k_{N_{b}}^{\prime})G \\ &= P_{N_{BA2}}+P_{N_{BB2}} \end{aligned} \tag{16} $$

$$ \begin{aligned} \text{Challenge:}\quad\mathcal{H}(R_{N_{BA2}}+R_{N_{BB2}}\parallel P_{N_{BA2}}+P_{N_{BB2}}\parallel f \parallel\mathcal{H}(\mathcal{K}_{N_{BB1}})\parallel h_{rel}) \end{aligned} \tag{17} $$

Revoke Previous Refund

Whenever the individual balances in the channel change, a new refund procedure is negotiated, revoking previous agreements (Figure 4).

Figure 4: Laser Beam Revoke for Round (N)

Revoking refund transactions involves revealing blinding factor shares for the intermediate multiparty UTXOs, thereby nullifying their further use. After the four parts of the refund procedure have been concluded successfully, the previous round's blinding factor shares $\hat{k}_{(N-1)}$ are revealed to each other, in order to revoke the previous agreement.


$$ \begin{aligned} \text{MultiSig}(N-1)_{A}:\quad\Big((v_{0}-f)H+(\hat{k}_{(N-1)_{a}}+k_{(N-1)_{b}})G\Big) & \quad & \lbrace\text{Alice's commitment}\rbrace \\ \hat{k}_{(N-1)_{a}} & \quad & \lbrace\text{Alice shares with Bob}\rbrace \\ \end{aligned} \tag{18} $$


$$ \begin{aligned} \text{MultiSig}(N-1)_{B}:\quad\Big((v_{0}-f)H+(k_{(N-1)_{a}}+\hat{k}_{(N-1)_{b}})G\Big) & \quad & \lbrace\text{Bob's commitment}\rbrace \\ \hat{k}_{(N-1)_{b}} & \quad & \lbrace\text{Bob shares with Alice}\rbrace \end{aligned} \tag{19} $$

Note that although the kernels for transactions (2) and (7) were kept secret, the resulting values of those MultiSig Pedersen commitments were revealed to the counterparty when the Bulletproof range proofs for $\text{MultiSig}(N-1)_{A}$ and $\text{MultiSig}(N-1)_{B}$ were constructed. Each party is thus able to verify the counterparty's blinding factor share by reconstructing the counterparty's MultiSig Pedersen commitment.

Alice verifies:

$$ \begin{aligned}
\Big((v_{0}-f)H+(k_{(N-1)_{a}}+\hat{k}_{(N-1)_{b}})G\Big) \overset{?}{=} C(v_{0}-f,\ k_{(N-1)_{a}}+ \hat{k}_{(N-1)_{b}}) \end{aligned} \tag{20} $$

Bob verifies:

$$ \begin{aligned} \Big((v_{0}-f)H+(\hat{k}_{(N-1)_{a}}+k_{(N-1)_{b}})G\Big) \overset{?}{=} C(v_{0}-f,\ \hat{k}_{(N-1)_{a}}+ k_{(N-1)_{b}}) \end{aligned} \tag{21} $$
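Checks (20) and (21) amount to recomputing the counterparty's MultiSig commitment from the revealed share. A toy sketch (modular arithmetic in place of curve operations; all values are invented):

```python
# Sketch of the revoke check: verify a revealed blinding-factor share by
# reconstructing the counterparty's MultiSig Pedersen commitment.

p = 2**31 - 1
G, H = 7, 11                  # toy generators, illustrative only

def commit(v, k):
    return (v * H + k * G) % p

v0, f = 100, 1
k_a = 555                     # Alice's own share in Bob's MultiSig (known to her)
k_hat_b = 777                 # Bob's special share, revealed during revocation

# The commitment itself was already known from the range-proof construction:
bobs_multisig = ((v0 - f) * H + k_a * G + k_hat_b * G) % p

# Alice's check (20): does the revealed share reproduce the commitment?
print(commit(v0 - f, k_a + k_hat_b) == bobs_multisig)        # True
# A wrong share fails the check:
print(commit(v0 - f, k_a + (k_hat_b + 1)) == bobs_multisig)  # False
```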

Although each party will now have its counterparty's blinding factor share in the counterparty's intermediate multiparty UTXO, it will still not be able to spend that UTXO, because the corresponding transaction kernel is still kept secret by the counterparty. The previous round's corresponding transaction, (2) or (7), with a fully signed transaction kernel, was never published on the blockchain.

Punishment Transaction

If a counterparty decides to broadcast a revoked set of refund transactions, and if the honest party is actively monitoring the blockchain and able to detect the attempted foul play, a punishment transaction can immediately be constructed before the relative time lock $h_{rel}$ expires. Whenever any of the counterparty's intermediate multiparty UTXOs, $\text{MultiSig}(N-m)\ \text{for}\ 0<m<N$, becomes available in the blockchain, the honest party can spend all the funds to its own output, because it knows the total blinding factor and the counterparty does not.

Channel Closure

Whenever the parties agree to a channel closure, the original on‑chain multiparty UTXO, $\text{MultiSig}(0)$, is spent to their respective outputs in a collaborative transaction. In the event of a single party deciding to close the channel unilaterally for whatever reason, its latest refund transaction is broadcast, effectively closing the channel. In either case, the parties have to wait for the relative time lock $h_{rel}$ to expire before being able to claim their funds.

Opening a channel requires one collaborative funding transaction on the blockchain. Closing a channel involves each party broadcasting its respective portion of the refund transaction to the blockchain, or collaborating to broadcast a single settlement transaction. A round-trip open channel, multiple off‑chain spending and close channel thus involves, at most, three on‑chain transactions.

Conclusions, Observations and Recommendations

  1. MultiSig

    The Laser Beam MultiSig corresponds to a Mimblewimble $2\text{-of-}2$ Multiparty Bulletproof UTXO, as described here. There is more than one method for creating the MultiSig's associated Bulletproof range proof, and utilizing the Bulletproofs MPC Protocol will enable wallet reconstruction. It may also make information sharing, while constructing the Bulletproof range proof, more secure.

  2. Linked Transactions

    Part 2 of the refund procedure requires a kernel with a relative time lock referencing the kernel of its corresponding part 1 refund transaction, while those kernels are not yet available in the base layer, as well as a signature challenge that differs from the standard form. Metadata about the linked transaction kernel and the non-standard signature challenge must therefore be embedded within part 2 refund transaction kernels.

  3. Refund Procedure

    In the event that, for round $N$, Alice or Bob decides to stop negotiations after their respective part 1 has been concluded and that transaction has been broadcast, the funding UTXO, $\text{MultiSig}(0)$, would be replaced by the respective $\text{MultiSig}(N)$. The channel will still be open, but the new funding UTXO cannot be spent unilaterally, as its blinding factor is shared. Any new updates of the channel will then need to be based on $\text{MultiSig}(N)$ as the funding UTXO. Note that this cannot happen if the counterparties construct transactions for part 1 and part 2, conclude part 2 by signing it, and only then sign part 1.

    In the event that for round $N$, Alice or Bob decides to stop negotiations after their respective part 1 and part 2 have been concluded and those transactions have been broadcast, it will effectively be a channel closure.

  4. Revoke Attack Vector

    When revoking the previous refund $(N-1)$, Alice can get hold of Bob's blinding factor share $\hat{k}_{(N-1)_{b}}$ (19), and after verifying that it is correct (20), refuse to give up her blinding factor share. This will leave Alice with the ability to broadcast any of refund transactions $(N-1)$ and $N$, without fear of Bob broadcasting a punishment transaction. Bob, on the other hand, will only be able to broadcast refund transaction $N$.


Appendix A: Notation Used

  • Let $p$ be a large prime number, $\mathbb{Z}_{p}$ denote the ring of integers $\mathrm{mod} \text{ } p$, and $\mathbb{F}_{p}$ be the group of elliptic curve points.

  • Let $G\in\mathbb{F}_{p}$ be a random generator point (base point) and let $H\in\mathbb{F}_{p}$ be specially chosen so that the value $x_{H}$ satisfying $H=x_{H}G$ cannot be found, unless the Elliptic Curve Discrete Logarithm Problem (ECDLP) is solved.

  • Let commitment to value $v\in\mathbb{Z}_{p}$ be determined by calculating $C(v,k)=(vH+kG)$, which is called the Elliptic Curve Pedersen Commitment (Pedersen Commitment), with $k\in\mathbb{Z}_{p}$ (the blinding factor) a random value.

  • Let scalar multiplication be depicted by $\cdot$, e.g. $e\cdot(vH+kG)=e\cdot vH+e\cdot kG$.


[1] J. A. Odendaal, "Layer 2 Scaling Survey - Tari Labs University" [online]. Available: Date accessed: 2019‑10‑06.

[2] J. Poon and T. Dryja (2016), "The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments" [online]. Available: Date accessed: 2019‑07‑04.

[3] "GitHub: lightningnetwork/lightning-rfc: Lightning Network Specifications" [online]. Available: Date accessed: 2019‑07‑04.

[4] X. Wang, "What is the situation of Litecoin's Lightning Network now?" [online]. Available: Date accessed: 2019‑09‑11.

[5] The Beam Team, "GitHub: Lightning Network - BeamMW/beam Wiki" [online]. Available: Date accessed: 2019‑07‑05.

[6] F. Jahr, "Beam - Lightning Network Position Paper. (v 1.0)" [online]. Available: Date accessed: 2019‑07‑04.

[7] F. Jahr, "GitHub: fjahr/lightning-mw, Lightning Network Specifications" [online]. Available: Date accessed: 2019‑07‑04.

[8] The Beam Team, "GitHub: beam/node/laser_beam_demo at master - BeamMW/beam" [online]. Available: Date accessed: 2019‑07‑05.

[9] The Beam Team, "GitHub: beam/ecc_bulletproof.cpp at mainnet - BeamMW/beam" [online]. Available: Date accessed: 2019‑07‑05.

[10] D. Smith, N. Kohen and C. Stewart, "Lightning 101 for Exchanges" [online]. Available: Date accessed: 2019‑11‑06.


Merged Mining


The purpose of merged mining is to allow the mining of more than one cryptocurrency without necessitating additional Proof-of-Work (PoW) effort [1].


  • From BitcoinWiki: Merged mining is the act of using work done on another block chain (the Parent) on one or more Auxiliary block chains and to accept it as valid on its own chain, using Auxiliary Proof-of-Work (AuxPoW), which is the relationship between two block chains for one to trust the other's work as their own. The Parent block chain does not need to be aware of the AuxPoW logic as blocks submitted to it are still valid blocks [2].

  • From CryptoCompare: Merged mining is the process of allowing two different crypto currencies based on the same algorithm to be mined simultaneously. This allows low hash powered crypto currencies to increase the hashing power behind their network by bootstrapping onto more popular crypto currencies [3].


[1] "Merged Mining" [online]. Conference Paper. Available: Date accessed: 2019‑06‑10.

[2] BitcoinWiki: "Merged Mining Specification" [online]. Available: Date accessed: 2019‑06‑10.

[3] CryptoCompare: "What is merged mining – Bitcoin & Namecoin – Litecoin & Dogecoin?" [online]. Available: Date accessed: 2019‑06‑10.

Merged Mining Introduction

What is Merged Mining?

Merged mining is the act of using work done on another blockchain (the Parent) on one or more Auxiliary blockchains, and accepting it as valid on the Auxiliary chain, using Auxiliary Proof-of-Work (AuxPoW), which is the relationship between two blockchains whereby one trusts the other's work as its own. The Parent blockchain does not need to be aware of the AuxPoW logic, as blocks submitted to it are still valid blocks [1].

As an example, the structure of merged mined blocks in Namecoin and Bitcoin is shown here [25]:

A transaction set is assembled for both blockchains. The hash of the AuxPoW block header is then inserted in the "free" bytes region (coinbase field) of the coinbase transaction and submitted to the Parent blockchain's Proof-of-Work (PoW). If the merge miner solves the block at the difficulty level of either blockchain or both blockchains, the respective block(s) are reassembled with the completed PoW and submitted to the correct blockchain. In the case of the Auxiliary blockchain, the Parent's block hash, Merkle tree branch and coinbase transaction are inserted in the Auxiliary block's AuxPoW header. This is to prove that enough work that meets the difficulty level of the Auxiliary blockchain was done on the Parent blockchain ([1], [2], [25]).
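The AuxPoW flow described above can be sketched as follows: one PoW is computed over a parent header whose coinbase embeds the auxiliary block hash, and the same result is judged against each chain's target independently. The hash layout, header format and target values below are invented for illustration and do not match any chain's actual serialization.

```python
# Sketch of merged mining: one PoW, two difficulty checks.
import hashlib

def dsha256(data: bytes) -> int:
    """Double SHA256, interpreted as a big-endian integer."""
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

aux_header = b"auxiliary-block-header"
aux_hash = hashlib.sha256(aux_header).digest()

parent_target = 2**250            # harder target (Parent difficulty)
aux_target = 2**254               # easier target (Auxiliary difficulty)

nonce = 0
while True:
    coinbase = b"parent-coinbase|" + aux_hash        # aux hash in the "free" bytes
    parent_header = coinbase + nonce.to_bytes(8, "big")
    pow_value = dsha256(parent_header)
    solves_parent = pow_value < parent_target
    solves_aux = pow_value < aux_target
    if solves_aux:                                   # one PoW, judged by both chains
        break
    nonce += 1

# The easier auxiliary target is met first; the same work may or may not
# also meet the parent's stricter target.
print("aux solved:", solves_aux, "| parent also solved:", solves_parent)
```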

The propagation of Parent and Auxiliary blocks is totally independent and only governed by each chain's difficulty level. As an example, the following diagram shows how this can play out in practice with Namecoin and Bitcoin when the Parent difficulty (DBTC) is more than the Auxiliary difficulty (DNMC). Note that BTC block 2' did not become part of the Parent blockchain propagation.

Merged Mining with Multiple Auxiliary Chains

A miner can use a single Parent to perform merged mining on multiple Auxiliary blockchains. The Merkle tree root of a Merkle tree that contains the block hashes of the Auxiliary blocks as leaves must then be inserted in the Parent's coinbase field as shown in the following diagram. To prevent double spending attacks, each Auxiliary blockchain must specify a unique ID that can be used to derive the leaf of the Merkle tree where the respective block hash must be located [25].
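The ID-derived leaf position can be sketched as below. The slot rule (`chain_id mod n_leaves`) is an illustrative assumption, not any specific chain's consensus rule; it shows why a miner cannot commit two competing blocks for the same auxiliary chain within one parent block.

```python
# Sketch of merged mining with multiple auxiliary chains: each chain ID
# fixes the Merkle leaf its block hash must occupy.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

n_leaves = 4
leaves = [b"\x00" * 32] * n_leaves

def place(chain_id: int, block_hash: bytes):
    slot = chain_id % n_leaves               # unique, ID-derived position
    if leaves[slot] != b"\x00" * 32:
        raise ValueError("slot already used: would be a double proof")
    leaves[slot] = block_hash

place(chain_id=7, block_hash=h(b"aux-chain-7-block"))
place(chain_id=2, block_hash=h(b"aux-chain-2-block"))

# A second block for chain 7 cannot be committed in this parent block:
try:
    place(chain_id=7, block_hash=h(b"competing-block"))
    ok = False
except ValueError:
    ok = True
print("double proof rejected:", ok)

# The Merkle root that goes into the parent's coinbase field:
level = leaves
while len(level) > 1:
    level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
root = level[0]
```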

Merged Mining - Interesting Facts and Case Studies

Namecoin (#307) with Bitcoin (#1)

  • Namecoin, the first fork of Bitcoin, introduced merged mining with Bitcoin [1] from block 19,200 onwards [3]. At the time of writing (May 2018), the block height of Namecoin was greater than 400,500 [4].
  • Over the five-day period from 23 May 2018 to 27 May 2018, only 226 out of 752 blocks posted transaction values over and above the block reward of 25 NMC, with an average transaction value of 159.231 NMC, including the block reward [4].
  • Slush Pool merged mining Namecoin with Bitcoin rewards all miners with BTC equivalent to NMC via an external exchange service [5].
  • P2pool, Multipool, Slush Pool, Eligius and F2pool are cited as top Namecoin merged mining pools [6].

| @ 2018-05-30 | Bitcoin [16] | Namecoin [16] | Ratio |
|---|---|---|---|
| Block time target (s) | 600 | 600 | 100.00% |
| Hash rate (Ehash/s) | 31.705 | 21.814 | 68.80% |
| Blocks count | 525,064 | 400,794 | 76.33% |

Dogecoin (#37) with Litecoin (#6)

  • Dogecoin introduced merged mining with Litecoin [8] from block 371,337 onwards [9]. At the time of writing (May 2018), the block height of Dogecoin was greater than 2,240,000 [10].
  • Many in the Dogecoin user community believe merged mining with Litecoin saved Dogecoin from a 51% attack [8].

| @ 2018-05-30 | Litecoin [16] | Dogecoin [16] | Ratio |
|---|---|---|---|
| Block time target (s) | 150 | 60 | 40.00% |
| Hash rate (Thash/s) | 311.188 | 235.552 | 75.69% |
| Blocks count | 1,430,517 | 2,241,120 | 156.67% |

Huntercoin (#779) with Bitcoin (#1) or Litecoin (#6)

  • Huntercoin was released as a live experimental test to see how blockchain technology could handle full-on game worlds [22].
  • Huntercoin was originally designed to be supported for only one year, but development and support will continue [22].
  • Players are awarded coins for gaming, making Huntercoin the world's first human-mineable cryptocurrency.
  • Coin distribution: 10 coins per block, nine for the game world and one for the miners [22].

| @ 2018-06-01 | Huntercoin |
|---|---|
| Block time target (s) | 120 |
| Blockchain size (GB) | 17 |
| Pruned blockchain size (GB) | 0.5 |
| Blocks count | 2,291,060 |
| PoW algorithm (for merged mining) | SHA256, Scrypt |

Myriad (#510) with Bitcoin (#1) or Litecoin (#6)

  • Myriad is the first currency to support five PoW algorithms and claims its multi-PoW algorithm approach offers exceptional 51% resistance [23].
  • Myriad introduced merged mining from block 1,402,791 onwards [24].

| @ 2018-06-01 | Myriad |
|---|---|
| Block time target (s) | 60 |
| Blockchain size (GB) | 2.095 |
| Blocks count | 2,442,829 |
| PoW algorithm (for merged mining) | SHA256d, Scrypt |
| PoW algorithm (others) | Myr-Groestl, Skein, Yescrypt |
  • Some solved multi-PoW block examples follow:

    • Myriad-1
    • Myriad-2
    • Myriad-3

Monero (#12)/DigitalNote (#166) + FantomCoin (#1068)

  • FantomCoin was the first CryptoNote-based coin to develop merged mining with Monero, but was abandoned until DigitalNote developers became interested in merged mining with Monero and revived FantomCoin in October 2016 ([17], [18], [19]):

 FantomCoin Release notes 2.0.0
 - Fantomcoin 2.0 by XDN-project, major FCN update to the latest
   cryptonotefoundation codebase
 - New FCN+XMR merged mining
 - Default block size - 100Kb

 DigitalNote Release notes 4.0.0-beta
 - EmPoWering XDN network security with merged mining with any CryptoNote
 - Second step to the PoA with the new type of PoW merged mining blocks
  • DigitalNote and FantomCoin merged mining with Monero is now limited to the recent CryptoNight-based Monero forks, such as Monero Classic and Monero Original, after Monero's hard fork to CryptoNight v7. (Refer to Attack Vectors.)

| @ 2018-05-31 | Monero [16] | DigitalNote [16] | Ratio |
|---|---|---|---|
| Block time target (s) | 120 | 240 | 200.00% |
| Hash rate (Mhash/s) | 410.804 | 13.86 | 3.37% |
| Blocks count | 1,583,869 | 660,075 | 41.67% |

| @ 2018-05-31 | Monero [16] | FantomCoin [16] | Ratio |
|---|---|---|---|
| Block time target (s) | 120 | 60 | 50.00% |
| Hash rate (Mhash/s) | 410.804 | 19.29 | 4.70% |
| Blocks count | 1,583,869 | 2,126,079 | 134.23% |

Some Statistics

Merge-mined blocks in some cryptocurrencies on 18 June 2017 [24]:


  • The Auxiliary blockchain's target block times can be smaller than, equal to or larger than those of the Parent blockchain.
  • The Auxiliary blockchain's hash rate is generally smaller than, but of the same order of magnitude as that of, the Parent blockchain.
  • A multi-PoW algorithm approach may further enhance 51% resistance.

Attack Vectors

51% Attacks

  • 51% attacks are real and relevant today. Bitcoin Gold (rank #28 @ 2018‑05‑29) and Verge (rank #33 @ 2018‑05‑29) recently suffered such attacks, with double-spend transactions following ([11], [12]).

  • In a conservative analysis, successful attacks on PoW cryptocurrencies are more likely when dishonest entities control more than 25% of the total mining power [24].

  • Tari tokens are envisaged to be merged mined with Monero [13]. The Monero blockchain security is therefore important to the Tari blockchain.

  • Monero recently (6 April 2018) introduced a hard fork with upgraded PoW algorithm CryptoNight v7 at block height 1,546,000 to maintain its Application Specific Integrated Circuit (ASIC) resistance and hence guard against 51% attacks. The Monero team proposes changes to their PoW every scheduled fork (i.e. every six months) ([14], [15]).

  • An interesting question arises regarding what needs to happen to the Tari blockchain if the Monero blockchain is hard forked. Since the CryptoNight v7 hard fork, the network hash rate for Monero has hovered around approximately 500 MH/s, whereas in the two months immediately prior it was approximately 1,000 MH/s [20]. Thus roughly 50% of the hash power can be ascribed to ASICs and botnet miners.
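The 25% and 51% thresholds above can be made concrete with the standard catch-up probability from the Bitcoin white paper: an attacker holding fraction q of the hash power eventually erases an honest lead of z blocks with probability (q/p)^z, where p = 1 - q, and with certainty once q ≥ 0.5. A minimal illustrative sketch (not part of the cited analysis [24]):

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with hash-power share q ever erases
    an honest-chain lead of z blocks (gambler's-ruin analysis from the
    Bitcoin white paper)."""
    p = 1.0 - q
    if q >= p:          # at 50% or more, the attacker succeeds eventually
        return 1.0
    return (q / p) ** z

# A 25% attacker can still occasionally reverse a 6-block-deep payment:
print(round(catch_up_probability(0.25, 6), 4))   # ~0.0014
# At 51%, eventual success is certain - the classic 51% attack:
print(catch_up_probability(0.51, 6))             # 1.0
```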


NiceHash statistics for CryptoNight v7 [21] show a lag of two days for approximately 100,600 miners to get up to speed with providing the new hashing power after the Monero hard fork.

The Tari blockchain will have to fork together with or just after a scheduled Monero fork. The Tari blockchain will be vulnerable to ASIC miners until it has been forked.

Double Proof

  • A miner could cheat the PoW system by putting more than one Auxiliary block header into one Parent block [7].
  • Multiple Auxiliary blocks could be competing for the same PoW, which could subject the Auxiliary blockchain to nothing-at-stake attacks if the chain is forked, maliciously or by accident, with consequent attempts to reverse transactions ([7], [26]).
  • More than one Auxiliary blockchain will be merge-mined with Monero.
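One defence on the auxiliary chain is to accept at most one auxiliary header commitment per parent block, rejecting the double proof described above. A hypothetical sketch (the data structures and field names are illustrative, not taken from any particular implementation):

```python
# Hypothetical double-proof check on an auxiliary chain: two auxiliary
# block headers claiming PoW from the same parent block indicate that a
# miner reused one parent proof for more than one auxiliary block.

def is_double_proof(aux_headers: list[dict]) -> bool:
    """True if two or more auxiliary headers cite the same parent block
    (identified here by its hash) as their proof of work."""
    seen_parents = set()
    for header in aux_headers:
        parent = header["parent_block_hash"]
        if parent in seen_parents:
            return True
        seen_parents.add(parent)
    return False

headers = [
    {"height": 100, "parent_block_hash": "abc"},
    {"height": 101, "parent_block_hash": "abc"},  # same parent PoW reused
]
print(is_double_proof(headers))  # True
```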

Analysis of Mining Power Centralization Issues

With reference to [24] and [25]:

  • In Namecoin, F2Pool reached and maintained a majority of the mining power for prolonged periods.
  • Litecoin has experienced slight centralization since mid-2014, caused by Clevermining and F2Pool, among others.
  • In Dogecoin, F2Pool was responsible for generating more than 33% of the blocks per day for significant periods, even exceeding the 50% threshold around the end of 2016.
  • Huntercoin was instantly dominated by F2Pool and remained in this state until mid-2016.
  • Myriadcoin appears to have experienced only a moderate impact. Multi-merge-mined blockchains allow for more than one parent cryptocurrency and have a greater chance of acquiring a higher difficulty per PoW algorithm than the respective parent blockchain.
  • Distribution of overall percentage of days below or above the centralization indicator thresholds on 18 June 2017 was as follows:

Introduction of New Attack Vectors

With reference to [24] and [25]:

  • Miners can generate blocks for the merge-mined child blockchains at almost no additional cost, enabling attacks without risking financial losses.
  • Merged mining as an attack vector works both ways, as parent cryptocurrencies cannot easily prevent being merge-mined by auxiliary blockchains.
  • Merged mining can increase the hash rate of auxiliary blockchains, but it is not conclusively successful as a bootstrapping technique.
  • Empirical evidence suggests that only a small number of mining pools are involved in merged mining, and they enjoy block shares beyond the desired security and decentralization goals.


[1] "Merged Mining Specification" [online]. Available: Date accessed: 2018‑05‑28.

[2] "How does Merged Mining Work?" [online]. Available: Date accessed: 2018‑05‑28.

[3] "Merged-Mining.mediawiki" [online]. Available: Date accessed: 2018‑05‑28.

[4] " - Blockchain Explorer (NMC)" [online]. Available: Date accessed: 2018‑05‑28.

[5] "SlushPool Merged Mining" [online]. Available: Date accessed: 2018‑05‑28.

[6] "5 Best Namecoin Mining Pools of 2018 (Comparison)" [online]. Available: Date accessed: 2018‑05‑28.

[7] "Alternative Chain" [online]. Available: Date accessed: 2018‑05‑28.

[8] "Merged Mining AMA/FAQ" [online]. Available: Date accessed: 2018‑05‑29.

[9] "The Forkening is Happening at ~9:00AM EST" [online]. Available: Date accessed: 2018‑05‑29.

[10] "Dogecoin Blockchain Explorer" [online]. Available: Date accessed: 2018‑05‑29.

[11] "Bitcoin Gold Hit by Double Spend Attack, Exchanges Lose Millions" [online]. Available: Date accessed: 2018‑05‑29.

[12] "Privacy Coin Verge Succumbs to 51% Attack (Again)" [online]. Available: Date accessed: 2018‑05‑29.

[13] "Tari Official Website" [online]. Available: Date accessed: 2018‑05‑29.

[14] "Monero Hard Forks to Maintain ASIC Resistance, but ‘Classic’ Hopes to Spoil the Party" [online]. Available: Date accessed: 2018‑05‑29.

[15] "PoW Change and Key Reuse" [online]. Available: Date accessed: 2018‑05‑29.

[16] "BitInfoCharts" [online]. Available: Date accessed: 2018‑05‑30.

[17] "Merged Mining with Monero" [online]. Available: Date accessed: 2018‑05‑30.

[18] "ANN DigitalNote |XDN| - ICCO Announce - NEWS" [online]. Available: Date accessed: 2018‑05‑31.

[19] "DigitalNote xdn-project" [online]. Available: Date accessed: 2018‑05‑31.

[20] "Monero Charts" [online]. Available: Date accessed: 2018‑05‑31.

[21] "Nicehash Statistics for CryptoNight v7" [online]. Available: Date accessed: 2018‑05‑31.

[22] "Huntercoin: A Blockchain based Game World" [online]. Available: Date accessed: 2018‑06‑01.

[23] "Myriad: A Coin for Everyone" [online]. Available: Date accessed: 2018‑06‑01.

[24] "Merged Mining: Curse or Cure?" [online]. Available: Date accessed: 2019‑02‑12.

[25] "Merged Mining: Analysis of Effects and Implications" [online]. Available: Date accessed: 2019‑02‑12.

[26] "Problems - Consensus - 8. Proof of Stake" [online]. Available: Date accessed: 2018‑06‑05.


Digital Assets


"All digital assets provide value, but not all digital assets are valued equally" [1].


Digital Assets

  • From Simplicable: A digital asset is something that has value and can be owned but has no physical presence [2].

  • From The Digital Beyond: A digital asset is content that is stored in digital form or an online account, owned by an individual. The associated digital data are classified as intangible, personal property, as long as they stay digital, otherwise they quickly become tangible personal property [3].

  • From Wikipedia: A digital asset, in essence, is anything that exists in a binary format and comes with the right to use. Data that do not possess that right are not considered assets. Digital assets come in many forms and may be stored on many types of digital appliances that are, or will be, in existence once technology progresses to accommodate the conception of new modalities able to carry digital assets, notwithstanding the proprietorship of the physical device on which the digital asset is located [4].

Non-fungible Tokens

  • From Wikipedia: A non-fungible token (NFT) is a special type of cryptographic token which represents something unique; non-fungible tokens are thus not interchangeable. This is in contrast to cryptocurrencies like Bitcoin, and many network or utility tokens that are fungible in nature. NFTs are used to create verifiable digital scarcity, for example in several specific applications that require unique digital items like crypto-collectibles and crypto-gaming [5].

  • From BTC Manager: Non-fungible tokens (NFTs) are blockchain tokens that are designed to be distinguishable from each other. Using unique metadata, avatars, individual token IDs, and custody chains, NFTs are created to ensure that no two NFT tokens are identical. This is because they exist to store information rather than value, unlike their fungible counterparts [6].


[1] "What Exactly is a Digital Asset & How to Get the Most Value from Them?" [online]. Available: Date accessed: 2019‑06‑10.

[2] "11 Examples of Digital Assets" [online]. Available: Date accessed: 2019‑06‑10.

[3] J. Romano, "A Working Definition of Digital Assets" [online]. The Digital Beyond. Available: Date accessed: 2019‑06‑10.

[4] Wikipedia: "Digital Asset" [online]. Available: Date accessed: 2019‑06‑10.

[5] Wikipedia: "Non-fungible Token" [online]. Available: Date accessed: 2019‑06‑10.

[6] BTC Manager: "Non-fungible Tokens: What Are They?" [online]. Available: Date accessed: 2019‑06‑10.

Application of Howey to Blockchain Network Token Sales


Many blockchain networks use, or intend to use, cryptographic tokens ("tokens" or "digital assets") for various purposes. These include incentivizing network participants to contribute computing power to the network, and granting access to certain goods, services or other network functionality. Some promoters of such networks sell tokens, or the right to receive tokens once the network has launched, and use the sales proceeds to finance development and maintenance of the networks.

The U.S. Securities and Exchange Commission (SEC) has determined that many of the token offerings and sales over the past few years have been in violation of federal securities laws. In some cases, the SEC has taken action against the promoters of the offerings.

Refer to the following Orders under the Order Instituting Cease-and-Desist Proceedings Pursuant to [1, Section 8A]:

  • Making Findings, and Imposing a Cease-and-Desist Order against Munchee Inc. (the Munchee Order);
  • Making Findings, and Imposing Penalties and a Cease-and-Desist Order against CarrierEQ, Inc., D/B/A Airfox (the Airfox Order);
  • Making Findings, and Imposing Penalties and a Cease-and-Desist Order against Paragon Coin, Inc. (the Paragon Order); and
  • Making Findings, and Imposing a Cease-and-Desist Order against Gladius Network LLC (the Gladius Order).

In each case, the SEC's decision to take action has been underpinned by its determination that the digital assets that were offered and sold were securities pursuant to Section 2(a)(1) of the Securities Act of 1933 - the "Securities Act" or "Act" [1]. Under [1, Section 5], it is generally unlawful for any person, directly or indirectly, to offer or sell a security without complying with the registration requirement of Section 5, unless the securities offering qualifies for an exemption from registration.

The sanctions for a violation of [1, Section 5] can be significant. They include a preliminary and permanent injunction; rescission; disgorgement; prejudgment interest; and civil money penalties. It is important that those seeking to promote token offerings and sales do so only after ensuring that the tokens to be sold are not securities under [1], or that the offering, if it involves securities, complies with [1, Section 5], or otherwise qualifies for an exemption from registration thereunder.

While the Securities Act [1], relevant case law and administrative proceedings, and guidance and statements of the SEC and its officials shed much light on many of the considerations involved in an analysis of whether a given digital asset is a security, the facts and circumstances relating to each token, token offering and sale are typically quite unique. Many who seek to undertake token offerings and sales struggle to achieve clarity regarding whether such offerings and sales must be registered under [1], or have to fit within an exemption from the Act's registration requirements. As a result, there is significant regulatory uncertainty surrounding the offering and sale of tokens.

The Commission recognizes the need for more guidance in this area and continues to assist those seeking to achieve a greater understanding of whether the U.S. federal securities laws apply to the offer or sale of a particular digital asset. This report:

  • Reviews [2] (the "Framework"), which was recently published by the SEC's Strategic Hub for Innovation and Financial Technology, along with previous statements by William Hinman, the Director of the SEC's Division of Corporation Finance. Note: The Framework represents the views of SEC Staff. It is not a rule, regulation or statement of the SEC, and it is not binding on the Divisions of the SEC or the SEC itself.

  • Explains how [2] is indicative of the SEC's approach to applying the investment contract test laid out in [3] to digital assets. The SEC adopted this test, commonly referred to as the Howey test [4], and has applied the test in subsequent actions involving digital assets.f1 Note: [3, Section 21(a)] authorizes the SEC to investigate violations of the federal securities laws and, in its discretion, to "publish information concerning any such violations". The Report does not constitute an adjudication of any fact or issue, nor does it make any findings of violations by any individual or entity.

In particular, this report examines some of the SEC Staff's statements relating to the fourth prong of the Howey test, i.e. the "efforts of others", and discusses factors for promoters of token offerings to consider when assessing whether a token purchaser may have a reasonable expectation of earning profits from the significant efforts of others. It should be noted that this report is in no way intended to constitute or be relied upon as legal advice, or to substitute for obtaining competent legal advice from an experienced attorney.

What is a Security?

Section 2(a)(1) of the Securities Act [1] defines a security as follows:

The term "security" means any note, stock, treasury stock, security future, security-based swap, bond, debenture, evidence of indebtedness, certificate of interest or participation in any profit-sharing agreement, collateral-trust certificate, preorganization certificate or subscription, transferable share, investment contract, voting-trust certificate, certificate of deposit for a security, fractional undivided interest in oil, gas, or other mineral rights, any put, call, straddle, option, or privilege on any security, certificate of deposit, or group or index of securities (including any interest therein or based on the value thereof), or any put, call, straddle, option, or privilege entered into on a national securities exchange relating to foreign currency, or, in general, any interest or instrument commonly known as a "security", or any certificate of interest or participation in, temporary or interim certificate for, receipt for, guarantee of, or warrant or right to subscribe to or purchase, any of the foregoing.

While blockchain network tokens (as well as a wide variety of other instruments) are not explicitly included in the definition of "security" under the Act, courts have abstracted the common elements of an "investment contract", which is included in the definition of "security" under Section 2(a)(1) (and Section 3(a)(10) of the Securities Exchange Act of 1934, which is contained in [3]), to establish a "flexible rather than a static principle, one that is capable of adaptation to meet the countless and variable schemes devised by those who seek the use of the money of others on the promise of profits" [4, page 299].

Accordingly, a determination of whether an instrument not specifically enumerated under Section 2(a)(1) of the Act may be deemed to be a security that implicates the federal securities laws must include an analysis of whether it is an investment contract under Howey.

What is an Investment Contract (i.e. Howey Test)?

Howey (and its progeny) established the framework of a four-part test to determine whether an instrument is an investment contract and therefore a security subject to the U.S. federal securities laws. Howey provides that an investment contract exists if there is:

  1. an investment of money;
  2. into a common enterprise;
  3. with a reasonable expectation of profits;
  4. derived from the entrepreneurial or managerial efforts of others.

For an instrument to be an investment contract, each element of the Howey test must be met. If any element of the test is not met, the instrument is not an investment contract.
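Because the test is conjunctive, it behaves like a logical AND over the four prongs. A toy illustration of that structure (purely illustrative, and in no way legal advice):

```python
# Toy model of the conjunctive Howey test: an instrument is an
# investment contract only if ALL four prongs are satisfied.

def is_investment_contract(investment_of_money: bool,
                           common_enterprise: bool,
                           expectation_of_profits: bool,
                           from_efforts_of_others: bool) -> bool:
    return all([investment_of_money, common_enterprise,
                expectation_of_profits, from_efforts_of_others])

# Failing any single prong means the instrument is not an investment
# contract, e.g. when profits do not derive from the efforts of others:
print(is_investment_contract(True, True, True, False))  # False
print(is_investment_contract(True, True, True, True))   # True
```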

Fourth Prong of Howey Test - Efforts of Others

While the "efforts of others" prong of the Howey test is, at some level, no more important in an application of Howey than any of the other prongs, it is frequently the prong on which the most uncertainty hangs. The "efforts of others" is often the focus when it comes to public blockchain networks. This is because the decentralization of control that many such projects seek to foster prompts the following question: do the kind and degree of the existing decentralization mean that any expectation of profits token purchasers may have does not come from the efforts of others for purposes of Howey? The determination that token purchasers reasonably expected profits to come from the efforts of a centralized (or at least coordinated) person or group has been central in the SEC's findings that the tokens were securities in each of the recent SEC enforcement actions relating to token offerings and sales.f1

Decentralization, Consumptive Purpose and Priming Purchasers' Expectations


On 14 June 2018, Hinman delivered a speechf2 addressing whether "a digital asset that was originally offered in a securities offering [can] ever later be sold in a manner that does not constitute an offering of a security", and noted two cases where he believed this was indeed possible:

  • "where there is no longer any central enterprise being invested in", e.g. purchases of the digital assets related to a decentralized enterprise or network; and

  • "where the digital asset is sold only to be used to purchase a good or service available through the network on which it was created", e.g. purchases of digital assets for a consumptive purpose.

He posed a set of six questions directly related to the application of the "efforts of others" prong of the Howey test to offerings and sales of digital assets:

  1. Is there a person or group that has sponsored or promoted the creation and sale of the digital asset, the efforts of whom play a significant role in the development and maintenance of the asset and its potential increase in value?

  2. Has this person or group retained a stake or other interest in the digital asset such that it would be motivated to expend efforts to cause an increase in value in the digital asset? Would purchasers reasonably believe such efforts will be undertaken and may result in a return on their investment in the digital asset?

  3. Has the promoter raised an amount of funds in excess of what may be needed to establish a functional network, and, if so, has it indicated how those funds may be used to support the value of the tokens or to increase the value of the enterprise? Does the promoter continue to expend funds from proceeds or operations to enhance the functionality and/or value of the system within which the tokens operate?

  4. Are purchasers 'investing', that is, seeking a return? In that regard, is the instrument marketed and sold to the general public instead of to potential users of the network, for a price that reasonably correlates with the market value of the good or service in the network?

  5. Does application of the Securities Act protections make sense? Is there a person or entity others are relying on that plays a key role in the profit-making of the enterprise such that disclosure of their activities and plans would be important to investors? Do informational asymmetries exist between the promoters and potential purchasers/investors in the digital asset?

  6. Do persons or entities other than the promoter exercise governance rights or meaningful influence?

(He also posed a separate set of seven questions exploring "contractual or technical ways to structure digital assets so they function more like a consumer item and less like a security").

Hinman noted that these questions are useful to consider when assessing the facts and circumstances surrounding offerings and sales of digital assets to determine "whether a third party - be it a person, entity or coordinated group of actors - drives the expectation of a return". If such a party does drive an expectation of a return on a purchased digital asset, i.e. priming purchasers' expectations, there is a greater likelihood that the asset will be deemed to be an investment contract and therefore a security.

In the Framework [2], the SEC Staff similarly focuses attention on whether a purchaser of a digital asset has a reasonable expectation of profits derived from the efforts of others. These three themes, namely decentralization, consumptive purpose and priming purchasers' expectations, provide useful context for much of its discussion. The following offers for consideration selected implications of the Framework's [2] guidance and Hinman's speech, as industry participants seek a greater understanding of whether or not a digital asset is a security.


Decentralization

The Framework [2] reinforces that the degree and nature of a promoter's involvement will have a bearing on the Howey analysis. The facts and circumstances relating to a promoter are key. If a promoter does not play a significant role in the "development, improvement (or enhancement), operation, or promotion of the network" [2, Section 3] underlying the tokens, it cuts against finding that the "efforts of others" prong has been met. A promoter may seek to play a more limited role in these areas. Or, along similar lines, influence over the "essential tasks and responsibilities" of the network may be widely dispersed, i.e. decentralized, among many unaffiliated network stakeholders, so that there is no identifiable "person or group" that continues to play a significant role, especially as compared to the role that the dispersed stakeholders play [2, Section 4].f3

The SEC has placed particular attention on promoter efforts to impact a token's supply and/or demand and has also focused on a promoter's efforts to use the proceeds from a token offering to create an ecosystem that will drive demand for the tokens once the network is functional.f1 Further, the SEC has singled out promoter efforts to maintain a token's price by intervening in the buying and selling of tokens, separate from developing and maintaining the underlying network.

Further, the SEC's Senior Advisor for Digital Assets, Valerie Szczepanik, noted promoter efforts in this area may implicate U.S. securities law, recently stating she has "seen stablecoins that purport to control price through some kind of pricing mechanism… controlled through supply and demand in some way to keep the price within a certain band", and "[W]here there is one central party controlling the price fluctuation over time, [that] might be getting into the land of securities" [6].

In contrast to the kinds of efforts that meet Howey's fourth prong, to the extent that a token's value is driven by market forces (supply and demand conditions), it suggests that the value of the token is attributable to factors other than the promoter's efforts. This is assuming that the promoter does not put some mechanism in place to manage token supply or demand in order to maintain a token's price.

In addition, decentralization of control calls into question whether the application of the Securities Act [1] makes sense to begin with. A main purpose of [1] is to ensure that securities issuers "tell the public the truth about their businesses, the securities they are selling, and the risks involved in investing" [5]. Typically, the issuer of a security plays a key role in the success of the enterprise. Investors rely on the experience, judgment and skill of the enterprise's management team and board of directors to drive profits. The disclosure requirements of [1] focus primarily on issuers of securities. A decentralized network, where no central person or entity plays a key role, however, challenges both identification of a party who should be subject to the disclosure obligations of [1], and the information expected to be disclosed. Indeed, speaking about Bitcoin, Hinman acknowledged as much, stating he "[does] not see a central third party whose efforts are a key determining factor in the enterprise. The network on which Bitcoin functions is operational and appears to have been decentralized for some time. Applying the disclosure regime of the federal securities laws to the offer and resale of Bitcoin would seem to add little value". Decentralization ultimately invokes consideration of whether application of [1] truly serves its purpose, as well as the practical realities of how doing so would even work.

Consumptive Purpose

Token purchasers who plan to use their tokens to access a network's functionality typically are not speculating that the tokens will increase in value. The finding that a token was offered to potential purchasers in a manner inconsistent with a consumptive purpose has been a factor in several of the SEC's recent orders, for example:

  • From paragraph 18 of the Munchee Order [1, Section 8A]: "[The] marketing did not use the Munchee App or otherwise specifically target current users of the Munchee App to promote how purchasing MUN tokens might let them qualify for higher tiers and bigger payments on future reviews... Instead, Munchee and its agents promoted the MUN token offering in forums aimed at people interested in investing in Bitcoin and other digital assets."

  • From paragraph 16 of the Airfox Order [1, Section 8A]: "AirFox primarily aimed its promotional efforts for the initial coin offering at digital token investors rather than anticipated users of AirTokens."

Accordingly, a promoter may seek to limit the offering and sale of tokens to prospective network users. A promoter may also seek to ensure that, in both its content and intended audience, the marketing, and other communication related to the token offering, is consistent with consumption being the reason for purchasing tokens. The SEC Staff has indicated that the argument that purchasers are acquiring tokens for consumption will be stronger once the network is functional and tokens can actually be used to purchase goods or access services through the network.

For example:

  • Reference [2, Section 9] states that it is less likely the Howey test is met if various characteristics are present, including "[h]olders of the digital asset are immediately able to use it for its intended functionality on the network".

  • Paragraph 7 of the Airfox Order [1, Section 8A] states "The terms of AirFox's initial coin offering purported to require purchasers to agree that they were buying AirTokens for their utility as a medium of exchange for mobile airtime, and not as an investment or a security. At the time of the ICO, this functionality was not available… Despite the reference to AirTokens as a medium of exchange, at the time of the ICO, investors purchased AirTokens based upon anticipation that the value of the tokens would rise through AirFox's future managerial and entrepreneurial efforts."

Promoters may also consider whether "restrictions on the transferability of the digital asset are consistent with the asset's use and not facilitating a speculative market" [2, Section 10].

Priming Purchasers' Expectations

The SEC has given much attention to whether token purchasers were led to expect that the promoter would make efforts to increase token value. In this respect, in bringing enforcement actions, the SEC has highlighted statements made by promoters that specifically tout the opportunity to profit from purchasing a token.

For example, from paragraph 17 of the Munchee Order [1, Section 8A]:

Munchee made public statements or endorsed other people's public statements that touted the opportunity to profit. For example… Munchee created a public posting on Facebook, linked to a third-party YouTube video, and wrote '199% GAINS on MUN token at ICO price! Sign up for PRE‑SALE NOW!' The linked video featured a person who said 'Today we are going to talk about Munchee. Munchee is a crazy ICO… Pretty much, if you get into it early enough, you'll probably most likely get a return on it.' This person… 'speculate[d]' that a $1,000 investment could create a $94,000 return.

Promotional statements explaining that tokens would be listed on digital asset exchanges for secondary trading also draw attention. Refer to the following examples.

  • From paragraph 13 of the Munchee Order [1, Section 8A]: Munchee stated it would work to ensure that MUN holders would be able to sell their MUN tokens on secondary markets, saying that "Munchee will ensure that MUN token is available on a number of exchanges in varying jurisdictions to ensure that this is an option for all token-holders";

  • From paragraph 22 of the Gladius Order [1, Section 8A]: During and following the offering, Gladius attempted to make GLA Tokens available for trading on major digital asset trading platforms. On Gladius Web Pages, Gladius principals and agents stated that "[w]e've been approached by some of the largest exchanges, they're very interested," and represented that the GLA Token would be available to trade on 'major' trading platforms after the ICO.

  • From [2, page 5], which states that it is more likely that there is a reasonable expectation of profit if various characteristics are present, including if the digital asset is marketed using "[t]he availability of a market for the trading of the digital asset, particularly where the [promoter] implicitly or explicitly promises to create or otherwise support a trading market for the digital asset".

Secondary trading on exchanges provides token holders an alternative to using the token on the network and, especially when facilitated and highlighted by the promoter, can support a finding of investment intent for token purchasers, i.e. buying with an eye on reselling for a profit instead of consuming goods or services through the network.

The Framework [2, Section 6] states that it is more likely there is a reasonable expectation of profit if various characteristics are present, including if "the digital asset is offered broadly to potential purchasers as compared to being targeted to expected users of the goods or services or those who have a need for the functionality of the network", and/or "[t]he digital asset is offered and purchased in quantities indicative of investment intent instead of quantities indicative of a user of the network".

The Framework [2] also notes that purchasers may have an expectation that the promoter will "undertake efforts to promote its own interests and enhance the value of the network or digital assets" [2, Section 5]. Token purchasers who are not aware of the promoter's interest in the network's success, such as through the promoter having retained a stake in the tokens, should not have any reason to expect the promoter to undertake any efforts to drive token value in order to increase its own profits.


Parties seeking to undertake offerings and sales of tokens in compliance with U.S. securities laws may look to the various actions and statements of the SEC for guidance as to whether the token is a security under the Act. As tokens are not specifically enumerated under the definition of "security", the analysis hinges on the application of the Howey test to determine if the token is an "investment contract" under the Act, and therefore a security. The decentralization of control many public blockchain projects seek to foster places Howey's "efforts of others" prong as a key area of focus. Questions regarding the application of this prong generally relate to:

  • decentralization of the enterprise underlying the tokens;
  • consumptive purpose of token purchasers; and
  • whether token purchasers were led to have an expectation that the promoter would take efforts to drive token value.

This report has offered insight into some of the factors that promoters of token offerings may consider in assessing whether the kind and degree of decentralization that exists means that any expectation of profits token purchasers may have does not come from the efforts of others. It is also indicative of the SEC's approach to evaluating the offer and sale of digital assets.


f1:    For examples, refer to paragraph 32 of the Munchee Order and paragraph 21 of the Airfox Order and accompanying text.
Paragraph 32 of Munchee Order: "MUN token purchasers had a reasonable expectation of profits from their investment in the Munchee enterprise. The proceeds of the MUN token offering were intended to be used by Munchee to build an "ecosystem" that would create demand for MUN tokens and make MUN tokens more valuable".
Paragraph 21 of Airfox Order: "AirFox told investors that the company would improve the AirFox App, add new functionality, enter into agreements with third-party telecommunication companies, and take other steps to encourage the use of AirTokens and foster the growth of the ecosystem. Investors reasonably expected they would profit from the success of AirFox's efforts to grow the ecosystem and the concomitant rise in the value of AirTokens".

f2:    William Hinman, Director of the Division of Corp. Fin., SEC, Remarks at the Yahoo Finance All Markets Summit: Crypto: Digital Asset Transactions: When Howey Met Gary (Plastic) (14 June 2018).

f3:    Reference [2, Section 4] states that it is more likely that the purchaser of a digital asset is relying on the efforts of others if various characteristics are present. This includes "[t]here are essential tasks or responsibilities performed and expected to be performed by [a promoter], rather than an unaffiliated, dispersed community of network users".


[1] "U.S. Congress. United States Code: Securities Act of 1933, 15 U.S.C. §§ 77a-77mm" [online]. Available: Date accessed: 2019‑03‑07.

[2] "Framework for 'Investment Contract' Analysis of Digital Assets" [online]. Available: Date accessed: 2019‑03‑10.

[3] "U.S. Congress. United States Code: Securities Exchanges, 15 U.S.C. §§ 78a-78jj, 1964" [online]. Available: Date accessed: 2019‑03‑07.

[4] "SEC v. W. J. Howey Co., 328 U.S. 293 (1946)" [online]. Available: Date accessed: 2019‑04‑05.

[5] "U.S. Securities and Exchange Commission, What We Do" [online]. Available: Date accessed: 2018‑03‑07.

[6] Guillermo Jimenez, SEC's Crypto Czar: "Stablecoins might be Violating Securities Laws", Decrypt (2019) [online]. Available: Date accessed: 2019‑03‑19.


Non-Fungible Tokens


Confidential Assets


Confidential assets, in the context of blockchain technology and blockchain-based cryptocurrencies, can have different meanings to different audiences, and can also be something totally different or unique depending on the use case. They are a special type of digital asset and inherit all of its properties, except that they are also confidential. Confidential assets therefore have value and can be owned, but have no physical presence. The confidentiality aspect implies that both the amount of assets owned and the asset type that was transacted in can be confidential. A further classification can be made with regard to whether they are fungible (interchangeable) or non-fungible (unique, not interchangeable). Confidential assets can only exist in the form of a cryptographic token, or a derivative thereof, that is itself cryptographically secure, at least under the Discrete Logarithm Problem (DLP) assumption.

The basis of confidential assets is confidential transactions as proposed by Maxwell [4] and Poelstra et al. [5], where the amounts transferred are kept visible only to participants in the transaction (and those they designate). Confidential transactions succeed in making the transaction amounts private, while still preserving the ability of the public blockchain network to verify that the ledger entries and Unspent Transaction Output (UTXO) set still add up. All amounts in the UTXO set are blinded, while preserving public verifiability. Poelstra et al. [5] showed how the asset types can also be blinded in conjunction with the output amounts. Multiple asset types can be accommodated within single transactions on the same blockchain.

This report investigates confidential assets as a natural progression of confidential transactions.


This section gives the general notation of mathematical expressions when specifically referenced in the report. This notation is important pre-knowledge for the remainder of the report.

  • Let $ p $ be a large prime number.
  • Let $ \mathbb G $ denote a cyclic group of prime order $ p $.
  • Let $ \mathbb Z_p $ denote the ring of integers modulo $ p $.
  • Let $ \mathbb F_p $ be a group of elliptic curve points over a finite (prime) field.
  • All references to Pedersen Commitment will imply Elliptic Curve Pedersen Commitment.

Basis of Confidential Assets

Confidential transactions, asset commitments and Asset Surjection Proofs (ASPs) form the basis of confidential assets. These concepts are discussed in the following sections.

Confidential Transactions

Confidential transactions are made confidential by replacing each explicit UTXO with a homomorphic commitment, such as a Pedersen Commitment, and made robust against overflow and inflation attacks by using efficient zero-knowledge range proofs, such as a Bulletproof [1].

Range proofs provide proof that a secret value, which has been encrypted or committed to, lies in a certain interval, e.g. that a number $ x \in [0,2^{64} - 1] $. They prevent any number from approaching the magnitude of the large prime group order, say $ 2^{256} $, where adding a small number could cause a wraparound.
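As a sketch of why this matters, the wraparound can be illustrated with ordinary modular arithmetic (a stand-in for the homomorphic commitments; the prime and the amounts are illustrative only):

```python
# Toy illustration (plain modular arithmetic, not elliptic-curve commitments)
# of the wraparound a range proof guards against. Amounts behave like values
# mod a large prime, so a "negative" output is indistinguishable from a huge
# positive one, and the ledger's balance check alone cannot catch it.
p = 2**255 - 19  # a large prime, chosen purely for illustration

inputs = [10]                 # a single 10-coin input
outputs = [15, (-5) % p]      # "-5" wraps around to the huge value p - 5

# The naive balance check passes, even though 15 spendable coins now exist:
assert sum(outputs) % p == sum(inputs) % p

# A range proof that every output lies in [0, 2^64 - 1] rejects the bad output:
in_range = [0 <= v < 2**64 for v in outputs]
assert in_range == [True, False]
```

The balance check alone is satisfied; only the range constraint exposes the inflationary output.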

Pedersen Commitments are perfectly hiding (an attacker with infinite computing power cannot tell what amount has been committed to) and computationally binding (no efficient algorithm running in a practical amount of time can produce fake commitments, except with small probability). The Elliptic Curve Pedersen Commitment to value $ x \in \mathbb Z_p $ has the following form:

$$ C(x,r) = xH + rG \tag{1} $$

where $ r \in \mathbb Z_p $ is a random blinding factor, $ G \in \mathbb F_p $ is a random generator point and $ H \in \mathbb F_p $ is specially chosen so that the value $ x_H $ satisfying $ H = x_H G $ cannot be found, except if the Elliptic Curve DLP (ECDLP) is solved. The point $ H $ is what is known as a Nothing Up My Sleeve (NUMS) number. With secp256k1, the value of $ H $ is the SHA256 hash of a simple encoding of the $ x $-coordinate of the generator point $ G $. The Pedersen Commitment scheme is implemented with three algorithms: Setup() to set up the commitment parameters $ G $ and $ H $; Commit() to commit to the message $ x $ with blinding factor $ r $, using the commitment parameters $ H $ and $ G $; and Open() to open and verify the commitment ([5], [6], [7], [8]).
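As a sketch only, the commitment in relation (1) and its additively homomorphic property can be mimicked in a small multiplicative group modulo a prime, where $ xH + rG $ becomes $ H^x G^r \bmod q $. The constants below are toy values, not secure parameters, and the assumption that the discrete log of $ H $ relative to $ G $ is unknown does not actually hold at this size:

```python
# Toy Pedersen commitment in a multiplicative group mod a prime, standing in
# for the elliptic-curve version of equation (1). Illustrative values only.
q = 101  # small prime; real schemes use ~256-bit groups
G = 2    # generator
H = 3    # second generator whose discrete log w.r.t. G is assumed unknown

def commit(x, r):
    # C(x, r) = H^x * G^r mod q, the multiplicative analogue of xH + rG
    return (pow(H, x, q) * pow(G, r, q)) % q

# Homomorphic property: the product of two commitments is a commitment to
# the sum of the values under the sum of the blinding factors.
c1 = commit(4, 7)
c2 = commit(6, 11)
assert (c1 * c2) % q == commit(4 + 6, 7 + 11)
```

This homomorphism is exactly what lets a verifier check that inputs and outputs balance without learning any amounts.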

Mimblewimble ([9], [10]) is built on these confidential transaction primitives and achieves confidentiality through them. If confidentiality is not sought, inputs may be given as explicit amounts, in which case the homomorphic commitment to the given amount will have a blinding factor $ r = 0 $.

Asset Commitments and Surjection Proofs

The different assets need to be identified and transacted with in a confidential manner, and proven not to be inflationary. This is made possible by using asset commitments and ASPs ([1], [14]).

Given some unique asset description $ A $, the associated asset tag $ H_A \in \mathbb G $ is calculated with the Pedersen Commitment function Setup(), using $ A $ as auxiliary input. (Selection of $ A $ is discussed in Asset Issuance.) Consider a transaction with two inputs and two outputs involving two distinct asset types $ A $ and $ B $:

$$ \begin{aligned} in_A = x_1H_A + r_{A_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_A = x_2H_A + r_{A_2}G \\ in_B = y_1H_B + r_{B_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_B = y_2H_B + r_{B_2}G \end{aligned} \tag{2} $$

In the equations that follow, a commitment to the value of $ 0 $ will be denoted by $ (\mathbf{0}) $, i.e. $C(0,r) = 0H + rG = (\mathbf{0})$. For relation (2) to hold, the sum of the outputs minus the sum of the inputs must be a commitment to the value of $ 0 $:

$$ \begin{aligned} (out_A + out_B) - (in_A + in_B) = (\mathbf{0}) \\ (x_2H_A + r_{A_2}G) + (y_2H_B + r_{B_2}G) - (x_1H_A + r_{A_1}G) - (y_1H_B + r_{B_1}G) = (\mathbf{0}) \\ (r_{A_2} + r_{B_2} - r_{A_1} - r_{B_1})G + (x_2 - x_1)H_A + (y_2 - y_1)H_B = (\mathbf{0}) \end{aligned} \tag{3} $$

Since $ H_A $ and $ H_B $ are both NUMS asset tags, the only way relation (3) can hold is if the total input and output amounts of asset $ A $ are equal, and if the same is true for asset $ B $. This concept can be extended to an unlimited number of distinct asset types, as long as each asset tag is a unique NUMS generator. The problem with relation (3) is that the asset type of each output is publicly visible; thus, the assets that were transacted in are not confidential. This can be solved by replacing each asset tag with a blinded version of itself. The asset commitment to asset tag $ H_A $ (the blinded asset tag) is then defined as the point

$$ H_{0_A} = H_A + rG \tag{4} $$

Blinding of the asset tag is necessary to make transactions in the asset, i.e. which asset was transacted in, confidential. The blinded asset tag $ H_{0_A} $ will then be used in place of the generator $ H $ in the Pedersen Commitments. Such Pedersen Commitments thus commit to the committed amount as well as to the underlying asset tag. Inspecting the Pedersen Commitment, it is evident that a commitment to the value $ x_1 $ using the blinded asset tag $ H_{0_A} $ is also a commitment to the same value using the asset tag $ H_A $:

$$ x_1H_{0_A} + r_{A_1}G = x_1(H_A + rG) + r_{A_1}G = x_1H_A + (r_{A_1} + x_1r)G \tag{5} $$
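The identity in relation (5) can be checked numerically in a toy additive model in which "points" are integers mod $ p $ and scalar multiplication is ordinary modular multiplication. The DLP is trivial in such a model, so this verifies only the algebra, not any security property; all values are illustrative:

```python
# Numeric check of equation (5): a commitment under the blinded asset tag
# H_0A = H_A + rG equals a commitment under H_A with adjusted blinding factor.
# "Points" here are plain integers mod p; scalar mult is modular multiplication.
p = 2**31 - 1
G = 7          # stand-in generator
H_A = 12345    # stand-in asset tag
r = 42         # asset-tag blinding factor
H_0A = (H_A + r * G) % p   # blinded asset tag, equation (4)

x1, r_A1 = 100, 555        # committed amount and amount blinding factor
lhs = (x1 * H_0A + r_A1 * G) % p
rhs = (x1 * H_A + (r_A1 + x1 * r) * G) % p
assert lhs == rhs          # both sides open to the same commitment
```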

Using blinded asset tags, the transaction in relation (2) then becomes:

$$ \begin{aligned} in_A = x_1H_{0_A} + r_{A_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_A = x_2H_{0_A} + r_{A_2}G \\ in_B = y_1H_{0_B} + r_{B_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_B = y_2H_{0_B} + r_{B_2}G \end{aligned} \tag{6} $$

Correspondingly, the zero sum rule translates to:

$$ \begin{aligned} (out_A + out_B) - (in_A + in_B) = (\mathbf{0}) \\ (x_2H_{0_A} + r_{A_2}G) + (y_2H_{0_B} + r_{B_2}G) - (x_1H_{0_A} + r_{A_1}G) - (y_1H_{0_B} + r_{B_1}G) = (\mathbf{0}) \\ (r_{A_2} + r_{B_2} - r_{A_1} - r_{B_1})G + (x_2 - x_1)H_{0_A} + (y_2 - y_1)H_{0_B} = (\mathbf{0}) \end{aligned} \tag{7} $$

However, using only the sum-to-zero rule, it is still possible to introduce negative amounts of an asset type. Consider the blinded asset tag

$$ H_{0_A} = -H_A + rG \tag{8} $$

Any amount of blinded asset tag $ H_{0_A} $ will correspond to a negative amount of asset $ A $, thereby inflating its supply. To solve this problem, a cryptographic proof called an ASP is introduced. In mathematics, a surjective function simply means that for every element $ y $ in the codomain $ Y $ of function $ f $, there is at least one element $ x $ in the domain $ X $ of function $ f $ such that $ f(x) = y $.

An ASP scheme provides a proof $ \pi $ for a set of input asset commitments $ [ H_i ] ^n_{i=1} $, an output commitment $ H = H_{\hat i} + rG $ for some $ \hat i \in \lbrace 1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} n \rbrace $ and blinding factor $ r $. It proves that every output asset type is the same as some input asset type, while blinding which outputs correspond to which inputs. Such a proof $ \pi $ is secure if it is a zero-knowledge proof of knowledge for the blinding factor $ r $. Let $ H_{0_{A1}} $ and $ H_{0_{A2}} $ be blinded asset tags that commit to the same asset tag $ H_A $:

$$ H_{0_{A1}} = H_A + r_1G \mspace{15mu} \mathrm{and} \mspace{15mu} H_{0_{A2}} = H_A + r_2G \tag{9} $$

Then the difference
$$ H_{0_{A1}} - H_{0_{A2}} = (H_A + r_1G) - (H_A + r_2G) = (r_1 - r_2)G \tag{10} $$

will be a signature key with secret key $ r_1 - r_2 $. Thus, for a transaction involving $ n $ distinct asset types, differences can be calculated between each output and all inputs, e.g. $ (out_A - in_A) \mspace{3mu} , \mspace{3mu} (out_A - in_B) \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} (out_A - in_n) $, and so on for all outputs. This has the form of a ring signature: if $ out_A $ has the same asset tag as one of the inputs, the transaction signer will know the secret key corresponding to one of these differences, and will be able to produce the ring signature. The ASP is based on the Back-Maxwell range proof (refer to Definition 9 of [1]), which uses a variation of Borromean ring signatures [18]. The Borromean ring signature, in turn, is a variant of the Abe-Ohkubo-Suzuki (AOS) ring signature [19]. An AOS ASP computes a ring signature that is equal to the proof $ \pi $ as follows:

  • Calculate $ n $ differences $ H - H_{\hat i } $ for $ \hat i = 1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} n $, one of which will be equal to $ rG $, with the blinding factor $ r $ as its secret key.
  • Calculate a ring signature $ S $ of an empty message using the $ n $ differences.

The resulting ring signature $ S $ is equal to the proof $ \pi $, and the ASP consists of this ring signature $ S $ ([1], [14]).
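The first step above can be sketched in the same toy additive model used earlier (integers mod $ p $ standing in for curve points): compute the $ n $ differences $ H - H_i $ and locate the one equal to $ rG $, whose secret key $ r $ the prover knows. A real ASP would prove this in zero knowledge via the ring signature; here the witness is merely exhibited, and all values are illustrative:

```python
# Sketch of the ASP difference computation in a toy additive model: "points"
# are integers mod p, scalar multiplication is modular multiplication, and
# the DLP is trivial, so this only demonstrates the structure of the proof.
p = 2**31 - 1
G = 7
input_tags = [1111, 2222, 3333]      # candidate input asset tags H_i
r = 99                               # output blinding factor
H = (input_tags[1] + r * G) % p      # output commits to the second input tag

diffs = [(H - Hi) % p for Hi in input_tags]
witness_index = diffs.index((r * G) % p)
assert witness_index == 1            # the prover knows r for exactly this slot
```

Because only one difference has a known secret key, a ring signature over all $ n $ differences convinces the verifier without revealing which input matched.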

Confidential Asset Scheme

Using the building blocks discussed in Basis of Confidential Assets, asset issuance, asset transactions and asset re-issuance can be performed in a confidential manner.

Asset Transactions

Confidential assets propose a scheme in which multiple non-interchangeable asset types can be supported within a single transaction. This all happens within one blockchain, and can theoretically improve the value of the blockchain by offering a service to more users. It can also enable extended functionality such as base-layer atomic asset trades. The latter implies Alice can offer Bob $ 100 $ of asset type $ A $ for $ 50 $ of asset type $ B $ in a single transaction, with each participant using a single wallet. In this case, no relationship between output asset types can be established or inferred, because all asset tags are blinded. Privacy is also increased: by avoiding multiple single-asset transactions, the blinded asset types add another dimension that must be unraveled before user identity and transaction data can be obtained. Such a confidential asset scheme simplifies verification, reduces complexity and reduces on-chain data. It also prevents censorship of transactions involving specific asset types, and blinds assets with low transaction volume, where users could otherwise be identified very easily [1].

Assets originate in asset-issuance inputs, which take the place of coinbase transactions in confidential transactions. The asset type to pay fees must be revealed in each transaction, but in practice, all fees could be paid in only one asset type, thus preserving privacy. Payment authorization is achieved by means of the input signatures. A confidential asset transaction consists of the following data:

  • A list of inputs, each of which can have one of the following forms:

    • a reference to an output of another transaction, with a signature using that output's verification key; or
    • an asset issuance input, which has an explicit amount and asset tag.
  • A list of outputs that contains:

    • a signature verification key;
    • an asset commitment $ H_0 $ with an ASP from all input asset commitments to $ H_0 $;
    • a Pedersen commitment to an amount using generator $ H_0 $ in place of $ H $, with the associated Back-Maxwell range proof.
  • A fee, listed explicitly as $ \lbrace (f_i , H_i) \rbrace_{i=1}^n $, where $ f_i $ is a non-negative scalar amount denominated in the asset with tag $ H_i $.

Every output has a range proof and an ASP associated with it, which are proofs of knowledge of the Pedersen commitment opening information and the asset commitment blinding factor. Every range proof can be considered as being with respect to the underlying asset tag $ H_A $, rather than the asset commitment $ H_0 $. The transaction then verifies like a confidential transaction restricted to inputs and outputs with asset tag $ H_A $, except that output commitments minus input commitments minus fee sum to a commitment to $ 0 $, instead of to the point $ 0 $ itself [1].

However, confidential assets come at an additional data cost. For a transaction with $ m $ outputs and $ n $ inputs, measured in the units of space used for confidential transactions, each asset commitment has size $ 1 $, each ASP has size $ n + 1 $, and the additional data for the entire transaction therefore has size $ m(n + 2) $ [1].

Asset Issuance

It is important to ensure that any auxiliary input $ A $ used to create asset tag $ H_A \in \mathbb G $ is only used once, to prevent inflation by means of many independent issuances. Associating a maximum of one issuance with the spending of a specific UTXO can ensure this uniqueness property. Poelstra et al. [5] suggest the use of a Ricardian contract [11] to be hashed together with the reference to the UTXO being spent. This hash can then be used to generate the auxiliary input $ A $ as follows. Let $ I $ be the input being spent (an unambiguous reference to a specific UTXO used to create the asset) and let $ \widehat {RC} $ be the issuer-specified Ricardian contract; then the asset entropy $ E $ is defined as

$$ E = \mathrm {Hash} ( \mathrm {Hash} (I) \parallel \mathrm {Hash} (\widehat {RC})) \tag{11} $$

The auxiliary input $ A $ is then defined as

$$ A = \mathrm {Hash} ( E \parallel 0) \tag{12} $$

Note that a Ricardian contract $ \widehat {RC} $ is not crucial for generating the entropy $ E $; it is only a suggestion, and some other unique NUMS value could be used in its stead. Ricardian contracts warrant a bit more explanation, and are discussed in Appendix B.

Every non-coinbase transaction input can have up to one new asset issuance associated with it. An asset issuance (or asset definition) transaction input then consists of the UTXO being spent, the Ricardian contract, either an initial issuance explicit value or a Pedersen commitment, a range proof and a Boolean field indicating whether reissuance is allowed ([1], [13]).

Asset Reissuance

The confidential asset scheme allows the asset owner to later increase or decrease the amount of the asset in circulation, provided that an asset reissuance token was generated together with the initial asset issuance. Given an asset entropy $ E $, the asset reissuance capability is the element (asset tag) $ H_{\hat A} \in \mathbb G $ obtained using an alternative auxiliary input $ \hat A $, defined as

$$ \hat A = \mathrm {Hash} ( E \parallel 1) \tag{13} $$

The resulting asset tag $ H_{\hat A} \in \mathbb G $ is linked to its reissuance capability, and the asset owner can assert their reissuance right by revealing the blinding factor $ r $ for the reissuance capability, along with the original asset entropy $ E $. An asset reissuance (or asset definition) transaction input then consists of the spend of a UTXO containing an asset reissuance capability, the original asset entropy, the blinding factor for the asset commitment of the UTXO being spent, either an explicit reissuance amount or a Pedersen commitment, and a range proof [1].

The same mechanism can be used to manage capabilities for other restricted operations, e.g. to decrease issuance, to destroy the asset, or to make the commitment generator the hash of a script that validates the spending transaction. It is also possible to change the name of the default asset that is created upon blockchain initialization and the default asset used to pay fees on the network ([1], [13]).
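The derivations in equations (11) to (13) can be sketched with a hash function such as SHA-256. The hash choice and the byte encodings below are assumptions for illustration; a real implementation would fix its own serialization, and the UTXO reference and contract text are hypothetical:

```python
import hashlib

# Sketch of equations (11)-(13): derive the asset entropy E from the spent
# UTXO reference I and the Ricardian contract RC, then the issuance auxiliary
# input A = Hash(E || 0) and the reissuance auxiliary input Â = Hash(E || 1).
def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

I = b"txid:0badc0de:vout:1"      # hypothetical reference to the UTXO being spent
RC = b"Ricardian contract text"  # hypothetical issuer-specified contract

E = H(H(I) + H(RC))              # equation (11): asset entropy
A_issue = H(E + b"\x00")         # equation (12): auxiliary input for issuance
A_reissue = H(E + b"\x01")       # equation (13): reissuance capability input

# The two capabilities are distinct tags derived from the same entropy:
assert A_issue != A_reissue
```

Because $ E $ commits to a spent UTXO, each entropy value (and hence each auxiliary input) can only ever be produced once, which is the uniqueness property the scheme relies on.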


ASPs prove that the asset commitments associated with outputs commit to legitimately issued asset tags. This feature allows compatible blockchains to support indefinitely many asset types, which may be added after the chain has been defined. There is room to adapt this scheme for optimal trade-off between ASP data size and privacy by introducing a global dynamic list of assets, whereby each transaction selects a subset of asset tags for the corresponding ASPs [1].

If all the asset tags are defined at the instantiation of the blockchain, this will be compatible with the Mimblewimble protocol. The range proofs used for the development of this scheme were based on the Back-Maxwell range proof scheme (refer to Definition 9 of [1]). Poelstra et al. [1] suggest more efficient range proofs, ASPs and use of aggregate range proofs. It is thus an open question whether Bulletproofs could fulfill this requirement.

Confidential Asset Implementations

Three independent implementations of confidential assets are summarized here. The first two implementations closely resemble the theoretical description given in [1], while the last adds Bulletproofs functionality to confidential assets.

Elements Project

Elements [31] is an open source, sidechain-capable blockchain platform, providing access to advanced features such as Confidential Transactions and Issued Assets in Github: ElementsProject/elements [16]. It allows digitizable collectables, reward points and attested assets (e.g. gold coins) to be realized on a blockchain. The main idea behind Elements is to serve as a research platform and testbed for changes to the Bitcoin protocol. The Elements project's implementation of confidential assets is called Issued Assets ([13], [14], [15]) and is based on its formal publication in [1].

The Elements project hosts a working demonstration (shown in Figure 2) of confidential asset transfers involving five parties in Github: ElementsProject/confidential-assets-demo [17]. The demonstration depicts a scenario where a coffee shop owner, Dave, charges a customer, Alice, for coffee in an asset called MELON. Alice does not hold enough MELON and needs to convert some AIRSKY into MELON, making use of an exchange operated by Charlie. The coffee shop owner, Dave, has a competitor, Bob, who is trying to gather information about Dave's sales. Due to the blockchain's confidential transactions and assets features, he will not be able to see anything useful by processing transactions on the blockchain. Fred is a miner and does not care about the detail of the transactions, but he makes blocks on the blockchain when transactions enter his miner mempool. The demonstration also includes generating the different types of assets.

Figure 2: Elements Confidential Assets Transfer Demonstration [17]

Chain Core Confidential Assets

Chain Core [20] is a shared, multi-asset, cryptographic ledger, designed for enterprise financial infrastructure. It supports the coexistence and interoperability of multiple types of assets on the same network in their Confidential Assets framework. Chain Core is based on [1], and available as an open source project in Github: chain/chain [21], which has been archived. It has been succeeded by Sequence, a ledger-as-a-service project, which enables secure tracking and transfer of balances in a token format ([22], [23]). Sequence offers a free plan for up to 1,000,000 transactions per month.

Chain Core implements all native features as defined in [1]. It was also working towards implementing ElGamal commitments into Chain Core to make its Confidential Assets framework quantum secure, but it is unclear if this effort was concluded at the time the project was archived ([24], [25]).


Cloak

Chain/Interstellar [26] introduced Cloak [29], a redesign of Chain Core's Confidential Assets framework to make use of Bulletproof range proofs [27]. It is available as an open source project in Github: interstellar/spacesuit [28]. Cloak is all about confidential asset transactions, called cloaked transactions, which exchange values of different asset types, called flavors. The protocol ensures that values are not transmuted to any other asset types, that quantities do not overflow, and that both quantities and asset types are kept secret.

A traditional Bulletproofs implementation converts an arithmetic circuit into a Rank-1 Constraint System (R1CS); Cloak bypasses arithmetic circuits and provides an Application Programming Interface (API) for building a constraint system directly. The R1CS API consists of a hierarchy of task-specific “gadgets”, and is used by the Prover and Verifier alike to allocate variables and define constraints. Cloak uses a collection of gadgets such as “shuffle”, “merge”, “split” and “range proof” to build a constraint system for cloaked transactions. All transactions of the same size are indistinguishable, because the layout of the gadgets is determined only by the number of inputs and outputs.

At the time of writing this report, the Cloak development was still ongoing.

Conclusions, Observations and Recommendations

  • The idea to embed a Ricardian contract in the asset tag creation as suggested by Poelstra et al. [1] warrants more investigation for a new confidential blockchain protocol such as Tari. Ricardian contracts could be used in asset generation in the probable second layer.

  • Asset commitments and ASPs are important cryptographic primitives for confidential asset transactions.

  • The Elements project implemented a range of useful confidential asset framework features and should be critically assessed for usability in a probable Tari second layer.

  • Cloak has the potential to take confidential assets implementation to the next level in efficiency and should be closely monitored. Interstellar is in the process of fully implementing and extending Bulletproofs for use in confidential assets.

  • If confidential assets are to be implemented in a Mimblewimble blockchain, all asset tags must be defined when the blockchain is instantiated; otherwise, the two will not be compatible.


[1] A. Poelstra, A. Back, M. Friedenbach, G. Maxwell and P. Wuille, "Confidential Assets", Blockstream [online]. Available: Date accessed: 2018‑12‑25.

[2] Wikipedia: "Discrete Logarithm" [online]. Available: Date accessed: 2018‑12‑20.

[3] A. Sadeghi and M. Steiner, "Assumptions Related to Discrete Logarithms: Why Subtleties Make a Real Difference" [online]. Available: Date accessed: 2018‑12‑24.

[4] G. Maxwell, "Confidential Transactions Write up" [online]. Available: Date accessed: 2018‑12‑10.

[5] A. Gibson, "An Investigation into Confidential Transactions", July 2018 [online]. Available: Date accessed: 2018‑12‑22.

[6] Pedersen-commitment: An Implementation of Pedersen Commitment Schemes [online]. Available: Date accessed: 2018‑12‑25.

[7] B. Franca, "Homomorphic Mini-blockchain Scheme", April 2015 [online]. Available: Date accessed: 2018‑12‑22.

[8] C. Franck and J. Großschädl, "Efficient Implementation of Pedersen Commitments Using Twisted Edwards Curves", University of Luxembourg [online]. Available: Date accessed: 2018‑12‑22.

[9] A. Poelstra, "Mimblewimble", October 2016 [online]. Available: Date accessed: 2018‑12‑13.

[10] A. Poelstra, "Mimblewimble Explained", November 2016 [online]. Available: Date accessed: 2018‑12‑10.

[11] I. Grigg, "The Ricardian Contract", First IEEE International Workshop on Electronic Contracting. IEEE (2004) [online]. Available: Date accessed: 2018‑12‑13.

[12] D. Koteshov, "Smart vs. Ricardian Contracts: What’s the Difference?" February 2018 [online]. Available: Date accessed: 2018‑12‑13.

[13] Elements by Blockstream: "Issued Assets - You can Issue your own Confidential Assets on Elements" [online]. Available: Date accessed: 2018‑12‑14.

[14] Elements by Blockstream: "Issued Assets - Investigation, Principal Investigator: Andrew Poelstra" [online]. Available: Date accessed: 2018‑12‑14.

[15] Elements by Blockstream: "Elements Code Tutorial - Issuing your own Assets" [online]. Available: Date accessed: 2018‑12‑14.

[16] Github: ElementsProject/elements [online]. Available: Date accessed: 2018‑12‑18.

[17] Github: ElementsProject/confidential-assets-demo [online]. Available: Date accessed: 2018‑12‑18.

[18] G. Maxwell and A. Poelstra, "Borromean Ring Signatures" (2015) [online]. Available: Date accessed: 2018‑12‑18.

[19] M. Abe, M. Ohkubo and K. Suzuki, "1-out-of-n Signatures from a Variety of Keys" [online]. Available: Date accessed: 2018‑12‑18.

[20] Chain Core [online]. Available: Date accessed: 2018‑12‑18.

[21] Github: chain/chain [online]. Available: Date accessed: 2018‑12‑18.

[22] Chain: "Sequence" [online]. Available: Date accessed: 2018‑12‑18.

[23] Sequence Documentation [online]. Available: Date accessed: 2018‑12‑18.

[24] O Andreev: "Hidden in Plain Sight: Transacting Privately on a Blockchain - Introducing Confidential Assets in the Chain Protocol", Chain [online]. Available: Date accessed: 2018‑12‑18.

[25] Blockchains in a Quantum Future - Protecting Against Post-Quantum Attacks on Cryptography [online]. Available: Date accessed: 2018‑12‑18.

[26] Inter/stellar website [online]. Available: Date accessed: 2018‑12‑22.

[27] C. Yun, "Programmable Constraint Systems for Bulletproofs" [online]. Available: Date accessed: 2018‑12‑22.

[28] Github: interstellar/spacesuit [online]. Available: Date accessed: 2018‑12‑18.

[29] Github: interstellar/spacesuit/ [online]. Available: Date accessed: 2018‑12‑18.

[30] Wikipedia: "Ricardian Contract" [online]. Available: Date accessed: 2018‑12‑21.

[31] Wikipedia: "Elements by Blockstream" [online]. Available: Date accessed: 2018‑12‑21.


Appendix A: Definition of Terms

Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.

  • Discrete Logarithm/Discrete Logarithm Problem (DLP): In the mathematics of real numbers, the logarithm $ \log_b a $ is a number $ x $ such that $ b^x=a $, for given numbers $ a $ and $ b $. Analogously, in any group $ G $, powers $ b^k $ can be defined for all integers $ k $, and the discrete logarithm $ \log_b a $ is an integer $ k $ such that $ b^k=a $. Algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen cyclic finite groups and cyclic subgroups of elliptic curves over finite fields has no efficient solution ([2], [3]).
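A brute-force search illustrates why the group must be large: finding $ k $ such that $ b^k = a $ naively takes time linear in the group size, which is infeasible for the roughly $ 2^{256} $-element groups used in practice. The tiny prime below is illustrative only:

```python
# Brute-force discrete logarithm in a tiny multiplicative group mod p:
# find k with b**k ≡ a (mod p) by trying every power in turn.
def discrete_log(b, a, p):
    x = 1
    for k in range(p - 1):
        if x == a:
            return k
        x = (x * b) % p
    return None  # a is not in the subgroup generated by b

p = 101          # tiny prime; real groups have on the order of 2^256 elements
b = 2            # a generator of the multiplicative group mod 101
k = discrete_log(b, pow(b, 37, p), p)
assert k == 37   # the exponent is recovered, but only because p is tiny
```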

Appendix B: Ricardian Contracts vs. Smart Contracts

A Ricardian contract is “a digital contract that defines the terms and conditions of an interaction, between two or more peers, that is cryptographically signed and verified, being both human and machine readable and digitally signed” [12]. With a Ricardian contract, the information from the legal document is placed in a format that can be read and executed by software. The cryptographic identification offers high levels of security. The main properties of a Ricardian contract are (also refer to Figure 1):

  • human readable;
  • document is printable;
  • program parsable;
  • all forms (displayed, printed and parsed) are manifestly equivalent;
  • signed by issuer;
  • can be identified securely, where security means that any attempts to change the linkage between a reference and the contract are impractical.

Figure 1: Bowtie Diagram of a Ricardian Contract [12]

Ricardian contracts are robust (due to identification by cryptographic hash functions), transparent (due to readable text for the legal prose) and efficient (due to computer markup language to extract the essential information) [30].

A smart contract is “a computerized transaction protocol that executes the terms of a contract. The general objectives are to satisfy common contractual conditions” [12]. With smart contracts, digital assets can be exchanged in a transparent and non-conflicting way. They provide trust. The main properties of a smart contract are:

  • self-executing (although it does not run unless someone initiates it);
  • immutable;
  • self-verifying;
  • auto-enforcing;
  • cost saving;
  • removes third parties or escrow agents.

It is possible to implement a Ricardian contract as a smart contract, but not in all instances. A smart contract is a pre-agreed digital agreement that can be executed automatically. A Ricardian contract records “intentions” and “actions” of a particular contract, no matter if it has been executed or not. Hashes of Ricardian contracts can refer to documents or executable code ([12], [30]).

The Ricardian contract design pattern has been implemented in several projects and is free of any intellectual property restrictions [30].


Blockchain-related Protocols


Users often take blockchain protocols for granted when analyzing the potential of a cryptocurrency. The different blockchain protocols used can play a prominent role in the success of a cryptocurrency [1].


  • From Wikipedia - 1: A blockchain is a growing list of records, called blocks, which are linked using cryptography. Each block contains a cryptographic hash of the previous block, a time stamp, and transaction data (generally represented as a Merkle tree root hash) [2].

  • From Wikipedia - 2: A security protocol (cryptographic protocol or encryption protocol) is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used. A sufficiently detailed protocol includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program [3].

  • From Market Protocol: A protocol can be seen as a methodology agreed upon by two or more parties, establishing a common set of rules under which they can enter into a binding agreement. Protocols are therefore the set of rules that govern the network. Blockchain protocols usually include rules about consensus, transaction validation and network participation [4].

  • From Encyclopaedia Britannica: Protocol, in computer science, is a set of rules or procedures for transmitting data between electronic devices, such as computers. In order for computers to exchange information, there must be a preexisting agreement as to how the information will be structured and how each side will send and receive it. Without a protocol, a transmitting computer, for example, could be sending its data in eight-bit packets while the receiving computer might expect the data in 16-bit packets [5].


[1] "A List of Blockchain Protocols - Explained and Compared" [online]. Available: Date accessed: 2019‑06‑11.

[2] Wikipedia: "Blockchain" [online]. Available: Date accessed: 2019‑06‑10.

[3] Wikipedia: "Cryptographic Protocol" [online]. Available: Date accessed: 2019‑06‑11.

[4] L. Jovanovic, "Blockchain Crash Course: Protocols, dApps, APIs and DEXs" [online]. MARKET Protocol. Available: Date accessed: 2019‑06‑11.

[5] "Protocol - Computer Science" [online]. Encyclopaedia Britannica. Available: Date accessed: 2019‑06‑11.


This presentation contains somewhat outdated content. Whilst the description here follows the original Mimblewimble paper pretty closely, it no longer precisely describes how Mimblewimble transactions are implemented in say, Grin or Tari.

The presentation is left here as is since it offers a pretty gentle introduction to the math involved, and it may be instructive to see how things used to be before you head off to read a more detailed and up-to-date account.

Having trouble viewing this presentation?

View it in a separate window.

Mimblewimble Transactions Explained

High-level Overview

Mimblewimble is a privacy-oriented cryptocurrency technology. It differs from Bitcoin in some key areas:

  • No addresses. The concept of Mimblewimble addresses does not exist.
  • Completely private. Every transaction is confidential.
  • Compact blockchain. Mimblewimble uses a different set of security guarantees to Bitcoin, which leads to a far more compact blockchain.

Transactions Explained

Confidential transactions [1] were proposed by Dr. Adam Back and developed further by Greg Maxwell. They are employed in several cryptocurrency projects, including Monero and Tari, by way of Mimblewimble.

Recipients of Tari create the private keys for receiving coins on the fly. Therefore they must be involved in the construction of a Tari transaction.

This doesn't necessarily mean that recipients have to be online. However, they do need to be able to communicate, whether it be by email, Instant Messaging (IM) or carrier pigeon.

Basic Transaction

We'll explain how Alice can send Tari to Bob using a two-party protocol for Mimblewimble. Multiparty transactions are similar, but the flow of information is a bit different and takes place over additional communication rounds.

Let's say Alice has 300 µT and she wants to send 200 µT to Bob.

Here’s the basic transaction:

| Alice's UTXO | To Bob | Change | Fee |
|---|---|---|---|
| 300 | 200 | 90 | 10 |

If we write this as a mathematical equation, where outputs are positive and inputs are negative, we should be able to balance things out so that there's no creation of coins out of thin air:

$$ -300 + 200 + 90 + 10 = 0 $$

This is basically the information that sits in the Bitcoin blockchain. Anyone can audit anyone else's transactions simply by inspecting the global ledger's transaction history. This isn't great for privacy.

Here is where confidential transactions come in. We can start by multiplying both sides of the preceding equation by some generator point H on the elliptic curve (for an introduction to elliptic curve math, refer to this presentation):

$$ 300.H = 200.H + 90.H + 10.H $$

Since H is a constant, the math above still holds, so we can validate that Alice is not stealing by checking that

$$ 300.H - 200.H - 90.H - 10.H \equiv 0.H = 0 $$

Notice that we only see public keys and thus the values are hidden. Cool!

Technically, only scalar integer values are valid for elliptic curve multiplication. This is why we use µT in the transactions, so that the amounts are always integers.

There's a catch though. If _H_ is constant and known (it is), couldn't someone just pre-calculate $n.H$ for all reasonable values of _n_ and scan the blockchain for those public keys?[^a]

In short, yes. So we’re not done yet.


[^a]: This is called a pre-image attack.

Blinding Factors

To prevent a pre-image attack from unblinding all the values in Tari transactions, we need to add randomness to each input and output. The basic idea is to do this by adding a second public key to each transaction output.

So if we rewrite the inputs and outputs as follows:

$$ C_i = n_i.H + k_i.G $$

where G is another generator on the same curve, this completely blinds the inputs and outputs so that no pre-image attack is possible. This formulation is called a Pedersen commitment [3].

The two generators, _H_ and _G_, must be selected in such a way that it is computationally infeasible to convert values from one generator to the other [2]. Specifically, if _G_ is the base generator, then there exists some _k_ such that $$ H = kG $$

If anyone is able to figure out this k, the whole security of Confidential Transactions falls apart. It's left as an exercise for the reader to figure out why.
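Since the commitment arithmetic is linear, it can be sanity-checked without real elliptic curve code. The sketch below is illustrative only: the modulus `N` and all key values are made up, and a point $a.H + b.G$ is modelled as the coefficient pair `(a, b)`, which is faithful for checking balance precisely because nobody knows a $k$ with $H = kG$, so the two components never mix:

```python
# Toy sketch of Pedersen commitments (illustration only, not real crypto).
# A curve point a.H + b.G is modelled as the coefficient pair (a, b) over a
# stand-in group order N.
import secrets

N = 2**61 - 1  # hypothetical stand-in for the curve group order

def commit(value, blinding):
    """C = value.H + blinding.G, modelled as a coefficient pair."""
    return (value % N, blinding % N)

def add(p, q):
    return ((p[0] + q[0]) % N, (p[1] + q[1]) % N)

def neg(p):
    return ((-p[0]) % N, (-p[1]) % N)

# Alice's transaction: 300 µT in, 200 µT to Bob, 90 µT change, 10 µT fee.
k1, k2 = secrets.randbelow(N), secrets.randbelow(N)
inp    = commit(300, k1)
change = commit(90, k2)
paid   = commit(200, 0)   # Bob adds his own blinding factor later
fee    = commit(10, 0)

total = add(add(add(neg(inp), change), paid), fee)
# All values cancel on the H component; only the excess (k2 - k1).G remains.
assert total == (0, (k2 - k1) % N)
```

The final assertion mirrors the text: an honest transaction cancels completely on the value (H) side, leaving only the blinding excess on G.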

For a semi-gentle introduction to these concepts, Adam Gibson's paper on the subject is excellent [5].

Alice Prepares a Transaction

Alice can now start to build a transaction.

| Item | Commitment | Name |
|---|---|---|
| Input | $$ -300.H - k_1.G $$ | $$ C_1 $$ |
| Change output | $$ 90.H + k_2.G $$ | $$ C_2 $$ |
| Fee | $$ 10.H + 0.G $$ | $$ f $$ |
| Total spent | $$ 200.H + 0.G $$ | $$ C^* $$ |
| Sum | $$ 0.H + (k_2-k_1).G $$ | $$ X $$ |

The $ k_i $ values, $ k_1 $ and $ k_2 $, are the spending keys for those outputs.

The _only pieces of information you need to spend Tari outputs_ are the spending key (also called a blinding factor) and its associated value.

Therefore for this transaction, Alice's wallet, which tracks all of her Tari unspent outputs, would have provided the blinding factor and the value "300" to complete the commitment C1.

Notice that when all the inputs and outputs are summed (including the fee), all the values cancel to zero, as they should. Notice also that the only term left is multiplied by the point G. All the H terms are gone. We call this sum the public excess for Alice's part of the transaction.

We define the excess of the transaction to be

$$ x_s = k_2 - k_1 $$

A simple way for Alice to calculate her excess (and how the Tari wallet software does it) is to sum her output blinding factors and minus the sum of her input blinding factors.

Let's say Alice was trying to create some money for herself and made the change 100 µT instead of 90. In this instance, the sum of the outputs and inputs would not cancel on H and we would have

$$X^* = 10.H + x_s.G$$

We'll see in a bit how the Mimblewimble protocol catches Alice out if she tries to pull shenanigans like this.

Alice also prepares a range proof for each output, which is a proof that the value of the output is between zero and $2^{64}$ µT. Without range proofs, Alice could send negative amounts to people, enriching herself without breaking any of the accounting of Tari.

In Tari and Grin, the excess value is actually split into two values for added privacy. The Grin team has a good explanation of why this `offset` value is necessary [4]. We leave off this step to keep the explanation simple(r).

Alice also chooses a random nonce,

$$ r_a $$

and calculates the corresponding public nonce,

$$ R_a = r_a.G $$

Alice then sends the following information to Bob:

| Item | Value |
|---|---|
| Amount paid to Bob | 200 |
| Public nonce | $$ R_a $$ |
| Public excess | $$ X $$ |
| Transaction fee | $$ f $$ |
| Message metadata | $$ m $$ |

The message metadata consists of some extra bits of information that pertain to the transaction (such as when the output can be spent).

Bob Prepares his Response

Bob receives this information and then sets about completing his part of the transaction.

First, he can verify that Alice has sent over the correct information by checking that the public excess, X, is correct by following the same procedure that Alice used to calculate it above. This step isn't strictly necessary, since he doesn't have enough information to detect any fraud at this point.

He then builds a commitment from the amount field that Alice is sending to him:

$$ C_b = 200.H + k_b.G $$

where \(k_b\) is Bob's private spend key. He calculates

$$P_b = k_b.G$$

and generates a range proof for the commitment.

Bob then needs to sign that he's happy that everything is complete to his satisfaction. He creates a partial Schnorr Signature with the challenge,

$$ e = H(R_a + R_b \Vert X + P_b \Vert f \Vert m) $$

and the signature is given by

$$ s_b = r_b + ek_b $$

Bob sends back

| Item | Value |
|---|---|
| Output (commitment and range proof) | $$ C_b $$ |
| Public nonce | $$ R_b $$ |
| Public key | $$ P_b $$ |
| Partial signature | $$ s_b $$ |

Alice Completes and Broadcasts the Transaction

After hearing back from Bob, Alice can wrap things up.

First, she can now use Bob's public nonce and public key to independently calculate the same challenge that Bob signed:

$$ e = H(R_a + R_b \Vert X + P_b \Vert f \Vert m) $$

Alice then creates both her own partial signature,

$$ s_a = r_a + e.x_s $$

and the combined aggregate signature, $$ s = s_a + s_b, R = R_a + R_b $$.
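Because the verification equation $s.G = R + e(X + P_b)$ is linear in the secret scalars, the aggregation can be sanity-checked with plain modular arithmetic. In this toy sketch, the group order `n`, the keys and the challenge `e` are all made-up stand-ins (in the real protocol, `e` is a hash of transaction data), and integers mod `n` play the role of the corresponding curve points:

```python
# Scalar-level sanity check of the aggregate Schnorr signature from the text.
import secrets

n = 2**61 - 1                      # hypothetical group order
x_s = secrets.randbelow(n)         # Alice's excess (k2 - k1)
k_b = secrets.randbelow(n)         # Bob's spend key
r_a, r_b = secrets.randbelow(n), secrets.randbelow(n)  # secret nonces
e = secrets.randbelow(n)           # challenge (a hash in the real protocol)

s_a = (r_a + e * x_s) % n          # Alice's partial signature
s_b = (r_b + e * k_b) % n          # Bob's partial signature
s = (s_a + s_b) % n                # aggregate signature
R = (r_a + r_b) % n                # aggregate public nonce (as a scalar here)

# Verifier's check, s.G == R + e.(X + P_b), written on the scalars:
assert s == (R + e * ((x_s + k_b) % n)) % n
```

Neither party ever reveals their secret nonce or key; only the partial signatures and public values cross the wire, yet the combined signature verifies against the combined public excess.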

Alice can now broadcast this transaction to the network. The final transaction looks as follows:

Transaction Kernel

| Field | Value |
|---|---|
| Public excess | $$ X + P_b $$ |
| Signature | $$ (s, R) $$ |
| Transaction metadata | $$ m $$ |

Transaction Body

| Field | Value |
|---|---|
| Inputs | $$ [C_1] $$ |
| Outputs with range proofs | $$ [C_2, C_b] $$ |

Transaction Verification and Propagation

When a full node receives Alice's transaction, it needs to verify that it's on the level before sending it on to its peers. The node wants to check the following:

  1. All inputs come from the current UTXO set

    All full nodes keep track of the set of unspent outputs, so the node will check that C1 is in that list.

  2. All outputs have a valid range proof

    The range proof is verified against its matching commitment.

  3. The values balance

    In this check, the node wants to make sure that no coins are created or destroyed in the transaction. But how can it do this if the values are blinded?

    Recall that in an honest transaction, all the values (which are multiplied by H) cancel, and you're left with the sum of the public keys of the outputs minus the public keys of the inputs. This non-coincidentally happens to be the same value that is stored in the kernel as the public excess.

    The summed public nonce, R, is also stored in the kernel, so this allows the node to verify the signature by checking the following, where the challenge e is calculated as before:

$$ s.G \stackrel{?}{=} R + e(X + P_b) $$

  4. The signature in the kernel is valid

    Therefore, by validating the kernel signature, we also prove to ourselves that the transaction accounting is correct.

  5. Various other consensus checks

    For example, checking that the fee is greater than the minimum.

If all these checks pass, then the node will forward the transaction on to its peers and it will eventually be mined and be added to the blockchain.

Stopping Fraud

Now let's say Alice tried to be sneaky and used \( X^* \) as her excess; the one where she gave herself 100 µT change instead of 90 µT. Now the values won't balance. The sum of outputs, inputs and fees will look something like this:

$$ 10.H + (x_s + k_b).G $$

So now when a full node checks the signature:

$$ \begin{align} R + e(X^* + P_b) &\stackrel{?}{=} s.G \\ R + e(10.H + x_s.G + P_b) &\stackrel{?}{=} (r_a + r_b + e(x_s + k_b)).G \\ R + e(10.H + X + P_b) &\stackrel{?}{=} (r_a + r_b).G + e(x_s + k_b).G \\ \mathbf{10e.H} + R + e(X + P_b) &\stackrel{?}{=} R + e(X + P_b) \\ \end{align} $$

The signature doesn't verify! The node can't tell exactly what is wrong with the transaction, but it knows something is up, and so it will just drop the transaction silently and get on with its life.
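To see the failure concretely, here is a toy check using the same trick as before: a coefficient pair `(h, g)` stands in for the point $h.H + g.G$, and all numbers are made up. The honest excess verifies, while Alice's inflated version leaves a stray $10e.H$ term that breaks the equation:

```python
# Toy demonstration that the kernel signature fails when Alice inflates
# her change by 10 µT (illustration only, not real crypto).
import secrets

N = 2**61 - 1  # hypothetical group order

def point(h, g):
    """Model the point h.H + g.G as a coefficient pair."""
    return (h % N, g % N)

x_s = secrets.randbelow(N)   # Alice's excess
k_b = secrets.randbelow(N)   # Bob's spend key
r = secrets.randbelow(N)     # combined nonce r_a + r_b
e = secrets.randbelow(N)     # challenge

s = (r + e * (x_s + k_b)) % N           # honest aggregate signature

def verify(excess_point):
    lhs = point(0, s)                                  # s.G
    rhs = point(e * excess_point[0],
                r + e * excess_point[1])               # R + e.(X + P_b)
    return lhs == rhs

honest = point(0, x_s + k_b)      # X + P_b
cheat  = point(10, x_s + k_b)     # X* + P_b carries a stray 10.H term

assert verify(honest)
assert not verify(cheat)          # the 10e.H component breaks the check
```

The node never learns which value was inflated, only that the H components fail to cancel, which is exactly the silent-drop behaviour described above.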

Transaction Summary

To sum up, a Tari/Mimblewimble transaction includes the following:

  • From Alice, a set of inputs that reference and spend a set of previous outputs.
  • From Alice and Bob, a set of new outputs, including:
    • A value and a blinding factor (which is just a new private key).
    • A range proof that shows that the value is non-negative.
  • The transaction fee, in cleartext.
  • The public excess, which is the sum of all blinding factors used in the transaction (as a public key on G).
  • Transaction metadata.
  • The excess blinding value used as the private key to sign a message attesting to the transaction metadata, and the public excess.


[1] "What are Bitcoin Confidential Transactions?" [Online.] Available: Date accessed: 2019‑04‑09.

[2] "Nothing-up-my-sleeve Number" [online]. Available: Date accessed: 2019‑04‑09.

[3] Wikipedia: "Commitment Scheme" [online]. Available: Date accessed: 2019‑04‑09.

[4] "Kernel Offsets, in Introduction to MimbleWimble and Grin" [online]. Available: Date accessed: 2019‑04‑09.

[5] A. Gibson, "From Zero (Knowledge) to BulletProofs" [online]. Available: Date accessed: 2019‑04‑10.


Mimblewimble-Grin Blockchain Protocol Overview


Depending on whom you ask, Mimblewimble is either a tongue-tying curse or a blockchain protocol designed to be private and scalable. The transactions in Mimblewimble are derived from the confidential transactions of Greg Maxwell [1], which are in turn based on the Pedersen commitment scheme. On 19 July 2016, Tom Elvis Jedusor left a white paper on the Tor network describing how Mimblewimble could work. As its potential was realized, work was done to turn it into a practical protocol. One of these projects is Grin, a minimalistic implementation of Mimblewimble. Further information can be found in [2] and [3].

Mimblewimble Protocol Overview


Mimblewimble publishes all transactions as confidential transactions. All inputs, outputs and change are expressed in the following form:

​ $ r \cdot G + v \cdot H $

where $ G $ and $ H $ are generator points on an elliptic curve, $ r $ is a private key used as a blinding factor, $ v $ is the value, and "$ \cdot $" denotes Elliptic-curve cryptography (ECC) scalar multiplication.

An example transaction can be expressed as input = output + change.

​ $ (r_i \cdot G + v_i \cdot H) = (r_o \cdot G + v_o \cdot H) + (r_c \cdot G + v_c \cdot H) $

But this requires that

​ $ r_i = r_o + r_c $

A more detailed explanation of how Mimblewimble works can be found in the Grin GitHub documents [4].
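A minimal sketch of this balance requirement follows, with a commitment $ r \cdot G + v \cdot H $ modelled as the coefficient pair `(r, v)`. The numbers are toy values, not real curve arithmetic:

```python
# Toy check of the Mimblewimble balance requirement input = output + change:
# it holds only when both the values and the blinding factors balance.
N = 2**61 - 1  # hypothetical group order

def commit(r, v):
    """Model r.G + v.H as a coefficient pair (r, v)."""
    return (r % N, v % N)

def add(p, q):
    return ((p[0] + q[0]) % N, (p[1] + q[1]) % N)

r_o, r_c = 41, 7                 # output and change blinding factors
v_i, v_o, v_c = 100, 60, 40      # values: 100 in = 60 out + 40 change

r_i = (r_o + r_c) % N            # the blinding factors must balance too
assert commit(r_i, v_i) == add(commit(r_o, v_o), commit(r_c, v_c))
```

If either $ r_i \neq r_o + r_c $ or $ v_i \neq v_o + v_c $, the commitment equation no longer holds, which is what a verifier detects.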

Cut-through and Pruning


Grin includes something called "cut-through" in the transaction pool. Cut-through removes outputs from the transaction pool that have already been spent as new inputs, using the fact that every transaction in a block should sum to zero. This is shown below:

​ $ \text{outputs} - \text{inputs} = \text{kernel excess} + \text{(part of) kernel offset} $

The kernel offset is used to hide which kernel belongs to which transaction and we only have a summed kernel offset stored in the header of each block.

We don't have to record these transactions inside the block, although we still have to record the kernel, as the kernel proves the transfer of ownership, to make sure that the whole block sums to zero, as expressed in the following formula:

​ $ \text{sum(outputs)} - \text{sum(inputs)} = \text{sum(kernel excess)} + \text{kernel offset} $

An example of cut-through follows:

 I1(x1)    +---> O1
           +---> O2

 I2(x2,O2) +---> O3

 I3(O3)    +---> O4
           +---> O5

After cut-through:

 I1(x1)    +---> O1
 I2(x2)    +---> O4
           +---> O5

In the preceding examples, "I" represents new inputs, "x" represents inputs from previous blocks and "O" represents outputs.

This causes Mimblewimble blocks to be much smaller than normal bitcoin blocks, as the cut-through transactions are no longer listed under inputs and outputs. In practice, after this we can still see there was a transaction, because the kernel excess still remains, but the actual hidden values are not recorded.
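The cut-through example above can be sketched as a simple set operation: any output that also appears as an input within the same set of transactions cancels out. The representation below is a simplification for illustration; real implementations match commitments, not string labels:

```python
# Sketch of cut-through: outputs spent as inputs within the same pool/block
# cancel, leaving only the "boundary" inputs and outputs.
def cut_through(transactions):
    """transactions: list of (inputs, outputs) tuples of output identifiers."""
    inputs, outputs = [], []
    for tx_in, tx_out in transactions:
        inputs.extend(tx_in)
        outputs.extend(tx_out)
    spent = set(inputs) & set(outputs)           # intermediate outputs
    return ([i for i in inputs if i not in spent],
            [o for o in outputs if o not in spent])

# The example from the text: O2 and O3 are created and spent in the same set.
txs = [
    (["x1"], ["O1", "O2"]),
    (["x2", "O2"], ["O3"]),
    (["O3"], ["O4", "O5"]),
]
assert cut_through(txs) == (["x1", "x2"], ["O1", "O4", "O5"])
```

After cut-through only x1, x2, O1, O4 and O5 remain, matching the diagram: the intermediate outputs O2 and O3 never need to be stored.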


Pruning takes this same concept but goes into past blocks. Therefore, if an output in a previous block gets spent, it will be removed from that block. Pruning removes the leaves from the Merkle Mountain Range (MMR) as well. Thus, it allows the ledger to be small and scalable. According to the Grin team [3], assuming 10 million transactions with 100,000 unspent outputs, the ledger will be roughly 130GB, which can be divided into the following parts:

  • 128GB transaction data (inputs and outputs);
  • 1GB transaction proof data;
  • 250MB block headers.

The total storage requirements can be reduced if cut-through and pruning are applied. The ledger will shrink to approximately 1.8GB and will result in the following:

  • 1GB transaction proof data;
  • UTXO size 520MB;
  • 250MB block headers.

Grin Blocks

A Grin block contains the following data:

  1. Transaction outputs, which include for each output:
    • A Pedersen commitment (33 bytes).
    • A range proof (over 5KB at this time).
  2. Transaction inputs, which are just output references (32 bytes).
  3. Transaction fees, in clear text.
  4. Transaction "proofs", which include for each transaction:
    • The excess commitment sum for the transaction (33 bytes).
    • A signature generated with the excess (71 bytes average).
  5. A block header that includes Merkle trees and proof of work (approximately 250 bytes).

The Grin header:

| Header Field | Description |
|---|---|
| Hash | Unique hash of block |
| Version | Grin version |
| Previous Block | Unique hash of previous block |
| Age | Time the block was mined |
| Cuckoo Solution | The winning cuckoo solution |
| Difficulty | Difficulty of the solved cuckoo |
| Target Difficulty | Difficulty of this block |
| Total Difficulty | Total difficulty of mined chain up to this block |
| Total Kernel Offset | Kernel offset |
| Nonce | Random number for cuckoo |
| Block Reward | Coinbase + fee reward for block |

The rest of the block contains a list of kernels, inputs and outputs. Appendix A contains an example of a Grin block.

Trustless Transactions

Schnorr signatures are covered in detail elsewhere in Tari Labs University (TLU); please refer there for a more detailed explanation [7].

Since Grin transactions are obscured by Pedersen Commitments, there is no proof that money was actually transferred. To solve this problem, we require the receiver to collaborate with the sender in building a transaction and, more specifically, the kernel signature [4].

When Alice wants to pay Bob, the transaction will be performed using the following steps:

  1. Alice selects her inputs and her change. The sum of all blinding factors (change output minus inputs) is $ r_s $.
  2. Alice picks a random nonce $ k_s $ and sends her partial transaction, $ k_s\cdot G $ and $ r_s\cdot G $, to Bob.
  3. Bob picks his own random nonce $ k_r $ and the blinding factor for his output $ r_r $. Using $ r_r $, Bob adds his output to the transaction.
  4. Bob computes the following:
    • message $ m = fee \Vert lock_-height $;
    • Schnorr challenge $ e = SHA256(m \Vert k_r \cdot G + k_s\cdot G \Vert r_r\cdot G + r_s\cdot G) $; and
    • his side of the signature, $ s_r = k_r + e\cdot r_r $.
  5. Bob sends $ s_r $ and $ k_r\cdot G $ and $ r_r\cdot G $ to Alice.
  6. Alice computes $ e $, just like Bob did, and can check that $ s_r\cdot G = k_r\cdot G + e\cdot r_r \cdot G $.
  7. Alice sends her side of the signature, $ s_s = k_s + e\cdot r_s $, to Bob.
  8. Bob validates $ s_s\cdot G $, just like Alice did for $ s_r\cdot G $ in step 6, and can produce the final signature $ sig = (s_s + s_r , \mspace{6mu} k_s\cdot G + k_r\cdot G) $ as well as the final transaction kernel, including $ sig $ and the public key $ r_r\cdot G + r_s\cdot G $.
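The signing rounds above can be walked through at the scalar level. In this sketch, integers mod `n` stand in for the points $ k\cdot G $ and $ r\cdot G $, and the challenge hashes decimal encodings rather than real point serializations, so it is illustrative only:

```python
# Scalar-level walk-through of the trustless signing rounds (toy sketch).
import hashlib
import secrets

n = 2**61 - 1  # hypothetical group order

def challenge(m, k_sum, r_sum):
    """e = SHA256(m || sum of public nonces || sum of public excesses)."""
    data = f"{m}|{k_sum}|{r_sum}".encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % n

m = "fee=10|lock_height=0"                              # m = fee || lock_height
k_s, r_s = secrets.randbelow(n), secrets.randbelow(n)   # Alice: nonce, blinding sum
k_r, r_r = secrets.randbelow(n), secrets.randbelow(n)   # Bob: nonce, output blinding

e = challenge(m, (k_r + k_s) % n, (r_r + r_s) % n)
s_r = (k_r + e * r_r) % n            # Bob's side; Alice checks s_r.G as in step 6
s_s = (k_s + e * r_s) % n            # Alice's side; Bob checks it the same way
sig = ((s_s + s_r) % n, (k_s + k_r) % n)  # final aggregate signature

# Verification against the public excess (r_r + r_s).G:
assert sig[0] == (sig[1] + e * ((r_r + r_s) % n)) % n
```

Note that the secret nonces and blinding factors never leave their owners; only the partial signatures and public values are exchanged, yet the final signature verifies against the summed excess.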




Absolute Time-locked Transactions

In a normal Grin transaction, just the fee gets signed as the message [4]. To get an absolute time-locked transaction, the message can be modified by taking the block height and appending the fee to it. A block containing a kernel with a lock height greater than the current block height is rejected.

​ $ m = fee \Vert h $


Relative Time-locked Transactions

Taking into account how an absolute time-locked transaction is constructed, the same idea can be extended by taking a relative block height instead of the absolute height, and also adding a specific kernel commitment. In this way, the signature references a specific block as the height reference. As with absolute time-locked transactions, a block containing a kernel with a relative lock height that has not yet passed is rejected.

​ $ m = fee \Vert h \Vert c $


Multisig

Multi-signatures (multisigs) are also known as N-of-M signatures. This means that N out of M peers need to agree before a transaction can be spent.

When Bob and Alice [6] want to do a 2‑of‑2 multisig contract, the contract can be done using the following steps:

  1. Bob picks a blinding factor $ r_b $ and sends $ r_b\cdot G $ to Alice.
  2. Alice picks a blinding factor $ r_a $ and builds the commitment $ C= r_a\cdot G + r_b\cdot G + v\cdot H $; she sends the commitment to Bob.
  3. Bob creates a range proof for $ v $ using $ C $ and $ r_b $, and sends it to Alice.
  4. Alice generates her own range proof and aggregates it with Bob's, finalizing the multiparty output $ O_{ab} $.
  5. The kernel is built following the same procedure as used with Trustless Transactions.

We observe that the output $ O_{ab} $ is unknown to both parties, because neither knows the whole blinding factor. To be able to build a transaction spending $ O_{ab} $, someone would need to know $ r_a + r_b $ to produce a kernel signature. To produce the original spending kernel, Alice and Bob need to collaborate.

Atomic Swaps

Atomic swaps can be used to exchange coins from different blockchains in a trustless environment. In the Grin documentation, this is handled in the contracts documentation [6] and in the contract ideas documentation [8]. In practice, there has already been an atomic swap between Grin and Ethereum [9], but this only used the Grin testnet with a modified Grin implementation, as the release version of Grin did not yet support the required contracts. TLU has a section about Atomic Swaps [7].

Atomic swaps work with 2‑of‑2 multisig contracts, one public key being Alice's, the other being the hash of a preimage that Bob has to reveal. Consider the public key derivation $ x\cdot G $ to play the role of the hash function: once Bob reveals $ x $, Alice can produce an adequate signature, proving she knows $ x $ (in addition to her own private key).

Alice will swap Grin with Bob for Bitcoin. We assume Bob created an output on the Bitcoin blockchain that allows spending by Alice if she learns a hash preimage $ x $, or by Bob after time $ T_b $. Alice is ready to send her Grin to Bob if he reveals $ x $.

Alice will send her Grin to a multiparty timelock contract with a refund time $ T_a < T_b $. To send the 2‑of‑2 output to Bob and execute the swap, Alice and Bob start as if they were building a normal trustless transaction:

  1. Alice picks a random nonce $ k_s $ and her blinding sum $ r_s $ and sends $ k_s\cdot G $ and $ r_s\cdot G $ to Bob.
  2. Bob picks a random blinding factor $ r_r $ and a random nonce $ k_r $. However, this time, instead of simply sending $ s_r = k_r + e\cdot r_r $ with his $ r_r\cdot G $ and $ k_r\cdot G $, Bob sends $ s_r' = k_r + x + e\cdot r_r $ as well as $ x\cdot G $.
  3. Alice can validate that $ s_r'\cdot G = k_r\cdot G + x\cdot G + e\cdot r_r\cdot G $. She can also check that Bob has money locked with $ x\cdot G $ on the other chain.
  4. Alice sends back her $ s_s = k_s + e\cdot r_s $ as she normally would, now that she can also compute $ e = SHA256(m \Vert k_s\cdot G + k_r\cdot G) $.
  5. To complete the signature, Bob computes $ s_r = k_r + e\cdot r_r $ and the final signature is $ (s_r + s_s, \mspace{6mu} k_r\cdot G + k_s\cdot G) $.
  6. As soon as Bob broadcasts the final transaction to get his Grin, Alice can compute $ s_r' - s_r $ to get $ x $.

Prior to completing the atomic swap, Bob needs to know Alice's public key. Bob would then create an output on the Bitcoin blockchain with a 2‑of‑2 multisig script similar to `2 <alice_pubkey> <secret_pubkey> 2 OP_CHECKMULTISIG`. This should be wrapped in an `OP_IF` so that Bob can get his money back after an agreed-upon time. All of this can even be wrapped in a Pay-to-Script-Hash (P2SH) output. Here, `secret_pubkey` is $ x\cdot G $ from the previous section.

To verify the output, Alice would take $ x\cdot G $, recreate the Bitcoin script, hash it and check that her hash matches what's in the P2SH (step 2 in the Multisig section). Once she gets $ x $ (step 6), she can build the two signatures necessary to spend the 2‑of‑2 output, having both private keys, and claim her bitcoin.
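The crucial trick in step 6, Alice recovering $ x $ as $ s_r' - s_r $, can be checked with plain modular arithmetic. All values here are made-up stand-ins, with scalars playing the role of the corresponding curve points:

```python
# Toy check of the atomic-swap trick: Alice learns x from s_r' - s_r.
import secrets

n = 2**61 - 1                      # hypothetical group order
k_r = secrets.randbelow(n)         # Bob's secret nonce
r_r = secrets.randbelow(n)         # Bob's blinding factor
e = secrets.randbelow(n)           # challenge
x = secrets.randbelow(n)           # the secret preimage

s_r_adapted = (k_r + x + e * r_r) % n   # what Bob sends first (s_r')
s_r_final = (k_r + e * r_r) % n         # what Bob broadcasts later (s_r)

recovered = (s_r_adapted - s_r_final) % n
assert recovered == x              # Alice now holds the preimage
```

Bob cannot claim his Grin without broadcasting $ s_r $, and the moment he does, the difference of the two signatures hands Alice exactly the secret she needs to claim the bitcoin, which is what makes the swap atomic.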


[1] G. Maxwell, "Confidential Transactions", 2017 [online]. Available: Accessed: 2018‑10‑24.

[2] P. Robinson, H. Odendaal, S. W. van Heerden and A. Zaidelson, "Grin vs. BEAM, a Comparison", 2018 [online]. Available: Accessed: 2018‑10‑08.

[3] Y. Roodt, H. Odendaal, S. W. van Heerden, R. Robinson and A. Kamal, "Grin Design Choice Criticisms - Truth or Fiction", 2018 [online]. Available: Accessed: 2018‑10‑08.

[4] B. Simon et al., "Grin Document Structure", 2018 [online]. Available: Accessed: 2018‑10‑24.

[5] I. Peverell et al., "Pruning Blockchain Data", 2016 [online]. Available: Accessed: 2018‑10‑26.

[6] I. Peverell et al., "Contracts", 2018 [online]. Available: Accessed: 2018‑10‑26.

[7] "Tari Labs University". Tari Labs, 2018 [online]. Available: Accessed: 2018‑10‑27.

[8] Q. Le Sceller, "Contract Ideas", 2018 [online]. Available: Accessed: 2018‑10‑27.

[9] Jasper, "First Grin Atomic Swap!", 2018 [online]. Available: Accessed: 2018‑10‑27.


Appendix A: Example of Grin Block

| Field | Value |
|---|---|
| Previous Block | 0343597fe7c69f497177248913e6e485f3f23bb03b07a0b8a5b54f68187dbc1d |
| Age | 2018-10-23, 08:03:46 UTC |
| Cuckoo Solution | Size: |
| Target Difficulty | 17,736 |
| Total Difficulty | 290,138,524 |
| Total Kernel Offset | b52ccdf119fe18d7bd12bcdf0642fcb479c6093dca566e0aed33eb538f410fb5 |
| Block Reward | 60 grin |
| Fees | 14 mg |

4 x Inputs

5 x Outputs

| No. | Output Type | Commit | Spent |
|---|---|---|---|

3 x Kernels

| No. | Features | Fee | Lock Height |
|---|---|---|---|

Apart from the header information, we can only see that this block contains two transactions from the two kernels present. Between these two transactions, we only know that there were four inputs and four outputs. Because of the way in which Mimblewimble obfuscates the transaction, we don't know the values or which input and output belong to which transaction.


Grin vs. BEAM, a Comparison


Grin and BEAM are two open-source cryptocurrency projects based on the Mimblewimble protocol. The Mimblewimble protocol was first proposed by an anonymous user using the pseudonym Tom Elvis Jedusor (the French translation of Voldemort's name from the Harry Potter series of books). This user logged onto a bitcoin research Internet Relay Chat (IRC) channel and posted a link to a text article hosted on a Tor hidden service [1]. This article provided the basis for a new way to construct blockchain-style transactions that provided inherent privacy and the ability to dramatically reduce the size of the blockchain by compressing the transaction history of the chain. The initial article presented the main ideas of the protocol, but left out a number of critical elements required for a practical implementation and even contained a mistake in the cryptographic formulation. Andrew Poelstra published a follow-up paper that addresses many of these issues and refines the core concepts of Mimblewimble [2], which have been applied to the practical implementation of this protocol in both the Grin [3] and BEAM [4] projects.

The Mimblewimble protocol describes how transacting parties will interactively work to build a valid transaction using their public/private key pairs (which are used to prove ownership of transaction outputs) and interactively chosen blinding factors. Blinding factors are used to obfuscate the participants' public keys from everyone, including each other, and to hide the value of the transaction from everyone except the counterparty in that specific transaction. The protocol also performs a process called cut-through, which condenses transactions by eliminating intermediary transactions. This improves privacy and compresses the amount of data that is maintained on the blockchain [3]. This cut-through process precludes general-purpose scripting systems such as those found in Bitcoin. However, Andrew Poelstra proposed the concept of Scriptless scripts, which make use of Schnorr signatures to build adaptor signatures that allow for encoding of many of the behaviors that scripts are traditionally used to achieve. Scriptless scripts enable functionality such as payment channels (e.g. Lightning Network) and atomic swaps [5].

The Grin and BEAM projects both implement the Mimblewimble protocol, but each has been built from scratch. Grin is written in Rust and BEAM in C++. This report describes some of the aspects of each project that set them apart. Both projects are still in their early development cycles and many of these details are changing on a daily basis. Furthermore, the BEAM project documentation is mostly available only in Russian. As of the writing of this report, not all the technical details are available for English readers. Therefore the discussion in this report will most likely become outdated as the projects evolve.

This report is structured as follows:

  • Firstly, some implementation details are given and unique features of the projects are discussed.
  • Secondly, the differences in the projects' Proof-of-Work (PoW) algorithms are examined.
  • Finally, the projects' governance models are discussed.

Comparison of Features and Implementation in Grin vs. BEAM

The two projects are being independently built from scratch by different teams in different languages (Rust [6] and C++ [7]), so there will be many differences in the raw implementations. For example, Grin uses the Lightning Memory-mapped Database (LMDB) for its embedded database and BEAM uses SQLite. Although they have performance differences, they are functionally similar. Grin uses a Directed Acyclic Graph (DAG) to represent its mempool to avoid transaction reference loops [8], while BEAM uses a multiset key-value data structure with logic to enable some of its extended features [7].

From the perspective of features, the two projects exhibit all the features inherent to Mimblewimble. Grin's stated goal is to produce a simple and easy-to-maintain implementation of the Mimblewimble protocol [3]. BEAM's implementation, however, contains a number of modifications to the Mimblewimble approach, with the aim of providing some unique features. Before we get into the features and design elements that set the two projects apart, let's discuss an interesting feature that both projects have implemented.

Both Grin and BEAM have incorporated a version of the Dandelion relay protocol, which supports transaction aggregation. One of the major outstanding challenges for privacy that cryptocurrencies face is that it is possible to track transactions as they are added to the mempool and propagate across the network, and to link those transactions to their originating IP addresses. This information can be used to deanonymize users, even on networks with strong transaction privacy. The Dandelion network propagation scheme was proposed to improve privacy during the propagation of transactions to the network [9]. In this scheme, transactions are propagated in two phases: the Anonymity (or "stem") phase and the Spreading (or "fluff") phase, as illustrated in Figure 1.

  • In the stem (anonymity) phase, a transaction is propagated to only a single randomly selected peer from the current node's peer list. After a random number of hops along the network, each hop propagating to only a single random peer, the propagation process enters the second phase.

  • During the fluff (spreading) phase, the transaction is propagated using a full flood/diffusion method, as found in most networks. This approach means that the transaction has first propagated to a random point in the network before flooding the network, thereby making it much more difficult to track its origin.
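The two phases described above can be illustrated with a small Python sketch of the relay logic. The topology, hop count and function names here are hypothetical; this is only the propagation pattern, not either project's implementation.

```python
import random

def propagate(peers, origin, stem_hops):
    """Toy Dandelion relay: a transaction takes `stem_hops` single-peer
    hops (stem phase) before the final node floods it (fluff phase)."""
    node = origin
    path = [node]
    # Stem phase: relay to exactly one randomly chosen peer per hop.
    for _ in range(stem_hops):
        node = random.choice(peers[node])
        path.append(node)
    # Fluff phase: the last stem node broadcasts to all its neighbours,
    # so the flood appears to originate from a random point in the network.
    fluff_origin = node
    return path, fluff_origin

# A small fully connected 6-node network (hypothetical topology).
peers = {n: [m for m in range(6) if m != n] for n in range(6)}
path, fluff_origin = propagate(peers, origin=0, stem_hops=3)
print(path, fluff_origin)
```

Because observers only see the flood start at `fluff_origin`, linking the transaction back to node 0 requires tracing the stem hops individually.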

Figure 1: Two Phases of Dandelion Peer-to-peer Transaction Propagation [9]

Both projects have adapted this approach to work with Mimblewimble transactions. Grin's implementation allows for transaction aggregation and cut-through in the stem phase of propagation, which provides even greater anonymity to the transactions before they spread during the fluff phase [10]. In addition to transaction aggregation and cut-through, BEAM introduces “dummy” transactions that are added in the stem phase to compensate for situations when real transactions are not available [33].

Grin Unique Features

Grin aims to be a simple and minimal reference implementation of a Mimblewimble blockchain. As discussed, it is therefore not aiming to include many features that extend the core Mimblewimble functionality. However, the Grin implementation does include some interesting implementation choices, which are documented in depth on the wiki of its growing GitHub repository.

For example, Grin has implemented a method for a node to sync the blockchain very quickly by downloading only a partial history [11]. A new node entering the network queries the current head block of the chain and then requests the block header at a horizon; in the example given in [11], the horizon is initially set at 5,000 blocks before the current head. The node then checks whether there is enough data to confirm consensus. If there is not, it increases its horizon until consensus is reached, at which point it downloads the full Unspent Transaction Output (UTXO) set of the horizon block. This approach introduces a few security risks, but mitigations are provided, and the result is that a node can sync to the network with an order of magnitude less data.
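A sketch of this horizon-sync loop, with the node's network calls replaced by hypothetical callables (the doubling step and helper names are assumptions for illustration, not Grin's actual code):

```python
def horizon_sync(head_height, has_consensus, fetch_utxo_set, start_horizon=5000):
    """Step the horizon back from the head until enough data exists to
    confirm consensus, then download the full UTXO set at that horizon
    block. `has_consensus` and `fetch_utxo_set` stand in for node RPCs."""
    horizon = start_horizon
    while not has_consensus(head_height - horizon):
        horizon *= 2                      # widen the horizon and retry
        if horizon > head_height:
            raise RuntimeError("no consensus found; full sync required")
    return fetch_utxo_set(head_height - horizon)

# Toy network where consensus is only confirmable 20,000+ blocks back.
state = horizon_sync(
    head_height=100_000,
    has_consensus=lambda h: h <= 80_000,
    fetch_utxo_set=lambda h: {"horizon_block": h},
)
print(state)
```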

Since the initial writing of this article (October 2018), BEAM has published its solution for fast node synchronization using macroblocks. A macroblock is a complete state of all UTXOs, periodically created by BEAM nodes [12].

BEAM Unique Features

BEAM has set out to extend the feature set of Mimblewimble in a number of ways. BEAM supports setting an explicit incubation period on a UTXO, which limits its ability to be spent to a specific number of blocks after its creation [13]. This is different to a timelock, which prevents a transaction from being added to a block before a certain time. BEAM also supports the traditional timelock feature, but includes the ability to also specify an upper time limit, after which the transaction can no longer be included in a block [13]. This feature means that a party can be sure that if a transaction is not included in a block on the main blockchain after a certain time, it will never appear.
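The two validity rules above reduce to simple block-height checks. A minimal sketch (function and field names are illustrative, not BEAM's actual API):

```python
def can_spend(utxo_created, incubation, height):
    """Incubation period: a UTXO may only be spent `incubation` blocks
    after the block height at which it was created."""
    return height >= utxo_created + incubation

def can_include(tx_min_height, tx_max_height, height):
    """Two-sided timelock: a transaction is only valid for inclusion in
    blocks inside [min_height, max_height]; past the maximum it can
    never appear on the chain."""
    return tx_min_height <= height <= tx_max_height

print(can_spend(100, 10, 105), can_include(100, 200, 250))
```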

Another unique feature of BEAM is the implementation of an auditable wallet. For a business to operate in a given regulatory environment, it will need to demonstrate its compliance to the relevant authorities. BEAM has proposed a wallet designed for compliant businesses, which generates additional public/private key pairs specifically for audit purposes. Signatures created with these keys are used to tag transactions so that only the auditing authority holding the corresponding public key can identify those transactions on the blockchain; the authority cannot, however, create transactions with this tag. This allows a business to provide visibility of its transactions to a given authority without compromising its privacy to the public [14].

BEAM's implementation of Dandelion improves privacy by adding decoy transaction outputs at the stem phase. Each such output has a value of zero, but is indistinguishable from regular outputs. At a later stage (a randomly calculated number of blocks for each output), the UTXOs are added as inputs to new transactions, thus spending them and removing them from the blockchain.

BEAM has also proposed another feature aimed at keeping the blockchain as compact as possible. In Mimblewimble, as transactions are added, cut-through is performed, which eliminates all intermediary transaction commitments [3]. However, the transaction kernels for every transaction are never removed. BEAM has proposed a scheme to reuse these transaction kernels to validate subsequent transactions [13]. In order to consume the existing kernels without compromising the transaction irreversibility principle, BEAM proposes that a multiplier be applied to an old kernel by the same user who has visibility of the old kernel, and that this be used in a new transaction. In order to incentivize transactions to be built in this way, BEAM includes a fee refund model for these types of transactions. This feature will not be part of the initial release.

When constructing a valid Mimblewimble transaction, the parties involved need to collaborate in order to choose blinding factors that balance. This interactive negotiation requires a number of steps and it implies that the parties need to be in communication to finalize the transaction. Grin facilitates this process by the two parties connecting directly to one another using a socket-based channel for a "real-time" session. This means that both parties need to be online simultaneously. BEAM has implemented a Secure Bulletin Board System (SBBS) that is run on BEAM full-nodes to allow for asynchronous negotiation of transactions ([30], [31]).
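The balancing requirement being negotiated can be illustrated with toy modular arithmetic standing in for Pedersen commitments. The "generators" below are arbitrary integers, so this is only the bookkeeping, not real cryptography:

```python
# Toy additively homomorphic commitment C(v, r) = v*H + r*G (mod p).
# Integers stand in for elliptic-curve points; NOT real crypto.
p = 2**61 - 1
G, H = 7919, 104729        # arbitrary stand-in "generators"

def commit(v, r):
    return (v * H + r * G) % p

# Alice spends a 50-coin input and creates two outputs (fees ignored).
r_in = 11111                       # input blinding factor
r_change, r_out = 22222, 33333     # blinding factors for new outputs
inputs = commit(50, r_in)
outputs = (commit(20, r_change) + commit(30, r_out)) % p

# The values cancel (20 + 30 - 50 = 0), so outputs - inputs is purely
# a multiple of G: the kernel excess the parties must agree on.
excess = (r_change + r_out - r_in) % p
assert (outputs - inputs) % p == (excess * G) % p
print("balanced; kernel excess =", excess)
```

Because the excess depends on every party's blinding factors, no party can finalize the transaction alone, which is why the communication channel (sockets for Grin, SBBS for BEAM) matters.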

Requiring the interactive participation of both parties in constructing a transaction can be a point of friction in using a Mimblewimble blockchain. In addition to the secure BBS communication channel, BEAM also plans to support one-sided transactions where the payee in a transaction who expects to be paid a certain amount can construct their half of the transaction and send this half-constructed transaction to the payer. The payer can then finish constructing the transaction and publish it to the blockchain. Under the normal Mimblewimble system this is not possible, because it would involve revealing your blinding factor to the counterparty. BEAM solves this problem by using a process it calls kernel fusion, whereby a kernel can include a reference to another kernel so that it is only valid if both kernels are present in the transaction. In this way, the payee can build their half of the transaction with a secret blinding factor and a kernel that compensates for their blinding factor, which must be included when the payer completes the transaction [13].

Both projects make use of a number of Merkle tree structures to keep track of various aspects of the respective blockchains. Details of the exact trees and what they record are documented for both projects ([16], [17]). BEAM, however, makes use of a Radix-Hash tree structure for some of its trees. This structure is a modified Merkle tree that is also a binary search tree. This provides a number of features that the standard Merkle trees do not have, and which BEAM exploits in its implementation [17].

The features discussed here can all be seen in the code at the time of writing (May 2019), although this is not a guarantee that they are working. A couple of features, which have been mentioned in the literature as planned for the future, have not yet been implemented. These include embedding signed textual content into transactions that can be used to record contract text [13], and issuing confidential assets [18].

Proof-of-Work Mining Algorithm

BEAM has announced that it will employ the Equihash PoW mining algorithm with the parameters set to n=150, k=5 [32]. Equihash was proposed in 2016 as a memory-hard PoW algorithm that relies heavily on memory usage to achieve Application Specific Integrated Circuit (ASIC) resistance [19]. The goal was to produce an algorithm that would be more efficient to run on consumer Graphics Processing Units (GPUs) than on the growing field of ASIC miners, mainly produced by Bitmain at the time, in the hope that cryptocurrencies using it would keep their mining power decentralized. The idea behind Equihash's ASIC resistance was that, at the time, implementing memory in an ASIC was expensive, so GPUs were more efficient at calculating the Equihash PoW. This resistance lasted for a while, but in early 2018 Bitmain released an ASIC for Equihash that was significantly more efficient than GPUs for the Equihash configurations used by Zcash, Bitcoin Gold and Zencash, to name a few. It is possible to tweak the parameters of the Equihash algorithm to make it more memory intensive, rendering current ASICs and the older GPU mining farms obsolete, but it remains to be seen whether BEAM will do this. No block time had been published at the time of writing this report (May 2019).
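As a rough back-of-envelope check on what n=150, k=5 implies: Wagner's algorithm for Equihash(n, k) finds collisions on n/(k+1) bits, so working memory scales on the order of 2^(n/(k+1)) hash entries. This is a sketch of the asymptotic bound from the Equihash paper; real implementations carry constant-factor overheads for indices and intermediate lists.

```python
def equihash_list_entries(n, k):
    """Order-of-magnitude size of Wagner's algorithm lists for
    Equihash(n, k): collisions are found on n/(k+1) bits, so the working
    lists hold roughly 2**(n/(k+1)) hash outputs (a rough lower bound)."""
    collision_bits = n // (k + 1)
    return 2 ** collision_bits

entries = equihash_list_entries(150, 5)
print(entries)    # 2**25 = 33,554,432 entries
```

Raising n (or lowering k) raises this memory floor, which is the lever a project could pull to obsolete existing ASICs.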

Grin initially opted to use the new Cuckoo Cycle PoW algorithm, also purported to be ASIC resistant due to being memory latency bound [20]. This means that the algorithm is bound by memory bandwidth rather than raw processor speed, with the hope that it will make mining possible on commodity hardware.

In August 2018, ahead of its mainnet launch, the Grin team announced that it had become aware that an ASIC for the Cuckoo Cycle algorithm was likely to become available [21]. While acknowledging that ASIC mining is inevitable, the team is concerned that the current ASIC market is very centralized (i.e. Bitmain), and wants to foster a grassroots GPU mining community during Grin's first two years. After two years, Grin hopes that ASICs will have become more of a commodity and thus decentralized.

To address this, it was proposed to use two PoW algorithms initially: one that is ASIC Friendly (AF) and one that is ASIC Resistant (AR), and then to select which PoW is used per block to balance the mining rewards between the two algorithms, over a 24‑hour period. The Governance committee resolved on 25 September 2018 to go ahead with this approach, using a modified version of the Cuckoo Cycle algorithm, called the Cuckatoo Cycle. The AF algorithm at launch will be Cuckatoo32+, which will gradually increase its memory requirements to make older single-chip ASICs obsolete over time. The AR algorithm is still not defined [23].

Governance Models and Monetary Policy

Both the Grin and BEAM projects are open-source and available on GitHub ([6], [7]). The Grin project has 75 contributors, eight of whom have contributed the vast majority of the code; BEAM has 10 contributors, four of whom have contributed the vast majority of the code (at the time of writing). The two projects have opted for different models of governance. BEAM has opted to set up a foundation, including the core developers, to manage the project; this is the route taken by most cryptocurrency projects in this space. The Grin community has decided against setting up a central foundation and has compiled an interesting discussion of the pros and cons of a centralized foundation [22]. This document weighs up the various governance functions a foundation might serve and evaluates each use case. The Grin community came to the conclusion that while foundations are useful, they are not the only solution to governance problems, and has therefore opted to remain a completely decentralized, community-driven project. Currently, decisions are made at periodic governance meetings convened on Gitter with community members, where an agenda is discussed and decisions are ratified. Agendas and minutes of these meetings can be found in the governance section of the Grin Forums; an example of the outcomes of such a meeting can be seen in [23].

Neither project will engage in an Initial Coin Offering (ICO) or pre-mine, but the two projects have different funding models. BEAM set up a Limited Liability Company (LLC) and has attracted investors to it for its initial round of funding, and has announced a non-profit BEAM Foundation that will take over the management of the protocol during the first year after launch [24]. The goal of the Foundation will be to support maintenance and further development of BEAM; promote relevant cryptographic research; support awareness and education in the areas of financial privacy; and support academic work in adjacent areas. The Foundation is to be funded from a portion of the mining rewards; in the industry, this treasury mechanism is called a dev tax. Grin will not levy a dev tax on the mining rewards, and will rely on community participation and community funding. The Grin project does accept financial support, but these funding campaigns are conducted according to its "Community Funding Principles" [25], on a "need-by-need" basis. A campaign specifies the specific need it aims to fulfill (e.g. "Hosting fees for X for the next year") and the funds are received by the community member who ran the campaign. This provides 100% visibility regarding who is responsible for the received funds. An example of a funding campaign is the Developer Funding Campaign run by Yeastplume to fund his full-time involvement in the project from October 2018 to February 2019; refer to [26].

In terms of monetary policy, BEAM has stated that it will use a deflationary model with periodic halving of its mining reward and a maximum supply of 262,800,000 coins. BEAM will start with 100 coins emitted per block; the first halving will occur after one year, and halving will then happen every four years [32]. Grin has opted for an inflationary model in which the block reward remains constant, making its arguments for this approach in [27]. This approach asymptotically tends towards zero percent dilution as the supply increases, instead of enforcing a fixed supply [28]. Grin has not yet finalized its mining reward or fee structure but, based on its current documentation, is planning on a reward of 60 grin per block. Neither project has made a final decision regarding how to structure fees, although the Grin project has started to explore setting a fee baseline using a metric of "fees per reward per minute" [28].
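The stated BEAM maximum supply is consistent with a one-minute block time. Note that this block time is an assumption made here purely to check the arithmetic (as noted earlier, BEAM had not published a block time at the time of writing):

```python
BLOCKS_PER_YEAR = 60 * 24 * 365      # assumes one-minute blocks

year_one = 100 * BLOCKS_PER_YEAR     # 100 coins/block for the first year
# From year 2 the reward is 50 and halves every 4 years; the infinite
# geometric series 50 * (1 + 1/2 + 1/4 + ...) sums to 100 coins/block
# averaged over the 4-year eras, i.e. 4 * BLOCKS_PER_YEAR * 100 coins.
later_eras = 4 * BLOCKS_PER_YEAR * 50 * 2
total = year_one + later_eras
print(f"{total:,}")                  # 262,800,000
```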

Conclusions, Observations and Recommendations

In summary, Grin and BEAM are two open-source projects that are implementing the Mimblewimble blockchain scheme. Both projects are building from scratch. Grin uses Rust while BEAM uses C++; therefore there are many technical differences in their design and implementation. However, from a functional perspective, both projects will support all the core Mimblewimble functionality. Each project does contain some unique functionality, but as Grin's goal is to produce a minimalistic implementation of Mimblewimble, most of the unique features that extend Mimblewimble lie in the BEAM project. The following list summarizes the functional similarities and differences between the two projects.

  • Similarities:
    • Core Mimblewimble feature set.
    • Dandelion relay protocol.
  • Grin unique features:
    • Partial history syncing.
    • DAG representation of Mempool to prevent duplicate UTXOs and cyclic transaction references.
  • BEAM unique features:
    • Secure BBS hosted on the nodes for establishing communication between wallets. Removes the need for sender and receiver to be online at the same time.
    • Use of decoy outputs in Dandelion stem phase. Decoy outputs are later spent to avoid clutter on blockchain.
    • Explicit UTXO incubation period.
    • Timelocks with a minimum and maximum threshold.
    • Auditable transactions as part of the roadmap.
    • One-sided transaction construction for non-interactive payments.
    • Use of Radix-Hash trees.

Both projects are still very young. At the time of writing this report (May 2019), both are still in the testnet phase, and many of their core design choices have not yet been built or tested. Much of the BEAM wiki is still in Russian, so it is likely that there are details to which we are not yet privy. It will be interesting to keep an eye on these projects to see how their various decisions play out, both technically and in terms of their monetary policy and governance models.


[1] T. E. Jedusor, "MIMBLEWIMBLE" [online]. Available: Date accessed: 2018‑09‑30.

[2] A. Poelstra, "Mimblewimble" [online]. Available: Date accessed: 2018‑09‑30.

[3] "Introduction to Mimblewimble and Grin" [online]. Available: Date accessed: 2018‑09‑30.

[4] "BEAM: The Scalable Confidential Cryptocurrency" [online]. Available: Date accessed: 2018‑09‑28.

[5] A. Gibson, "Flipping the Scriptless Script on Schnorr" [online]. Available: Date accessed: 2018‑09‑30.

[6] "Grin Github Repository" [online]. Available: Date accessed: 2018‑09‑30.

[7] "BEAM Github Repository" [online]. Available: Date accessed: 2018‑09‑30.

[8] "Grin - Transaction Pool" [online]. Available: Date accessed: 2018‑10‑22.

[9] S. B. Venkatakrishnan, G. Fanti and P. Viswanath, "Dandelion: Redesigning the Bitcoin Network for Anonymity" [online]. Available: Proc. ACM Meas. Anal. Comput. Syst. 1, 1, 2017. Date accessed: 2018‑10‑22.

[10] "Dandelion in Grin: Privacy-Preserving Transaction Aggregation and Propagation" [online]. Available: Date accessed: 2018‑09‑30.

[11] "Grin - Blockchain Syncing" [online]. Available: Date accessed: 2018‑10‑22.

[12] "BEAM - Node Initialization Synchronization" [online]. Available: Date accessed: 2018‑12‑24.

[13] "BEAM Description. Comparison with Classical MW" [online]. Available: Date accessed: 2018‑10‑18.

[14] "BEAM - Wallet Audit" [online]. Available: Date accessed: 2018‑09‑30.

[15] "Beam's Offline Transaction using Secure BBS System" [online]. Available: Date accessed: 2018‑10‑22.

[16] "GRIN - Merkle Structures" [online]. Available: Date accessed: 2018‑10‑22.

[17] "BEAM - Merkle Trees" [online]. Available: Date accessed: 2018‑10‑22.

[18] "BEAM - Confidential Assets" [online]. Available: Date accessed: 2018‑10‑22.

[19] A. Biryukov and D. Khovratovich, "Equihash: Asymmetric Proof-of-work based on the Generalized Birthday Problem" [online] Available: Proceedings of NDSS, 2016. Date accessed: 2018‑09‑30.

[20] "Cuckoo Cycle" [online]. Available: Date accessed: 2018‑09‑30.

[21] I. Peverell, "Proof of Work Update" [online]. Available: Date accessed: 2018‑09‑30.

[22] "Regarding Foundations" [online]. Available: Date accessed: 2018‑09‑30.

[23] "Meeting Notes: Governance, Sep 25 2018" [online]. Available: Date accessed: 2018‑09‑30.

[24] "BEAM Features" [online]. Available: Date accessed: 2018‑09‑30.

[25] "Grin's Community Funding Principles" [online]. Available: Date accessed: 2018‑09‑28.

[26] "Oct 2018 - Feb 2019 Developer Funding - Yeastplume" [online]. Available: Date accessed: 2018‑09‑30.

[27] "Monetary Policy" [online]. Available: Date accessed: 2018‑09‑30.

[28] "Economic Policy: Fees and Mining Reward" [online]. Available: Date accessed: 2018‑09‑30.

[29] "Grin's Proof-of-Work" [online]. Available: Date accessed: 2018‑09‑30.

[30] R. Lahat, "The Secure Bulletin Board System (SBBS) Implementation in Beam" [online]. Available: Date accessed: 2018‑12‑24.

[31] "Secure Bulletin Board System (SBBS)" [online]. Available: Date accessed: 2018‑12‑24.

[32] "Beam's Mining Specification" [online]. Available: Date accessed: 2018‑12‑24.

[33] "Beam's Transaction Graph Obfuscation" [online]. Available: Date accessed: 2018‑12‑24.


This section contains some information on topics discussed in the report, details of which are not directly relevant to the Grin vs. BEAM discussion.

Appendix A: Cuckoo/Cuckatoo Cycle Proof-of-Work Algorithm

The Cuckoo Cycle algorithm is based on finding cycles of a certain length of edges in a bipartite graph of N nodes and M edges. The graph is bipartite because it consists of two separate groups of nodes with edges that connect nodes from one set to the other. As an example, let's consider nodes with even indices to be in one group and nodes with odd indices to be in another group. Figure 2 shows eight nodes with four randomly placed edges, N = 8 and M = 4. So if we are looking for cycles of length 4, we can easily confirm that none exist in Figure 2. By adjusting the number of edges present in the graph vs. the number of nodes, we can control the probability that a cycle of a certain length exists in the graph. When looking for cycles of length 4, the difficulty, as illustrated in Figure 2, is that a 4/8 (M/N) graph would mean that the four edges would need to be randomly chosen in an exact cycle for one to exist [29].

Figure 2: Eight Nodes with Four Edges, no Solution [29]

If we increase the number of edges in the graph relative to the number of nodes, we adjust the probability of a cycle occurring in the randomly chosen set of edges. Figure 3 shows an example of M = 7 and N = 8, and it can be seen that a four-edged cycle appeared. Thus, we can control the probability of a cycle of a certain length occurring by adjusting the ratio of M/N [29].

Figure 3: Cycle Found from 0-5-4-1-0 [29]

Detecting that a cycle of a certain length has occurred in a graph with randomly selected edges becomes significantly more difficult as the graph gets larger. Figure 4 shows a 22‑node graph with 14 random edges. Can you determine whether a cycle of eight edges is present? [29]

Figure 4: 22 Nodes with 14 Edges, can you find a Cycle Eight Edges Long? [29]

The Cuckoo Cycle PoW algorithm is built to solve this problem. The bipartite graph that is analyzed is called a "Cuckoo Hashtable", where a key is inserted into two arrays, each with its own hash function, at a location based on the hash of the key. Each key inserted in this way produces an edge between the locations generated by the two hashing functions. Nonces are enumerated for the hashing functions until a cycle of the desired length is detected. This algorithm has two main control parameters: the M/N ratio and the number of nodes in the graph. There are a number of variants of this algorithm that make speed/memory trade-offs [20]. In Grin, a third difficulty parameter was introduced to tune the difficulty of the PoW algorithm more finely, to ensure a one-minute block time in the face of changing network hash rates. This is done by taking a Blake2b hash of the set of nonces and ensuring that the result is above a difficulty threshold [29].
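A toy illustration of the scheme: derive bipartite edges from Blake2b-hashed nonces and brute-force a four-edge cycle, as in Figures 2 and 3. The parameters are tiny and the edge-derivation is a stand-in for the Cuckoo Hashtable's two hash functions; real Cuckoo graphs have millions of nodes and look for much longer cycles.

```python
import hashlib
from itertools import combinations

def edge(key, nonce, n_nodes=8):
    """Derive one edge from a nonce: two hash bytes pick an even-indexed
    and an odd-indexed node (toy stand-in for the two hash functions)."""
    h = hashlib.blake2b(f"{key}:{nonce}".encode()).digest()
    even = (h[0] % (n_nodes // 2)) * 2
    odd = (h[1] % (n_nodes // 2)) * 2 + 1
    return even, odd

def has_four_cycle(edges):
    """Brute force: a 4-cycle exists iff two even nodes are both joined
    to the same two odd nodes (e.g. 0-5-4-1-0 in Figure 3)."""
    es = set(edges)
    evens = {u for u, _ in es}
    odds = {v for _, v in es}
    for u1, u2 in combinations(evens, 2):
        for v1, v2 in combinations(odds, 2):
            if {(u1, v1), (u1, v2), (u2, v1), (u2, v2)} <= es:
                return True
    return False

# "Mining": enumerate nonces until the derived edge set contains a cycle.
edges, nonce = [], 0
while not has_four_cycle(edges):
    edges.append(edge("header", nonce))
    nonce += 1
print(nonce, "nonces tried")
```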


Grin Design Choice Criticisms - Truth or Fiction


Grin is a cryptocurrency, implemented in Rust, that makes use of Mimblewimble transactions and the Cuckatoo algorithm to perform Proof-of-Work (PoW) calculations. The main design goals of the Grin project are privacy, transaction scaling and design simplicity to promote long-term maintenance of the Grin source code [1].

During the development of the Grin project, the developers have been criticized by the community regarding a number of their design and implementation decisions. This report will look at some of this criticism and determine if there is any truth to these concerns, or if they are unwarranted or invalid. Some suggestions will be made as to how these problems could be mitigated or addressed.

This report will also investigate Grin's selected emission scheme, PoW algorithm, choice of cryptographic curve used for signatures, and selection of key-store library. Each of these topics will be discussed in detail.

Monetary Policy due to Choice of Static Emission Scheme

Bitcoin has a limited and finite supply of coins. It makes use of 10‑minute block times, and the initial reward for solving the first block was 50 BTC. This reward is halved every four years (every 210,000 blocks) until a maximum of 21 million coins are in circulation [2]. During this process, the transaction fees and newly minted coins are paid to miners as an incentive to maintain the blockchain. Once all 21 million bitcoins are released, only transaction fees will be paid to miners. Many fear that paying miners only transaction fees in the future will not be sufficient to maintain a large network of miners. This would result in centralization of the network, as only large mining farms would be able to mine profitably. Others believe that in time mining fees will increase and hardware costs for miners will decrease, keeping mining profitable and the Bitcoin blockchain maintained [3].
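The 21 million cap follows directly from this halving schedule; a quick check, using satoshi-level integer arithmetic so the sub-satoshi rewards round away just as they do in the actual subsidy calculation:

```python
# 50 BTC per block, halving every 210,000 blocks, until the integer
# subsidy (in satoshi) rounds down to zero.
subsidy = 50 * 100_000_000          # initial reward in satoshi
total = 0
while subsidy > 0:
    total += subsidy * 210_000
    subsidy //= 2                   # halving
print(total / 100_000_000)          # just under 21,000,000 BTC
```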

Grin has decided on a different approach, in which its number of coins is not capped at a fixed supply. It makes use of a static emission rate, where a constant 60 grin is released as a reward for solving every block, with a block time goal of 60 seconds. This results in approximately one coin being created every second for as long as the blockchain is maintained [4].

Grin's primary motivations for selecting a static emission rate are:

  • there will be no upper limit on the number of coins that can be created;
  • the percentage of newly created coins compared to the total coins in circulation will tend toward zero;
  • it will mitigate the effect of orphaned and lost coins; and
  • it will encourage spending rather than holding of coins.

The selected emission rate will result in Grin becoming a highly inflationary currency, with more than 10% inflation for the first 10 years, which is higher than most competing cryptocurrencies or successful fiat systems. By comparison, Monero will have less than 1% inflation after its first eight years in circulation, and a decreasing 0.87% inflation with the start of its tail emission [5]. Monero therefore has better potential to be used as a Store of Value (SoV) in the long run.

The fixed emission rate of Grin, on the other hand, will limit its use as an SoV, as it will experience constant price pressure. This might make it difficult for Grin to maintain a high value initially, while the inflation rate remains high. The high inflation rate may instead encourage Grin's use as a Medium of Exchange (MoE) [6], as it will take approximately 50 years for the inflation to drop below 2%. The Grin team believes that the effective inflation rate is not that high, as many coins are lost and become unusable; these lost coins, which the team believes can amount to as much as 2% of the total supply per year, should be excluded from the inflation rate calculation [7]. The total percentage of lost transactional coins is difficult to estimate [8], and appears to be higher for low-value coins than for high-value coins, where users tend to be more careful. The Grin team believes that selecting a high inflation rate will improve the distribution of coins, as holding of coins will be discouraged. It also hopes that a high inflation rate will produce natural pricing and limit price manipulation by large coin holders [7].
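The shape of this inflation curve follows directly from the static emission: annual issuance is constant, so dilution in year n is roughly 1/(n-1) of the existing supply. A quick check of the 10% and 2% figures:

```python
BLOCKS_PER_YEAR = 60 * 24 * 365     # one-minute blocks
REWARD = 60                          # constant 60-grin block reward

def inflation_pct(year):
    """Annual dilution under static emission: new coins issued in `year`
    divided by the supply at the start of that year (year >= 2)."""
    issued = REWARD * BLOCKS_PER_YEAR
    supply_at_start = issued * (year - 1)
    return 100 * issued / supply_at_start

print(inflation_pct(11))   # 10.0 -> still 10% going into year 11
print(inflation_pct(51))   # 2.0  -> drops to 2% around year 50
```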

Most economists for traditional fiat systems agree that deflation is bad, as it increases debt; and some inflation is good, as it stimulates the economy of a country [9]. With inflation, the purchasing power of savings decreases over time. This encourages the purchasing of goods and services, resulting in the currency being used as an MoE rather than as an SoV. People with debt such as study loans, vehicle loans and home loans also benefit from inflation, as it produces an eroding effect on the total debt for long periods of repayment. Currently, this benefit does not apply to cryptocurrencies, as not much debt exists. This is because it is difficult to maintain successful borrower-lender relationships due to the anonymous nature of cryptocurrencies [10].

On the other hand, over time, deflation in traditional fiat systems produces an increase of purchasing power that encourages saving and discourages debt, resulting in the currency being used as an SoV. Unfortunately, this comes with a negative side effect that people will stop purchasing goods and services. Bitcoin can be considered deflationary, as people would rather buy and hold Bitcoins, as the price per coin might increase over time. This is limiting its use as an MoE. Deflation can also cause a deflationary spiral, as people with debt will have more debt and people with money will start hoarding their money, as it might be worth more at a later stage [11]. Deflation in traditional fiat systems typically tends to only happen in times of economic crisis and recession, and is managed by introducing inflation, using monetary policies [12].

As most inflationary fiat systems are government backed, they are able to control the level of inflation to help alleviate government debt and finance budget deficits [13]. This could result in hyperinflation where the devaluation of currency occurs at an extreme pace, resulting in many people losing their savings and pensions [14]. Cryptocurrencies, on the other hand, provide a transparent algorithmic monetary inflation that is not controlled by a central authority or government. This limits its misuse.

Finding a good balance between being an SoV and MoE is an important issue for developing a successful currency. A balance between deflation and inflation needs to be achieved to motivate saving and at the same time spending of a currency. A low inflationary model where inflation is algorithmically maintained and not controlled by a single authority seems to be the safest choice. However, only time will tell if the high inflation model proposed by Grin will have the desired effect.

Proof-of-Work Algorithm - from ASIC Resistant to ASIC Friendly

Initially, the Grin team proposed using two Application-Specific Integrated Circuit (ASIC) resistant algorithms: Cuckoo cycles and a high memory requirement Equihash algorithm called Equigrin. These algorithms were selected to encourage mining decentralization. ASIC resistance was obtained by having high memory requirements for the PoW algorithms, limiting their calculation to Central Processing Units (CPUs) and High-range Graphics Processing Units (GPUs) [15]. The plan was to adjust the parameters of these PoW algorithms every six months to deter stealth ASIC mining and move over to using only Cuckoo cycles as the primary PoW algorithm.

Recently, the Grin team proposed switching to a new dual PoW system, where one PoW algorithm is ASIC friendly and the other PoW algorithm is not. Grin will now make use of the new Cuckatoo Cycle algorithm, but details of its second PoW algorithm remain vague. The Cuckatoo PoW algorithm is a variation of Cuckoo that aims to be more ASIC friendly [16]. This is achieved by using plain bits for ternary counters and requiring large amounts of Static Random-Access Memory (SRAM) to speed up the memory latency bound access of random node bits. SRAM tends to be limited on GPU and CPU processors, but increasing SRAM on ASIC processors is much easier to implement [17].

ASIC miners tend to be specialized hardware that is very efficient at calculating and solving specific PoW algorithms. Encouraging ASIC miners on a network might not seem like a bad idea, as the mining network will have a higher hash rate. This will make it more difficult to hack and it will use less electrical power compared to using primarily CPU and GPU based miners.

Unfortunately, a negative side effect of running an ASIC-friendly PoW algorithm is that the network of miners becomes more centralized. General consumers have neither access to nor a need for this type of hardware, so ASIC mining remains primarily the preserve of enthusiasts and large corporations establishing mining farms. Having most of the network's hash rate localized in large mining farms makes the blockchain more vulnerable to potential 51% attacks [18], especially when specific ASIC manufacturers recommend, or enforce, the use of their hardware with specific mining pools controlled by single bodies.

Using general-purpose and multi-use hardware such as CPUs and GPUs that are primarily used for gaming and large workstations ensures that the network of miners is more widely distributed and that it is not controlled by a single potential bad player. This will make it more difficult for a single entity to control more than 50% of the network's hash rate or total computational power, limiting the potential of double spends.
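The double-spend risk described above can be quantified with the attacker catch-up calculation from the Bitcoin white paper [2]: the probability that an attacker controlling a fraction q of the hash rate ever overtakes the honest chain from z blocks behind. A direct transcription of that calculation:

```python
from math import exp, factorial

def attacker_success(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of the total hash rate
    ever catches up from z blocks behind (Nakamoto's calculation, Bitcoin
    white paper, section 11). Honest majority holds p = 1 - q."""
    p = 1.0 - q
    lam = z * q / p                       # expected attacker progress
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam**k / factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

# With 10% of the hash rate and six confirmations, the attacker's chance
# is already well below 0.1%; as q approaches 50% it approaches certainty.
```

This is why concentrating hash rate in a few mining farms matters: the success probability is extremely sensitive to q, and at q >= 0.5 the formula degenerates to 1 for any number of confirmations.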

Choosing to be ASIC resistant or ASIC friendly is an important decision that can affect the security of the blockchain. The Grin team's attempt to support the ASIC community while balancing an ASIC-friendly and an ASIC-resistant PoW algorithm will be interesting to watch, and has many potential pitfalls.

Choice of Cryptographic Elliptic-curve - secp256k1

Elliptic curve cryptography is used for generating private and public key pairs that can be used for digital signatures, as well as for authorization of individuals and transactions. Compared to other public-key cryptography techniques such as RSA, it is more secure and requires smaller keys for a similar level of security [19].

Secp256k1 is an elliptic curve defined in the Standards for Efficient Cryptography [20] and is used for digital signatures in a number of cryptocurrencies such as Bitcoin, Ethereum, EOS and Litecoin [21]. Grin makes use of this same elliptic curve [22]. Some security experts recommend against using the secp256k1 curve, as some issues have been uncovered with it, although not necessarily exploited. One of these problems is that the complex-multiplication field discriminant is not high enough to be secure. This could result in future exploits, as curves with a low complex-multiplication field discriminant tend to be easier to break [23].
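The key-pair generation that secp256k1 supports reduces to scalar multiplication of the curve's generator point: a private key is just a scalar k, and the public key is k·G. A minimal, illustrative sketch of that arithmetic using the published secp256k1 domain parameters (educational only; it is not constant-time and hand-rolled curve code should never be used in production):

```python
# secp256k1: y^2 = x^3 + 7 over F_P, with generator G of prime order N.
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # inverses cancel
    if p1 == p2:                                      # point doubling
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P
    else:                                             # distinct points
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add scalar multiplication: computes k * point."""
    result, addend = None, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

priv = 0xC0FFEE                 # any secret scalar in [1, N-1]
pub = scalar_mult(priv, G)      # the corresponding public key point
```

Note that N·G returns the point at infinity, confirming the generator's order; signature schemes such as ECDSA and Schnorr (which Grin uses) are built on top of exactly this operation.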

Starting a project with a potentially compromised curve does not seem like a good idea, especially when other curves with better security properties and characteristics exist. Curve25519, for example, can be used with the improved Ed25519 public-key signature system. The Ed25519 signature scheme makes use of the Edwards-curve Digital Signature Algorithm (EdDSA) with SHA-512 and Curve25519 [24] to build a fast signature scheme without sacrificing security.

Many additional alternatives exist, and platforms such as SafeCurves, maintained by Daniel J. Bernstein and Tanja Lange, can help with the investigation and selection of an alternative curve. SafeCurves makes it easier to evaluate the security properties and potential vulnerabilities of many cryptographic curves [25].

Selection of Key-store Library

Grin originally made use of RocksDB [26] as an internal key-value store, but received some criticism for this decision. A number of alternatives with other performance and security characteristics exist, such as LevelDB [27], HyperLevelDB [28] and the Lightning Memory-Mapped Database (LMDB) [29]. Selecting between these to find the "best" key-value store library for blockchain applications remains a difficult problem, as many online sources provide conflicting information.

Based on the controversial results of a number of online benchmarks, some of these alternatives appear to perform better, producing smaller database sizes and faster queries [30]. For example, RocksDB and LevelDB appear, incorrectly, to be better alternatives to LMDB, producing the fastest reads and deletes as well as some of the smallest databases [31]. This is not entirely true, as some mistakes were made during the testing process. Howard Chu's article "Lies, Damn Lies, Statistics, and Benchmarks" exposes some of these issues and shows LMDB to be the best key-value store library [32]. Other benchmarks performed by Symas Corp support this claim, with LMDB outperforming all the key store libraries tested [33].
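The disputes above come down to measurement methodology. A minimal micro-benchmark harness is sketched below using stdlib `sqlite3` as a stand-in key-value store (an assumption for portability; the real LMDB/RocksDB bindings would slot into the same loop). The point is the shape of a fair measurement, not the absolute numbers:

```python
import sqlite3
import time

def bench_put_get(n: int = 10_000):
    """Time n put operations and sampled get operations against an
    in-memory SQLite key-value table.

    Illustrative only: credible key-value benchmarks must also control for
    sync/durability settings, key distribution, dataset size vs. RAM, and
    warm-up -- the very factors the disputed LevelDB/RocksDB/LMDB
    benchmarks disagreed on.
    """
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE kv (k BLOB PRIMARY KEY, v BLOB)")

    t0 = time.perf_counter()
    with db:  # single transaction, as a batched write workload would use
        db.executemany(
            "INSERT INTO kv VALUES (?, ?)",
            ((str(i).encode(), b"x" * 32) for i in range(n)),
        )
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    for i in range(0, n, 100):  # sampled point lookups
        db.execute("SELECT v FROM kv WHERE k = ?", (str(i).encode(),)).fetchone()
    read_s = time.perf_counter() - t0
    return write_s, read_s

writes, reads = bench_put_get()
```

Even this toy harness shows how easily a benchmark can mislead: moving the `with db:` transaction inside the loop, or using an on-disk file with different sync settings, changes the write figure by orders of magnitude.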

Grin later replaced RocksDB with LMDB to maintain the state of Grin wallets [34]. This switch appears to be a good idea, as LMDB seems to be the best key-value store library for blockchain-related applications.

Conclusions, Observations and Recommendations

  • Selecting the correct emission rate to create a sustainable monetary policy is an important decision. Care should be taken to ensure that the right balance is found between being an SoV and/or an MoE.
  • Weighing the benefits and potential issues of being ASIC friendly compared to ASIC resistant needs to be carefully evaluated.
  • Tools such as SafeCurves can be used to select a secure elliptic curve for an application. Cryptographic curves with even potential security vulnerabilities should rather be avoided.
  • Care should be taken when using online benchmarks to help select libraries for a project, as the results might be misleading.


[1] M. Franzoni, "Grin: a Lightweight Implementation of the MimbleWimble Protocol" [online]. Available: Date accessed: 2018‑10‑05.

[2] S. Nakamoto, "Bitcoin: A Peer-to-Peer Electronic Cash System" [online]. Available: Date accessed: 2018‑10‑05.

[3] A. Barone, "What Happens to Bitcoin after all 21 Million are Mined?" [online]. Available: Date accessed: 2018‑10‑07.

[4] "Emission Rate of Grin" [online]. Available: Date accessed: 2018‑10‑15.

[5] "Coin Emission and Block Reward Schedules: Bitcoin vs. Monero" [online]. Available: Date accessed: 2018‑10‑15.

[6] "On Grin, MimbleWimble, and Monetary Policy" [online]. Available: Date accessed: 2018‑10‑07.

[7] "Grin - Monetary Policy" [online]. Available: Date accessed: 2018‑10‑08.

[8] J. J. Roberts and N. Rapp, "Exclusive: Nearly 4 Million Bitcoin Lost Forever, New Study Says" [online]. Available: Date accessed: 2018‑10‑08.

[9] A. Ancheta, "How Inflationary should Cryptocurrency really be?" [online]. Available: Date accessed: 2018‑11‑06.

[10] L. Mutch, "Debtcoin: Credit, Debt, and Cryptocurrencies" [online]. Available: Date accessed: 2018‑11‑06.

[11] B. Curran, "Inflation vs Deflation: A Guide to Bitcoin & Cryptocurrencies Deflationary Nature" [online]. Available: Date accessed: 2018‑11‑06.

[12] A. Hayes, "Why is Deflation Bad for the Economy?" [online]. Available: Date accessed: 2018‑11‑06.

[13] J. H. Cochrane, "Inflation and Debt" [online]. Available: Date accessed: 2018‑11‑07.

[14] L. Ziyuan, "Think Piece: Fighting Hyperinflation with Cryptocurrencies" [online]. Available: Date accessed: 2018‑11‑07.

[15] "Grin - Proof of Work Update" [online]. Available: Date accessed: 2018‑10‑15.

[16] "Grin - Meeting Notes: Governance, Sep 25 2018" [online]. Available: Date accessed: 2018‑10‑15.

[17] "Cuck(at)oo Cycle" [online]. Available: Date accessed: 2018‑10‑15.

[18] "51% Attack" [online]. Available: Date accessed: 2018‑10‑11.

[19] H. Knutson, "What is the Math behind Elliptic Curve Cryptography?" [online]. Available: Date accessed: 2018‑10‑14.

[20] "Standards for Efficient Cryptography Group" [online]. Available: Date accessed: 2018‑10‑11.

[21] "Secp256k1" [online]. Available: Date accessed: 2018‑10‑15.

[22] "Grin - Schnorr Signatures in Grin & Information" [online]. Available: Date accessed: 2018‑10‑08.

[23] "SafeCurves - CM Field Discriminants" [online]. Available: Date accessed: 2018‑10‑15.

[24] D. J. Bernstein, "Curve25519: New Diffie-Hellman Speed Records" [online]. Available: Date accessed: 2018‑10‑15.

[25] "SafeCurves - Choosing Safe Curves for Elliptic-curve Cryptography" [online]. Available: Date accessed: 2018‑10‑10.

[26] "RocksDB" [online]. Available: Date accessed: 2018‑10‑10.

[27] "LevelDB" [online]. Available: Date accessed: 2018‑10‑15.

[28] "HyperLevelDB" [online]. Available: Date accessed: 2018‑10‑15.

[29] "LMDB" [online]. Available: Date accessed: 2018‑10‑29.

[30] P. Dix, "Benchmarking LevelDB vs. RocksDB vs. HyperLevelDB vs. LMDB Performance for InfluxDB" [online]. Available: Date accessed: 2018‑10‑15.

[31] B. Alex, "Lmdbjava - Benchmarks" [online]. Available: Date accessed: 2018‑10‑14.

[32] H. Chu, "Lies, Damn Lies, Statistics, and Benchmarks" [online]. Available: Date accessed: 2018‑10‑29.

[33] "HyperDex Benchmark, Symas Corp" [online]. Available: Date accessed: 2018‑10‑29.

[34] Yeastplume, "Progress Update May - Sep 2018" [online]. Available: Date accessed: 2018‑10‑28.


Atomic Swaps

What are Atomic Swaps?

Atomic swaps, or cross-chain atomic swaps [1], are, in a nutshell, decentralized exchanges, but only for cryptocurrencies. They allow multiple parties to exchange two different cryptocurrencies in a trustless environment. If one party defaults on or fails to complete the transaction, neither party can "run off" with the other's money. For this to work, two technologies are required: a payment channel and Hashed Timelock Contracts. One implementation of a payment channel is the Lightning Network.
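The "neither party can run off" guarantee comes from locking each side's funds behind two conditions: a hashlock (claimable only with a secret preimage) and a timelock (refundable after a deadline). A toy model of that contract logic is sketched below; the class and names here are illustrative, not an on-chain script, since real HTLCs encode these conditions in each chain's scripting language:

```python
import hashlib
import time

class Htlc:
    """Toy hashed-timelock contract: funds are claimable with the secret
    preimage before the deadline, and refundable to the sender afterwards.
    Illustrative model only -- real HTLCs live in blockchain script."""

    def __init__(self, sender, recipient, amount, secret_hash, timeout_s):
        self.sender, self.recipient, self.amount = sender, recipient, amount
        self.secret_hash = secret_hash
        self.deadline = time.time() + timeout_s
        self.settled = False

    def claim(self, preimage: bytes) -> str:
        """Recipient claims the funds by revealing the secret in time."""
        assert not self.settled and time.time() < self.deadline, "too late"
        assert hashlib.sha256(preimage).digest() == self.secret_hash, "bad preimage"
        self.settled = True
        return self.recipient

    def refund(self) -> str:
        """After the deadline, unclaimed funds return to the sender."""
        assert not self.settled and time.time() >= self.deadline, "too early"
        self.settled = True
        return self.sender

# Alice picks a secret; HTLCs on both chains lock on its hash, so claiming
# one side reveals the preimage needed to claim the other.
secret = b"alice-secret"
htlc = Htlc("alice", "bob", 10, hashlib.sha256(secret).digest(), timeout_s=3600)
```

The atomicity in a swap comes from reusing one hash on both chains: when Bob claims with the preimage on one chain, that same preimage becomes public, letting Alice claim on the other; if nobody claims, both timelocks refund.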

Hashed Timelock Contracts

Hashed Timelock Contracts (HTLCs) [2] are one of the most important technologies required for atomic swaps. This is a payment class tha