Introduction
Welcome to Tari Labs University (TLU). Our mission is to be the premier destination for balanced and accessible learning material about blockchain, digital currency and digital assets.
We hope to make this a learning experience for us at TLU, whether as a means to grow our knowledge base and internal expertise or as a refresher. We think this will also be an excellent resource for anyone interested in the myriad disciplines required to understand blockchain technology.
We would like this platform to be a place of learning accessible to anyone, irrespective of their degree of expertise. Our aim is to cover a wide range of topics relevant to the TLU space, starting at a beginner level and extending along a path of increasing complexity.
You are welcome to contribute to our online content by submitting a pull request or issue in GitHub. To help you get started, we've compiled a Style Guide for TLU reports. Using this Style Guide, you can help us to ensure consistency in the content and layout of TLU reports.
Errors, Comments and Contributions
We would like this collection of educational presentations and videos to be a collaborative affair, and that extends to our presentations. We are learning along with you. Our content may not be perfect first time around, so we invite you to alert us to errors and issues or, better yet, to write the correction yourself and make a pull request.
As much as this learning platform is called Tari Labs University and will see input from many internal contributors and external experts, we would like you to contribute to new material, be it in the form of a suggestion of topics, varying the skill levels of presentations, or posting presentations that you may feel will benefit us as a growing community. In the words of Yoda, “Always pass on what you have learned.”
Guiding Principles
If you are considering contributing content to TLU, please be aware of our guiding principles:
- The topic researched should be potentially relevant to the Tari protocol; chat to us on #tariresearch on IRC if you're not sure.
- The topic should be thoroughly researched.
- A critical approach should be taken (in the academic sense), with critiques and commentaries sought out and presented alongside the main topic. Remember that every white paper promises the world, so go and look for counterclaims.
- A recommendation/conclusion section should be included, providing a critical analysis of whether or not the technology/proposal would be useful to the Tari protocol.
- The work presented should be easy to read and understand, distilling complex topics into a form that is accessible to a technical but non-expert audience. Use your own voice.
Submission Process
This is the basic submission process we follow within TLU. We would appreciate it if you, as an external contributor, follow the same process.
1. Get some agreement from the community that the topic is of interest.
2. Write up your report.
3. Push a first draft of your report as a pull request.
4. The community will peer-review the report, much as we would a code pull request.
5. The report is merged into master.
6. Receive the fame and acclaim that is due.
Learning Paths
In a field that is developing as rapidly as this one, it is very important to stay up to date with the latest trends, protocols and products. Resources for building your knowledge in the blockchain and cryptocurrency domains are arranged into the following learning paths:
| Learning Path | Description |
| ------------- | ----------- |
| 1. Blockchain Basics | Blockchain Basics provides a broad and easily understood explanation of blockchain technology, as well as the history of value itself. Having an understanding of value is especially important in grasping how blockchain and cryptocurrencies have the potential to reshape the financial industry. |
| 2. Mimblewimble Implementation | Mimblewimble is a blockchain protocol that focuses on privacy through the implementation of confidential transactions. It enables a greatly simplified blockchain in which all spent transactions can be pruned, resulting in a much smaller blockchain footprint and efficient base node validation. The blockchain consists only of block headers, remaining Unspent Transaction Outputs (UTXOs) with their range proofs, and an unprunable transaction kernel per transaction. |
| 3. Bulletproofs | Bulletproofs are short non-interactive zero-knowledge proofs that require no trusted setup. A Bulletproof can be used to convince a verifier that an encrypted plaintext is well formed. For example, prove that an encrypted number is in a given range, without revealing anything else about the number. |
Key
| Level | Description |
| ----- | ----------- |
| Beginner | These are easy-to-digest articles, reports and presentations |
| Intermediate | Requires foundational knowledge and a basic understanding of the content |
| Advanced | Prior knowledge and/or mathematical skills are essential to understand these information sources |
Blockchain Basics
Background
Blockchain Basics provides a broad and easily understood explanation of blockchain technology, as well as the history of value itself. Having an understanding of value is especially important in grasping how blockchain and cryptocurrencies have the potential to reshape the financial industry.
Learning Path Matrix
For learning purposes, we have arranged report, presentation and video topics in a matrix, in categories of difficulty, interest and format.
| Topics | Type |
| ------ | ---- |
| So you think you need a blockchain? Part I | |
| So you think you need a blockchain? Part II | |
| BFT Consensus Mechanisms | |
| Lightning Network for Dummies | |
| Atomic Swaps | |
| Layer 2 Scaling Survey | |
| Merged Mining Introduction | |
| Introduction to SPV, Merkle Trees and Bloom Filters | |
| Elliptic Curves 101 | |
Mimblewimble Implementation
Background
Mimblewimble is a blockchain protocol that focuses on privacy through the implementation of confidential transactions. It enables a greatly simplified blockchain in which all spent transactions can be pruned, resulting in a much smaller blockchain footprint and efficient base node validation. The blockchain consists only of block headers, remaining Unspent Transaction Outputs (UTXOs) with their range proofs, and an unprunable transaction kernel per transaction.
Learning Path Matrix
| Topics | Type |
| ------ | ---- |
| Mimblewimble | |
| Introduction to Schnorr Signatures | |
| Introduction to Scriptless Scripts | |
| Mimblewimble-Grin Blockchain Protocol Overview | |
| Grin vs. BEAM; a Comparison | |
| Grin Design Choice Criticisms - Truth or Fiction | |
| Mimblewimble Transactions Explained | |
| Mimblewimble Multiparty Bulletproof UTXO | |
Bulletproofs
Background
Bulletproofs are short non-interactive zero-knowledge proofs that require no trusted setup. A Bulletproof can be used to convince a verifier that an encrypted plaintext is well formed. For example, prove that an encrypted number is in a given range, without revealing anything else about the number.
Learning Path Matrix
| Topics | Type |
| ------ | ---- |
| Bulletproofs and Mimblewimble | |
| Building on Bulletproofs | |
| The Bulletproof Protocols | |
| Mimblewimble Multiparty Bulletproof UTXO | |
Cryptography
Purpose
"The purpose of cryptography is to protect data transmitted in the likely presence of an adversary" [1].
Definitions

From Wikipedia: Cryptography or cryptology (from Ancient Greek κρυπτός, kryptós, "hidden, secret"; and γράφειν, graphein, "to write", or λογία, logia, "study", respectively) is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages; various aspects of information security, such as data confidentiality, data integrity, authentication and non-repudiation, are central to modern cryptography [2].

From SearchSecurity: Cryptography is a method of protecting information and communications through the use of codes so that only those for whom the information is intended can read and process it. The prefix "crypt" means "hidden" or "vault" and the suffix "graphy" stands for "writing" [3].
References
[1] L. Koved, A. Nadalin, N. Nagaratnam and M. Pistoia, "The Theory of Cryptography" [online], informIT. Available: http://www.informit.com/articles/article.aspx?p=170808. Date accessed: 2019‑06‑07.
[2] Wikipedia: "Cryptography" [online]. Available: https://en.wikipedia.org/wiki/Cryptography. Date accessed: 2019‑06‑07.
[3] SearchSecurity: "Cryptography". Available: https://searchsecurity.techtarget.com/definition/cryptography. Date accessed: 2019‑06‑07.
Elliptic curves 101
Introduction to Schnorr Signatures
- Overview
- Let's get Started
- Basics of Schnorr Signatures
- Schnorr Signatures
- MuSig
- References
- Contributors
Overview
Private-public key pairs are the cornerstone of much of the cryptographic security underlying everything from secure web browsing to banking to cryptocurrencies. Private-public key pairs are asymmetric. This means that given one of the numbers (the private key), it's possible to derive the other one (the public key). However, doing the reverse is not feasible. It's this asymmetry that allows one to share the public key, uh, publicly and be confident that no one can figure out our private key (which we keep very secret and secure).
Asymmetric key pairs are employed in two main applications:
- in authentication, where you prove that you have knowledge of the private key; and
- in encryption, where messages can be encoded and only the person possessing the private key can decrypt and read the message.
In this introduction to digital signatures, we'll be talking about a particular class of keys: those derived from elliptic curves. There are other asymmetric schemes, not least of which are those based on products of prime numbers, including RSA keys [1].
We're going to assume you know the basics of elliptic curve cryptography (ECC). If not, don't stress, there's a gentle introduction in a previous chapter.
Let's get Started
This is an interactive introduction to digital signatures. It uses Rust code to demonstrate some of the ideas presented here, so you can see them at work. The code for this introduction uses the libsecp256k1-rs library.
That's a mouthful, but secp256k1 is the name of the elliptic curve that secures a lot of things in many cryptocurrencies' transactions, including Bitcoin.
This particular library has some nice features. We've overridden the `+` (addition) and `*` (multiplication) operators so that the Rust code looks a lot more like mathematical formulae. This makes it much easier to play with the ideas we'll be exploring.
WARNING! Don't use this library in production code. It hasn't been battle-hardened, so use this one in production instead.
Basics of Schnorr Signatures
Public and Private Keys
The first thing we'll do is create a public and private key from an elliptic curve.
On secp256k1, a private key is simply a scalar integer value between 0 and approximately \(2^{256}\). That's roughly how many atoms there are in the universe, so we have a big sandbox to play in.
We have a special point on the secp256k1 curve called G, which acts as the "origin". A public key is calculated by
adding G on the curve to itself, \( k_a \) times. This is the definition of multiplication by a scalar, and is
written as:
$$
P_a = k_a G
$$
Let's take an example from this post, where it is known that the public key for `1`, when written in uncompressed format, is `0479BE667...C47D08FFB10D4B8`. The following code snippet demonstrates this:
```rust
extern crate libsecp256k1_rs;

use libsecp256k1_rs::{ SecretKey, PublicKey };

#[allow(non_snake_case)]
fn main() {
    // Create the secret key "1"
    let k = SecretKey::from_hex("0000000000000000000000000000000000000000000000000000000000000001").unwrap();
    // Generate the public key, P = k.G
    let pub_from_k = PublicKey::from_secret_key(&k);
    let known_pub = PublicKey::from_hex("0479BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8").unwrap();
    // Compare it to the known value
    assert_eq!(pub_from_k, known_pub);
    println!("Ok")
}
```
Creating a Signature
Approach Taken
Reversing ECC scalar multiplication (i.e. division) is pretty much infeasible when using properly chosen random values for your scalars ([5],[6]). This property is called the Discrete Log Problem, and it underpins many cryptographic schemes, including digital signatures. A valid digital signature is evidence that the person providing the signature knows the private key corresponding to the public key with which the message is associated, or that they have solved the Discrete Log Problem.
The approach to creating signatures always follows this recipe:
- Generate a secret once-off number (called a nonce), $r$.
- Create a public key, $R$, from $r$ (where $R = r.G$).
- Send the following to Bob, your recipient: your message ($m$), $R$, and your public key ($P = k.G$).
The actual signature is created by hashing the combination of all the public information above to create a challenge, $e$: $$ e = H(R \| P \| m) $$ The hashing function is chosen so that $e$ has the same range as your private keys. In our case, we want something that returns a 256-bit number, so SHA256 is a good choice.
Now the signature is constructed using your private information: $$ s = r + ke $$ Bob can now also calculate $e$, since he already knows $m, R, P$. But he doesn't know your private key, or nonce.
Note: When you construct the signature like this, it's known as a Schnorr signature, which is discussed in a following section. There are other ways of constructing $s$, such as ECDSA [2], which is used in Bitcoin.
But see this:
$$ sG = (r + ke)G $$
Multiply out the right-hand side:
$$ sG = rG + (kG)e $$
Substitute \(R = rG \) and \(P = kG \) and we have: $$ sG = R + Pe $$
So Bob must just calculate the public key corresponding to the signature, \( sG \), and check that it equals the right-hand side of the last equation above, \( R + eP \), all of which Bob already knows.
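To make the algebra above concrete, here is a minimal toy sketch of the full sign-and-verify cycle. It works in the order-$q$ subgroup of \( \mathbb{Z}_p^* \) with deliberately tiny, insecure parameters; all constants and the stand-in `challenge` function are illustrative assumptions, not secp256k1 values or SHA256. Multiplicative notation `g^k mod p` plays the role of the scalar multiple \( kG \):

```rust
// Toy Schnorr signature: NOT secure, for illustrating the algebra only.
const P: u64 = 607; // modulus (prime)
const Q: u64 = 101; // subgroup order (prime, divides P - 1 = 606)
const G: u64 = 64;  // generator of the order-Q subgroup: 2^((P-1)/Q) mod P

// Modular exponentiation by squaring
fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

// Stand-in for H(R || P || m): any deterministic map into Z_Q works here
fn challenge(r_pub: u64, p_pub: u64, m: u64) -> u64 {
    (r_pub.wrapping_mul(31) ^ p_pub.wrapping_mul(17) ^ m) % Q
}

fn main() {
    let k = 42;                   // private key
    let p_pub = pow_mod(G, k, P); // public key, "P = kG"
    let m = 12345;                // message

    // Sign: pick a nonce r, publish R = rG, then s = r + k.e (mod Q)
    let r = 77;
    let r_pub = pow_mod(G, r, P);
    let e = challenge(r_pub, p_pub, m);
    let s = (r + k * e) % Q;

    // Verify: sG must equal R + eP, i.e. g^s == R * P^e (mod p)
    let lhs = pow_mod(G, s, P);
    let rhs = r_pub * pow_mod(p_pub, e, P) % P;
    assert_eq!(lhs, rhs);
    println!("signature verifies: g^s == R * P^e (mod p)");
}
```

Note that the verifier touches only public data (`r_pub`, `p_pub`, `m`, `s`); the private key `k` and nonce `r` never leave the signer.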
Why do we Need the Nonce?
Why do we need a nonce in the standard signature?
Let's say we naïvely sign a message $m$ with $$ e = H(P \| m) $$ and then the signature would be \(s = ek \).
Now as before, we can check that the signature is valid: $$ \begin{align} sG &= ekG \\ &= e(kG) = eP \end{align} $$ So far so good. But anyone can read your private key now because $s$ is a scalar, so \(k = {s}/{e} \) is not hard to do. With the nonce you have to solve \( k = (s  r)/e \), but $r$ is unknown, so this is not a feasible calculation as long as $r$ has been chosen randomly.
We can show that leaving off the nonce is indeed highly insecure:
```rust
extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{SecretKey, PublicKey, thread_rng, Message};
use secp256k1::schnorr::{Challenge};

#[allow(non_snake_case)]
fn main() {
    // Create a random private key
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    println!("My private key: {}", k);
    let P = PublicKey::from_secret_key(&k);
    let m = Message::hash(b"Meet me at 12").unwrap();
    // Challenge, e = H(P || m)
    let e = Challenge::new(&[&P, &m]).as_scalar().unwrap();
    // Signature
    let s = e * k;
    // Verify the signature
    assert_eq!(PublicKey::from_secret_key(&s), e*P);
    println!("Signature is valid!");
    // But let's try calculate the private key from known information
    let hacked = s * e.inv();
    assert_eq!(k, hacked);
    println!("Hacked key: {}", hacked)
}
```
ECDH
How do parties that want to communicate securely generate a shared secret for encrypting messages? One way is called the Elliptic Curve DiffieHellman exchange (ECDH), which is a simple method for doing just this.
ECDH is used in many places, including the Lightning Network during channel negotiation [3].
Here's how it works. Alice and Bob want to communicate securely. A simple way to do this is to use each other's public keys and calculate $$ \begin{align} S_a &= k_a P_b \tag{Alice} \\ S_b &= k_b P_a \tag{Bob} \\ \implies S_a = k_a k_b G &\equiv S_b = k_b k_a G \end{align} $$
extern crate libsecp256k1_rs as secp256k1; use secp256k1::{ SecretKey, PublicKey, thread_rng, Message }; #[allow(non_snake_case)] fn main() { let mut rng = thread_rng(); // Alice creates a publicprivate keypair let k_a = SecretKey::random(&mut rng); let P_a = PublicKey::from_secret_key(&k_a); // Bob creates a publicprivate keypair let k_b = SecretKey::random(&mut rng); let P_b = PublicKey::from_secret_key(&k_b); // They each calculate the shared secret based only on the other party's public information // Alice's version: let S_a = k_a * P_b; // Bob's version: let S_b = k_b * P_a; assert_eq!(S_a, S_b, "The shared secret is not the same!"); println!("The shared secret is identical") }
For security reasons, the private keys are usually chosen at random for each session (you'll see the term ephemeral keys being used), but then we have the problem of not being sure the other party is who they say they are (perhaps due to a man-in-the-middle attack [4]).
Various additional authentication steps can be employed to resolve this problem, which we won't get into here.
Schnorr Signatures
If you follow the crypto news, you'll know that the new hotness in Bitcoin is Schnorr signatures.
But in fact, they're old news! The Schnorr signature is considered the simplest digital signature scheme to be provably secure in a random oracle model. It is efficient and generates short signatures. It was covered by U.S. Patent 4,995,082, which expired in February 2008 [7].
So why all the Fuss?
What makes Schnorr signatures so interesting, and potentially dangerous, is their simplicity. Schnorr signatures are linear, so you have some nice properties.
Elliptic curve operations are linear: if you have two scalars $x, y$ with corresponding points $X, Y$, the following holds: $$ (x + y)G = xG + yG = X + Y $$ Schnorr signatures are of the form \( s = r + e.k \). This construction is linear too, so it fits nicely with the linearity of elliptic curve math.
You saw this property in a previous section, when we were verifying the signature. The linearity of Schnorr signatures makes them very attractive for, among other things:

- signature aggregation;
- atomic swaps;
- "scriptless" scripts.
Naïve Signature Aggregation
Let's see how the linearity property of Schnorr signatures can be used to construct a two-of-two multisignature.
Alice and Bob want to cosign something (a Tari transaction, say) without having to trust each other; i.e. they need to be able to prove ownership of their respective keys, and the aggregate signature is only valid if both Alice and Bob provide their part of the signature.
Assume private keys are denoted \( k_i \) and public keys \( P_i \). If we ask Alice and Bob to each supply a nonce, we can try: $$ \begin{align} P_{agg} &= P_a + P_b \\ e &= H(R_a \| R_b \| P_a \| P_b \| m) \\ s_{agg} &= r_a + r_b + (k_a + k_b)e \\ &= (r_a + k_ae) + (r_b + k_be) \\ &= s_a + s_b \end{align} $$ So it looks like Alice and Bob can supply their own $R$, and anyone can construct the two-of-two signature from the sum of the $R$ values and public keys. This does work:
```rust
extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{SecretKey, PublicKey, thread_rng, Message};
use secp256k1::schnorr::{Schnorr, Challenge};

#[allow(non_snake_case)]
fn main() {
    // Alice generates some keys
    let (ka, Pa, ra, Ra) = get_keyset();
    // Bob generates some keys
    let (kb, Pb, rb, Rb) = get_keyset();
    let m = Message::hash(b"a multisig transaction").unwrap();
    // The challenge uses both nonce public keys and public keys
    // e = H(Ra || Rb || Pa || Pb || H(m))
    let e = Challenge::new(&[&Ra, &Rb, &Pa, &Pb, &m]).as_scalar().unwrap();
    // Alice calculates her signature
    let sa = ra + ka * e;
    // Bob calculates his signature
    let sb = rb + kb * e;
    // Calculate the aggregate signature
    let s_agg = sa + sb;
    // S = s_agg.G
    let S = PublicKey::from_secret_key(&s_agg);
    // This should equal Ra + Rb + e(Pa + Pb)
    assert_eq!(S, Ra + Rb + e*(Pa + Pb));
    println!("The aggregate signature is valid!")
}

#[allow(non_snake_case)]
fn get_keyset() -> (SecretKey, PublicKey, SecretKey, PublicKey) {
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    let P = PublicKey::from_secret_key(&k);
    let r = SecretKey::random(&mut rng);
    let R = PublicKey::from_secret_key(&r);
    (k, P, r, R)
}
```
But this scheme is not secure!
Key Cancellation Attack
Let's take the previous scenario again, but this time, Bob knows Alice's public key and nonce ahead of time, by waiting until she reveals them.
Now Bob lies and says that his public key is \( P_b' = P_b - P_a \) and public nonce is \( R_b' = R_b - R_a \).
Note that Bob doesn't know the private keys for these faked values, but that doesn't matter.
Everyone assumes that \( s_{agg}G = R_a + R_b' + e(P_a + P_b') \), as per the aggregation scheme.
But Bob can create this signature himself: $$ \begin{align} s_{agg}G &= R_a + R_b' + e(P_a + P_b') \\ &= R_a + (R_b - R_a) + e(P_a + P_b - P_a) \\ &= R_b + eP_b \\ &= r_bG + ek_bG \\ \therefore s_{agg} &= r_b + ek_b = s_b \end{align} $$
```rust
extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{SecretKey, PublicKey, thread_rng, Message};
use secp256k1::schnorr::{Schnorr, Challenge};

#[allow(non_snake_case)]
fn main() {
    // Alice generates some keys
    let (ka, Pa, ra, Ra) = get_keyset();
    // Bob generates some keys as before
    let (kb, Pb, rb, Rb) = get_keyset();
    // ..and then publishes his forged keys
    let Pf = Pb - Pa;
    let Rf = Rb - Ra;
    let m = Message::hash(b"a multisig transaction").unwrap();
    // The challenge uses both nonce public keys and public keys
    // e = H(Ra || Rb' || Pa || Pb' || H(m))
    let e = Challenge::new(&[&Ra, &Rf, &Pa, &Pf, &m]).as_scalar().unwrap();
    // Bob creates a forged signature
    let s_f = rb + e * kb;
    // Check if it's valid
    let sG = Ra + Rf + e*(Pa + Pf);
    assert_eq!(sG, PublicKey::from_secret_key(&s_f));
    println!("Bob successfully forged the aggregate signature!")
}

#[allow(non_snake_case)]
fn get_keyset() -> (SecretKey, PublicKey, SecretKey, PublicKey) {
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    let P = PublicKey::from_secret_key(&k);
    let r = SecretKey::random(&mut rng);
    let R = PublicKey::from_secret_key(&r);
    (k, P, r, R)
}
```
Better Approaches to Aggregation
In the Key Cancellation Attack, Bob didn't know the private keys for his published $R$ and $P$ values. We could defeat Bob by asking him to sign a message proving that he does know the private keys.
This works, but it requires another round of messaging between parties, which is not conducive to a great user experience.
A better approach would be one that incorporates one or more of the following features:
- It must be provably secure in the plain public-key model, without having to prove knowledge of secret keys, as we might have asked Bob to do in the naïve approach.
- It should satisfy the normal Schnorr equation, i.e. the resulting signature can be verified with an expression of the form \( R + e X \).
- It allows for Interactive Aggregate Signatures (IAS), where the signers are required to cooperate.
- It allows for Non-interactive Aggregate Signatures (NAS), where the aggregation can be done by anyone.
- It allows each signer to sign the same message, $m$.
- It allows each signer to sign their own message, \( m_i \).
MuSig
MuSig is a recently proposed ([8],[9]) simple signature aggregation scheme that satisfies all of the properties in the preceding section.
MuSig Demonstration
We'll demonstrate the interactive MuSig scheme here, where each signatory signs the same message. The scheme works as follows:
- Each signer has a public-private key pair, as before.
- Each signer shares a commitment to their public nonce (we'll skip this step in this demonstration). This step is necessary to prevent certain kinds of rogue key attacks [10].
- Each signer publishes the public key of their nonce, \( R_i \).
- Everyone calculates the same "shared public key", $X$, as follows:
$$ \begin{align} \ell &= H(X_1 \| \dots \| X_n) \\ a_i &= H(\ell \| X_i) \\ X &= \sum a_i X_i \\ \end{align} $$
Note that for the ordering of the public keys in the preceding hash, some deterministic convention should be used, such as the lexicographical order of the serialized keys.
- Everyone also calculates the shared nonce, \( R = \sum R_i \).
- The challenge, $e$, is \( H(R \| X \| m) \).
- Each signer provides their contribution to the signature as:
$$ s_i = r_i + k_i a_i e $$
Notice that the only departure here from a standard Schnorr signature is the inclusion of the factor \( a_i \).
The aggregate signature is the usual summation, \( s = \sum s_i \).
Verification is done by confirming, as usual, that: $$ sG \equiv R + e X $$ Proof: $$ \begin{align} sG &= \sum s_i G \\ &= \sum (r_i + k_i a_i e)G \\ &= \sum (r_iG + k_iG a_i e) \\ &= \sum (R_i + X_i a_i e) \\ &= \sum R_i + e \sum a_i X_i \\ &= R + e X \\ \blacksquare \end{align} $$ Let's demonstrate this using a three-of-three multisig:
```rust
extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{ SecretKey, PublicKey, thread_rng, Message };
use secp256k1::schnorr::{ Challenge };

#[allow(non_snake_case)]
fn main() {
    let (k1, X1, r1, R1) = get_keys();
    let (k2, X2, r2, R2) = get_keys();
    let (k3, X3, r3, R3) = get_keys();
    // I'm setting the order here. In general, they'll be sorted
    let l = Challenge::new(&[&X1, &X2, &X3]);
    // a_i = H(l || X_i)
    let a1 = Challenge::new(&[ &l, &X1 ]).as_scalar().unwrap();
    let a2 = Challenge::new(&[ &l, &X2 ]).as_scalar().unwrap();
    let a3 = Challenge::new(&[ &l, &X3 ]).as_scalar().unwrap();
    // X = sum( a_i X_i )
    let X = a1 * X1 + a2 * X2 + a3 * X3;
    let m = Message::hash(b"SomeSharedMultiSigTx").unwrap();
    // Calculate the shared nonce
    let R = R1 + R2 + R3;
    // e = H(R || X || m)
    let e = Challenge::new(&[&R, &X, &m]).as_scalar().unwrap();
    // Signatures
    let s1 = r1 + k1 * a1 * e;
    let s2 = r2 + k2 * a2 * e;
    let s3 = r3 + k3 * a3 * e;
    let s = s1 + s2 + s3;
    // Verify
    let sg = PublicKey::from_secret_key(&s);
    let check = R + e * X;
    assert_eq!(sg, check, "The signature is INVALID");
    println!("The signature is correct!")
}

#[allow(non_snake_case)]
fn get_keys() -> (SecretKey, PublicKey, SecretKey, PublicKey) {
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    let P = PublicKey::from_secret_key(&k);
    let r = SecretKey::random(&mut rng);
    let R = PublicKey::from_secret_key(&r);
    (k, P, r, R)
}
```
Security Demonstration
As a final demonstration, let's show how MuSig defeats the cancellation attack from the naïve signature scheme.
Using the same idea as in the Key Cancellation Attack section, Bob has provided fake values for his
nonce and public keys:
$$
\begin{align}
R_f &= R_b - R_a \\
X_f &= X_b - X_a \\
\end{align}
$$
This leads to both Alice and Bob calculating the following "shared" values:
$$
\begin{align}
\ell &= H(X_a \| X_f) \\
a_a &= H(\ell \| X_a) \\
a_f &= H(\ell \| X_f) \\
X &= a_a X_a + a_f X_f \\
R &= R_a + R_f (= R_b) \\
e &= H(R \| X \| m)
\end{align}
$$
Bob then tries to construct a unilateral signature following MuSig:
$$
s_b = r_b + k_s e
$$
Let's assume for now that \( k_s \) doesn't need to be Bob's private key, but that he can derive it using information
he knows. For this to be a valid signature, it must verify to \( R + eX \). So therefore:
$$
\begin{align}
s_b G &= R + eX \\
(r_b + k_s e)G &= R_b + e(a_a X_a + a_f X_f) & \text{The first term looks good so far}\\
&= R_b + e(a_a X_a + a_f X_b - a_f X_a) \\
&= (r_b + e a_a k_a + e a_f k_b - e a_f k_a)G & \text{The r terms cancel as before} \\
k_s e &= e a_a k_a + e a_f k_b - e a_f k_a & \text{But nothing else is going away}\\
k_s &= a_a k_a + a_f k_b - a_f k_a \\
\end{align}
$$
In the previous attack, Bob had all the information he needed on the right-hand side of the analogous calculation. In MuSig,
Bob must somehow know Alice's private key and the faked private key (the terms don't cancel anymore) in order to create a unilateral signature,
and so his cancellation attack is defeated.
Replay Attacks!
It's critical that a new nonce be chosen for every signing ceremony. The best way to do this is to make use of a cryptographically secure (pseudo)random number generator (CSPRNG).
But even if this is the case, let's say an attacker can trick us into signing a new message by "rewinding" the signing ceremony to the point where partial signatures are generated. At this point, the attacker provides a different message, \( e' = H(...m') \) to sign. Not suspecting any foul play, each party calculates their partial signature:
$$ s'_i = r_i + a_i k_i e' $$ However, the attacker still has access to the first set of signatures: \( s_i = r_i + a_i k_i e \). He now simply subtracts them: $$ \begin{align} s'_i - s_i &= (r_i + a_i k_i e') - (r_i + a_i k_i e) \\ &= a_i k_i (e' - e) \\ \therefore k_i &= \frac{s'_i - s_i}{a_i(e' - e)} \end{align} $$ Everything on the right-hand side of the final equation is known by the attacker, so he can trivially extract everybody's private key. It's difficult to protect against this kind of attack. One way is to make it difficult (or impossible) to stop and restart signing ceremonies. If a multisig ceremony gets interrupted, you need to start again from step one. This is fairly unergonomic, but until a more robust solution comes along, it may be the best we have!
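The extraction in the final equation can be checked numerically. The sketch below is a toy with tiny, insecure parameters (all constants are illustrative assumptions; in reality the group order is near \(2^{256}\) and \( a_i, e \) come from hashes), working purely with scalars modulo a small prime $q$:

```rust
// Toy demonstration: reusing a nonce across two different challenges
// leaks the private key. NOT secure parameters; illustration only.
const Q: u64 = 101; // toy group order (prime)

// Modular exponentiation by squaring
fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

// Modular inverse via Fermat's little theorem (Q is prime)
fn inv(x: u64) -> u64 { pow_mod(x, Q - 2, Q) }

fn main() {
    let (k, r, a) = (42u64, 77u64, 9u64); // private key, REUSED nonce, MuSig factor a_i
    let (e1, e2) = (13u64, 59u64);        // two different challenges, e and e'
    let s1 = (r + a * k * e1) % Q;        // s  = r + a.k.e
    let s2 = (r + a * k * e2) % Q;        // s' = r + a.k.e'
    // The attacker knows s, s', a_i, e and e', and recovers:
    // k = (s' - s) / (a.(e' - e)) mod q
    let num = (s2 + Q - s1) % Q;
    let den = a * ((e2 + Q - e1) % Q) % Q;
    let recovered = num * inv(den) % Q;
    assert_eq!(recovered, k);
    println!("recovered private key: {}", recovered);
}
```

The nonce term $r$ cancels in the subtraction, which is exactly why a fresh nonce per signing ceremony is non-negotiable.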
References
[1] Wikipedia: "RSA (Cryptosystem)" [online]. Available: https://en.wikipedia.org/wiki/RSA_(cryptosystem). Date accessed: 2018‑10‑11.
[2] Wikipedia: "Elliptic Curve Digital Signature Algorithm" [online]. Available: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm. Date accessed: 2018‑10‑11.
[3] GitHub: "BOLT #8: Encrypted and Authenticated Transport, Lightning RFC" [online].
Available: https://github.com/lightningnetwork/lightning-rfc/blob/master/08-transport.md. Date accessed: 2018‑10‑11.
[4] Wikipedia: "Man-in-the-middle Attack" [online]. Available: https://en.wikipedia.org/wiki/Man-in-the-middle_attack. Date accessed: 2018‑10‑11.
[5] StackOverflow: "How does a Cryptographically Secure Random Number Generator Work?" [online]. Available: https://stackoverflow.com/questions/2449594/how-does-a-cryptographically-secure-random-number-generator-work. Date accessed: 2018‑10‑11.
[6] Wikipedia: "Cryptographically Secure Pseudorandom Number Generator" [online]. Available: https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator. Date accessed: 2018‑10‑11.
[7] Wikipedia: "Schnorr Signature" [online]. Available: https://en.wikipedia.org/wiki/Schnorr_signature. Date accessed: 2018‑09‑19.
[8] Blockstream: "Key Aggregation for Schnorr Signatures" [online]. Available: https://blockstream.com/2018/01/23/musig-key-aggregation-schnorr-signatures.html. Date accessed: 2018‑09‑19.
[9] G. Maxwell, A. Poelstra, Y. Seurin and P. Wuille, "Simple Schnorr Multisignatures with Applications to Bitcoin" [online]. Available: https://eprint.iacr.org/2018/068.pdf. Date accessed: 2018‑09‑19.
[10] M. Drijvers, K. Edalatnejad, B. Ford, E. Kiltz, J. Loss, G. Neven and I. Stepanovs, "On the Security of Two-round Multi-signatures", Cryptology ePrint Archive, Report 2018/417 [online]. Available: https://eprint.iacr.org/2018/417.pdf. Date accessed: 2019‑02‑21.
Contributors
- https://github.com/CjS77
- https://github.com/SWvHeerden
- https://github.com/hansieodendaal
- https://github.com/neonknight64
- https://github.com/anselld
Introduction to Scriptless Scripts
- Definition of Scriptless Scripts
- Benefits of Scriptless Scripts
- List of Scriptless Scripts
- Role of Schnorr Signatures
- Schnorr Multisignatures
- Adaptor Signatures
- Simultaneous Scriptless Scripts
- Atomic (Cross-chain Swaps) Example with Adaptor Signatures
- Zero Knowledge Contingent Payments
- Mimblewimble's Core Scriptless Script
- References
- Contributors
Definition of Scriptless Scripts
Scriptless Scripts are a means to execute smart contracts offchain, through the use of Schnorr signatures [1].
The concept of Scriptless Scripts was born from Mimblewimble, a blockchain design that, with the exception of kernels and their signatures, does not store permanent data. Fundamental properties of Mimblewimble include both privacy and scaling, both of which require the implementation of Scriptless Scripts [2].
A brief introduction is also given in Scriptless Scripts, Layer 2 Scaling Survey.
Benefits of Scriptless Scripts
The benefits of Scriptless Scripts are functionality, privacy and efficiency.
Functionality
With regard to functionality, Scriptless Scripts are said to increase the range and complexity of smart contracts.
Currently, as within Bitcoin Script, limitations stem from the number of OP_CODES that have been enabled by the network. Scriptless Scripts move the specification and execution of smart contracts from the network to a discussion that only involves the participants of the smart contract.
Privacy
With regard to privacy, moving the specification and execution of smart contracts from on-chain to off-chain increases privacy. When on-chain, many details of the smart contract are shared with the entire network, including the number and addresses of participants, and the amounts transferred. By moving smart contracts off-chain, the network only knows that the participants agree that the terms of their contract have been satisfied and that the transaction in question is valid.
Efficiency
With regard to efficiency, Scriptless Scripts minimize the amount of data that requires verification and storage on-chain. By moving smart contracts off-chain, there are fewer overheads for full nodes and lower transaction fees for users [1].
List of Scriptless Scripts
In this report, various forms of scripts will be covered, including [3]:
 Simultaneous Scriptless Scripts
 Adaptor Signatures
 Zero Knowledge Contingent Payments
Role of Schnorr Signatures
To begin with, the fundamentals of Schnorr signatures must be defined. The signer has a private key $x$ and random private nonce $r$. $G$ is the generator of a discrete log hard group, $R=rG$ is the public nonce and $P=xG$ is the public key [4].
The signature, $s$, for message $m$, can then be computed as a simple linear equation
$$ s = r + e \cdot x $$
where
$$ e = H(P \| R \| m) \text{, } P = xG $$
The position chosen on this line is the hash of all the data to which the digital signature must commit. The verification equation multiplies each term of the equation by $G$ and relies on the cryptographic (discrete log) assumption that $G$ can be multiplied in but not divided out, thus preventing recovery of the private values.
$$ sG = rG + e \cdot xG $$
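As a concrete illustration, the signing and verification equations above can be sketched in Python. The following is a minimal sketch, assuming a toy multiplicative group in place of the elliptic curve; the group parameters, the hash encoding and all values are illustrative assumptions, far too small for real use.

```python
import hashlib

# Toy discrete-log group standing in for the elliptic curve: prime p = 2039,
# subgroup of prime order q = 1019, generator g = 4 (illustrative only).
p, q, g = 2039, 1019, 4

def H(*parts):
    """Hash the concatenated inputs to a scalar modulo the group order."""
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, r, m):
    """s = r + e*x with e = H(P, R, m); returns the public nonce R and s."""
    P, R = pow(g, x, p), pow(g, r, p)
    e = H(P, R, m)
    return R, (r + e * x) % q

def verify(P, m, R, s):
    """Check g^s == R * P^e, the multiplicative form of sG = rG + e*xG."""
    e = H(P, R, m)
    return pow(g, s, p) == (R * pow(P, e, p)) % p
```

Because $s$ is linear in $r$ and $x$, verification reduces to a single group equation, which is exactly the property the later constructions exploit.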
Elliptic Curve Digital Signature Algorithm (ECDSA) signatures (used in Bitcoin) are not linear in $x$ and $r$, and are thus less useful [2].
Schnorr Multisignatures
A multisignature (multisig) has multiple participants that produce a signature. Every participant might produce a separate signature and concatenate them, forming a multisig.
With Schnorr Signatures, one can have a single public key, which is the sum of many different people's public keys. The resulting key is one against which signatures will be verifiable [5].
The formulation of a multisig involves taking the sum of all components; thus all nonces and $s$ values are summed, resulting in the multisig [4]: $$ s=\sum_i s_i $$ It can therefore be seen that these signatures are essentially Scriptless Scripts. The independent public keys of several participants are joined to form a single key and signature, which, when published, do not divulge the number of participants involved or their original public keys.
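The summing of keys, nonces and partial signatures can be sketched in the same toy group used above (all parameters illustrative; note that this naive sum is exactly the construction that rogue-key attacks target, as discussed later in the MuSig section):

```python
import hashlib

p, q, g = 2039, 1019, 4  # toy group standing in for the curve; illustrative only

def H(*parts):
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def multisig(keys, nonces, m):
    """Naive Schnorr multisig: sum the keys, nonces and partial signatures."""
    P = 1
    for x in keys:
        P = (P * pow(g, x, p)) % p       # aggregate public key
    R = 1
    for r in nonces:
        R = (R * pow(g, r, p)) % p       # aggregate public nonce
    e = H(P, R, m)
    s = sum(r + e * x for x, r in zip(keys, nonces)) % q  # s = sum(s_i)
    return P, R, s

def verify(P, m, R, s):
    e = H(P, R, m)
    return pow(g, s, p) == (R * pow(P, e, p)) % p
```

The verifier sees only one key, one nonce and one scalar, with no trace of how many signers contributed.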
Adaptor Signatures
This multisig protocol can be modified to produce an adaptor signature, which serves as the building block for all Scriptless Script functions [5].
Instead of functioning as a fully valid signature on a message with a key, an adaptor signature is a promise that a signature, once agreed upon and published, will reveal a secret.
This concept is similar to that of atomic swaps, but no scripts are implemented. Since this is elliptic curve cryptography, the only operation available is scalar multiplication of elliptic curve points. Fortunately, similar to a hash function, elliptic curve point multiplication is one-way, so an elliptic curve point ($T$) can simply be shared, and the secret will be its corresponding private key.
If two parties are considered, then rather than providing their nonce $R$ in the multisig protocol, a blinding factor, taken as an elliptic curve point $T$, is generated and sent in addition to $R$ (i.e. $R+T$). It can therefore be seen that $R$ is not blinded; it has instead been offset by $T$, whose discrete log is the secret.
Here, the Schnorr multisig construction is modified such that the first party generates $$ T=tG, R=rG $$ where $t$ is the shared secret, $G$ is the generator of the discrete log hard group and $r$ is the random nonce.
Using this information, the second party generates
$$
H(P \| R+T \| m) \cdot x
$$
where the coins to be swapped are contained within message $m$. The first party can now calculate the complete
signature $s$ such that
$$
s = r + t + H(P \| R+T \| m) \cdot x
$$
The first party then calculates and publishes the adaptor signature $s'$ to the second party (and anyone else listening)
$$
s' = s - t
$$
The second party can verify the adaptor signature $s'$ by asserting $s'G$
$$
s'G \overset{?}{=} R + H(P \| R+T \| m) \cdot P
$$
However, this is not a valid signature, as the hashed nonce point is $R+T$ and not $R$.
The second party cannot retrieve a valid signature from this, as computing $s = s' + t$ would require solving the ECDLP to recover $t$, which is infeasible.
After the first party broadcasts $s$ to claim the coins within message $m$, the second party can calculate the secret $t$ from $$ t = s - s' $$ The above is very general. However, by also attaching auxiliary proofs, an adaptor signature can be derived that translates correct movement of the auxiliary protocol into a valid signature.
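The full adaptor flow above can be traced end to end in the toy group (point addition $R+T$ becomes multiplication in this multiplicative stand-in; all parameters and values are illustrative assumptions):

```python
import hashlib

# Toy multiplicative group standing in for the elliptic curve; the parameters
# and example values below are illustrative, far too small for real use.
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x, r, t = 321, 654, 111          # private key, nonce and shared secret
P, R, T = pow(g, x, p), pow(g, r, p), pow(g, t, p)
m = "coins to be swapped"

# The challenge commits to the offset nonce point R+T
# (rendered as R*T in this multiplicative toy group).
e = H(P, (R * T) % p, m)

s = (r + t + e * x) % q          # complete signature
s_adaptor = (s - t) % q          # published adaptor signature s' = s - t
```

The adaptor signature verifies against the offset nonce ($s'G = R + eP$), yet is not a valid signature for the hashed nonce point $R+T$; once $s$ is broadcast, anyone holding $s'$ learns $t = s - s'$.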
Simultaneous Scriptless Scripts
Preimages
The execution of separate transactions in an atomic fashion is achieved through preimages. If two transactions require the preimage to the same hash, once one is executed, the preimage is exposed so that the other one can be as well. Atomic swaps and Lightning channels use this construction [4].
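A minimal sketch of this shared-preimage construction (the hash function choice and values are illustrative assumptions):

```python
import hashlib

def hashlock(digest_hex):
    """Spend condition: funds move only when the preimage of the hash is shown."""
    def can_spend(preimage: bytes) -> bool:
        return hashlib.sha256(preimage).hexdigest() == digest_hex
    return can_spend

# Two transactions on two chains are locked to the same hash, so executing
# one exposes the preimage that unlocks the other.
secret = b"shared preimage"
digest = hashlib.sha256(secret).hexdigest()
spend_chain_a = hashlock(digest)
spend_chain_b = hashlock(digest)
```

Executing `spend_chain_a` publishes `secret`, after which `spend_chain_b` can be satisfied with the same value, which is the atomicity the text describes.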
Difference of Two Schnorr Signatures
If we consider the difference of two Schnorr signatures: $$ d = s - s' = (k - k') + (e \cdot x - e' \cdot x') $$ The above equation can be verified in a similar manner to a single Schnorr signature, by multiplying each term by $G$ and confirming algebraic correctness: $$ dG = kG - k'G + e \cdot xG - e' \cdot x'G $$ It must be noted that it is the difference $d$ being verified, not the Schnorr signatures themselves. $d$ functions as the translating key between two separate, independent Schnorr signatures. Given $d$ and either $s$ or $s'$, the other can be computed, so possession of $d$ makes these two signatures atomic. This scheme does not link the two signatures or compromise their security.
For an atomic transaction, during the setup stage, someone provides the opposing party with the value $d$, and asserts it as the correct value. Once the transaction is signed, it can be adjusted to complete the other transaction. Atomicity is achieved, but can only be used by the person who possesses this $d$ value. Generally, the party that stands to lose money requires the $d$ value.
The $d$ value provides an interesting property with regard to atomicity. It is shared before signatures are public, which in turn allows the two transactions to be atomic once the transactions are published. By taking the difference of any two Schnorr signatures, one is able to construct transcripts, such as an atomic swap multisig contract.
This is a critical feature for Mimblewimble, which was previously thought to be unable to support atomic swaps or Lightning channels [4].
Atomic (Crosschain Swaps) Example with Adaptor Signatures
Alice has a certain number of coins on a particular blockchain; Bob also has a certain number of coins on another blockchain. Alice and Bob want to engage in an atomic exchange. However, neither blockchain is aware of the other, nor are they able to verify each other's transactions.
The classical way of achieving this involves the use of the blockchain's script system to put a hash preimage challenge and then reveal the same preimage on both sides. Once Alice knows the preimage, she reveals it to take her coins. Bob then copies it off one chain to the other chain to take his coins.
Using adaptor signatures, the same result can be achieved through simpler means. In this case, both Alice and Bob put up their coins on two-of-two outputs on each blockchain. They sign the multisig protocols in parallel, with Bob giving Alice the adaptor signatures for each side, using the same value $T$. This means that for Bob to take his coins, he needs to reveal $t$; and likewise for Alice to take hers. Bob then completes one of the signatures and publishes it, taking his coins and thereby revealing $t$. Alice computes $t$ from the final signature, visible on the blockchain, and uses it to complete the other signature, giving Alice her coins.
Thus it can be seen that atomicity is achieved. One is still able to exchange information, but now there are no explicit hashes or preimages on the blockchain. No script properties are necessary and privacy is achieved [4].
Zero Knowledge Contingent Payments
Zero Knowledge Contingent Payments (ZKCP) is a transaction protocol. This protocol allows a buyer to purchase information from a seller using coins in a manner that is private, scalable, secure and, importantly, in a trustless environment. The expected information is transferred only when payment is made. The buyer and seller do not need to trust each other or depend on arbitration by a third party [6].
Mimblewimble's Core Scriptless Script
As previously stated, Mimblewimble is a blockchain design. Built similarly to Bitcoin, every transaction has inputs and outputs. Each input and output has a confidential transaction commitment. Confidential commitments have an interesting property where, in a valid balanced transaction, one can subtract the input from the output commitments, ensuring that all of the Pedersen commitment values balance out. Taking the difference of these inputs and outputs results in the multisig key of the owners of every output and every input in the transaction. This is referred to as the kernel.
Mimblewimble blocks will only have a list of new inputs, new outputs and signatures that are created from the aforementioned excess value [7].
Since the values are homomorphically encrypted, nodes can verify that no coins are being created or destroyed.
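The balancing property can be sketched with toy Pedersen-style commitments (all parameters are illustrative assumptions; in a real system the second generator's discrete log with respect to the first must be unknown):

```python
# Toy Pedersen-style commitments C = g^r * h^v in a multiplicative group.
p, q = 2039, 1019
g, h = 4, 16  # illustrative generators only

def commit(v, r):
    """Commit to value v with blinding factor r."""
    return (pow(g, r, p) * pow(h, v, p)) % p

# A balanced transaction: an input of value 10 splits into outputs of 7 and 3.
C_in = commit(10, 555)
C_out1, C_out2 = commit(7, 200), commit(3, 300)

# Subtracting inputs from outputs cancels the values homomorphically,
# leaving only the blinding excess (200 + 300 - 555) as a public key.
excess = (C_out1 * C_out2 * pow(C_in, -1, p)) % p
```

Because the value components cancel exactly when no coins are created or destroyed, the excess is a commitment to zero value, and a signature with it proves the transaction balances.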
References
[1] "Crypto Innovation Spotlight 2: Scriptless Scripts" [online]. Available: https://medium.com/blockchain-capital/crypto-innovation-spotlight-2-scriptless-scripts-306c4eb6b3a8. Date accessed: 2018‑02‑27.
[2] A. Poelstra, "Mimblewimble and Scriptless Scripts". Presented at Real World Crypto, 2018 [online]. Available: https://www.youtube.com/watch?v=ovCBT1gyk9c&t=0s. Date accessed: 2018‑01‑11.
[3] A. Poelstra, "Scriptless Scripts". Presented at Layer 2 Summit Hosted by MIT DCI and Fidelity Labs on 18 May 2018 [online]. Available: https://www.youtube.com/watch?v=jzoS0tPUAiQ&t=3h36m. Date Accessed: 2018‑05‑25.
[4] A. Poelstra, "Mimblewimble and Scriptless Scripts". Presented at MIT Bitcoin Expo 2017 Day 1 [online]. Available: https://www.youtube.com/watch?v=0mVOq1jaR1U&feature=youtu.be&t=39m20. Date accessed: 2017‑03‑04.
[5] "Flipping the Scriptless Script on Schnorr" [online]. Available: https://joinmarket.me/blog/blog/flipping-the-scriptless-script-on-schnorr/. Date accessed: November 2017.
[6] "The First Successful Zero-Knowledge Contingent Payment" [online]. Available: https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/. Date accessed: 2016‑02‑26.
[7] "What is Mimblewimble?" [Online.] Available: https://www.cryptocompare.com/coins/guides/what-is-mimblewimble/. Date accessed: 2018‑06‑30.
Contributors
The MuSig Schnorr Signature Scheme
 Introduction
 Overview of Multisignatures
 Formation of MuSig
 Conclusions, Observations and Recommendations
 References
 Contributors
Introduction
This report investigates the MuSig Schnorr multisignature scheme, which makes use of key aggregation and is provably secure in the plain public-key model.
Signature aggregation involves mathematically combining several signatures into a single signature, without having to prove Knowledge of Secret Keys (KOSK). This is known as the plain public-key model, where the only requirement is that each potential signer has a public key. The KOSK scheme requires that users prove knowledge (or possession) of the secret key during public key registration with a certification authority. It is one way to generically prevent rogue-key attacks.
Multisignatures are a form of technology used to add multiple participants to cryptocurrency transactions. A traditional multisignature protocol allows a group of signers to produce a joint multisignature on a common message.
Schnorr Signatures and their Attack Vectors
Schnorr signatures produce a smaller onchain size, support faster validation and have better privacy. They natively allow for combining multiple signatures into one through aggregation, and they permit more complex spending policies.
Signature aggregation also has its challenges. This includes the rogue-key attack, where a participant steals funds using a specifically constructed key. Although this is easily solved for simple multisignatures through an enrollment procedure that involves the keys signing themselves, supporting it across multiple inputs of a transaction requires plain public-key security, meaning there is no setup.
There is an additional attack, named the Russell attack after Russell O'Connor, who discovered that, for multiparty schemes, a party could claim ownership of someone else's private key and so spend the other outputs.
P. Wuille [1] addressed some of these issues and provided a solution that refines the Bellare-Neven (BN) scheme. He also discussed the performance improvements that were implemented for the scalar multiplication of the BN scheme, and how they enable batch validation on the blockchain [2].
MuSig
Multisignature protocols, introduced by [3], allow a group of signers (that individually possess their own private/ public key pair) to produce a single signature $ \sigma $ on a message $ m $. Verification of the given signature $ \sigma $ can be publicly performed, given the message and the set of public keys of all signers.
A simple way to change a standard signature scheme into a multisignature scheme is to have each signer produce a standalone signature for $ m $ with their private key, and to then concatenate all individual signatures.
The transformation of a standard signature scheme into a multisignature scheme needs to be useful and practical. The newly calculated multisignature scheme must therefore produce signatures where the size is independent of the number of signers and similar to that of the original signature scheme [4].
A traditional multisignature scheme is a combination of a signing and verification algorithm, where multiple signers (each with their own private/public key) jointly sign a single message, resulting in a combined signature. This can then be verified by anyone knowing the message and the public keys of the signers, where a trusted setup with KOSK is a requirement.
MuSig is a multisignature scheme that is novel in combining:
 support for key aggregation; and
 security in the plain publickey model.
There are two versions of MuSig that are provably secure, and which differ based on the number of communication rounds:
 Three-round MuSig, which only relies on the Discrete Logarithm (DL) assumption, on which the Elliptic Curve Digital Signature Algorithm (ECDSA) also relies.
 Two-round MuSig, which instead relies on the slightly stronger One-More Discrete Logarithm (OMDL) assumption.
Key Aggregation
The term key aggregation refers to multisignatures that look like a singlekey signature, but with respect to an aggregated public key that is a function of only the participants' public keys. Thus, verifiers do not require the knowledge of the original participants' public keys. They can just be given the aggregated key. In some use cases, this leads to better privacy and performance. Thus, MuSig is effectively a key aggregation scheme for Schnorr signatures.
To make the traditional approach more effective and without needing a trusted setup, a multisignature scheme must provide sublinear signature aggregation, along with the following properties:
 It must be provably secure in the plain publickey model.
 It must satisfy the normal Schnorr equation, whereby the resulting signature can be written as a function of a combination of the public keys.
 It must allow for Interactive Aggregate Signatures (IAS), where the signers are required to cooperate.
 It must allow for Non-interactive Aggregate Signatures (NAS), where the aggregation can be done by anyone.
 It must allow each signer to sign the same message.
 It must allow each signer to sign their own message.
This is different to a normal multisignature scheme, where one message is signed by all. MuSig potentially provides all of these properties.
Other multisignature schemes already exist that provide key aggregation for Schnorr signatures. However, they come with some limitations, such as needing to verify that participants actually have the private key corresponding to the public keys that they claim to have. Security in the plain public-key model means that no such limitations exist; all that is needed from the participants is their public keys [1].
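Key aggregation in the MuSig style can be sketched in the toy group. This is a minimal sketch of the aggregation and signing equations only, omitting the nonce-commitment rounds that the real protocol requires for security; all parameters and values are illustrative assumptions.

```python
import hashlib

p, q, g = 2039, 1019, 4  # toy group standing in for the curve; illustrative only

def H(*parts):
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def musig_sign(keys, nonces, m):
    """MuSig-style signing: aggregate key is prod X_i^{a_i} with a_i = H(L, X_i)."""
    pubs = [pow(g, x, p) for x in keys]
    L = tuple(sorted(pubs))            # encoding of the multiset of public keys
    coeffs = [H(L, X) for X in pubs]   # per-signer aggregation coefficients
    X_agg = 1
    for X, a in zip(pubs, coeffs):
        X_agg = (X_agg * pow(X, a, p)) % p
    R = 1
    for r in nonces:
        R = (R * pow(g, r, p)) % p
    e = H(X_agg, R, m)
    s = sum(r + e * a * x for x, r, a in zip(keys, nonces, coeffs)) % q
    return X_agg, R, s

def verify(X_agg, m, R, s):
    """Ordinary single-key Schnorr verification against the aggregated key."""
    e = H(X_agg, R, m)
    return pow(g, s, p) == (R * pow(X_agg, e, p)) % p
```

Because each key is weighted by a coefficient that depends on the whole key set, an attacker can no longer choose their key as a simple function of the honest keys, which is what defeats the rogue-key construction shown later.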
Overview of Multisignatures
Recently, the most obvious use case for multisignatures is with regard to Bitcoin, where they can function as a more efficient replacement of $ n $-of-$ n $ multisig scripts (where the number of signatures required to spend equals the number of signatures possible) and other policies that permit a number of possible combinations of keys.
A key aggregation scheme also lets one reduce the number of public keys per input to one, as a user can send coins to the aggregate of all involved keys rather than including them all in the script. This leads to a smaller onchain footprint, faster validation and better privacy.
Instead of creating restrictions with one signature per input, one signature can be used for the entire transaction. Traditionally, key aggregation cannot be used across multiple inputs, as the public keys are committed to by the outputs, and those can be spent independently. MuSig can be used here, with key aggregation done by the verifier.
No non-interactive aggregation scheme is known that only relies on the DL assumption, but interactive schemes are trivial to construct where a multisignature scheme has every participant sign the concatenation of all messages. Reference [4], focused on key aggregation for Schnorr signatures, showed that this is not always a desirable construction, and gave an IAS variant of BN with better properties instead [1].
Bitcoin $ m $-of-$ n $ Multisignatures
What are $ m $-of-$ n $ Transactions?
Currently, standard transactions on the Bitcoin network can be referred to as single-signature transactions, as they require only one signature, from the owner of the private key associated with the Bitcoin address. However, the Bitcoin network supports far more complicated transactions, which can require the signatures of multiple people before the funds can be transferred. These are often referred to as $ m $-of-$ n $ transactions, where $ m $ represents the number of signatures required to spend, while $ n $ represents the number of signatures possible [5].
Use Cases for $ m $-of-$ n $ Multisignatures
The use cases for $ m $-of-$ n $ multisignatures are as follows:

When $ m=1 $ and $ n>1 $, it is considered a shared wallet, which could be used for small group funds that do not require much security. This is the least secure multisignature option, because it is not multifactor. Any compromised individual would jeopardize the entire group. Examples of use cases include funds for a weekend or evening event, or a shared wallet for some kind of game. Besides being convenient to spend from, the only benefit of this setup is that all but one of the backup/password pairs could be lost and all of the funds would be recoverable.

When $ m=n $, it is considered a partner wallet, which brings with it some nervousness, as no keys can be lost. As the number of signatures required increases, the risk also increases. This type of multisignature can be considered to be a hard multifactor authentication.

When $ m<0.5n $, it is considered a buddy account, which could be used for spending from corporate group funds. The consequence for the colluding minority needs to be greater than possible benefits. It is considered less convenient than a shared wallet, but much more secure.

When $ m>0.5n $, it is referred to as a consensus account. The classic multisignature wallet is a two of three, and is a special case of a consensus account. A two of three scheme has the best characteristics for creating new Bitcoin addresses, and for secure storing and spending. One compromised signatory does not compromise the funds. A single secret key can be lost and the funds can still be recovered. If done correctly, offsite backups are created during wallet setup. The way to recover funds is known by more than one party. The balance of power with a multisignature wallet can be shifted by having one party control more keys than the other parties. If one party controls multiple keys, there is a greater risk of those keys not remaining as multiple factors.

When $ m=0.5n $, it is referred to as a split account. This is an interesting use case, as there would be three of six, where one person holds three keys and three people hold one key. In this way, one person could control their own money, but the funds could still be recoverable, even if the primary key holder were to disappear with all of their keys. As $ n $ increases, the level of trust in the secondary parties can decrease. A good use case might be a family savings account that would automatically become an inheritance account if the primary account holder were to die [5].
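The categories above can be restated as a small helper function; the names and thresholds simply mirror the classification in this section:

```python
def wallet_type(m: int, n: int) -> str:
    """Classify an m-of-n multisig policy per the categories described above."""
    if not 1 <= m <= n:
        raise ValueError("need 1 <= m <= n")
    if m == 1 and n > 1:
        return "shared wallet"       # least secure, convenient group funds
    if m == n:
        return "partner wallet"      # no key may be lost
    if m > 0.5 * n:
        return "consensus account"   # e.g. the classic two-of-three
    if m < 0.5 * n:
        return "buddy account"       # colluding minority could spend
    return "split account"           # m == n/2, e.g. three-of-six
```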
Rogue Attacks
Refer to Key cancellation attack demonstration in Introduction to Schnorr Signatures.
Rogue attacks are a significant concern when implementing multisignature schemes. Here, a subset of corrupted signers manipulate the public keys computed as functions of the public keys of honest users, allowing them to easily produce forgeries for the set of public keys (despite them not knowing the associated secret keys).
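The manipulation is easy to demonstrate against naive key aggregation (multiplying public keys together) in the toy group; all values are illustrative:

```python
# Toy demonstration of a rogue-key attack on naive key aggregation.
p, g = 2039, 4  # illustrative group parameters only

# Honest user's key pair.
x_honest = 123
X_honest = pow(g, x_honest, p)

# The attacker picks x2 and publishes the rogue key g^x2 / X_honest,
# a key whose discrete log the attacker does NOT know.
x2 = 456
X_rogue = (pow(g, x2, p) * pow(X_honest, -1, p)) % p

# Naive aggregation collapses to g^x2, which the attacker alone controls:
X_agg = (X_honest * X_rogue) % p
```

The attacker can now sign for the aggregate with `x2` alone, despite never knowing the honest user's secret key, which is why MuSig weights each key with a set-dependent coefficient.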
Initial proposals from [10], [11], [12], [13], [14], [15] and [16] were thus broken before a formal model was put forward, along with a provably secure scheme, in [17]. Unfortunately, despite being provably secure, their scheme is costly and relies on an impractical interactive key generation protocol [4].
A means of generically preventing rogue-key attacks is to make it mandatory for users to prove knowledge (or possession) of the secret key during public key registration with a certification authority [18]. This setting is known as the KOSK assumption. The pairing-based multisignature schemes described in [19] and [20] rely on the KOSK assumption in order to maintain security. However, according to [18] and [21], the complexity and expense of the scheme, and the unrealistic and burdensome assumptions on the Public-key Infrastructure (PKI), have made this solution problematic.
As it stands, [21] provides one of the most practical multisignature schemes, based on the Schnorr signature scheme, which is provably secure and does not make any assumption on the key setup. Since the only requirement of this scheme is that each potential signer has a public key, this setting is referred to as the plain public-key model.
Reference [17] solves the rogue-key attack using a sophisticated interactive key generation protocol.
In [22], the number of rounds was reduced from three to two, using a homomorphic commitment scheme. Unfortunately, this increases the signature size and the computational cost of signing and verification.
Reference [23] proposed a signature scheme that involved the "double hashing" technique, which sees the reduction of the signature size compared to [22], while using only two rounds.
However, neither of these two variants allows for key aggregation.
Multisignature schemes supporting key aggregation are easier to come by in the KOSK model. In particular, [24] proposed the CoSi scheme, which can be seen as the naive Schnorr multisignature scheme, where the cosigners are organized in a tree structure for fast signature generation.
Interactive Aggregate Signatures
In some situations, it may be useful to allow each participant to sign a different message rather than a single common one. An IAS is one where each signer has their own message $ m_{i} $ to sign, and the joint signature proves that the $ i $ th signer has signed $ m_{i} $. These schemes are considered to be more general than multisignature schemes. However, they are not as flexible as noninteractive aggregate signatures ([25], [26]) and sequential aggregate signatures [27].
According to [21], a generic way of turning any multisignature scheme into an IAS scheme, is for the signer running the multisignature protocol to use the tuple of all public keys/message pairs involved in the IAS protocol, as the message.
For BN's scheme and Schnorr multisignatures, this does not increase the number of communication rounds, as messages can be sent together with shares $ R_{i} $.
Applications of Interactive Aggregate Signatures
With regard to digital currency schemes, where all participants have the ability to validate transactions, these transactions consist of outputs (which have a verification key and amount) and inputs (which are references to outputs of earlier transactions). Each input contains a signature of a modified version of the transaction to be validated with its referenced output's key. Some outputs may require multiple signatures to be spent. Transactions spending such an output are referred to as $ m $-of-$ n $ multisignature transactions [28], and the current implementation corresponds to the trivial way of building a multisignature scheme by concatenating individual signatures. Additionally, a threshold policy can be enforced where only $ m $ valid signatures out of the $ n $ possible ones are needed to redeem the transaction. (Again, this is the most straightforward way to turn a multisignature scheme into some kind of basic threshold signature scheme.)
While several multisignature schemes could offer an improvement over the currently available method, two properties increase the possible impact:
 The availability of key aggregation removes the need for verifiers to see all the involved keys, improving bandwidth, privacy and validation cost.
 Security under the plain publickey model enables multisignatures across multiple inputs of a transaction, where the choice of signers cannot be committed to in advance. This greatly increases the number of situations in which multisignatures are beneficial.
Native Multisignature Support
An improvement is to replace the need for implementing $ n $-of-$ n $ multisignatures with a constant-size multisignature primitive such as BN. While this is an improvement in terms of size, it still needs to contain all of the signers' public keys. Key aggregation improves upon this further, as a single-key predicate can be used instead. This is smaller and has lower computational cost for verification. Predicate encryption is an encryption paradigm that gives a master secret key owner fine-grained control over access to encrypted data [29]. It also improves privacy, as the participant keys and their count remain private to the signers.
When generalizing to the $ m $-of-$ n $ scenario, several options exist. One is to forego key aggregation, and still include all potential signer keys in the predicates, while still only producing a single signature for the chosen combination of keys. Alternatively, a Merkle tree [30] where the leaves are permitted combinations of public keys (in aggregated form), can be employed. The predicate in this case would take as input an aggregated public key, a signature and a proof. Its validity would depend on the signature being valid with the provided key, and the proof establishing that the key is in fact one of the leaves of the Merkle tree, identified by its root hash. This approach is very generic, as it works for any subset of combinations of keys, and as a result has good privacy, as the exact policy is not visible from the proof.
Some key aggregation schemes that do not protect against rogue-key attacks can be used instead in the above cases, under the assumption that the sender is given a proof of knowledge/possession for the receivers' private keys. However, these schemes are difficult to prove secure, except by using very large proofs of knowledge. As those proofs of knowledge/possession do not need to be seen by verifiers, they are effectively certified by the sender's validation. However, passing them around to senders is inconvenient, and easy to get wrong. Using a scheme that is secure in the plain public-key model categorically avoids these concerns.
Another alternative is to use an algorithm whose key generation requires a trusted setup, e.g. in the KOSK model. While many of these schemes have been proven secure, they rely on mechanisms that are usually not implemented by certification authorities ([18], [19], [20]), [21]).
Cross-input Multisignatures
The previous sections explained how the number of signatures per input can generally be reduced to one. However, it is possible to go further and replace this with a single signature per transaction. Doing so requires a fundamental change in validation semantics, as the validity of separate inputs is no longer independent. As a result, the outputs can no longer be modeled as predicates. Instead, they are modeled as functions that return a Boolean (a data type with only two possible values) plus a set of zero or more public keys. Overall validity requires all returned Booleans to be True and a multisignature of the transaction with $ L $, the union of all returned keys.
With regard to Bitcoin, this can be implemented by providing an alternative to the signature checking opcode OP_CHECKSIG and related opcodes in the Script language. Instead of returning the result of an actual ECDSA verification, they always return True, but additionally add the public key with which the verification would have taken place to a transaction-wide multiset of keys. Finally, after all inputs are verified, a multisignature present in the transaction is verified against that multiset. In case the transaction spends inputs from multiple owners, they will need to collaborate to produce the multisignature, or choose to only use the original opcodes. Adding these new opcodes is possible in a backward-compatible way [4].
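This validation flow can be sketched with the opcode replaced by a function that records keys into a transaction-wide multiset; the function names, parameters and toy group are all illustrative assumptions, not Bitcoin's actual Script semantics:

```python
import hashlib

p, q, g = 2039, 1019, 4  # toy group; illustrative parameters only

def H(*parts):
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def check_input(key_multiset, pubkey):
    """Stand-in for the modified opcode: always returns True, but records
    the key in the transaction-wide multiset instead of verifying now."""
    key_multiset.append(pubkey)
    return True

def verify_transaction(input_keys, m, R, s):
    multiset = []
    # Every input 'passes' its check while accumulating its key.
    if not all(check_input(multiset, P) for P in input_keys):
        return False
    # After all inputs, one multisignature is checked against the multiset.
    P_agg = 1
    for P in multiset:
        P_agg = (P_agg * P) % p
    e = H(P_agg, R, m)
    return pow(g, s, p) == (R * pow(P_agg, e, p)) % p
```

One signature thus covers every input, at the cost of inputs no longer being independently verifiable.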
Protection against Rogue-key Attacks
In Bitcoin, when taking cross-input signatures into account, there is no published commitment to the set of signers, as each transaction input can independently spend an output that requires authorization from distinct participants. This functionality was not restricted, as doing so would interfere with fungibility improvements such as CoinJoin [31]. Due to the lack of certification, security against rogue-key attacks is of great importance.
If it is assumed that transactions use a single multisignature that is vulnerable to rogue-key attacks, an attacker could identify an arbitrary number of outputs they want to steal, with public keys $ X_{1},...,X_{n-t} $, and then use the rogue-key attack to determine $ X_{n-t+1},...,X_{n} $ such that they can sign for the aggregated key $ \tilde{X} $. They would then send a small amount of their own money to outputs with predicates corresponding to the keys $ X_{n-t+1},...,X_{n} $. Finally, they can create a transaction that spends all of the victims' coins, together with the ones they have just created, by forging a multisignature for the whole transaction.
It can be seen that in the case of multisignatures across inputs, theft can occur through the ability to forge a signature over a set of keys that includes at least one key which is not controlled by the attacker. According to the plain public-key model, this is considered a win for the attacker. This is in contrast to the single-input multisignature case, where theft is only possible by forging a signature for the exact (aggregated) keys contained in an existing output. As a result, it is no longer possible to rely on proofs of knowledge/possession that are private to the signers.
Formation of MuSig
Notation Used
This section gives the general notation of mathematical expressions when specifically referenced. This information is important prerequisite knowledge for the remainder of the report.
 Let $ p $ be a large prime number.
 Let $ \mathbb{G} $ denote a cyclic group of the prime order $ p $.
 Let $ \mathbb Z_p $ denote the ring of integers modulo $ p $.
 Let a generator of $ \mathbb{G} $ be denoted by $ g $. Thus, there exists a number $ g \in\mathbb{G} $ such that $ \mathbb{G} = \lbrace 1, \mspace{3mu}g, \mspace{3mu}g^2,\mspace{3mu}g^3, ..., \mspace{3mu}g^{p-1} \rbrace $.
 Let $ \textrm{H} $ denote the hash function.
 Let $ S= \lbrace (X_{1}, m_{1}),..., (X_{n}, m_{n}) \rbrace $ be the multiset of all public key/message pairs of all participants, where $ X_{1}=g^{x_{1}} $.
 Let $ \langle S \rangle $ denote a lexicographical encoding of the multiset of public key/message pairs in $ S $.
 Let $ L= \lbrace X_{1}=g^{x_{1}},...,X_{n}=g^{x_{n}} \rbrace $ be the multiset of all public keys.
 Let $ \langle L \rangle $ denote a lexicographical encoding of the multiset of public keys $ L= \lbrace X_{1},...,X_{n} \rbrace $.
 Let $ \textrm{H}_{com} $ denote the hash function in the commitment phase.
 Let $ \textrm{H}_{agg} $ denote the hash function used to compute the aggregated key.
 Let $ \textrm{H}_{sig} $ denote the hash function used to compute the signature.
 Let $ X_{1} $ and $ x_{1} $ be the public and private key of a specific signer.
 Let $ m $ be the message that will be signed.
 Let $ X_{2},...,X_{n} $ be the public keys of other cosigners.
Recap on Schnorr Signature Scheme
The Schnorr signature scheme [6] uses group parameters $ (\mathbb{G}, p, g) $ and a hash function $ \textrm{H} $.
A private/public key pair is a pair
$$ (x,X) \in \lbrace 0,...,p-1 \rbrace \times \mathbb{G} $$
where $ X=g^{x} $
To sign a message $ m $, the signer draws a random integer $ r \in \mathbb{Z}_{p} $ and computes
$$ \begin{aligned} R &= g^{r} \\ c &= \textrm{H}(X,R,m) \\ s &= r+cx \end{aligned} $$
The signature is the pair $ (R,s) $, and its validity can be checked by verifying whether $$ g^{s} = RX^{c} $$
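The sign/verify flow above can be sketched in Python over a toy multiplicative group. The parameters $ p = 2039 $, $ q = 1019 $, $ g = 4 $ are illustrative assumptions only (a tiny Schnorr group with $ p = 2q + 1 $) and offer no security; real deployments use roughly 256-bit groups.

```python
import hashlib

# Toy Schnorr group: p = 2q + 1, with g generating the subgroup of prime
# order q. These tiny parameters are illustrative only and offer no security.
p, q, g = 2039, 1019, 4

def H(*parts):
    """Hash to an integer challenge in Z_q (the role of H in the text)."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(x):
    """Return the private/public key pair (x, X = g^x)."""
    return x % q, pow(g, x, p)

def sign(x, X, m, r):
    R = pow(g, r, p)         # public nonce R = g^r
    c = H(X, R, m)           # key-prefixed challenge c = H(X, R, m)
    s = (r + c * x) % q      # s = r + c*x
    return R, s

def verify(X, m, R, s):
    c = H(X, R, m)
    return pow(g, s, p) == (R * pow(X, c, p)) % p   # g^s == R * X^c
```

Verification works because $ g^{s} = g^{r + cx} = g^{r}(g^{x})^{c} = RX^{c} $.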
This scheme is referred to as the "key-prefixed" variant of the scheme, which sees the public key hashed together with $ R $ and $ m $ [7]. This variant was thought to have a better multi-user security bound than the classic variant [8]. However, in [9], key-prefixing was shown to be unnecessary for good multi-user security of Schnorr signatures.
For the development of the MuSig Schnorr-based multisignature scheme [4], key-prefixing is a requirement for the security proof, despite not knowing the form of an attack. The rationale also follows the process in reality, as messages signed in Bitcoin always indirectly commit to the public key.
Design of Schnorr Multisignature Scheme
The naive way to design a Schnorr multisignature scheme would be as follows:
A group of $ n $ signers want to cosign a message $ m $. Each cosigner randomly generates and communicates to others a share
$$ R_i = g^{r_{i}} $$
Each of the cosigners then computes:
$$ R = \prod _{i=1}^{n} R_{i} \mspace{30mu} \mathrm{and} \mspace{30mu} c = \textrm{H} (\tilde{X},R,m) $$
where $$ \tilde{X} = \prod_{i=1}^{n}X_{i} $$
is the product of the individual public keys. The partial signature of each signer is then given by
$$ s_{i} = r_{i}+cx_{i} $$
All partial signatures are then combined into a single signature $(R,s)$ where
$$ s = \displaystyle\sum_{i=1}^{n}s_i \mod p $$
The validity of a signature $ (R,s) $ on message $ m $ for public keys $ \lbrace X_{1},...X_{n} \rbrace $ is equivalent to
$$ g^{s} = R\tilde{X}^{c} $$
where
$$ \tilde{X} = \prod_{i=1}^{n} X_{i} \mspace{30mu} \mathrm{and} \mspace{30mu} c = \textrm{H}(\tilde{X},R,m) $$
Note that this is exactly the verification equation for a traditional key-prefixed Schnorr signature with respect to public key $ \tilde{X} $, a property termed key aggregation. However, these protocols are vulnerable to a rogue-key attack ([12], [14], [15] and [17]), where a corrupted signer sets its public key to
$$ X_{1}=g^{x_{1}} (\prod_{i=2}^{n} X_{i})^{-1} $$
allowing the signer to produce signatures for public keys $ \lbrace X_{1},...X_{n} \rbrace $ by themselves.
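The attack can be demonstrated concretely. In this sketch (the same toy group as before, with illustrative parameters only), the attacker knows only the honest public keys $ X_{2}, X_{3} $, yet can sign alone for the aggregate key:

```python
import hashlib

# Toy group (illustrative only): p = 2q + 1, g of prime order q.
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Honest co-signers' public keys X2, X3 (the attacker never learns x2, x3).
X2, X3 = pow(g, 111, p), pow(g, 222, p)

# Rogue-key attack: attacker picks x1 and publishes X1 = g^x1 * (X2*X3)^-1.
x1 = 7
X1 = (pow(g, x1, p) * pow(X2 * X3, -1, p)) % p

# Naive aggregation: X~ = X1*X2*X3 collapses to g^x1, fully attacker-controlled.
X_agg = (X1 * X2 * X3) % p
assert X_agg == pow(g, x1, p)

# The attacker forges the "multisignature" alone, as a plain Schnorr signature.
r = 13
R = pow(g, r, p)
c = H(X_agg, R, "steal the coins")
s = (r + c * x1) % q

# It verifies against the aggregate key of all three "signers".
forged_ok = pow(g, s, p) == (R * pow(X_agg, c, p)) % p
```

Because $ \tilde{X} = X_{1}X_{2}X_{3} = g^{x_{1}} $, the forged pair $ (R, s) $ passes the aggregate verification equation without any input from the honest signers.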
Bellare and Neven Signature Scheme
Bellare and Neven [21] proceeded differently in order to avoid any key setup. A group of $ n $ signers want to cosign a message $ m $. Their main idea is to have each cosigner use a distinct "challenge" when computing their partial signature
$$ s_{i} = r_{i}+c_{i}x_{i} $$
defined as
$$ c_{i} = \textrm{H}( \langle L \rangle , X_{i},R,m) $$
where
$$ R = \prod_{i=1}^{n}R_{i} $$
The equation to verify signature $ (R,s) $ on message $ m $ for the public keys $ L $ is
$$ g^s = R\prod_{i=1}^{n}X_{i}^{c_{i}} $$
A preliminary round is also added to the signature protocol, where each signer commits to its share $ R_i $ by sending $ t_i = \textrm{H}^\prime(R_i) $ to other cosigners first.
This stops any cosigner from setting $ R = \prod_{i=1}^{n}R_{i} $ to some maliciously chosen value, and also allows the reduction to simulate the signature oracle in the security proof.
Bellare and Neven [21] showed that this yields a multisignature scheme provably secure in the plain publickey model under the Discrete Logarithm assumptions, modeling $ \textrm{H} $ and $ \textrm{H}^\prime $ as random oracles. However, this scheme does not allow key aggregation anymore, since the entire list of public keys is required for verification.
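Bellare and Neven's per-signer challenges can be sketched as follows, with all signers simulated in one process and the preliminary commitment round omitted for brevity (toy group parameters, illustrative only):

```python
import hashlib

# Toy group (illustrative only).
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def bn_sign(xs, m, rs):
    """All signers simulated locally; xs are private keys, rs their nonces."""
    L = tuple(sorted(pow(g, x, p) for x in xs))   # <L>: encoded key multiset
    R = 1
    for r in rs:
        R = (R * pow(g, r, p)) % p                # R = prod R_i
    s = 0
    for x, r in zip(xs, rs):
        ci = H(L, pow(g, x, p), R, m)             # c_i = H(<L>, X_i, R, m)
        s = (s + r + ci * x) % q                  # partial s_i = r_i + c_i*x_i
    return R, s

def bn_verify(Xs, m, R, s):
    L = tuple(sorted(Xs))
    rhs = R
    for Xi in Xs:
        rhs = (rhs * pow(Xi, H(L, Xi, R, m), p)) % p   # R * prod X_i^{c_i}
    return pow(g, s, p) == rhs
```

Note that `bn_verify` needs the full key list `Xs`, illustrating why this scheme does not offer key aggregation.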
MuSig Signature Scheme
MuSig is parameterized by group parameters $ (\mathbb{G}, p, g) $ and three hash functions $ ( \textrm{H}_{com} , \textrm{H}_{agg} , \textrm{H}_{sig} ) $ from $ \lbrace 0,1 \rbrace ^{*} $ to $ \lbrace 0,1 \rbrace ^{l} $ (constructed from a single hash function, using proper domain separation).
Round 1
A group of $ n $ signers want to cosign a message $ m $. Let $ X_1 $ and $ x_1 $ be the public and private key of a specific signer, let $ X_2 , . . . , X_n $ be the public keys of other cosigners and let $ \langle L \rangle $ be the multiset of all public keys involved in the signing process.
For $ i\in \lbrace 1,...,n \rbrace $ , the signer computes the following
$$ a_{i} = \textrm{H}_{agg}(\langle L \rangle,X_{i}) $$
as well as the "aggregated" public key
$$ \tilde{X} = \prod_{i=1}^{n}X_{i}^{a_{i}} $$
Round 2
The signer generates a random private nonce $ r_{1}\leftarrow\mathbb{Z_{\mathrm{p}}} $, computes $ R_{1} = g^{r_{1}} $ (the public nonce) and commitment $ t_{1} = \textrm{H}_{com}(R_{1}) $ and sends $t_{1}$ to all other cosigners.
When receiving the commitments $t_{2},...,t_{n}$ from the other cosigners, the signer sends $R_{1}$ to all other cosigners. This ensures that the public nonce is not exposed until all commitments have been received.
Upon receiving $R_{2},...,R_{n}$ from other cosigners, the signer verifies that $t_{i}=\textrm{H}_{com}(R_{i})$ for all $ i\in \lbrace 2,...,n \rbrace $.
The protocol is aborted if this is not the case.
Round 3
If all the received nonces $ R_{i} $ match their commitments $ t_{i} $ under $ \textrm{H}_{com} $, the following is computed:
$$ \begin{aligned} R &= \prod^{n}_{i=1}R_{i} \\ c &= \textrm{H}_{sig} (\tilde{X},R,m) \\ s_{1} &= r_{1} + ca_{1} x_{1} \mod p \end{aligned} $$
The partial signature $s_{1}$ is sent to all other cosigners. Upon receiving $ s_{2},...,s_{n} $ from the other cosigners, the signer can compute $ s = \sum_{i=1}^{n}s_{i} \mod p$. The signature is $ \sigma = (R,s) $.
In order to verify the aggregated signature $ \sigma = (R,s) $, given a lexicographically encoded multiset of public keys $ \langle L \rangle $ and message $ m $, the verifier computes:
$$ \begin{aligned} a_{i} &= \textrm{H}_{agg}(\langle L \rangle,X_{i}) \mspace{9mu} \textrm {for} \mspace{9mu} i \in \lbrace 1,...,n \rbrace \\ \tilde{X} &= \prod_{i=1}^{n}X_{i}^{a_{i}} \\ c &= \textrm{H}_{sig} (\tilde{X},R,m) \end{aligned} $$
then accepts the signature if
$$ g^{s} = R\prod_{i=1}^{n}X_{i}^{a_{i}c}=R\tilde{X}^{c} $$
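The three rounds can be condensed into the following sketch, with all signers simulated locally and the three hash functions derived from a single SHA-256 via domain separation, as the text suggests (toy group parameters, illustrative only):

```python
import hashlib

# Toy group (illustrative only).
p, q, g = 2039, 1019, 4

def H(tag, *parts):
    """One hash with domain separation, standing in for H_com/H_agg/H_sig."""
    data = tag.encode() + b"|" + b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def musig_sign(xs, m, rs):
    # Round 1: per-signer coefficients and the aggregated key.
    Xs = [pow(g, x, p) for x in xs]
    L = tuple(sorted(Xs))                              # <L>
    a = [H("agg", L, Xi) for Xi in Xs]                 # a_i = H_agg(<L>, X_i)
    X_agg = 1
    for Xi, ai in zip(Xs, a):
        X_agg = (X_agg * pow(Xi, ai, p)) % p           # X~ = prod X_i^{a_i}

    # Round 2: commit to nonces, then reveal and check the commitments.
    Rs = [pow(g, r, p) for r in rs]
    ts = [H("com", Ri) for Ri in Rs]                   # t_i = H_com(R_i)
    assert all(H("com", Ri) == ti for Ri, ti in zip(Rs, ts))

    # Round 3: combine nonces, compute the challenge and the partial sigs.
    R = 1
    for Ri in Rs:
        R = (R * Ri) % p
    c = H("sig", X_agg, R, m)                          # c = H_sig(X~, R, m)
    s = sum(r + c * ai * x for x, r, ai in zip(xs, rs, a)) % q
    return X_agg, R, s

def musig_verify(Xs, m, R, s):
    L = tuple(sorted(Xs))
    X_agg = 1
    for Xi in Xs:
        X_agg = (X_agg * pow(Xi, H("agg", L, Xi), p)) % p
    c = H("sig", X_agg, R, m)
    return pow(g, s, p) == (R * pow(X_agg, c, p)) % p  # g^s == R * X~^c
```

Unlike the Bellare-Neven construction, the verifier only needs $ \tilde{X} $ (recomputable from $ \langle L \rangle $), so the scheme supports key aggregation.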
Revisions
In a previous version of [4], published on 15 January 2018, a two-round variant of MuSig was proposed, in which the initial commitment round is omitted, claiming a security proof under the One-More Discrete Logarithm (OMDL) assumption ([32], [33]). The authors of [34] then discovered a flaw in the security proof and showed, through a meta-reduction, that the initial multisignature scheme cannot be proved secure using an algebraic black-box reduction under the DL or OMDL assumption.
In more detail, it was observed that in the two-round variant of MuSig, an adversary (controlling public keys $ X_{2},...,X_{n} $) can impose the value of $ R=\Pi_{i=1}^{n}R_{i} $ used in signature protocols, since they can choose $ R_{2},...,R_{n} $ after having received $ R_{1} $ from the honest signer (controlling public key $ X_{1}=g^{x_{1}} $). This prevents one from using the initial method of simulating the honest signer in the Random Oracle model without knowing $ x_{1} $, i.e. by randomly drawing $ s_{1} $ and $ c $, computing $ R_1=g^{s_1}(X_1)^{-a_1c} $, and later programming $ \textrm{H}_{sig}(\tilde{X}, R, m) \mspace{2mu} : = c $, since the adversary might have made the random oracle query $ \textrm{H}_{sig}(\tilde{X}, R, m) $ before engaging in the corresponding signature protocol.
Despite this, there is no attack currently known against the two-round variant of MuSig and it might be secure, although this is not provable under standard assumptions using existing techniques [4].
Turning Bellare and Neven’s Scheme into an IAS Scheme
In order to change the BN multisignature scheme into an interactive aggregate signature (IAS) scheme, [4] proposed the following scheme, which includes a fix to make the execution of the signing algorithm dependent on the message index.
If $ X = g^{x} $ is the public key of a specific signer and $ m $ the message they want to sign, and
$$ S^\prime = \lbrace (X^\prime_{1}, m^\prime_{1}),..., (X^\prime_{n-1}, m^\prime_{n-1}) \rbrace $$
is the set of the public key/message pairs of other signers, this specific signer merges $ (X, m) $ and $ S^\prime $ into the ordered set
$$ \langle S \rangle \mspace{6mu} \mathrm{of} \mspace{6mu} S = \lbrace (X_{1}, m_{1}),..., (X_{n}, m_{n}) \rbrace $$
and retrieves the resulting message index $ i $ such that
$$ (X_{i}, m_{i}) = (X, m) $$
Each signer then draws $ r_{i}\leftarrow\mathbb{Z_{\mathrm{p}}} $, computes $ R_{i} = g^{r_{i}} $, sends the commitment $ t_{i} = H^\prime(R_{i}) $ in a first round and then $ R_{i} $ in a second round, and then computes
$$ R = \prod_{i=1}^{n}R_{i} $$
The signer with message index $ i $ then computes:
$$ c_{i} = H(R, \langle S \rangle, i) \mspace{30mu} \\ s_{i} = r_{i} + c_{i}x_{i} \mod p $$
and then sends $ s_{i} $ to other signers. All signers can compute
$$ s = \displaystyle\sum_{i=1}^{n}s_{i} \mod p $$
The signature is $ \sigma = (R, s) $.
Given an ordered set $ \langle S \rangle \mspace{6mu} \mathrm{of} \mspace{6mu} S = \lbrace (X_{1}, m_{1}),...,(X_{n}, m_{n}) \rbrace $ and a signature $ \sigma = (R, s) $, the signature $ \sigma $ is valid for $ S $ when
$$ g^s = R\prod_{i=1}^{n}X_{i} ^{H(R, \langle S \rangle, i)} $$
There is no need to include $ \langle L \rangle $ in the hash computation, nor the public key $ X_i $ of the local signer, since they are already "accounted for" through the ordered set $ \langle S \rangle $ and the message index $ i $.
Note: At the time of writing this report, the IAS scheme presented here still needed to undergo a complete security analysis.
Conclusions, Observations and Recommendations
 MuSig leads to both native and private multisignature transactions with signature aggregation.
 Signature data for multisignatures can be large and cumbersome. MuSig will allow users to create more complex transactions without burdening the network and revealing compromising information.
 The IAS case where each signer signs their own message must still be proven by a complete security analysis.
References
[1] P. Wuille, “Key Aggregation for Schnorr Signatures”, 2018. Available: https://blockstream.com/2018/01/23/musigkeyaggregationschnorrsignatures/. Date accessed: 2019‑01‑20.
[2] Blockstream, “Schnorr Signatures for Bitcoin – BPASE ’18”, 2018. Available: https://blockstream.com/2018/02/15/schnorrsignaturesbpase/. Date accessed: 2019‑01‑20.
[3] K. Itakura, “A Publickey Cryptosystem Suitable for Digital Multisignatures”, NEC J. Res. Dev., Vol. 71, 1983. Available: https://scinapse.io/papers/200023587/. Date accessed: 2019‑01‑20.
[4] G. Maxwell, A. Poelstra, Y. Seurin and P. Wuille, “Simple Schnorr Multisignatures with Applications to Bitcoin”, pp. 1–34, 2018. Available: https://eprint.iacr.org/2018/068.pdf. Date accessed: 2019‑01‑20.
[5] B. W. Contributors, “Multisignature”, 2017. Available: https://wiki.bitcoin.com/w/Multisignature. Date accessed: 2019‑01‑20.
[6] C. P. Schnorr, “Efficient Signature Generation by Smart Cards”, Journal of Cryptology, Vol. 4, No. 3, pp. 161‑174, 1991. Available: https://link.springer.com/article/10.1007/BF00196725. Date accessed: 2019‑01‑20.
[7] D. J. Bernstein, N. Duif, T. Lange, P. Schwabe and B. Yang, “Highspeed Highsecurity Signatures”, Journal of Cryptographic Engineering, Vol. 2, No. 2, pp. 77‑89, 2012. Available: https://ed25519.cr.yp.to/ed2551920110705.pdf. Date accessed: 2019‑01‑20.
[8] D. J. Bernstein, “Multiuser Schnorr Security, Revisited”, IACR Cryptology ePrint Archive, Vol. 2015, p. 996, 2015. Available: https://eprint.iacr.org/2015/996.pdf. Date accessed: 2019‑01‑20.
[9] E. Kiltz, D. Masny and J. Pan, “Optimal Security Proofs for Signatures from Identification Schemes”, Annual Cryptology Conference, pp. 33‑61, Springer, 2016. Available: https://eprint.iacr.org/2016/191.pdf. Date accessed: 2019‑01‑20.
[10] C. M. Li, T. Hwang and N. Y. Lee, “Thresholdmultisignature Schemes where Suspected Forgery Implies Traceability of Adversarial Shareholders”, Workshop on the Theory and Application of Cryptographic Techniques, pp. 194‑204, Springer, 1994. Available: https://link.springer.com/content/pdf/10.1007%2FBFb0053435.pdf. Date accessed: 2019‑01‑20.
[11] L. Harn, “Grouporiented (t, n) Threshold Digital Signature Scheme and Digital Multisignature”, IEEE ProceedingsComputers and Digital Techniques, Vol. 141, No. 5, pp. 307‑313, 1994. Available: https://ieeexplore.ieee.org/abstract/document/326780. Date accessed: 2019‑01‑20.
[12] P. Horster, M. Michels and H. Petersen, “MetaMultisignature Schemes Based on the Discrete Logarithm Problem”, Information Securitythe Next Decade, pp. 128‑142, Springer, 1995. Available: https://link.springer.com/content/pdf/10.1007%2F9780387348735_11.pdf. Date accessed: 2019‑01‑20.
[13] K. Ohta and T. Okamoto, “A Digital Multisignature Scheme Based on the FiatShamir Scheme,” International Conference on the Theory and Application of Cryptology, pp. 139‑148, Springer, 1991. Available: https://link.springer.com/chapter/10.1007/3540573321_11. Date accessed: 2019‑01‑20.
[14] S. K. Langford, “Weaknesses in Some Threshold Cryptosystems”, Annual International Cryptology Conference, pp. 74‑82, Springer, 1996. Available: https://link.springer.com/content/pdf/10.1007%2F3540686975_6.pdf. Date accessed: 2019‑01‑20.
[15] M. Michels and P. Horster, “On the Risk of Disruption in Several Multiparty Signature Schemes”, International Conference on the Theory and Application of Cryptology and Information Security, pp. 334‑345, Springer, 1996. Available: https://pdfs.semanticscholar.org/d412/e5ab35fd397931cef0f8202324308f44e545.pdf. Date accessed: 2019‑01‑20.
[16] K. Ohta and T. Okamoto, “Multisignature Schemes Secure Against Active Insider Attacks”, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. 82, No. 1, pp. 21‑31, 1999. Available: http://search.ieice.org/bin/summary.php?id=e82a_1_21&category=A&year=1999&lang=E&abst=. Date accessed: 2019‑01‑20.
[17] S. Micali, K. Ohtaz and L. Reyzin, “Accountablesubgroup Multisignatures”, Proceedings of the 8th ACM Conference on Computer and Communications Security, pp. 245‑254, ACM, 2001. Available: https://pdfs.semanticscholar.org/6bf4/f9450e7a8e31c106a8670b961de4735589cf.pdf. Date accessed: 2019‑01‑20.
[18] T. Ristenpart and S. Yilek, “The Power of Proofsofpossession: Securing Multiparty Signatures Against Roguekey Attacks”, Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 228‑245, Springer, 2007. Available: https://link.springer.com/content/pdf/10.1007%2F9783540725404_13.pdf. Date accessed: 2019‑01‑20.
[19] A. Boldyreva, “Threshold Signatures, Multisignatures and Blind Signatures Based on the GapDiffieHellmangroup Signature Scheme”, International Workshop on Public Key Cryptography, pp. 31‑46, Springer, 2003. Available: https://www.iacr.org/archive/pkc2003/25670031/25670031.pdf. Date accessed: 2019‑01‑20.
[20] S. Lu, R. Ostrovsky, A. Sahai, H. Shacham and B. Waters, “Sequential Aggregate Signatures and Multisignatures without Random Oracles,” Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 465‑485, Springer, 2006. Available: https://eprint.iacr.org/2006/096.pdf. Date accessed: 2019‑01‑20.
[21] M. Bellare and G. Neven, “MultiSignatures in the Plain PublicKey Model and a General Forking Lemma”, ACM CCS, pp. 390‑399, 2006. Available: https://cseweb.ucsd.edu/~mihir/papers/multisignaturesccs.pdf. Date accessed: 2019‑01‑20.
[22] A. Bagherzandi, J. H. Cheon and S. Jarecki, “Multisignatures Secure under the Discrete Logarithm Assumption and a Generalized Forking Lemma”, Proceedings of the 15th ACM Conference on Computer and Communications Security  CCS '08, p. 449, 2008. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.544.2947&rep=rep1&type=pdf. Date accessed: 2019‑01‑20.
[23] C. Ma, J. Weng, Y. Li and R. Deng, “Efficient Discrete Logarithm Based Multisignature Scheme in the Plain Public Key Model”, Designs, Codes and Cryptography, Vol. 54, No. 2, pp. 121‑133, 2010. Available: https://link.springer.com/article/10.1007/s106230099313z. Date accessed: 2019‑01‑20.
[24] E. Syta, I. Tamas, D. Visher, D. I. Wolinsky, P. Jovanovic, L. Gasser, N. Gailly, I. Khoffi and B. Ford, “Keeping Authorities 'Honest or Bust' with Decentralized Witness Cosigning”, Security and Privacy (SP), 2016 IEEE Symposium on pp. 526‑545, IEEE, 2016. Available: https://arxiv.org/pdf/1503.08768.pdf. Date accessed: 2019‑01‑20.
[25] D. Boneh, C. Gentry, B. Lynn and H. Shacham, “Aggregate and Verifiably Encrypted Signatures from Bilinear Maps”, in International Conference on the Theory and Applications of Cryptographic Techniques, pp. 416‑432, Springer, 2003. Available: http://crypto.stanford.edu/~dabo/papers/aggreg.pdf. Date accessed: 2019‑01‑20.
[26] M. Bellare, C. Namprempre and G. Neven, “Unrestricted Aggregate Signatures”, International Colloquium on Automata, Languages, and Programming, pp. 411‑422, Springer, 2007. Available: https://cseweb.ucsd.edu/~mihir/papers/agg.pdf. Date accessed: 2019‑01‑20.
[27] A. Lysyanskaya, S. Micali, L. Reyzin and H. Shacham, “Sequential Aggregate Signatures from Trapdoor Permutations”, in International Conference on the Theory and Applications of Cryptographic Techniques, pp. 74‑90, Springer, 2004. Available: https://hovav.net/ucsd/dist/rsaagg.pdf. Date accessed: 2019‑01‑20.
[28] G. Andresen, “MofN Standard Transactions”, 2011. Available: https://bitcoin.org/en/glossary/multisig. Date accessed: 2019‑01‑20.
[29] E. Shen, E. Shi and B. Waters, “Predicate Privacy in Encryption Systems”, Theory of Cryptography Conference, pp. 457‑473, Springer, 2009. Available: https://www.iacr.org/archive/tcc2009/54440456/54440456.pdf. Date accessed: 2019‑01‑20.
[30] R. C. Merkle, “A Digital Signature Based on a Conventional Encryption Function”, Conference on the Theory and Application of Cryptographic Techniques, pp. 369‑378, Springer, 1987. Available: https://people.eecs.berkeley.edu/~raluca/cs261f15/readings/merkle.pdf. Date accessed: 2019‑01‑20.
[31] G. Maxwell, “CoinJoin: Bitcoin Privacy for the Real World”, 2013. Available: https://bitcointalk.org/index.php?topic=279249.0. Date accessed: 2019‑01‑20.
[32] M. Bellare and A. Palacio, “GQ and Schnorr Identification Schemes: Proofs of Security against Impersonation under Active and Concurrent Attacks”, Annual International Cryptology Conference, pp. 162–177, Springer, 2002. Available: https://cseweb.ucsd.edu/~mihir/papers/gq.pdf. Date accessed: 2019‑01‑20.
[33] M. Bellare, C. Namprempre, D. Pointcheval and M. Semanko, “The OneMoreRSA Inversion Problems and the Security of Chaum’s Blind Signature Scheme”, Journal of Cryptology, Vol. 16, No. 3, 2003. Available: https://eprint.iacr.org/2001/002.pdf. Date accessed: 2019‑01‑20.
[34] M. Drijvers, K. Edalatnejad, B. Ford, and G. Neven, “Okamoto Beats Schnorr: On the Provable Security of Multisignatures”, Tech. Rep., 2018. Available: https://www.semanticscholar.org/paper/OkamotoBeatsSchnorr%3AOntheProvableSecurityofDrijversEdalatnejad/154938a12885ff30301129597ebe11dd153385bb. Date accessed: 2019‑01‑20.
Contributors
 https://github.com/kevoulee
 https://github.com/hansieodendaal
 https://github.com/CjS77
 https://github.com/anselld
Fraud Proofs and SPV (Lightweight) Clients – Easier Said than Done?
 Background
 Introduction
 Full Node vs. SPV Client
 What are Fraud Proofs?
 Fraud Proofs Possible within Existing Bitcoin Protocol
 Fraud Proofs Requiring Changes to Bitcoin Protocol
 Universal Fraud Proofs (Suggested Improvement)
 How SPV Clients Work
 Security and Privacy Issues with SPV Clients
 Examples of SPV Implementations
 Other Suggested Fraudproof Improvements
 Conclusions, Observations and Recommendations
 References
 Contributors
Background
The bitcoin blockchain was, as of June 2018, approximately 173 GB in size [1]. This makes it nearly impossible for everyone to run a full bitcoin node. Since not everyone can afford the computational power, bandwidth and cost of running a full bitcoin node, many users will have to rely on lightweight/Simplified Payment Verification (SPV) clients.
SPV clients will believe everything miners or nodes tell them, as demonstrated by Peter Todd in the following screenshot of an Android client showing millions of bitcoins. The wallet was sent a transaction with 2.1 million BTC outputs [17]. Peter Todd modified the code for his node in order to deceive the bitcoin wallet, since such wallets cannot verify coin amounts [27] (the code can be found in the "Quick-n-dirty hack to lie to SPV wallets" branch of his GitHub repository).
Introduction
In the original bitcoin whitepaper [2], Satoshi recognized the limitations described in Background, and introduced the concept of Simplified Payment Verification (SPV). This concept allows verification of payments using a lightweight client that does not need to download the entire bitcoin blockchain. It only downloads the block headers of the longest proof-of-work chain and obtains the Merkle branch linking a transaction to its block [3]. The existence of the Merkle root in the chain, along with blocks added after the block containing it, provides confirmation of the legitimacy of that chain.
In this system, full nodes would need to provide an alert (known as a fraud proof) to SPV clients when an invalid block is detected. The SPV clients would then be prompted to download the full block and the alerted transactions to confirm the inconsistency [2]. An invalid block need not be the result of malicious intent; it could also arise from accounting errors, whether accidental or not.
Full Node vs. SPV Client
A full bitcoin node contains the following details:
 every block;
 every transaction that has ever been sent;
 all the unspent transaction outputs (UTXOs) [4].
An SPV client, however, contains:
 a block header with transaction data relative to the client, including other transactions required to compute the Merkle root; or
 just a block header with no transactions.
What are Fraud Proofs?
Fraud proofs are a way to improve the security of SPV clients by providing a mechanism for full nodes to prove that a chain is invalid, irrespective of the amount of proof of work it has [5]. Fraud proofs could also aid the bitcoin scaling debate: because SPV clients are easier to run than full nodes, they could ease bitcoin's scalability issues ([6], [18]).
Fraud Proofs Possible within Existing Bitcoin Protocol
At the time of writing (February 2019), various proofs are needed to prove fraud in the bitcoin blockchain, depending on the action involved. The following are the types of proofs needed for specific fraud cases within the existing bitcoin protocol [5]:
Invalid Transaction due to Stateless Criteria Violation (Correct Syntax, Input Scripts Conditions Satisfied, etc.)
In the case of an invalid transaction, the fraud proofs consist of:
 the header of the invalid block;
 the invalid transaction;
 an invalid block's Merkle tree containing the minimum number of nodes needed to prove the existence of the invalid transaction in the tree.
Invalid Transaction due to Input Already Spent
In this case, the fraud proof would consist of:
 the header of the invalid block;
 the invalid transaction;
 proof that the invalid transaction is within the invalid block;
 the header of the block containing the original spend transaction;
 the original spending transaction;
 proof showing that the spend transaction is within the header block of the spend transaction.
Invalid Transaction due to Incorrect Generation Output Value
In this case, the fraud proof consists of the block itself.
Invalid Transaction if Input does not Exist
In this case, the fraud proof consists of the entire blockchain.
Fraud Proofs Requiring Changes to Bitcoin Protocol
The following fraud proofs would require changes to the bitcoin protocol itself [5]:
Invalid Transaction if Input does not Exist in Old Blocks
In this case, the fraud proof consists of:
 the header of the invalid block;
 the invalid transaction;
 proof that the header of the invalid block contains the invalid transaction;
 proof that the header of the invalid block contains the leaf node corresponding to the nonexistent input;
 the block referenced by the leaf node, if it exists.
Missing Proof Tree Item
In this case, the fraud proof consists of:
 the header of the invalid block;
 the transaction of the missing proof tree node;
 an indication as to which input from the transaction of the missing proof tree node is missing;
 proof that the header of the invalid block contains the transaction of the missing proof tree node;
 proof that the proof tree contains two adjacent leaf nodes.
Universal Fraud Proofs (Suggested Improvement)
As can be seen, requiring different fraud-proof constructions for different fraud types can become cumbersome. Al-Bassam et al. [26] proposed a general, universal fraud-proof construction for most cases. Their proposition is to generalize the entire blockchain as a state transition system and represent the entire state as a Merkle root using a Sparse Merkle tree, with each transaction changing the state root of the blockchain. This can be expressed with the function:
transaction(state,tx) = State or Error
In the case of the bitcoin blockchain, representing the entire blockchain as a keyvalue store Sparse Merkle tree would mean:
Key = UTXO ID
Value = 1 if unspent or 0 if spent
Each transaction will change the state root of the blockchain and can be represented with this function:
TransitionRoot(stateRoot,tx,Witnesses) = stateRoot or Error
In this proposition, a valid fraud proof construction will consist of:
 the transaction;
 the prestate root;
 the poststate root;
 witnesses (Merkle proofs of all the parts of the state the transaction accesses/modifies).
Also expressed as this function:
rootTransition(stateRoot, tx, witnesses) != stateRoot
So a full node would send a light client/SPV client this data to prove a valid fraud proof. The SPV client would compute this function and, if the transition of the state root differs from the state root in the block, then the block is rejected.
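The check can be sketched with a toy Merkle tree over a 16-slot UTXO keyspace standing in for a full sparse Merkle tree, where spending a UTXO flips its leaf from 1 (unspent) to 0 (spent). The names here (`check_transition`, etc.) are illustrative, not part of the proposal:

```python
import hashlib

DEPTH = 4  # toy tree: 16 possible UTXO ids (real designs use 256-bit keys)

def h(l, r):
    return hashlib.sha256(l + r).digest()

def leaf(v):
    return hashlib.sha256(bytes([v])).digest()  # v: 1 = unspent, 0 = spent

def root(leaves):
    """leaves: dict utxo_id -> 0/1; absent ids default to 0."""
    level = [leaf(leaves.get(i, 0)) for i in range(2 ** DEPTH)]
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, key):
    """Merkle witness: sibling hashes on the path from `key` to the root."""
    level = [leaf(leaves.get(i, 0)) for i in range(2 ** DEPTH)]
    path, idx = [], key
    while len(level) > 1:
        path.append(level[idx ^ 1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def root_from(key, value, path):
    """Recompute the root implied by one leaf value plus its witness."""
    node, idx = leaf(value), key
    for sib in path:
        node = h(node, sib) if idx % 2 == 0 else h(sib, node)
        idx //= 2
    return node

def check_transition(pre_root, post_root, spend_key, witness):
    """rootTransition-style check: spending flips leaf 1 -> 0.
    True if the claimed post-state root is consistent; False exposes fraud."""
    if root_from(spend_key, 1, witness) != pre_root:
        return False                     # witness doesn't match pre-state
    return root_from(spend_key, 0, witness) == post_root
```

An SPV client holding only `pre_root` and `post_root` from block headers can run `check_transition` on the witness supplied by a full node and reject the block on a mismatch.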
The poststate root can be excluded in order to save block space. However, this does increase the fraud proof size. This works with the assumption that the SPV client is connected to a minimum of one honest node.
How SPV Clients Work
SPV clients make use of Bloom filters to receive only the transactions that are relevant to the user [7]. Bloom filters are probabilistic data structures used to quickly check whether an element is a member of a set, responding with a Boolean answer [9].
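A minimal Bloom filter illustrating the idea (class name and parameters are illustrative; a set bit may collide, so membership answers can be false positives, but never false negatives):

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: no false negatives, tunable false positives."""
    def __init__(self, size_bits=1024, num_hashes=3):
        self.m, self.k, self.bits = size_bits, num_hashes, 0

    def _positions(self, item):
        # k independent positions derived from salted SHA-256 hashes.
        for i in range(self.k):
            d = hashlib.sha256(f"{i}|{item}".encode()).digest()
            yield int.from_bytes(d, "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

An SPV client would load such a filter with its own addresses and ask peers to forward only matching transactions.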
In addition to Bloom filters, SPV clients rely on Merkle trees [26]: binary structures that contain all the hashes between the block's Merkle root (apex) and the transactions (leaves). With Merkle trees, one only needs to check a small part of the block, the Merkle branch linking a transaction to the Merkle root, to prove that the transaction has been accepted in the network [8].
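The Merkle branch mechanism can be sketched as follows, duplicating the last hash on odd-sized levels as bitcoin does (function names are illustrative):

```python
import hashlib

def h2(data):
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    level = [h2(t.encode()) for t in txids]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # bitcoin duplicates the last hash
        level = [h2(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(txids, index):
    """Sibling hashes an SPV client needs to link txids[index] to the root."""
    level = [h2(t.encode()) for t in txids]
    branch = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        branch.append(level[index ^ 1])
        level = [h2(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify_branch(txid, index, branch, root):
    """What the SPV client runs: O(log n) hashes instead of the whole block."""
    node = h2(txid.encode())
    for sib in branch:
        node = h2(node + sib) if index % 2 == 0 else h2(sib, node) if False else h2(sib + node)
        index //= 2
    return node == root
```

The branch grows logarithmically with the number of transactions, which is what keeps SPV verification cheap.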
Fraud proofs are integral to the security of SPV clients. However, the other components in SPV clients are not without issues.
Security and Privacy Issues with SPV Clients
Weak Bloom Filters and Merkle Tree Designs
In August 2017, a weakness in the bitcoin Merkle tree design was found to reduce the security of SPV clients. This weakness could allow an attacker to simulate a payment of an arbitrary amount to a victim using an SPV wallet, and trick the victim into accepting it as valid [10]. The bitcoin Merkle tree makes no distinction between inner nodes and leaf nodes, and could thus be manipulated by an attack that reinterprets transactions as inner nodes and inner nodes as transactions [11]. This weakness stems from inner nodes having no format, with the only requirement being that they are 64 bytes long.
A brute-force attack particularly affects systems that automatically accept SPV proofs, and could be carried out with an investment of approximately USD 3 million. One proposed solution is to ensure that no internal, 64-byte node is ever accepted as a valid transaction by SPV wallets/clients [11].
The BIP37 SPV [13] Bloom filters do not have relevant privacy features [7]. They leak information such as the IP addresses of users and whether multiple addresses belong to a single owner [12] (if Tor or Virtual Private Networks (VPNs) are not used).
Furthermore, SPV clients pose a denial-of-service risk to full nodes, owing to the processing load (80 GB disk reads) incurred when SPV clients sync. Conversely, full nodes can deny service to SPV clients by returning NULL filter responses to requests [14]. Peter Todd [15] aptly demonstrates this risk.
Improvements
To address these issues, a new concept called committed Bloom filters was introduced to improve the performance and security of SPV clients. In this concept, which can be used in lieu of BIP37 [16], a Bloom filter digest (BFD) of every block's inputs, outputs and transactions is created, with a filter size that is a small fraction of the overall block size [14].
A second Bloom filter is created with all transactions, and a binary comparison is made to determine matching transactions. This BFD allows SPV clients to cache filters without the need to recompute them [16]. It also introduces semi-trusted oracles to improve the security and privacy of SPV clients by allowing them to download block data via any out-of-band method [14].
Examples of SPV Implementations
There are two well-known SPV implementations for bitcoin: bitcoinj and Electrum. The latter performs SPV-level validation, comparing multiple Electrum servers against each other. It has very similar security to bitcoinj, but potentially better privacy [25], owing to bitcoinj's implementation of Bloom filters [7].
Other Suggested Fraud-proof Improvements
Erasure Codes
Along with the proposed universal fraud-proof solution, another approach to the data availability problem with fraud proofs is erasure coding. Erasure coding allows a piece of data M chunks long to be expanded into a piece of data N chunks long ("chunks" can be of arbitrary size), such that any M of the N chunks can be used to recover the original data. Blocks are then required to commit to the Merkle root of this extended data, and light clients probabilistically check that the majority of the extended data is available [21].
According to the proposed solution, one of three conditions will be true for the SPV client when using erasure codes [20]:
 The entire extended data is available, the erasure code is constructed correctly and the block is valid.
 The entire extended data is available, the erasure code is constructed correctly, but the block is invalid.
 The entire extended data is available, but the erasure code is constructed incorrectly.
In case (1), the block is valid and the light client can accept it. In case (2), it is expected that some other node will quickly construct and relay a fraud proof. In case (3), it is also expected that some other node will quickly construct and relay a specialized kind of fraud proof that shows that the erasure code is constructed incorrectly.
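The recovery property can be illustrated with a toy (M=2, N=3) erasure code using a single XOR parity chunk (real schemes such as Reed-Solomon generalize this to arbitrary M and N; the helper and variable names here are my own):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy (M=2, N=3) erasure code: extend two data chunks with one XOR
# parity chunk, so that any 2 of the 3 chunks recover the original data.
d1, d2 = b"chunk-one!", b"chunk-two!"
parity = xor_bytes(d1, d2)
extended = [d1, d2, parity]

# Suppose chunk d1 is withheld: it is recoverable from the other two.
recovered_d1 = xor_bytes(extended[1], extended[2])
assert recovered_d1 == d1
```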
Merklix Trees
Another suggested fraud-proof improvement for the bitcoin blockchain is by means of block sharding and validation using Merklix trees. Merklix trees are essentially Merkle trees that use unordered sets [22]. This approach also assumes that there is at least one honest node per shard. Using Merklix proofs, the following can be proven [23]:
 A transaction is in the block.
 The transaction's inputs and outputs are or are not in the UTXO set.
In this scenario, SPV clients can be made aware of any invalidity in blocks and cannot be lied to about the UTXO set.
Payment Channels
Bitcoin is designed to be resilient to denial-of-service (DoS) attacks. However, the same cannot be said for SPV clients. This could be an issue if malicious alerting nodes spam SPV clients with false fraud proofs. A proposed solution to this is payment channels [6], due to them:
 operating at near instant speeds, thus allowing quick alerting of fraud proofs;
 facilitating microtransactions;
 being robust to temporary mining failures (as they use long “custodial periods”).
In this way, the use of payment channels can help with incentivizing full nodes to issue fraud proofs.
Conclusions, Observations and Recommendations
Fraud proofs can be complex [6] and hard to implement. However, they appear to be necessary for the scalability of blockchains and for the security and privacy of SPV clients, since not everyone can, or wants to, run a full node to participate in the network. The current SPV implementations are working on improving the security and privacy of SPV clients. Furthermore, for current blockchains, a hard or soft fork would be needed in order to accommodate the additional data in the block headers.
Based on the payment channels fraud-proof proposal, which suggests an incentive for nodes that issue alerts/fraud proofs, it seems likely that a fraud-proof provider and consumer marketplace will have to emerge.
Where Tari is concerned, it would appear that the universal fraud-proof proposals, or something similar, would need to be investigated, as end-users of the protocol/network will mostly be using light clients. However, since these fraud proofs work on the assumption of a minimum of one honest node, a fraud proof will not be viable in the case of a digital issuer (which may be one or more entities), as the digital issuer could be the sole node.
References
[1] "Size of the Bitcoin Blockchain from 2010 to 2018, by Quarter (in Megabytes)" [online].
Available: https://www.statista.com/statistics/647523/worldwidebitcoinblockchainsize/. Date accessed: 2018‑09‑10.
[2] S. Nakamoto, "Bitcoin: A Peer-to-Peer Electronic Cash System" [online]. Available: https://www.bitcoin.com/bitcoin.pdf.
Date accessed: 2018‑09‑10.
[3] "Simple Payment Verification" [online]. Available: http://docs.electrum.org/en/latest/spv.html. Date accessed: 2018‑09‑10.
[4] "SPV, Bloom Filters and Checkpoints" [online]. Available: https://multibit.org/hd0.4/howspvworks.html. Date accessed: 2018‑09‑10.
[5] "Improving the Ability of SPV Clients to Detect Invalid Chains" [online].
Available: https://gist.github.com/justusranvier/451616fa4697b5f25f60. Date accessed: 2018‑09‑10.
[6] "Meditations on Fraud Proofs" [online]. Available: http://www.truthcoin.info/blog/fraudproofs/. Date accessed: 2018‑09‑10.
[7] A. Gervais, G. O. Karame, D. Gruber and S. Capkun, "On the Privacy Provisions of Bloom Filters in Lightweight Bitcoin Clients" [online]. Available: https://eprint.iacr.org/2014/763.pdf. Date accessed: 2018‑09‑10.
[8] "SPV, Bloom Filters and Checkpoints" [online]. Available: https://multibit.org/hd0.4/howspvworks.html. Date accessed: 2018‑09‑10.
[9] "A Case of False Positives in Bloom Filters" [online].
Available: https://medium.com/blockchainmusings/acaseoffalsepositivesinbloomfiltersda09ec487ff0. Date
accessed: 2018‑09‑11.
[10] "The Design of Bitcoin Merkle Trees Reduces the Security of SPV Clients" [online].
Available: https://media.rsk.co/thedesignofbitcoinmerkletreesreducesthesecurityofspvclients/.
Date accessed: 2018‑09‑11.
[11] "Leaf-node Weakness in Bitcoin Merkle Tree Design" [online].
Available: https://bitslog.wordpress.com/2018/06/09/leafnodeweaknessinbitcoinmerkletreedesign/.
Date accessed: 2018‑09‑11.
[12] "Privacy in Bitsquare" [online]. Available: https://bisq.network/blog/privacyinbitsquare/. Date accessed: 2018‑09‑11.
[13] "bip0037.mediawiki" [online]. Available: https://github.com/bitcoin/bips/blob/master/bip0037.mediawiki. Date accessed: 2018‑09‑11.
[14] "Committed Bloom Filters for Improved Wallet Performance and SPV Security" [online].
Available: https://lists.linuxfoundation.org/pipermail/bitcoindev/2016May/012636.html. Date accessed: 2018‑09‑11.
[15] "Bloomioattack" [online]. Available: https://github.com/petertodd/bloomioattack. Date accessed: 2018‑09‑11.
[16] "Committed Bloom Filters versus BIP37 SPV" [online].
Available: https://www.newsbtc.com/2016/05/10/developersintroducebloomfiltersimprovebitcoinwalletsecurity/.
Date accessed: 2018‑09‑12.
[17] "Fraud Proofs" [online]. Available: https://www.linkedin.com/pulse/petertoddsfraudproofstalkmitbitcoinexpo2016markmorris/.
Date accessed: 2018‑09‑12.
[18] "New Satoshi Nakamoto Emails Revealed" [online]. Available: https://www.trustnodes.com/2017/08/12/newsatoshinakamotoemailsrevealed. Date accessed: 2018‑09‑12.
[19] J. Poon and V. Buterin, "Plasma: Scalable Autonomous Smart Contracts" [online]. Available: https://plasma.io/plasma.pdf.
Date accessed: 2018‑09‑13.
[20] "A Note on Data Availability and Erasure Coding" [online].
Available: https://github.com/ethereum/research/wiki/Anoteondataavailabilityanderasurecoding.
Date accessed: 2018‑09‑13.
[21] "Vitalik Buterin and Peter Todd Go Head to Head in the Crypto Culture Wars" [online].
Available: https://www.trustnodes.com/2017/08/14/vitalikbuterinpetertoddgoheadheadcryptoculturewars.
Date accessed: 2018‑09‑14.
[22] "Introducing Merklix Tree as an Unordered Merkle Tree on Steroid" [online].
Available: https://www.deadalnix.me/2016/09/24/introducingmerklixtreeasanunorderedmerkletreeonsteroid/.
Date accessed: 2018‑09‑14.
[23] "Using Merklix Tree to Shard Block Validation" [online].
Available: https://www.deadalnix.me/2016/11/06/usingmerklixtreetoshardblockvalidation/. Date accessed: 2018‑09‑14.
[24] "Fraud Proofs" [online]. Available: https://bitco.in/forum/threads/fraudproofs.1617/. Date accessed: 2018‑09‑18.
[25] "What's the Difference between an API Wallet and an SPV Wallet?" [online].
Available: https://www.reddit.com/r/Bitcoin/comments/3c3zn4/whats_the_difference_between_an_api_wallet_and_a/.
Date accessed: 2018‑09‑21.
[26] M. Al-Bassam, A. Sonnino and V. Buterin, "Fraud Proofs: Maximising Light Client Security and Scaling Blockchains with Dishonest Majorities" [online]. Available: https://arxiv.org/pdf/1809.09044.pdf. Date accessed: 2018‑10‑08.
[27] "Bitcoin Integration/Staging Tree" [online]. Available: https://github.com/petertodd/bitcoin/tree/201602lietospv.
Date accessed: 2018‑10‑12.
Contributors
 https://github.com/ksloven
 https://github.com/CjS77
 https://github.com/hansieodendaal
 https://github.com/anselld
Bulletproofs and Mimblewimble
 Introduction
 How do Bulletproofs Work?
 Applications for Bulletproofs
 Comparison to other Zero-knowledge Proof Systems
 Interesting Bulletproofs Implementation Snippets
 Conclusions, Observations and Recommendations
 References
 Appendices
 Contributors
Introduction
Bulletproofs
Bulletproofs form part of the family of distinct Zero-knowledge Proof^{def} systems, such as Zero-Knowledge Succinct Non-Interactive ARguments of Knowledge (zk-SNARKs), Succinct Transparent ARguments of Knowledge (STARKs) and Zero Knowledge Prover and Verifier for Boolean Circuits (ZKBoo). Zero-knowledge proofs are designed so that a prover is able to indirectly verify that a statement is true without having to provide any information beyond the verification of the statement, e.g. to prove that a number that solves a cryptographic puzzle and fits the hash value has been found, without having to reveal the Nonce^{def} ([2], [4]).
The Bulletproofs technology is a Non-interactive Zero-knowledge (NIZK) proof protocol for general Arithmetic Circuits^{def} with very short proofs (Arguments of Knowledge Systems^{def}) and without requiring a trusted setup. Bulletproofs rely on the Discrete Logarithm^{def} (DL) assumption and are made non-interactive using the Fiat-Shamir Heuristic^{def}. The name "Bulletproof" originated from a non-technical summary by one of the original authors of the scheme's properties: "Short like a bullet with bulletproof security assumptions" ([1], [29]).
Bulletproofs also implement a Multi-party Computation (MPC) protocol, whereby distributed proofs of multiple provers with secret committed values are aggregated into a single proof before the Fiat-Shamir challenge is calculated and sent to the verifier, thereby minimizing rounds of communication. Secret committed values stay secret ([1], [6]).
The essence of Bulletproofs is its inner-product algorithm, originally presented by Groth [13] and then further refined by Bootle et al. [12]. The latter development provided a proof (argument of knowledge) for two independent (not related) binding^{def} vector Pedersen Commitments^{def} that satisfied the given inner-product relation. Bulletproofs build on these techniques, which yield communication-efficient, zero-knowledge proofs, and offer a further replacement for the inner-product argument that reduces overall communication by a factor of three ([1], [29]).
Mimblewimble
Mimblewimble is a blockchain protocol designed for confidential transactions. The essence is that a Pedersen Commitment to $ 0 $ can be viewed as an Elliptic Curve Digital Signature Algorithm (ECDSA) public key, and that for a valid confidential transaction, the difference between outputs, inputs and transaction fees must be $ 0 $. A prover constructing a confidential transaction can therefore sign the transaction with the difference of the outputs and inputs as the public key. This enables a greatly simplified blockchain in which all spent transactions can be pruned, and new nodes can efficiently validate the entire blockchain without downloading any old and spent transactions. The blockchain consists only of block headers, remaining Unspent Transaction Outputs (UTXOs) with their range proofs, and an unprunable transaction kernel per transaction. Mimblewimble also allows transactions to be aggregated before being committed to the blockchain ([1], [20]).
How do Bulletproofs Work?
The basis of confidential transactions is to replace the input and output amounts with Pedersen Commitments^{def}. It is then publicly verifiable that the transactions balance (the sum of the committed inputs is greater than the sum of the committed outputs, and all outputs are positive), while keeping the specific committed amounts hidden. This makes it a zero-knowledge transaction. The transaction amounts must be encoded as $ integers \mod q $, which can overflow, but are prevented from doing so by making use of range proofs. This is where Bulletproofs come in. The essence of Bulletproofs is its ability to calculate proofs, including range proofs, from inner-products.
The prover must convince the verifier that commitment $ C(x,r) = xH + rG $ contains a number such that $ x \in [0, 2^n - 1] $. If $ \mathbf {a} = (a_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} a_n) \in \{0,1\}^n $ is the vector containing the bits of $ x $, the basic idea is to hide all the bits of the amount in a single vector Pedersen Commitment. It must then be proven that each bit satisfies $ \omega(\omega - 1) = 0 $, i.e. each $ \omega $ is either $ 0 $ or $ 1 $, and that they sum to $ x $. As part of the ensuing protocol, the verifier sends random linear combinations of constraints and challenges $ \in \mathbb{Z_p} $ to the prover. The prover is then able to construct a vectorized inner product relation containing the elements of $ \mathbf {a} $, the constraints and challenges $ \in \mathbb{Z_p} $, and appropriate blinding vectors $ \in \mathbb Z_p^n $.
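The bit-decomposition constraints can be checked directly for a concrete value. The sketch below (plain Python, my own variable names) states what the protocol proves in zero knowledge: each entry of $ \mathbf{a} $ is a bit, and the bits reconstruct $ x $:

```python
# Range check for x in [0, 2^n - 1] via bit decomposition.
n = 8
x = 173
assert 0 <= x < 2**n

a = [(x >> i) & 1 for i in range(n)]   # bit vector of x

# Each entry satisfies a_i * (a_i - 1) == 0, i.e. is 0 or 1...
assert all(ai * (ai - 1) == 0 for ai in a)
# ...and the bits, weighted by powers of two, sum back to x.
assert sum(ai * 2**i for i, ai in enumerate(a)) == x
```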
These inner-product vectors have size $ n $, which would require many expensive exponentiations. The Pedersen Commitment scheme, shown in Figure 1, allows a vector to be cut in half and the two halves to be compressed together, each time calculating a new set of Pedersen Commitment generators. Applying the same trick repeatedly, $ \log _2 n $ times, produces a single value. This is applied to the inner-product vectors: they are reduced interactively, with a logarithmic number of rounds between the prover and verifier, into a single multi-exponentiation of size $ 2n + 2 \log_2(n) + 1 $. This single multi-exponentiation can then be calculated much faster than $ n $ separate ones. All of this is made non-interactive using the Fiat-Shamir Heuristic^{def}.
Figure 1: Vector Pedersen Commitment Cut and Half ([12], [63])
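The arithmetic behind one such folding step can be sketched over integers modulo a prime, standing in for curve scalars (the prime, challenge value and helper names are illustrative; the real protocol commits to the cross terms $ L, R $ on the curve):

```python
# One Bulletproofs inner-product folding step, over integers mod p.
p = 2**61 - 1  # a Mersenne prime, stand-in for the group order

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % p

def fold(v, w_lo, w_hi):
    half = len(v) // 2
    return [(w_lo * x + w_hi * y) % p for x, y in zip(v[:half], v[half:])]

a = [3, 1, 4, 1]        # length n = 4
b = [2, 7, 1, 8]
u = 12345               # verifier challenge
u_inv = pow(u, -1, p)   # modular inverse of the challenge

a2 = fold(a, u, u_inv)  # a' = u * a_lo + u^-1 * a_hi  (length n/2)
b2 = fold(b, u_inv, u)  # b' = u^-1 * b_lo + u * b_hi

# The folded inner product equals <a, b> plus challenge-weighted cross
# terms, which the prover commits to as L and R:
half = len(a) // 2
L = inner(a[:half], b[half:])   # <a_lo, b_hi>
R = inner(a[half:], b[:half])   # <a_hi, b_lo>
expected = (inner(a, b) + pow(u, 2, p) * L + pow(u_inv, 2, p) * R) % p
assert inner(a2, b2) == expected
```

Repeating the step $ \log_2 n $ times reduces the vectors to single scalars, which is why the final check is one multi-exponentiation rather than $ n $ separate ones.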
Bulletproofs rely only on the discrete logarithm assumption. In practice, this means that Bulletproofs are compatible with any secure elliptic curve, making them extremely versatile. The proof sizes are short: only $ [2 \log_2(n) + 9] $ elements are required for range proofs and $ [\log_2(n) + 13] $ elements for arithmetic circuit proofs, with $ n $ denoting the multiplicative complexity. Additionally, the logarithmic proof size enables the prover to aggregate multiple range proofs into a single short proof, as well as to aggregate multiple range proofs from different parties into one proof (refer to Figure 2) ([1], [3], [5]).
Figure 2: Logarithmic Aggregate Bulletproofs Proof Sizes [3]
If all Bitcoin transactions were confidential, the approximately 50 million UTXOs from approximately 22 million transactions would result in roughly 160 GB of range proof data when using current/linear proof systems, assuming the use of 52 bits to represent any value from 1 satoshi up to 21 million bitcoins. Aggregated Bulletproofs would reduce the data storage requirement to less than 17 GB [1]. In Mimblewimble, the blockchain grows with the size of the UTXO set. Using Bulletproofs as a drop-in replacement for range proofs in confidential transactions, the size of the blockchain would only grow with the number of transactions that have unspent outputs. This is much smaller than the size of the UTXO set [1].
The implementation of Bulletproofs in Monero on 18 October 2018 saw the average data size on the blockchain per payment reduce by ~73% and the average USD-based fees reduce by ~94.5% for the period 30 August 2018 to 28 November 2018 (refer to Figure 3).
Figure 3: Monero Payment, Block and Data Size Statistics
Applications for Bulletproofs
Bulletproofs were designed for range proofs. However, they also generalize to arbitrary arithmetic circuits. In practice, this means that Bulletproofs have wide application and can be efficiently used for many types of proofs. Use cases of Bulletproofs are listed in this section, but this list may not be exhaustive, as use cases for Bulletproofs continue to evolve ([1], [2], [3], [5], [6], [59]).

Range proofs
Range proofs are proofs that a secret value, which has been encrypted or committed to, lies in a certain interval. They prevent any numbers from coming near the magnitude of a large prime, say $ 2^{256} $, where adding a small number could cause a wraparound, e.g. a proof that $ x \in [0, 2^{52} - 1] $.

Merkle proofs
Hash preimages in a Merkle tree [7] can be leveraged with Bulletproofs to create zero-knowledge Merkle proofs, enabling efficient proofs of inclusion in massive data sets.

Proof of solvency
Proofs of solvency are a specialized application of Merkle proofs; coins can be added into a giant Merkle tree. It can then be proven that some outputs are in the Merkle tree and that those outputs add up to some amount that the cryptocurrency exchange claims it has control over, without revealing any private information. A Bitcoin exchange with 2 million customers needs approximately 18 GB to prove solvency in a confidential manner using the Provisions protocol [58]. Using Bulletproofs and its variant protocols proposed in [1], this size could be reduced to approximately 62 MB.
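The summing idea can be sketched as a simplified Merkle sum tree, where each node binds its children's hashes and the sum of their amounts (illustrative only; Provisions itself hides individual balances behind commitments rather than plaintext amounts, and the names below are my own):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def parent(left, right):
    # Each node is (hash, amount); the parent binds both children's
    # hashes and the sum of their amounts.
    lh, lv = left
    rh, rv = right
    total = lv + rv
    return (h(lh + rh + total.to_bytes(8, "big")), total)

# Customer balances (arbitrary example values); the exchange publishes
# only the root, and each customer checks their own inclusion path.
leaves = [(h(b"alice" + (30).to_bytes(8, "big")), 30),
          (h(b"bob" + (12).to_bytes(8, "big")), 12),
          (h(b"carol" + (25).to_bytes(8, "big")), 25),
          (h(b"dave" + (8).to_bytes(8, "big")), 8)]

l01 = parent(leaves[0], leaves[1])
l23 = parent(leaves[2], leaves[3])
root = parent(l01, l23)

# The root amount is the total liability the exchange must prove it
# controls on-chain.
assert root[1] == 75
```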

Multisignatures with deterministic nonces
With Bulletproofs, every signatory can prove that their nonce was generated deterministically. A SHA-256 arithmetic circuit could be used in a deterministic way to show that the de-randomized nonces were generated deterministically. This would still work if one signatory were to leave the conversation and rejoin later, with no memory of the interaction with the other parties.

Scriptless scripts
Scriptless scripts are a way to do smart contracts that exploits the linear property of Schnorr signatures, using an older form of zero-knowledge proof called a Sigma protocol. This can all be done with Bulletproofs, which could be extended to allow assets that are functions of other assets, i.e. crypto derivatives.

Smart contracts and crypto derivatives
Traditionally, a new trusted setup is needed for each smart contract when verifying privacy-preserving smart contracts, but with Bulletproofs no trusted setup is needed. Verification time, however, is linear, and it might be too complex to prove every step in a smart contract. The Refereed Delegation Model [33] has been proposed as an efficient protocol to verify smart contracts with public verifiability in the offline stage, by making use of a specific verification circuit linked to a smart contract.
A challenger will input the proof to the verification circuit and get a binary response as to the validity of the proof. The challenger can then complain to the smart contract, claim the proof is invalid and send the proof, together with the output from a chosen gate in the verification circuit, to the smart contract. Interactive binary searches are then used to identify the gate where the proof turns invalid. Hence the smart contract must only check a single gate in the verification procedure to decide whether the challenger or prover was correct. The cost is logarithmic in the number of rounds and amount of communications, with the smart contract only doing one computation. A Bulletproof can be calculated as a short proof for the arbitrary computation in the smart contract, thereby creating privacypreserving smart contracts (refer to Figure 4).

Verifiable shuffles
Alice has some computation and wants to prove to Bob that she has done it correctly and has some secret inputs to this computation. It is possible to create a complex function that either evaluates to 1 if all secret inputs are correct and to 0 otherwise. Such a function can be encoded in an arithmetic circuit and can be implemented with Bulletproofs to prove that the transaction is valid.
When a proof is needed that one list of values $ [x_1, ... , x_n] $ is a permutation of a second list of values $ [y_1, ... , y_n] $, it is called a verifiable shuffle. It has many applications, e.g. voting, blind signatures for untraceable payments, and solvency proofs. Currently, the most efficient shuffle has size $ O(\sqrt{n}) $. Bulletproofs can be used very efficiently to prove verifiable shuffles of size $ O(\log n) $, as shown in Figure 5.
Another potential use case is to verify that two nodes executed the same list of independent instructions $ [x_1, x_4, x_3, x_2] $ and $ [x_1, x_2, x_3, x_4] $, which may be in different order, to arrive at the same next state $ N $. The nodes do not need to share the actual instructions with a verifier, but the verifier can show that they executed the same set without having knowledge of the instructions.

Batch verifications
Batch verifications can be done using one of the Bulletproofs derivative protocols. This has application where the verifier needs to verify multiple (separate) range proofs at once, e.g. a blockchain full node receiving a block of transactions needs to verify all transactions as well as range proofs. This batch verification is then implemented as one large multi-exponentiation; it is applied to reduce the number of expensive exponentiations.
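The core trick behind batch verification, checking many exponentiation claims with one combined computation via random weights, can be sketched in a toy multiplicative group mod a prime (real systems use elliptic-curve multi-exponentiation; the modulus, generator and claim values below are arbitrary):

```python
import secrets

# Toy group: multiplicative group mod the Mersenne prime 2^89 - 1.
p = 2**89 - 1
g = 3

# m separate claims of the form g^x_i == Y_i ...
claims = [(x, pow(g, x, p)) for x in (17, 42, 99, 123456)]

# ...are checked with one combined test: pick random weights r_i and
# verify g^(sum r_i * x_i) == prod Y_i^r_i. A single bad claim fails
# the batch check with overwhelming probability.
weights = [secrets.randbelow(2**64) + 1 for _ in claims]
lhs = pow(g, sum(r * x for r, (x, _) in zip(weights, claims)) % (p - 1), p)
rhs = 1
for r, (_, y) in zip(weights, claims):
    rhs = rhs * pow(y, r, p) % p
assert lhs == rhs
```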
Comparison to other Zeroknowledge Proof Systems
Table 1 ([2], [5]) shows a high-level comparison between Sigma protocols (i.e. interactive public-coin protocols) and the different zero-knowledge proof systems mentioned in this report. (The most desirable outcomes for each measurement are shown in bold italics.) The aim is to have a proof system that is non-interactive, has short proof sizes, has linear prover runtime scalability, has efficient (sublinear) verifier runtime scalability, has no trusted setup, is practical, and is at least DL secure. Bulletproofs are unique in that they are non-interactive, have a short proof size, do not require a trusted setup, have very fast execution times and are practical to implement. These attributes make Bulletproofs extremely desirable for use as range proofs in cryptocurrencies.
| Proof System | Sigma Protocols | zk-SNARK | STARK | ZKBoo | Bulletproofs |
| --- | --- | --- | --- | --- | --- |
| Interactive | yes | ***no*** | ***no*** | ***no*** | ***no*** |
| Proof Size | long | ***short*** | shortish | long | ***short*** |
| Prover Runtime Scalability | ***linear*** | quasi-linear | quasi-linear (big memory requirement) | ***linear*** | ***linear*** |
| Verifier Runtime Scalability | linear | ***efficient*** | ***efficient*** (poly-logarithmically) | ***efficient*** | linear |
| Trusted Setup | ***no*** | required | ***no*** | ***no*** | ***no*** |
| Practical | ***yes*** | ***yes*** | not quite | somewhat | ***yes*** |
| Security Assumptions | ***DL*** | non-falsifiable, but not on par with DL | quantum-secure One-way Function (OWF) [50], which is better than DL | similar to STARKs | ***DL*** |
Interesting Bulletproofs Implementation Snippets
Bulletproofs development is currently still evolving, as can be seen when following the different community development projects. Different implementations of Bulletproofs also offer different levels of efficiency, security and functionality. This section describes some of these aspects.
Current and Past Efforts
The initial prototype Bulletproofs implementation was done by Benedikt Bünz in Java, located at GitHub:bbuenz/BulletProofLib [27].
The initial work that provided cryptographic support for a Mimblewimble implementation was mainly done by Pieter Wuille, Gregory Maxwell and Andrew Poelstra in C, located at GitHub:ElementsProject/secp256k1-zkp [25]. This effort was forked as GitHub:apoelstra/secp256k1-mw [26], with main contributors being Andrew Poelstra, Pieter Wuille and Gregory Maxwell, where Mimblewimble primitives and support for many of the Bulletproof protocols (e.g. zero-knowledge proofs, range proofs and arithmetic circuits) were added. Current effort also involves MuSig [48] support.
The Grin project (an open-source Mimblewimble implementation in Rust) subsequently forked GitHub:ElementsProject/secp256k1-zkp [25] as GitHub:mimblewimble/secp256k1-zkp [30] and has added Rust wrappers to it as mimblewimble/rust-secp256k1-zkp [45] for use in its blockchain. The Beam project (another open-source Mimblewimble implementation, in C++) links directly to GitHub:ElementsProject/secp256k1-zkp [25] as its cryptographic submodule. Refer to Mimblewimble-Grin Blockchain Protocol Overview and Grin vs. BEAM, a Comparison for more information about the Mimblewimble implementations of Grin and Beam.
An independent implementation of Bulletproof range proofs was done for the Monero project (an open-source CryptoNote implementation in C++) by Sarang Noether [49] in Java as the precursor, and by moneromooo-monero [46] in C++ as the final implementation. Its implementation supports single and aggregate range proofs.
Adjoint, Inc. has also done an independent open-source implementation of Bulletproofs in Haskell at GitHub:adjoint-io/bulletproofs [29]. It has an open-source implementation of a private permissioned blockchain with multi-party workflow aimed at the financial industry.
Chain/Interstellar has done another independent open-source implementation of Bulletproofs in Rust from the ground up at GitHub:dalek-cryptography/bulletproofs [28]. It has implemented parallel Edwards formulas [39] using Intel® Advanced Vector Extensions 2 (AVX2) to accelerate curve operations. Initial testing suggests approximately a twofold speedup over the original libsecp256k1-based Bulletproofs implementation.
Security Considerations
Real-world implementation of Elliptic-curve Cryptography (ECC) is largely based on official standards that govern the selection of curves in order to try to make the Elliptic-curve Discrete-logarithm Problem (ECDLP) hard to solve, i.e. finding an ECC user's secret key given the user's public key. Many attacks break real-world ECC without solving the ECDLP, due to problems in ECC security, where implementations can produce incorrect results and also leak secret data. Some implementation considerations also favor efficiency over security. Secure implementations of the standards-based curves are theoretically possible, but highly unlikely ([14], [32]).
Grin, Beam and Adjoint use ECC curve secp256k1 [24] for their Bulletproofs implementation, which fails one out of the four ECDLP security criteria and three out of the four ECC security criteria. Monero and Chain/Interstellar use the ECC curve Curve25519 [38] for their Bulletproofs implementation, which passes all ECDLP and ECC security criteria [32].
Chain/Interstellar goes one step further with its use of Ristretto, a technique for constructing prime-order elliptic curve groups with non-malleable encodings, which allows an existing Curve25519 library to implement a prime-order group with only a thin abstraction layer. This makes it possible for systems using Ed25519 signatures to be safely extended with zero-knowledge protocols, with no additional cryptographic assumptions and minimal code changes [31].
The Monero project has also had security audits done on its Bulletproofs implementation, which resulted in a number of serious and critical bug fixes as well as some other code improvements ([8], [9], [11]).
Wallet Reconstruction and Switch Commitment - Grin
Grin implemented a switch commitment [43] as part of a transaction output, to be ready for the age of quantum adversaries and to serve as a defense mechanism. Its original implementation was discarded (completely removed) because it was complex, used a lot of space in the blockchain and allowed the inclusion of arbitrary data. Grin also employed a complex scheme to embed the transaction amount inside a Bulletproof range proof for wallet reconstruction, which was linked to the original switch commitment hash implementation. The latest implementation improved on all those aspects and uses a much simpler method to recover the transaction amount from a Bulletproof range proof.
Initial Implementation
The initial Grin implementation ([21], [34], [35], [54]) hides two things in the Bulletproof range proof: a transaction amount, for wallet reconstruction; and an optional switch commitment hash, to make the transaction perfectly binding^{def} later on, as opposed to currently being perfectly hiding^{def}. Perfect in this sense means that a quantum adversary (an attacker with infinite computing power) cannot tell what amount has been committed to and is also unable to produce fake commitments. Computational means that no efficient algorithm running in a practical amount of time can reveal the commitment amount or produce fake commitments, except with small probability. The Bulletproof range proofs are stored in the transaction kernel and thus remain persistent in the blockchain.
In this implementation, a Grin transaction output contains the original (Elliptic Curve) Pedersen Commitment^{def} as well as the optional switch commitment hash. The switch commitment hash takes as input the resultant blinding factor $ b $, a third cyclic group random generator $ J $ and a wallet-seed derived random value $ r $. The transaction output has the following form:
$$ (vG + bH \mspace{3mu} , \mspace{3mu} \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r)) \tag{1} $$
where $ \mathrm{H_{B2}} $ is the BLAKE2 hash function [44] and $ \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) $ the switch commitment hash. In order for such an amount to be spent, the owner needs to reveal $ b , r $ so that the verifier can check the opening of $ \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) $ by confirming that it matches the value stored in the switch commitment hash portion of the transaction output. Grin implemented the BLAKE2 hash function, which outperforms all mainstream hash function implementations in terms of hashing speed, with similar security to the latest Secure Hash Algorithm 3 (SHA-3) standard [44].
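The hash-and-reveal check from equation (1) can be sketched with Python's `hashlib.blake2b`, using opaque byte strings as stand-ins for the curve point $ bJ $ (real code derives it via elliptic-curve scalar multiplication; all values here are illustrative):

```python
import hashlib

def switch_commit_hash(bJ: bytes, r: bytes) -> bytes:
    # H_B2(bJ, r): BLAKE2b over the concatenated inputs, 32-byte digest.
    return hashlib.blake2b(bJ + r, digest_size=32).digest()

# Output creation: the switch commitment hash is stored alongside the
# Pedersen Commitment in the transaction output.
bJ = b"\x02" + bytes(32)        # stand-in encoding of the point b*J
r = b"wallet-seed-derived-r"    # wallet-seed derived random value
stored_hash = switch_commit_hash(bJ, r)

# Spending: the owner reveals (bJ, r); the verifier recomputes the hash
# and compares it with the stored value.
assert switch_commit_hash(bJ, r) == stored_hash
# A wrong opening fails the check.
assert switch_commit_hash(bJ, b"other-r") != stored_hash
```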
In the event of quantum adversaries, the owner of an output can choose to stay anonymous and not claim ownership or reveal $ bJ $ and $ r $, whereupon the amount can be moved to the then hopefully forked quantum resistant blockchain.
In the Bulletproof range proof protocol, two 32-byte scalar nonces $ \tau_1 , \alpha $ (not important to know what they are) are generated with a secure random number generator. If the seed for the random number generator is known, the scalar values $ \tau_1 , \alpha $ can be recalculated when needed. Sixty-four (64) bytes' worth of message space (out of 674 bytes' worth of range proof) are made available by embedding a message into those variables using a logic $ \mathrm{XOR} $ gate. This message space is used for the transaction amount, for wallet reconstruction.
To ensure that the transaction amount of the output cannot be spent by only opening the (Elliptic Curve) Pedersen Commitment $ vG + bH $, the switch commitment hash and embedded message are woven into the Bulletproof range proof calculation. The initial part is done by seeding the random number generator used to calculate $ \tau_1 , \alpha $ with the output from a seed function $ \mathrm S $ that uses as input a nonce $ \eta $ (which may be equal to the original blinding factor $ b $), the (Elliptic Curve) Pedersen Commitment^{def} $ P $ and the switch commitment hash
$$ \mathrm S (\eta \mspace{3mu} , \mspace{3mu} P \mspace{3mu} , \mspace{3mu} \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) ) = \eta \mspace{3mu} \Vert \mspace{3mu} \mathrm{H_{S256}} (P \mspace{3mu} \Vert \mspace{3mu} \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) ) \tag{2} $$
where $ \mathrm{H_{S256}} $ is the SHA-256 hash function. The Bulletproof range proof is then calculated with an adapted pair $ \tilde{\alpha} , \tilde{\tau_1} $, using the original $ \tau_1 , \alpha $ and two 32-byte words $ m_{w1} $ and $ m_{w2} $ that make up the 64-byte embedded message, as follows:
$$ \tilde{\alpha} = \mathrm {XOR} ( \alpha \mspace{3mu} , \mspace{3mu} m_{w1}) \mspace{12mu} \mathrm{and} \mspace{12mu} \tilde{\tau_1} = \mathrm {XOR} ( \tau_1 \mspace{3mu} , \mspace{3mu} m_{w2} ) \tag{3} $$
To retrieve the embedded message, the process is simply inverted. Note that the owner of an output needs to keep a record of the blinding factor $ b $, the nonce $ \eta $ if not equal to the blinding factor $ b $, as well as the wallet-seed-derived random value $ r $, to be able to claim such an output.
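As an illustrative sketch (not Grin's actual code), the byte-level mechanics of equations (2) and (3) can be expressed in Python, taking $ \mathrm{H_{B2}} $ to be BLAKE2b and treating the scalars, curve points and message words as opaque byte strings; all function and variable names here are hypothetical:

```python
import hashlib

def switch_commit_hash(bJ: bytes, r: bytes) -> bytes:
    # H_B2(bJ, r): BLAKE2b hash over the point bJ and the random value r
    # (inputs are treated as opaque byte strings in this sketch).
    return hashlib.blake2b(bJ + r).digest()

def seed_function(eta: bytes, P: bytes, swc_hash: bytes) -> bytes:
    # Equation (2): S(eta, P, H_B2(bJ, r)) = eta || H_S256(P || H_B2(bJ, r)),
    # where H_S256 is SHA256 and || denotes concatenation.
    return eta + hashlib.sha256(P + swc_hash).digest()

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def embed_message(alpha: bytes, tau_1: bytes, m_w1: bytes, m_w2: bytes):
    # Equation (3): alpha~ = XOR(alpha, m_w1) and tau_1~ = XOR(tau_1, m_w2).
    return xor_bytes(alpha, m_w1), xor_bytes(tau_1, m_w2)

def extract_message(alpha_t: bytes, tau_1_t: bytes, alpha: bytes, tau_1: bytes):
    # Inverting the XOR with the recalculated alpha, tau_1 recovers the
    # two 32-byte message words.
    return xor_bytes(alpha_t, alpha), xor_bytes(tau_1_t, tau_1)
```

Because $ \tau_1 , \alpha $ can be regenerated from the seeded random number generator, only the seed inputs need to be retained to recover the embedded 64-byte message.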
Improved Implementation
The later Grin implementation ([56], [57]) uses Bulletproof range proof rewinding so that wallets can recognize their own transaction outputs. This negates the requirement to remember the wallet-seed-derived random value $ r $, the nonce $ \eta $ for the seed function $ \mathrm S $, and the use of the adapted pair $ \tilde{\alpha} , \tilde{\tau_1} $ in the Bulletproof range proof calculation.
In this implementation, it is not necessary to remember a hash of the switch commitment as part of the transaction output set and for it to be passed around during a transaction. The switch commitment looks exactly like the original (Elliptic Curve) Pedersen Commitment $ vG + bH $, but in this instance the blinding factor $ b $ is tweaked to be
$$ b = b^\prime + \mathrm{H_{B2}} ( vG + b^\prime H \mspace{3mu} , \mspace{3mu} b^\prime J ) \tag{4} $$
with $ b^\prime $ being the user generated blinding factor. The (Elliptic Curve) Pedersen Commitment then becomes
$$ vG + b^\prime H + \mathrm{H_{B2}} ( vG + b^\prime H \mspace{3mu} , \mspace{3mu} b^\prime J ) H \tag{5} $$
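Only scalar arithmetic modulo the group order is needed for the tweak in equation (4); a minimal sketch follows (the secp256k1 group order and BLAKE2b are used as stand-ins, curve points are passed as opaque byte strings, and all names are hypothetical):

```python
import hashlib

# Group order of secp256k1, used here only as a representative scalar modulus.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def hash_to_scalar(*parts: bytes) -> int:
    # Stand-in for H_B2, reduced to a scalar modulo the group order.
    return int.from_bytes(hashlib.blake2b(b"".join(parts)).digest(), "big") % N

def tweak_blinding_factor(b_prime: int, commit_point: bytes, bJ_point: bytes) -> int:
    # Equation (4): b = b' + H_B2(vG + b'H, b'J), with all arithmetic mod N.
    return (b_prime + hash_to_scalar(commit_point, bJ_point)) % N
```

The commitment in equation (5) is then just the ordinary Pedersen Commitment computed with the tweaked $ b $, so to any observer it is indistinguishable from $ vG + bH $.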
After activation of the switch commitment in the age of quantum adversaries, users can reveal $ ( vG + b^\prime H \mspace{3mu} , \mspace{3mu} b^\prime J ) $, and verifiers can check if it is computed correctly and use it as if it were the ElGamal Commitment^{def} $ ( vG + b H \mspace{3mu} , \mspace{3mu} b J ) $.
GitHub Extracts
The following extracts of discussions depict the initial and improved implementations of the switch commitment, and of retrieving transaction amounts from Bulletproofs for wallet reconstruction.
Bulletproofs #273 [35]
{yeastplume} "The only thing I think we're missing here from being able to use this implementation is the ability to store an amount within the range proof (for wallet reconstruction). From conversations with @apoelstra earlier, I believe it's possible to store 64 bytes worth of 'message' (not nearly as much as the current range proofs)."
{apoelstra} "Ok, I can get you 64 bytes without much trouble (xoring them into `tau_1` and `alpha`, which are easy to extract from `tau_x` and `mu` if you know the original seed used to produce the randomness). I think it's possible to get another 32 bytes into `t`, but that's way more involved since `t` is a big inner-product."
Message hiding in Bulletproofs #721 [21]
"Breaking out from #273, we need the wind a message into a bulletproof similarly to how it could be done in 'Rangeproof
Classic'. This is an absolute requirement as we need to embed an output's SwitchCommitHash
(which is otherwise not
committed to) and embed an output amount for wallet reconstruction. We should be able to embed up to 64 bytes of message
without too much difficulty, and another 32 with more difficulty (see original issue). 64 should be enough for the time
being."
Switch Commits/Bulletproofs - Status #734 [34]
"The prove function takes a value, a secret key (blinding factor in our case), a nonce, optional extra_data and a generator and produces a 674 byte proof. I've also modified it to optionally take a message (more about this in a bit). It creates the Pedersen commitment it works upon internally with these values."
"The verify function takes a proof, a Pedersen commitment and optional extra_data and returns true if proof demonstrates that the value within the Pedersen commitment is in the range [0..2^64] (and the extra_data is correct)."
"Additionally, I've added an unwind function which takes a proof, a Pedersen commitment, optional extra_data and a 32 bit nonce (which needs to be the same as the original nonce used in order to return the same message) and returns the hidden message."
"If you have the correct Pedersen commitment and proof and extra_data, and attempt to unwind a message out using the wrong nonce, the attempt won't fail, you'll get out gibberish or just wildly incorrect values as you parse back the bytes."
"The SwitchCommitHash
is currently a field of an output, and while it is stored in the Txo set and passed around
during a transaction, it is not currently included in the output's hash. It is passed in as the extra_data field
above, meaning that anyone validating the range proof also needs to have the correct switch commit in order to validate
the range proof."
Removed all switch commitment usages, including restore #841 [55]
{ignopeverell} "After some discussion with @antiochp, @yeastplume and @tromp, we decided switch commitments weren't worth the cost of maintaining them and their drawbacks. Removing them."
{ignopeverell} "For reference, switch commitments were found to:
 add a lot of complexity and assumptions
 take additional space for little benefit right now
 allow the inclusion of arbitrary data, potentially for the worst
 provide little to no advantage in case of quantamageddon (as range proofs are still a weakness)"
{apoelstra} "After chatting with @yeastplume on IRC, I realize that we can actually use rangeproof rewinding for wallets to recognize their own outputs, which even avoids the "gap" problem of just scanning for pregenerated keys. With that in mind, it's true that the benefit of switch commitments for MW are not spectacular."
Switch commitment discussion #998 [56]
{antiochp} "Sounds like there is a "zero cost" way of getting switch commitments in as part of the commitment itself, so we would not need to store and maintain a separate "switch commitment" on each output. I saw that switch commitments have been removed for various reasons."
"Let me suggest a variant (idea suggested by Pieter Wuille initially): The switch commitment is (vG + bH), where b = b' + hash(vG + b'H,b'J). (So this "tweaks" the commitment, in a paytocontract / taproot style). Before the switch, this is just used like a normal Pedersen Commitment vG + bH. After the switch, users can reveal (vG + b'H, b'J), and verifiers check if it's computed correctly and use as if it were the ElGamal commitment (vG + bH, bJ)."
Conclusions, Observations and Recommendations

- Bulletproofs are not Bulletproofs are not Bulletproofs. This becomes evident when comparing the functionality, security and performance of the various current Bulletproof implementations, as well as the evolving nature of Bulletproofs.
- The security audit instigated by the Monero project on its Bulletproofs implementation, as well as the resulting findings and corrective actions, prove that every implementation of Bulletproofs carries potential risk. This risk is due to the nature of confidential transactions: transacted values and token owners are not public.
- The growing number of open-source Bulletproof implementations should strengthen the development of a new confidential blockchain protocol such as Tari.
- In the pure implementation of Bulletproof range proofs, a discrete-log attacker (e.g. a bad actor employing a quantum computer) would be able to exploit Bulletproofs to silently inflate any currency that used them. Bulletproofs are perfectly hiding^{def} (i.e. confidential), but only computationally binding^{def} (i.e. not quantum resistant). Unconditional soundness is lost due to the data compression being employed ([1], [5], [6] and [10]).
- Bulletproofs are not only about range proofs. All the different Bulletproof use cases have a potential implementation in a new confidential blockchain protocol such as Tari, in the base layer as well as in the probable second layer.
References
[1] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More", Blockchain Protocol Analysis and Security Engineering 2018 [online]. Available: http://web.stanford.edu/~buenz/pubs/bulletproofs.pdf. Date accessed: 2018‑09‑18.
[2] A. Poelstra, "Bulletproofs" (Transcript), Bitcoin Milan Meetup 2018‑02‑02 [online]. Available: https://diyhpl.us/wiki/transcripts/20180202andrewpoelstraBulletproofs. Date accessed: 2018‑09‑10.
[3] A. Poelstra, "Bulletproofs" (Slides), Bitcoin Milan Meetup 2018‑02‑02 [online]. Available: https://drive.google.com/file/d/18OTVGX7COgvnZ7T0keajhMWwOHOWfKV/view. Date accessed: 2018‑09‑10.
[4] B. Feng, "Decoding zkSNARKs" [online]. Available: https://medium.com/wolverineblockchain/decodingzksnarks85e73886a040. Date accessed: 2018‑09‑17.
[5] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More" (Slides) [online]. Available: https://cyber.stanford.edu/sites/default/files/bpase18.pptx. Date accessed: 2018‑09‑18.
[6] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More (Transcripts)" [online]. Available: http://diyhpl.us/wiki/transcripts/blockchainprotocolanalysissecurityengineering/2018/Bulletproofs. Date accessed: 2018‑09‑18.
[7] "Merkle Root and Merkle Proofs" [online]. Available: https://bitcoin.stackexchange.com/questions/69018/MerklerootandMerkleproofs. Date accessed: 2018‑10‑10.
[8] "Bulletproofs Audit: Fundraising" [online]. Available: https://forum.getmonero.org/22/completedtasks/90007/Bulletproofsauditfundraising. Date accessed: 2018‑10‑23.
[9] "The QuarksLab and Kudelski Security Audits of Monero Bulletproofs are Complete" [online]. Available: https://ostif.org/thequarkslabandkudelskisecurityauditsofmoneroBulletproofsarecomplete. Date accessed: 2018‑10‑23.
[10] A. Poelstra, "Bulletproofs Presentation at Feb 2 Milan Meetup, Reddit" [online]. Available: https://www.reddit.com/r/Bitcoin/comments/7w72pq/Bulletproofs_presentation_at_feb_2_milan_meetup. Date accessed: 2018‑09‑10.
[11] "The OSTIF and QuarksLab Audit of Monero Bulletproofs is Complete – Critical Bug Patched [online]". Available: https://ostif.org/theostifandquarkslabauditofmoneroBulletproofsiscompletecriticalbugpatched. Date accessed: 20181023.
[12] J. Bootle, A. Cerulli1, P. Chaidos, J. Groth and C. Petit, "Efficient Zeroknowledge Arguments for Arithmetic Circuits in the Discrete Log Setting", Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 327357. Springer, 2016 [online]. Available: https://eprint.iacr.org/2016/263.pdf. Date accessed: 2018‑09‑21.
[13] J. Groth, "Linear Algebra with Sublinear Zeroknowledge Arguments" [online]. Available: https://link.springer.com/content/pdf/10.1007%2F9783642033568_12.pdf. Date accessed: 2018‑09‑21.
[14] T. Perrin, "The XEdDSA and VXEdDSA Signature Schemes", 20161020 [online]. Available: https://signal.org/docs/specifications/xeddsa and https://signal.org/docs/specifications/xeddsa/xeddsa.pdf. Date accessed: 2018‑10‑23.
[15] A. Poelstra, A. Back, M. Friedenbach, G. Maxwell and P. Wuille, "Confidential Assets", Blockstream [online]. Available: https://blockstream.com/bitcoin17final41.pdf. Date accessed: 2018‑09‑25.
[16] Wikipedia: "Zeroknowledge Proof" [online]. Available: https://en.wikipedia.org/wiki/Zeroknowledge_proof. Date accessed: 2018‑09‑18.
[17] Wikipedia: "Discrete Logarithm" [online]. Available: https://en.wikipedia.org/wiki/Discrete_logarithm. Date accessed: 2018‑09‑20.
[18] A. Fiat and A. Shamir, "How to Prove Yourself: Practical Solutions to Identification and Signature Problems". CRYPTO 1986: pp. 186‑194 [online]. Available: https://link.springer.com/content/pdf/10.1007%2F3540477217_12.pdf. Date accessed: 2018‑09‑20.
[19] D. Bernhard, O. Pereira and B. Warinschi, "How not to Prove Yourself: Pitfalls of the Fiat-Shamir Heuristic and Applications to Helios" [online]. Available: https://link.springer.com/content/pdf/10.1007%2F9783642349614_38.pdf. Date accessed: 2018‑09‑20.
[20] "Mimblewimble Explained" [online]. Available: https://www.weusecoins.com/mimblewimbleandrewpoelstra/. Date accessed: 2018‑09‑10.
[21] GitHub: "Message Hiding in Bulletproofs #721" [online]. Available: https://github.com/mimblewimble/grin/issues/721. Date accessed: 2018‑09‑10.
[22] Pedersen-commitment: "An Implementation of Pedersen Commitment Schemes" [online]. Available: https://hackage.haskell.org/package/pedersencommitment. Date accessed: 2018‑09‑25.
[23] "Zero Knowledge Proof Standardization - An Open Industry/Academic Initiative" [online]. Available: https://zkproof.org/documents.html. Date accessed: 2018‑09‑26.
[24] "SEC 2: Recommended Elliptic Curve Domain Parameters, Standards for Efficient Cryptography", 20 September 2000 [online]. Available: http://safecurves.cr.yp.to/www.secg.org/SEC2Ver1.0.pdf. Date accessed: 2018‑09‑26.
[25] GitHub: "ElementsProject/secp256k1zkp, Experimental Fork of libsecp256k1 with Support for Pedersen Commitments and Range Proofs" [online]. Available: https://github.com/ElementsProject/secp256k1zkp. Date accessed: 2018‑09‑18.
[26] GitHub: "apoelstra/secp256k1mw, Fork of libsecpzkp d78f12b
to Add Support for Mimblewimble Primitives"
[online]. Available: https://github.com/apoelstra/secp256k1mw/tree/Bulletproofs. Date accessed: 2018‑09‑18.
[27] GitHub: "bbuenz/BulletProofLib, Library for Generating Noninteractive Zero Knowledge Proofs without Trusted Setup (Bulletproofs)" [online]. Available: https://github.com/bbuenz/BulletProofLib. Date accessed: 2018‑09‑18.
[28] GitHub: "dalekcryptography/Bulletproofs, A PureRust Implementation of Bulletproofs using Ristretto" [online]. Available: https://github.com/dalekcryptography/Bulletproofs. Date accessed: 20180918.
[29] GitHub: "adjointio/Bulletproofs, Bulletproofs are Short Noninteractive Zeroknowledge Proofs that Require no Trusted Setup" [online]. Available: https://github.com/adjointio/Bulletproofs. Date accessed: 2018‑09‑10.
[30] GitHub: "mimblewimble/secp256k1zkp, Fork of secp256k1zkp for the Grin/MimbleWimble project" [online]. Available: https://github.com/mimblewimble/secp256k1zkp. Date accessed: 20180918.
[31] "The Ristretto Group" [online]. Available: https://ristretto.group/ristretto.html. Date accessed: 2018‑10‑23.
[32] "SafeCurves: Choosing Safe Curves for Ellipticcurve Cryptography" [online]. Available: http://safecurves.cr.yp.to/. Date accessed: 2018‑10‑23.
[33] R. Canetti, B. Riva and G. N. Rothblum, "Two 1Round Protocols for Delegation of Computation" [online]. Available: https://eprint.iacr.org/2011/518.pdf. Date accessed: 2018‑10‑11.
[34] GitHub: "mimblewimble/grin, Switch Commits / Bulletproofs  Status #734" [online]. Available: https://github.com/mimblewimble/grin/issues/734. Date accessed: 20180910.
[35] GitHub: "mimblewimble/grin, Bulletproofs #273" [online]. Available: https://github.com/mimblewimble/grin/issues/273. Date accessed: 2018‑09‑10.
[36] Wikipedia: "Commitment Scheme" [online]. Available: https://en.wikipedia.org/wiki/Commitment_scheme. Date accessed: 2018‑09‑26.
[37] Cryptography Wikia: "Commitment Scheme" [online]. Available: http://cryptography.wikia.com/wiki/Commitment_scheme. Date accessed: 2018‑09‑26.
[38] D. J. Bernstein, "Curve25519: New Diffie-Hellman Speed Records" [online]. Available: https://cr.yp.to/ecdh/curve2551920060209.pdf. Date accessed: 2018‑09‑26.
[39] H. Hisil, K. Koon-Ho Wong, G. Carter and E. Dawson, "Twisted Edwards Curves Revisited", Information Security Institute, Queensland University of Technology [online]. Available: https://iacr.org/archive/asiacrypt2008/53500329/53500329.pdf. Date accessed: 2018‑09‑26.
[40] A. Sadeghi and M. Steiner, "Assumptions Related to Discrete Logarithms: Why Subtleties Make a Real Difference" [online]. Available: http://www.semper.org/sirene/publ/SaSt_01.dhetal.long.pdf. Date accessed: 2018‑09‑24.
[41] Crypto Wiki: "Cryptographic Nonce" [online]. Available: http://cryptography.wikia.com/wiki/Cryptographic_nonce. Date accessed: 2018‑10‑08.
[42] Wikipedia: "Cryptographic Nonce" [online]. Available: https://en.wikipedia.org/wiki/Cryptographic_nonce. Date accessed: 20181008.
[43] T. Ruffing and G. Malavolta, " Switch Commitments: A Safety Switch for Confidential Transactions", Saarland University [online]. Available: https://people.mmci.unisaarland.de/~truffing/papers/switchcommitments.pdf. Date accessed: 2018‑10‑08.
[44] "BLAKE2 — Fast Secure Hashing" [online]. Available: https://blake2.net. Date accessed: 2018‑10‑08.
[45] GitHub: "mimblewimble/rustsecp256k1zkp" [online]. Available: https://github.com/mimblewimble/rustsecp256k1zkp. Date accessed: 2018‑11‑16.
[46] GitHub: "moneroproject/monero [online]". Available: https://github.com/moneroproject/monero/tree/master/src/ringct. Date accessed: 2018‑11‑16.
[47] Wikipedia: "Arithmetic Circuit Complexity [online]". Available: https://en.wikipedia.org/wiki/Arithmetic_circuit_complexity. Date accessed: 2018‑11‑08.
[48] G. Maxwell, A. Poelstra1, Y. Seurin and P. Wuille, "Simple Schnorr Multisignatures with Applications to Bitcoin", 20 May 2018 [online]. Available: https://eprint.iacr.org/2018/068.pdf. Date accessed: 2018‑07‑24.
[49] GitHub: "bggoodell/researchlab" [online]. Available: https://github.com/bggoodell/researchlab/tree/master/sourcecode/StringCTjava. Date accessed: 2018‑11‑16.
[50] Wikipedia: "Oneway Function" [online]. Available: https://en.wikipedia.org/wiki/Oneway_function. Date accessed: 2018‑11‑27.
[51] P. Sharma, A. K. Gupta and S. Sharma, "Intensified ElGamal Cryptosystem (IEC)", International Journal of Advances in Engineering & Technology, January 2012 [online]. Available: http://www.eijaet.org/media/58I6IJAET0612695.pdf. Date accessed: 2018‑10‑09.
[52] Y. Tsiounis and M. Yung, "On the Security of ElGamal Based Encryption" [online]. Available: https://drive.google.com/file/d/16XGAByoXse5NQl57v_GldJwzmvaQlS94/view. Date accessed: 2018‑10‑09.
[53] Wikipedia: "Decisional Diffie–Hellman Assumption" [online]. Available: https://en.wikipedia.org/wiki/Decisional_Diffie%E2%80%93Hellman_assumption. Date accessed: 2018‑10‑09.
[54] GitHub: "mimblewimble/grin, Bulletproof messages #730" [online]. Available: https://github.com/mimblewimble/grin/pull/730. Date accessed: 2018‑11‑29.
[55] GitHub: "mimblewimble/grin, Removed all switch commitment usages, including restore #841" [online]. Available: https://github.com/mimblewimble/grin/pull/841. Date accessed: 2018‑11‑29.
[56] GitHub: "mimblewimble/grin, switch commitment discussion #998" [online]. Available: https://github.com/mimblewimble/grin/issues/998. Date accessed: 2018‑11‑29.
[57] GitHub: "mimblewimble/grin, [DNM] Switch commitments #2007" [online]. Available: https://github.com/mimblewimble/grin/pull/2007. Date accessed: 2018‑11‑29.
[58] G. G. Dagher, B. Bünz, J. Bonneau, J. Clark and D. Boneh, "Provisions: Privacy-preserving Proofs of Solvency for Bitcoin Exchanges", October 2015 [online]. Available: https://eprint.iacr.org/2015/1008.pdf. Date accessed: 2018‑11‑29.
[59] A. Poelstra, "Bulletproofs: Faster Rangeproofs and Much More", February 2018 [online]. Available: https://blockstream.com/2018/02/21/bulletproofsfasterrangeproofsandmuchmore/. Date accessed: 2018‑11‑30.
[60] B. Franca, "Homomorphic Mini-blockchain Scheme", April 2015 [online]. Available: http://cryptonite.info/files/HMBC.pdf. Date accessed: 2018‑11‑22.
[61] C. Franck and J. Großschädl, "Efficient Implementation of Pedersen Commitments Using Twisted Edwards Curves", University of Luxembourg [online]. Available: http://orbilu.uni.lu/bitstream/10993/33705/1/MSPN2017.pdf. Date accessed: 2018‑11‑22.
[62] A. Gibson, "An Investigation into Confidential Transactions", July 2018 [online]. Available: https://github.com/AdamISZ/ConfidentialTransactionsDoc/blob/master/essayonCT.pdf. Date accessed: 2018‑11‑22.
[63] J. Bootle, "How to do Zero-knowledge from Discrete-Logs in under 7kB", October 2016 [online]. Available: https://www.benthamsgaze.org/2016/10/25/howtodozeroknowledgefromdiscretelogsinunder7kb. Date accessed: 2019‑01‑18.
Appendices
Appendix A: Definition of Terms
Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.
- Arithmetic Circuits: An arithmetic circuit $ C $ over a field $ F $ and variables $ (x_1, ..., x_n) $ is a directed acyclic graph whose vertices are called gates. Arithmetic circuits can alternatively be described as a list of addition and multiplication gates with a collection of linear consistency equations relating the inputs and outputs of the gates. The size of an arithmetic circuit is the number of gates in it, and its depth is the length of the longest directed path. Upper bounding the complexity of a polynomial $ f $ is to find any arithmetic circuit that can calculate $ f $, whereas lower bounding is to find the smallest arithmetic circuit that can calculate $ f $. A simple example is an arithmetic circuit with size six and depth two that calculates a polynomial ([29], [47]).
- Argument of Knowledge System: Proof systems with computational soundness, like Bulletproofs, are sometimes called argument systems. The terms proof and argument of knowledge have exactly the same meaning and can be used interchangeably [29].
- Commitment Scheme: A commitment scheme in a Zero-knowledge Proof^{def} is a cryptographic primitive that allows a prover to commit to only a single chosen value/statement from a finite set, without the ability to change it later (binding property), while keeping it hidden from a verifier (hiding property). Both binding and hiding properties are then further classified in increasing levels of security to be computational, statistical or perfect. No commitment scheme can be perfectly binding and perfectly hiding at the same time ([36], [37]).
- Discrete Logarithm/Discrete Logarithm Problem (DLP): In the mathematics of real numbers, the logarithm $ \log_b a $ is a number $ x $ such that $ b^x=a $, for given numbers $ a $ and $ b $. Analogously, in any group $ G $, powers $ b^k $ can be defined for all integers $ k $, and the discrete logarithm $ \log_b a $ is an integer $ k $ such that $ b^k=a $. Algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen cyclic finite groups and cyclic subgroups of elliptic curves over finite fields has no efficient solution ([17], [40]).
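For intuition, in a tiny cyclic group the discrete logarithm can be found by brute force; the sketch below (illustrative only) is feasible precisely because the group is small, which is what secure parameter choices prevent:

```python
def discrete_log(b: int, a: int, p: int) -> int:
    # Exhaustively search for k such that b^k = a (mod p).  This takes
    # O(group order) steps, so it is only practical for toy groups.
    x = 1
    for k in range(p):
        if x == a:
            return k
        x = (x * b) % p
    raise ValueError("no solution: a is not in the subgroup generated by b")
```

For example, with $ b = 5 $ and $ p = 23 $, recovering $ k $ from $ b^k \bmod p $ is instant; for the group sizes used in practice the same search would take longer than the age of the universe.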
- Elliptic Curve Pedersen Commitment: An efficient implementation of the Pedersen Commitment ([15], [22]) will use secure Elliptic Curve Cryptography (ECC), which is based on the algebraic structure of elliptic curves over finite (prime) fields. Elliptic curve points are used as basic mathematical objects, instead of numbers. Note that, traditionally, in elliptic curve arithmetic lower-case letters are used for ordinary numbers (integers) and upper-case letters for curve points ([60], [61], [62]).

The generalized Elliptic Curve Pedersen Commitment definition follows (refer to Appendix B: Notation Used):
  - Let $ \mathbb F_p $ be the group of elliptic curve points, where $ p $ is a large prime.
  - Let $ G \in \mathbb F_p $ be a random generator point (base point) and let $ H \in \mathbb F_p $ be specially chosen so that the value $ x_H $ satisfying $ H = x_H G $ cannot be found, except if the Elliptic Curve DLP (ECDLP) is solved.
  - Let $ r $ (the blinding factor) be a random value and element of $ \mathbb Z_p $.
  - The commitment to value $ x \in \mathbb Z_p $ is then determined by calculating $ C(x,r) = rH + xG $, which is called the Elliptic Curve Pedersen Commitment.

Elliptic curve point addition is analogous to multiplication in the originally defined Pedersen Commitment. Thus $ g^x $, the number $ g $ multiplied by itself $ x $ times, is analogous to $ xG $, the elliptic curve point $ G $ added to itself $ x $ times. In this context $ xG $ is also a point in $ \mathbb F_p $.

In the Elliptic Curve context $ C(x,r) = rH + xG $ is then analogous to $ C(x,r) = h^r g^x $.

The number $ H $ is what is known as a Nothing Up My Sleeve (NUMS) number. With secp256k1, the value of $ H $ is the SHA256 hash of a simple encoding of the prespecified generator point $ G $.

Similar to Pedersen Commitments, Elliptic Curve Pedersen Commitments are also additively homomorphic, such that for messages $ x $, $ x_0 $ and $ x_1 $, blinding factors $ r $, $ r_0 $ and $ r_1 $ and scalar $ k $ the following relation holds: $ C(x_0,r_0) + C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $ and $ C(k \cdot x, k \cdot r) = k \cdot C(x, r) $.

In secure implementations of ECC, it is as hard to guess $ x $ from $ xG $ as it is to guess $ x $ from $g^x $. This is called the Elliptic Curve DLP (ECDLP).

Practical implementations usually consist of three algorithms:
  - `Setup()` to set up the commitment parameters;
  - `Commit()` to commit to the message using the commitment parameters; and
  - `Open()` to open and verify the commitment.
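A toy sketch of these three algorithms, over the multiplicative subgroup of squares modulo a small prime (hopelessly insecure parameters, purely to illustrate the structure and the additive homomorphism; all names are illustrative):

```python
def setup():
    # Toy commitment parameters: the squares mod p form a subgroup of
    # prime order q = (p - 1) / 2.  Real implementations use an elliptic
    # curve group and a NUMS second generator instead.
    p = 1019            # small prime; (p - 1) / 2 = 509 is also prime
    q = 509             # order of the subgroup of squares mod p
    g, h = 4, 9         # 2^2 and 3^2: generators of the order-q subgroup
    return p, q, g, h

def commit(params, x, r):
    # C(x, r) = g^x * h^r mod p, with message x and blinding factor r in Z_q.
    p, _q, g, h = params
    return (pow(g, x, p) * pow(h, r, p)) % p

def open_commitment(params, c, x, r):
    # Opening reveals (x, r); verification recomputes the commitment.
    return commit(params, x, r) == c
```

In this multiplicative notation the homomorphism reads $ C(x_0,r_0) \cdot C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $, which corresponds directly to the additive elliptic curve form above.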

- ElGamal Commitment/Encryption: An ElGamal commitment is a Pedersen Commitment ([15], [22]) with an additional commitment $ g^r $ to the randomness used. The ElGamal encryption scheme is based on the Decisional Diffie-Hellman (DDH) assumption and the difficulty of the DLP for finite fields. The DDH assumption states that it is infeasible for a Probabilistic Polynomial-time (PPT) adversary to solve the DDH problem. Note: The ElGamal encryption scheme should not be confused with the ElGamal signature scheme ([1], [51], [52], [53]).
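In the same toy multiplicative group as above, an ElGamal commitment is simply a Pedersen Commitment paired with $ g^r $; a minimal, insecure sketch with hypothetical names:

```python
# Tiny, insecure parameters: the squares mod P form a subgroup of prime order Q.
P, Q, G, H = 1019, 509, 4, 9

def elgamal_commit(x: int, r: int):
    # (g^x * h^r mod p, g^r mod p): a Pedersen commitment plus a commitment
    # to the randomness r.  The extra g^r uniquely determines r (and hence x),
    # which makes the scheme perfectly binding but only computationally hiding.
    return (pow(G, x, P) * pow(H, r, P)) % P, pow(G, r, P)

def elgamal_open(com, x: int, r: int) -> bool:
    # Opening reveals (x, r) and recomputes both components.
    return com == elgamal_commit(x, r)
```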
- Fiat‑Shamir Heuristic/Transformation: The Fiat‑Shamir heuristic is a technique in cryptography to convert an interactive public-coin protocol (Sigma protocol) between a prover and a verifier into a one-message (non-interactive) protocol using a cryptographic hash function ([18], [19]).

The prover will use a `Prove()` algorithm to calculate a commitment $ A $ with a statement $ Y $ that is shared with the verifier and a secret witness value $ w $ as inputs. The commitment $ A $ is then hashed to obtain the challenge $ c $, which is further processed with the `Prove()` algorithm to calculate the response $ f $. The single message sent to the verifier then contains the challenge $ c $ and response $ f $.

The verifier is then able to compute the commitment $ A $ from the shared statement $ Y $, challenge $ c $ and response $ f $. The verifier will then use a `Verify()` algorithm to verify the combination of shared statement $ Y $, commitment $ A $, challenge $ c $ and response $ f $.
A weak Fiat‑Shamir transformation can be turned into a strong Fiat‑Shamir transformation if the hashing function is applied to the commitment $ A $ and shared statement $ Y $ to obtain the challenge $ c $ as opposed to only the commitment $ A $.
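A minimal sketch of this transformation, applied to a Schnorr-style proof of knowledge of a discrete log in a toy group (insecure parameters, illustrative names; note that the strong variant hashes both the commitment $ A $ and the statement $ Y $):

```python
import hashlib
import secrets

P, Q, G = 1019, 509, 4   # toy subgroup of prime order Q inside Z_P^*

def challenge(A: int, Y: int) -> int:
    # Strong Fiat-Shamir: hash the commitment A together with the statement Y.
    return int.from_bytes(hashlib.sha256(f"{A}|{Y}".encode()).digest(), "big") % Q

def prove(w: int):
    # Prove knowledge of the witness w such that Y = G^w, without revealing w.
    Y = pow(G, w, P)
    k = secrets.randbelow(Q)      # prover's secret nonce
    A = pow(G, k, P)              # commitment
    c = challenge(A, Y)           # non-interactive challenge
    f = (k + c * w) % Q           # response
    return Y, c, f

def verify(Y: int, c: int, f: int) -> bool:
    # Recompute A = G^f * Y^(-c) and check that it reproduces the challenge.
    A = (pow(G, f, P) * pow(Y, Q - c, P)) % P
    return c == challenge(A, Y)
```

The verifier never interacts with the prover: the hash function plays the role of the verifier's random challenge.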

- Nonce: In security engineering, nonce is an abbreviation of number used once. In cryptography, a nonce is an arbitrary number that can be used just once. It is often a random or pseudo-random number issued in an authentication protocol to ensure that old communications cannot be reused in replay attacks ([41], [42]).
- Zero-knowledge Proof/Protocol: In cryptography, a zero-knowledge proof/protocol is a method by which one party (the prover) can convince another party (the verifier) that a statement $ Y $ is true, without conveying any information apart from the fact that the prover knows the value of $ Y $. The proof system must be complete, sound and zero-knowledge ([16], [23]):

  - Complete: if the statement is true, and both prover and verifier follow the protocol, the verifier will accept.
  - Sound: if the statement is false, and the verifier follows the protocol, the verifier will not be convinced.
  - Zero-knowledge: if the statement is true, and the prover follows the protocol, the verifier will not learn any confidential information from the interaction with the prover, apart from the fact that the statement is true.

Appendix B: Notation Used
The general notation of mathematical expressions when specifically referenced is given here, based on [1].
- Let $ p $ and $ q $ be large prime numbers.
- Let $ \mathbb G $ and $ \mathbb Q $ denote cyclic groups of prime order $ p $ and $ q $ respectively.
- Let $ \mathbb Z_p $ and $ \mathbb Z_q $ denote the ring of integers $ modulo \mspace{4mu} p $ and $ modulo \mspace{4mu} q $ respectively.
- Let generators of $ \mathbb G $ be denoted by $ g, h, v, u \in \mathbb G $. In other words, there exists a number $ g \in \mathbb G $ such that $ \mathbb G = \lbrace 1 \mspace{3mu} , \mspace{3mu} g \mspace{3mu} , \mspace{3mu} g^2 \mspace{3mu} , \mspace{3mu} g^3 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g^{p-1} \rbrace \equiv \mathbb Z_p $. Note that not every element of $ \mathbb Z_p $ is a generator of $ \mathbb G $.
- Let $ \mathbb Z_p^* $ denote $ \mathbb Z_p \setminus \lbrace 0 \rbrace $ and $ \mathbb Z_q^* $ denote $ \mathbb Z_q \setminus \lbrace 0 \rbrace $, that is, all invertible elements of $ \mathbb Z_p $ and $ \mathbb Z_q $ respectively. This excludes the element $ 0 $, which is not invertible.
Contributors
- https://github.com/hansieodendaal
- https://github.com/CjS77
- https://github.com/SWvheerden
- https://github.com/philiprza
- https://github.com/neonknight64
- https://github.com/anselld
Building on Bulletproofs
Summary
In this post, Cathie explains the basics of the Bulletproofs zero-knowledge proof protocol. She then goes further, to explain specific applications built on top of Bulletproofs, which can be used for range proofs or constraint system proofs. The constraint system proof protocol was used to build a confidential assets scheme called Cloak and a confidential smart contract language called ZkVM.
The Bulletproof Protocols
- Introduction
- Preliminaries
- Bulletproof Protocols
  - Inner-product Argument (Protocol 1)
  - How Proof System for Protocol 1 Works, Shrinking by Recursion
  - Inner-product Verification through Multi-exponentiation (Protocol 2)
  - Range Proof Protocol with Logarithmic Size
  - Zero-knowledge Proof for Arithmetic Circuits
  - Optimized Verifier using Multi-exponentiation and Batch Verification
  - Evolving Bulletproof Protocols
- Conclusions, Observations and Recommendations
- References
- Appendices
- Contributors
Introduction
The overview of Bulletproofs given in Bulletproofs and Mimblewimble was largely based on the original work done by Bünz et al. [1]. They documented a number of different Bulletproof protocols, but not all of them in an obvious manner. This report summarizes and explains the different Bulletproof protocols as simply as possible. It also simplifies the logic and explains the underlying mathematical concepts in more detail where prior knowledge was assumed. The report concludes with a discussion of an improved Bulletproof zero-knowledge proof protocol provided by some community members, following an evolutionary approach.
Preliminaries
Notation Used
This section gives the general notation of mathematical expressions when specifically referenced, based on [1]. This notation provides important preknowledge for the remainder of the report.
 Let $ p $ and $ q $ be large prime numbers.
 Let $ \mathbb G $ and $ \mathbb Q $ denote cyclic groups of prime order $ p $ and $ q $ respectively.
 let $ \mathbb Z_p $ and $ \mathbb Z_q $ denote the ring of integers $ modulo \mspace{4mu} p $ and $ modulo \mspace{4mu} q $ respectively.
 Let generators of $ \mathbb G $ be denoted by $ g, h, v, u \in \mathbb G $, i.e. there exists a number $ g \in \mathbb G $ such that $ \mathbb G = \lbrace 1 \mspace{3mu} , \mspace{3mu} g \mspace{3mu} , \mspace{3mu} g^2 \mspace{3mu} , \mspace{3mu} g^3 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g^{p1} \rbrace \equiv \mathbb Z_p $. Note that not every element of $ \mathbb Z_p $ is a generator of $ \mathbb G $.
 Let $ \mathbb Z_p^* $ denote $ \mathbb Z_p \setminus \lbrace 0 \rbrace $ and $ \mathbb Z_q^* $ denote $ \mathbb Z_q \setminus \lbrace 0 \rbrace $, i.e. all invertible elements of $ \mathbb Z_p $ and $ \mathbb Z_q $ respectively. This excludes the element $ 0 $, which is not invertible.
 Let $ \mathbb G^n $ and $ \mathbb Z^n_p $ be vector spaces of dimension $ n $ over $ \mathbb G $ and $ \mathbb Z_p $ respectively.
 Let $ h^r \mathbf g^\mathbf x = h^r \prod_i g_i^{x_i} \in \mathbb G $ be the vector Pedersen Commitment^{def} with $ \mathbf {g} = (g_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g_n) \in \mathbb G^n $ and $ \mathbf {x} = (x_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} x_n) \in \mathbb Z_p^n $.
 Let $ \mathbf {a} \in \mathbb F^n $ be a vector with elements $ a_1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_n \in \mathbb F $.
 Let $ \langle \mathbf {a}, \mathbf {b} \rangle = \sum _{i=1}^n {a_i \cdot b_i} $ denote the inner-product between two vectors $ \mathbf {a}, \mathbf {b} \in \mathbb F^n $.
 Let $ \mathbf {a} \circ \mathbf {b} = (a_1 \cdot b_1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_n \cdot b_n) \in \mathbb F^n $ denote the entry-wise multiplication of two vectors $ \mathbf {a}, \mathbf {b} \in \mathbb F^n $.
 Let $ \mathbf {A} \circ \mathbf {B} = (a_{11} \cdot b_{11} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{1m} \cdot b_{1m} \mspace{6mu} ; \mspace{6mu} . . . \mspace{6mu} ; \mspace{6mu} a_{n1} \cdot b_{n1} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{nm} \cdot b_{nm} ) $ denote the entry-wise multiplication of two matrices, also known as the Hadamard Product^{def}.
 Let $ \mathbf {a} \parallel \mathbf {b} $ denote the concatenation of two vectors; if $ \mathbf {a} \in \mathbb Z_p^n $ and $ \mathbf {b} \in \mathbb Z_p^m $ then $ \mathbf {a} \parallel \mathbf {b} \in \mathbb Z_p^{n+m} $.
 Let $ p(X) = \sum _{i=0}^d { \mathbf {p_i} \cdot X^i} \in \mathbb Z_p^n [X] $ be a vector polynomial where each coefficient $ \mathbf {p_i} $ is a vector in $ \mathbb Z_p^n $.
 Let $ \langle l(X),r(X) \rangle = \sum _{i=0}^d { \sum _{j=0}^i { \langle l_i,r_j \rangle \cdot X^{i+j}}} \in \mathbb Z_p [X] $ denote the inner-product between two vector polynomials $ l(X),r(X) $.
 Let $ t(X)=\langle l(X),r(X) \rangle $, then the innerproduct is defined such that $ t(x)=\langle l(x),r(x) \rangle $ holds for all $ x \in \mathbb{Z_p} $.
 Let $ C=\mathbf{g}^{\mathbf{a}} = \prod _{i=1}^n g_i^{a_i} \in \mathbb{G} $ be a binding (but not hiding) commitment to the vector $ \mathbf {a} \in \mathbb Z_p^n $ where $ \mathbf {g} = (g_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g_n) \in \mathbb G^n $. Given vector $ \mathbf {b} \in \mathbb Z_p^n $ with nonzero entries, $ \mathbf {a} \circ \mathbf {b} $ is treated as a new commitment to $ C $. For this let $ g_i^\backprime =g_i^{(b_i^{-1})} $ such that $ C= \prod _{i=1}^n (g_i^\backprime)^{a_i \cdot b_i} $. The binding property of this new commitment is inherited from the old commitment.
 Let $ \mathbf a _{[:l]} = ( a_1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_l ) \in \mathbb F ^ l$ and $ \mathbf a _{[l:]} = ( a_{l+1} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_n ) \in \mathbb F ^ {n-l} $ be slices of vectors for $ 0 \le l \le n $ (using Python notation).
 Let $ \mathbf {k}^n $ denote the vector containing the first $ n $ powers of $ k \in \mathbb Z_p^* $ such that $ \mathbf {k}^n = (1,k,k^2, \mspace{3mu} ... \mspace{3mu} ,k^{n-1}) \in (\mathbb Z_p^*)^n $.
 Let $ \mathcal{P} $ and $ \mathcal{V} $ denote the prover and verifier respectively.
 Let $ \mathcal{P_{IP}} $ and $ \mathcal{V_{IP}} $ denote the prover and verifier in relation to inner-product calculations respectively.
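As a quick illustration of this notation, the vector operations can be sketched in plain Python (a toy example over $ \mathbb Z_{13} $; the modulus and vectors are arbitrary choices, not values from the text):

```python
# Toy illustration of the vector notation above (arbitrary values, p = 13)
p = 13
a, b = [2, 3, 5], [7, 1, 4]

inner = sum(x * y for x, y in zip(a, b)) % p      # <a, b>, the inner-product
hadamard = [(x * y) % p for x, y in zip(a, b)]    # a o b, entry-wise product
concat = a + b                                    # a || b, concatenation
k = 2
powers = [pow(k, i, p) for i in range(3)]         # k^n = (1, k, k^2)

assert inner == 11
assert hadamard == [1, 3, 7]
assert concat == [2, 3, 5, 7, 1, 4]
assert powers == [1, 2, 4]
```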
Pedersen Commitments and Elliptic Curve Pedersen Commitments
The basis of confidential transactions is the Pedersen Commitment scheme defined in [15].
A commitment scheme in a Zero-knowledge Proof^{def} is a cryptographic primitive that allows a prover $ \mathcal{P} $ to commit to only a single chosen value/statement from a finite set without the ability to change it later (binding property), while keeping it hidden from a verifier $ \mathcal{V} $ (hiding property). Both binding and hiding properties are then further classified in increasing levels of security to be computational, statistical or perfect:
 Computational means that no efficient algorithm running in a practical amount of time can reveal the commitment amount or produce fake commitments, except with a small probability.
 Statistical means that the probability of an adversary doing the same in a finite amount of time is negligible.
 Perfect means that a quantum adversary (an attacker with infinite computing power) cannot tell what amount has been committed to, and is unable to produce fake commitments.
No commitment scheme can simultaneously be perfectly binding and perfectly hiding ([12], [13]).
Two variations of the Pedersen Commitment scheme share the same security attributes:
 Pedersen Commitment: The Pedersen Commitment is a system for making a blinded, non-interactive commitment to a value ([1], [3], [8], [14], [15]).

The generalized Pedersen Commitment definition follows (refer to Notation Used):
 Let $ q $ be a large prime and $ p $ be a large safe prime such that $ p = 2q + 1 $.
 Let $ h $ be a random generator of cyclic group $ \mathbb G $ such that $ h $ is an element of $ \mathbb Z_q^* $.
 Let $ a $ be a random value and element of $ \mathbb Z_q^* $ and calculate $ g $ such that $ g = h^a $.
 Let $ r $ (the blinding factor) be a random value and element of $ \mathbb Z_p^* $.
 The commitment to value $ x $ is then determined by calculating $ C(x,r) = h^r g^x $, which is called the Pedersen Commitment.
 The generator $ h $ and resulting number $ g $ are known as the commitment bases and should be shared along with $ C(x,r) $, with whoever wishes to open the value.

Pedersen Commitments are also additively homomorphic, such that for messages $ x_0 $ and $ x_1 $ and blinding factors $ r_0 $ and $ r_1 $, we have $ C(x_0,r_0) \cdot C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $.
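This homomorphic property can be checked numerically. The following sketch uses deliberately tiny, insecure toy parameters ($ q = 11 $, $ p = 2q + 1 = 23 $, $ h = 2 $ generating the order-$ q $ subgroup, and a hypothetical secret exponent $ a = 3 $) purely to illustrate the relation:

```python
# Toy group: the order-q subgroup of Z_p*, with p = 23, q = 11 and generator h = 2
p, q, h = 23, 11, 2
g = pow(h, 3, p)      # g = h^a for a hypothetical secret a = 3

def C(x, r):
    """Pedersen commitment C(x, r) = h^r g^x mod p."""
    return pow(h, r, p) * pow(g, x, p) % p

x0, r0, x1, r1 = 4, 5, 7, 2
# Multiplying two commitments commits to the sums of the values and blinding factors:
assert C(x0, r0) * C(x1, r1) % p == C(x0 + x1, r0 + r1)
```

Because the commitment is a product of powers, multiplying commitments adds the committed values and blinding factors in the exponents, which is exactly the property confidential transactions use to balance inputs against outputs.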

 Elliptic Curve Pedersen Commitment: An efficient implementation of the Pedersen Commitment^{def} will use secure Elliptic Curve Cryptography (ECC), which is based on the algebraic structure of elliptic curves over finite (prime) fields. Elliptic curve points are used as basic mathematical objects, instead of numbers. Note that traditionally in elliptic curve arithmetic, lowercase letters are used for ordinary numbers (integers) and uppercase letters for curve points ([26], [27], [28]).

The generalized Elliptic Curve Pedersen Commitment definition follows (refer to Notation Used):
 Let $ \mathbb F_p $ be the group of elliptic curve points, where $ p $ is a large prime.
 Let $ G \in \mathbb F_p $ be a random generator point (base point) and let $ H \in \mathbb F_p $ be specially chosen so that the value $ x_H $ satisfying $ H = x_H G $ cannot be found, except if the Elliptic Curve DLP (ECDLP) is solved.
 Let $ r $ (the blinding factor) be a random value and element of $ \mathbb Z_p $.
 The commitment to value $ x \in \mathbb Z_p $ is then determined by calculating $ C(x,r) = rH + xG $, which is called the Elliptic Curve Pedersen Commitment.

Elliptic curve point addition is analogous to multiplication in the originally defined Pedersen Commitment^{def}. Thus $ g^x $, the number $ g $ multiplied by itself $ x $ times, is analogous to $ xG $, the elliptic curve point $ G $ added to itself $ x $ times. In this context, $ xG $ is also a point in $ \mathbb F_p $.

In the Elliptic Curve context, $ C(x,r) = rH + xG $ is then analogous to $ C(x,r) = h^r g^x $.

The point $ H $ is what is known as a Nothing Up My Sleeve (NUMS) number. With secp256k1, the value of $ H $ is the SHA256 hash of a simple encoding of the pre-specified generator point $ G $.

Similar to Pedersen Commitments, the Elliptic Curve Pedersen Commitments are also additively homomorphic, such that for messages $ x $, $ x_0 $ and $ x_1 $, blinding factors $ r $, $ r_0 $ and $ r_1 $ and scalar $ k $, the following relation holds: $ C(x_0,r_0) + C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $ and $ C(k \cdot x, k \cdot r) = k \cdot C(x, r) $.

In secure implementations of ECC, it is as hard to guess $ x $ from $ xG $ as it is to guess $ x $ from $g^x $. This is called the Elliptic Curve DLP (ECDLP).

Practical implementations usually consist of three algorithms:
 Setup() to set up the commitment parameters;
 Commit() to commit to the message using the commitment parameters; and
 Open() to open and verify the commitment.
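The three algorithms above can be sketched as follows. This is a minimal toy sketch, again using insecure parameters ($ p = 23 $, $ q = 11 $, $ h = 2 $), not a real implementation:

```python
import secrets

def setup():
    """Toy parameters: q = 11, p = 2q + 1 = 23; h = 2 generates the order-q subgroup."""
    p, q, h = 23, 11, 2
    a = secrets.randbelow(q - 1) + 1      # random a in Z_q*
    g = pow(h, a, p)                      # g = h^a
    return p, q, g, h

def commit(params, x):
    """Commit to x with a fresh random blinding factor r; returns (C, r)."""
    p, q, g, h = params
    r = secrets.randbelow(q - 1) + 1
    return pow(h, r, p) * pow(g, x, p) % p, r

def open_commitment(params, C, x, r):
    """Verify that (x, r) opens the commitment C."""
    p, q, g, h = params
    return C == pow(h, r, p) * pow(g, x, p) % p

params = setup()
C, r = commit(params, 9)
assert open_commitment(params, C, 9, r)       # the correct opening verifies
assert not open_commitment(params, C, 10, r)  # a different value does not
```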

Security Aspects of (Elliptic Curve) Pedersen Commitments
By virtue of their definition, Pedersen Commitments are only computationally binding but perfectly hiding. A simplified explanation follows.
If Alice wants to commit to a value $ x $ that will be sent to Bob, who will at a later stage want Alice to prove that the value $ x $ is the value that was used to generate the commitment $ C $, then Alice will select random blinding factor $ r $, calculate $ C(x,r) = h^r g^x $ and send that to Bob. Later, Alice can reveal $ x $ and $ r $, and Bob can do the calculation and see that the commitment $ C $ it produces is the same as the one Alice sent earlier.
Suppose Bob has managed to build a computer that can solve the DLP and, given a commitment, could find alternative solutions to $ C(x,r) = h^r g^x $ in a reasonable time, i.e. $ C(x,r) = h^{r^\prime} g^{x^\prime} $. Even then, although Bob can find values for $ r^\prime $ and $ x^\prime $ that produce $ C $, he cannot know if those are the specific $ x $ and $ r $ that Alice chose, because many such pairs produce the same $ C $. Pedersen Commitments are thus perfectly hiding.
Although the Pedersen Commitment is perfectly hiding, it does rely on the fact that Alice has NOT cracked the DLP to be able to calculate other pairs of input values that open the commitment to another value when challenged. The Pedersen Commitment is thus only computationally binding.
Bulletproof Protocols
Bulletproof protocols have multiple applications, most of which are discussed in Bulletproofs and Mimblewimble. The following list links these use cases to the different Bulletproof protocols:

Bulletproofs provide short, non-interactive, zero-knowledge proofs without a trusted setup. The small size of Bulletproofs reduces overall cost. This has applicability in distributed systems where proofs are transmitted over a network or stored for a long time.

Range Proof Protocol with Logarithmic Size provides short, single and aggregatable range proofs, and can be used with multiple blockchain protocols including Mimblewimble. These can be applied to normal transactions or to some smart contract, where committed values need to be proven to be in a specific range without revealing the values.

The protocol presented in Aggregating Logarithmic Proofs can be used for Merkle proofs and proof of solvency without revealing any additional information.

Range proofs can be compiled for a multiparty, single, joined confidential transaction for their known outputs using MPC Protocol for Bulletproofs. Users do not have to reveal their secret transaction values.

Verifiable shuffles and multisignatures with deterministic nonces can be implemented with Protocol 3.

Bulletproofs present an efficient and short zero-knowledge proof for arbitrary Arithmetic Circuits^{def} using Zero-knowledge Proof for Arithmetic Circuits.

Various Bulletproof protocols can be applied to scriptless scripts, to make them non-interactive and not have to use Sigma protocols.

Batch verifications can be done using Optimized Verifier using Multi-exponentiation and Batch Verification, e.g. a blockchain full node receiving a block of transactions needs to verify all transactions as well as range proofs.
A detailed mathematical discussion of the different Bulletproof protocols follows. Protocols 1, 2 and 3 are numbered consistently with [1]. Refer to Notation Used.
Note: Full mathematical definitions and terms not defined are available in [1].
Inner-product Argument (Protocol 1)
The first and most important building block of Bulletproofs is its efficient algorithm for calculating an inner-product argument for two independent (not related) binding vector Pedersen Commitments^{def}. Protocol 1 is an argument of knowledge that the prover $ \mathcal{P} $ knows the openings of two binding Pedersen vector commitments that satisfy a given inner-product relation. Let inputs to the inner-product argument be independent generators $ \mathbf g, \mathbf h \in \mathbb G^n $, a scalar $ c \in \mathbb Z_p $ and $ P \in \mathbb G $. The argument lets the prover $ \mathcal{P} $ convince a verifier $ \mathcal{V} $ that the prover $ \mathcal{P} $ knows two vectors $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ such that $$ P =\mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \mspace{30mu} \mathrm{and} \mspace{30mu} c = \langle \mathbf {a} \mspace{3mu}, \mspace{3mu} \mathbf {b} \rangle $$
$ P $ is referred to as the binding vector commitment to $ \mathbf a, \mathbf b $. The inner product argument is an efficient proof system for the following relation:
$$ { (\mathbf {g},\mathbf {h} \in \mathbb G^n , \mspace{12mu} P \in \mathbb G , \mspace{12mu} c \in \mathbb Z_p ; \mspace{12mu} \mathbf {a}, \mathbf {b} \in \mathbb Z^n_p ) \mspace{3mu} : \mspace{15mu} P = \mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \mspace{3mu} \wedge \mspace{3mu} c = \langle \mathbf {a} \mspace{3mu}, \mspace{3mu} \mathbf {b} \rangle } \tag{1} $$
Relation (1) requires sending $ 2n $ elements to the verifier $ \mathcal{V} $. The inner product $ c = \langle \mathbf {a} \mspace{3mu}, \mspace{3mu} \mathbf {b} \rangle \ $ can be made part of the vector commitment $ P \in \mathbb G $. This will enable sending only $ 2 \log_2 (n) $ elements to the verifier $ \mathcal{V} $ (explained in the next section). For a given $ P \in \mathbb G $, the prover $ \mathcal{P} $ proves that it has vectors $ \mathbf {a}, \mathbf {b} \in \mathbb Z^n_p $ for which $ P =\mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \cdot u^{ \langle \mathbf {a}, \mathbf {b} \rangle } $. Here $ u \in \mathbb G $ is a fixed group element with an unknown discrete-log relative to $ \mathbf {g},\mathbf {h} \in \mathbb G^n $.
$$ { (\mathbf {g},\mathbf {h} \in \mathbb G^n , \mspace{12mu} u,P \in \mathbb G ; \mspace{12mu} \mathbf {a}, \mathbf {b} \in \mathbb Z^n_p ) : \mspace{15mu} P = \mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \cdot u^{ \langle \mathbf {a}, \mathbf {b} \rangle } } \tag{2} $$
A proof system for relation (2) gives a proof system for (1) with the same complexity, thus only a proof system for relation (2) is required.
Protocol 1 is then defined as the proof system for relation (2) as shown in Figure 1. The element $ u $ is raised to a random power $ x $ (the challenge) chosen by the verifier $ \mathcal{V} $ to ensure that the extracted vectors $ \mathbf {a}, \mathbf {b} $ from Protocol 2 satisfy $ c = \langle \mathbf {a} , \mathbf {b} \rangle $.
The argument presented in Protocol 1 has the following Commitment Scheme properties:
 Perfect completeness (hiding): Every validity/truth is provable. Also refer to Definition 9 in [1].
 Statistical witness extended emulation (binding): Robust against either extracting a nontrivial discrete logarithm relation between $ \mathbf {g} , \mathbf {h} , u $ or extracting a valid witness $ \mathbf {a}, \mathbf {b} $.
How Proof System for Protocol 1 Works, Shrinking by Recursion
Protocol 1 uses an inner product argument of two vectors $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ of size $ n $. The Pedersen Commitment scheme allows a vector to be cut in half and the two halves to then be compressed together. Let $ \mathrm H : \mathbb Z^{2n+1}_p \to \mathbb G $ be a hash function for commitment $ P $, with $ P = \mathrm H(\mathbf a , \mathbf b, \langle \mathbf a, \mathbf b \rangle) $. Note that commitment $ P $ and thus $ \mathrm H $ is additively homomorphic, therefore sliced vectors of $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ can be hashed together with inner product $ c = \langle \mathbf a , \mathbf b \rangle \in \mathbb Z_p$. If $ n ^\prime = n/2 $, starting with relation (2), then
$$ \begin{aligned} \mathrm H(\mathbf a \mspace{3mu} , \mspace{3mu} \mathbf b \mspace{3mu} , \mspace{3mu} \langle \mathbf a , \mathbf b \rangle) &= \mathbf{g} ^\mathbf{a} \mathbf{h} ^\mathbf{b} \cdot u^{ \langle \mathbf a, \mathbf b \rangle} \mspace{20mu} \in \mathbb G \\ \mathrm H(\mathbf a_{[: n ^\prime]} \mspace{3mu} , \mspace{3mu} \mathbf a_{[n ^\prime :]} \mspace{3mu} , \mspace{3mu} \mathbf b_{[: n ^\prime]} \mspace{3mu} , \mspace{3mu} \mathbf b_{[n ^\prime :]} \mspace{3mu} , \mspace{3mu} \langle \mathbf {a}, \mathbf {b} \rangle) &= \mathbf g ^ {\mathbf a_{[: n ^\prime]}} _{[: n ^\prime]} \cdot \mathbf g ^ {\mathbf a_{[n ^\prime :]}} _{[n ^\prime :]} \cdot \mathbf h ^ {\mathbf b_{[: n ^\prime]}} _{[: n ^\prime]} \cdot \mathbf h ^ {\mathbf b_{[n ^\prime :]}} _{[n ^\prime :]} \cdot u^{\langle \mathbf {a}, \mathbf {b} \rangle} \mspace{20mu} \in \mathbb G \end{aligned} $$
Commitment $ P $ can then further be sliced into $ L $ and $ R $ as follows:
$$ \begin{aligned} P &= \mathrm H(\mspace{3mu} \mathbf a_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \mathbf a_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} \mathbf b_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \mathbf b_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} \langle \mathbf {a}, \mathbf {b} \rangle \mspace{49mu}) \mspace{20mu} \in \mathbb G \\ L &= \mathrm H(\mspace{3mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} \mathbf a_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \mathbf b_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} \langle \mathbf {a_{[: n ^\prime]}} , \mathbf {b_{[n ^\prime :]}} \rangle \mspace{3mu}) \mspace{20mu} \in \mathbb G \\ R &= \mathrm H(\mspace{3mu} \mathbf a_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} \mathbf b_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \langle \mathbf {a_{[n ^\prime :]}} , \mathbf {b_{[: n ^\prime]}} \rangle \mspace{3mu}) \mspace{20mu} \in \mathbb G \end{aligned} $$
The first reduction step is as follows:
 The prover $ \mathcal{P} $ calculates $ L,R \in \mathbb G $ and sends them to the verifier $ \mathcal{V} $.
 The verifier $ \mathcal{V} $ chooses a random $ x \overset{$}{\gets} \mathbb Z _p $ and sends it to the prover $ \mathcal{P} $.
 The prover $ \mathcal{P} $ calculates $ \mathbf a^\prime , \mathbf b^\prime \in \mathbb Z^{n^\prime}_p $ and sends them to the verifier $ \mathcal{V} $:
$$ \begin{aligned} \mathbf a ^\prime &= x\mathbf a _{[: n ^\prime]} + x^{-1} \mathbf a _{[n ^\prime :]} \in \mathbb Z^{n^\prime}_p \\ \mathbf b ^\prime &= x^{-1}\mathbf b _{[: n ^\prime]} + x \mathbf b _{[n ^\prime :]} \in \mathbb Z^{n^\prime}_p \end{aligned} $$
 The verifier $ \mathcal{V} $ calculates $ P^\prime = L^{x^2} \cdot P \cdot R^{x^{-2}} $ and accepts (verify true) if
$$ P^\prime = \mathrm H ( x^{-1} \mathbf a^\prime \mspace{3mu} , \mspace{3mu} x \mathbf a^\prime \mspace{6mu} , \mspace{6mu} x \mathbf b^\prime \mspace{3mu} , \mspace{3mu} x^{-1} \mathbf b^\prime \mspace{3mu} , \mspace{3mu} \langle \mathbf a^\prime , \mathbf b^\prime \rangle ) \tag{3} $$
So far, the prover $ \mathcal{P} $ only sent $ n + 2 $ elements to the verifier $ \mathcal{V} $, i.e. the four-tuple $ ( L , R , \mathbf a^\prime , \mathbf b^\prime ) $, approximately half the length compared to sending the complete $ \mathbf a, \mathbf b \in \mathbb Z^n_p $. The test in relation (3) is the same as testing that $$ P^\prime = (\mathbf g ^ {x^{-1}} _{[: n ^\prime]} \circ \mathbf g ^ x _{[n ^\prime :]})^{\mathbf a^\prime} \cdot (\mathbf h ^ x _{[: n ^\prime]} \circ \mathbf h ^ {x^{-1}} _{[n ^\prime :]})^{\mathbf b^\prime} \cdot u^{\langle \mathbf a^\prime , \mathbf b^\prime \rangle} \tag{4} $$
Thus, the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ can recursively engage in an innerproduct argument for $ P^\prime $ with respect to generators
$$ (\mathbf g ^ {x^{-1}} _{[: n ^\prime]} \circ \mathbf g ^ x _{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} \mathbf h ^ x _{[: n ^\prime]} \circ \mathbf h ^ {x^{-1}} _{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} u ) $$
which will result in a $ \log _2 n $ round protocol with $ 2 \log _2 n $ elements in $ \mathbb G $ and $ 2 $ elements in $ \mathbb Z _p $. The prover $ \mathcal{P} $ ends up sending the following terms to the verifier $ \mathcal{V} $: $$ (L_1 , R_1) \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} (L_{\log _2 n} , R _{\log _2 n}) \mspace{3mu} , \mspace{3mu} (a , b) $$
where $ a,b \in \mathbb Z _p $ are only sent right at the end. This protocol can be made non-interactive using the Fiat-Shamir^{def} heuristic.
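The core of each reduction step, folding the two vectors in half with a challenge $ x $ while preserving the inner-product relation, can be checked numerically. This sketch works directly with vectors over a small prime field (all values are hypothetical) and verifies that $ \langle \mathbf a^\prime , \mathbf b^\prime \rangle = \langle \mathbf a , \mathbf b \rangle + x^2 c_L + x^{-2} c_R $, which is exactly what the $ L $ and $ R $ terms account for:

```python
# One folding round of the inner-product argument, over Z_p with p = 101
p = 101
inv = lambda x: pow(x, p - 2, p)                      # inverse mod the prime p
ip = lambda u, w: sum(s * t for s, t in zip(u, w)) % p

a = [3, 1, 4, 1, 5, 9, 2, 6]    # hypothetical witness vectors, n = 8
b = [2, 7, 1, 8, 2, 8, 1, 8]
c = ip(a, b)

x = 13                          # hypothetical verifier challenge
h = len(a) // 2
cL = ip(a[:h], b[h:])           # inner products committed to in L and R
cR = ip(a[h:], b[:h])
a2 = [(x * a[i] + inv(x) * a[h + i]) % p for i in range(h)]
b2 = [(inv(x) * b[i] + x * b[h + i]) % p for i in range(h)]

# The folded vectors are half the size but still satisfy the relation:
assert ip(a2, b2) == (c + pow(x, 2, p) * cL + pow(inv(x), 2, p) * cR) % p
```

Repeating this step $ \log_2 n $ times shrinks the vectors to single scalars, which is why only $ 2 \log_2 n $ group elements plus two scalars are transmitted.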
Inner-product Verification through Multi-exponentiation (Protocol 2)
The inner-product argument to be calculated is that of two vectors $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ of size $ n $. Protocol 2 has a logarithmic number of rounds. In each round, the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ calculate a new set of generators $ ( \mathbf g ^\prime , \mathbf h ^\prime ) $, which would require a total of $ 4n $ computationally expensive exponentiations. Multi-exponentiation is a technique to reduce the number of exponentiations for a given calculation. In Protocol 2, the number of exponentiations is reduced to a single multi-exponentiation by delaying all the exponentiations until the last round. The protocol can also be made non-interactive using the Fiat-Shamir^{def} heuristic, providing a further speed-up.
Let $ g $ and $ h $ be the generators used in the final round of the protocol and $ x_j $ be the challenge from the $ j $th round. In the last round, the verifier $ \mathcal{V} $ checks that $ g^a h^b u ^{a \cdot b} = P $, where $ a,b \in \mathbb Z_p $ are given by the prover $ \mathcal{P} $. The final $ g $ and $ h $ can be expressed in terms of the input generators $ \mathbf {g},\mathbf {h} \in \mathbb G^n $ as:
$$ g = \prod _{i=1}^n g_i^{s_i} \in \mathbb{G}, \mspace{21mu} h=\prod _{i=1}^n h_i^{1/s_i} \in \mathbb{G} $$
where $ \mathbf {s} = (s_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} s_n) \in \mathbb Z_p^n $ only depends on the challenges $ (x_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} x_{\log_2(n)}) \in \mathbb Z_p^{\log_2(n)} $. The scalars of $ \mathbf {s} $ are calculated as follows:
$$ s_i = \prod ^{\log _2 (n)} _{j=1} x ^{b(i,j)} _j \mspace{15mu} \mathrm {for} \mspace{15mu} i = 1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} n $$
where
$$ b(i,j) = \begin{cases} \mspace{12mu} 1 \mspace{18mu} \text{if the } j \text{th bit of } i-1 \text{ is } 1 \\ -1 \mspace{15mu} \text{otherwise} \end{cases} $$
The entire verification check in the protocol reduces to a single multiexponentiation of size $ 2n + 2 \log_2(n) + 1 $:
$$ \mathbf g^{a \cdot \mathbf{s}} \cdot \mathbf h^{b \cdot\mathbf{s^{-1}}} \cdot u^{a \cdot b} \mspace{12mu} \overset{?}{=} \mspace{12mu} P \cdot \prod _{j=1}^{\log_2(n)} L_j^{x_j^2} \cdot R_j^{x_j^{-2}} \tag{5} $$
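The claim that the folded generators can be reproduced from the challenges alone can be tested with a small simulation. The sketch below folds a vector of four generators (elements of the order-11 subgroup of $ \mathbb Z_{23}^* $, with hypothetical challenges) exactly as in relation (4), then checks that $ \prod g_i^{s_i} $ equals the single generator left after the final round:

```python
# Reproducing the final generator from the challenges alone (toy subgroup of Z_23*)
p, q = 23, 11                         # p = 2q + 1; exponents live mod q
inv = lambda x: pow(x, q - 2, q)      # inverse mod the (prime) group order q

n, k = 4, 2                           # vector size n, log2(n) = k rounds
gens = [pow(2, e, p) for e in [3, 5, 7, 9]]   # hypothetical generators
challenges = [3, 5]                   # hypothetical challenges x_1, x_2 in Z_q*

# Fold the generators round by round, as in relation (4):
g = gens[:]
for x in challenges:
    half = len(g) // 2
    g = [pow(g[i], inv(x), p) * pow(g[i + half], x, p) % p for i in range(half)]
assert len(g) == 1

# Rebuild the same element with the scalars s_i from the b(i, j) bit rule
# (here i runs from 0, so its bits play the role of the bits of i - 1):
final = 1
for i in range(n):
    s = 1
    for j, x in enumerate(challenges):
        bit = (i >> (k - 1 - j)) & 1  # round j's challenge acts on bit j, MSB first
        s = s * (x if bit else inv(x)) % q
    final = final * pow(gens[i], s, p) % p
assert final == g[0]
```

This is why the verifier never needs to compute the intermediate generator sets: one multi-exponentiation over the original generators suffices.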
Figure 2 shows Protocol 2:
Range Proof Protocol with Logarithmic Size
This protocol provides short and aggregatable range proofs, using the improved inner product argument from Protocol 1. It is built up in five parts:
 how to construct a range proof that requires the verifier $ \mathcal{V} $ to check an inner product between two vectors;
 how to replace the inner product argument with an efficient inner-product argument;
 how to efficiently aggregate $ m $ range proofs into one short proof;
 how to make interactive public coin protocols non-interactive by using the Fiat-Shamir^{def} heuristic; and
 how to allow multiple parties to construct a single aggregate range proof.
Figure 3 gives a diagrammatic overview of a range proof protocol implementation using Elliptic Curve Pedersen Commitments^{def}:
Inner-product Range Proof
This protocol provides the ability to construct a range proof that requires the verifier $ \mathcal{V} $ to check an inner product between two vectors. The range proof is constructed by exploiting the fact that a Pedersen Commitment $ V $ is an element in the same group $ \mathbb G $ that is used to perform the inner product argument. Let $ v \in \mathbb Z_p $ and let $ V \in \mathbb G $ be a Pedersen Commitment to $ v $ using randomness $ \gamma $. The proof system will convince the verifier $ \mathcal{V} $ that commitment $ V $ contains a number $ v \in [0,2^n - 1] $ such that
$$ { (g,h \in \mathbb{G} \mspace{3mu} , \mspace{6mu} V \in \mathbb{G} \mspace{3mu} , \mspace{6mu} n \mspace{3mu} ; \mspace{12mu} v, \gamma \in \mathbb{Z_p} ) \mspace{3mu} : \mspace{3mu} V =h^\gamma g^v \mspace{5mu} \wedge \mspace{5mu} v \in [0,2^n - 1] } $$
without revealing $ v $. Let $ \mathbf {a}_L = (a_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} a_n) \in \lbrace 0,1 \rbrace^n $ be the vector containing the bits of $ v, $ so that $ \langle \mathbf {a}_L, \mathbf {2}^n \rangle = v $. The prover $ \mathcal{P} $ commits to $ \mathbf {a}_L $ using a constant size vector commitment $ A \in \mathbb{G} $. It will convince the verifier $ \mathcal{V} $ that $ v $ is in $ [0,2^n - 1] $ by proving that it knows an opening $ \mathbf {a}_L \in \mathbb Z_p^n $ of $ A $ and $ v, \gamma \in \mathbb{Z_p} $ such that $ V =h^\gamma g^v $ and
$$ \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {2}^n \rangle = v \mspace{20mu} \mathrm{and} \mspace{20mu} \mathbf {a}_R = \mathbf {a}_L - \mathbf {1}^n \mspace{20mu} \mathrm{and} \mspace{20mu} \mathbf {a}_L \circ \mathbf {a}_R = \mathbf{0}^n \mspace{20mu} \tag{6} $$
This proves that $ a_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} a_n $ are all in $ \lbrace 0,1 \rbrace $ and that $ \mathbf {a}_L $ is composed of the bits of $ v $. However, the $ 2n + 1 $ constraints need to be expressed as a single inner-product constraint so that Protocol 1 can be used, by letting the verifier $ \mathcal{V} $ choose a random linear combination of the constraints. To prove that a committed vector $ \mathbf {b} \in \mathbb Z_p^n $ satisfies $ \mathbf {b} = \mathbf{0}^n $, it suffices for the verifier $ \mathcal{V} $ to send a random $ y \in \mathbb{Z_p} $ to the prover $ \mathcal{P} $ and for the prover $ \mathcal{P} $ to prove that $ \langle \mathbf {b}, \mathbf {y}^n \rangle = 0 $, which will convince the verifier $ \mathcal{V} $ that $ \mathbf {b} = \mathbf{0}^n $. The prover $ \mathcal{P} $ can thus prove relation (6) by proving that $$ \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {2}^n \rangle = v \mspace{20mu} \mathrm{and} \mspace{20mu} \langle \mathbf {a}_L - \mathbf{1}^n - \mathbf {a}_R \mspace{3mu} , \mspace{3mu} \mathbf {y}^n \rangle=0 \mspace{20mu} \mathrm{and} \mspace{20mu} \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {a}_R \circ \mathbf {y}^n \rangle = 0 \mspace{20mu} $$
Building on this, the verifier $ \mathcal{V} $ chooses a random $ z \in \mathbb{Z_p} $ and lets the prover $ \mathcal{P} $ prove that
$$ z^2 \cdot \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {2}^n \rangle + z \cdot \langle \mathbf {a}_L - \mathbf{1}^n - \mathbf {a}_R \mspace{3mu} , \mspace{3mu} \mathbf {y}^n \rangle + \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {a}_R \circ \mathbf {y}^n \rangle = z^2 \cdot v \mspace{20mu} \tag{7} $$
Relation (7) can be rewritten as
$$ \langle \mathbf {a}_L - z \cdot \mathbf {1}^n \mspace{3mu} , \mspace{3mu} \mathbf {y}^n \circ (\mathbf {a}_R + z \cdot \mathbf {1}^n) +z^2 \cdot \mathbf {2}^n \rangle = z^2 \cdot v + \delta (y,z) \tag{8} $$
where
$$ \delta (y,z) = (z-z^2) \cdot \langle \mathbf {1}^n \mspace{3mu} , \mspace{3mu} \mathbf {y}^n\rangle - z^3 \cdot \langle \mathbf {1}^n \mspace{3mu} , \mspace{3mu} \mathbf {2}^n\rangle \in \mathbb{Z_p} $$
can be easily calculated by the verifier $ \mathcal{V} $. The proof that relation (6) holds was thus reduced to a single innerproduct identity.
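The reduction from relation (7) to relation (8) is a purely algebraic rearrangement, which can be verified numerically. The sketch below picks a hypothetical value $ v $ and challenges $ y, z $, builds $ \mathbf a_L $ and $ \mathbf a_R $ as prescribed, and checks relation (8) over the integers:

```python
# Numeric check of relation (8) over the integers
n, v = 8, 173                 # hypothetical committed value, v < 2^n
y, z = 7, 5                   # hypothetical verifier challenges

aL = [(v >> i) & 1 for i in range(n)]    # bits of v, so <aL, 2^n> = v
aR = [t - 1 for t in aL]                 # aR = aL - 1^n, so aL o aR = 0^n
two = [1 << i for i in range(n)]         # the vector 2^n
yv = [y ** i for i in range(n)]          # the vector y^n

ip = lambda u, w: sum(s * t for s, t in zip(u, w))
delta = (z - z**2) * ip([1] * n, yv) - z**3 * ip([1] * n, two)

lhs = ip([t - z for t in aL],
         [yv[i] * (aR[i] + z) + z**2 * two[i] for i in range(n)])
assert lhs == z**2 * v + delta           # relation (8) holds
```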
Relation (8) cannot be used in its current form without revealing information about $ \mathbf {a}_L $. Two additional blinding vectors $ \mathbf {s}_L , \mathbf {s}_R \in \mathbb Z_p^n $ are introduced with the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ engaging in the zeroknowledge protocol shown in Figure 4:
Two linear vector polynomials $ l(X), r(X) \in \mathbb Z^n_p[X] $ are defined as the innerproduct terms for relation (8), also containing the blinding vectors $ \mathbf {s}_L $ and $ \mathbf {s}_R $. A quadratic polynomial $ t(X) \in \mathbb Z_p[X] $ is then defined as the inner product between the two vector polynomials $ l(X), r(X) $ such that
$$ t(X) = \langle l(X) \mspace{3mu} , \mspace{3mu} r(X) \rangle = t_0 + t_1 \cdot X + t_2 \cdot X^2 \mspace{10mu} \in \mathbb {Z}_p[X] $$
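The coefficients follow directly from multiplying out the two linear vector polynomials: $ t_0 = \langle l_0, r_0 \rangle $, $ t_1 = \langle l_0, r_1 \rangle + \langle l_1, r_0 \rangle $ and $ t_2 = \langle l_1, r_1 \rangle $. A small numeric check, using hypothetical coefficient vectors over the integers:

```python
# Coefficients of t(X) = <l(X), r(X)> for linear vector polynomials (toy values)
l0, l1 = [1, 2], [3, 4]       # l(X) = l0 + l1*X
r0, r1 = [5, 6], [7, 8]       # r(X) = r0 + r1*X
ip = lambda u, w: sum(s * t for s, t in zip(u, w))

t0 = ip(l0, r0)               # constant term, the target inner product
t1 = ip(l0, r1) + ip(l1, r0)  # cross terms
t2 = ip(l1, r1)

x = 9                         # hypothetical evaluation point
lx = [l0[i] + l1[i] * x for i in range(2)]
rx = [r0[i] + r1[i] * x for i in range(2)]
assert ip(lx, rx) == t0 + t1 * x + t2 * x * x   # t(x) = <l(x), r(x)>
```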
The blinding vectors $ \mathbf {s}_L $ and $ \mathbf {s}_R $ ensure that the prover $ \mathcal{P} $ can publish $ l(x) $ and $ r(x) $ for one $ x \in \mathbb Z_p^* $ without revealing any information about $ \mathbf {a}_L $ and $ \mathbf {a}_R $. The constant term $ t_0 $ of the quadratic polynomial $ t(X) $ is then the result of the inner product in relation (8), and the prover $ \mathcal{P} $ needs to convince the verifier $ \mathcal{V} $ that
$$ t_0 = z^2 \cdot v + \delta (y,z) $$
In order to do so, the prover $ \mathcal{P} $ convinces the verifier $ \mathcal{V} $ that it has a commitment to the remaining coefficients of $ t(X) $, namely $ t_1,t_2 \in \mathbb Z_p $, by checking the value of $ t(X) $ at a random point $ x \in \mathbb Z_p^* $. This is illustrated in Figure 5:
The verifier $ \mathcal{V} $ now needs to check that $ l $ and $ r $ are in fact $ l(x) $ and $ r(x) $, and that $ t(x) = \langle l \mspace{3mu} , \mspace{3mu} r \rangle $. A commitment for $ \mathbf {a}_R \circ \mathbf {y}^n $ is needed; to do so, the commitment generators are switched from $ \mathbf h \in \mathbb G^n $ to $ \mathbf h ^\backprime = \mathbf h^{(\mathbf {y}^{-1})}$. Thus $ A $ and $ S $ now become vector commitments to $ ( \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {a}_R \circ \mathbf {y}^n ) $ and $ ( \mathbf {s}_L \mspace{3mu} , \mspace{3mu} \mathbf {s}_R \circ \mathbf {y}^n ) $ respectively, with respect to the new generators $ (g, \mathbf h ^\backprime, h) $. This is illustrated in Figure 6.
The range proof presented here has the following Commitment Scheme properties:
 Perfect completeness (hiding). Every validity/truth is provable. Also refer to Definition 9 in [1].
 Perfect special Honest Verifier Zeroknowledge (HVZK). The verifier $ \mathcal{V} $ behaves according to the protocol. Also refer to Definition 12 in [1].
 Computational witness extended emulation (binding). A witness can be computed in time closely related to time spent by the prover $ \mathcal{P} $. Also refer to Definition 10 in [1].
Logarithmic Range Proof
This protocol replaces the inner product argument with an efficient inner-product argument. In step (63) in Figure 5, the prover $ \mathcal{P} $ transmits $ \mathbf {l} $ and $ \mathbf {r} $ to the verifier $ \mathcal{V} $, but their size is linear in $ n $. To make this efficient, a proof size that is logarithmic in $ n $ is needed. The transfer of $ \mathbf {l} $ and $ \mathbf {r} $ can be eliminated with an inner-product argument. Checking correctness of $ \mathbf {l} $ and $ \mathbf {r} $ (step 67 in Figure 6) and $ \hat {t} $ (step 68 in Figure 6) is the same as verifying that the witness $ \mathbf {l} , \mathbf {r} $ satisfies the inner product of relation (2) on public input $ (\mathbf {g} , \mathbf {h} ^ \backprime , P \cdot h^{-\mu}, \hat t) $. Transmission of vectors $ \mathbf {l} $ and $ \mathbf {r} $ to the verifier $ \mathcal{V} $ (step 63 in Figure 5) can then be eliminated and the transfer of information limited to the scalar values $ ( \tau _x , \mu , \hat t ) $ alone, thereby achieving a proof size that is logarithmic in $ n $.
Aggregating Logarithmic Proofs
This protocol efficiently aggregates $ m $ range proofs into one short proof with a slight modification to the protocol presented in Inner-product Range Proof. For aggregate range proofs, the inputs of one range proof do not affect the output of another range proof. Aggregating logarithmic range proofs is especially helpful if a single prover $ \mathcal{P} $ needs to perform multiple range proofs at the same time.
A proof system must be presented for the following relation:
$$ \{ (g,h \in \mathbb{G} \mspace{3mu} ; \mspace{9mu} \mathbf {V} \in \mathbb{G}^m \mspace{3mu} ; \mspace{9mu} \mathbf {v}, \gamma \in \mathbb Z_p^m ) \mspace{6mu} : \mspace{6mu} V_j = h^{\gamma_j} g^{v_j} \mspace{6mu} \wedge \mspace{6mu} v_j \in [0,2^n - 1] \mspace{15mu} \forall \mspace{15mu} j \in [1,m] \} \tag{9} $$
The prover $ \mathcal{P} $ should now compute $ \mspace{3mu} \mathbf a_L \in \mathbb Z_p^{n \cdot m} $ as the concatenation of all of the bits for every $ v_j $ such that
$$ \langle \mathbf{2}^n \mspace{3mu} , \mspace{3mu} \mathbf a_L[(j-1) \cdot n : j \cdot n - 1] \rangle = v_j \mspace{9mu} \forall \mspace{9mu} j \in [1,m] \mspace{3mu} $$
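As a sketch of this concatenation, the toy Python below (illustrative parameters, plain integers instead of $ \mathbb Z_p $ elements) builds $ \mathbf a_L $ from the bit decompositions of several values and checks the defining relation for each chunk. Note the ranges in the text are inclusive, while Python slices are half-open.

```python
# Building the concatenated bit vector a_L for an aggregated range proof.
n, m = 8, 3                       # range [0, 2^n - 1], m committed values
values = [13, 200, 77]            # the v_j being committed to

# a_L is the concatenation of the n-bit little-endian decompositions
a_L = []
for v in values:
    a_L.extend((v >> i) & 1 for i in range(n))

# Check <2^n, a_L[(j-1)*n : j*n - 1]> = v_j for every j
two_n = [1 << i for i in range(n)]
for j, v in enumerate(values):
    chunk = a_L[j * n:(j + 1) * n]
    assert sum(b * w for b, w in zip(chunk, two_n)) == v
```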
The quantity $ \delta (y,z) $ is adjusted to incorporate more crossterms $ n \cdot m $ , the linear vector polynomials $ l(X), r(X) $ are adjusted to be in $ \mathbb Z^{n \cdot m}_p[X] $ and the blinding factor $ \tau_x $ for the inner product $ \hat{t} $ (step 61 in Figure 5) is adjusted for the randomness of each commitment $ V_j $. The verification check (step 65 in Figure 6) is updated to include all $ V_j $ commitments and the definition of $ P $ (step 66 in Figure 6) is changed to be a commitment to the new $ r $.
This aggregated range proof that makes use of the inner-product argument only uses $ 2 \cdot \lceil \log _2 (n \cdot m) \rceil + 4 $ group elements and $ 5 $ elements in $ \mathbb Z_p $. The growth in size is limited to an additive term $ 2 \cdot \lceil \log _2 (m) \rceil $ as opposed to a multiplicative factor $ m $ for $ m $ independent range proofs.
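These sizes are easy to tabulate. The short Python sketch below (the helper names are ours, not from any library) compares the aggregated proof size with $ m $ independent proofs:

```python
from math import ceil, log2

# Size in group elements of one aggregated Bulletproof over m values
def aggregated_size(n, m):
    return 2 * ceil(log2(n * m)) + 4

# Size of m separate range proofs of 2*ceil(log2(n)) + 4 elements each
def independent_size(n, m):
    return m * (2 * ceil(log2(n)) + 4)

# For 64-bit ranges the aggregated proof grows only logarithmically in m
for m in (1, 4, 16):
    print(m, aggregated_size(64, m), independent_size(64, m))
```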
The aggregate range proof presented here has the following Commitment Scheme properties:
 Perfect completeness (hiding): Every validity/truth is provable. Also refer to Definition 9 in [1].
 Perfect special HVZK: The verifier $ \mathcal{V} $ behaves according to the protocol. Also refer to Definition 12 in [1].
 Computational witness extended emulation (binding): A witness can be computed in time closely related to time spent by the prover $ \mathcal{P} $. Also refer to Definition 10 in [1].
Non-interactive Proof through Fiat-Shamir Heuristic
So far, the verifier $ \mathcal{V} $ behaves as an honest verifier and all messages are random elements from $ \mathbb Z_p^* $. These are the prerequisites needed to convert the protocol presented so far into a non-interactive protocol that is secure and has full zero-knowledge in the random oracle model (thus without a trusted setup) using the Fiat-Shamir Heuristic^{def}.
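The transform can be illustrated with a Schnorr-style proof of knowledge of $ w $ such that $ Y = g^w $, in a toy group (a hedged sketch, not the actual Bulletproofs transcript format): the interactive challenge is replaced by a hash of the statement and the commitment, so the proof becomes a single message.

```python
import hashlib
import secrets

p, q, g = 101, 100, 2      # toy group: 2 generates Z_101^*, order q = 100

def challenge(Y, A):
    # strong Fiat-Shamir: hash binds both the statement Y and commitment A
    data = f"{p}:{g}:{Y}:{A}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(w):
    Y = pow(g, w, p)                # public statement
    k = secrets.randbelow(q)
    A = pow(g, k, p)                # commitment
    c = challenge(Y, A)             # hash replaces the verifier's message
    f = (k + c * w) % q             # response
    return Y, A, c, f

def verify(Y, A, c, f):
    # recompute the challenge and check g^f = A * Y^c
    return c == challenge(Y, A) and pow(g, f, p) == (A * pow(Y, c, p)) % p

Y, A, c, f = prove(37)
assert verify(Y, A, c, f)
assert not verify(Y, A, c, (f + 1) % q)
```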
MPC Protocol for Bulletproofs
The Multi-party Computation (MPC) protocol for Bulletproofs allows multiple parties to construct a single, simple and efficient aggregate range proof. This is valuable when multiple parties want to create a single joined confidential transaction, where each party knows some of the inputs and outputs, and needs to create range proofs for their known outputs. In Bulletproofs, $ m $ parties, each having a Pedersen Commitment $ (V_k)_{k=1}^m $, can generate a single Bulletproof to which each $ V_k $ commits in some fixed range.
Let superscript $ (k) $ denote the $ k $th party's share; thus $ A^{(k)} $ is generated using only inputs of party $ k $. A set of distinct generators $ (g^{(k)}, h^{(k)})^m_{k=1} $ is assigned to each party, and $ \mathbf g,\mathbf h $ is defined as the interleaved concatenation of all $ g^{(k)} , h^{(k)} $ such that
$$ g_i=g_{\lceil {i \over{m}} \rceil}^{((i-1) \bmod m+1)} \mspace{15mu} \mathrm{and} \mspace{15mu} h_i=h_{\lceil {i \over{m}} \rceil}^{((i-1) \bmod m+1)} $$
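The interleaving can be sketched as follows; generators are modelled as string labels rather than curve points, and all names are illustrative:

```python
from math import ceil

m, n_per_party = 3, 4                       # 3 parties, 4 generators each

# Party k holds generator list g^{(k)} = [g^{(k)}_1, ..., g^{(k)}_4]
g_party = {k: [f"g{k}_{j}" for j in range(1, n_per_party + 1)]
           for k in range(1, m + 1)}

# g_i = g^{((i-1) mod m + 1)}_{ceil(i/m)} for i = 1..n*m (1-based)
g = [g_party[(i - 1) % m + 1][ceil(i / m) - 1]
     for i in range(1, m * n_per_party + 1)]

# The combined list cycles through the parties: g1_1, g2_1, g3_1, g1_2, ...
print(g[:6])
```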
The protocol either uses three rounds with linear communication in both $ m $ and the binary encoding of the range, or it uses a logarithmic number of rounds and communication that is only linear in $ m $. For the linear communication case, the protocol in Inner-product Range Proof is followed with the difference that each party generates its part of the proof using its own inputs and generators, i.e.
$$ A^{(k)} , S^{(k)}; \mspace{15mu} T_1^{(k)} , T_2^{(k)}; \mspace{15mu} \tau_x^{(k)} , \mu^{(k)} , \hat{t}^{(k)} , \mathbf{l}^{(k)} , \mathbf{r}^{(k)} $$
These shares are sent to a dealer (this could be anyone, even one of the parties) who adds them homomorphically to generate the respective proof components, e.g.
$$ A = \prod^m_{k=1} A^{(k)} \mspace{15mu} \mathrm{and} \mspace{15mu} \tau_x = \prod^m_{k=1} \tau_x^{(k)} $$
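The dealer's homomorphic aggregation can be sketched in a toy multiplicative group rather than an elliptic curve (all parameters below are illustrative): group-element shares combine by multiplication, which adds the underlying scalars in the exponent, while scalar shares (written with product notation above) combine additively in the exponent group.

```python
p = 2**61 - 1                  # toy prime modulus
q = p - 1                      # toy "group order" for exponents
g = 3

# Each party k contributes a commitment share g^{a_k} and a scalar share
a_shares = [12345, 67890, 24680]
A_shares = [pow(g, a, p) for a in a_shares]
tau_shares = [111, 222, 333]

A = 1
for A_k in A_shares:           # A = prod_k A^{(k)}
    A = (A * A_k) % p
tau_x = sum(tau_shares) % q    # scalars aggregate additively

# Homomorphism check: prod_k g^{a_k} = g^{sum_k a_k}
assert A == pow(g, sum(a_shares) % q, p)
```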
In each round, the dealer generates the challenges using the Fiat-Shamir^{def} heuristic and the combined proof components, and sends them to each party. In the end, each party sends $ \mathbf{l}^{(k)},\mathbf{r}^{(k)} $ to the dealer, who computes $ \mathbf{l},\mathbf{r} $ as the interleaved concatenation of all shares. The dealer runs the inner-product argument (Protocol 1) to generate the final proof. Each proof component is the (homomorphic) sum of each party's proof components and each share constitutes part of a separate zero-knowledge proof. Figure 7 shows an example of the MPC protocol implementation using three rounds with linear communication [29]:
The communication can be reduced by running a second MPC protocol for the inner product argument, reducing the rounds to $ \log_2(l) $. Up to the last $ \log_2(l) $ round, each party's witnesses are independent, and the overall witness is the interleaved concatenation of the parties' witnesses. The parties compute $ L^{(k)}, R^{(k)} $ in each round and the dealer computes $ L, R $ as the homomorphic sum of the shares. In the final round, the dealer generates the final challenge and sends it to each party, who in turn send their witness to the dealer, who completes Protocol 2.
MPC Protocol Security Discussion
With the standard MPC protocol implementation as depicted in Figure 7, there is no guarantee that the dealer behaves honestly according to the specified protocol and generates challenges honestly. Since the Bulletproofs protocol is only special HVZK ([30], [31]), a secure MPC protocol requires all parties to receive partial proof elements and independently compute aggregated challenges, in order to avoid the alternative case where a single dealer maliciously generates them. A single dealer can, however, assign index positions to all parties at the start of the protocol, and may complete the inner-product compression, since it does not rely on partial proof data.
HVZK implies witness-indistinguishability [32], but it is not clear what the implications of this would be on the MPC protocol in practice. It could be that no practical attacks are possible from a malicious dealer and that witness-indistinguishability is sufficient.
Zero-knowledge Proof for Arithmetic Circuits
Bulletproofs present an efficient zero-knowledge argument for arbitrary Arithmetic Circuits^{def} with a proof size of $ 2 \cdot \lceil \log _2 (n) \rceil + 13 $ elements, with $ n $ denoting the multiplicative complexity (number of multiplication gates) of the circuit.
Bootle et al. [2] showed how an arbitrary arithmetic circuit with $ n $ multiplication gates can be converted into a relation containing a Hadamard Product^{def} relation with additional linear consistency constraints. The communication cost of the addition gates in the argument was removed by providing a technique that can directly handle a set of Hadamard products and linear relations together. For a two-input multiplication gate, let $ \mathbf a_L , \mathbf a_R $ be the left and right input vectors respectively, then $ \mathbf a_O = \mathbf a_L \circ \mathbf a_R $ is the vector of outputs. Let $ Q \leqslant 2 \cdot n $ be the number of linear consistency constraints, $ \mathbf W_{L,q} \mspace{3mu} , \mathbf W_{R,q} \mspace{3mu}, \mathbf W_{O,q} \in \mathbb Z_p^n $ be the gate weights and $ c_q \in \mathbb Z_p $ for all $ q \in [1,Q] $, then the linear consistency constraints have the form $$ \langle \mathbf W_{L,q}, \mathbf a_L \rangle + \langle \mathbf W_{R,q}, \mathbf a_R \rangle +\langle \mathbf W_{O,q}, \mathbf a_O \rangle = c_q $$
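The toy Python below (a hand-rolled sketch, not a library API) checks these relations for a one-gate circuit computing $ 3 \cdot 4 $: the Hadamard relation ties outputs to inputs, and one linear constraint pins the output wire to a public constant.

```python
# One multiplication gate: a_O = a_L ∘ a_R (Hadamard product of wires)
a_L = [3]
a_R = [4]
a_O = [l * r for l, r in zip(a_L, a_R)]

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

# Linear consistency constraints (W_Lq, W_Rq, W_Oq, c_q):
# here a single constraint <[0],a_L> + <[0],a_R> + <[1],a_O> = 12
constraints = [([0], [0], [1], 12)]
for W_L, W_R, W_O, c in constraints:
    assert inner(W_L, a_L) + inner(W_R, a_R) + inner(W_O, a_O) == c
```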
The highlevel idea of this protocol is to convert the Hadamard Product relation along with the linear consistency constraints into a single inner product relation. Pedersen Commitments $ V_j $ are also included as input wires to the arithmetic circuit, which is an important refinement, otherwise the arithmetic circuit would need to implement a commitment algorithm. The linear constraints also include openings $ v_j $ of $ V_j $.
Inner-product Proof for Arithmetic Circuits (Protocol 3)
Similar to Inner-product Range Proof, the prover $ \mathcal{P} $ produces a random linear combination of the Hadamard Product^{def} and linear constraints to form a single inner-product constraint. If the combination is chosen randomly by the verifier $ \mathcal{V} $, then with overwhelming probability, the inner-product constraint implies the other constraints. A proof system must be presented for relation (10) below:
$$ \begin{aligned} \mspace{3mu} (g,h \in \mathbb{G} \mspace{3mu} ; \mspace{3mu} \mathbf g,\mathbf h \in \mathbb{G}^n \mspace{3mu} ; \mspace{3mu} \mathbf V \in \mathbb{G}^m \mspace{3mu} ; \mspace{3mu} \mathbf W_{L} , \mathbf W_{R} , \mathbf W_{O} \in \mathbb Z_p^{Q \times n} \mspace{3mu} ; \\ \mathbf W_{V} \in \mathbb Z_p^{Q \times m} \mspace{3mu} ; \mspace{3mu} \mathbf{c} \in \mathbb Z_p^{Q} \mspace{3mu} ; \mspace{3mu} \mathbf a_L , \mathbf a_R , \mathbf a_O \in \mathbb Z_p^{n} \mspace{3mu} ; \mspace{3mu} \mathbf v , \mathbf \gamma \in \mathbb Z_p^{m}) \mspace{3mu} : \mspace{15mu} \\ V_j =h^{\gamma_j} g^{v_j} \mspace{6mu} \forall \mspace{6mu} j \in [1,m] \mspace{6mu} \wedge \mspace{6mu} \mathbf a_L \circ \mathbf a_R = \mathbf a_O \mspace{6mu} \wedge \mspace{50mu} \\ \mathbf W_L \cdot \mathbf a_L + \mathbf W_R \cdot \mathbf a_R + \mathbf W_O \cdot \mathbf a_O = \mathbf W_V \cdot \mathbf v + \mathbf c \mspace{50mu} \end{aligned} \tag{10} $$
Let $ \mathbf W_V \in \mathbb Z_p^{Q \times m} $ be the weights for a commitment $ V_j $. Relation (10) only holds when $ \mathbf W_{V} $ is of rank $ m $, i.e. if the columns of the matrix are all linearly independent.
Figure 8 presents Part 1 of the protocol, where the prover $ \mathcal{P} $ commits to $ l(X),r(X),t(X) $:
Figure 9 presents Part 2 of the protocol, where the prover $ \mathcal{P} $ convinces the verifier $ \mathcal{V} $ that the polynomials are well formed and that $ \langle l(X),r(X) \rangle = t(X) $:
The proof system presented here has the following Commitment Scheme properties:
 Perfect completeness (hiding): Every validity/truth is provable. Also refer to Definition 9 in [1].
 Perfect HVZK: The verifier $ \mathcal{V} $ behaves according to the protocol. Also refer to Definition 12 in [1].
 Computational witness extended emulation (binding): A witness can be computed in time closely related to time spent by the prover $ \mathcal{P} $. Also refer to Definition 10 in [1].
Logarithmic-sized Non-interactive Protocol for Arithmetic Circuits
Similar to Logarithmic Range Proof, the communication cost of Protocol 3 can be reduced by using the efficient inner-product argument. Transmission of vectors $ \mathbf {l} $ and $ \mathbf {r} $ to the verifier $ \mathcal{V} $ (step 82 in Figure 9) can be eliminated, and transfer of information limited to the scalar properties $ ( \tau _x , \mu , \hat t ) $ alone. The prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ engage in an inner-product argument on public input $ (\mathbf {g} , \mathbf {h} ^ \backprime , P \cdot h^{\mu}, \hat t) $ to check correctness of $ \mathbf {l} $ and $ \mathbf {r} $ (step 92 in Figure 9) and $ \hat {t} $ (step 88 in Figure 9); this is the same as verifying that the witness $ \mathbf {l} , \mathbf {r} $ satisfies the inner-product relation (2). Communication is now reduced to $ 2 \cdot \lceil \log_2(n) \rceil + 8 $ group elements and $ 5 $ elements in $ \mathbb Z_p $ instead of $ 2 \cdot n $ elements, thereby achieving a proof size that is logarithmic in $ n $.
Similar to Non-interactive Proof through Fiat-Shamir Heuristic, the protocol presented so far can be turned into an efficient, non-interactive proof that is secure and full zero-knowledge in the random oracle model (thus without a trusted setup) using the Fiat-Shamir Heuristic^{def}.
The proof system presented here has the following Commitment Scheme properties:
 Perfect completeness (hiding): Every validity/truth is provable. Also refer to Definition 9 in [1].
 Statistical zeroknowledge: The verifier $ \mathcal{V} $ behaves according to the protocol and $ \mathbf {l} , \mathbf {r} $ can be efficiently simulated.
 Computational soundness (binding): If the generators $ \mathbf {g} , \mathbf {h} , g , h $ are independently generated, then finding a discrete logarithm relation between them is as hard as breaking the Discrete Log Problem.
Optimized Verifier using Multiexponentiation and Batch Verification
In many of the Bulletproofs' Use Cases, the verifier's runtime is of particular interest. This protocol presents optimizations for a single range proof that are also extendable to aggregate range proofs and the arithmetic circuit protocol.
Multiexponentiation
In Protocol 2, verification of the inner product is reduced to a single multiexponentiation. This can be extended to verify the whole range proof using a single multiexponentiation of size $ 2n + \log_2(n) + 7 $. In Protocol 2, the Bulletproof verifier $ \mathcal{V} $ only performs two checks: step 68 in Figure 6 and step 16 in Figure 2.
In the protocol presented in Figure 10, which is processed by the verifier $ \mathcal{V} $, $ x_u $ is the challenge from Protocol 1, $ x_j $ the challenge from round $ j $ of Protocol 2, and $ L_j , R_j $ the $ L , R $ values from round $ j $ of Protocol 2.
A further idea is that the multiexponentiations (steps 98 and 105 in Figure 10) be delayed until those checks are performed, and that they are combined into a single check using a random value $ c \xleftarrow{\$} \mathbb Z_p $. This follows from the fact that if $ A^cB = 1 $ for a random $ c $, then with high probability, $ A = 1 \mspace{3mu} \wedge \mspace{3mu} B = 1 $. Various algorithms are known to compute the multiexponentiations and scalar quantities (steps 101 and 102 in Figure 10) efficiently (sublinearly), thereby further improving the speed and efficiency of the protocol.
Batch Verification
A further important optimization concerns the verification of multiple proofs. The essence of the verification is to calculate a large multiexponentiation. Batch verification is applied in order to reduce the number of expensive exponentiations. This is based on the observation that $ g^x = 1 \mspace{3mu} \wedge \mspace{3mu} g^y = 1 $ can be checked by drawing a random scalar $ \alpha $ from a large enough domain and checking that $ g^{\alpha x + y} = 1 $. With high probability, the latter equation implies both of the former. When applied to multiexponentiations, $ 2n $ exponentiations can be saved per additional proof. Verifying $ m $ distinct range proofs of size $ n $ only requires a single multiexponentiation of size $ 2n+2+m \cdot (2 \cdot \log (n) + 5 ) $ along with $ O ( m \cdot n ) $ scalar operations.
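The batching observation can be sketched directly. In the toy group below (tiny parameters; a real verifier draws $ \alpha $ from a huge domain), the combined check passes when both individual checks hold, and rejects a false statement with high probability:

```python
import secrets

p, q, g = 101, 100, 2   # toy group: 2 generates Z_101^*, order q = 100

def batched_check(x, y):
    # check g^x = 1 and g^y = 1 together as g^(alpha*x + y) = 1
    alpha = secrets.randbelow(q - 1) + 1
    return pow(g, (alpha * x + y) % q, p) == 1

assert batched_check(0, q)        # both g^0 = 1 and g^q = 1 hold
assert not batched_check(1, 0)    # g^1 != 1 is caught
```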
Evolving Bulletproof Protocols
Interstellar [24] recently introduced Programmable Constraint Systems for Bulletproofs [23], an evolution of Zero-knowledge Proof for Arithmetic Circuits, extending it to support proving arbitrary statements in zero-knowledge using a constraint system, bypassing arithmetic circuits altogether. They provide an Application Programming Interface (API) for building a constraint system directly, without the need to construct arithmetic expressions and then transform them into constraints. The Bulletproof constraint system proofs are then used as building blocks for a confidential assets protocol called Cloak.
The constraint system has three kinds of variables:
 High-level witness variables
 known only to the prover $ \mathcal{P} $, as external inputs to the constraint system;
 represented as individual Pedersen Commitments to the external variables in Bulletproofs.
 Low-level witness variables
 known only to the prover $ \mathcal{P} $, as internal to the constraint system;
 representing the inputs and outputs of the multiplication gates.
 Instance variables
 known to both the prover $ \mathcal{P} $ and the verifier $ \mathcal{V} $, as public parameters;
 represented as a family of constraint systems parameterized by public inputs (compatible with Bulletproofs);
 folding all instance variables into a single constant parameter internally.
Instance variables can select the constraint system out of a family for each proof. The constraint system becomes a challenge from a verifier $ \mathcal{V} $ to a prover $ \mathcal{P} $, where some constraints are generated randomly in response to the prover's $ \mathcal{P} $ commitments. Challenges to parametrize constraint systems make the resulting proof smaller for verifiable shuffles, requiring only $ O(n) $ multiplications instead of the $ O(n^2) $ needed with a static constraint system.
Merlin transcripts [25] employing the Fiat-Shamir Heuristic^{def} are used to generate the challenges. The challenges are bound to the high-level witness variables (the external inputs to the constraint system), which are added to the transcript before any of the constraints are created. The prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ can then compute weights for some constraints with the use of the challenges.
Because the challenges are not bound to low-level witness variables, the resulting construction can be unsafe. Interstellar are working on an improvement to the protocol that would allow challenges to be bound to a subset of the low-level witness variables, and have a safer API using features of Rust's type system.
The resulting API provides a single code path used by both the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ to allocate variables and define constraints. This is organized into a hierarchy of task-specific gadgets, which manage allocation, assignment and constraints on the variables, ensuring that all variables are constrained. Gadgets interact with mutable constraint system objects, which are specific to the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $. They also receive secret variables and public parameters and generate challenges.
The Bulletproofs library [22] does not provide any standard gadgets, but only an API for the constraint system. Each protocol built on top of the Bulletproofs library must create its own collection of gadgets to enable building a complete constraint system out of them. Figure 11 shows the Interstellar Bulletproof zeroknowledge proof protocol built with their programmable constraint system:
Conclusions, Observations and Recommendations

Bulletproofs have many potential use cases or applications, but are still under development. A new confidential blockchain protocol such as Tari should carefully consider expanded use of Bulletproofs to maximally leverage functionality of the code base.

Bulletproofs are not done yet, as illustrated in Evolving Bulletproof Protocols, and their further development and efficient implementation have a lot of traction in the community.

Bünz et al. [1] proposed that the switch commitment scheme defined by Ruffing et al. [10] can be used for Bulletproofs if doubts in the underlying cryptographic hardness (discrete log) assumption arise in future. The switch commitment scheme allows for a blockchain with proofs that are currently only computationally binding to later switch to a proof system that is perfectly binding and secure against quantum adversaries; this will weaken the perfectly hiding property as a drawback and slow down all proof calculations. In the Bünz et al. [1] proposal, all Pedersen Commitments will be replaced with ElGamal Commitments^{def} to move from computationally binding to perfectly binding. They also gave further ideas about how the ElGamal commitments can possibly be enhanced to improve the hiding property to be statistical or perfect. (Refer to the Grin project's implementation here.)

It is important that developers understand more about the fundamental underlying mathematics when implementing something like Bulletproofs, even if they just reuse libraries developed by someone else.

With the standard MPC protocol implementation, there is no guarantee that the dealer behaves honestly according to the protocol and generates challenges honestly. The protocol can be extended to be more secure if all parties receive partial proof elements and independently compute aggregated challenges.
References
[1] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More", Blockchain Protocol Analysis and Security Engineering 2018 [online]. Available: http://web.stanford.edu/~buenz/pubs/bulletproofs.pdf. Date accessed: 2018‑09‑18.
[2] J. Bootle, A. Cerulli, P. Chaidos, J. Groth and C. Petit, "Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting", Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 327‑357. Springer, 2016 [online]. Available: https://eprint.iacr.org/2016/263.pdf. Date accessed: 2018‑09‑21.
[3] A. Poelstra, A. Back, M. Friedenbach, G. Maxwell and P. Wuille, "Confidential Assets", Blockstream [online]. Available: https://blockstream.com/bitcoin17final41.pdf. Date accessed: 2018‑09‑25.
[4] Wikipedia: "Zero-knowledge Proof" [online]. Available: https://en.wikipedia.org/wiki/Zeroknowledge_proof. Date accessed: 2018‑09‑18.
[5] Wikipedia: "Discrete Logarithm" [online]. Available: https://en.wikipedia.org/wiki/Discrete_logarithm. Date accessed: 2018‑09‑20.
[6] A. Fiat and A. Shamir, "How to Prove Yourself: Practical Solutions to Identification and Signature Problems", CRYPTO 1986: pp. 186‑194 [online]. Available: https://link.springer.com/content/pdf/10.1007%2F3540477217_12.pdf. Date accessed: 2018‑09‑20.
[7] D. Bernhard, O. Pereira and B. Warinschi, "How not to Prove Yourself: Pitfalls of the Fiat-Shamir Heuristic and Applications to Helios" [online]. Available: https://link.springer.com/content/pdf/10.1007%2F9783642349614_38.pdf. Date accessed: 2018‑09‑20.
[8] "Pedersen-commitment: An Implementation of Pedersen Commitment Schemes" [online]. Available: https://hackage.haskell.org/package/pedersencommitment. Date accessed: 2018‑09‑25.
[9] "Zero Knowledge Proof Standardization - An Open Industry/Academic Initiative" [online]. Available: https://zkproof.org/documents.html. Date accessed: 2018‑09‑26.
[10] T. Ruffing and G. Malavolta, "Switch Commitments: A Safety Switch for Confidential Transactions" [online]. Available: https://eprint.iacr.org/2017/237.pdf. Date accessed: 2018‑09‑26.
[11] GitHub: "adjoint-io/Bulletproofs, Bulletproofs are Short Non-interactive Zero-knowledge Proofs that Require no Trusted Setup" [online]. Available: https://github.com/adjointio/Bulletproofs. Date accessed: 2018‑09‑10.
[12] Wikipedia: "Commitment Scheme" [online]. Available: https://en.wikipedia.org/wiki/Commitment_scheme. Date accessed: 2018‑09‑26.
[13] Cryptography Wikia: "Commitment Scheme" [online]. Available: http://cryptography.wikia.com/wiki/Commitment_scheme. Date accessed: 2018‑09‑26.
[14] Adjoint Inc. Documentation: "Pedersen Commitment Scheme" [online]. Available: https://www.adjoint.io/docs/cryptography.html#pedersencommitmentscheme. Date accessed: 2018‑09‑27.
[15] T. Pedersen, "Non-interactive and Information-theoretic Secure Verifiable Secret Sharing" [online]. Available: https://www.cs.cornell.edu/courses/cs754/2001fa/129.pdf. Date accessed: 2018‑09‑27.
[16] A. Sadeghi and M. Steiner, "Assumptions Related to Discrete Logarithms: Why Subtleties Make a Real Difference" [online]. Available: http://www.semper.org/sirene/publ/SaSt_01.dhetal.long.pdf. Date accessed: 2018‑09‑24.
[17] P. Sharma, A. Gupta and S. Sharma, "Intensified ElGamal Cryptosystem (IEC)", International Journal of Advances in Engineering & Technology, January 2012 [online]. Available: http://www.eijaet.org/media/58I6IJAET0612695.pdf. Date accessed: 2018‑10‑09.
[18] Y. Tsiounis and M. Yung, "On the Security of ElGamal Based Encryption" [online]. Available: https://drive.google.com/file/d/16XGAByoXse5NQl57v_GldJwzmvaQlS94/view. Date accessed: 2018‑10‑09.
[19] Wikipedia: "Decisional Diffie–Hellman Assumption" [online]. Available: https://en.wikipedia.org/wiki/Decisional_Diffie%E2%80%93Hellman_assumption. Date accessed: 2018‑10‑09.
[20] Wikipedia: "Arithmetic Circuit Complexity" [online]. Available: https://en.wikipedia.org/wiki/Arithmetic_circuit_complexity. Date accessed: 2018‑11‑08.
[21] Wikipedia: "Hadamard Product (Matrices)" [online]. Available: https://en.wikipedia.org/wiki/Hadamard_product_(matrices). Date accessed: 2018‑11‑12.
[22] "Dalek Cryptography - Crate Bulletproofs" [online]. Available: https://doc.dalek.rs/bulletproofs/index.html. Date accessed: 2018‑11‑12.
[23] "Programmable Constraint Systems for Bulletproofs" [online]. Available: https://medium.com/interstellar/programmableconstraintsystemsforbulletproofs365b9feb92f7. Date accessed: 2018‑11‑22.
[24] Inter/stellar website [online]. Available: https://interstellar.com. Date accessed: 2018‑11‑22.
[25] "Dalek Cryptography - Crate Merlin" [online]. Available: https://doc.dalek.rs/merlin/index.html. Date accessed: 2018‑11‑22.
[26] B. Franca, "Homomorphic Miniblockchain Scheme", April 2015 [online]. Available: http://cryptonite.info/files/HMBC.pdf. Date accessed: 2018‑11‑22.
[27] C. Franck and J. Großschädl, "Efficient Implementation of Pedersen Commitments Using Twisted Edwards Curves", University of Luxembourg [online]. Available: http://orbilu.uni.lu/bitstream/10993/33705/1/MSPN2017.pdf. Date accessed: 2018‑11‑22.
[28] A. Gibson, "An Investigation into Confidential Transactions", July 2018 [online]. Available: https://github.com/AdamISZ/ConfidentialTransactionsDoc/blob/master/essayonCT.pdf. Date accessed: 2018‑11‑22.
[29] "Dalek Cryptography - Module bulletproofs::range_proof_mpc" [online]. Available: https://docinternal.dalek.rs/bulletproofs/range_proof_mpc/index.html. Date accessed: 2019‑07‑10.
[30] "What is the difference between honest verifier zero knowledge and zero knowledge?" [Online]. Available: https://crypto.stackexchange.com/questions/40436/whatisthedifferencebetweenhonestverifierzeroknowledgeandzeroknowledge. Date accessed: 2019‑11‑12.
[31] "600.641 Special Topics in Theoretical Cryptography - Lecture 11: Honest Verifier ZK and Fiat-Shamir" [online]. Available: https://www.cs.jhu.edu/~susan/600.641/scribes/lecture11.pdf. Date accessed: 2019‑11‑12.
[32] Wikipedia: "Witness-indistinguishable Proof" [online]. Available: https://en.wikipedia.org/wiki/Witnessindistinguishable_proof. Date accessed: 2019‑11‑12.
Appendices
Appendix A: Definition of Terms
Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.
 Arithmetic Circuits: An arithmetic circuit $ C $ over a field $ F $ and variables $ (x_1, ..., x_n) $ is a directed acyclic graph whose vertices are called gates. Arithmetic circuits can alternatively be described as a list of addition and multiplication gates with a collection of linear consistency equations relating the inputs and outputs of the gates. The size of an arithmetic circuit is the number of gates in it, with the depth being the length of the longest directed path. Upper bounding the complexity of a polynomial $ f $ is to find any arithmetic circuit that can calculate $ f $, whereas lower bounding is to find the smallest arithmetic circuit that can calculate $ f $. An example of a simple arithmetic circuit with size six and depth two that calculates a polynomial is shown here ([11], [20]):
 Discrete Logarithm/Discrete Logarithm Problem (DLP): In the mathematics of real numbers, the logarithm $ \log_b a $ is a number $ x $ such that $ b^x=a $, for given numbers $ a $ and $ b $. Analogously, in any group $ G $, powers $ b^k $ can be defined for all integers $ k $, and the discrete logarithm $ \log_b a $ is an integer $ k $ such that $ b^k=a $. Algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen cyclic finite groups and cyclic subgroups of elliptic curves over finite fields has no efficient solution ([5], [16]).
 ElGamal Commitment/Encryption: An ElGamal commitment is a Pedersen Commitment^{def} with an additional commitment $ g^r $ to the randomness used. The ElGamal encryption scheme is based on the Decisional Diffie-Hellman (DDH) assumption and the difficulty of the DLP for finite fields. The DDH assumption states that it is infeasible for a Probabilistic Polynomial-time (PPT) adversary to solve the DDH problem ([1], [17], [18], [19]). Note: The ElGamal encryption scheme should not be confused with the ElGamal signature scheme.
 Fiat–Shamir Heuristic/Transformation: The Fiat–Shamir heuristic is a technique in cryptography to convert an interactive public-coin protocol (Sigma protocol) between a prover $ \mathcal{P} $ and a verifier $ \mathcal{V} $ into a one-message (non-interactive) protocol using a cryptographic hash function ([6], [7]).
 The prover $ \mathcal{P} $ uses a `Prove()` algorithm to calculate a commitment $ A $, with a statement $ Y $ that is shared with the verifier $ \mathcal{V} $ and a secret witness value $ w $ as inputs. The commitment $ A $ is then hashed to obtain the challenge $ c $, which is further processed with the `Prove()` algorithm to calculate the response $ f $. The single message sent to the verifier $ \mathcal{V} $ then contains the challenge $ c $ and response $ f $.
 The verifier $ \mathcal{V} $ is then able to compute the commitment $ A $ from the shared statement $ Y $, challenge $ c $ and response $ f $. The verifier $ \mathcal{V} $ then uses a `Verify()` algorithm to verify the combination of shared statement $ Y $, commitment $ A $, challenge $ c $ and response $ f $.
 A weak Fiat–Shamir transformation can be turned into a strong Fiat–Shamir transformation if the hashing function is applied to both the commitment $ A $ and the shared statement $ Y $ to obtain the challenge $ c $, as opposed to only the commitment $ A $.

 Hadamard Product: In mathematics, the Hadamard product is a binary operation that takes two matrices $ \mathbf {A} , \mathbf {B} $ of the same dimensions, and produces another matrix of the same dimensions where each element $ i,j $ is the product of elements $ i,j $ of the original two matrices. The Hadamard product $ \mathbf {A} \circ \mathbf {B} $ is different from normal matrix multiplication, most notably because it is also commutative $ [ \mathbf {A} \circ \mathbf {B} = \mathbf {B} \circ \mathbf {A} ] $ along with being associative $ [ \mathbf {A} \circ ( \mathbf {B} \circ \mathbf {C} ) = ( \mathbf {A} \circ \mathbf {B} ) \circ \mathbf {C} ] $ and distributive over addition $ [ \mathbf {A} \circ ( \mathbf {B} + \mathbf {C} ) = \mathbf {A} \circ \mathbf {B} + \mathbf {A} \circ \mathbf {C} ] $ ([21]).
$$ \mathbf {A} \circ \mathbf {B} = \mathbf {C} = (a_{11} \cdot b_{11} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{1m} \cdot b_{1m} \mspace{6mu} ; \mspace{6mu} . . . \mspace{6mu} ; \mspace{6mu} a_{n1} \cdot b_{n1} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{nm} \cdot b_{nm} ) $$
 Zero-knowledge Proof/Protocol: In cryptography, a zero-knowledge proof/protocol is a method by which one party (the prover $ \mathcal{P} $) can convince another party (the verifier $ \mathcal{V} $) that a statement $ Y $ is true, without conveying any information apart from the fact that the prover $ \mathcal{P} $ knows the value of $ Y $. The proof system must be complete, sound and zero-knowledge ([4], [9]).

Complete: If the statement is true, and both the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ follow the protocol, the verifier will accept.

Sound: If the statement is false, and the verifier $ \mathcal{V} $ follows the protocol, the verifier $ \mathcal{V} $ will not be convinced.

Zero-knowledge: If the statement is true, and the prover $ \mathcal{P} $ follows the protocol, the verifier $ \mathcal{V} $ will not learn any confidential information from the interaction with the prover $ \mathcal{P} $, apart from the fact that the statement is true.
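These properties can be illustrated with a toy Schnorr-style identification protocol. The parameters below are illustrative only (real deployments use large groups such as Ristretto or secp256k1); the final assertion demonstrates completeness, i.e. an honest prover always convinces the verifier:

```python
import secrets

# Toy Schnorr identification protocol (illustrative parameters only).
p = 2267                     # prime modulus
q = 103                      # prime order of the subgroup (q divides p - 1)
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

x = secrets.randbelow(q - 1) + 1   # prover's secret
Y = pow(g, x, p)                   # public statement: Y = g^x

# Prover commits to a random nonce.
r = secrets.randbelow(q - 1) + 1
A = pow(g, r, p)

# Verifier issues a random challenge.
c = secrets.randbelow(q)

# Prover responds; the response reveals nothing about x on its own,
# because r masks it.
s = (r + c * x) % q

# Completeness: g^s = g^(r + c*x) = A * Y^c, so the verifier accepts.
assert pow(g, s, p) == (A * pow(Y, c, p)) % p
```

Soundness follows because answering two distinct challenges for the same commitment would reveal $x$, which a prover without $x$ cannot do except with negligible probability.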

Contributors
 https://github.com/hansieodendaal
 https://github.com/neonknight64
 https://github.com/CjS77
 https://github.com/philiprza
 https://github.com/SarangNoether
 https://github.com/anselld
Pure-Rust Elliptic Curve Cryptography
Isis Agora Lovecruft
Cryptographer
Summary
Fast, Safe, Pure-Rust Elliptic Curve Cryptography, by Isis Lovecruft and Henry de Valence
This talk discusses the design and implementation of curve25519-dalek, a pure-Rust implementation of operations on the elliptic curve known as Curve25519.
They discuss the goals of the library and give a brief overview of the implementation strategy.
They also discuss features of the Rust language that allow them to achieve competitive performance without sacrificing safety or readability, and future features that could allow them to achieve more safety and more performance.
Finally, they discuss how the dalek library makes it easy to implement complex cryptographic primitives, such as zero-knowledge proofs.
The content in this video is not complicated, but it is an advanced topic, since it dives deep into the internals of how cryptographic operations are implemented on computers.
Video
Slides
zk-SNARKs
 Introduction
 What is ZKP? A Complete Guide to Zero-knowledge Proof
 Introduction to zk-SNARKs
 Comparing General-purpose zk-SNARKs
 Quadratic Arithmetic Programs: from Zero to Hero
 Explaining SNARKs Series: Part I to Part VII
 Overview
 Part I: Homomorphic Hidings
 Part II: Blind Evaluation of Polynomials
 Part III: The Knowledge of Coefficient Test and Assumption
 Part IV: How to make Blind Evaluation of Polynomials Verifiable
 Part V: From Computations to Polynomials
 Part VI: The Pinocchio Protocol
 Part VII: Pairings of Elliptic Curves
 zk-SHARKs: Combining Succinct Verification and Public Coin Setup
 References
Introduction
Zero-knowledge proof protocols have gained much attention in the past decade, due to the popularity of cryptocurrencies. A zero-knowledge Succinct Non-interactive ARgument of Knowledge (zk-SNARK), referred to here as an argument of knowledge, is a special kind of zero-knowledge proof. The difference between a proof of knowledge and an argument of knowledge is rather technical for the intended audience of this report: it lies in the distinction between statistical soundness and computational soundness. The technical reader is referred to [1] or [2].
zk-SNARKs have found many applications in zero-knowledge proving systems, libraries of proving systems such as libsnark and bellman, and general-purpose compilers from high-level languages such as Pinocchio. "DIZK: A Distributed Zero Knowledge Proof System" by Wu et al. [3] is one of the best papers on zk-SNARKs, at least from a cryptographer's point of view. Coins such as Zerocoin and Zcash use zk-SNARKs. The content reflected in this curated content report, although not all-inclusive, covers the necessary basics one needs to understand zk-SNARKs and their implementations.
What is ZKP? A Complete Guide to Zero-knowledge Proof
Hasib Anwar
Just a born geek
Overview
A zero-knowledge proof is a technique one uses to prove to a verifier that one has knowledge of some secret information, without disclosing the information. This is a powerful tool in the blockchain world, particularly in cryptocurrencies. The aim is to achieve a trustless network, i.e. anyone in the network should be able to verify information recorded in a block.
Summary
In this post, Anwar provides an excellent zero-knowledge infographic that summarizes what a zero-knowledge proof is, its main properties (completeness, soundness and zero-knowledge), as well as its use cases, such as authentication, secure data sharing and filesystem control. Find what Anwar calls a complete guide to zero-knowledge proofs here.
Introduction to zk-SNARKs
Dr Christian Reitwiessner
Ethereum Foundation
Overview
A typical zeroknowledge proof protocol involves at least two participants: the Verifier and the Prover. The Verifier sends a challenge to the Prover in the form of a computational problem. The Prover has to solve the computational problem and, without revealing the solution, send proof of the correct solution to the Verifier.
Summary
zk-SNARKs are important in blockchains for at least two reasons:
 Blockchains are by nature not scalable. They thus benefit from zk-SNARKs, which allow a verifier to verify a given proof of a computation without having to carry out the computation itself.
 Blockchains are public and need to be trustless, as explained earlier. The zero-knowledge property of zk-SNARKs, as well as the possibility of putting in place a so-called trusted setup, makes this almost possible.
Reitwiessner uses a mini 4 x 4 Sudoku challenge as an example of an interactive zero-knowledge proof. He explains how it would fall short of the zero-knowledge property without the use of homomorphic encryption and a trusted setup. Reitwiessner proceeds to explain how computations involving polynomials are better suited as challenges posed by the Verifier to the Prover.
Video
The slides from the talk can be found here.
Comparing General-purpose zk-SNARKs
Ronald Mannak
Open-source Blockchain Developer
Overview
Recently, zk-SNARK constructions such as Supersonic [4] and Halo [5] have been created mainly to improve the efficiency of proofs. The following article by Mannak gives a quick survey of the most recent developments, comparing general-purpose zk-SNARKs. The article is easy to read, and provides the technical reader with relevant references to scholarly research papers.
Summary
The main drawback of zk-SNARKs is their reliance on a common reference string that is created using a trusted setup. In this post, Mannak mentions three issues with reference strings or having a trusted setup:
 A leaked reference string can be used to create undetectable fake proofs.
 One setup is only applicable to one computation, thus making smart contracts impossible.
 Reference strings are not upgradeable. This means that a whole new ceremony is required, even for minor bug fixes in crypto coins.
After classifying zk-SNARKs according to the type of trusted setup they use, Mannak compares their proof and verification sizes, as well as their performance.
Quadratic Arithmetic Programs: from Zero to Hero
Vitalik Buterin
Co-founder of Ethereum
Overview
The zk-SNARK end-to-end journey is to create a function or a protocol that takes the proof, given by the Prover, and checks its veracity. In a zk-SNARK proof, a computation is verified step by step. To do so, the computation is first turned into an arithmetic circuit. Each of its wires is then assigned a value that results from feeding specific inputs to the circuit. Next, each computing node of the arithmetic circuit (called a “gate”, by analogy with electronic circuits) is transformed into a constraint that verifies that the output wire has the value it should have for the values assigned to the input wires. This process involves transforming statements or computational problems into various formats on which a zk-SNARK proof can be performed. The following pipeline depicts the process for achieving a zk-SNARK:
Computational Problem -> Arithmetic Circuit -> R1CS -> QAP -> Linear PCP -> Linear Interactive Proof -> zk-SNARK
Summary
In this post, Buterin explains how zk-SNARKs work, using an example that focuses on the first three steps of the process given above. Buterin explains how a computational problem can be written as an arithmetic circuit, converted into a rank-1 constraint system (R1CS), and how the R1CS is ultimately transformed into a quadratic arithmetic program.
Explaining SNARKs Series: Part I to Part VII
Ariel Gabizon
Engineer at Zcash
Overview
The explanation of zk-SNARKs given by Buterin above, and similar explanations by Pinto ([6], [7]), although excellent in clarifying the R1CS and Quadratic Arithmetic Program (QAP) concepts, do not explain how zero-knowledge is achieved in zk-SNARKs. For a step-by-step, mathematical explanation of how this is achieved, as used in Zcash, refer to the seven-part series listed here.
Part I: Homomorphic Hidings
This post explains how zk-SNARKs use homomorphic hiding, or homomorphic encryption, in order to achieve zero-knowledge proofs. Gabizon dives into the mathematics that underpins the cryptographic security of homomorphic encryption, afforded by the difficulty of solving discrete log problems in a finite group of large prime order.
Part II: Blind Evaluation of Polynomials
This post explains how the homomorphic property of these hidings extends naturally to linear combinations. Since any polynomial evaluated at a specific value $x = \bf{s} $ is a weighted linear combination of powers of $\bf{s}$, this property allows sophisticated zero-knowledge proofs to be set up.
For example, two parties can set up a zero-knowledge proof where the Verifier requests the Prover to prove knowledge of the "right" polynomial $P(x)$, without revealing $P(x)$ to the Verifier. All that the Verifier requests is for the Prover to evaluate $P(x)$ at a secret point $\bf{s}$, without learning what $\bf{s}$ is. Thus, instead of sending $\bf{s}$ in the open, the Verifier sends homomorphic hidings of the necessary powers of $\bf{s}$. The Prover then simply evaluates the right linear combination of the hidings, as dictated by the polynomial $P(x)$. This is how the Prover performs what is called a blind evaluation of the polynomial $P(x)$ at a secret point $\bf{s}$ known only to the Verifier.
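A minimal sketch of blind evaluation, using the toy hiding $E(x) = g^x \bmod p$ over a small prime-order group (the parameters and the polynomial below are illustrative assumptions only; real systems use large elliptic-curve groups):

```python
# Toy homomorphic hiding E(x) = g^x mod p (illustrative parameters only).
p_mod = 2267
q = 103                              # prime order of the subgroup
g = pow(2, (p_mod - 1) // q, p_mod)  # generator of the order-q subgroup

def E(x):
    return pow(g, x % q, p_mod)

# Verifier's secret point s; it sends only the hidings E(s^0), ..., E(s^d).
s = 17
d = 3
hidings = [E(pow(s, i, q)) for i in range(d + 1)]

# Prover's polynomial P(x) = 3x^3 + x + 5 (coefficients low to high).
coeffs = [5, 1, 0, 3]

# Blind evaluation: E(P(s)) = prod E(s^i)^{c_i}, computed from the hidings
# alone, without ever learning s.
blind = 1
for c, h in zip(coeffs, hidings):
    blind = (blind * pow(h, c, p_mod)) % p_mod

# The Verifier, who knows s, can check the result directly.
P_at_s = sum(c * pow(s, i, q) for i, c in enumerate(coeffs)) % q
assert blind == E(P_at_s)
```

The key step is that raising a hiding to a coefficient and multiplying hidings corresponds to scaling and adding in the exponent, which is exactly the linear combination that defines $P(\bf{s})$.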
Part III: The Knowledge of Coefficient Test and Assumption
In this post, Gabizon notes that it is necessary to force the Prover to comply with the rules of the protocol. Although this is covered in the next part of the series, in this post he considers the Knowledge of Coefficient (KC) Test, as well as the corresponding KC assumption.
The KC Test is in fact a request for a proof, in the form of a challenge that a Verifier poses to a Prover. The Verifier sends a pair $( a, b )$ of elements of a prime field, where $b = \alpha \cdot a$, to the Prover. The Verifier challenges the Prover to produce a similar pair $( a', b' )$, where $b' = \alpha \cdot a'$ for the same scalar $\alpha$. The KC assumption is that if the Prover succeeds with non-negligible probability, then the Prover knows the ratio between $a$ and $a'$. Gabizon explains how these two concepts can be formalized using an extractor of the Prover.
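A toy sketch of the KC Test, written multiplicatively over a small group so that "$\alpha \cdot a$" becomes $a^\alpha$ (illustrative parameters only):

```python
# Toy Knowledge-of-Coefficient test in a small multiplicative group
# (illustrative parameters only).
p_mod = 2267
q = 103
g = pow(2, (p_mod - 1) // q, p_mod)

alpha = 29                    # Verifier's secret scalar
a = pow(g, 11, p_mod)         # a group element chosen by the Verifier
b = pow(a, alpha, p_mod)      # the alpha-pair: b = a^alpha

# A Prover who does not know alpha can only respond by scaling the pair it
# received by some gamma of its own choosing:
gamma = 42
a_prime = pow(a, gamma, p_mod)
b_prime = pow(b, gamma, p_mod)

# The response is again a valid alpha-pair: b' = (a')^alpha, and the Prover
# "knows" gamma, the ratio relating a to a'.
assert b_prime == pow(a_prime, alpha, p_mod)
```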
Part IV: How to make Blind Evaluation of Polynomials Verifiable
In this part of the series, Gabizon explains how to make the blind evaluation of polynomials of Part II above verifiable. This requires an extension of the Knowledge of Coefficient Assumption considered in Part III. Due to the homomorphic property of the hiding function used, the Prover is able to receive several hidings of $\alpha$-pairs from the Verifier, evaluate the polynomial $P(x)$ on a particular linear combination of these hidings, and send the resulting pair to the Verifier. Now, according to the extended Knowledge of Coefficient Assumption of degree $d$, the Verifier can know, with high probability, that the Prover knows the "right" polynomial $P(x)$, without it being disclosed.
Part V: From Computations to Polynomials
This post aims to translate statements that are to be proved and verified into the language of polynomials. Gabizon explains the same process discussed above by Buterin, for transforming a computational problem into an arithmetic circuit and ultimately into a QAP. However, unlike Buterin, Gabizon does not mention constraint systems.
Part VI: The Pinocchio Protocol
The Pinocchio protocol is used as an example of how the QAP computed in the previous parts of this series can be used between the Prover and the Verifier to achieve a zero-knowledge proof, with negligible probability that the Verifier would accept a wrong polynomial as correct. The low probability is guaranteed by a well-known theorem that "two different polynomials of degree at most $2d$ can agree on at most $2d$ points in the given prime field". Gabizon further discusses how to restrict the Prover to choosing their polynomials according to the assignment $\bf{s}$ given by the Verifier, and how the Prover can use randomly chosen field elements to blind all the information they send to the Verifier.
Part VII: Pairings of Elliptic Curves
This part of the series aims to set up a Common Reference String (CRS) model that can be used to convert the verifiable blind evaluation of the polynomial of Part IV into a non-interactive proof system. A homomorphic hiding that supports both addition and multiplication is needed for this purpose. Such a homomorphic hiding is created from what are known as Tate pairings. Since Tate pairings emanate from elliptic curve groups, Gabizon starts by defining these groups.
The Pinocchio SNARK, however, uses a relaxed notion of a non-interactive proof, in which the use of a CRS is allowed. The CRS is created before any proofs are constructed, according to a certain randomized process, and is broadcast to all parties. The assumption here is that the randomness used in creating the CRS is not known to any of the parties.
The intended non-interactive evaluation protocol has three parts: Setup, Proof and Verification. In the Setup, the CRS and a random field element $\bf{s}$ are used to calculate the Verifier's challenge (i.e. the set of $\alpha$-pairs in Part IV).
zk-SHARKs: Combining Succinct Verification and Public Coin Setup
Madars Virza
Scientist, MIT
Background
Most of the research done on zero-knowledge proofs has been about the efficiency of these types of proofs and making them more practical, especially in cryptocurrencies. One of the most recent innovations is that of the so-called zk-SHARKs (short for zero-knowledge Succinct Hybrid ARguments of Knowledge), designed by Mariana Raykova, Eran Tromer and Madars Virza.
Summary
Virza starts with a concise survey of the best zk-SNARK protocols and their applications, while giving an assessment of the efficiency of zero-knowledge proof implementations in the context of blockchains. He mentions that although zero-knowledge proofs have found practical applications in privacy-preserving cryptocurrencies, privacy-preserving smart contracts, proof of regulatory compliance and blockchain-based sovereign identity, they still have a few shortcomings. While QAP-based zero-knowledge proofs can execute fast verifications, they still require a trusted setup. Also, in Probabilistically Checkable Proof (PCP)-based zk-SNARKs, the speed of verification decays with increasing statement size.
Virza mentions that the danger of slow verification is that it can tempt miners to skip validation of transactions. This can cause forks such as the July 2015 Bitcoin fork. He uses the Bitcoin fork example and slow verification as motivation for a zero-knowledge protocol that allows multiple verification modes. This allows miners to carry out an optimistic verification without losing much time, and later check the validity of transactions using prudent verification. The zk-SHARK is introduced as one such zero-knowledge protocol that implements these two verification modes. It is a hybrid, as it incorporates a Non-interactive Zero-knowledge (NIZK) proof inside a SNARK. The internal design of the NIZK verifier is algebraic in nature, using a new compilation technique for linear PCPs. The special-purpose SNARK, which constitutes the main part of the zk-SHARK, is dedicated to verifying an encoded polynomial delegation problem.
The design of zk-SHARKs is ingenious, and aims to avoid unnecessary blockchain forks.
Video
The slides from the talk can be found here.
References
[1] G. Brassard, D. Chaum and C. Crepeau, "Minimum Disclosure Proofs of Knowledge" [online]. Available: http://crypto.cs.mcgill.ca/~crepeau/PDF/BCC88jcss.pdf. Date accessed: 2019‑12‑07.
[2] M. Nguyen, S. J. Ong and S. Vadhan, "Statistical Zero-knowledge Arguments for NP from any One-way Function (Extended Abstract)" [online]. Available: http://people.seas.harvard.edu/~salil/research/SZKargsfocs.pdf. Date accessed: 2019‑12‑07.
[3] H. Wu, W. Zheng, A. Chiesa, R. A. Popa and I. Stoica, "DIZK: A Distributed Zero Knowledge Proof System", UC Berkeley [online]. Available: https://www.usenix.org/system/files/conference/usenixsecurity18/sec18wu.pdf. Date accessed: 2019‑12‑07.
[4] B. Bünz, B. Fisch and A. Szepieniec, "Transparent SNARKs from DARK Compilers" [online]. Available: https://eprint.iacr.org/2019/1229.pdf. Date accessed: 2019‑12‑07.
[5] S. Bowe, J. Grigg and D. Hopwood, "Halo: Recursive Proof Composition without a Trusted Setup" [online]. Available: https://eprint.iacr.org/2019/1021.pdf. Date accessed: 2019‑12‑07.
[6] A. Pinto, "Constraint Systems for ZK SNARKs" [online]. Available: http://coderserrand.com/constraintsystemsforzksnarks/. Date accessed: 2019‑03‑06.
[7] A. Pinto, "The Vanishing Polynomial for QAPs", 23 March 2019 [online]. Available: http://coderserrand.com/thevanishingpolynomialforqaps/. Date accessed: 2019‑10‑10.
Rank-1 Constraint System with Application to Bulletproofs
 Introduction
 Arithmetic Circuits
 Rank-1 Constraint Systems
 From Arithmetic Circuits to Programmable Constraint Systems for Bulletproofs
 Interstellar's Bulletproof Constraint System
 Conclusions, Observations and Recommendations
 References
 Appendix
 Contributors
Introduction
This report explains the technical underpinnings of Rank-1 Constraint Systems (R1CSs) as applied to Bulletproofs.
The literature on the use of R1CSs in zero-knowledge (ZK) proofs, for example in zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs), shows that this mathematical tool is used simply as one part of many in a complex process towards achieving the proof [1]. Not much attention is given to it, not even in explaining what "rank-1" actually means. Although the terminology is similar to the traditional rank of a matrix in linear algebra, examples on the Internet do not yield a reduced matrix with only one non-zero row or column.
R1CSs became more prominent for Bulletproofs due to research work done by Cathie Yun and her colleagues at Interstellar. The constraint system, in particular the R1CS, is used as an add-on to Bulletproof protocols, as the title of Yun's article, "Building on Bulletproofs" [2], suggests. One of the Interstellar team's goals is to use the constraint system in their Confidential Asset Protocol, called the Cloak, and in their envisaged Spacesuit. Although their work on using R1CS is research in progress, their detailed notes on constraint systems and their implementation in Rust are available in [3].
The aim of this report is to:
 highlight the connection between arithmetic circuits and R1CSs;
 clarify the difference R1CSs make in Bulletproofs and in range proofs;
 compare ZK proofs for arithmetic circuits and programmable constraint systems.
Arithmetic Circuits
Overview
Many problems in Symbolic Computation^{def} and cryptography can be expressed as the task of computing some polynomials. Arithmetic circuits are the most standard model for studying the complexity of such computations. ZK proofs form a core building block of many cryptographic protocols. Of special interest are ZK proof systems capable of proving the correct computation of arbitrary arithmetic circuits ([6], [7]).
Definition of Arithmetic Circuit
An arithmetic circuit $\mathcal{A}$ over the field^{def} $\mathcal{F}$ and the set of variables $X = \lbrace {x_1,\dots,x_n} \rbrace$ is a directed acyclic graph such that the vertices of $\mathcal{A}$ are called gates, while the edges are called wires [7]:
 A gate is of in-degree $l$ if it has $l$ incoming wires and, similarly, of out-degree $k$ if it has $k$ outgoing wires.
 Every gate in $\mathcal{A}$ of in-degree 0 is labeled with either a variable ${x_i}$ or some field element from $\mathcal{F}$. Such a gate is called an input gate. Every gate of out-degree 0 is called an output gate or the root.
 Every other gate in $\mathcal{A}$ is labeled with either $\otimes$ or $\oplus$, and called a multiplication gate or addition gate, respectively.
 An arithmetic circuit is called a formula if it is a directed tree whose edges are directed from the leaves to the root.
 The depth of a node is the number of edges in the longest directed path between the node and the root.
 The size of $\mathcal{A}$, denoted $|\mathcal{A}|$, is the number of wires, i.e. edges, in $\mathcal{A}$.
Arithmetic circuits of interest, and most applicable to this report, are those with gates of in-degree 2 and out-degree 1. A typical multiplication gate has a left input $a_L$, a right input $a_R$ and an output $a_O$ (shown in Figure 1). Note that $ a_L \cdot a_R - a_O = 0 $.
In cases where the inputs and outputs are all vectors of size $n$, i.e. $\mathbf{a_L} = ( a_{L, 1}, a_{L, 2}, \dots, a_{L, n})$, $\mathbf{a_R} = ( a_{R, 1}, a_{R, 2}, \dots, a_{R, n})$ and $\mathbf{a_O} = ( a_{O, 1}, a_{O, 2}, \dots, a_{O, n})$, then multiplication of $ \mathbf{a_L} $ and $ \mathbf{a_R} $ is defined as an entry-wise product called the Hadamard product:
$$ \mathbf{a_L}\circ \mathbf{a_R} = \big(( a_{L, 1} \cdot a_{R, 1} ), ( a_{L, 2} \cdot a_{R, 2} ), \dots, ( a_{L, n} \cdot a_{R, n} ) \big) = \mathbf{a_O} $$
The equation ${ \mathbf{a_L}\circ \mathbf{a_R} = \mathbf{a_O} }$ is referred to as a multiplicative constraint, but is also known as the Hadamard relation [4].
Example of Arithmetic Circuit
An arithmetic circuit computes a polynomial in a natural way, as shown in this example. Consider the following arithmetic circuit $\mathcal{A}$ with inputs $\lbrace x_1, x_2, 1 \rbrace$ over some field $\mathcal{F}$:
The output of $\mathcal{A}$ above is the polynomial $x^2_1 \cdot x_2 + x_1 + 1 $ of total degree three. A typical computational problem would involve finding the solution to, let's say, $x^2_1 \cdot x_2 + x_1 + 1 = 22$. Or, in a proof of knowledge scenario, the prover has to prove to the verifier that they have the correct solution to such an equation. Following the wires in the example shows that an arithmetic circuit actually breaks down the given computation into smaller equations corresponding to each gate:
$$ u = x_1 \cdot x_1 \text{,} \ \ v = u \cdot x_2 \text{,} \ \ y = x_1 + 1 \ \text {and} \ z = v + y $$
The variables $u, v$ and $y$ are called auxiliary variables or low-level variables, while ${ z }$ is the output of $ \mathcal{A} $. Thus, in addition to computing polynomials naturally, an arithmetic circuit helps to reduce a computation to a low-level language involving only two variables, one operation and an output.
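The gate-by-gate evaluation above can be checked directly. Taking $x_1 = 3$ and $x_2 = 2$ (the same assignment used in the R1CS example later in this report):

```python
# Evaluating the example circuit gate by gate for inputs x1 = 3, x2 = 2.
x1, x2 = 3, 2
u = x1 * x1      # first multiplication gate
v = u * x2       # second multiplication gate
y = x1 + 1       # addition gate
z = v + y        # output gate

assert (u, v, y, z) == (9, 18, 4, 22)
# The circuit output matches the polynomial it computes.
assert z == x1**2 * x2 + x1 + 1
```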
ZK proofs in general require that statements to be proved are expressed in their simplest terms, for efficiency. A ZK proof's end-to-end journey is to create a function to write proofs about, yet such a function needs to work with specific constructs. In making ZK proofs more efficient, "these functions have to be specified as sequences of very simple terms, namely, additions and multiplications of only two terms in a particular field" [8]. This is where arithmetic circuits come in.
In verifying a ZK proof, the verifier needs to carry out a step-by-step check of the computations. When these computations are expressed in terms of arithmetic circuits, the process translates to checking whether the output $ a_O $ of each gate is correct with respect to the given inputs $ a_L $ and $ a_R $, that is, testing whether $ a_L \cdot a_R - a_O = 0 $ for each multiplication gate.
Rank-1 Constraint Systems
Overview
Although a computational problem is typically expressed in terms of a high-level programming language, a ZK proof requires it to be expressed in terms of a set of quadratic constraints, which are closely related to circuits of logical gates.
Besides Bulletproof constructions, R1CS-type constraint systems have featured in several constructions of zk-SNARKs, where at times they were simply referred to as quadratic constraints or quadratic equations; refer to [6], [9] and [10].
Definition of Constraint System
A constraint system was originally defined by Bootle et al. in [5], who first expressed arithmetic circuit satisfiability in terms of the Hadamard relation and linear constraints:
$$ \mathbf{W_L} \cdot \mathbf{a_L} + \mathbf{W_R} \cdot \mathbf{a_R} + \mathbf{W_O} \cdot \mathbf{a_O} = \mathbf{c} $$
where $\mathbf{c}$ is a vector of constant terms used in linear constraints, and $\mathbf{W_L}, \mathbf{W_R}$ and $\mathbf{W_O}$ are weights applied to the respective input and output vectors. Bünz et al. [4] incorporated a vector $\mathbf{v}$ and a vector of weights $\mathbf{W_V}$ into the Bootle et al. definition: $$ \mathbf{W_L} \cdot \mathbf{a_L} + \mathbf{W_R} \cdot \mathbf{a_R} + \mathbf{W_O} \cdot \mathbf{a_O} = \mathbf{W_V} \cdot \mathbf{v} + \mathbf{c} $$
where $\mathbf{v} = {(v_1, v_2, \dots, v_m )}$ is a secret vector of openings ${v_i}$ of the Pedersen commitments $V_i$, $i \in \lbrace 1, 2, \dots, m \rbrace$, and $\mathbf{W_V}$ is a vector of weights for all commitments $V_i$. This incorporation of the secret vector $\mathbf{v}$ and the corresponding vector of weights $\mathbf{W_V}$ into the linear consistency constraints enabled them to provide a protocol for a more general setting [4, page 24].
The Dalek team gives a more general definition of a constraint system [3]. A constraint system is a collection of arithmetic constraints over a set of variables. There are two kinds of variables in the constraint system:
 ${m}$ high-level variables, the input secrets ${ \mathbf{v}}$;
 ${n}$ low-level variables, the internal input vectors ${ \mathbf{a}_L}$ and ${ \mathbf{a}_R}$, and output vectors ${ \mathbf{a}_O }$ of the multiplication gates.
Specifically, an R1CS is a system that consists of two sets of constraints [3]:
 ${n}$ multiplicative constraints, $ \mathbf{ a_L \circ a_R = a_O } $; and
 ${q}$ linear constraints, $ \mathbf{W_L} \cdot \mathbf{a_L} + \mathbf{W_R} \cdot \mathbf{a_R} + \mathbf{W_O} \cdot \mathbf{a_O} = \mathbf{W_V} \cdot \mathbf{v} + \mathbf{c} $.
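A minimal toy instance of these two constraint sets, for a single multiplication gate with one linear constraint tying its output to a committed value (this is an illustrative sketch, not the Dalek API):

```python
# One multiplication gate: a_L * a_R = a_O, with the output tied to a
# committed value v by a single linear constraint (toy instance).
a_L, a_R, a_O = [3], [4], [12]
v = [12]        # opening of the (single) Pedersen commitment
c = [0]         # constant terms of the linear constraints

def matvec(W, x):
    # Multiply matrix W (list of rows) by vector x.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Multiplicative constraints: a_L ∘ a_R = a_O.
assert [l * r for l, r in zip(a_L, a_R)] == a_O

# Linear constraint: 0·a_L + 0·a_R + 1·a_O = 1·v + 0, i.e. the gate
# output equals the committed value.
W_L, W_R, W_O, W_V = [[0]], [[0]], [[1]], [[1]]
lhs = [a + b + d for a, b, d in
       zip(matvec(W_L, a_L), matvec(W_R, a_R), matvec(W_O, a_O))]
rhs = [a + b for a, b in zip(matvec(W_V, v), c)]
assert lhs == rhs
```

A real constraint system would carry many such gates and constraints at once, with the weight matrices selecting and combining wires as the statement requires.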
R1CS Definition of zk-SNARKs
Arithmetic circuits and R1CS are more naturally applied to zk-SNARKs, and these are implemented in cryptocurrencies such as Zerocoin and Zcash; refer to [1]. To illustrate the simplicity of this concept, a definition (from [11]) of an R1CS as it applies to zk-SNARKs is provided in this paragraph.
An R1CS is a sequence of groups of three vectors ${ \bf{a_L}}, { \bf{a_R}}, { \bf{a_O}},$ and the solution to an R1CS is a vector ${ \bf{s}}$ that satisfies the equation:
$$ { \langle {\mathbf{a_L}, \mathbf{s}} \rangle \cdot \langle {\mathbf{a_R}, \mathbf{s}} \rangle - \langle {\mathbf{a_O}, \mathbf{s}} \rangle = 0 } $$
where
$$ \langle {\mathbf{a_L}, \mathbf{s}} \rangle = a_{L, 1} \cdot s_1 + a_{L, 2} \cdot s_2 + \cdots + a_{L, n} \cdot s_n $$
which is the inner product of the vectors $ \mathbf{a_{L}} = (a_{L,1}, a_{L,2}, ..., a_{L,n} )$ and $ {\mathbf{s}} = (s_1, s_2, ..., s_n) $.
Example of Rank1 Constraint System
One solution to the equation ${x^2_1 x_2 + x_1 + 1 = 22}$, from the preceding example of an arithmetic circuit, is ${ x_1 = 3}$ and ${ { x_2 = 2 }}$ belonging to the appropriate field ${ \mathcal{F}}$. Thus the solution vector ${ { s = ( const, x_1, x_2, z, u, v, y )}}$ becomes ${ { s = ( 1, 3, 2, 22, 9, 18, 4 )}}$.
It is easy to check that the R1CS for the computation problem in the preceding example is as follows (one need only test whether ${ \langle {\mathbf{a_L}, \mathbf{s}} \rangle \cdot \langle {\mathbf{a_R}, \mathbf{s}} \rangle - \langle {\mathbf{a_O}, \mathbf{s}} \rangle = 0}$ for each equation).
| Equation | Rank-1 Constraint System Vectors |
|---|---|
| ${ u = x_1\cdot x_1}$ | $ {\bf{a_L}} = ( 0, 1, 0, 0, 0, 0, 0 ), \ \ {\bf{a_R}} = ( 0, 1, 0, 0, 0, 0, 0 ),\ \ {\bf{a_O}} = ( 0, 0, 0, 0, 1, 0, 0 ) $ |
| $ { v = u\cdot x_2 }$ | $ {\bf{a_L}} = ( 0, 0, 0, 0, 1, 0, 0 ),\ \ {\bf{a_R}} = ( 0, 0, 1, 0, 0, 0, 0 ),\ \ {\bf{a_O}} = ( 0, 0, 0, 0, 0, 1, 0 ) $ |
| $ { y = 1\cdot( x_1 + 1 ) } $ | ${\bf{a_L}} = ( 1, 1, 0, 0, 0, 0, 0 ),\ \ {\bf{a_R}} = ( 1, 0, 0, 0, 0, 0, 0 ),\ \ {\bf{a_O}} = ( 0, 0, 0, 0, 0, 0, 1 ) $ |
| $ { z = 1\cdot( v + y )} $ | ${\bf{a_L}} = ( 0, 0, 0, 0, 0, 1, 1 ),\ \ {\bf{a_R}} = ( 1, 0, 0, 0, 0, 0, 0 ),\ \ {\bf{a_O}} = ( 0, 0, 0, 1, 0, 0, 0 )$ |
In a more formal definition, an R1CS is a set of three matrices ${\bf{ A_L, A_R }}$ and ${\bf A_O}$, where the rows of each matrix are formed by the corresponding vectors $ {\bf{a_L }}$, ${ \bf{a_R }}$ and ${ \bf{a_O}} $, respectively, as shown in Table 1:
$$ \bf{A_L} = \bf{\begin{bmatrix} 0&1&0&0&0&0&0 \\ 0&0&0&0&1&0&0 \\ 1&1&0&0&0&0&0 \\ 0&0&0&0&0&1&1 \end{bmatrix}} \text{,} \quad \bf{A_R} = \bf{\begin{bmatrix} 0&1&0&0&0&0&0 \\ 0&0&1&0&0&0&0 \\ 1&0&0&0&0&0&0 \\ 1&0&0&0&0&0&0 \end{bmatrix}} \text{,} \quad \bf{A_O} = \bf{\begin{bmatrix} 0&0&0&0&1&0&0 \\ 0&0&0&0&0&1&0 \\ 0&0&0&0&0&0&1 \\ 0&0&0&1&0&0&0 \end{bmatrix}} $$
Observe that ${ \bf{ (A_L * s^T) \cdot (A_R * s^T ) - (A_O * s^T)} = 0 }$, where "$ * $" denotes matrix multiplication and ${ \bf s^T}$ is the transpose of the solution vector ${ \bf{s}}$ [8].
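The vanishing of $\langle \mathbf{a_L}, \mathbf{s} \rangle \cdot \langle \mathbf{a_R}, \mathbf{s} \rangle - \langle \mathbf{a_O}, \mathbf{s} \rangle$ for every row of the example can be checked directly:

```python
# Checking the example R1CS with the solution vector
# s = (const, x1, x2, z, u, v, y) = (1, 3, 2, 22, 9, 18, 4).
s = [1, 3, 2, 22, 9, 18, 4]

A_L = [[0, 1, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0],
       [1, 1, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 1, 1]]
A_R = [[0, 1, 0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0, 0]]
A_O = [[0, 0, 0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0, 0, 1],
       [0, 0, 0, 1, 0, 0, 0]]

def dot(a, b):
    # Inner product <a, b>.
    return sum(x * y for x, y in zip(a, b))

# Each gate constraint <aL, s> * <aR, s> - <aO, s> must vanish.
for aL, aR, aO in zip(A_L, A_R, A_O):
    assert dot(aL, s) * dot(aR, s) - dot(aO, s) == 0
```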
From Arithmetic Circuits to Programmable Constraint Systems for Bulletproofs
Interstellar's "Programmable Constraint Systems for Bulletproofs" [12] is an extension of "Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting" by Bootle et al. [5], enabling protocols that support proving arbitrary statements in ZK using constraint systems. Although the focus here is on the two works of research [5] and [12], the Bulletproofs paper by Bünz et al. [4] is recognized here as a bridge between the two. Table 2 compares these three research documents.
All these ZK proofs are based on the difficulty of the discrete logarithm problem.
| No. | Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting [5] (2016) | Bulletproofs: Short Proofs for Confidential Transactions and More [4] (2017) | Programmable Constraint Systems [12] (2018) |
|---|---|---|---|
| 1. | Introduces the Hadamard relation and linear constraints. | Turns the Hadamard relation and linear constraints into a single linear constraint; these are in fact the R1CS. | Generalizes constraint systems and uses so-called gadgets as building blocks for constraint systems. |
| 2. | Improves on Groth's work [13] on ZK proofs, reducing a $\sqrt{N}$ complexity to $6\log_2(N) + 13$, where $N$ is the circuit size. | Improves on Bootle et al.'s work [5], reducing a $6\log_2(N) + 13$ complexity to $2\log_2(N) + 13$, where $N$ is the circuit size. | Adds constraint systems to Bünz et al.'s work on Bulletproofs, which are short proofs. The complexity advantage is seen in proving several statements at once. |
| 3. | Introduces logarithm-sized inner-product ZK proofs. | Introduces Bulletproofs, extending proofs to proofs of arbitrary statements. The halving method is used on the inner products, resulting in the above reduction in complexity. | Introduces gadgets, which are add-ons to an ordinary ZK proof. A range proof is an example of a gadget. |
| 4. | Uses the Fiat-Shamir heuristic in order to achieve non-interactive ZK proofs. | Bulletproofs also use the Fiat-Shamir heuristic to achieve non-interaction. | Merlin transcripts are specifically used for a Fiat-Shamir transformation to achieve non-interaction. |
| 5. | Pedersen commitments are used in order to achieve the ZK property. | Eliminates the need for a commitment algorithm by including Pedersen commitments among the inputs to the verification proof. | Low-level variables, representing inputs and outputs to multiplication gates, are computed per proof and committed using a single vector Pedersen commitment. |
| 6. | The ZK proof involves conversion of the arithmetic circuit into an R1CS. | The mathematical expression of a Hadamard relation is closely related to an arithmetic circuit. The use of this relation plus linear constraints as a single constraint amounts to using a constraint system. | Although arithmetic circuits are not explicitly used here, the Hadamard relation remains the same as first seen in Bulletproofs, more so in the inner-product proof. |
Interstellar is building an Application Programming Interface (API) that allows developers to choose their own collection of gadgets suitable for the protocol they wish to develop, as discussed in the next section.
Interstellar's Bulletproof Constraint System
Overview
The Interstellar team paved the way for the implementation of several cryptographic primitives in the Rust language, including Ristretto [14], a construction of a prime-order group using a cofactor-8 curve known as Curve25519. They reported on how they implemented Bulletproofs in Henry de Valence's article, "Bulletproofs Pre-release" [15]. An update on their progress in extending the implementation of Bulletproofs [16] to a constraint system API, which enables ZK proofs of arbitrary statements, was given in Cathie Yun's article, "Programmable Constraint Systems for Bulletproofs" [12]. The Hadamard relation and linear constraints together form the constraint system as formalized by the Interstellar team. Most of the mathematical background of these constraints and Bulletproofs is contained in Bünz et al.'s paper [4].
Dalek's constraint system, as defined earlier in Definition of Constraint System, is a collection of two types of arithmetic constraints, multiplicative constraints and linear constraints, over a set of high-level and low-level variables.
Easy-to-build Constraint Systems
In this Bulletproofs framework, a prover can build a constraint system in two steps:
- Firstly, committing to secret inputs and allocating high-level variables corresponding to those inputs.
- Secondly, selecting a suitable combination of multiplicative constraints and linear constraints, as well as requesting a random scalar in response to the high-level variables already committed [3].
Reference [17] gives an excellent outline of ZK proofs that use Bulletproofs:
- The prover commits to the value(s) that they want to prove knowledge of.
- The prover generates the proof by enforcing the constraints over the committed values and any additional public values. The constraints might require the prover to commit to some additional variables.
- The prover sends the verifier all the commitments made in step 1 and step 2, along with the proof from step 2.
- The verifier now verifies the proof by enforcing the same constraints over the commitments, plus any public values.
About Gadgets
Consider a verifiable shuffle: given two lists of committed values $\{ x_1, x_2, ..., x_n \}$ and $\{ y_1, y_2, ..., y_n \}$, prove that the second list is a permutation of the first. Bünz et al. [4, page 5] mention that the use of Bulletproofs improves the complexity of such a verifiable shuffle to size $\mathcal{O}(log(n))$, compared to previous implementation results. Although not referred to as a gadget in the paper, this is in fact a shuffle gadget. The term "gadget" was used and popularized by the Interstellar team, who introduced gadgets as building blocks of constraint systems; refer to [2].
A shuffle gadget (Figure 3) is any function whose outputs are a permutation of its inputs. By definition of a permutation, the number of inputs to a shuffle gadget is always the same as the number of outputs.
The Interstellar team mentions other gadgets: "merge", "split" and a "range proof", implemented in their Confidential Assets scheme called the Cloak. Just as a shuffle gadget creates constraints that prove that two sets of variables are equal up to a permutation, a range-proof gadget checks that a given value is in the interval $[ 0, 2^n )$, where $n$ is the size of the input vector [3].
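In essence, a range-proof gadget allocates the $n$ bits of the committed value as auxiliary variables and enforces two kinds of constraints: every bit $b_i$ must satisfy $b_i(b_i - 1) = 0$, and the bits must recombine to the value itself, $v = \sum_i b_i 2^i$, which forces $v \in [0, 2^n)$. The following stand-alone Rust sketch (plain integers, no commitments and no zero knowledge; the function name is illustrative) checks exactly these two constraints:

```rust
/// Illustrative only: the arithmetic constraints a range-proof gadget
/// enforces, checked directly on clear values (no commitments, no ZK).
fn range_gadget_constraints_hold(v: u64, bits: &[u64]) -> bool {
    // Constraint 1: every allocated bit variable is 0 or 1,
    // i.e. b_i * (b_i - 1) = 0 (wrapping arithmetic stands in for
    // field arithmetic).
    let booleanity = bits.iter().all(|&b| b.wrapping_mul(b.wrapping_sub(1)) == 0);
    // Constraint 2: the bits recombine to v, i.e. v = sum b_i * 2^i,
    // which places v in [0, 2^n) for n = bits.len().
    let recombination: u64 = bits.iter().enumerate().map(|(i, &b)| b << i).sum();
    booleanity && recombination == v
}

fn main() {
    // 13 = 1 + 4 + 8 lies in [0, 2^4).
    assert!(range_gadget_constraints_hold(13, &[1, 0, 1, 1]));
    // 16 cannot be represented with 4 bits, so the constraints fail.
    assert!(!range_gadget_constraints_hold(16, &[0, 0, 0, 0]));
    println!("range gadget constraints checked");
}
```

A real gadget, of course, enforces these same equations inside the constraint system over committed variables rather than over clear values.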
Gadgets in their simplest form merely receive some variables as inputs and produce corresponding output values. However, they may allocate more variables, sometimes called auxiliary variables, for internal use, and produce constraints involving all these variables. The main advantage of gadgets is that they are composable, thus a more complex gadget can always be created from a number of single gadgets. Interstellar's Bulletproofs API allows developers to choose their own collection of gadgets suitable for the protocol they wish to develop.
Interstellar's Concluding Remarks
Cathie Yun reports in [12] that their "work on Cloak and Spacesuit is far from complete" and mentions that they still have two goals to achieve:
- Firstly, in order to "ensure that challenge-based variables cannot be inspected" and prevent the user from accidentally breaking soundness of their gadgets, the Bulletproofs protocol needs to be slightly extended, enabling it to commit "to a portion of low-level variables using a single vector Pedersen commitment without an overhead of additional individual high-level Pedersen commitments" [12].
- Secondly, to "improve privacy in Cloak" by enabling "multiparty proving of a single constraint system". That is, "building a joint proof of a single constraint system by multiple parties, without sharing their secret inputs with each other".
All in all, Yun regards constraint systems as "very powerful because they can represent any efficiently verifiable program" [2].
R1CS Factorization Example for Bulletproofs
In [17], Harchandani explores the Dalek Bulletproofs API, using various examples. Of interest is the factorization problem, which is one out of the six R1CS Bulletproof examples discussed in the article. The computational challenge is to "prove knowledge of factors p and q of a given number r without revealing the factors".
Table 3 gives an outline of the description and the code lines of the example. Harchandani's complete code of this example can be found in [18]. Important to note is that the verifier must have an exact copy of the R1CS circuit used by the prover.
| No. | Description | Code Lines |
| --- | --- | --- |
| 1. | Create two pairs of generators: one pair for the Pedersen commitments and the other for the Bulletproof. | `let pc_gens = PedersenGens::default();`<br>`let bp_gens = BulletproofGens::new(128, 1);` |
| 2. | Instantiate the prover using the commitment and Bulletproofs generators of Step 1, to produce the prover's transcript. | `let mut prover_transcript = Transcript::new(b"Factors");`<br>`let mut prover = Prover::new(&bp_gens, &pc_gens, &mut prover_transcript);` |
| 3. | Prover commits to the variables using Pedersen commitments, creates variables corresponding to each commitment and adds the variables to the transcript. | `let x1 = Scalar::random(&mut rng);`<br>`let (com_p, var_p) = prover.commit(p.into(), x1);`<br>`let x2 = Scalar::random(&mut rng);`<br>`let (com_q, var_q) = prover.commit(q.into(), x2);` |
| 4. | Prover constrains the variables in two steps: (a) multiplies the variables of Step 3 and captures the product in the "output" variable `o`; (b) ensures that the difference between the product `o` and `r` is zero. | `let (_, _, o) = prover.multiply(var_p.into(), var_q.into());`<br>`let r_lc: LinearCombination = vec![(Variable::One(), r.into())].iter().collect();`<br>`prover.constrain(o - r_lc);` |
| 5. | Prover creates the proof. | `let proof = prover.prove().unwrap();` |
| 6. | Instantiate the verifier using the Pedersen commitment and Bulletproof generators. The verifier creates its own transcript. | `let mut verifier_transcript = Transcript::new(b"Factors");`<br>`let mut verifier = Verifier::new(&bp_gens, &pc_gens, &mut verifier_transcript);` |
| 7. | Verifier records the commitments for `p` and `q` sent by the prover in the transcript, and creates variables for them, similar to the prover's. | `let var_p = verifier.commit(commitments.0);`<br>`let var_q = verifier.commit(commitments.1);` |
| 8. | Verifier constrains the variables corresponding to the commitments. | `let (_, _, o) = verifier.multiply(var_p.into(), var_q.into());`<br>`let r_lc: LinearCombination = vec![(Variable::One(), r.into())].iter().collect();`<br>`verifier.constrain(o - r_lc);` |
| 9. | Verifier verifies the proof. | `verifier.verify(&proof)` |
Conclusions, Observations and Recommendations
Constraint systems form a natural language for most computational problems expressed as arithmetic circuits, and have found ample application in both zk-SNARKs and Bulletproofs. The use of ZK proofs for arithmetic circuits and programmable constraint systems has evolved considerably, as illustrated in Table 2. Although much work still needs to be done, Bulletproofs with constraint systems built on them promise to be powerful tools for the efficient handling of verifiable programs. The leverage that developers have in choosing whatever gadgets they wish to implement leaves enough room to build proof systems with some degree of modularity. The proof system examples by both Dalek and Harchandani are valuable illustrations of what can be achieved with Bulletproofs and R1CS. This framework provides great opportunities for building blockchain-enabled confidential digital asset schemes.
References
[1] A. Gabizon, "Explaining SNARKs Part V: From Computations to Polynomials" [online]. Available: https://electriccoin.co/blog/snarkexplain5/. Date accessed: 2020‑01‑03.
[2] C. Yun, "Building on Bulletproofs" [online]. Available: https://medium.com/@cathieyun/buildingonbulletproofs2faa58af0ba8. Date accessed: 2020‑01‑03.
[3] Dalek's R1CS documents, "Module Bulletproofs::r1cs_proof" [online]. Available: https://docinternal.dalek.rs/bulletproofs/notes/r1cs_proof/index.html. Date accessed: 2020‑01‑07.
[4] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More" [online], Blockchain Protocol Analysis and Security Engineering 2018. Available: http://web.stanford.edu/~buenz/pubs/bulletproofs.pdf. Date accessed: 2019‑11‑21.
[5] J. Bootle, A. Cerulli, P. Chaidos, J. Groth and C. Petit, "Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting" [online], Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 327‑357, Springer, 2016. Available: https://eprint.iacr.org/2016/263.pdf. Date accessed: 2019‑12‑21.
[6] A. Szepieniec and B. Preneel, "Generic Zero-knowledge and Multivariate Quadratic Systems" [online]. Available: https://pdfs.semanticscholar.org/06c8/ea507b2c4aaf7b421bd0c93e6145e3ff7517.pdf?_ga=2.124585865.240482160.1578465071151955209.1571053591. Date accessed: 2019‑12‑31.
[7] A. Shpilka and A. Yehudayoff, "Arithmetic Circuits: A Survey of Recent Results and Open Questions" [online], Technion-Israel Institute of Technology, Haifa, Israel, 2010. Available: http://www.cs.tau.ac.il/~shpilka/publications/SY10.pdf. Date accessed: 2019‑12‑21.
[8] A. Pinto, "Constraint Systems for ZK SNARKs" [online]. Available: http://coderserrand.com/constraintsystemsforzk‑SNARKs/. Date accessed: 2019‑12‑23.
[9] H. Wu, W. Zheng, A. Chiesa, R. Ada Popa, and I. Stoica, "DIZK: A Distributed Zero Knowledge Proof System" [online], Proceedings of the 27th USENIX Security Symposium, August 15–17, 2018. Available: https://www.usenix.org/system/files/conference/usenixsecurity18/sec18wu.pdf. Date accessed: 2019‑12‑14.
[10] E. Ben-Sasson, A. Chiesa, D. Genkin, E. Tromer and M. Virza, "SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge (extended version)" [online], October 2013. Available: https://eprint.iacr.org/2013/507.pdf. Date accessed: 2019‑12‑17.
[11] V. Buterin, "Quadratic Arithmetic Programs: from Zero to Hero" [online], 12 December 2016. Available: https://medium.com/@VitalikButerin/quadraticarithmeticprogramsfromzerotoherof6d558cea649. Date accessed: 2019‑12‑19.
[12] C. Yun, "Programmable Constraint Systems for Bulletproofs" [online]. Available: https://medium.com/interstellar/programmableconstraintsystemsforbulletproofs365b9feb92f7. Date accessed: 2019‑12‑04.
[13] J. Groth, "Linear Algebra with Sublinear Zeroknowledge Arguments" [online], Advances in Cryptology – CRYPTO 2009, pages 192–208, 2009. Available: https://iacr.org/archive/crypto2009/56770190/56770190.pdf. Date accessed: 2019‑12‑04.
[14] Dalek, "Ristretto" [online]. Available: https://docs.rs/curve25519-dalek/0.15.1/curve25519_dalek/ristretto/index.html. Date accessed: 2019‑10‑17.
[15] H. de Valence, "Bulletproofs Pre-release" [online]. Available: https://medium.com/interstellar/bulletproofsprereleasefcb1feb36d4b. Date accessed: 2019‑11‑21.
[16] Dalek, "Bulletproofs Implementation" [online]. Available: http://github.com/dalek-cryptography/bulletproofs/. Date accessed: 2019‑10‑02.
[17] L. Harchandani, "Zero Knowledge Proofs using Bulletproofs" [online]. Available: https://medium.com/coinmonks/zeroknowledgeproofsusingbulletproofs4a8e2579fc82. Date accessed: 2020‑01‑03.
[18] L. Harchandani, "Factors R1CS Bulletproofs Example" [online]. Available: https://github.com/lovesh/bulletproofs/blob/e477511a20bdb8de8f4fa82cb789ba71cc66afd8/tests/basic_r1cs.rs#L17. Date accessed: 2019‑10‑02.
[19] "Computer Algebra" [online]. Available: https://en.wikipedia.org/wiki/Computer_algebra. Date accessed: 2020‑02‑05.
[20] Wikipedia: "Binary Operation" [online]. Available: https://en.wikipedia.org/wiki/Binary_operation. Date accessed: 2020‑02‑06.
[21] WolframMathWorld, "Field Theory" [online]. Available: http://mathworld.wolfram.com/FieldAxioms.html. Date accessed: 2020‑02‑06.
[22] Wikipedia: "Finite Field" [online]. Available: https://en.wikipedia.org/wiki/Finite_field. Date accessed: 2020‑02‑06.
Appendices
Appendix A: Definition of Terms
Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.
- Symbolic Computation: The study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects [19].

- Binary Operation: An operation $ * $ or a calculation that combines two elements $ \mathcal{a} $ and $ \mathcal{b} $, called operands, to produce another element $ \mathcal{a} * \mathcal{b} $ [20].

- Field: Any set $ \mathcal{F} $ of elements together with binary operations $ + $ and $ \cdot $, called addition and multiplication respectively, is a field if any three elements $ \mathcal{a}, \mathcal{b} $ and $ \mathcal{c} $ in $ \mathcal{F} $ satisfy the field axioms given in the following table. A finite field is any field $ \mathcal{F} $ that contains a finite number of elements ([21], [22]).

| Name | Addition | Multiplication |
| --- | --- | --- |
| associativity | $(\mathcal{a} + \mathcal{b}) + \mathcal{c} = \mathcal{a} + (\mathcal{b} + \mathcal{c})$ | $(\mathcal{a} \cdot \mathcal{b}) \cdot \mathcal{c} = \mathcal{a} \cdot (\mathcal{b} \cdot \mathcal{c})$ |
| commutativity | $\mathcal{a} + \mathcal{b} = \mathcal{b} + \mathcal{a}$ | $\mathcal{a} \cdot \mathcal{b} = \mathcal{b} \cdot \mathcal{a}$ |
| distributivity | $\mathcal{a} \cdot (\mathcal{b} + \mathcal{c}) = \mathcal{a} \cdot \mathcal{b} + \mathcal{a} \cdot \mathcal{c}$ | $(\mathcal{a} + \mathcal{b}) \cdot \mathcal{c} = \mathcal{a} \cdot \mathcal{c} + \mathcal{b} \cdot \mathcal{c}$ |
| identity | $\mathcal{a} + 0 = \mathcal{a}$ | $\mathcal{a} \cdot 1 = \mathcal{a}$ |
| inverses | $\mathcal{a} + (-\mathcal{a}) = 0$ | $\mathcal{a} \cdot \mathcal{a}^{-1} = 1$ for $\mathcal{a} \neq 0$ |
Appendix B: Notation Used

Let $ \mathcal{F} $ be a field.

Let $ \mathbf{a} = (a_1, a_2, ..., a_n ) $ be a vector with $n$ components $ a_1, a_2, ..., a_n $, which are elements of some field $ \mathcal{F} $.

Let $\mathbf{s}^T$ be the transpose of a vector $ \mathbf{s} = ( s_1, s_2, \dots, s_n )$ of size $ n $, such that $ \mathbf{s}^T = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{bmatrix} $
Contributors
Amortization of Bulletproofs Inner-product Proof
- Introduction
- Notation and Assumptions
- Zero-knowledge Proofs
- Bulletproofs Range Proofs
- What is Recursive Proof Composition?
- Verification Amortization Strategies
- Amortized Inner-product Proof
- Application to Tari Blockchain
- References
- Appendices
- Contributors
Introduction
One of the main attractions of blockchains is the idea of trustlessness: the aim is a network that allows every participant to run their own validation should they be in doubt about any given set of transactions. However, this is currently an ideal, especially for blockchains with confidential transactions. One of the reasons for this is the current size of zero-knowledge proofs. Although much research has focused on reducing the sizes of zero-knowledge proofs, as seen in zero-knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs) [1] and Bulletproofs [2], scalability remains a big challenge.
Recent efforts in minimizing verification costs have gone the route of recursive proofs. Sean Bowe et al. [14] lead the pack on recursive proof composition without a trusted setup, with the Halo protocol. Other applications of recursive proof composition still rely on some kind of a trusted setup [3]. For example, Coda [4] and Sonic [5] use a Common Reference String (CRS) and an updatable structured reference string, respectively.
Coindesk previously commented that "In essence, Bowe and Co. discovered a new method of proving the validity of transactions, while masked, by compressing computational data to the bare minimum" [6].
For blockchains with confidential transactions, such as Mimblewimble, Bulletproofs range proofs are the most crucial zero-knowledge proofs involved in the validation of blockchain transactions. The bulk of the computation in these range proofs takes place in the Inner-product Proof (IPP).
This report investigates how the innovative amortization strategies of the Halo protocol can be used to enhance the Bulletproofs IPP.
Notation and Assumptions
The settings and notation used in this report follow the Bulletproofs framework as in the Dalek notes [7]. This includes the underlying field $\mathbb{F}_p$, elliptic curve, elliptic curve group, and generators $G$ and $H$. The details are not included here, as they have been covered in other Tari Labs University (TLU) reports. Note that properties such as completeness, soundness, the public-coin property and discrete log difficulty are assumed here.
Since the Ristretto implementation of curve25519 is used in the Bulletproofs setting, unless otherwise stated, all vectors of size $n$ consist of $n$ 32-byte elements.
Zero-knowledge Proofs
There are two parties in a zeroknowledge proof: the prover and the verifier. The prover seeks to convince the verifier that he or she has knowledge of a secret value $w$, called the witness, without disclosing any more information about $w$ to the verifier. How does this work?
The prover receives a challenge $x$ from the verifier and:
 makes a commitment $P$ of the witness, which hides the value of $w$; then
 creates a proof $\pi$ that attests knowledge of the correct $w$.
The prover sends these two to the verifier, who checks correctness of the proof $\pi$. This means that the verifier tests if some particular relation $\mathcal{R}$ between $w$ and $x$ holds true. The proof $\pi$ is deemed correct if $\mathcal{R}(x,w) = 1$ and incorrect if $\mathcal{R}(x,w) = 0$. Since the verifier does not know $w$, they use some verification algorithm $\mathcal{V}$ such that $\mathcal{V}(x, \pi) = \mathcal{R}(x,w)$.
The entire research on scalability is in pursuit of such an algorithm $\mathcal{V}$ that is most efficient and secure.
Bulletproofs Range Proofs
The Bulletproofs system provides a framework for building non-interactive zero-knowledge proofs without any need for a trusted setup. Also, according to Cathie Yun, "it allows for proving a much wider class of statements than just range proofs" [8].
The Bulletproofs framework uses the Pedersen commitment scheme, which is known for its hiding and binding properties.
A Pedersen commitment of a value $v$ is given by $Com(v) = v \cdot B + \tilde{v} \cdot \tilde{B}$, where $B$ and $\tilde{B}$ are the generators of the elliptic curve group, and $\tilde{v}$ is a blinding factor [7].
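As a minimal numerical illustration of this commitment scheme, the Rust sketch below implements the multiplicative analogue $Com(v) = g^v h^{\tilde{v}} \bmod p$ over a toy prime modulus. The parameters $p$, $g$ and $h$ are illustrative stand-ins, not secure choices; real implementations use points of the Ristretto group:

```rust
/// Modular exponentiation by repeated squaring.
fn mod_pow(mut base: u64, mut exp: u64, p: u64) -> u64 {
    let mut acc = 1u64;
    base %= p;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % p; }
        base = base * base % p;
        exp >>= 1;
    }
    acc
}

/// Toy Pedersen commitment Com(v) = g^v * h^r mod p, where r is the
/// blinding factor. p, g, h are illustrative parameters, NOT secure ones.
fn commit(v: u64, r: u64, p: u64, g: u64, h: u64) -> u64 {
    mod_pow(g, v, p) * mod_pow(h, r, p) % p
}

fn main() {
    let (p, g, h) = (1_000_003u64, 2u64, 3u64);
    // Hiding: the same value with different blinding factors
    // yields different-looking commitments.
    let c1 = commit(42, 11, p, g, h);
    let c2 = commit(42, 99, p, g, h);
    assert_ne!(c1, c2);
    // Opening: anyone given (v, r) can recompute and check the commitment.
    assert_eq!(commit(42, 11, p, g, h), c1);
    println!("toy Pedersen commitment: {}", c1);
}
```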
In a Bulletproofs range proof:
- A prover, given a challenge $x$ from the verifier:
  - makes a commitment to a value $v$;
  - creates a proof $\pi$ that attests to the statement that $v \in [0, 2^n)$;
  - sends the proof $\pi$ to the verifier, without revealing any other information about $v$.
- A verifier checks if indeed $v$ is non-negative and falls within the interval $[0, 2^n)$.
The Bulletproofs range proof achieves its goal by rewriting the statement $v \in [0, 2^n)$ in terms of its binary vectors, as well as expressing it as a single inner product $t(x) = \langle \mathbf{l}(x), \mathbf{r}(x) \rangle$ of specially defined binary polynomial vectors $\mathbf{l}(x)$ and $\mathbf{r}(x)$.
Thus a socalled vector Pedersen commitment is also used in these types of proofs, and is defined as follows:
A vector Pedersen commitment of vectors $\mathbf{a}_L$ and $\mathbf{a}_R$ is given by $ Com(\mathbf{a}_L , \mathbf{a}_R ) = \langle \mathbf{a}_L , \mathbf{G} \rangle + \langle \mathbf{a}_R , \mathbf{H} \rangle + \tilde{a} \cdot \tilde{B} $, where $\mathbf{G}$ and $\mathbf{H}$ are vectors of generators of the elliptic curve group [7].
The major component of a Bulletproofs range proof is no doubt its IPP. This became even more apparent when Bootle et al. [9] introduced an IPP that requires only $2 log_2(n) + 2$ proof elements instead of $2n$. Henry de Valence of Interstellar puts it this way:
"The inner-product proof allows the prover to convince a verifier that some scalar is the inner product of two length-$n$ vectors using $\mathcal{O}(log(n))$ steps, and it's the reason that Bulletproofs are compact" [10].
No wonder the Halo creators also looked at the IPP in their amortization strategies, particularly taking advantage of its recursive nature. Close attention is therefore given to the IPP as described by Bootle et al. [9].
What is Recursive Proof Composition?
The aim of Recursive Proof Composition is to construct "proofs that verify other proofs" or "proofs that are capable of verifying other instances of themselves" [6]. These take advantage of the recursive nature of some components of known zero-knowledge proofs.
Before discussing recursive proofs or proof recursion, the recursive function concept and the efficiency of recursive algorithms are briefly discussed. The IPP, as used in a Bulletproofs range proof, is given as an example of a recursive algorithm. Figure 1 shows the recursive nature of the IPP, and will later be helpful in explaining some of Halo's amortization strategies.
Recursive Functions
Recursion is used to define functions or sequences of values that depict a consistent pattern. When written as a formula, it becomes apparent that the $(j-1)$th member of such a sequence is needed in computing the $j$th member of the same sequence.
A function $F(x)$ that yields a sequence of values $ F(0), F(1), ..., F(n) $ for some positive integer $ n $ is a recursive function if $F(k) = F(k - 1) + g(k)$ for all $0 < k \leq n$, where $g(x)$ is some function of the indeterminate $ x $.
A typical recursive function $F(j)$ for $j \in \{ 0 , 1 , ... , n \} $ can be represented in terms of a flow chart, as shown in Figure 1, depicting how values of the sequence $F(0) , F(1), ... , F(n)$ are computed.
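As a concrete instance (the choice $g(k) = k$, which generates the triangular numbers, is ours for illustration), the Rust sketch below evaluates $F(k) = F(k - 1) + g(k)$ both directly from the recursive definition and iteratively:

```rust
/// g(k) = k for illustration, so F enumerates the triangular
/// numbers 0, 1, 3, 6, 10, ...
fn g(k: u64) -> u64 { k }

/// F(k) = F(k - 1) + g(k) with F(0) = 0, written exactly as the
/// recursive definition.
fn f_recursive(k: u64) -> u64 {
    if k == 0 { 0 } else { f_recursive(k - 1) + g(k) }
}

/// The same sequence computed iteratively with a for-loop, as a
/// computer would typically execute it.
fn f_iterative(k: u64) -> u64 {
    let mut acc = 0;
    for j in 1..=k { acc += g(j); }
    acc
}

fn main() {
    for k in 0..10 {
        assert_eq!(f_recursive(k), f_iterative(k));
    }
    assert_eq!(f_recursive(4), 10); // 0 + 1 + 2 + 3 + 4
    println!("F(4) = {}", f_recursive(4));
}
```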
In computer programming, algorithms that involve recursive functions are efficiently executed by the use of "for-loops" and "while-loops". One can say that computers were made and designed specifically to carry out repetitive computation without much error. However, although recursive proof composition is pointedly applicable to recursive algorithms, there is more to it than just recursiveness. Proof recursion is not defined by recursiveness, but rather takes advantage of it.
Recursion in Bulletproofs Inner-product Proof
In Bulletproofs range proofs, a prover commits to a value $v$ and seeks to construct an IPP to the fact that $v \in [ 0 , 2^n ) $. Pedersen commitments are used to keep the value of $v$ confidential, and are expressed as inner products.
The main recursive part of a range proof is the IPP. The inner product of two vectors $\mathbf{a}$, $\mathbf{b}$ and the associated Pedersen commitment can be expressed as:
$$ P_k = \langle \mathbf{a} , \mathbf{G} \rangle + \langle \mathbf{b} , \mathbf{H} \rangle + \langle \mathbf{a} ,\mathbf{b} \rangle \cdot Q $$
where $\mathbf{a}$ and $\mathbf{b}$ are size$n$ vectors of scalars in the field $\mathbb{F}_p$, while $\mathbf{G}$ and $\mathbf{H}$ are vectors of points in an elliptic curve $\mathbb{E} ( \mathbb{F}_p)$ and $k = log_2(n)$; refer to [11].
Recursion is seen in a $k$-round non-interactive IPP argument, where these commitments are written in terms of challenges $u_k$ sent by the verifier:
$$ P_{k-1} = P_k + L_k \cdot u_k^{2} + R_k \cdot u_k^{-2} $$
where $ L_k $ and $ R_k $ are specifically defined as linear combinations of inner products of vectors that are half the size of the vectors in the $k$th round.
In the IPP, the prover convinces the verifier of the veracity of the commitment $P_k$ by sending only $k = log_2(n)$ pairs of values $L_j$ and $R_j$, where $j \in \{ 1, 2, 3, ... , k \}$.
It is due to this recursion that Bootle et al. [9] reduced the previous complexity of zero-knowledge proofs from $\mathcal{O}(\sqrt{n})$ to $\mathcal{O}(log(n))$.
Refer to Figure 2 for an overview of the prover's side of the IPP.
The input to the IPP is the quadruple of size $ n = 2^k $ vectors
$$ \big( \mathbf{a}^{(j)} , \mathbf{b}^{(j)} , \mathbf{G}^{(j)} , \mathbf{H}^{(j)} \big) $$
which is initially
$$ \big( \mathbf{a} , \mathbf{b} , \mathbf{G} , \mathbf{H} \big) $$
However, when $j < k$, the input is updated to
$$ \big(\mathbf{a}^{(j-1)}, \mathbf{b}^{(j-1)}, \mathbf{G}^{(j-1)},\mathbf{H}^{(j-1)} \big) $$
quadruple vectors, each of size $2^{j-1}$, where
$$ \mathbf{a}^{(j-1)} = \mathbf{a}_{lo} \cdot u_j + \mathbf{a}_{hi} \cdot u_j^{-1} \\ \mathbf{b}^{(j-1)} = \mathbf{b}_{lo} \cdot u_j^{-1} + \mathbf{b}_{hi} \cdot u_j \\ \mathbf{G}^{(j-1)} = \mathbf{G}_{lo} \cdot u_j^{-1} + \mathbf{G}_{hi} \cdot u_j \\ \mathbf{H}^{(j-1)} = \mathbf{H}_{lo} \cdot u_j + \mathbf{H}_{hi} \cdot u_j^{-1} \\ $$
and $u_j$ is the verifier's challenge in round $j$. The vectors
$$ \mathbf{a}_{lo}, \mathbf{b}_{lo}, \mathbf{G}_{lo}, \mathbf{H}_{lo} \\ \mathbf{a}_{hi}, \mathbf{b}_{hi}, \mathbf{G}_{hi}, \mathbf{H}_{hi} $$
are the left and the right halves of the vectors
$$ \mathbf{a}, \mathbf{b}, \mathbf{G}, \mathbf{H} $$
respectively.
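The effect of one halving round can be checked numerically. The Rust sketch below works with plain scalars modulo a toy prime (no curve points and no commitments, so it only mirrors the scalar side of the argument): it folds size-$n$ vectors $\mathbf{a}$ and $\mathbf{b}$ into half-size vectors with a challenge $u$, and confirms the identity that the $L$ and $R$ terms account for, namely $\langle \mathbf{a}', \mathbf{b}' \rangle = \langle \mathbf{a}, \mathbf{b} \rangle + u^2 \langle \mathbf{a}_{lo}, \mathbf{b}_{hi} \rangle + u^{-2} \langle \mathbf{a}_{hi}, \mathbf{b}_{lo} \rangle$:

```rust
const P: u64 = 1_000_003; // toy prime modulus, illustration only

fn mod_pow(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

/// u^{-1} mod P via Fermat's little theorem (P prime).
fn inv(u: u64) -> u64 { mod_pow(u, P - 2) }

/// Inner product of two equal-length vectors mod P.
fn inner(a: &[u64], b: &[u64]) -> u64 {
    a.iter().zip(b).fold(0, |s, (&x, &y)| (s + x * y % P) % P)
}

/// One IPP halving round: fold (a, b) with challenge u, using the
/// scalings a' = a_lo*u + a_hi*u^{-1}, b' = b_lo*u^{-1} + b_hi*u.
fn fold(a: &[u64], b: &[u64], u: u64) -> (Vec<u64>, Vec<u64>) {
    let (n, ui) = (a.len() / 2, inv(u));
    let a1 = (0..n).map(|i| (a[i] * u % P + a[n + i] * ui % P) % P).collect();
    let b1 = (0..n).map(|i| (b[i] * ui % P + b[n + i] * u % P) % P).collect();
    (a1, b1)
}

fn main() {
    let a: [u64; 4] = [3, 1, 4, 1];
    let b: [u64; 4] = [2, 7, 1, 8];
    let u = 5u64;
    let (a1, b1) = fold(&a, &b, u);
    assert_eq!(a1.len(), 2); // vector size halves each round

    // <a', b'> = <a, b> + u^2 <a_lo, b_hi> + u^{-2} <a_hi, b_lo>
    let l = inner(&a[..2], &b[2..]); // the "L" cross term
    let r = inner(&a[2..], &b[..2]); // the "R" cross term
    let expected = (inner(&a, &b)
        + mod_pow(u, 2) * l % P
        + mod_pow(inv(u), 2) * r % P) % P;
    assert_eq!(inner(&a1, &b1), expected);
    println!("halving round relation holds");
}
```

Repeating the fold $k = log_2(n)$ times shrinks the vectors to length one, which is why only $k$ pairs $(L_j, R_j)$ need to be sent.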
Figure 2 is included here not only to display the recursive nature of the IPP, but will also be handy and pivotal in understanding amortization strategies that will be applied to the IPP.
Inductive Proofs
As noted earlier, "for-loops" and "while-loops" are powerful in efficiently executing repetitive computations. However, they are not sufficient to reach the level of scalability needed in blockchains. The basic reason for this is that the iterative executions compute each instance of the recursive function. This is very expensive and clearly undesirable, because the aim in blockchain validation is not to compute every instance, but to prove that the current instance was correctly executed.
Figure 3 shows how instances of a recursive function are linked, the same way each block in a blockchain is linked to the previous block via hash values.
The amortization strategy used by recursive proof composition or proof recursion is based on the old but powerful mathematical tool called the Principle of Mathematical Induction, which dates back to the sixteenth century [12].
Suppose one has to prove that a given sequence of values $ F(0), F(1), ..., F(n) $ consists of correct instances of a recursive function $F(n) = F(n - 1) + g(n)$, for every positive integer $ n $ and some function $g(n)$.
According to the Principle of Mathematical Induction, it is sufficient to prove the above statement in the following two steps:

- (Base step) Prove that the first possible instance $F(0)$ is correct.
- (Inductive step) For any integer $ k > 0 $, prove that "if the previous instance $F(k - 1)$ is correct, then the current instance $F(k)$ is also correct", i.e. prove that "$F(k - 1)$ is correct" implies "$F(k)$ is also correct".
These two steps together are sufficient to form a complete proof for the verifier to be convinced. Such a proof is valid even if the current instance is the zillionth. This saves the verifier the trouble of checking every instance of the function $F(n)$.
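The two-step check can be sketched in Rust for the recursion $F(k) = F(k - 1) + g(k)$ (the helper names are illustrative; in a real proof system, the direct equality checks below are replaced by succinct proofs):

```rust
fn g(k: u64) -> u64 { k }

/// Base step: the first instance must be F(0) = 0.
fn base_holds(f0: u64) -> bool { f0 == 0 }

/// Inductive step: given that F(k-1) is correct, F(k) is correct
/// exactly when F(k) = F(k-1) + g(k).
fn step_holds(f_prev: u64, f_cur: u64, k: u64) -> bool {
    f_cur == f_prev + g(k)
}

/// Verify a whole claimed sequence F(0), ..., F(n) using only the
/// base check plus one local step check per link; no instance is
/// ever recomputed from scratch.
fn verify_by_induction(claimed: &[u64]) -> bool {
    base_holds(claimed[0])
        && claimed.windows(2).enumerate()
            .all(|(i, w)| step_holds(w[0], w[1], (i + 1) as u64))
}

fn main() {
    // Correct triangular-number sequence 0, 1, 3, 6, 10.
    assert!(verify_by_induction(&[0, 1, 3, 6, 10]));
    // A single tampered instance breaks the inductive chain.
    assert!(!verify_by_induction(&[0, 1, 3, 7, 10]));
    println!("inductive verification checked");
}
```

Note that this plain version still touches every link; the cryptographic schemes discussed next compress the chain of step checks into a single proof.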
Michael Straka describes a similar inductive proof, which he refers to as "proof recursion" [13]. His explanation of the simplest case that a recursive proof will prove a relation $\mathcal{R}$ inductively, is as follows:
The verifier has:
- a "base" proof $\pi_0$, which attests to the prover knowing some input $( x_0 , w_0 )$ such that $\mathcal{R}( x_0 , w_0 ) = 1$;
- the proof $\pi_n$ for any $n > 0$, which then proves that the prover knows $( x_n , w_n )$ such that $\mathcal{R}(x_n , w_n ) = 1$ and that a proof $\pi_{n-1}$ was produced, attesting to the knowledge of $(x_{n-1} , w_{n-1})$.
Figure 4 illustrates the above proof:
Straka continues to explain how an arithmetic circuit for the verifier could be built in order to carry out the above proof. Such a circuit $\mathcal{C}$ would either verify $\mathcal{R}( x_0 , w_0 ) = 1$ (for the base case), or verify $\mathcal{R}( x_i , w_i ) = 1$ and then check $\mathcal{V}( x_{i-1} , \pi_{i-1} ) = 1$ [13].
Proof systems that use this type of inductive proof for verification solve the blockchain's distributed validation problem. A participant in the blockchain network only needs to download the current state of the network as well as a single proof that this state is correct.
Recursive proof composition is described in [14] as "proofs that can feasibly attest to the correctness of other instances of themselves".
Verification Amortization Strategies
Only a few amortization strategies in the literature are used to achieve proper recursive proof composition. This report focuses on those that Bowe et al. [14] used in the Halo protocol.
Application of Verifiable Computation
Verifiable computation is used for the delegation of computations to an untrusted third party (or at times, the prover) who returns:
 the result $z$ of the computation; and
 a cryptographic proof $\pi_z$ that the result is correct.
The ideal property of such a proof $\pi_z$ is succinctness, which means the proof $\pi_z$ must be asymptotically smaller and less expensive to check than the delegated computation itself [14].
Incrementally Verifiable Computation
The idea here is to attest to the validity of a previous proof in addition to application of verifiable computation. The best thing about this amortization strategy is that one proof can be used to inductively demonstrate the correctness of many previous proofs.
Then, any participant in a blockchain network who wishes to validate the current state of the chain need only download two things:
 the current state of the chain network; and
 a single proof that this state is correct.
Any further proofs of state changes in a blockchain can be constructed with the latest proof alone, allowing active participants to prune old state changes [14].
Thus, with incrementally verifiable computation, a large and virtually unbounded amount of computation can be verified with a single proof, and with this proof alone, the computation can be extended to further proofs.
Due to this strategy, by "allowing a single proof to inductively demonstrate the correctness of many previous proofs" [14], Halo achieves better scalability than most known SNARKs constructions.
Nested Amortization
This strategy can be used as part of the above two amortization strategies.
Whenever the verifier has to compute an expensive fixed operation $F$ that is invoked with some input $x$, the prover is allowed to witness $y = F(x)$ and send the pair $(x, y)$ to the verifier. The verification circuit takes $(x, y)$ as a public input. "The circuit can then proceed under the assumption that $y$ is correct, delegating the responsibility of checking the correctness of $y$ to the verifier of the proof" [14].
Now, instances of $( x , y )$ will accumulate as proofs are continually composed, simply because the verification circuit does not check these proofs, but rather continually delegates the checking to its verifier. The problem here is that the computational cost will escalate.
It is here that the amortization strategy of collapsing computations is needed:
- given instances $( x , y )$ and $( x' , y' )$, the prover will provide a non-interactive proof $\pi_{y,y'}$ that $y = F(x)$ and $y' = F(x')$, as a witness to the verification circuit; and
- the verification circuit will check the proof $\pi_{y,y'}$.
If the cost of checking the correctness of $\pi_{y,y'}$ is equivalent to invoking the operation $F$, then the verifier will have collapsed the two instances $( x , y )$ and $( x' , y' )$ into a single fresh instance $( x'' , y'' )$, as shown in Figure 5. This is how the cost of invoking $F$ can be amortized.
Nested amortization is therefore achieved over a binary tree of accumulated instances, helping to reduce the verification cost of the final single proof from linear time to sublinear marginal verification time.
As mentioned above, the aim of Recursive Proofs Composition is two-pronged: construction of "proofs that verify other proofs" and construction of "proofs that are capable of verifying other instances of themselves" [6]. The discussion in this report focuses on the latter, while the former will be part of forthcoming reports, as it requires delving into cycles of elliptic curves.
Amortized Inner-product Proof
One application of the amortization strategies discussed thus far is the Bulletproofs IPP. The aim here is to investigate how much value these strategies bring to this powerful and pervasive proof, especially in blockchain technology, where efficiency of zero-knowledge proofs is pursued.
Bulletproofs Inner-product Proof Verification
Consider the Bulletproofs IPP, originally described by Bootle et al. [9], but following Dalek's Bulletproofs setting [11]. The IPP is no doubt recursive in the way in which it is executed, as Figure 2 and Figure 6 make apparent. It is therefore the most relevant case study in amortizing verification costs, especially in the context of recursive proofs.
Figure 6 shows a naive implementation of the verifier's side of the Bulletproofs IPP.
Verifiable Computation
Application 1: Delegating Inversion of Verifier's Challenges
One of the details omitted from Figure 6 is the computation of the inverses of the verifier's challenges $u_j$ needed to complete verification. The verifier, for example, needs $u_j^{-2}$ in order to compute $L_j \cdot u_j^2 + R_j \cdot u_j^{-2}$. The verifiable computation strategy is therefore applicable to the IPP: the verifier delegates inversion of the challenges to the prover.
As Bowe et al. [14] proposed, in arithmetic circuits where a field inversion of a variable $u$ is computed as $u^{p-2}$, which would normally require $log(p)$ multiplication constraints, the prover could witness $v = u^{-1}$ instead [3]. The verifier then simply checks that $uv = 1$, taking only a single multiplication constraint. Thus one trades $log(p)$ multiplication constraints for only one.
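The inversion-delegation trick can be sketched as follows. This is a toy field-arithmetic illustration (the prime below is an arbitrary choice, not a parameter from the report): the prover performs the expensive Fermat inversion, while the verifier's entire cost is one multiplication and a comparison.

```python
p = 2**255 - 19                      # a large prime field order (toy choice)

def prover_inverse(u):
    # the prover computes the inverse; in the IPP it needs u^{-1} anyway
    return pow(u, p - 2, p)          # Fermat inversion: ~log(p) multiplications

def verifier_check(u, v):
    # the verifier's whole cost: one field multiplication and a comparison
    return (u * v) % p == 1

u = 1234567890123456789 % p
v = prover_inverse(u)
assert verifier_check(u, v)          # correct witness accepted
assert not verifier_check(u, v + 1)  # wrong witness rejected
```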
Therefore, to amortize some verification costs, the prover in the Bulletproofs IPP is requested to compute each $u_j^{-1}$ and send it to the verifier, in addition to the values $L_j$ and $R_j$ for $j \in \{ 1, 2 , ... , log_2(n) \}$.
Worth noting is that the prover will have computed these inverses anyway, because they are needed in "halving" the input vectors of the IPP. The delegation of inversion therefore comes at no extra cost to the prover.
This amortization strategy yields huge savings in verification costs, measured in the number of multiplications by the verifier. Instead of a verification cost of $log(p) \cdot log_2(n)$ multiplication constraints on inversion alone, it now costs only $log_2(n)$ multiplication constraints for a single IPP. The savings are huge considering that $n$, the size of the initial input vectors to the IPP, is typically very small compared to $p$, the prime order of the elliptic curve group.
As noted earlier, this amortization strategy reduces the verification costs by a factor of $log(p)$.
Application 2: Delegating Computation of $G_i$'s Coefficients
Consider the vector of group generators $\mathbf{G} = ( G_0 , G_1 , G_2 , ... , G_{n-1} )$, one of the four initial input vectors to the IPP. In verifying the prover's IPP, the verifier has to compute the vector $\mathbf{s} = ( s_0 , s_1 , s_2 , ... , s_{n-1} )$, where each $s_i = \prod\limits_{j = 1}^k u_j^{b(i,j)}$ is the so-called coefficient of $G_i$, while $j \in \{ 1 , 2 , 3 , ... , k \}$ with $k = log_2(n)$. Refer to Figure 6. Note that
$$ b(i,j) = \begin{cases} {-1} & {\text{if}\ \ (i \bmod 2^j) < 2^{j-1}} \\ {+1} & {\text{if}\ \ (i \bmod 2^j) \geq 2^{j-1}} \end{cases} $$
determines whether the factor multiplied into $s_i$ is the verifier's challenge $u_j$ or its inverse $u_j^{-1}$.
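The definitions above can be transcribed directly (a sketch, not the optimized algorithm developed later): the parity exponent $b(i,j)$ selects the challenge or its inverse, and each $s_i$ is the product of $k$ such factors over a small toy prime field (the prime and challenge values below are arbitrary choices for illustration).

```python
p = 101                               # toy prime; Bulletproofs uses a ~255-bit p

def b(i, j):
    # parity exponent: -1 or +1 depending on bit (j-1) of i
    return -1 if (i % 2**j) < 2**(j - 1) else +1

def coefficients(u, n):
    """u[j-1] holds the j-th challenge u_j; returns [s_0, ..., s_{n-1}]."""
    k = n.bit_length() - 1            # k = log2(n) for n a power of two
    s = []
    for i in range(n):
        acc = 1
        for j in range(1, k + 1):
            factor = u[j - 1] if b(i, j) == +1 else pow(u[j - 1], p - 2, p)
            acc = (acc * factor) % p
        s.append(acc)
    return s

u = [3, 7, 9]                         # challenges u_1, u_2, u_3 for n = 8
s = coefficients(u, 8)
# s_{n-1} is the product of all challenges: u_1 * u_2 * u_3
assert s[7] == (3 * 7 * 9) % p
# s_0 is the product of all inverses, so s_0 * s_{n-1} = 1
assert (s[0] * s[7]) % p == 1
```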
Computation of these coefficients can be delegated to the prover or to some third party called the "helper". In this application the coefficients are delegated to the helper, which means the verifier will need a way to test whether the values the helper sends are correct. Such a verifiable computation is constructed in the next subsection. Since each of these coefficients is a product of field elements, they have strong algebraic properties that can be exploited in two ways:
- Firstly, the verifier can use these properties to check if the $s_i$'s were correctly computed by the helper.
- Secondly, they can be used to minimize the prover's computational costs.
Verifier's Test of $G_i$'s Coefficients
Properties of $G_i$'s Coefficients
Note that the verifier has to compute the values $u_j^2$ and $u_j^{-2}$ for all $j \in \{ 1, 2, 3, ... , k \}$.
The idea here is to use a verifier's test that involves these squares of the challenges and their inverses. The next theorem describes the relationships between these squares and the coefficients $s_i$.
Theorem 1 (Some properties of the set of coefficients $\{ s_i \}$)

Let $n = 2^k$ and let $s_i = \prod\limits_{j = 1}^k u_j^{b(i,j)}$ be the coefficient of $G_i$, the $i$th component of the initial input vector $\mathbf{G} = ( G_0 , G_1 , G_2 , ... , G_{n-1})$. Then:

1. $s_i \cdot s_{(n-1) - i} = 1_{\mathbb{F}_p}$ for all $i \in \{ 0, 1, 2, ... , n-1 \}$.

2. $s_{2^{(j-1)}} \cdot s_{n-1} = u_j^2$ for all $j \in \{ 1, 2, 3, ... , k \}$.

3. $s_0 \cdot s_{(n-1) - 2^{(j-1)}} = u_j^{-2}$ for all $j \in \{ 1, 2, 3, ... , k \}$.
The proof of Part (1) of Theorem 1 follows by induction on the size $n$ of the initial input vector $\mathbf{G} = ( G_0 , G_1 , G_2 , ... , G_{n-1} )$ to the IPP, while Parts (2) and (3) follow by induction on $k$. Appendix B contains full details of the proof.
Test of $G_i$'s Coefficients
The verifier tests the correctness of the coefficients $s_i$ with the following statement:
$$
(s_{1} \cdot s_{n-1} = u_1^2) \land (s_{2} \cdot s_{n-1} = u_2^2) \land (s_{4} \cdot s_{n-1} = u_3^2) \land\ \dots\ \land (s_{2^{(k-1)}} \cdot s_{n-1} = u_k^2) \land \\
(s_0 \cdot s_{(n-1) - 2^{(0)}} = u_1^{-2}) \land (s_0 \cdot s_{(n-1) - 2^{(1)}} = u_2^{-2}) \land\ \dots\ \land (s_0 \cdot s_{(n-1) - 2^{(k-1)}} = u_k^{-2}) \land \\
( s_{r_1} \cdot s_{(n-1) - {r_1}} = 1_{\mathbb{F}_p} ) \land (s_{r_2} \cdot s_{(n-1) - {r_2}} = 1_{\mathbb{F}_p}) \land (s_{r_3} \cdot s_{(n-1) - {r_3}} = 1_{\mathbb{F}_p}) \land\ \dots\ \land (s_{r_k} \cdot s_{(n-1) - r_k} = 1_{\mathbb{F}_p})
$$
for randomly sampled values $\{r_1, r_2, ... , r_k \} \subset \{0, 1, 2, ... , n-1 \}$, where $k = log_2(n)$.
The multiplication cost of the Test of $G_i$'s Coefficients as stated above is sublinear with respect to $n$, costing the verifier only $3log_2(n)$ multiplications, because all three parts of Theorem 1 are tested. However, it would suffice for the verifier to test only two of the three parts of Theorem 1, reducing the cost to $2log_2(n)$.
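The Theorem 1 identities behind this test can be checked numerically. The following sketch (toy prime and random challenges, chosen for illustration only) computes the $s_i$ honestly and asserts all three parts of the theorem:

```python
import random

p = 10007                             # toy prime field order
k = 4
n = 2**k
u = [random.randrange(2, p) for _ in range(k)]   # challenges u_1..u_k

inv = lambda x: pow(x, p - 2, p)      # Fermat inversion mod p
b = lambda i, j: -1 if (i % 2**j) < 2**(j - 1) else +1

# honest computation of the coefficients s_i
s = [1] * n
for i in range(n):
    for j in range(1, k + 1):
        s[i] = s[i] * (u[j - 1] if b(i, j) > 0 else inv(u[j - 1])) % p

# Part (1): s_i * s_{(n-1)-i} = 1
assert all(s[i] * s[n - 1 - i] % p == 1 for i in range(n))
# Part (2): s_{2^(j-1)} * s_{n-1} = u_j^2
assert all(s[2**(j - 1)] * s[n - 1] % p == u[j - 1]**2 % p
           for j in range(1, k + 1))
# Part (3): s_0 * s_{(n-1) - 2^(j-1)} = u_j^{-2}
assert all(s[0] * s[n - 1 - 2**(j - 1)] % p == inv(u[j - 1])**2 % p
           for j in range(1, k + 1))
```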
Note that the values of the coefficients $s_i$ and the squares $u_j^2$ and $u_j^{-2}$ neither exist by default nor are automatically computed in the Bulletproofs system. In fact, according to Dalek's internal documents on the Inner-product Proof [7], specifically under Verification equation, these values can be provided by the system. Either way, these values need to be computed. The next subsection therefore focuses on optimizing the computation of the $s_i$'s.
Optimizing Prover's Computation of $G_i$'s Coefficients
Naive Algorithm
The naively implemented computation of the coefficients $s_i$, as depicted in Figure 6, is very expensive in terms of the number of multiplications.
The naive algorithm computes the coefficients $s_i = \prod\limits_{j = 1}^k u_j^{b(i,j)}$ by cumulatively multiplying in the correct factor $u_j^{b(i,j)}$ in each $j$th IPP round, running from $j = k = log_2(n)$ down to $j = 1$.
That is, although recursive according to the nature of the IPP, the naive implementation strictly follows the definition without any attempt to optimize.
**Naive Algorithm**
%initialize
s[n] = [1,1,1, ... ,1];
k = log_2(n);
%running inside IPP
int main() {
  for (j = k; j > 0; j--) {
    for (i = 0; i < n; i++) {
      s[i] = s[i]*(u_j)^{b(i,j)};
    }
  }
  return 0;
}
Using the naive algorithm, an input vector $\mathbf{G}$ to the IPP of size $n$ costs the verifier $n \cdot [ log_2(n) - 1 ]$ multiplications. If $n = 256$, the cost is $1,792$. This is rather expensive, irrespective of whether the $s_i$'s are computed by a third party or by the proof system.
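The naive cost formula can be confirmed by a runnable version of the pseudocode above (a sketch with an arbitrary toy prime). The first round's "multiplications" by the initial value $1$ are treated as plain assignments, which is why the count comes to $n \cdot [log_2(n) - 1]$ rather than $n \cdot log_2(n)$:

```python
import random

p = 10007                                         # toy prime field order

def naive_coefficients(n, u):
    """u[j-1] holds challenge u_j; returns ([s_0..s_{n-1}], mult count)."""
    k = n.bit_length() - 1                        # k = log2(n), n a power of 2
    inv = lambda x: pow(x, p - 2, p)
    s = [1] * n
    mults = 0
    for j in range(k, 0, -1):                     # rounds j = k down to 1
        for i in range(n):
            f = u[j - 1] if (i % 2**j) >= 2**(j - 1) else inv(u[j - 1])
            if j == k:
                s[i] = f                          # first round: assignment only
            else:
                s[i] = s[i] * f % p
                mults += 1
    return s, mults

n = 256
u = [random.randrange(2, p) for _ in range(8)]    # 8 = log2(256) challenges
_, cost = naive_coefficients(n, u)
assert cost == n * (n.bit_length() - 2)           # 256 * 7 = 1792 multiplications
```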
Optimized Algorithms
At least four competing algorithms optimize computation of the coefficients $\{ s_i \}$. Each one is far better than the naive algorithm.
The basic optimization strategy is based on the observation that every coefficient has at least one common factor with $2^{k-1} - 1$ other coefficients, two common factors with $2^{k-2} - 1$ other coefficients, three common factors with $2^{k-3} - 1$ other coefficients, and so on. It is therefore cost-effective, in terms of the number of multiplications, to compute these common factors first and only form the required coefficients later.
**Typical Optimized Algorithm**
%initialize
s[n] = [1,1,1, ... ,1];
k = log_2(n);
%running inside IPP
int main() {
  s[0] = u_k^{-1}; s[2^{k-1}] = u_k;
  s[2^{k-2}] = s[0]; s[2^{k-1} + 2^{k-2}] = s[2^{k-1}];
  for (j = k - 1; j > t; j--) {
    for (i = 0; i < n; i++) {
      if (i mod 2^j == 0) {
        s[i] = s[i]*(u_j)^{b(i,j)};
        l = i + 2^{j-1};
        s[l] = s[l]*(u_j)^{b(l,j)};
      }
    }
  }
  return 0;
}
The value $t$ in the first for-loop can be varied in order to form products of two, three, four factors, and so on. The optimization of these algorithms depends on whether they form only "doubles", "triples" or "quadruples", and how they form them. Unlike the naive algorithm, which keeps spending multiplications regardless of the common factors among the coefficients $s_i$, the optimized algorithms use a multiplication only if the product formed is new and distinct.
Since the multiplication cost of the naive algorithm is so expensive, it will not be used as a point of reference; no programmer would implement the computation of the $s_i$'s that way. Instead, a minimally optimized algorithm called Algorithm 0 or [A0] will henceforth be the yardstick.
In terms of the pseudocode of the Typical Optimized Algorithm given above, algorithm [A0] is defined by setting `t = 0`. That means [A0] aims at computing the largest possible distinct products as soon as each new challenge $u_j$ is received from the verifier.
Table 1 gives the multiplication cost of [A0] together with the other four optimized algorithms. Appendix A contains full descriptions of these algorithms, which are referred to as Algorithm 1 or [A1], Algorithm 2 or [A2], Algorithm 3 or [A3], and Algorithm 4 or [A4].
| Vector Size $n$ | [A0] | [A1] | [A2] | [A3] | [A4] | Best Algo & Savings % Relative to [A0] |
|---|---|---|---|---|---|---|
| $n = 4$ | $4$ | $4$ | $4$ | $4$ | $4$ | [All] 0% |
| $n = 8$ | $12$ | $12$ | $12$ | $12$ | $12$ | [All] 0% |
| $n = 16$ | $28$ | $28$ | $28$ | $24$ | $24$ | [A3,A4] 14.29% |
| $n = 32$ | $60$ | $48$ | $60$ | $48$ | $56$ | [A1,A3] 20% |
| $n = 64$ | $124$ | $88$ | $96$ | $92$ | $92$ | [A1] 29.03% |
| $n = 128$ | $252$ | $168$ | $168$ | $164$ | $164$ | [A3,A4] 34.92% |
| $n = 256$ | $508$ | $316$ | $312$ | $304$ | $304$ | [A3,A4] 40.16% |
| $n = 512$ | $1020$ | $612$ | $600$ | $584$ | $816$ | [A3] 42.75% |
| $n = 1024$ | $2044$ | $1140$ | $1144$ | $1140$ | $1332$ | [A1,A3] 44.23% |
| $n = 2048$ | $4092$ | $2184$ | $2244$ | $2234$ | $2364$ | [A1] 46.63% |
| $n = 4096$ | $8188$ | $4272$ | $4436$ | $4424$ | $4424$ | [A1] 47.83% |
All four optimized algorithms, [A1], [A2], [A3] and [A4], are effective in saving multiplication costs, showing more than 20% savings for vectors of sizes 64 and above.
The above results indicate that algorithm [A1] is the best of the four, for several reasons:

1. It is the second most frequent winner in all 11 cases investigated, with a score of 7 out of 11.

2. In the remaining four cases, it takes second place twice and third place twice.

3. It has the best savings percentage, 47.83% relative to [A0].

4. The only three cases in which it is merely on par with the minimally optimized algorithm [A0] are when all the other algorithms are also on par, i.e. when $n = 4$ and $n = 8$, as well as when the best optimized algorithm saves a mere $4$ multiplications.
Concluding the Amortized Inner-product Proof
The amortization strategies herein applied to the Bulletproofs IPP are tangible and significantly enhance the proof. With the above amortization of the IPP, the prover sends $3log_2(n)$ IPP elements, i.e. the set of all $log_2(n)$ triples $L_j$, $R_j$ and $u_j^{-1}$. The $n$ coefficients $s_i$ will either be sent by the third-party helper or provided by the system, according to Dalek's internal documents; see the IPP's Verification equation in [7].
The given verifier's test of the $G_i$'s coefficients and the optimization of their computation solidify the proposed amortization of the IPP. Given the Bulletproofs setting, in which a vector of size $n$ refers to $n$ 32-byte values, even seemingly small savings are significant.
Application to Tari Blockchain
This report not only explains the often-obscured parts of the IPP mechanism, but also brings this powerful proof as close as possible to practicality.
The amortized IPP as discussed above does not veer off Dalek's Bulletproofs setting, which is the preferred setting for the Tari blockchain.
These amortization strategies, though minimal in the changes they bring to the familiar IPP, yield huge savings in verification costs, cutting down the number of multiplications by a factor of $log(p)$, where the prime $p$ is deliberately chosen to be very large.
The amortized IPP lends itself to implementations of any zero-knowledge proof that involves inner products, especially in the Bulletproofs framework.
References
[1] H. Wu, W. Zheng, A. Chiesa, R. A. Popa and I. Stoica, "DIZK: A Distributed Zero-knowledge Proof System", UC Berkeley [online]. Available: https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-wu.pdf. Date accessed: 2019‑12‑07.
[2] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille and G. Maxwell, "Bulletproofs: Short Proofs for Confidential Transactions and More" [online]. Blockchain Protocol Analysis and Security Engineering 2018. Available: http://web.stanford.edu/~buenz/pubs/bulletproofs.pdf. Date accessed: 2020‑04‑21.
[3] ECC Posts, "Halo: Recursive Proof Composition without a Trusted Setup" [online]. Electric Coin Co., September 2019. Available: https://electriccoin.co/blog/halo-recursive-proof-composition-without-a-trusted-setup/. Date accessed: 2020‑04‑24.
[4] I. Meckler and E. Shapiro, "Coda: Decentralized Cryptocurrency at Scale" [online]. O(1) Labs, 2018‑05‑10. Available: https://cdn.codaprotocol.com/v2/static/coda-whitepaper-05-10-2018-0.pdf. Date accessed: 2020‑04‑27.
[5] M. Maller, S. Bowe, M. Kohlweiss and S. Meiklejohn, "Sonic: Zero-knowledge SNARKs from Linear-size Universal and Updatable Structured Reference Strings" [online]. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019. Available: https://eprint.iacr.org/2019/099.pdf. Date accessed: 2020‑04‑27.
[6] W. Foxley, "You Can Now Prove a Whole Blockchain With One Math Problem – Really" [online]. Coindesk, 2019. Available: https://www.coindesk.com/you-can-now-prove-a-whole-blockchain-with-one-math-problem-really. Date accessed: 2020‑04‑27.
[7] Dalek's Bulletproofs documents, "Module bulletproofs::notes::index" [online]. Available: https://doc-internal.dalek.rs/bulletproofs/inner_product_proof/index.html. Date accessed: 2020‑05‑01.
[8] C. Yun, "Building on Bulletproofs" [online]. Available: https://medium.com/@cathieyun/building-on-bulletproofs-2faa58af0ba8. Date accessed: 2020‑04‑27.
[9] J. Bootle, A. Cerulli, P. Chaidos, J. Groth and C. Petit, "Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting" [online]. Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 327‑357. Springer, 2016. Available: https://eprint.iacr.org/2016/263.pdf. Date accessed: 2019‑12‑21.
[10] H. de Valence, "Merlin: Flexible, Composable Transcripts for Zero-knowledge Proofs" [online]. Available: https://medium.com/@hdevalence/merlin-flexible-composable-transcripts-for-zero-knowledge-proofs-28d9fda22d9a. Date accessed: 2019‑12‑21.
[11] Dalek's Bulletproofs documents, "Module bulletproofs::notes::index" [online]. Available: https://doc-internal.dalek.rs/bulletproofs/notes/inner_product_proof/index.html. Date accessed: 2020‑04‑27.
[12] History of Science and Mathematics Beta, "Who Introduced the Principle of Mathematical Induction for the First Time?" [online]. Available: https://hsm.stackexchange.com/questions/524/who-introduced-the-principle-of-mathematical-induction-for-the-first-time. Date accessed: 2020‑04‑30.
[13] M. Straka, "Recursive Zero-knowledge Proofs: A Comprehensive Primer" [online], 2019‑12‑08. Available: https://www.michaelstraka.com/posts/recursivesnarks/. Date accessed: 2020‑04‑30.
[14] S. Bowe, J. Grigg and D. Hopwood, "Recursive Proof Composition without a Trusted Setup" [online]. Electric Coin Company, 2019. Available: https://eprint.iacr.org/2019/1021.pdf. Date accessed: 2020‑04‑24.
Appendices
Appendix A: Optimized Algorithms
This appendix contains details of the algorithms investigated to optimize computation of the coefficients $\{ s_i \}$ of the components $G_i$ of the IPP input vector $\mathbf{G} = (G_0 , G_1 , G_2 , ... , G_{n-1})$.
Recall that the naive algorithm computes the coefficients $s_i = \prod\limits_{j = 1}^k u_j^{b(i,j)}$ by cumulatively multiplying in the correct factor $u_j^{b(i,j)}$ in each $j$th round of the IPP, running from $k = log_2(n)$ down to $1$.
Basic Approach
The basic approach to the formulation of efficient and cost-saving algorithms is as follows:

1. Each algorithm aims at reducing verification costs in terms of the number of multiplications.

2. Each algorithm takes advantage of the fact that each coefficient has common subproducts with $2^{k-2}$ other coefficients.

3. The intermediate values of the coefficients are not used anywhere in the IPP; only the final values, made up of all $k$ factors $u_j^{b(i,j)}$, are. Hence the subproducts can be separately computed and only put together at the end of the IPP's $k$th round.

4. Due to the algebraic structure of the sets of coefficients, they can be systematically computed. Note specifically that each
$$ s_i\ \ =\ \ \prod\limits_{j = 1}^k u_j^{b(i,j)}\ \ =\ \ u_k^{b(i,k)} \cdot u_{k-1}^{b(i,k-1)} \cdot\ \ ...\ \ \cdot u_2^{b(i,2)} \cdot u_1^{b(i,1)} $$
corresponds to a $k$-tuple of field elements
$$ \Big( u_k^{b(i,k)}, u_{k-1}^{b(i,k-1)}, ... , u_2^{b(i,2)}, u_1^{b(i,1)}\Big)\ \ \in\ \ \mathbb{F}_p^k $$

Hence subproducts such as $\ u_4^{b(i,4)} \cdot u_{3}^{b(i,3)}\ $ or $\ u_{10}^{b(i,10)} \cdot u_{9}^{b(i,9)} \cdot u_{8}^{b(i,8)}\ $ or $\ u_{4}^{b(i,4)} \cdot u_{3}^{b(i,3)} \cdot u_{2}^{b(i,2)} \cdot u_{1}^{b(i,1)}\ $ are herein referred to as "doubles", "triples" or "quadruples", respectively.
Defining Optimized Algorithms
The main strategy of these algorithms is to avoid insisting on immediate updates of the values $s_i$; they differ in their scheduling of when "doubles", "triples" or "quadruples" are computed. Note that "distinct" subproducts refers to those subproducts with no common factor.
Algorithm 1
This algorithm computes new distinct doubles as soon as it is possible to do so. These new doubles are turned into triples in the next immediate IPP round. This process is repeated until all IPP rounds are completed. Only then are the next larger-sized subproducts computed, with consumption of the smallest existing "tuples" given priority.
**Algorithm 1 or [A1]**
%initialize
s[n] = [1,1,1, ... ,1];
k = log_2(n);
%running inside IPP
int main() {
  s[0] = u_k^{-1};
  s[2^{k-1}] = u_k;
  s[2^{k-2}] = s[0];
  s[3*2^{k-2}] = s[2^{k-1}];
  t = k-3;
  for (j = k - 1; j > t; j--) {
    for (i = 0; i < n; i++) {
      if ( i mod 2^j == 0 ) {
        s[i] = s[i]*(u_j)^{b(i,j)};
        l = i + 2^{j-1};
        s[l] = s[l]*(u_j)^{b(l,j)};
      }
    }
  }
  %if k-3 > 0, then the program proceeds as follows:
  s[1] = u_{k-3}^{-1};
  s[1+2^{k-4}] = u_{k-3};
  s[1+2^{k-1}] = s[1];
  s[(1+2^{k-4})+2^{k-1}] = s[1+2^{k-4}];
  %if k-4 > 0, then the program proceeds as follows:
  t = k-6;
  for (j = k-4; j > t; j--) {
    for (i = 0; i < n; i++) {
      if (i mod (1+2^{k-1}) == 0) {
        s[i] = s[i]*(u_j)^{b(i,j)};
        l = i + 2^j;
        s[l] = s[l]*(u_j)^{b(l,j)};
      }
    }
  }
  % program continues forming new and distinct triples until j=1
  % after which all (2^k) "legal" k-tuples are formed
  return 0;
}
Algorithm 2
This algorithm starts exactly like Algorithm 1, by forming only new and distinct doubles and turning them into triples in the next immediate IPP round, but goes beyond the triples by immediately forming the next possible quadruples.
**Algorithm 2 or [A2]**
%initialize
s[n] = [1,1,1, ... ,1];
k = log_2(n);
%running inside IPP
int main() {
  s[0] = u_k^{-1};
  s[2^{k-1}] = u_k;
  s[2^{k-2}] = s[0];
  s[3*2^{k-2}] = s[2^{k-1}];
  t = k-4;
  for (j = k - 1; j > t; j--) {
    for (i = 0; i < n; i++) {
      if ( i mod 2^j == 0 ) {
        s[i] = s[i]*(u_j)^{b(i,j)};
        l = i + 2^{j-1};
        s[l] = s[l]*(u_j)^{b(l,j)};
      }
    }
  }
  %if k-4 > 0, then the program proceeds as follows:
  s[1] = u_{k-4}^{-1};
  s[1+2^{k-4}] = u_{k-4};
  s[1+2^{k-1}] = s[1];
  s[(1+2^{k-4})+(2^{k-1})] = s[1+2^{k-4}];
  %if k-5 > 0, then the program proceeds as follows:
  t = k-8;
  for (j = k-5; j > t; j--) {
    for (i = 0; i < n; i++) {
      if ( i mod (1+2^{k-1}) == 0 ) {
        s[i] = s[i]*(u_j)^{b(i,j)};
        l = i + 2^j;
        s[l] = s[l]*(u_j)^{b(l,j)};
      }
    }
  }
  % program continues forming new and distinct quadruples until j=1
  % after which all (2^k) "legal" k-tuples are formed
  return 0;
}
Algorithm 3
This algorithm computes new distinct doubles as soon as it is possible to do so, until the end of the IPP rounds. The program then forms any possible triples. Largersized subproducts are then computed by firstly consuming the smallest existing "tuples".
**Algorithm 3 or [A3]**
%initialize
s[n] = [1,1,1, ... ,1];
k = log_2(n);
%running inside IPP
int main() {
  s[0] = u_k^{-1};
  s[2^{k-1}] = u_k;
  s[2^{k-2}] = s[0];
  s[2^{k-1} + 2^{k-2}] = s[2^{k-1}];
  t = k-2;
  for (j = k - 1; j > t; j--) {
    for (i = 0; i < n; i++) {
      if ( i mod 2^j == 0 ) {
        s[i] = s[i]*(u_j)^{b(i,j)};
        l = i + 2^{j-1};
        s[l] = s[l]*(u_j)^{b(l,j)};
      }
    }
  }
  %if k-2 > 0, then the program proceeds as follows:
  s[1] = u_{k-2}^{-1};
  s[1+2^{k-1}] = u_{k-2};
  s[1+2^{k-2}] = s[1];
  s[1+3*(2^{k-2})] = s[1+2^{k-1}];
  %if k-3 > 0, then the program proceeds as follows:
  t = k-3;
  for (j = k-3; j > t; j--) {
    for (i = 0; i < n; i++) {
      if ( i mod (1+2^{k-1}) == 0 ) {
        s[i] = s[i]*(u_j)^{b(i,j)};
        l = i + 2^j;
        s[l] = s[l]*(u_j)^{b(l,j)};
      }
    }
  }
  % program continues forming new and distinct doubles until j=1
  % after which all (2^k) "legal" k-tuples are formed
  return 0;
}
Algorithm 4
This algorithm is the same as Algorithm 3 throughout the IPP rounds. However, at the end of the IPP rounds, the program gives preference to the formation of all possible distinct quadruples. Largersized subproducts are then computed by firstly consuming the largest existing "tuples".
**Algorithm 4 or [A4]**
The same pseudocode used for Algorithm 3 applies to Algorithm 4.
Example 1 (Algorithm 1 or [A1])
Let $n = 32$ so that $k = 5$. The coefficients $s_i$ for $i \in \{0, 1, 2, ... , 31 \}$ are the $32$ quintets:
$$ u_5^{b(i,5)} * u_4^{b(i,4)} * u_3^{b(i,3)} * u_2^{b(i,2)} * u_1^{b(i,1)} $$
The number of multiplications per IPP round is given at the bottom of each column of Table 2. The vector of coefficients is initialized to $\mathbf{s} = ( s_0 = 1, s_1 = 1, s_2 = 1, ... , s_{n-1} = 1 )$, and Table 2 only displays the values $s_i$ that have been updated or computed since the initialization of $\mathbf{s}$.
| [A1] for $n=2^5$ | j = 5 | j = 4 | j = 3 | j = 2 | j = 1 |
|---|---|---|---|---|---|
| $s_0$ | $u_5^{-1}$ | $u_5^{-1} * u_4^{-1}$ | $u_5^{-1} * u_4^{-1} * u_3^{-1}$ | | |
| $s_1$ | | | | $u_2^{-1}$ | $u_2^{-1} * u_1$ |
| $s_2$ | | | | | |
| $s_3$ | | | | $u_2$ | $u_2 * u_1^{-1}$ |
| $s_4$ | | | $u_5^{-1} * u_4^{-1} * u_3$ | | |
| $s_5$ | | | | | |
| $s_6$ | | | | | |
| $s_7$ | | | | | |
| $s_8$ | $u_5^{-1}$ | $u_5^{-1} * u_4$ | $u_5^{-1} * u_4 * u_3^{-1}$ | | |
| $s_9$ | | | | | |
| $s_{10}$ | | | | | |
| $s_{11}$ | | | | | |
| $s_{12}$ | | | $u_5^{-1} * u_4 * u_3$ | | |
| $s_{13}$ | | | | | |
| $s_{14}$ | | | | | |
| $s_{15}$ | | | | | |
| $s_{16}$ | $u_5$ | $u_5 * u_4^{-1}$ | $u_5 * u_4^{-1} * u_3^{-1}$ | | |
| $s_{17}$ | | | | $u_2^{-1}$ | $u_2^{-1} * u_1^{-1}$ |
| $s_{18}$ | | | | | |
| $s_{19}$ | | | | $u_2$ | $u_2 * u_1$ |
| $s_{20}$ | | | $u_5 * u_4^{-1} * u_3$ | | |
| $s_{21}$ | | | | | |
| $s_{22}$ | | | | | |
| $s_{23}$ | | | | | |
| $s_{24}$ | $u_5$ | $u_5 * u_4$ | $u_5 * u_4 * u_3^{-1}$ | | |
| $s_{25}$ | | | | | |
| $s_{26}$ | | | | | |
| $s_{27}$ | | | | | |
| $s_{28}$ | | | $u_5 * u_4 * u_3$ | | |
| $s_{29}$ | | | | | |
| $s_{30}$ | | | | | |
| $s_{31}$ | | | | | |
| mult. cost | $0$ | $4$ | $8$ | $0$ | $4$ |
At the end of the $k$ IPP rounds, there are eight distinct triples and four distinct doubles. These are sufficient to form all the required $32$ quintets, and it takes exactly $32$ multiplications to form them all.
The total cost of computing the coefficients for $n = 32$ using Algorithm 1 is therefore $\mathbf{4 + 8 + 4 + 32 = 48}$.
Appendix B: Proof of Theorem 1 and its Preliminaries
The proof of Theorem 1 makes use of a few basic properties of the parity values $b(i,j)$, which are described in this appendix.
Notation and Definitions
The letters $j$, $k$ and $l$ denote non-negative integers, unless otherwise stated. In addition: $n = 2^k$, $i \in \{ 0, 1, 2, ... , n-1 \}$ and $j \in \{ 1, 2, ... , k \}$.
The multiplicative identity of the field $\mathbb{F}_p$ is denoted by $1_{\mathbb{F}_p}$.
The IPP verifier's $j$th challenge is denoted by $u_j$, and its parity exponent is defined by $b(i,j) = \begin{cases} {-1} & {\text{if}\ \ (i \bmod 2^j) < 2^{j-1}} \\ {+1} & {\text{if}\ \ (i \bmod 2^j) \geq 2^{j-1}} \end{cases} $
Preliminary Lemmas and Corollaries
Details of preliminary results needed to complete the proof of Theorem 1 are provided here in the form of lemmas and corollaries.
Proofs of these lemmas and their corollaries follow readily from the definition of the parity exponent $b(i,j)$ of verifier challenges and their multiplicative inverses.
The next lemma is a well-known fact contained in undergraduate mathematics textbooks, mostly in tables of formulas.
Lemma 1
For an indeterminate $x$, we have
$ \ \ x^k - 1 = (x - 1)(x^{k-1} + x^{k-2} + ... + x + 1) $
Corollary 1

1. $2^k - 1\ \ =\ \ 2^{k-1} + 2^{k-2} + \dots + 2 + 1$

2. $2^k - 1\ \ \geq\ \ 2^{k-1}\ \ $ for all $ k \geq 1$

3. $(2^l - 1) \text{ mod } 2^j = 2^{j-1} + 2^{j-2} + \dots + 2 + 1\ \ $ for any $l \geq j$

4. $((n - 1) - 2^{j-1}) \text{ mod } 2^j = 2^{j-2} + 2^{j-3} + \dots + 2 + 1$

5. $i\ \ =\ \ c_{l-1} \cdot 2^{l-1} + c_{l-2} \cdot 2^{l-2} + \dots + c_1 \cdot 2 + c_0\ \ $ for $ l < k $ and some $ c_{l-1}, \dots , c_0 \in \{ 0 , 1 \}$
Lemma 2

1. $b(0,j) = -1$

2. $b(1,1) = +1$

3. $b(n-2,1) = -1$

4. $b(1,j) = -1,\ \ \forall\ \ j > 1$

5. $b(n-1,j) = +1$

6. $b(n-2,j) = +1,\ \ \forall\ \ j > 1$

7. $b( 2^j , j ) = -1$

8. $b( 2^{j-1} , j ) = +1$

9. $b(2^l,j) = -1,\ \ \forall\ \ l < j-1$

10. $b(2^l,j) = -1,\ \ \forall\ \ l > j$

11. $b((n-1)-2^{j-1}, j) = -1$
Corollary 2

1. $b(0,j) = (-1) \cdot b(n-1,j),\ \ \forall\ \ j$

2. $b(i,j) = (-1) \cdot b( (n-1)-i , j ),\ \ \forall\ \ i\ \ \text{and}\ \ \forall\ \ j$

3. $b( 2^{j-1} , j ) = b(n-1,j),\ \ \forall\ \ j$

4. $b(0,j) = b((n-1)-2^{j-1}, j),\ \ \forall\ \ j$
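These parity-exponent identities are easy to confirm by brute force. The following sketch checks the listed parts of Lemma 2 and Corollary 2 for all small values of $k$ directly from the definition of $b(i,j)$:

```python
def b(i, j):
    # parity exponent from the definition above: -1 or +1
    return -1 if (i % 2**j) < 2**(j - 1) else +1

for k in range(1, 9):                                     # n = 2, 4, ..., 256
    n = 2**k
    for j in range(1, k + 1):
        assert b(0, j) == -1                              # Lemma 2, Part (1)
        assert b(n - 1, j) == +1                          # Lemma 2, Part (5)
        assert b(2**(j - 1), j) == +1                     # Lemma 2, Part (8)
        assert b((n - 1) - 2**(j - 1), j) == -1           # Lemma 2, Part (11)
        assert b(2**(j - 1), j) == b(n - 1, j)            # Corollary 2, Part (3)
        assert b(0, j) == b((n - 1) - 2**(j - 1), j)      # Corollary 2, Part (4)
        for i in range(n):
            assert b(i, j) == -b((n - 1) - i, j)          # Corollary 2, Part (2)
```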
Proof of Corollary 2, Part (2)
By induction on $k$, where $j \in \{ 1, 2, 3, \dots , k \}$.
For $j = 1$, where $i$ is even: note that $i \text{ mod } 2^1 = 0 < 2^0 = 1$, and thus $b(i,1) = -1$. On the other hand, $(n-1)-i$ is odd, hence $((n-1)-i) \text{ mod } 2^1 = 1 = 2^0$, so that $b((n-1)-i, 1) = +1$.
For $j = 1$, where $i$ is odd: similarly, $i \text{ mod } 2^1 = 1 = 2^0$ and $b(i,1) = +1$. Since $(n-1)-i$ is even, $((n-1)-i) \text{ mod } 2^1 = 0 < 2^0$, and therefore $b((n-1)-i, 1) = -1$.
This proves the base case, i.e. $b(i,1) = (-1) \cdot b((n-1)-i, 1)$.
Now for $j > 1$: note that by Part (5) of Corollary 1,
$\ \ i \text{ mod } 2^j = c_{j-1} \cdot 2^{j-1} + c_{j-2} \cdot 2^{j-2} + \dots + c_1 \cdot 2 + c_0$
and
$ ((n-1)-i) \text{ mod } 2^j = (2^{j-1} + 2^{j-2} + \dots + 2 + 1) - (c_{j-1} \cdot 2^{j-1} + c_{j-2} \cdot 2^{j-2} + \dots + c_1 \cdot 2 + c_0) $.
Suppose that $b(i, j) = +1$. Then $ c_{j-1} \cdot 2^{j-1} + c_{j-2} \cdot 2^{j-2} + \dots + c_1 \cdot 2 + c_0 \geq 2^{j-1}$, which means $c_{j-1} = 1$. This implies
$ ((n-1)-i) \text{ mod } 2^j = (2^{j-2} + 2^{j-3} + \dots + 2 + 1) - (c_{j-2} \cdot 2^{j-2} + c_{j-3} \cdot 2^{j-3} + \dots + c_1 \cdot 2 + c_0) < 2^{j-1} $,
yielding $b((n-1)-i, j) = -1$. The converse argument is the same.
Suppose that $b(i, j) = -1$. Then $c_{j-1} \cdot 2^{j-1} + c_{j-2} \cdot 2^{j-2} + \dots + c_1 \cdot 2 + c_0 < 2^{j-1}$, which means $c_{j-1} = 0$. This also implies $((n-1)-i) \text{ mod } 2^j \geq 2^{j-1}$. Hence $b((n-1)-i, j) = +1$. Again, the converse argument follows in reverse.
The above two cases prove the inductive step, i.e. $b(i, j) = (-1) \cdot b((n-1)-i, j)$.
Proof of Theorem 1
Theorem 1 (Some properties of the set of coefficients $\{ s_i \}$)

Let $s_i = \prod\limits_{j = 1}^k u_j^{b(i,j)}$ be the coefficient of $G_i$, the $i$th component of the initial IPP input vector $\mathbf{G} = ( G_0 , G_1 , G_2 , ... , G_{n-1})$. Then:

1. $s_i \cdot s_{(n-1) - i} = 1_{\mathbb{F}_p}\ \ $ for all $i \in \{ 0, 1, 2, ... , n-1 \}$

2. $s_{2^{(j-1)}} \cdot s_{n-1} = u_j^2\ \ $ for all $j \in \{ 1, 2, 3, ... , k \}$

3. $s_0 \cdot s_{(n-1) - 2^{(j-1)}} = u_j^{-2}\ \ $ for all $j \in \{ 1, 2, 3, ... , k \}$
Proof

Part (1): By induction on $n$, where $i \in \{ 0, 1, 2, \dots , n-1 \}$.
For $i = 0$: by Part (1) of Corollary 2, $b(0,j) = (-1) \cdot b(n-1,j)$ for all $j$. But this holds true if, and only if, $u_j^{b(0,j)} = \Big( u_j^{b(n-1,j)} \Big)^{-1}$. Hence $s_0 \cdot s_{n-1} = 1_{\mathbb{F}_p}$, proving the base case.
The inductive step: suppose $s_{i-1} \cdot s_{(n-1) - (i-1)} = 1_{\mathbb{F}_p}$.
Now, $$ s_i \cdot s_{(n-1)-i} = \big( s_{i-1} \cdot s_{(n-1) - (i-1)} \big) \cdot u_j^{b(i,j)} \cdot u_j^{b((n-1)-i,j)} .$$ By the inductive hypothesis, this yields $$ s_i \cdot s_{(n-1)-i} = 1_{\mathbb{F}_p} \cdot u_j^{b(i,j)} \cdot u_j^{b((n-1)-i,j)} .$$
According to Part (2) of Corollary 2, $b(i,j) = (-1) \cdot b((n-1)-i,j)$, which holds true if, and only if, $u_j^{b(i,j)} = \Big( u_j^{b((n-1)-i,j)} \Big)^{-1}$. It therefore follows that $ s_i \cdot s_{(n-1)-i} = 1_{\mathbb{F}_p} \cdot u_j^{b(i,j)} \cdot u_j^{b((n-1)-i,j)} = 1_{\mathbb{F}_p} \cdot 1_{\mathbb{F}_p} = 1_{\mathbb{F}_p}$.

Part (2): This part follows readily from Part (3) of Corollary 2.

Part (3): This part also follows readily from Part (4) of Corollary 2.
Contributors
Trustless Recursive Zero-Knowledge Proofs
- Introduction
- Notation and Preliminaries
- Recursive Proofs Composition Overview
- Proof-Carrying Data
- Cycles of Elliptic Curves
- Brief Survey: Recursive Proofs Protocols
- Conclusion
- References
- Appendix A: Pairing Cryptography
- Contributors
Introduction
In pursuit of blockchain scalability, the investigation in this report focuses on the second leg of recursive proofs composition: "proofs that verify other proofs".
Recursive proofs composition is like a huge machinery with many moving parts. For the sake of brevity, the aim here is to present only the most crucial cryptographic tools and techniques required to achieve recursive proofs composition.
The amortization strategies that recursive proofs use to accomplish reduced verification costs are enabled by a powerful tool called Proof-Carrying Data (PCD), first introduced by Alessandro Chiesa in his PhD thesis [4]. The resulting proof system is made efficient and more practical by a technique that utilises cycles of elliptic curves, first realised by Eli Ben-Sasson et al. [2]. These two tools are what this report hinges on.
The details of recursive proofs composition are in the mathematics, so effort is taken to simplify concepts while keeping the technical reader's interests in mind. By the end, the reader will appreciate the power of recursive proofs composition, how it uses PCDs and cycles of elliptic curves, and why it needs the two to achieve significant blockchain scalability.
Notation and Preliminaries
Notation and terminology in this report are standard. However, in order to achieve a clear understanding of the concept of cycles of elliptic curves, a few basic facts about elliptic curves are herein ironed out.
Fields and Elliptic Curves
A field is any set $\mathbb{F}$ of objects upon which addition '$+$' and multiplication '$*$' are defined in a specific way, and such that $( \mathbb{F}, + )$ forms an abelian group and $( \mathbb{F} \backslash \{ 0 \} , * )$ also forms an abelian group of nonzero elements with an identity $1_{\mathbb{F}}$. The most common notation for $ \mathbb{F} \backslash \{ 0 \} $ is $ \mathbb{F}^{\times}.$ A more elaborate definition of a field can be found here.
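As a quick sanity check of the definition, the following snippet verifies the distinguishing requirement that every nonzero element has a multiplicative inverse, which holds for $\mathbb{Z}_7$ but fails for $\mathbb{Z}_6$ (the moduli here are illustrative choices):

```python
# Check that every nonzero element of Z_n has a multiplicative inverse:
# this is the property distinguishing the field F_p = Z_p (p prime)
# from a mere ring Z_n with n composite.
def is_field(n):
    return all(any((a * b) % n == 1 for b in range(1, n))
               for a in range(1, n))

print(is_field(7), is_field(6))  # True False
```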
Note that in blockchain research papers, an elliptic curve often refers to what is actually an elliptic curve group. This can cause confusion, especially when talking about the scalar field as opposed to the base field.

In mathematics, an elliptic curve $E$ over a field $\mathbb{F}$ generally refers to a locus, that is, the curve formed by all the points satisfying a given equation, for example an equation of the form $y^2 = x^3 - ax + b$, where $a$ and $b$ are elements of some field $\mathbb{F}$, say the rationals $\mathbb{Q}$, the reals $\mathbb{R}$ or the complex numbers $\mathbb{C}$. Such a field $\mathbb{F}$ is referred to as the base field for $E$.

The fields $\mathbb{Q}$, $\mathbb{R}$ and $\mathbb{C}$ are infinite sets and are thus not useful for cryptographic purposes. In cryptography, base fields of practical use are finite and preferably of a large prime order. This ensures that the discrete log problem is sufficiently difficult, making the cryptosystem secure against common cryptanalytic attacks.

Note that every finite field has order that is either a prime or a power of a prime. So any finite field $\mathbb{F}$ is either $\mathbb{F}_p$ or $\mathbb{F}_{p^n}$ for some prime number $p$; see [6, Lemma 3.19] for the proof of this fact. It can moreover be shown that the orders of their respective multiplicative groups $\mathbb{F}_p^{\times}$ and $\mathbb{F}_{p^n}^{\times}$ are $p-1$ and $p^n-1$ [6, Proposition 6.1].

An elliptic curve group $E(\mathbb{F})$ is formed by first defining 'addition' of elliptic curve points, picking a point $(x,y)$ on the curve $E$ and using it to generate a cyclic group by doubling it, $2 \cdot (x,y)$, and forming all possible scalar multiples $\alpha \cdot (x,y)$. The group 'addition' of points and doubling are illustrated in Figure 1 below. All these points generated by $(x,y)$ together with the point at infinity, $ \mathcal{O}$, form an algebraic group under the defined 'addition' of points.
Once an elliptic curve group $E(\mathbb{F})$ is defined, the scalar field can be described. The scalar field is the field whose order equals that of the largest cyclic subgroup of the elliptic curve group $E(\mathbb{F})$. So, if the order of the elliptic curve group is a prime $p$, then the scalar field is $\mathbb{F}_p$. The order of the elliptic curve group $E(\mathbb{F})$ is denoted by $\# E(\mathbb{F})$.
Ultimately, unlike an elliptic curve $E$ over a general field $\mathbb{F}$, an elliptic curve group $E(\mathbb{F})$ is discrete, consisting of only a finite number of points. The sizes of these groups are bounded by what is known as the Hasse bound [12]. Algorithms used to compute these sizes are also known, see [13].
Example 1
Consider the curve $E$ of points satisfying the equation $$y^2 = x^3 - 43x + 166$$ Pick the point $$(x,y) = (3,8)$$ which is on the curve $E$ because $$y^2 = 8^2 = 64$$ and $$3^3 - 43(3) + 166 = 64$$ Doubling yields $$2 \cdot (3,8) = (-5, -16)$$ and the rest of the scalar multiples are $$3\cdot (3,8) = (11, -32) $$ $$4\cdot (3,8) = (11, 32) $$ $$5\cdot (3,8) = (-5, 16) $$ $$6\cdot (3,8) = (3, -8) $$ and $$7\cdot (3,8) = \mathcal{O} $$ Note that the entire elliptic curve group is $$E(\mathbb{Q}) = \{ \mathcal{O}, (3,8), (-5, -16), (11, -32), (11, 32), (-5, 16), (3, -8) \} $$ which is a cyclic group of order $7$ generated by the point $(3,8)$. Since $7$ is a prime number, the largest subgroup of $E(\mathbb{Q})$ is of order $7$. It follows that the scalar field of $E$ is $ \mathbb{F}_7 = \mathbb{Z}_7$ while $\mathbb{Q}$ is the base field. See [10] for full details on this example.
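The scalar multiples in Example 1 can be reproduced with exact rational arithmetic. The sketch below implements the standard chord-and-tangent formulas in plain Python, assuming the curve equation $y^2 = x^3 - 43x + 166$ from Example 1:

```python
from fractions import Fraction

# Curve E: y^2 = x^3 - 43x + 166 over the rationals (Example 1).
A, B = Fraction(-43), Fraction(166)
INF = None  # the point at infinity O

def ec_add(P, Q):
    """Chord-and-tangent addition of two points on E(Q)."""
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return INF                             # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) / (2 * y1)     # tangent slope (doubling)
    else:
        lam = (y2 - y1) / (x2 - x1)            # chord slope (addition)
    x3 = lam * lam - x1 - x2
    y3 = lam * (x1 - x3) - y1
    return (x3, y3)

def scalar_mul(k, P):
    """Compute k*P by repeated addition."""
    R = INF
    for _ in range(k):
        R = ec_add(R, P)
    return R

P = (Fraction(3), Fraction(8))
for k in range(1, 8):
    print(k, scalar_mul(k, P))  # 2P = (-5, -16), ..., 7P = O (None)
```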
In accordance with literature, an elliptic curve group $E(\mathbb{F})$ will henceforth be referred to as an elliptic curve. And unless otherwise stated, the base field will be a field of a large prime order $p$, denoted by $\mathbb{F}_p$.
Arithmetic Circuits and R1CS
A zero-knowledge proof typically involves two parties, the prover $\mathcal{P}$ and the verifier $\mathcal{V}$. The prover $\mathcal{P}$ has to convince the verifier $\mathcal{V}$ that he knows the correct solution to a given computational problem without disclosing the solution itself. The prover thus has to produce a proof $\pi$ that attests to his knowledge of the correct solution, and the verifier must be able to check its veracity without accepting false proofs.
Arithmetic circuits are computational models commonly used when solving NP statements. The general process for a zero-knowledge proof system is to convert the computation or statement being proved into an arithmetic circuit $\mathcal{C}$, and further encode the circuit into an equivalent constraint system. These three are all equivalent in the sense that the proof $\pi$ satisfies the constraint system if and only if it satisfies $\mathcal{C}$, and the circuit $\mathcal{C}$ is satisfied if and only if the prover has the correct solution to the original computational problem.
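As a toy illustration of such an encoding (not tied to any particular proof system's format), a single rank-1 constraint $\langle A, s\rangle \cdot \langle B, s\rangle = \langle C, s\rangle$ can enforce $x \cdot x = out$ over a witness vector $s = (1, x, out)$; the field modulus below is an arbitrary illustrative choice:

```python
# Toy R1CS: one constraint <A,s> * <B,s> = <C,s> enforcing x * x = out.
# Witness layout: s = (1, x, out). Arithmetic is over a small prime field.
P = 97  # illustrative field modulus

A = [0, 1, 0]   # selects x
B = [0, 1, 0]   # selects x
C = [0, 0, 1]   # selects out

def dot(row, s):
    return sum(r * v for r, v in zip(row, s)) % P

def satisfied(s):
    """Check the single R1CS constraint against witness s."""
    return (dot(A, s) * dot(B, s)) % P == dot(C, s)

print(satisfied([1, 3, 9]))    # True: 3 * 3 = 9
print(satisfied([1, 3, 10]))   # False
```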
Recursive Proofs Composition Overview
In their recent paper [1], Benedikt Buenz et al. report that "recursive proofs composition has been shown to lead to powerful primitives such as incrementally-verifiable computation (IVC) and proof-carrying data (PCD)", thus recognising the two main components of recursive proofs composition: IVC and PCD. The former was adequately investigated in [9], and the latter is now the focus of this report.
Verification Amortization Strategies
These strategies are briefly mentioned here but their detailed descriptions can be found in [9].
Verifiable Computation allows a verifier to delegate expensive computations to untrusted third parties and be able to check correctness of the proofs these third parties submit.
Inductive proofs take advantage of whatever recursion there may be in a computational problem. Due to the Principle of Mathematical Induction, a verifier need only check correctness of the "base step" and the "inductive step", making this a powerful tool when it comes to saving verification costs.
Incrementally Verifiable Computation is the strategy where, in addition to delegating computations to several untrusted parties, the verifier does not execute verification each time he receives a proof from a third party, but rather collects these proofs and executes only a single verification at the end.
Nested Amortization: the strategy here is to reduce the cost of an expensive computation to a sublinear cost (logarithmic relative to the cost of the original computation) by collapsing the cost of two computations into the cost of one.
Proof-Carrying Data
Recursive proofs composition uses an abstraction called proof-carrying data (PCD) when dealing with distributed computations. PCDs are powerful tools that enable practical implementation of the above verification amortization strategies.
What is a PCD?
Proof-Carrying Data (PCD) is a cryptographic mechanism that allows proof strings $\pi_i$ to be carried along with messages $m_i$ in a distributed computation [15].
Such proofs $\pi_i$ attest to the fact that their corresponding messages $m_i$ as well as the history leading to the messages comply with a specified predicate. The assumption here is that there are specific invariant properties that all propagated messages need to maintain.
An obvious example of a distributed computation in PoW blockchains is mining. And of course, the integrity of any PoW blockchain relies on the possibility of proper validation of the PoW.
Trustlessness via PCDs
A PCD achieves trustlessness in the sense that untrusted parties carry out distributed computations, and a protocol compiler $\mathbf{\Pi}$ is used to enforce compliance with a predicate specified by the proof system designer. Such a compiler is typically defined as a function $$\mathbf{\Pi}(m_i, m_{loc,i}, \mathbf{m}_{inc} ) \in \{ 0, 1 \}$$ taking as inputs a newly formed message $m_i$ at node $i$, local data $m_{loc,i}$ known only to node $i$, and a vector $\mathbf{m}_{inc}$ of all incoming messages received by node $i$.
Since a PCD is equipped with a protocol compiler $\mathbf{\Pi}$ that ensures that every message is predicate-compliant, it enables trustless cryptographic proof systems for two reasons:
 mutually distrustful parties can perform distributed computations that run indefinitely, and
 due to proof strings attached to all previous messages, any node $j$ can verify any intermediate state of the computation and propagate a new message $m_j$ with its proof string $\pi_j$ attached to it.
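To make the compliance-predicate idea concrete, here is a minimal sketch in Python. The message format, the invariant (a running total), and all names are hypothetical illustrations, not part of any actual PCD construction:

```python
# Hypothetical sketch of a PCD compliance predicate PI(m_i, m_loc, m_inc).
# The invariant enforced here is purely illustrative: every message carries
# a running total that must equal the sum of all incoming totals plus the
# node's own local contribution.

def compliance_predicate(m_new, m_loc, m_inc):
    """Return 1 if the new message complies with the invariant, else 0."""
    expected = sum(m["total"] for m in m_inc) + m_loc["contribution"]
    return 1 if m_new["total"] == expected else 0

incoming = [{"total": 5}, {"total": 7}]   # messages received by node i
local = {"contribution": 3}               # node i's private local data
new_msg = {"total": 15}                   # message node i wants to propagate

print(compliance_predicate(new_msg, local, incoming))  # 1 (compliant)
```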
It now becomes clear how recursive proofs composition accomplishes blockchain scalability. Any new node can take advantage of IVC and a PCD to succinctly verify the current state of the blockchain without concern about the chain's historical integrity.
The PCD abstraction is no doubt the very secret to achieving blockchain scaling especially via recursive proofs composition.
Cycles of Elliptic Curves
Given the above discussion on PCDs, note that the cryptographic technique of using a cycle of elliptic curves is not so much about scalability as it is about the efficiency of arithmetic circuits. The main trick is to exploit the proof system's field structure.
Why a Cycle of Elliptic Curves?
Why, then, is there a need for a second elliptic curve, warranting the use of a cycle of elliptic curves?
The reason a second elliptic curve is needed lies in the inherent field structure of elliptic curves. There are two practical aspects to consider carefully when choosing fields for the underlying elliptic curve cryptosystem of a blockchain:
 firstly, the best field arithmetic for arithmetic circuits that are instantiated with $E(\mathbb{F}_q)$, and
 secondly, the possible or mathematically permissible orders of the scalar field of $E(\mathbb{F}_q)$.
The Native Field Arithmetic
When instantiating a proof system's arithmetic circuit using an elliptic curve $E(\mathbb{F}_q)$, it is important to take cognizance of the two types of field arithmetic involved: the base field arithmetic and the scalar field arithmetic.
For an elliptic curve $E(\mathbb{F}_p)$, where for simplicity $p$ is a prime:
 the base field is $\mathbb{F}_p$ and so its arithmetic consists of addition modulo $p$ and multiplication modulo $p$,
 the scalar field is $\mathbb{F}_r$, where $r$ is the prime order of the largest subgroup of $E(\mathbb{F}_p)$, so in this case the arithmetic consists of addition modulo $r$ and multiplication modulo $r$.
The question now is which arithmetic is better to use in a circuit instantiated with $E(\mathbb{F}_p)$? The next example makes the choice obvious.
Example 2
Consider the elliptic curve $E(\mathbb{Q})$ from Example 1. Since $E(\mathbb{Q})$ is isomorphic to $\mathbb{F}_7$, group addition of two elements in $E(\mathbb{Q})$ amounts to adding their scalars modulo $7$. That is, $$\ \ \alpha \cdot (3,8) + \beta \cdot (3,8) = ((\alpha + \beta) \text{ mod } 7) \cdot (3,8)$$ For example, if $\alpha = 5$ and $\beta = 6$, group addition of two points is carried out as follows $$\ \ 5 \cdot (3,8) + 6 \cdot (3,8) = ( 11 \text{ mod } 7) \cdot (3,8) = 4 \cdot (3,8)$$ It follows then that when instantiating any circuit using $E(\mathbb{Q})$, the arithmetic of the scalar field $ \mathbb{F}_7 $ is more natural to use than the base field's.
Remark: For practical purposes the order of the base field is always a large prime, and thus the infinite field of rationals $\mathbb{Q}$ is never used. Yet, even in cases where the base field is finite and of prime order, a more native arithmetic is that of the scalar field.
The Order of the Scalar Field
For any finite field $\mathbb{F}_q$, as noted earlier here, either $q = p$ or $q = p^n$ for some prime $p$. Also, either $$\# (\mathbb{F}_q^{\times}) = p - 1\ \ \text{ or }\ \ \# (\mathbb{F}_q^{\times}) = p^n - 1$$ Now, what about the order of the scalar field, $\# (\mathbb{F}_r)$?
By definition, the scalar field $\mathbb{F}_r$ is isomorphic to the largest cyclic subgroup of the elliptic curve group $E(\mathbb{F}_q)$. Also, according to a well-known result called Lagrange's Theorem, $$\# (\mathbb{F}_r) \ \ \text{ divides }\ \ \# E(\mathbb{F}_q) $$ In pairing-based cryptography, and for security reasons, $\mathbb{F}_r$ is chosen such that $r$ divides $q^k - 1$, where $k > 1$ is the smallest integer for which this holds. The value $k$ is referred to as the embedding degree of $E(\mathbb{F}_q)$, a concept discussed later here.
It is thus mathematically impossible for the scalar field $\mathbb{F}_r$ of such an elliptic curve $E(\mathbb{F}_q)$ to have the same order as the base field $\mathbb{F}_q$: if $r = q$, then $q$ would have to divide both $q^k$ and $q^k - 1$.
No Prover-Verifier Dichotomy
In the envisaged proofofproofs system using recursive proofs composition, the tremendous accomplishment is to allow every participant to simultaneously be a prover and a verifier. However, this presents a serious practical problem.
Note the following common practices when implementing proof systems (gleaning information from [15]):
 The proof system is instantiated with an elliptic curve $E$ over a base field $\mathbb{F}_q$
 The most natural field arithmetic for the arithmetic circuit is the scalar field's, i.e. $\mathbb{F}_r$-arithmetic
 When computing operations on elliptic curve points inside the proof system verifier, the base field arithmetic is used; i.e., the verifier uses $\mathbb{F}_q$-arithmetic.
Normally, when there is a clear dichotomy between a prover and a verifier, the use of two distinct field arithmetics would not be an issue.
"But here we are encoding our verifier inside of our arithmetic circuit; thus, we will need to simulate $\mathbb{F}_q$ arithmetic inside of $\mathbb{F}_r$ arithmetic," as Straka clarifies in [15].
Basically, the problem is that there is no efficient way to use both field arithmetics when there is no prover-verifier dichotomy.
The ideal solution would be choosing the elliptic curve $E(\mathbb{F}_q)$ such that $q = r$. However, as observed above here, it is mathematically impossible for $\# (\mathbb{F}_r)$ to equal $\# (\mathbb{F}_q)$.
The adopted solution is to find a second elliptic curve, say $E(\mathbb{F}_r)$, with scalar field $\mathbb{F}_q$ if possible. This is the reason why pairing-friendly curves, and more recently Amicable pairs of elliptic curves, have come to be deployed in zero-knowledge proof systems such as zk-SNARKs.
The proof system aimed at is illustrated in Figure 3 below.
Pairing-friendly Elliptic Curves
It was Groth [18] who first constructed "a pairing-based (preprocessing) SNARK for arithmetic circuit satisfiability, which is an NP-complete language". But when it comes to a scalable zero-knowledge proof system, it was Ben-Sasson et al. [2] who first presented a practical recursive proofs composition using a cycle of elliptic curves.
Definition 1: The embedding degree of an elliptic curve $E(\mathbb{F}_q)$ is the smallest positive integer $k$ such that $r$ divides $q^k - 1$, where $r$ is the order of the largest cyclic subgroup of $E(\mathbb{F}_q)$.
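Since the embedding degree is just the multiplicative order of the base field size modulo $r$, it is easy to compute for small parameters. The sketch below assumes $\gcd(q, r) = 1$; the values $r = 19$, $q = 13$ are a small illustrative pair:

```python
def embedding_degree(r, q):
    """Smallest k >= 1 such that r divides q^k - 1, i.e. the
    multiplicative order of q modulo r. Assumes gcd(q, r) = 1,
    so the loop is guaranteed to terminate."""
    k, acc = 1, q % r
    while acc != 1:
        acc = (acc * q) % r
        k += 1
    return k

# For a subgroup of order r = 19 on a curve over F_13,
# the embedding degree works out to 18:
print(embedding_degree(19, 13))  # 18
```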
Definition 2: For secure implementation of pairing-based cryptographic systems, elliptic curves with small embedding degree $k$ and large prime-order subgroups are used. Such elliptic curves are called pairing-friendly.
According to Freeman et al. [14], "pairing-friendly curves are rare and thus require specific constructions." In the same paper, the authors furnish what they call a "single coherent framework" of constructions of pairing-friendly elliptic curves.
The Coda blockchain [19] is an example of a deployed protocol using the Ben-Sasson approach in [2]. It uses pairing-friendly MNT curves of embedding degrees 4 and 6.
Amicable Pairs of Elliptic Curves
According to Bowe et al. in [8], pairing-based curves of small embedding degree, like the MNT constructions used in Coda, require curves of size approaching 800 bits for a 128-bit security level.
Previously, constructions of pairing-friendly curves were mostly restricted to embedding degrees $k \leq 6$, until Barreto and Naehrig constructed curves of prime order and embedding degree $k = 12$ in [20].
Some researchers, such as Chiesa et al. [21], do not make much of a distinction between a cycle of pairing-friendly elliptic curves and an Aliquot cycle of elliptic curves (to be defined below). This is perhaps due to the fact that, unlike pairing-friendly curves, Aliquot cycles are more concerned with elliptic curves of prime order.
The minimum required properties for a second elliptic curve to form an Amicable pair with the elliptic curve group $E(\mathbb{F}_q)$ are:
 the second curve must also be of a large prime order,
 it must be compatible with the first curve in the sense that the verifier operations can be efficiently carried out in it, and
 its Discrete Log Problem must be comparably difficult to that in the first curve $E(\mathbb{F}_q)$.
An elliptic curve $E$ over a field $\mathbb{F}$ is said to have good reduction at a prime $p$ if the elliptic curve group $E(\mathbb{F}_p)$ has all the above-mentioned properties.
Definition 3: [16] An Aliquot Cycle of an elliptic curve $E$ over $\mathbb{Q}$ refers to a sequence of distinct primes $(p_1, p_2, \dots , p_l)$ such that $E$ has good reduction at each $p_i$ and $$ \# E(\mathbb{F}_{p_{1}}) = p_2 ,\ \ \# E(\mathbb{F}_{p_{2}}) = p_3 ,\ \ \dots\ \ , \# E(\mathbb{F}_{p_{l-1}}) = p_l ,\ \ \# E(\mathbb{F}_{p_{l}}) = p_1 $$
An Aliquot cycle as defined above has length $l$.
Definition 4: [16] An Amicable Pair of an elliptic curve $E$ over $\mathbb{Q}$ is any pair of primes $(p, q)$ such that $E$ has good reduction at $p$ and $q$ such that $$\# E(\mathbb{F}_p) = q \text{ } \text{ and } \text{ } \# E(\mathbb{F}_q) = p $$
Thus an Amicable pair is basically an Aliquot cycle of length $l = 2$. That is, an Aliquot cycle of only two elliptic curves.
Depending on the curve at hand, and unlike pairing-friendly curves, some curves have a large number of amicable pairs. For instance, in [16], Silverman and Stange report that the curve $y^2 = x^3 + 2$ has more than $800$ amicable pairs formed from prime numbers less than $10^6$.
See Figure 3 above for a simplified depiction of a recursive proofs system using an Amicable pair of elliptic curves.
Brief Survey: Recursive Proofs Protocols
There are two recursive proofs protocols using amicable pairs of elliptic curves that are of keen interest: Coda and Halo. The Sonic protocol is mentioned here because it is a close predecessor of Halo, and it utilises a few Bulletproofs techniques.
Coda Protocol
The Coda Protocol seems to be more successful at scalability than Halo, though the two have fundamental differences. It is claimed in [19] that Coda can handle a throughput of thousands of transactions per second. This could perhaps be attributed to its architecture: a decentralized ledger instead of a typical blockchain.
Coda follows Ben-Sasson et al.'s approach to cycles of curves by constructing two SNARKs, Tic and Toc, that verify each other. Note that this means the recursive proofs composition circuit, as seen in Figure 3, is actually a two-way circuit.
The main disadvantage of Coda is that it uses a trusted setup. Also, to achieve 128-bit security at low embedding degrees, it requires 750-bit-sized curves.
Sonic Protocol
The Sonic protocol aims at providing zero-knowledge arguments of knowledge for the satisfiability of constraint systems representing NP-hard languages [23]. Its constraint system is defined with respect to a bivariate polynomial used in Bulletproofs, originally designed by Bootle et al. [24].
It is basically a zk-SNARK that uses an updatable Structured Reference String (SRS), achieving non-interaction via the Fiat-Shamir transformation. The SRS is made updatable not only for continual strengthening, but also to allow reusability. Unlike the SRS used in Groth's scheme [18], which grows quadratically with the size of its arithmetic circuit, Sonic's grows only linearly.
Sonic is built from a polynomial commitment scheme and a signature of correct computation. The latter primitive seems to achieve what a PCD does, allowing a third party helper to provide solutions to a computation as well as proof that the computation was correctly carried out.
Lastly, Sonic makes use of an elliptic curve construction known as BLS12-381 in order to achieve at least 128-bit security [23].
Halo Protocol
Although Halo is a variant of Sonic, the two differ mainly in that Halo uses no trusted setup. It nevertheless inherits all the Bulletproofs techniques that Sonic employs, including the polynomial commitment scheme and the constraint system.
Halo has been adjusted to leverage the nested amortization technique. It also uses a modified version of the Bulletproofs inner-product proof, appropriately adapted to suit its polynomial commitment scheme and constraint system [11].
An Amicable pair of 255-bit curves, named Tweedledee and Tweedledum, is employed in Halo; see [11, Section 6.1]. Again, the target security level is 128-bit.
In [1], the authors present a collection of results that establish the theoretical foundations for a generalization of the approach used in Halo.
Halo is no doubt the closest recursive proofs protocol to Bulletproofs, and hence of keen interest to Tari for possibly developing a recursive proofs system that achieves scaling of the Tari Blockchain.
Conclusion
Recursive proofs composition is not only fascinating but also powerful, as evidenced by real-life applications such as the Coda Protocol and the Halo Protocol.
This report covers the theory needed to understand what recursive proofs composition is, what it entails, how its various components work, and why the technique of cycles of curves is necessary.
Thus it still remains to consider the feasibility of designing and implementing a recursive proofs composition that can achieve significant scalability for the Tari Blockchain or Tari DAN.
Since Curve25519 has an enormous embedding degree, on the order of $10^{75}$, it is not known whether the technique of using Amicable pairs could directly apply to the Bulletproofs setting. Discussions have already begun on using 'half-pairing-friendly' cycles; that is, requiring only one curve in a cycle of curves to be pairing-friendly [11, Section 7].
The big takeaway from this research work is the PCD and what it is capable of. Buenz et al. say, "PCD supports computations defined on (possibly infinite) directed acyclic graphs, with messages passed along directed edges" [1]. This, together with the Coda example, implies that an ingenious combination of a DLT and a PCD for Layer 2 could achieve immense blockchain scalability.
References
[1] B. Buenz, P. Mishra, A. Chiesa and N. Spooner, "Proof-Carrying Data from Accumulation Schemes", May 25, 2020 [online]. Available: https://eprint.iacr.org/2020/499.pdf. Date accessed: 2020‑07‑01.
[2] E. Ben-Sasson, A. Chiesa, E. Tromer, and M. Virza, "Scalable Zero Knowledge via Cycles of Elliptic Curves", September 18, 2016 [online]. Available: https://eprint.iacr.org/2014/595.pdf. Date accessed: 2020‑07‑01.
[3] A. Chiesa and E. Tromer, "Proof-Carrying Data and Hearsay Arguments from Signature Cards", ICS 2010 [online]. Available: https://people.eecs.berkeley.edu/~alexch/docs/CT10.pdf. Date accessed: 2020‑07‑01.
[4] A. Chiesa, "Proof-Carrying Data," PhD Thesis, MIT, June 2010. [online]. Available: https://pdfs.semanticscholar.org/6c6b/bf89c608c74be501a6c6406c976b1cf1e3b4.pdf. Date accessed: 2020‑07‑01.
[5] B. Buenz, A. Chiesa, P. Mishra and N. Spooner, "Proof-Carrying Data from Accumulation Schemes", May 25, 2020 [online]. Available: https://eprint.iacr.org/2020/499.pdf. Date accessed: 2020‑07‑06.
[6] A. Landesman, "NOTES ON FINITE FIELDS", [online]. Available: https://web.stanford.edu/~aaronlan/assets/finitefields.pdf. Date accessed: 2020‑07‑06.
[7] E. BenSasson, "The ZKP Cambrian Explosion and STARKs" November 2019 [online]. Available: https://starkware.co/decks/cambrian_explosion_nov19.pdf. Date accessed: 2020‑07‑05.
[8] S. Bowe, J. Grigg and D. Hopwood, "Halo: Recursive Proof Composition without a Trusted Setup" [online]. Available: https://pdfs.semanticscholar.org/83ac/e8f26e6c57c6c2b4e66e5e81aafaadd7ca38.pdf. Date accessed: 2020‑07‑05.
[9] Tari Labs University, "Amortization of Bulletproofs Inner-product Proof", May 2020 [online]. Available: https://tlu.tarilabs.com/cryptography/amortizationbpipp/mainreport.html#amortizationofbulletproofsinnerproductproof. Date accessed: 2020‑07‑06.
[10] P. Bartlett, "Lecture 9: Elliptic Curves, CCS Discrete Math I", 2014 [online]. Available: http://web.math.ucsb.edu/~padraic/ucsb_2014_15/ccs_discrete_f2014/ccs_discrete_f2014_lecture9.pdf. Date accessed: 2020‑07‑06.
[11] S. Bowe, J. Grigg and D. Hopwood, "Halo: Recursive Proof Composition without a Trusted Setup", Electric Coin Company, 2019 [online]. Available: https://pdfs.semanticscholar.org/83ac/e8f26e6c57c6c2b4e66e5e81aafaadd7ca38.pdf. Date accessed: 2020‑07‑06.
[12] M.C. Welsh, "ELLIPTIC CURVE CRYPTOGRAPHY" REUpapers, 2017, [online]. Available: http://math.uchicago.edu/~may/REU2017/REUPapers/CoatesWelsh.pdf. Date accessed: 2020‑07‑07.
[13] A.V. Sutherland, "Elliptic Curves, Lecture 9 Schoof's Algorithm", Spring 2015, [online]. Available: https://math.mit.edu/classes/18.783/2015/LectureNotes9.pdf. Date accessed: 2020‑07‑07.
[14] D. Freeman, M. Scott and E. Teske, "A TAXONOMY OF PAIRING-FRIENDLY ELLIPTIC CURVES", [online]. Available: https://eprint.iacr.org/2006/372.pdf. Date accessed: 2020‑07‑07.
[15] M. Straka, "Recursive Zero-knowledge Proofs: A Comprehensive Primer" [online], 2019‑12‑08. Available: https://www.michaelstraka.com/posts/recursivesnarks/. Date accessed: 2020‑07‑07.
[16] J.H. Silverman and K.E. Stange, "Amicable pairs and aliquot cycles for elliptic curves" [online], 2019‑12‑08. Available: https://arxiv.org/pdf/0912.1831.pdf. Date accessed: 2020‑07‑08.
[17] A. Menezes, "An Introduction to Pairing-Based Cryptography" [online]. Available: https://www.math.uwaterloo.ca/~ajmeneze/publications/pairings.pdf. Date accessed: 2020‑07‑11.
[18] J. Groth, "On the Size of Pairing-based Non-interactive Arguments" [online]. Available: https://eprint.iacr.org/2016/260.pdf. Date accessed: 2020‑07‑12.
[19] I. Meckler and E. Shapiro, "Coda: Decentralized cryptocurrency at scale" O(1) Labs whitepaper. May 10, 2018. [online]. Available: https://cdn.codaprotocol.com/v2/static/codawhitepaper051020180.pdf. Date accessed: 2020‑07‑12.
[20] P.S.L.M. Barreto and M. Naehrig, "Pairing-Friendly Elliptic Curves of Prime Order" [online]. Available: https://eprint.iacr.org/2005/133.pdf. Date accessed: 2020‑07‑12.
[21] A. Chiesa, L. Chua and M. Weidner, "On cycles of pairing-friendly elliptic curves" [online]. Available: https://arxiv.org/pdf/1803.02067.pdf. Date accessed: 2020‑07‑12.
[22] S. Bowe, "Halo: Recursive Proofs without Trusted Setups (video)", Zero Knowledge Presentations Nov 15, 2019 [online]. Available: https://www.youtube.com/watch?v=OhkHDw54C04. Date accessed: 2020‑07‑13.
[23] M. Maller, S. Bowe, M. Kohlweiss and S. Meiklejohn, "Sonic: Zero-Knowledge SNARKs from Linear-Size Universal and Updateable Structured Reference Strings," Cryptology ePrint Archive: Report 2019/099. Last revised July 8, 2019. [online]. Available: https://eprint.iacr.org/2019/099
[24] J. Bootle, A. Cerulli, P. Chaidos, J. Groth and C. Petit, "Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting" [online]. Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 327‑357. Springer, 2016. Available: https://eprint.iacr.org/2016/263.pdf. Date accessed: 2019‑12‑21.
Appendix A: Pairing Cryptography
Pairing-based cryptographic systems are defined on pairings such as the Weil pairing and the Tate pairing, all characterised by a bilinear mapping defined on a pair of groups: an additively-written group containing a special element $\mathcal{O}$, and a multiplicative group.
Although applications of pairing-based cryptography were known for a long time in the form of one-round three-party key agreements, they became more popular with the emergence of identity-based encryption.
Let $n$ be a prime number, $P$ a generator of an additivelywritten group $G_1 = ⟨P⟩$ with identity $\mathcal{O}$, and $G_T$ a multiplicativelywritten group of order $n$ with identity $1$.
Definition A1: [17]
A bilinear pairing on $(G_1, G_T)$ is a map
$$ \hat{e} : G_1 \times G_1 \to G_T$$
satisfying the following properties,
(a) (bilinear) For all $R$, $S$, $T \in G_1$, $\ \ \hat{e} (R + S,T) = \hat{e} (R,T) \hat{e}(S,T)\ \ $ and
$\ \ \hat{e} ( R , S + T ) = \hat{e} ( R , S ) \hat{e} ( R , T )$
(b) (non-degeneracy) $\hat{e} (P, P ) \not= 1$
(c) (computability) The map $ \hat{e}$ can be efficiently computed.
One of the most important properties of the bilinear map $\hat{e}$ is that, for all $a$,$b$ $\in \mathbb{Z}$, $$ \hat{e}(aS,bT) = \hat{e}(S,T)^{ab}\ $$
Take the Boneh-Lynn-Shacham short signature, or BLS signature, as an example of a pairing-based cryptographic primitive.
Example A1: The BLS signature uses a bilinear map $\hat{e}$ on $(G_1, G_T)$ for which the Diffie-Hellman Problem is intractable. Say Alice wants to send a message to Bob with an attached signature.
 Alice randomly selects an integer $a \in [ 1, n-1 ]$ and creates a public key $A = aP$, where $P$ is a generator of the group $G_1$.
 Alice's BLS signature on a message $m \in \{ 0, 1 \}^n$ is $S = aM$, where $M = H(m)$ and $H$ is a hash function $H : \{ 0, 1 \}^n \to G_1 \backslash \{ \mathcal{O} \}$.
How can Bob or any party verify Alice's signature?
 By first computing $M = H(m)$ and then checking whether $\hat{e}(P,S) = \hat{e}(A,M)$
The verification works because $$\hat{e}(P,S) = \hat{e}(P, aM) = \hat{e}(P,M)^a$$ and $$\hat{e}(A,M) = \hat{e}(aP,M) = \hat{e}(P,M)^a $$
See the diagram below that illustrates the verification.
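The verification identity can also be exercised with a toy (and completely insecure) bilinear map: take $G_1 = (\mathbb{Z}_n, +)$ with generator $P = 1$, and $\hat{e}(x, y) = g^{xy} \bmod p$, where $g$ has order $n$ modulo $p$. All parameter values below are illustrative only, not drawn from any real BLS deployment:

```python
# Toy bilinear pairing, for illustration only (NOT cryptographically
# secure): G1 = (Z_n, +) with generator P = 1, and GT the order-n
# subgroup of Z_p^* generated by g, where n divides p - 1.
# Then e(x, y) = g^(x*y) mod p is bilinear and non-degenerate.
n, p, g = 11, 23, 2          # 2 has multiplicative order 11 modulo 23

def e(x, y):
    return pow(g, (x * y) % n, p)

# BLS-style signing over this toy pairing:
a = 7                         # Alice's secret key
A = a % n                     # public key A = a*P  (with P = 1)
M = 5                         # stand-in for the hashed message H(m)
S = (a * M) % n               # signature S = a*M

# Verification: e(P, S) == e(A, M), since both equal g^(a*M).
print(e(1, S) == e(A, M))     # True
```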
Contributors
Consensus Mechanisms
Purpose
Consensus mechanisms "are crucial for a blockchain in order to function correctly. They make sure everyone uses the same blockchain" [1].
Definitions

From Investopedia: A consensus mechanism is a fault-tolerant mechanism that is used in computer and blockchain systems to achieve the necessary agreement on a single data value or a single state of the network among distributed processes or multi-agent systems [2].

From KPMG: Consensus mechanism - A method of authenticating and validating a value or transaction on a Blockchain or a distributed ledger without the need to trust or rely on a central authority. Consensus mechanisms are central to the functioning of any blockchain or distributed ledger [3].
References
[1] "Different Blockchain Consensus Mechanisms", Hacker Noon [online]. Available: https://hackernoon.com/differentblockchainconsensusmechanismsd19ea6c3bcd6. Date accessed: 2019‑06‑07.
[2] Investopedia: "Consensus Mechanism (Cryptocurrency)" [online]. Available: https://www.investopedia.com/terms/c/consensusmechanismcryptocurrency.asp. Date accessed: 2019‑06‑07.
[3] KPMG: "Consensus - Immutable Agreement for the Internet of Value" [online]. Available: https://assets.kpmg/content/dam/kpmg/pdf/2016/06/kpmgblockchainconsensusmechanism.pdf. Date accessed: 2019‑06‑07.
Understanding Byzantine Fault-tolerant Consensus
Background
When considering the concept of consensus in cryptocurrency and cryptographic protocols, the Byzantine Generals Problem is often referenced, where a protocol is described as being Byzantine Fault Tolerant (BFT). This stems from an analogy, as a means to understand the problem of distributed consensus.
If a node in a system exhibits Byzantine failure, it is called a traitor node. The traitor (a flaky or malicious node) sends conflicting messages, leading to an incorrect result of the calculation that the distributed system is trying to perform.
Randomized Gossip Methods
Dr. Dahlia Malkhi
Ph.D. Computer Science
Summary
"Randomized Gossip Methods" by Dahlia Malkhi, PWL Conference, September 2016.
As an introduction, gossip-based protocols are simple, robust, efficient and fault-tolerant. This talk provides insight into gossip protocols and how they function, as well as the reasoning behind the instances in which they do not function. It touches on three protocols from randomized gossip methods: Rumor Mongering, which spreads gossip in each communication; Name Dropper, which pushes new neighbors in each communication; and Scalable Weakly-consistent Infection-style Process Group Membership (SWIM), which pulls a heartbeat in each communication.
Video
Note: Transcripts are available here.
Slides
Byzantine Fault Tolerance, Blockchain and Beyond
Dr. Ittai Abraham
Ph.D. Computer Science
Summary
"BFT, Blockchain and Beyond" by Ittai Abraham, Israel Institute for Advanced Studies, 2018.
This talk provides an overview of blockchain and decentralized trust, with the focus on Byzantine Fault Tolerance (BFT). Traditional BFT protocols are assessed and compared with the modern Nakamoto Consensus.
The presentation looks at a hybrid solution combining traditional and modern consensus mechanisms. The talk delves into the types of consensus: asynchrony, synchrony and partial synchrony. It provides a brief history of all three, as well as their recent implementations and responsiveness.
In addition, the fundamental tradeoff of decentralized trust is assessed, and performance, decentralization and privacy are compared.
Video
Part 1
Part 2
BFT Consensus Mechanisms
Introduction to Applications of Byzantine Consensus Mechanisms
- Introduction
- Brief Survey of Byzantine Fault-tolerant Consensus Mechanisms
- Permissioned Byzantine Fault-tolerant Protocols
- Permissionless Byzantine Fault-tolerant Protocols
- References
- Appendices
- Contributors
Introduction
When considering how Tari will potentially build its second layer, an analysis of the most promising Byzantine consensus mechanisms and their applications was undertaken.
Important to consider is the "scalability trilemma". This phrase, referred to in [1], takes into account the potential trade-offs regarding decentralization, security and scalability:

- Decentralization. A core principle on which the majority of the systems are built, taking into account censorship-resistance and ensuring that everyone, without prejudice, is permitted to participate in the decentralized system.

- Scalability. Encompasses the ability of the network to process transactions. Thus, if a public blockchain is deemed to be efficient, effective and usable, it should be designed to handle millions of users on the network.

- Security. Refers to the immutability of the ledger and takes into account threats of 51% attacks, Sybil attacks, Distributed Denial of Service (DDoS) attacks, etc.
Through the recent development of this ecosystem, most blockchains have focused on two of the three factors, namely decentralization and security, at the expense of scalability. The primary reason for this is that nodes must reach consensus before transactions can be processed [1].
This report examines proposals for Byzantine Fault-tolerant (BFT) consensus mechanisms, and assesses their feasibility and efficiency in meeting the characteristics of scalability, decentralization and security. In each instance, the following are assessed:
- protocol assumptions;
- reference implementations; and
- discernment regarding whether the protocol may be used for Tari as a means to maintain the distributed asset state.
Appendix A contains terminology related to consensus mechanisms, including definitions of Consensus; Binary Consensus; Byzantine Fault Tolerance; Practical Byzantine Fault-tolerant Variants; Deterministic and Non-deterministic Protocols; and the Scalability-Performance Trade-off.
Appendix B discusses timing assumptions, including degrees of synchrony, which range from Synchrony and Partial Synchrony to Weak Synchrony, Random Synchrony and Asynchrony; as well as the problem with timing assumptions, including Denial of Service (DoS) attacks, FLP Impossibility and Randomized Agreement.
Brief Survey of Byzantine Fault-tolerant Consensus Mechanisms
Many peer-to-peer, online, real-time strategy games use a modified lockstep protocol as a consensus protocol in order to manage the game state between players in a game. Each game action results in a game state delta broadcast to all other players in the game, along with a hash of the total game state. Each player validates the change by applying the delta to their own game state and comparing the game state hashes. If the hashes do not agree, then a vote is cast, and those players whose game states are in the minority are disconnected and removed from the game (known as a desync) [2].
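As a rough illustration, the desync vote above can be sketched like this (a hypothetical toy, not code from any real game engine; `apply_delta` and the hash-vote rule are illustrative assumptions):

```python
# Toy lockstep desync check: apply the broadcast delta, hash the
# resulting state, and flag players whose hash is in the minority.
import hashlib

def apply_delta(state: dict, delta: dict) -> dict:
    # Apply an incremental game-state delta (here: add to counters).
    new_state = dict(state)
    for key, change in delta.items():
        new_state[key] = new_state.get(key, 0) + change
    return new_state

def state_hash(state: dict) -> str:
    # Hash a canonical encoding of the game state.
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def lockstep_round(states: list, delta: dict) -> list:
    """Every player applies the broadcast delta; players whose state
    hash ends up in the minority are flagged for a desync disconnect."""
    hashes = [state_hash(apply_delta(s, delta)) for s in states]
    majority = max(set(hashes), key=hashes.count)
    return [i for i, h in enumerate(hashes) if h != majority]

# Player 2 has a corrupted local state, so its post-delta hash
# disagrees with the majority and it is voted out.
states = [{"score": 10}, {"score": 10}, {"score": 99}]
print(lockstep_round(states, {"score": 1}))  # -> [2]
```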
Permissioned Byzantine Fault-tolerant Protocols
Background
Byzantine agreement schemes are considered well suited for permissioned blockchains, where the identity of the participants is known. Examples include Hyperledger and Tendermint. Here, the Federated Consensus Algorithm is implemented [3].
Protocols
Hyperledger Fabric
Hyperledger Fabric (HLF) began as a project under the Linux Foundation in early 2016 [4]. The aim was to create an open-source, cross-industry, standard platform for distributed ledgers. HLF is an implementation of a distributed ledger platform for running smart contracts, leveraging familiar and proven technologies. It has a modular architecture, allowing pluggable implementations of various functions. The distributed ledger protocol of the fabric is run on the peers. The fabric distinguishes peers as:
- validating peers (they run the consensus algorithm, thus validating the transactions); and
- non-validating peers (they act as a proxy that helps in connecting clients to validating peers).
The validating peers run a BFT consensus protocol for executing a replicated state machine that accepts deploy, invoke and query transactions as operations [5].
The blockchain's hash chain is computed based on the executed transactions and resulting persistent state. The replicated execution of chaincode (the transaction that involves accepting the code of the smart contract to be deployed) is used for validating the transactions. It is assumed that among n validating peers, at most f < n/3 (where f is the number of faulty nodes and n is the number of nodes present in the network) may behave arbitrarily, while the others will execute correctly, thus adhering to the requirements of BFT consensus. Since HLF proposes to follow a Practical Byzantine Fault-tolerant (PBFT) consensus protocol, the chaincode transactions must be deterministic in nature, otherwise different peers might arrive at different persistent states. The SIEVE protocol is used to filter out non-deterministic transactions, thus assuring a unique persistent state among peers [5].
While being redesigned for a v1.0 release, the platform's goal was to achieve extensibility. This version allowed modules such as membership and the consensus mechanism to be exchanged. Being permissioned, the consensus mechanism is mainly responsible for receiving transaction requests from clients and establishing a total execution order. So far, these pluggable consensus modules include a centralized, single orderer for testing purposes, and a crash-tolerant ordering service based on Apache Kafka [3].
Tendermint
Overview
Tendermint Core is a BFT Proof-of-Stake (PoS) protocol, which is composed of two protocols in one: a consensus algorithm and a peer-to-peer networking protocol. Inspired by the design goal behind Raft, the authors of [6] specified Tendermint as an easy-to-understand, developer-friendly algorithm capable of supporting algorithmically complex systems engineering.
Tendermint is modeled as a deterministic protocol, live under partial synchrony, which achieves throughput within the bounds of the latency of the network and individual processes themselves.
Tendermint rotates through the validator set, in a weighted roundrobin fashion. The higher the stake (i.e. voting power) that a validator possesses, the greater its weighting; and the more times, proportionally, it will be elected as a leader. Thus, if one validator has the same amount of voting power as another validator, they will both be elected by the protocol an equal amount of times [6].
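A minimal sketch of such stake-weighted rotation, in the spirit of Tendermint's accumulator-style proposer selection (the function names and the exact priority rule here are illustrative assumptions, not Tendermint's implementation):

```python
# Stake-weighted round-robin: each round, every validator's priority
# grows by its stake; the highest-priority validator leads and its
# priority is reduced by the total stake. Over time each validator
# leads in proportion to its stake.
def elect_leaders(stakes: dict, rounds: int) -> list:
    priority = {v: 0 for v in stakes}
    total = sum(stakes.values())
    leaders = []
    for _ in range(rounds):
        for v, s in stakes.items():
            priority[v] += s
        # Break ties deterministically by name.
        leader = max(priority, key=lambda v: (priority[v], v))
        priority[leader] -= total
        leaders.append(leader)
    return leaders

# A validator with twice the stake is elected twice as often.
stakes = {"alice": 2, "bob": 1}
leaders = elect_leaders(stakes, 300)
print(leaders.count("alice"), leaders.count("bob"))  # -> 200 100
```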
Critics have argued that Tendermint is not decentralized, and that one can distinguish and target its leadership, launching a DDoS attack against the leaders and stifling the progression of the chain. Although Sentry Architecture (containing sentry nodes, refer to Sentry Nodes) has been implemented in Tendermint, the degree of decentralization achieved remains debatable.
Sentry Nodes
Sentry nodes are guardians of a validator node and provide validator nodes with access to the rest of the network. They are well connected to other full nodes on the network. Sentry nodes may be dynamic, but should maintain persistent connections to some evolving random subset of each other. They should always expect to have direct incoming connections from the validator node and its backup(s). They do not report the validator node's address in the Peer Exchange Reactor (PEX) and may be more strict about the quality of peers they keep.
Sentry nodes belonging to validators that trust each other may wish to maintain persistent connections via Virtual Private Network (VPN) with one another, but only report each other sparingly in the PEX [7].
Permissionless Byzantine Fault-tolerant Protocols
Background
BFT protocols face several limitations when utilized in permissionless blockchains. They do not scale well with the number of participants, resulting in performance deterioration for the targeted network sizes. In addition, they are not well established in this setting, and are thus prone to security issues such as Sybil attacks. Currently, there are approaches that attempt to circumvent or solve this problem [3].
Protocols and Algorithms
Paxos
The Paxos family of protocols includes a spectrum of trade-offs between the number of processors, number of message delays before learning the agreed value, activity level of individual participants, number of messages sent and types of failures. Although the FLP theorem (named after the authors Michael J. Fischer, Nancy Lynch and Mike Paterson) states that there is no deterministic fault-tolerant consensus protocol that can guarantee progress in an asynchronous network, Paxos guarantees safety (consistency), and the conditions that could prevent it from making progress are difficult to provoke [8].
Paxos achieves consensus as long as there are at most f failures, where f < (n-1)/2. These failures cannot be Byzantine, otherwise the BFT proof would be violated. Thus it is assumed that messages are never corrupted, and that nodes do not collude to subvert the system.
Paxos proceeds through a set of negotiation rounds, with one node having "Leadership" status. Progress will stall if the leader becomes unreliable, until a new leader is elected, or if an old leader suddenly comes back online and a dispute between two leader nodes arises.
Chandra-Toueg
The Chandra–Toueg consensus algorithm [9], published in 1996, relies on a special node that acts as a failure detector. In essence, it pings other nodes to make sure they are still responsive. This implies that the detector stays online and that the detector must continuously be made aware when new nodes join the network.
The algorithm itself is similar to the Paxos algorithm, which also relies on failure detectors, and likewise requires f < n/2, where n is the total number of processes.
Raft
Raft is a consensus algorithm designed as an alternative to Paxos. It was meant to be more understandable than Paxos by means of separation of logic, but it is also formally proven safe and offers some additional features [10].
Raft achieves consensus via an elected leader. Each follower has a timeout in which it expects the heartbeat from the leader. It is thus a synchronous protocol. If the leader fails, an election is held to find a new leader. This entails nodes nominating themselves on a firstcome, firstserved basis. Hung votes require the election to be scrapped and restarted. This suggests that a high degree of cooperation is required by nodes, and that malicious nodes could easily collude to disrupt a leader and then prevent a new leader from being elected. Raft is a simple algorithm, but is clearly unsuitable for consensus in cryptocurrency applications.
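The heartbeat-timeout election described above can be sketched as follows (a heavily simplified toy under stated assumptions; real Raft adds terms, log comparison, vote persistence and retries on split votes):

```python
# Toy Raft-style election: each follower waits a randomized interval
# for the leader's heartbeat; the first to time out nominates itself,
# and the others grant their vote first-come, first-served.
import random

def elect_leader(nodes: list, seed: int = 42) -> str:
    rng = random.Random(seed)
    # Randomized election timeouts, e.g. in the 150-300 ms band.
    timeouts = {n: rng.uniform(150, 300) for n in nodes}
    candidate = min(timeouts, key=timeouts.get)  # first to time out
    votes = 1  # the candidate votes for itself
    for n in nodes:
        if n != candidate:
            votes += 1  # followers with no leader grant the request
    # A strict majority is required to become leader.
    return candidate if votes > len(nodes) // 2 else None

nodes = ["n1", "n2", "n3", "n4", "n5"]
print(elect_leader(nodes))
```

Because every follower grants its vote here, the first node to time out always wins; the interesting failure modes (split votes, colluding nodes blocking an election) arise exactly where this sketch cuts corners.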
While Paxos, Raft and many other wellknown protocols tolerate crash faults, BFT protocols, beginning with PBFT, tolerate even arbitrary corrupted nodes. Many subsequent protocols offer improved performance, often through optimistic execution that provides excellent performance when there are no faults; when clients do not contend much; and when the network is well behaved, and there is at least some progress.
In general, BFT systems are evaluated in deployment scenarios where latency and the central processing unit (CPU) are the bottleneck. Thus the most effective protocols reduce the number of rounds and minimize expensive cryptographic operations.
In a recent line of work, [11] advocated improvement of the worst-case performance, providing service quality guarantees even when the system is under attack, even if this comes at the expense of performance in the optimistic case. However, although the "Robust BFT protocols in this vein gracefully tolerate compromised nodes, they still rely on timing assumptions about the underlying network" [28]. Thus, focus shifted to asynchronous networks.
Hashgraph
Background
The Hashgraph consensus algorithm [12] was released in 2016. It claims Byzantine Fault Tolerance (BFT) under complete asynchrony assumptions, no leaders, no round robin, no Proof-of-Work (PoW), eventual consensus with probability one, and high speed in the absence of faults.
Hashgraph is based on the gossip protocol, which is a fairly efficient distribution strategy that entails nodes randomly sharing information with each other, similar to how human beings gossip.
Nodes jointly build a Hashgraph reflecting all of the gossip events. This allows Byzantine agreement to be achieved through virtual voting. Alice does not send Bob a vote over the Internet. Instead, Bob calculates what vote Alice would have sent, based on his knowledge of what Alice knows.
Hashgraph uses digital signatures to prevent undetectable changes to transmitted messages. It does not violate the FLP theorem, since it is non-deterministic.
The Hashgraph has some similarities to a blockchain. To quote the white paper: "The Hashgraph consensus algorithm is equivalent to a blockchain in which the 'chain' is constantly branching, without any pruning, where no blocks are ever stale, and where each miner is allowed to mine many new blocks per second, without proof-of-work" [12].
Because each node keeps track of the Hashgraph, there is no need to have voting rounds in Hashgraph; each node already knows what all of its peers will vote for, and thus consensus is reached purely by analyzing the graph.
Gossip Protocol
The gossip protocol works like this:
- Alice selects a random peer node, say Bob, and sends him everything she knows. She then selects another random node and repeats the process indefinitely.
- Bob, on receiving Alice's information, marks this as a gossip event and fills in any gaps in his knowledge from Alice's information. Once done, he continues gossiping, using his updated information.
The basic idea behind the gossip protocol is the following: A node wants to share some information with the other nodes in the network. Then, periodically, it randomly selects a node from the set of nodes and exchanges the information. The node that receives the information then randomly selects a node from the set of nodes and exchanges the information, and so on. The information is periodically sent to N targets, where N is the fanout [13].
The cycle is the number of rounds to spread the information. The fanout is the number of nodes a node gossips with in each cycle.
With a fanout of 1, O(log N) cycles are necessary for the update to reach all the nodes. In this way, information spreads throughout the network in an exponential fashion [12].
The gossip history can be represented as a directed graph, as shown in Figure 1.
Figure 1: Gossip Protocol Directed Graph
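The exponential spread can be checked with a small push-gossip simulation (illustrative; the function name and parameters are assumptions, not from the Hashgraph paper):

```python
# Push gossip: every informed node forwards the rumor to `fanout`
# random peers per cycle. Even with fanout = 1 the rumor reaches all
# N nodes in roughly O(log N) cycles, because the informed set grows
# exponentially at first.
import random

def gossip_cycles(n: int, fanout: int = 1, seed: int = 1) -> int:
    rng = random.Random(seed)
    informed = {0}  # node 0 starts with the rumor
    cycles = 0
    while len(informed) < n:
        pushes = [rng.randrange(n) for _ in informed for _ in range(fanout)]
        informed.update(pushes)
        cycles += 1
    return cycles

# Cycle counts grow roughly logarithmically with the network size.
for n in (100, 1000, 10000):
    print(n, gossip_cycles(n))
```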
Hashgraph introduces a few important concepts that are used repeatedly in later BFT consensus algorithms: famous witnesses and strongly seeing.
Ancestors
If an event (x1) comes before another event (x2) and they are connected by a line, the older event is an ancestor of the newer one. If both events were created by the same node, then x1 is a self-ancestor of x2.
Note: The gossip protocol defines an event as being a (self-)ancestor of itself!
Seeing
If an event x1 is an ancestor of x2, then we say that x2 sees x1, as long as the node is not aware of any forks from x1. So in the absence of forks, all events will see all of their ancestors.
```text
  +--> y
  |
x-+
  |
  +--> z
```
In the preceding example, x is an ancestor of both y and z. However, because there is no ancestor relationship between y and z, the seeing condition fails, and so y cannot see x, and z cannot see x.
It may be the case that it takes time before nodes in the protocol detect the fork. For instance, Bob may create z and y, but share z with Alice and y with Charlie. Both Alice and Charlie will eventually learn about the deception, but until that point, Alice will believe that y sees x, and Charlie will believe that z sees x. This is where the concept of strongly seeing comes in.
Strongly Seeing
If a node examines its Hashgraph and notices that an event z sees an event x, and not only that, but it can draw an ancestor relationship (usually via multiple routes) through a supermajority of peer nodes, and that a different event from each node also sees x, then it is said that according to this node, z strongly sees x. The example in Figure 2 comes from [12]:
Figure 2: Illustration of Strongly Seeing
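A toy version of the strongly-seeing test on a hand-built event DAG might look like this (illustrative only, under the simplifying assumption that fork detection is ignored; the data structures are not the protocol's own):

```python
# Events are name -> (creator, [parent names]). z strongly sees x if
# ancestor paths from z down to x pass through events made by more
# than 2/3 of the n nodes.
def ancestor_set(e, events):
    """All ancestors of e, including e itself (as the protocol defines)."""
    seen, stack = set(), [e]
    while stack:
        cur = stack.pop()
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(events[cur][1])
    return seen

def strongly_sees(z, x, events, n):
    anc_z = ancestor_set(z, events)
    # Creators of intermediate events that lie between z and x.
    creators = {events[e][0] for e in anc_z if x in ancestor_set(e, events)}
    return len(creators) * 3 > 2 * n

# Four nodes A-D. x is created by A; B, C and D each build an event on
# top of x; z (by A) has all of them as ancestors.
events = {
    "x":  ("A", []),
    "b1": ("B", ["x"]),
    "c1": ("C", ["x"]),
    "d1": ("D", ["x"]),
    "z":  ("A", ["b1", "c1", "d1"]),
}
print(strongly_sees("z", "x", events, n=4))   # -> True
print(strongly_sees("b1", "x", events, n=4))  # -> False
```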
Construct of Gossiping
The main consensus algorithm loop consists of every node (Alice), selecting a random peer node (Bob) and sharing their graph history. Now Alice and Bob have the same graph history.
Alice and Bob both create a new event with the new knowledge they have just learnt from their peer. Alice repeats this process continuously.
Internal Consensus
After a sync, a node will determine the order for as many events as possible, using three procedures. The algorithm uses the constant n (the number of nodes) and a small constant value c > 2.
- Firstly, we have the Swirlds Hashgraph consensus algorithm. Each member runs this in parallel. Each sync brings in new events, which are then added to the Hashgraph. All known events are then divided into rounds. Then the first events in each round are decided on as being famous or not (through purely local Byzantine agreement with virtual voting). Then the total order is found on those events for which enough information is available. If two members independently assign a position in history to an event, they are guaranteed to assign the same position, and guaranteed to never change it, even as more information comes in. Furthermore, each event is eventually assigned such a position, with a probability of one [12].
```text
in parallel:
    loop
        sync all known events to a random member
    end loop
    loop
        receive a sync
        create a new event
        call divideRounds
        call decideFame
        call findOrder
    end loop
```
- Secondly, we have the divideRounds procedure. As soon as an event x is known, it is assigned a round number x.round, and the Boolean value x.witness is calculated, indicating whether it is the first event that a member created in that round [12].
```text
for each event x
    r ← max round of parents of x (or 1 if none exist)
    if x can strongly see more than 2/3 * n round r witnesses
        x.round ← r + 1
    else
        x.round ← r
    x.witness ← (x has no self parent) || (x.round > x.selfParent.round)
```
- Thirdly, we have the decideFame procedure. For each witness event (i.e. an event x where x.witness is true), decide whether it is famous (i.e. assign a Boolean to x.famous). This decision is done by a Byzantine agreement protocol based on virtual voting. Each member runs it locally, on their own copy of the Hashgraph, with no additional communication. The protocol treats the events in the Hashgraph as if they were sending votes to each other, although the calculation is purely local to a member's computer. The member assigns votes to the witnesses of each round, for several rounds, until more than two-thirds of the population agrees [12].
```text
for each event x in order from earlier rounds to later
    x.famous ← UNDECIDED
    for each event y in order from earlier rounds to later
        if x.witness and y.witness and y.round > x.round
            d ← y.round - x.round
            s ← the set of witness events in round y.round - 1 that y can strongly see
            v ← majority vote in s (is TRUE for a tie)
            t ← number of events in s with a vote of v
            if d = 1                  // first round of the election
                y.vote ← can y see x?
            else if d mod c > 0       // this is a normal round
                if t > 2 * n / 3      // if supermajority, then decide
                    x.famous ← v
                    y.vote ← v
                    break             // y loop
                else                  // else, just vote
                    y.vote ← v
            else if t > 2 * n / 3     // this is a coin round
                y.vote ← v
            else                      // else flip a coin
                y.vote ← middle bit of y.signature
```
Criticisms
An attempt has been made to address some of the following criticisms [14]:
- The Hashgraph protocol is patented and is not open source.
- The Hashgraph white paper assumes that n, the number of nodes in the network, is constant. In practice, n can increase, but performance likely degrades badly as n becomes large [15].
- Hashgraph is not as "fair" as claimed in its paper, with at least one attack having been proposed [16].
SINTRA
SINTRA is a Secure INtrusion-Tolerant Replication Architecture used for coordination in asynchronous networks subject to Byzantine faults. It consists of a collection of protocols and is implemented in Java, providing secure replication and coordination among a group of servers connected by a wide-area network, such as the Internet. For a group consisting of n servers, it tolerates up to t < n/3 servers failing in arbitrary, malicious ways, which is optimal for the given model. The servers are connected only by asynchronous point-to-point communication links. Thus, SINTRA automatically tolerates timing failures as well as attacks that exploit timing. The SINTRA group model is static, which means that failed servers must be recovered by mechanisms outside of SINTRA, and the group must be initialized by a trusted process.
The protocols exploit randomization, which is needed to solve Byzantine agreement in such asynchronous distributed systems. Randomization is provided by a threshold-cryptographic pseudorandom generator, a coin-tossing protocol based on the Diffie-Hellman problem.

Threshold cryptography is a fundamental concept in SINTRA, as it allows the group to perform a common cryptographic operation for which the secret key is shared among the servers such that no single server or small coalition of corrupted servers can obtain useful information about the key. SINTRA provides threshold-cryptographic schemes for digital signatures, public-key encryption and unpredictable pseudorandom number generation (coin-tossing).

SINTRA contains broadcast primitives for reliable and consistent broadcasts, which provide agreement on individual messages sent by distinguished senders. However, these primitives cannot guarantee a total order for a stream of multiple messages delivered by the system, which is needed to build fault-tolerant services using the state machine replication paradigm. This is the problem of atomic broadcast, and it requires more expensive protocols based on Byzantine agreement. SINTRA provides multiple randomized Byzantine agreement protocols, for binary and multi-valued agreement, and implements an atomic broadcast channel on top of agreement. An atomic broadcast that also maintains a causal order in the presence of Byzantine faults is provided by the secure causal atomic broadcast channel [17].
Figure 3 illustrates SINTRA's modular design. Modularity greatly simplifies the construction and analysis of the complex protocols needed to tolerate Byzantine faults.
Figure 3: Design of SINTRA
HoneyBadgerBFT
HoneyBadgerBFT was released in November 2016 and is seen as the first practical asynchronous BFT consensus algorithm. It was designed with cryptocurrencies in mind, where bandwidth is considered scarce, but an abundance of CPU power is available. Thus, the protocol implements public-private key encryption to increase the efficiency of the establishment of consensus. The protocol works with a fixed set of servers to run the consensus. However, this leads to centralization and allows an attacker to specifically target these servers [3].
In its threshold encryption scheme, any one party can encrypt a message using a master public key, and it requires f+1 correct nodes to compute and reveal decryption shares for a ciphertext before the plaintext can be recovered.
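The f+1-of-n recovery property rests on threshold secret sharing. The following is a minimal Shamir-style sketch over a toy prime field (illustrative only; HoneyBadgerBFT's actual scheme shares a decryption key for a threshold cryptosystem rather than the plaintext itself):

```python
# Shamir secret sharing: a degree-f polynomial hides the secret in its
# constant term; any f+1 shares recover it by Lagrange interpolation,
# while f or fewer shares reveal nothing.
import random

P = 2**61 - 1  # a Mersenne prime, large enough for a toy field

def share(secret: int, f: int, n: int, seed: int = 7):
    """Split `secret` into n shares; any f+1 of them recover it."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(f)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, f=2, n=7)
print(recover(shares[:3]))  # any f+1 = 3 shares suffice -> 123456789
```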
The work of HoneyBadgerBFT is closely related to SINTRA, which, as mentioned earlier, is a system implementation based on the asynchronous atomic broadcast protocol from [18]. This protocol consists of a reduction from Atomic Broadcast Channel (ABC) to Asynchronous Common Subset (ACS), as well as a reduction from ACS to Multi-value Validated Byzantine Agreement (MVBA).
HoneyBadger offers a novel reduction from ABC to ACS that provides better efficiency (by an O(N) factor) through batching, while using threshold encryption to preserve censorship resilience. Better efficiency is also obtained by cherry-picking improved instantiations of subcomponents. For example, the expensive MVBA is circumvented by using an alternative ACS along with an efficient reliable broadcast (RBC) [28].
Stellar Consensus Protocol
Stellar Consensus Protocol (SCP) is an asynchronous protocol. It is considered to be a global consensus protocol, consisting of a nomination protocol and a ballot protocol. SCP is said to be BFT, bringing with it the concept of quorum slices and federated BFT [5].
Each participant forms a quorum of other users, thus creating a trust hierarchy, which requires complex trust decisions [3]. Initially, the nomination protocol is run. During this, new values called candidate values are proposed for agreement. Each node receiving these values votes for a single value among them. Eventually, this results in unanimously selected values for that slot.
After successful execution of the nomination protocol, the nodes deploy the ballot protocol. This involves federated voting to either commit or abort the values resulting from the nomination protocol, which results in externalizing the ballot for the current slot. The aborted ballots are now declared irrelevant. However, there can be stuck states, where nodes cannot reach a conclusion on whether to abort or commit a value. This situation is avoided by moving the value to a higher-valued ballot and considering it in a new ballot protocol execution. This also helps in the case where a node believes that the stuck ballot was committed. Thus SCP assures avoidance and management of stuck states, and provides liveness.
The concept of quorum slices in SCP provides asymptotic security and flexible trust, making it more acceptable than earlier consensus algorithms utilizing Federated BFT, such as the Ripple protocol consensus algorithm [29]. Here, the user is given more independence in deciding whom to trust [30].
The SCP protocol claims to be free of blocked states and provides decentralized control, asymptotic security, flexible trust and low latency. However, it does not guarantee safety at all times. If a user node chooses an inefficient quorum slice, security is not guaranteed. In the event of a partition or misbehaving nodes, SCP halts progress of the network until consensus can be reached.
LinBFT
LinBFT is a BFT protocol for blockchain systems. It allows for an amortized communication volume per block of O(n) under reasonable conditions (where n is the number of participants), while satisfying deterministic guarantees on safety and liveness. It satisfies liveness in a partially synchronous network.
LinBFT cuts down its O(n^4) complexity by implementing changes that are each O(n): linear view change, threshold signatures and verifiable random functions. This is clearly optimal, in that disseminating a block already takes O(n) transmissions.
LinBFT is designed to be implemented for permissionless, public blockchain systems and takes into account anonymous participants without a public-key infrastructure, PoS, rotating leaders and a dynamic participant set [31]. For instance, participants can be anonymous and, without a centralized public key infrastructure (PKI), establish public keys among themselves and participate in a distributed key generation (DKG) protocol required to create threshold signatures, both of which are communication-heavy processes.
LinBFT is compatible with PoS, which counters Sybil attacks and deters dishonest behavior through slashing [31].
Algorand
The Algorand white paper was released in May 2017. Algorand is a synchronous BFT consensus mechanism, where blocks are added at a minimum rate [19]. It allows participants to privately check whether they are chosen for consensus participation, and requires only one message per user, thus limiting possible attacks [3].
Algorand scales up to 500,000 users by employing verifiable random functions, which are pseudorandom functions able to provide verifiable proofs that the output of said function is correct [3]. It introduces the concept of a concrete coin. Most BFT algorithms require some type of randomness oracle, but all nodes need to see the same value if the oracle is consulted. This had previously been achieved through a common coin idea. The concrete coin uses a much simpler approach, but only returns a binary value [19].
Thunderella
Thunderella implements an asynchronous strategy, where a synchronous strategy is used as a fallback in the event of a malfunction [20], thus achieving both robustness and speed. It can be applied in permissionless networks using PoW. Network robustness and "instant confirmations" require 75% of the network to be honest, as well as the presence of a leader node.
Snowflake to Avalanche
This consensus protocol was first seen in [39]. The paper outlines four protocols that are building blocks forming a protocol family. These leaderless BFT protocols, published by Team Rocket, are built on a metastable mechanism. They are called Slush, Snowflake, Snowball and Avalanche.
The protocols differ from the traditional consensus protocols and the Nakamoto consensus protocols by not requiring an elected leader. Instead, the protocol simply guides all the nodes to consensus.
These four protocols are described as a new family of protocols due to this concept of metastability: a means to establish consensus by guiding all nodes towards an emerging consensus without requiring leaders, while still maintaining the same level of security and inducing a speed that exceeds current protocols.
This is achieved through the formation of "subquorums", which are small, randomized samples from nodes on the network. This allows for greater throughputs and sees parallel consensuses running before they merge to form the overarching consensus, which can be seen as similar in nature to the gossip protocol.
With regard to safety, throughput (the number of transactions per second) and scalability (the number of people supported by the network), Slush, Snowflake, Snowball and Avalanche seem to be able to achieve all three. They impart a probabilistic safety guarantee in the presence of Byzantine adversaries and achieve a high throughput and scalability due to their concurrent nature. A synchronous network is assumed.
The current problem facing the design of BFT protocols is that a system can be very fast when a small number of nodes are active, as there are fewer decisions to make. However, when there are many users and an increase in transactions, the system cannot be maintained.
Unlike the PoW implementation, which requires constant active participation from the miners, Avalanche can function even when nodes are dormant.
While traditional consensus protocols require O(n^{2}) communication, the communication complexity of these protocols ranges from O(kn log n) to O(kn) for some security parameter k << n. In other words, Team Rocket highlight that the communication complexity of their protocols is less intensive than O(n^{2}) communication, thus making these protocols faster and more scalable.
To backtrack a bit, Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. It describes the worst-case scenario and can be used to describe the execution time required by an algorithm [21]. In the case of consensus algorithms, O describes a finite expected number of steps or operations [22]. For example, O(1) describes an algorithm that will always execute in the same time, regardless of the size of the input data set. O(n) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set. O(n^{2}) represents an algorithm whose performance is directly proportional to the square of the size of the input data set.
The reason for this is that O(n^{2}) implies that the growth rate of the function is determined by n^{2}, where n is the number of people on the network. Thus, adding a person increases the time taken to disseminate information on the network quadratically: traditional consensus protocols require everyone to communicate with everyone else, making this a laborious process [32].
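To see why this matters at scale, a quick numerical sketch comparing all-to-all message counts with the O(kn log n) upper end of the stated range (the security parameter k = 10 is an arbitrary value chosen for illustration):

```python
import math

def quadratic_msgs(n):
    # Traditional BFT: every node communicates with every other node.
    return n * n

def subsampled_msgs(n, k):
    # Upper end of the stated O(kn log n) range, for a parameter k << n.
    return k * n * math.log2(n)

for n in (100, 1_000, 10_000):
    print(n, quadratic_msgs(n), round(subsampled_msgs(n, k=10)))
```

At n = 10,000 nodes, the quadratic protocol needs 100 million messages while the subsampled estimate stays near 1.3 million, which is the gap the text describes.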
Despite assuming a synchronous network, which is susceptible to DoS attacks, this new family of protocols "reaches a level of security that is simply good enough while surging forward with other advancements" [32].
PARSEC
PARSEC is a BFT consensus algorithm possessing weak synchrony assumptions (highly asynchronous, assuming random delays with finite expected value). Similar to Hashgraph, it has no leaders, no round robin and no PoW, and it reaches eventual consensus with probability one. It differs from Hashgraph in that it provides high speed both in the absence and in the presence of faults. Thus, it avoids the structure of delegated PoS (DPoS), which requires a trusted set of leaders, and does not use a round robin (where a permissioned set of miners sign each block).
PARSEC is fully open, unlike Hashgraph, which is patented and closed source. The reference implementation of PARSEC, written in Rust, was released a few weeks after the white paper ([33], [23]).
The general problem of reaching Byzantine agreement on any value is reduced to the simpler problem of reaching binary Byzantine agreement on the nodes participating in each decision. This has allowed PARSEC to reuse the binary Byzantine agreement protocol (Signature-free Asynchronous Byzantine Consensus) after adapting it to the gossip protocol [34].
Similar to HoneyBadgerBFT, this protocol is composed through the addition of interesting ideas presented in the literature. Similar to Hashgraph and Avalanche, a gossip protocol is used to allow efficient communication between nodes [33].
Finally, the need for a trusted leader or a trusted setup phase implied in [27] is removed by porting the key ideas to an asynchronous setting [35].
The network consists of N instances of the algorithm communicating via randomly synchronous connections. Due to random synchrony, all users can reach agreement as to what is going on. There is no guarantee for nodes on the timing of when they should receive messages, and up to t Byzantine (arbitrary) failures are allowed, where 3t < N. The instances where no failures have occurred are deemed correct or honest, while the failed instances are termed faulty or malicious. Since a Byzantine failure model allows for malicious behavior, any set containing more than 2N/3 of the instances is referred to as a supermajority.
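The stated fault model can be sketched as a small helper, computing the largest tolerable t and the smallest supermajority size for a given N (a minimal illustration, not PARSEC code):

```python
def byzantine_thresholds(n):
    """Largest t satisfying 3t < n, and the supermajority size
    (the smallest integer strictly greater than 2n/3)."""
    t = (n - 1) // 3                  # maximum tolerable Byzantine instances
    supermajority = (2 * n) // 3 + 1  # smallest integer strictly above 2n/3
    return t, supermajority

# For a 10-node network: at most 3 Byzantine nodes, supermajority of 7.
print(byzantine_thresholds(10))
```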
When a node receives a gossip request, it creates a new event and sends a response back (in Hashgraph, the response was optional). Each gossip event contains [24]:
- The data being transmitted.
- The self-parent (the hash of another gossip event created by the same node).
- The other-parent (a hash of another gossip event created by a different node).
- The Cause for creation, which can be a Request for information, a Response to another node's request, or an Observation. An observation is when a node creates a gossip event to record an observation that the node made itself.
- The creator ID (public key).
- The signature, signing the preceding information.
The self-parent and other-parent prevent tampering, because they are signed and related to other gossip events [24].
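A minimal sketch of such a gossip event as a Python data structure. The hashing and field encoding here are illustrative placeholders (the actual implementation is in Rust and uses real public-key signatures), but the parent-hash chaining shows why earlier events cannot be altered without detection.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class GossipEvent:
    """Fields of a gossip event as listed above (illustrative only)."""
    data: bytes
    self_parent: Optional[str]   # hash of this node's previous event
    other_parent: Optional[str]  # hash of another node's event
    cause: str                   # "Request", "Response" or "Observation"
    creator_id: str              # stands in for the creator's public key

    def digest(self) -> str:
        # Hash all fields; a signature over this digest would follow.
        payload = b"|".join([
            self.data,
            (self.self_parent or "").encode(),
            (self.other_parent or "").encode(),
            self.cause.encode(),
            self.creator_id.encode(),
        ])
        return hashlib.sha256(payload).hexdigest()

first = GossipEvent(b"tx1", None, None, "Observation", "node-A")
second = GossipEvent(b"tx2", first.digest(), None, "Request", "node-A")
```

Because `second` embeds the digest of `first`, rewriting `first` would change its digest and break the chain, which is the tamper-evidence property described above.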
As with Hashgraph, it is difficult for adversaries to interfere with the consensus algorithm, because all voting is virtual and done without sharing details of votes cast. Each node figures out what other nodes would have voted, based on their copy of the gossip graph.
PARSEC also uses the concept of a concrete coin, from Algorand. This is used to break ties, particularly in cases where an adversary is carefully managing communication between nodes in order to maintain a deadlock on votes.
In step 1, nodes try to converge on a "true" result for a set of results. If this is not achieved, they move on to step 2, trying to converge on a "false" result. If consensus still cannot be reached, a coin flip is made and they return to step 1 in another voting round.
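The control flow of this loop can be sketched as follows. This is a toy simulation only: real PARSEC derives votes from the gossip graph via virtual voting and uses a cryptographic concrete coin, not a local pseudorandom flip.

```python
import random

def binary_agreement(votes, max_rounds=50, seed=1):
    """Toy version of the loop described above: try to converge on True,
    then on False, then flip a coin and start a new round."""
    rng = random.Random(seed)
    for _ in range(max_rounds):
        if all(votes):                 # step 1: unanimous "true"?
            return True
        if not any(votes):             # step 2: unanimous "false"?
            return False
        flip = rng.random() < 0.5      # deadlock: concrete-coin fallback
        # every undecided node adopts the coin value for the next round
        votes = [flip for _ in votes]
    return None

print(binary_agreement([True, False, True]))
```

The coin flip is what defeats an adversary scheduling messages to keep the vote split: once nodes adopt a common random value, the next round terminates.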
Democratic BFT
This is a deterministic Byzantine consensus algorithm that relies on a new weak coordinator. It is implemented in the Red Belly blockchain and is said to achieve 30,000 transactions per second in trials on Amazon Cloud [25]. By coupling this with an optimized variant of the reduction of multivalued to binary consensus from [26], the Democratic BFT (DBFT) consensus algorithm was generated. It terminates in four message delays in the good case, when all non-faulty processes propose the same value [36].
The term weak coordinator describes the ability of the algorithm to terminate in the presence of a faulty or slow coordinator, unlike previous algorithms, which cannot terminate in this case. The fundamental idea is to allow processes to complete asynchronous rounds as soon as they receive a threshold of messages, instead of having to wait for a message from a coordinator that may be slow. The resulting algorithm assumes partial synchrony; is resilience-optimal and time-optimal; and does not require signatures.
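The threshold idea can be illustrated in a few lines (the values of n and t and the peer names are hypothetical):

```python
def can_advance(received, n, t):
    """DBFT-style round advancement sketch: a process moves on once it
    has messages from n - t distinct peers, rather than waiting for a
    (possibly slow or faulty) coordinator's message."""
    return len(set(received)) >= n - t

# n = 4 processes, tolerating t = 1 Byzantine fault:
assert can_advance({"p1", "p2", "p3"}, n=4, t=1)   # 3 >= 3: advance
assert not can_advance({"p1", "p2"}, n=4, t=1)     # still waiting
```

Since at most t processes may be faulty, n - t messages are always eventually available, so no round can block forever on a single slow process.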
Moving away from the impossibility of solving consensus in asynchronous message-passing systems, where processes can be faulty or Byzantine, the techniques of randomization or additional synchrony are adopted.
Randomized algorithms can use per-process "local" coins, or a shared common coin, to solve consensus probabilistically among n processes despite $t<n/3$ Byzantine processes. When based on local coins, the existing algorithms converge in O(n^{2.5}) expected time.
A recent randomized algorithm that does not use signatures solves binary consensus in O(1) expected time under a fair scheduler.
To solve the consensus problem deterministically and avoid the use of a common coin, researchers have assumed partial or eventual synchrony. These solutions require a unique coordinator process, referred to as the leader, to be non-faulty.
There are advantages and disadvantages to this technique:

The advantage is that if the coordinator is non-faulty, and if the messages are delivered in a timely manner in an asynchronous round, then the coordinator broadcasts its proposal to all processes and this value is decided after a constant number of message delays.

The disadvantage is that a faulty coordinator can dramatically impact the algorithm's performance by leveraging the power it has in a round and imposing its value on all. Non-faulty processes then have no choice but to decide nothing in this round.
This protocol sees the use of a weak coordinator, which allows for the introduction of a new deterministic Byzantine consensus algorithm that is time-optimal and resilience-optimal, and does not require the use of signatures. Unlike the classic, strong coordinator, the weak coordinator does not impose its value. It allows non-faulty processes to decide a value quickly, without the need of the coordinator, while helping the algorithm to terminate if non-faulty processes know that they proposed distinct values that might all be decided. In addition, the presence of a weak coordinator allows rounds to be executed optimistically, without waiting for a specific message. This is unlike classic BFT algorithms, which have to wait for a particular message from their coordinator and occasionally have to recover from a slow network or faulty coordinator.
With regard to the problem of a slow or Byzantine coordinator, the weak coordinator helps agreement by contributing a value while still allowing termination in a constant number of message delays. It is thus unlike the classic coordinator or the eventual leader, which cannot be implemented in the Binary Byzantine Consensus Algorithm, BAMP_{n,t}[t<n/3].
The validation of the protocol was conducted similarly to that of the HoneyBadger blockchain, where "Coin", the randomization algorithm from [27], was used. Using 100 Amazon Virtual Machines located in five data centers on different continents, it was shown that the DBFT algorithm outperforms "Coin", which is known to terminate in O(1) rounds in expectation. In addition, since Byzantine behaviors have been seen to severely affect the performance of strong-coordinator-based consensus, four different Byzantine attacks were tested in the validation.
Summary of Findings
Table 1 highlights the characteristics of the above-mentioned BFT protocols. Asymptotic Security, Permissionless Protocol, Timing Assumptions, Decentralized Control, Low Latency and Flexible Trust form part of the value system.
- Asymptotic Security - the protocol depends only on digital signatures (and hash functions) for security.
- Permissionless Protocol - anybody can create an address and begin interacting with the protocol.
- Timing Assumptions - refer to Forms of Timing Assumptions - Degrees of Synchrony in Appendix B.
- Decentralized Control - consensus is achieved and defended through a leaderless node, protecting the identity of that node until its job is done.
- Low Latency - describes a network optimized to process a very high volume of data messages with minimal delay.
- Flexible Trust - users have the freedom to trust any combination of parties they see fit.
Table 1: Characteristics of BFT Protocols
| Protocol | Permissionless Protocol | Timing Assumptions | Decentralized Control | Low Latency | Flexible Trust | Asymptotic Security |
|---|---|---|---|---|---|---|
| Hyperledger Fabric (HLF) | | Partially synchronous | ✓ | ✓ | | |
| Tendermint | | Partially synchronous | ✓ | ✓ | ✓ | |
| Paxos | ✓ | Partially synchronous | ✓ | ✓ | ✓ | |
| Chandra-Toueg | ✓ | Partially synchronous | ✓ | ✓ | | |
| Raft | ✓ | Weakly synchronous | ✓ | ✓ | ✓ | |
| Hashgraph | ✓ | Asynchronous | ✓ | ✓ | ✓ | |
| SINTRA | ✓ | Asynchronous | ✓ | ✓ | | |
| HoneyBadgerBFT | ✓ | Asynchronous | ✓ | ✓ | ✓ | ✓ |
| Stellar Consensus Protocol | ✓ | Asynchronous | ✓ | ✓ | ✓ | ✓ |
| LinBFT | ✓ | Partially synchronous | ✓ | ✓ | | |
| Algorand | ✓ | Synchronous | ✓ | ✓ | ✓ | |
| Thunderella | ✓ | Synchronous | ✓ | ✓ | ✓ | |
| Avalanche | ✓ | Synchronous | ✓ | ✓ | ✓ | |
| PARSEC | ✓ | Weakly synchronous | ✓ | ✓ | | |
| Democratic BFT | ✓ | Partially synchronous | ✓ | ✓ | ✓ | |
BFT consensus protocols have been considered as a means to disseminate and validate information. The question of whether Schnorr multisignatures can perform the same function in validating information through the action of signing will form part of the next review.
References
[1] B. Asolo, "Breaking down the Blockchain Scalability Trilemma" [online]. Available: https://bitcoinist.com/breakingdownthescalabilitytrilemma/. Date accessed: 2018‑10‑01.
[2] Wikipedia: "Consensus Mechanisms" [online]. Available: https://en.wikipedia.org/wiki/Consensus_(computer_science). Date accessed: 2018‑10‑01.
[3] S. Rusch, "High-performance Consensus Mechanisms for Blockchains" [online]. Available: http://conferences.inf.ed.ac.uk/EuroDW2018/papers/eurodw18Rusch.pdf. Date accessed: 2018‑08‑30.
[4] C. Cachin "Architecture of the Hyperledger Blockchain Fabric" [online]. Available: https://www.zurich.ibm.com/dccl/papers/cachin_dccl.pdf. Date accessed: 2018‑09‑16.
[5] L. S. Sankar, M. Sindhu and M. Sethumadhavan, "Survey of Consensus Protocols on Blockchain Applications" [online]. Available: https://ieeexplore.ieee.org/document/8014672/. Date accessed: 2018‑08‑30.
[6] "Tendermint Explained - Bringing BFT-based PoS to the Public Blockchain Domain" [online]. Available: https://blog.cosmos.network/tendermintexplainedbringingbftbasedpostothepublicblockchaindomainf22e274a0fdb. Date accessed: 2018‑09‑30.
[7] "Tendermint Peer Discovery" [online]. Available: https://github.com/tendermint/tendermint/blob/master/docs/spec/p2p/node.md. Date accessed: 2018‑10‑22.
[8] Wikipedia: "Paxos" [online]. Available: https://en.wikipedia.org/wiki/Paxos_(computer_science). Date accessed: 2018‑10‑01.
[9] Wikipedia: "ChandraToueg Consensus Algorithm" [online]. Available: https://en.wikipedia.org/wiki/Chandra%E2%80%93Toueg_consensus_algorithm. Date accessed: 2018‑09‑13.
[10] Wikipedia: "Raft" [online]. Available: https://en.wikipedia.org/wiki/Raft_(computer_science). Date accessed: 2018‑09‑13.
[11] A. Clement, E. Wong, L. Alvisi, M. Dahlin and M. Marchetti, "Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults" [online]. Available: https://www.usenix.org/legacy/event/nsdi09/tech/full_papers/clement/clement.pdf. Date accessed: 2018‑10‑22.
[12] L. Baird, "The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance" [online]. Available: https://www.swirlds.com/downloads/SWIRLDSTR201601.pdf. Date accessed: 2018‑09‑30.
[13] "Just My Thoughts: Introduction to Gossip" [online]. Available: https://managementfromscratch.wordpress.com/2016/04/01/introductiontogossip/. Date accessed: 2018‑10‑22.
[14] L. Baird, "Swirlds and Sybil Attacks" [online]. Available: http://www.swirlds.com/downloads/SwirldsandSybilAttacks.pdf. Date accessed: 2018‑09‑30.
[15] Y. Jia, "Demystifying Hashgraph: Benefits and Challenges" [online]. Available: https://hackernoon.com/demystifyingHashgraphbenefitsandchallengesd605e5c0cee5. Date accessed: 2018‑09‑30.
[16] M. Graczyk, "Hashgraph: A Whitepaper Review" [online]. Available: https://medium.com/opentoken/Hashgraphawhitepaperreviewf7dfe2b24647. Date accessed: 2018‑09‑30.
[17] C. Cachin and J. A. Poritz, "Secure Intrusion-tolerant Replication on the Internet" [online]. Available: https://cachin.com/cc/papers/sintra.pdf. Date accessed: 2018‑10‑22.
[18] C. Cachin, K. Kursawe, F. Petzold and V. Shoup, "Secure and Efficient Asynchronous Broadcast Protocols" [online]. Available: https://www.shoup.net/papers/ckps.pdf. Date accessed: 2018‑10‑22.
[19] J. Chen and S. Micali, "Algorand", White Paper [online]. Available: https://arxiv.org/pdf/1607.01341.pdf. Date accessed: 2018‑09‑13.
[20] R. Pass and E. Shi, "Thunderella: Blockchains with Optimistic Instant Confirmation" White Paper [online]. Available: https://eprint.iacr.org/2017/913.pdf. Date accessed: 2018‑09‑13.
[21] "A Beginner's Guide to Big O Notation" [online]. Available: https://robbell.net/2009/06/abeginnersguidetobigonotation/. Date accessed: 2018‑10‑22.
[22] J. Aspnes and M. Herlihy, "Fast Randomized Consensus using Shared Memory" [online]. Available: http://www.cs.yale.edu/homes/aspnes/papers/jalg90.pdf. Date accessed: 2018‑10‑22.
[23] "Protocol for Asynchronous, Reliable, Secure and Efficient Consensus" [online]. Available: https://github.com/maidsafe/parsec. Date accessed: 2018‑10‑22.
[24] "Project Spotlight: Maidsafe and PARSEC Part 2" [online]. Available: https://flatoutcrypto.com/home/maidsafeparsecexplanationpt2. Date accessed: 2018‑09‑18.
[25] "Red Belly Blockchain" [online]. Available: https://www.ccn.com/tag/redbellyblockchain/. Date accessed: 2018‑10‑10.
[26] M. Ben-Or, B. Kelmer and T. Rabin, "Asynchronous Secure Computations with Optimal Resilience" [online]. Available: https://dl.acm.org/citation.cfm?id=198088. Date accessed: 2018‑10‑22.
[27] A. Mostefaoui, M. Hamouna and M. Raynal, "Signature-free Asynchronous Byzantine Consensus with $t<n/3$ and O(n^{2}) Messages" [online]. Available: https://hal.inria.fr/hal00944019v2/document. Date accessed: 2018‑10‑22.
[28] A. Miller, Y. Xia, K. Croman, E. Shi and D. Song, "The Honey Badger of BFT Protocols", White Paper [online]. Available: https://eprint.iacr.org/2016/199.pdf. Date accessed: 2018‑08‑30.
[29] D. Schwartz, N. Youngs and A. Britto, "The Ripple Protocol Consensus Algorithm" [online]. Available: https://ripple.com/files/ripple_consensus_whitepaper.pdf. Date accessed: 2018‑09‑13.
[30] J. Kwon, "TenderMint: Consensus without Mining" [online]. Available: http://theeye.eu/public/Books/campdivision.com/PDF/Computers%20General/Privacy/bitcoin/tendermint_v05.pdf. Date accessed: 2018‑09‑20.
[31] Y. Yang, "LinBFT: Linear-Communication Byzantine Fault Tolerance for Public Blockchains" [online]. Available: https://arxiv.org/pdf/1807.01829.pdf. Date accessed: 2018‑09‑20.
[32] "Protocol Spotlight: Avalanche Part 1" [online]. Available: https://flatoutcrypto.com/home/avalancheprotocol. Date accessed: 2018‑09‑09.
[33] P. Chevalier, B. Kaminski, F. Hutchison, Q. Ma and S. Sharma, "Protocol for Asynchronous, Reliable, Secure and Efficient Consensus (PARSEC)". White Paper [online]. Available: http://docs.maidsafe.net/Whitepapers/pdf/PARSEC.pdf. Date accessed: 2018‑08‑30.
[34] "Project Spotlight: Maidsafe and PARSEC Part 1" [online]. Available: https://medium.com/@flatoutcrypto/projectspotlightmaidsafeandparsecpart14830cec8d9e3. Date accessed: 2018‑08‑30.
[35] S. Micali, "Byzantine Agreement Made Trivial" [online]. Available: https://people.csail.mit.edu/silvio/Selected%20Scientific%20Papers/Distributed%20Computation/BYZANTYNE%20AGREEMENT%20MADE%20TRIVIAL.pdf. Date accessed: 2018‑08‑30.
[36] T. Crain, V. Gramoli, M. Larrea and M. Raynal, "DBFT: Efficient Byzantine Consensus with a Weak Coordinator and its Application to Consortium Blockchains" [online]. Available: http://gramoli.redbellyblockchain.io/web/doc/pubs/DBFTpreprint.pdf. Date accessed: 2018‑09‑30.
[37] Wikipedia: "Byzantine Fault Tolerance" [online]. Available: https://en.wikipedia.org/wiki/Byzantine_fault_tolerance. Date accessed: 2018‑09‑30.
[38] Wikipedia: "Liveness" [online]. Available: https://en.wikipedia.org/wiki/Liveness. Date accessed: 2018‑09‑30.
[39] Team Rocket, "Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies" [online]. Available: https://ipfs.io/ipfs/QmUy4jh5mGNZvLkjies1RWM4YuvJh5o2FYopNPVYwrRVGV. Date accessed: 2018‑09‑30.
Appendices
Appendix A: Terminology
In order to gain a full understanding of the field of consensus mechanisms, specifically BFT consensus mechanisms, certain terms and concepts need to be defined and fleshed out.
Consensus
Distributed agents (these could be computers, generals coordinating an attack, or sensors in a nuclear plant) that communicate via a network (be it digital, courier or mechanical) need to agree on facts in order to act as a coordinated whole.
When all non-faulty agents agree on a given fact, the network is said to be in consensus.
A consensus protocol may adhere to a host of formal requirements, including:
- Agreement - all correct processes agree on the same fact.
- Weak Validity - for all correct processes, the output must be the input of some correct process.
- Strong Validity - if all correct processes receive the same input value, they must all output that value.
- Termination - all processes must eventually decide on an output value [2].
Binary Consensus
There is a unique case of the consensus problem, referred to as binary consensus, which restricts the input, and hence the output domain, to a single binary digit {0,1}.
When the input domain is large relative to the number of processes, e.g. an input set of all the natural numbers, it can be shown that consensus is impossible in a synchronous message-passing model [2].
Byzantine Fault Tolerance
Byzantine failures are considered the most general and most difficult class of failures among the failure modes. The so-called fail-stop failure mode occupies the simplest end of the spectrum. Whereas fail-stop failure simply means that the only way to fail is a node crash, detected by other nodes, Byzantine failures imply no restrictions, which means that the failed node can generate arbitrary data while pretending to be a correct one. Thus, Byzantine failures can confuse failure detection systems, which makes fault tolerance difficult [37].
Several papers in the literature contextualize the problem using generals at different camps, situated outside the enemy castle, needing to decide whether or not to attack. A consensus algorithm that fails might see one general attack while all the others stay back, leaving the first general vulnerable.
One key property of a blockchain system is that the nodes do not trust each other, meaning that some may behave in a Byzantine manner. The consensus protocol must therefore tolerate Byzantine failures.
A network is Byzantine Fault Tolerant (BFT) when it can provide service and reach consensus despite faults or failures of the system. The processes use a protocol for consensus or atomic broadcast (a broadcast in which all correct processes in a system of multiple processes receive the same set of messages, in the same order) to agree on a common sequence of operations to execute [20].
The literature on distributed consensus is vast, and there are many variants of previously proposed protocols being developed for blockchains. They can be largely classified along a spectrum:

One extreme consists of purely computation-based protocols, which use proof of computation to randomly select a node that single-handedly decides the next operation.

The other extreme is purely communication-based protocols, in which nodes have equal votes and go through multiple rounds of communication to reach consensus. Practical Byzantine Fault Tolerance (PBFT), a replication algorithm designed to be BFT, is the prime example [10].
For systems with n nodes, of which f are Byzantine, it has been shown that no algorithm exists that solves the consensus problem for f ≥ n/3 [21].
So how does the Bitcoin protocol get away with only needing 51% honest nodes to reach consensus? Strictly speaking, Bitcoin is not a BFT consensus mechanism, because there is never absolute finality in Bitcoin ledgers; there is always a chance (however small) that someone can 51%-attack the network and rewrite the entire history. Bitcoin's consensus is probabilistic, rather than deterministic.
Practical Byzantine Faulttolerant Variants
PoW suffers from non-finality, i.e. a block appended to a blockchain is not confirmed until it is extended by many other blocks. Even then, its existence in the blockchain is only probabilistic. For example, eclipse attacks on Bitcoin exploit this probabilistic guarantee to allow double spending. In contrast, the original PBFT protocol is deterministic [10].
Deterministic and Nondeterministic Protocols
Deterministic, bounded Byzantine agreement relies on consensus being finalized for each epoch before moving to the next one, ensuring that there is some safety about a consensus reference point prior to continuing. If, instead, you allow an unbounded number of consensus agreements within the same epoch, then there is no overall consensus reference point with which to declare finality, and thus safety is compromised [8].
For nondeterministic or probabilistic protocols, the probability that an honest node is undecided after r rounds approaches zero as r approaches infinity.
Nondeterministic protocols that solve consensus under the purely asynchronous case potentially rely on random oracles and generally incur high message complexity overhead, as they depend on reliable broadcasting for all communication.
Protocols such as HoneyBadgerBFT fall into this class of nondeterministic protocols under asynchrony. Normally, they require three instances of reliable broadcast for a single round of communication [6].
ScalabilityPerformance Tradeoff
As briefly mentioned in the Introduction, the scalability of BFT protocols in terms of the number of participants is highly limited, and the performance of most protocols deteriorates as the number of involved replicas increases. This effect is especially problematic for BFT deployment in permissionless blockchains [7].
The problem of BFT scalability is twofold: high throughput, and a large consensus group with good reconfigurability that can tolerate a high number of failures, are both desirable properties in BFT protocols, but they are often in direct conflict.
Bitcoin mining, for example, supports thousands of participants, offers good reconfigurability, i.e. nodes can join or leave the network at any time, and can tolerate a high number of failures. However, it is only able to process a severely limited number of transactions per second. Most BFT protocols achieve a significantly higher throughput, but are limited to small groups of participants of fewer than 20 nodes, and group reconfiguration is not easily achievable.
Several approaches have been employed to remedy these problems, e.g. threshold cryptography, creating new consensus groups for every round and limiting the number of necessary messages to reach consensus [3].
Appendix B: Timing Assumptions
Forms of Timing Assumptions  Degrees of Synchrony
Synchrony
Here, the time for nodes to wait and receive information is predefined. If a node has not received an input within the predefined time structure, there is a problem [5].
In synchronous systems, it is assumed that all communications proceed in rounds. In one round, a process may send all the messages it requires, while receiving all messages from other processes. In this manner, no message from one round may influence any messages sent within the same round [21].
A ΔT-synchronous network guarantees that every message sent is delivered after, at most, a delay of ΔT (where ΔT is a measure of real time) [6]. Synchronous protocols come to a consensus after ΔT [5].
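A small sketch of the failure detection this bound enables: if a reply has not arrived within ΔT of the request being sent, the sender can safely be flagged as failed (the timestamps and node names here are hypothetical values for illustration).

```python
def detect_failures(sent_at, delivered_at, delta_t):
    """In a delta_t-synchronous model, every message sent at time s must
    arrive by s + delta_t; a missing or late reply marks the node failed."""
    failed = []
    for node, s in sent_at.items():
        arrival = delivered_at.get(node)
        if arrival is None or arrival > s + delta_t:
            failed.append(node)
    return failed

sent = {"a": 0.0, "b": 0.0, "c": 0.0}
got = {"a": 0.8, "b": 2.5}          # "c" never replied
print(detect_failures(sent, got, delta_t=1.0))
```

This clean detection rule is exactly what weaker synchrony models give up: without a known ΔT, a silent node is indistinguishable from a slow one.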
Partial Synchrony
Here, the network retains some form of a predefined timing structure. However, it can operate without knowing said assumption of how fast nodes can exchange messages over the network. Instead of pushing out a block every x seconds, a partially synchronous blockchain would gauge the limit, with messages always being sent and received within the unknown deadline.
Partially synchronous protocols come to a consensus in an unknown, but finite period [5].
Unknown-ΔT Model
The protocol is unable to use the delay bound as a parameter [6].
Eventually Synchronous
The message delay bound Δ is only guaranteed to hold after some unknown instant, called the "Global Stabilization Time" [6].
Weak Synchrony
Most existing BFT systems, even those called "robust", assume some variation of weak synchrony, where messages are guaranteed to be delivered after a certain bound ΔT, but ΔT may be time-varying or unknown to the protocol designer.
However, the liveness properties of weakly synchronous protocols can fail completely when the expected timing assumptions are violated, e.g. due to a malicious network adversary. In general, liveness refers to a set of properties of concurrent systems that require a system to make progress, despite the fact that its concurrently executing components may have to "take turns" in critical sections: parts of the program that cannot be run simultaneously by multiple components [38].
Even when the weak synchrony assumptions are satisfied in practice, weakly synchronous protocols degrade significantly in throughput when the underlying network is unpredictable. Unfortunately, weakly synchronous protocols require timeout parameters that are difficult to tune, especially in cryptocurrency application settings; and when the chosen timeout values are either too long or too short, throughput can be hampered.
In terms of feasibility, both weak and partially synchronous protocols are equivalent. A protocol that succeeds in one setting can be systematically adapted for another. In terms of concrete performance, however, adjusting for weak synchrony means gradually increasing the timeout parameter over time, e.g. by an exponential backoff policy. This results in delays when recovering from transient network partition. Protocols typically manifest these assumptions in the form of a timeout event. For example, if parties detect that no progress has been made within a certain interval, then they take a corrective action such as electing a new leader. Asynchronous protocols do not rely on timers, and make progress whenever messages are delivered, regardless of actual clock time [6].
Random Synchrony
Messages are delivered with random delays, such that the average delay is finite. There may be periods of arbitrarily long delays (this is a weaker assumption than weak synchrony, and only a bit stronger than full asynchrony, where the only guarantee is that messages are eventually delivered). It is impossible to tell whether an instance has failed by completely stopping, or if there is just a delay in message delivery [1].
Asynchrony
Asynchronous Networks and Protocols
In an asynchronous network, the adversary can deliver messages in any order and at any time. However, the message must eventually be delivered between correct nodes. Nodes in an asynchronous network effectively have no use for realtime clocks, and can only take actions based on the ordering of messages they receive [6]. The speed is determined by the speed at which the network communicates, instead of a fixed limit of x seconds.
An asynchronous protocol requires a different means to decide when all nodes are able to come to a consensus.
As will be discussed in FLP Impossibility, the FLP result rules out the possibility of deterministic asynchronous protocols for atomic broadcast and many other tasks. A deterministic protocol must therefore make some stronger timing assumptions [6].
Counting Rounds in Asynchronous Networks
Although the guarantee of eventual delivery is decoupled from notions of "real time", it is nonetheless desirable to characterize the running time of asynchronous protocols. The standard approach is for the adversary to assign each message a virtual round number, subject to the condition that every (r-1)-message between correct nodes must be delivered before any (r+1)-message is sent.
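A toy checker for this condition on a trace of send/deliver events can make it concrete (the `(kind, round, time)` tuple format is an assumption made purely for this illustration):

```python
def rounds_valid(events):
    """Check the virtual-round condition described above: every round
    r-1 delivery must precede any round r+1 send in the trace.
    Events are (kind, round, time) tuples with kind in {"send", "deliver"}."""
    for kind1, r1, t1 in events:
        for kind2, r2, t2 in events:
            # a deliver of round r1 constrains sends of round r1 + 2
            if kind1 == "deliver" and kind2 == "send" and r2 == r1 + 2:
                if t2 <= t1:
                    return False
    return True

trace = [("send", 1, 0.0), ("deliver", 1, 1.0), ("send", 3, 2.0)]
print(rounds_valid(trace))
```

An adversarial schedule that sends a round-3 message before a round-1 delivery completes would fail this check, which is precisely what the round-numbering condition forbids.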
Problem with Timing Assumptions
General
The problem with both synchronous and partially synchronous assumptions is that "the protocols based on timing assumptions are unsuitable for decentralized, cryptocurrency settings, where network links can be unreliable, network speeds change rapidly, and network delays may even be adversarially induced" [6].
Denial of Service Attack
Basing a protocol on timings exposes the network to Denial of Service (DoS) attacks. A synchronous protocol will be deemed unsafe if a DoS attack slows down the network sufficiently. Even though a partially synchronous protocol would be safe, it would be unable to operate, as messages would be exposed to interference.
An asynchronous protocol would be able to function under a DoS attack. However, it is difficult to reach consensus, as it is impossible to know if the network is under attack, or if a particular message is delayed by the protocol itself.
FLP Impossibility
Reference [22] maps out what it is possible to achieve with distributed processes in an asynchronous environment.
The result, referred to as the FLP result, concerns the problem of consensus, i.e. getting a distributed network of processors to agree on a common value. This problem was known to be solvable in a synchronous setting, where processes proceed in simultaneous steps. The synchronous solution was seen as resilient to faults, where processors crash and take no further part in the computation. Synchronous models allow failures to be detected by waiting one entire step length for a reply from a processor, and presuming that it has crashed if no reply is received.
This kind of failure detection is not possible in an asynchronous setting, as there are no bounds on the amount of time a processor might take to complete its work and then respond. The FLP result shows that in an asynchronous setting, where only one processor might crash, there is no distributed algorithm that solves the consensus problem [23].
Randomized Agreement
The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. Every protocol for this problem has the possibility of non-termination [22]. While the vast majority of practical BFT protocols steer clear of this impossibility result by making timing assumptions, randomness (and, in particular, cryptography) provides an alternative route. Asynchronous BFT protocols have been used for a variety of tasks, such as binary agreement (ABA), reliable broadcast (RBC) and more [6].
Contributors
 https://github.com/kevoulee
 https://github.com/CjS77
 https://github.com/hansieodendaal
 https://github.com/anselld
Scaling
Purpose
The blockchain scalability problem refers to the discussion concerning the limits on the transaction throughput a blockchain network can process. It is related to the fact that records (known as blocks) in the bitcoin blockchain are limited in size and frequency [1].
Definitions

From Bitcoin StackExchange: The term "onchain scaling" is frequently used to exclusively refer to increasing the blockchain capacity by means of bigger blocks. However, in the literal sense of the term, it should refer to any sort of protocol change that improves the network's capacity at the blockchain layer. These approaches tend to provide at most a linear capacity increase, although some are also scalability improvements [2].
Examples:
 block size/block weight increase;
 witness discount of Segregated Witness;
 smaller size of Schnorr signatures;
 Bellare-Neven signature aggregation;
 key aggregation.

From Tari Labs: Analogous to the OSI layers for communication, in blockchain technology decentralized Layer 2 protocols, also commonly referred to as Layer 2 scaling, refers to transaction throughput scaling solutions. Decentralized Layer 2 protocols run on top of the main blockchain (offchain), while preserving the attributes of the main blockchain (e.g. cryptoeconomic consensus). Instead of each transaction, only the resultant of a number of transactions is embedded onchain [3].
References
[1] Wikipedia: "Bitcoin Scalability Problem" [online]. Available: https://en.wikipedia.org/wiki/Bitcoin_scalability_problem. Date accessed: 2019-06-11.
[2] Bitcoin StackExchange: "What is the Difference between On-chain Scaling and Off-chain Scaling?" [online]. Available: https://bitcoin.stackexchange.com/questions/63375/what-is-the-difference-between-on-chain-scaling-and-off-chain-scaling. Date accessed: 2019-06-10.
[3] Tari Labs: "Layer 2 Scaling Survey - What is Layer 2 Scaling?" [online]. Available: layer2scalinglandscape/layer2scalingsurvey.html#whatislayer2scaling. Date accessed: 2019-06-10.
Layer 2 Scaling Survey
 What is Layer 2 Scaling?
 How will this be Applicable to Tari?
 Layer 2 Scaling Current Initiatives
 Observations
 References
 Contributors
What is Layer 2 Scaling?
In the blockchain and cryptocurrency world, transaction processing scaling is a tough problem to solve. This is limited by the average block creation time, the block size limit, and the number of newer blocks needed to confirm a transaction (confirmation time). These factors make "over the counter" type transactions similar to MasterCard or Visa nearly impossible if done on the main blockchain (onchain).
Let's postulate that blockchain and cryptocurrency "take over the world" and are responsible for all global noncash transactions performed, i.e. 433.1 billion in 2014 to 2015 [24]. This means 13,734 transactions per second (tx/s) on average! (To put this into perspective, VisaNet currently processes 160 billion transactions per year [25] and is capable of handling more than 65,000 transaction messages per second [26].) This means that if all of those were simple single-input-single-output noncash transactions and performed on:

SegWit-enabled Bitcoin "like" blockchains that can theoretically handle ~21.31 tx/s, we would need ~644 parallel versions, and with a SegWit transaction size of 190 bytes [27], the combined blockchain growth would be ~210 GB per day!

Ethereum "like" blockchains, and taking current gas prices into account, Ethereum can theoretically process ~25.4tx/s, then ~541 parallel versions would be needed and, with a transaction size of 109 bytes ([28], [29]), the combined blockchain growth would be ~120GB per day!
This is why we need a proper scaling solution that would not bloat the blockchain.
The Open Systems Interconnection (OSI) model defines seven layers for communication functions of a computing system. Layer 1 refers to the physical layer and Layer 2 to the data link layer. Layer 1 is never concerned with functions of Layer 2 and up; it just delivers transmission and reception of raw data. In turn, Layer 2 only knows about Layer 1 and defines the protocols that deliver nodetonode data transfer [1].
Analogous to the OSI layers for communication, in blockchain technology, decentralized Layer 2 protocols, also commonly referred to as Layer 2 scaling, refers to transaction throughput scaling solutions. Decentralized Layer 2 protocols run on top of the main blockchain (offchain), while preserving the attributes of the main blockchain (e.g. cryptoeconomic consensus). Instead of each transaction, only the result of a number of transactions is embedded onchain [2].
Also:

Does every transaction need every parent blockchain node in the world to verify it?

Would I be willing to have (temporary) lower security guarantees for most of my daytoday transactions if I could get them validated (whatever we take that to mean) nearinstantly?
If you can answer "no" and "yes", then you're looking for a Layer 2 scaling solution.
How will this be Applicable to Tari?
Tari is a high-throughput protocol that will need to handle real-world transaction volumes. For example, Big Neon, the initial business application to be built on top of the Tari blockchain, requires high-volume transactions in a short time, especially when ticket sales open and when tickets are redeemed at an event. Imagine filling an 85,000-seat stadium with 72 entrance queues on match days. Serialized real-world scanning boils down to approximately 500 tickets in four minutes, or approximately two spectators allowed access per second per queue.
This would be impossible to do with parent blockchain scaling solutions.
Layer 2 Scaling Current Initiatives
Micropayment Channels
What are they?
A micropayment channel is a class of techniques designed to allow users to make multiple Bitcoin transactions without committing all of the transactions to the Bitcoin blockchain. In a typical payment channel, only two transactions are added to the blockchain, but an unlimited or nearly unlimited number of payments can be made between the participants [10].
Several channel designs have been proposed or implemented over the years, including:
 Nakamoto high-frequency transactions;
 Spillman-style payment channels;
 CLTV-style payment channels;
 Poon-Dryja payment channels;
 Decker-Wattenhofer duplex payment channels;
 Decker-Russell-Osuntokun eltoo channels;
 Hashed Time-Locked Contracts (HTLCs).
With specific focus on Hashed Time-Locked Contracts: this technique can allow payments to be securely routed across multiple payment channels. HTLCs are integral to the design of more advanced payment channels such as those used by the Lightning Network.
The Lightning Network is a second-layer payment protocol that operates on top of a blockchain. It enables instant transactions between participating nodes. The Lightning Network features a peer-to-peer system for making micropayments of digital cryptocurrency through a network of bidirectional payment channels, without delegating custody of funds and with minimized trust in third parties [11].
Normal use of the Lightning Network consists of opening a payment channel by committing a funding transaction to the relevant blockchain. This is followed by making any number of Lightning transactions that update the tentative distribution of the channel's funds without broadcasting to the blockchain; and optionally followed by closing the payment channel by broadcasting the final version of the transaction to distribute the channel's funds.
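The lifecycle above (one funding transaction, many offchain updates, one closing transaction) can be sketched as a toy model. HMAC tags stand in here for the real digital signatures (ECDSA/Schnorr) a production channel would use, and all names are illustrative:

```python
import hashlib
import hmac

def sign(key, state):
    # Stand-in "signature": an HMAC over the serialized state.
    return hmac.new(key, repr(state).encode(), hashlib.sha256).hexdigest()

class Channel:
    """Toy two-party payment channel: only funding and settlement touch the chain."""

    def __init__(self, key_a, key_b, deposit_a, deposit_b):
        self.keys = (key_a, key_b)
        self.state = {"seq": 0, "a": deposit_a, "b": deposit_b}
        self.chain = [("fund", deposit_a + deposit_b)]   # on-chain tx #1

    def pay(self, amount, a_to_b=True):
        new = dict(self.state)
        new["seq"] += 1
        new["a"] += -amount if a_to_b else amount
        new["b"] += amount if a_to_b else -amount
        if new["a"] < 0 or new["b"] < 0:
            raise ValueError("insufficient channel balance")
        # Both parties sign the new tentative state; no blockchain interaction.
        self.sigs = [sign(k, new) for k in self.keys]
        self.state = new

    def close(self):
        self.chain.append(("settle", self.state["a"], self.state["b"]))  # tx #2
        return self.chain

ch = Channel(b"alice", b"bob", 50, 50)
ch.pay(10); ch.pay(5); ch.pay(2, a_to_b=False)
```

However many payments are made, only the funding and settlement entries ever reach the chain.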
Who uses them?
The Lightning Network is spreading across the cryptocurrency landscape. It was originally designed for Bitcoin. However, Litecoin, Zcash, Ethereum and Ripple are just a few of the many cryptocurrencies planning to implement or test some form of the network [12].
Strengths
 Micropayment channels are one of the leading solutions that have been presented to scale Bitcoin, which do not require a change to the underlying protocol.
 Transactions are processed instantly, the account balances of the nodes are updated, and the money is immediately accessible to the new owner.
 Transaction fees are a fraction of the transaction cost [13].
Weaknesses
 Micropayment channels are not suitable for making bulk payments, as the intermediate nodes in the multichannel payment network may not be funded sufficiently to move the funds along.
 Recipients cannot receive money unless their node is connected and online at the time of the transaction. At the time of writing (July 2018), channels were only bilateral.
Opportunities
Opportunities are fewer than expected, as Tari's ticketing use case requires many fast transactions with many parties, and not many fast transactions with a single party. Nonfungible assets must be "broadcasted", but state channels are private between two parties.
State Channels
What are they?
State channels are the more general form of micropayment channels. They can be used not only for payments, but for any arbitrary "state update" on a blockchain, such as changes inside a smart contract [16].
State channels allow multiple transactions to be made within offchain agreements with very fast processing, and the final settlement onchain. They keep the operation mode of blockchain protocol, but change the way it is used so as to deal with the challenge of scalability.
Any change of state within a state channel requires explicit cryptographic consent from all parties designated as "interested" in that part of the state [19].
Who uses them?
On Ethereum:

Raiden Network
 Uses state channels to research state channel technology, define protocols and develop reference implementations.
 State channels work with any ERC20compatible token.
 State updates between two parties are done via digitally signed and hash-locked transfers as the consensus mechanism, called balance proofs, which are also secured by a timeout. These can be settled on the Ethereum blockchain at any time. The Raiden Network uses HTLCs in exactly the same manner as the Lightning Network.

Counterfactual ([16], [19], [31])
 Uses state channels as a generalized framework for the integration of native state channels into Ethereumbased decentralized applications.
 A generalized state channel framework is one where state is deposited once, and is then used afterwards by any application or set of applications.
 Counterfactual instantiation means to instantiate a contract without actually deploying it onchain. It is achieved by making users sign and share commitments to the multisig wallet.
 When a contract is counterfactually instantiated, all parties in the channel act as though it has been deployed, even though it has not.
 A global registry is introduced. This is an onchain contract that maps unique deterministic addresses for any Counterfactual contract to actual onchain deployed addresses. The hashing function used to produce the deterministic address can be any function that takes into account the bytecode, its owner (i.e. the multisig wallet address), and a unique identifier.
 A typical Counterfactual state channel is composed of counterfactually instantiated objects.
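The deterministic-address idea can be sketched as follows; the hash construction and all names are illustrative assumptions, not Counterfactual's actual scheme:

```python
import hashlib

# Sketch: the registry maps hash(bytecode, owner multisig, salt) to a real
# deployed address once (and only if) the contract ever goes on-chain.

def counterfactual_address(bytecode: bytes, owner: str, salt: int) -> str:
    h = hashlib.sha256(bytecode + owner.encode() + salt.to_bytes(8, "big"))
    return "0x" + h.hexdigest()[:40]

registry = {}  # counterfactual address -> on-chain address (filled on deploy)

# "0xMultisigWallet" is a hypothetical owner address for illustration.
cf_addr = counterfactual_address(b"\x60\x60", "0xMultisigWallet", 1)

# Parties can reference cf_addr in signed commitments immediately; the
# registry entry only gains a real address if deployment is ever needed.
registry[cf_addr] = None          # counterfactually instantiated, not deployed
registry[cf_addr] = "0xDeployed"  # after an (optional) on-chain deploy
```

Because the address is a pure function of the bytecode, owner and salt, every party computes the same address without any on-chain interaction.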

FunFair
 Uses state channels as a decentralized slot machine gambling platform, but still using centralized server-based random number generation.
 Instantiates a normal "Raidenlike" state channel (called fate channel) between the player and the casino. Final states are submitted to blockchain after the betting game is concluded.
 Investigating the use of threshold cryptography such as Boneh-Lynn-Shacham (BLS) signature schemes to enable truly secure random number generation by a group of participants.
On NEO:
 Trinity is an open-source network protocol based on NEP-5 smart contracts.
 Trinity for NEO is the same as the Raiden Network for Ethereum.
 Trinity uses the same consensus mechanism as the Raiden Network.
 A new token, TNC, has been introduced to fund the Trinity network, but NEO, NEP-5 and TNC tokens are supported.
Strengths
 Allows payments and changes to smart contracts.
 State channels have strong privacy properties. Everything is happening "inside" a channel between participants.
 State channels have instant finality. As soon as both parties sign a state update, it can be considered final.
Weaknesses
State channels rely on availability; both parties must be online.
Opportunities
Opportunities are fewer than expected, as Tari's ticketing use case requires many fast transactions with many parties, and not many fast transactions with a single party. Nonfungible assets must be "broadcasted", but state channels are private between two parties.
Offchain Matching Engines
What are they?
Orders are matched offchain in a matching engine and fulfilled onchain. This allows complex orders, supports crosschain transfers, and maintains a public record of orders as well as a deterministic specification of behavior. Offchain matching engines make use of a token representation smart contract that converts global assets into smart contract tokens and vice versa [5].
Who uses them?

Neon Exchange (NEX) ([5], [35])
 NEX uses a NEO decentralized application (dApp) with tokens.
 Initial support is planned for NEO, ETH, NEP-5 and ERC20 tokens.
 Crosschain support is planned for trading BTC, LTC and RPX on NEX.
 The NEX offchain matching engine will be scalable, distributed, fault-tolerant, and function continuously without downtime.
 Consensus is achieved using cryptographically signed requests; publicly specified deterministic offchain matching engine algorithms; and public ledgers of transactions and reward for foul play. The trade method of the exchange smart contract will only accept orders signed by a private key held by the matching engine.
 The matching engine matches the orders and submits them to the respective blockchain smart contract for execution.
 A single invocation transaction on NEO can contain many smart contract calls. Batch commit of matched orders in one onchain transaction is possible.
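The offchain-matching/onchain-batch-settlement pattern can be sketched as follows. The order fields and the naive price-time matching rule are assumptions for illustration, not NEX's published algorithm:

```python
from collections import deque

# Sketch: orders are matched in memory; only matched pairs are submitted
# to the chain, batched into a single settlement transaction.

def match_orders(bids, asks):
    """bids/asks: lists of (price, qty, account); returns fills + batch tx."""
    bids = deque(sorted(bids, key=lambda o: -o[0]))   # best (highest) bid first
    asks = deque(sorted(asks, key=lambda o: o[0]))    # best (lowest) ask first
    fills = []
    while bids and asks and bids[0][0] >= asks[0][0]:
        bp, bq, buyer = bids.popleft()
        ap, aq, seller = asks.popleft()
        qty = min(bq, aq)
        fills.append({"buyer": buyer, "seller": seller, "price": ap, "qty": qty})
        # Re-queue any unfilled remainder at the front of its book.
        if bq > qty:
            bids.appendleft((bp, bq - qty, buyer))
        if aq > qty:
            asks.appendleft((ap, aq - qty, seller))
    # One on-chain transaction commits the whole batch of matched orders.
    batch_tx = {"calls": [("settle", f) for f in fills]}
    return fills, batch_tx

fills, tx = match_orders(
    bids=[(102, 5, "alice"), (99, 3, "bob")],
    asks=[(100, 4, "carol"), (101, 2, "dan")],
)
```

Unmatched orders never touch the chain at all, which is where the throughput gain comes from.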

0x
 An Ethereum ERC20-based smart contract token (ZRX).
 Provides an open-source protocol to exchange ERC20-compliant tokens on the Ethereum blockchain using offchain matching engines in the form of dApps (Relayers) that facilitate transactions between Makers and Takers.
 Offchain order relay + onchain settlement.
 The Maker chooses a Relayer, specifies the token exchange rate, expiration time and fees to satisfy the Relayer's fee schedule, and signs the order with a private key.
 Consensus is governed with the publicly available DEX smart contract: addresses, token balances, token exchange, fees, signatures, order status and final transfer.
Strengths

Performance (NEX, 0x):
 offchain request/order;
 offchain matching.

NEX-specific:
 batched onchain commits;
 crosschain transfers;
 support of national currencies;
 public JavaScript Object Notation (JSON) Application Programming Interface (API) and web extension API for third-party applications to trade tokens;
 development environment: Elixir on top of Erlang, to enable a scalable, distributed and fault-tolerant matching engine;
 Cure53 full security audit of the web extension; NEX tokens will be regulated as registered European securities.

0x-specific:
 open-source protocol to enable creation of independent offchain dApp matching engines (Relayers);
 totally transparent matching of orders with no single point of control:
  a Maker's order only enters a Relayer's order book if the fee schedule is adhered to;
  an exchange can only happen if a Taker is willing to accept;
 consensus and settlement governed by the publicly available DEX smart contract.
Weaknesses

At the time of writing (July 2018), both NEX and 0x were still in development.

NEX-specific:
 a certain level of trust is required, similar to a traditional exchange;
 closed liquidity pool.

0x-specific:
Opportunities
Matching engines in general have opportunities for Tari; the specific scheme is to be investigated further.
Masternodes
What are they?
A masternode is a server on a decentralized network. It is utilized to complete unique functions in ways ordinary mining nodes cannot, e.g. direct send, instant transactions and private transactions. Because of their increased capabilities, masternodes typically require an investment in order to run. Masternode operators are incentivized and rewarded by earning portions of block rewards in the cryptocurrency they are facilitating. Masternodes get the standard return on their stakes, but are also entitled to a portion of the transaction fees, allowing for a greater return on investment (ROI) ([7], [9]).
 Dash Example [30]. Dash was the first cryptocurrency to implement the masternode model in its protocol. Under what Dash calls its proof of service algorithm, a secondtier network of masternodes exists alongside a firsttier network of miners to achieve distributed consensus on the blockchain. This twotiered system ensures that proof of service and proof of work perform symbiotic maintenance of Dash's network. Dash masternodes also enable a decentralized governance system that allows node operators to vote on important developments within the blockchain. A masternode for Dash requires a stake of 1,000 DASH. Dash and the miners each have 45% of the block rewards. The other 10% goes to the blockchain's treasury fund. Operators are in charge of voting on proposals for how these funds will be allocated to improve the network.
 Dash Deterministic Ordering. A special deterministic algorithm is used to create a pseudo-random ordering of the masternodes. By using the hash from the proof of work for each block, security of this functionality is provided by the mining network.
 Dash Trustless Quorums. The Dash masternode network is trustless, where no single entity can control the outcome. N pseudo-random masternodes (Quorum A) are selected from the total pool to act as an oracle for N pseudo-random masternodes (Quorum B) that are selected to perform the actual task. Quorum A nodes are the nodes that are mathematically closest to the current block hash, while Quorum B nodes are the furthest. This process is repeated for each new block in the blockchain.
 Dash Proof of Service. Bad actors could also run masternodes. To reduce the possibility of bad acting, nodes must ping the rest of the network to ensure they remain active. All masternode verification is done randomly via the Quorum system by the masternode network itself. Approximately 1% of the network is verified each block. This results in the entire masternode network being verified approximately six times per day. Six consecutive violations result in the deactivation of a masternode.
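The quorum selection described above can be sketched as follows; the XOR-of-hashes distance metric is an illustrative stand-in for Dash's actual algorithm:

```python
import hashlib

# Sketch: rank masternodes by the "distance" between a hash of their ID
# and the current block hash; the N closest form Quorum A (the oracle)
# and the N furthest form Quorum B (the workers).

def quorums(masternodes, block_hash: bytes, n: int):
    target = int.from_bytes(hashlib.sha256(block_hash).digest(), "big")

    def distance(node_id: str) -> int:
        h = int.from_bytes(hashlib.sha256(node_id.encode()).digest(), "big")
        return h ^ target

    ranked = sorted(masternodes, key=distance)
    return ranked[:n], ranked[-n:]   # (Quorum A: closest, Quorum B: furthest)

nodes = [f"mn{i}" for i in range(10)]
quorum_a, quorum_b = quorums(nodes, b"block-0001", 3)
```

Because the selection is a deterministic function of the block hash, every node computes the same quorums independently, yet the outcome is unpredictable before the block is mined.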
Who uses them?
Block, Bata, Crown, Chaincoin, Dash, Diamond, ION, Monetary Unit, Neutron, PIVX, Vcash and XtraBytes [8].
Strengths
Masternodes:
 help to sustain and take care of the ecosystem and can protect blockchains from network attacks;
 can perform decentralized governance of miners by having the power to reject or orphan blocks if required ([22], [30]);
 can support decentralized exchanges by overseeing transactions and offering fiat currency gateways;
 can be used to facilitate smart contracts such as instant transactions, anonymous transactions and decentralized payment processors;
 can facilitate a decentralized marketplace such as the blockchain equivalent of peerrun commerce sites such as eBay [22];
 compensate for Proof of Work's limitations: they avoid mining centralization and consume less energy [22];
 promise enhanced stability and network loyalty, as larger dividends and high initial investment costs make it less likely that operators will abandon their position in the network [22].
Weaknesses
 Maintaining masternodes can be a long and arduous process.
 ROI is not guaranteed and is inconsistent. In some applications, masternodes are only rewarded if they mine a block and are randomly chosen to be paid.
 In general, a masternode's IP address is publicized and thus open to attacks.
Opportunities
 Masternodes do not have a specific standard or protocol; many different implementations exist. If the Tari protocol employs Masternodes, they can be used to facilitate smart contracts offchain and to enhance the security of the primary blockchain.
 Masternodes increase the incentives for people to be involved with Tari.
Plasma
What is it?
Plasma blockchains are a chain within a blockchain, with state transitions enforced by bonded (time to exit) fraud proofs (block header hashes) submitted on the root chain. Plasma enables management of a tiered blockchain without a full persistent record of the ledger on the root blockchain, and without giving custodial trust to any third party. The fraud proofs enforce an interactive protocol of rapid fund withdrawals in case of foul play such as block withholding, and in cases where bad actors in a lowerlevel tier want to commit blocks to the root chain without broadcasting this to the higherlevel tiers [4].
Plasma is a framework for incentivized and enforced execution of smart contracts, scalable to a significant amount of state updates per second, enabling the root blockchain to be able to represent a significant amount of dApps, each employing its own blockchain in a tree format [4].
Plasma relies on two key parts, namely reframing all blockchain computations into a set of MapReduce functions, and an optional method to do Proof of Stake (PoS) token bonding on top of existing blockchains (enforced in an onchain smart contract). Nakamoto Consensus incentives discourage block withholding or other Byzantine behavior. If a chain is Byzantine, it has the option of going to any of its parents (including the root blockchain) to continue operation or exit with the current committed state [4].
MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key [38]. The Plasma MapReduce includes commitments on data to computation as input in the map phase, and includes a merkleized proofofstate transition in the reduce step when returning the result [4].
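A minimal MapReduce word count makes the map/reduce split concrete (a generic illustration; Plasma's map phase additionally commits to its input data, and its reduce phase returns a merkleized proof-of-state transition):

```python
from collections import defaultdict

# Plain MapReduce: a map function emits (key, value) pairs, and a reduce
# function merges all values that share the same intermediate key.

def map_phase(documents, map_fn):
    intermediate = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):      # emit intermediate pairs
            intermediate[key].append(value)
    return intermediate

def reduce_phase(intermediate, reduce_fn):
    return {k: reduce_fn(k, vs) for k, vs in intermediate.items()}

docs = ["plasma chains", "plasma trees", "chains of chains"]
counts = reduce_phase(
    map_phase(docs, lambda d: [(w, 1) for w in d.split()]),
    lambda key, values: sum(values),
)
```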
Who uses it?
 Loom Network, using Delegated Proof of Stake (DPoS) consensus and validation, enabling scalable Application Specific Side Chains (DAppChains), running on top of Ethereum ([4], [15]).
 OMG Network (OmiseGO), using PoS consensus and validation, a Plasma blockchain scaling solution for finance running on top of Ethereum ([6], [14]).
Strengths
 Not all participants need to be online to update state.
 Participants do not need a record of entry on the parent blockchain to enable their participation in a Plasma blockchain.
 Minimal data is needed on the parent blockchain to confirm transactions when constructing Plasma blockchains in a tree format.
 Private blockchain networks can be constructed, enforced by the root blockchain. Transactions may occur on a local private blockchain and have financial activity bonded by a public parent blockchain.
 Rapid exit strategies in case of foul play.
Weaknesses
At the time of writing (July 2018), Plasma still needed to be proven on other networks apart from Ethereum.
Opportunities
 Has opportunities for Tari as a Layer 2 scaling solution.
 Possibility to create a Tari ticketing Plasma dAppChain running off Monero without creating a Tari-specific root blockchain? Note: this would make the Tari blockchain dependent on another blockchain.
 The Loom Network's Software Development Kit (SDK) makes it extremely easy for anyone to create a new Plasma blockchain. In less than a year, a number of successful and diverse dAppChains have launched. The next one could easily be for ticket sales...
TrueBit
What is it?
TrueBit is a protocol that provides security and scalability by enabling trustless smart contracts to perform and offload complex computations. This makes it different from state channels and Plasma, which are more useful for increasing the total transaction throughput of the Ethereum blockchain. TrueBit relies on solvers (akin to miners), who have to stake their deposits in a smart contract, solve computation and, if correct, get their deposit back. If the computation is incorrect, the solver loses their deposit. TrueBit uses an economic mechanism called the "verification game", where an incentive is created for other parties, called challengers, to check the solvers' work ([16], [40], [43]).
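The incentive structure of the verification game can be sketched as follows; the function names and the flat reward split are assumptions for illustration, not TrueBit protocol constants:

```python
# Toy sketch: a solver stakes a deposit and posts a result; any challenger
# can recompute and dispute it, and a wrong solver loses the stake, part
# of which rewards the successful challenger.

def verification_round(task, solver_answer, deposit, challengers):
    truth = task()                      # the actual (expensive) computation
    for challenger in challengers:
        if solver_answer != truth:      # challenger wins the dispute
            return {"solver_slashed": True,
                    "reward_to": challenger,
                    "payout": deposit // 2}
    return {"solver_slashed": False, "deposit_returned": deposit}

task = lambda: sum(i * i for i in range(1000))      # stand-in computation
honest = verification_round(task, task(), deposit=100, challengers=["carol"])
cheat = verification_round(task, 0, deposit=100, challengers=["carol"])
```

The point of the real protocol is that challengers only need to check work, not produce it, so honest solving becomes the rational strategy.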
Who uses it?
Golem cites TrueBit as a verification mechanism for its forthcoming outsourced computation network, as does LivePeer, a video streaming platform ([39], [41], [42]).
Strengths
 Outsourced computation: anyone in the world can post a computational task, and anyone else can receive a reward for completing it [40].
 Scalable: by decoupling verification for miners into a separate protocol, TrueBit can achieve high transaction throughput without facing a Verifier's Dilemma [40].
Weaknesses
At the time of writing (July 2018), TrueBit had not yet been fully tested.
Opportunities
Nothing at the moment, as Tari would not be doing heavy/complex computation, at least not in the short term.
TumbleBit
What is it?
The TumbleBit protocol was invented at Boston University. It is a unidirectional, unlinkable payment hub that is fully compatible with the Bitcoin protocol. TumbleBit allows parties to make fast, anonymous, offchain payments through an untrusted intermediary called the Tumbler. No one, not even the Tumbler, can tell which payer paid which payee during a TumbleBit epoch, i.e. a time period of significance.
Two modes of operation are supported:
 a classic mixing/tumbling/washing mode; and
 a fullyfledged payment hub mode.
TumbleBit consists of two interleaved fair-exchange protocols that rely on the Rivest–Shamir–Adleman (RSA) cryptosystem's blinding properties to prevent bad acting from either users or Tumblers, and to ensure anonymity. These protocols are:
 RSA-Puzzle-Solver Protocol; and
 Puzzle-Promise Protocol.
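The RSA blinding property these protocols rely on can be demonstrated with toy parameters (never use key sizes like this in practice): the Tumbler solves a blinded puzzle without learning which puzzle it solved.

```python
import math
import random

# Toy RSA parameters, for illustration only.
p, q = 61, 53
n, e = p * q, 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # "Tumbler's" private exponent (Python 3.8+)

def blind(y):
    """Multiply puzzle y by r^e for a random invertible r."""
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            return (y * pow(r, e, n)) % n, r

def unblind(solved_blinded, r):
    """Strip the blinding factor from the solved puzzle."""
    return (solved_blinded * pow(r, -1, n)) % n

secret = 42
puzzle = pow(secret, e, n)            # y = x^e mod n
blinded, r = blind(puzzle)
solved = pow(blinded, d, n)           # Tumbler decrypts, sees only the blind
recovered = unblind(solved, r)        # client recovers x = 42
```

Since (y * r^e)^d = x * r mod n, stripping r yields the original secret, while the value the Tumbler actually decrypted is statistically unlinkable to the puzzle.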
TumbleBit also supports anonymizing through Tor to ensure that the Tumbler server can operate as a hidden service ([87], [88], [94], [95], [96]).
TumbleBit combines offchain cryptographic computations with standard onchain Bitcoin scripting functionalities to realize smart contracts [97] that are not dependent on SegWit. The most important Bitcoin functionality used here includes hashing conditions, signing conditions, conditional execution, 2-of-2 multisignatures and time-locking [88].
Who does it?
Boston University provided a proof-of-concept and reference implementation alongside the white paper [90]. NTumbleBit [91] is being developed as a C# production implementation of the TumbleBit protocol, which at the time of writing (July 2018) was being deployed by Stratis with its Breeze implementation [92], at alpha/experimental release level on TestNet.
NTumbleBit will be a cross-platform framework, server and client for the TumbleBit payment scheme. TumbleBit is separated into two modes, tumbler mode and payment hub mode. The tumbler mode improves transaction fungibility and offers risk-free unlinkable transactions. Payment hub mode is a way of making offchain payments possible without requiring implementations like SegWit or the Lightning Network. [89]
Strengths
 Anonymity properties. TumbleBit provides unlinkability without the need to trust the Tumbler service, i.e. untrusted intermediary [88].
 Denial of Service (DoS) and Sybil protection. "TumbleBit uses transaction fees to resist DoS and Sybil attacks." [88]
 Balance. "The system should not be exploited to print new money or steal money, even when parties collude." [88]
 As a classic tumbler. TumbleBit can also be used as a classic Bitcoin tumbler [88].
 Bitcoin compatibility. TumbleBit is fully compatible with the Bitcoin protocol [88].
 Scalability. Each TumbleBit user only needs to interact with the Tumbler and the corresponding transaction party; this lack of coordination between all TumbleBit users makes scalability possible for the tumbler mode [88].
 Batch processing. TumbleBit supports onetoone, manytoone, onetomany and manytomany transactions in payment hub mode [88].
 Masternode compatibility. The TumbleBit protocol can be fully implemented as a service in a Masternode. "The Breeze Wallet is now fully capable of providing enhanced privacy to bitcoin transactions through a secure connection. Utilizing Breeze Servers that are preregistered on the network using a secure, trustless registration mechanism that is resistant to manipulation and censorship." ([92], [93], [98])
 Nearly production ready. The NTumbleBit and Breeze implementations have gained TestNet status.
Weaknesses
 Privacy is not 100% proven. Payees have better privacy than the payers, and theoretically collusion involving payees and the Tumbler can exist to discover the identity of the payer [99].
 The Tumbler service is not distributed. More work needs to be done to ensure a persistent transaction state in case a Tumbler server goes down.
 Equal denominations are required. The TumbleBit protocol can only support a common denominator unit value [88].
Opportunities
TumbleBit has benefits for Tari as a trustless Masternode matching/batch processing engine with strong privacy features.
Counterparty
What is it?
Counterparty is NOT a blockchain. Counterparty is a token protocol released in January 2014 that operates on Bitcoin. It has a fully functional Decentralized Exchange (DEX), as well as several hardcoded smart contracts defined that include contracts for difference and binary options ("bets"). To operate, Counterparty utilizes "embedded consensus", which means that a Counterparty transaction is created and embedded into a Bitcoin transaction, using encoding such as 1-of-3 multisignature (multisig), Pay to Script Hash (P2SH) or Pay to Public Key Hash (P2PKH). Counterparty nodes, i.e. nodes that run both the `bitcoind` and `counterparty-server` applications, receive Bitcoin transactions as normal (from `bitcoind`). The `counterparty-server` then scans each one, and decodes and parses any embedded Counterparty transactions it finds. In effect, Counterparty is a ledger within the larger Bitcoin ledger, and the functioning of embedded consensus can be thought of as similar to the fitting of one Russian stacking doll inside another ([73], [74], [75]).
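As a rough illustration of this scanning step, the sketch below filters a batch of raw transaction payloads for Counterparty's `CNTRPRTY` data prefix. The payload layout and message format shown are invented for the example; the real multisig/P2SH/P2PKH embedding is considerably more involved.

```python
# Minimal sketch of embedded-consensus scanning: a secondary-layer node
# inspects every Bitcoin transaction payload it receives and keeps only
# those tagged with the protocol's data prefix. The message format here
# is invented for illustration, not Counterparty's real wire encoding.
PREFIX = b"CNTRPRTY"

def extract_embedded(tx_payloads):
    """Return the embedded messages found in a batch of raw payloads."""
    found = []
    for payload in tx_payloads:
        idx = payload.find(PREFIX)
        if idx != -1:
            found.append(payload[idx + len(PREFIX):])
    return found

batch = [b"\x6a" + PREFIX + b"send:XCP:100", b"ordinary bitcoin tx"]
print(extract_embedded(batch))  # [b'send:XCP:100']
```

Every node running the same parser over the same Bitcoin transaction stream arrives at the same secondary ledger, which is the essence of embedded consensus.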
Embedded consensus also means that the nodes maintain identical ledgers without using a separate peer-to-peer network, solely using the Bitcoin blockchain for all communication (i.e. timestamping, transaction ordering and transaction propagation). Unlike Bitcoin, which has the concept of both a soft fork and a hard fork, a change to the protocol or "consensus" code of Counterparty always has the potential to create a hard fork. In practice, this means that each Counterparty node must run the same version of `counterparty-server` (or at least the same minor version, e.g. the "3" in 2.3.0) so that the protocol code matches up across all nodes ([99], [100]).
Unlike Bitcoin's UTXO model, the Counterparty token protocol utilizes an accounts system where each Bitcoin address is an account, and Counterparty credit and debit transactions for a specific token type affect the account balance of that token for that given address.

The decentralized exchange allows for low-friction exchanging of different tokens between addresses, utilizing the concept of "orders", which are individual transactions made by Counterparty clients, and "order matches", which the Counterparty protocol itself generates as it parses new orders that overlap existing active orders in the system. It is the Counterparty protocol code itself that manages the escrow of tokens when an order is created, the exchange of tokens between two addresses that have overlapping orders, and the release of those assets from escrow post-exchange.
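The account-based credit/debit bookkeeping and order settlement described above can be sketched as follows. `TokenLedger`, its method names and the example addresses are hypothetical; real Counterparty order matching also handles partial fills, expiry and escrow timing.

```python
from collections import defaultdict

class TokenLedger:
    """Toy account-based ledger: each address holds balances per token."""
    def __init__(self):
        self.balances = defaultdict(lambda: defaultdict(int))

    def credit(self, address, token, amount):
        self.balances[address][token] += amount

    def debit(self, address, token, amount):
        if self.balances[address][token] < amount:
            raise ValueError("insufficient balance")
        self.balances[address][token] -= amount

    def exchange(self, addr_a, give_a, amt_a, addr_b, give_b, amt_b):
        """Settle two overlapping orders: swap the tokens atomically."""
        self.debit(addr_a, give_a, amt_a)
        self.debit(addr_b, give_b, amt_b)
        self.credit(addr_a, give_b, amt_b)
        self.credit(addr_b, give_a, amt_a)

ledger = TokenLedger()
ledger.credit("addr1", "XCP", 50)
ledger.credit("addr2", "TOKEN", 10)
ledger.exchange("addr1", "XCP", 20, "addr2", "TOKEN", 10)
print(ledger.balances["addr1"]["TOKEN"])  # 10
```

The point of the sketch is that balances, not unspent outputs, are the unit of state: a matched pair of orders is just two debits and two credits applied together.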
Counterparty uses its own token, XCP, which was created through a "proof of burn" process during January 2014 [101]. In that month, over 2,000 bitcoins were destroyed by various individuals sending them to an unspendable address on the Bitcoin network (`1CounterpartyXXXXXXXXXXXXXXXUWLpVr`), which caused the Counterparty protocol to award the sending address with a corresponding amount of XCP. XCP is used for payment of asset creation fees, as collateral for contracts for difference/binary options, and often as a base token in decentralized exchange transactions (largely due to the complexities of using Bitcoin (BTC) in such trades).
Support for the Ethereum Virtual Machine (EVM) was implemented, but never included in the MainNet version [73]. With the Counterparty EVM implementation, all published Counterparty smart contracts "live" at Bitcoin addresses that start with a `C`. Counterparty is used to broadcast an `execute` transaction to call a specific function or method in the smart contract code. Once an execution transaction is confirmed by a Bitcoin miner, the Counterparty federated nodes will receive the request and execute that method. The contract state is modified as the smart contract code executes and is stored in the Counterparty database [99].
The general consensus is that a federated network is a distributed network of centralized networks. The Ripple blockchain implements a Federated Byzantine Agreement (FBA) consensus mechanism, and federated sidechains implement a security protocol using a trusted federation of mutually distrusting functionaries/notaries. Counterparty, by contrast, utilizes a "full stack" packaging system for its components and all dependencies, called the "federated node" system. Here "federated" carries its general meaning, i.e. "set up as a single centralized unit within which each state or division keeps some internal autonomy" ([97], [98], [71]).
Who uses it?
The most notable projects built on top of Counterparty are Age of Chains, Age of Rust, Augmentors, Authparty, Bitcorns, Blockfreight™, Blocksafe, BTCpaymarket.com, CoinDaddy, COVAL, FoldingCoin, FootballCoin, GetGems, IndieBoard, LTBCoin (Letstalkbitcoin.com), Mafia Wars, NVO, Proof of Visit, Rarepepe.party, SaruTobi Island, Spells of Genesis, Takara, The Scarab Experiment, Token.FM, Tokenly, TopCoin and XCP DEX [75]. In the past, projects such as Storj and SWARM also built on Counterparty.
COVAL is being developed with the primary purpose of moving value using "off-chain" methods. It uses its own set of node runners to manage various "off-chain" distributed ledgers and ledger-assigned wallets to implement an extended transaction value system, whereby tokens as well as containers of tokens can be securely transacted. Scaling within the COVAL ecosystem is thus achievable, because it is not solely reliant on the Counterparty federated nodes to execute smart contracts [76].
Strengths
- Counterparty provides a simple way to add "Layer 2" functionality, i.e. hardcoded smart contracts, to an already existing blockchain implementation that supports basic data embedding.
- Counterparty's embedded consensus model utilizes "permissionless innovation", meaning that even the Bitcoin core developers could not stop the use of the protocol layer without seriously crippling the network.
Weaknesses
- Embedded consensus requires lockstep upgrades from network nodes to avoid forks.
- Embedded consensus imposes limitations on the ability of the secondary layer to interact with the primary layer's token. Counterparty was not able to manipulate BTC balances or otherwise directly utilize BTC.
- With embedded consensus, nodes maintain identical ledgers without using a peer-to-peer network. One could claim that this hampers the flexibility of the protocol. It also limits the speed of the protocol to the speed of the underlying blockchain.
Opportunities
- Nodes can implement improved consensus models such as Federated Byzantine Agreement [98].
- Refer to Scriptless Scripts.
2-way Pegged Secondary Blockchains
What are they?
A 2-way peg (2WP) allows the "transfer" of BTC from the main Bitcoin blockchain to a secondary blockchain and vice versa at a fixed rate by making use of an appropriate security protocol. The "transfer" actually involves BTC being locked on the main Bitcoin blockchain and unlocked/made available on the secondary blockchain. The 2WP promise is concluded when an equivalent number of tokens on the secondary blockchain are locked (in the secondary blockchain) so that the original bitcoins can be unlocked ([65], [71]).
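The lock/unlock bookkeeping of a 2WP can be sketched as a simple invariant-preserving state machine: coins are never created or destroyed, only locked on one chain while being made available on the other. `TwoWayPeg` and its method names are invented for illustration and ignore the security protocol entirely.

```python
class TwoWayPeg:
    """Toy 2WP bookkeeping: total supply is conserved across both chains."""
    def __init__(self, supply):
        self.main_free, self.main_locked = supply, 0
        self.side_free = 0

    def peg_in(self, amount):
        # Lock BTC on the main chain; release tokens on the secondary chain.
        assert amount <= self.main_free
        self.main_free -= amount
        self.main_locked += amount
        self.side_free += amount

    def peg_out(self, amount):
        # Lock secondary-chain tokens; unlock the original BTC.
        assert amount <= self.side_free
        self.side_free -= amount
        self.main_locked -= amount
        self.main_free += amount

peg = TwoWayPeg(100)
peg.peg_in(40)
peg.peg_out(15)
print(peg.main_free, peg.main_locked, peg.side_free)  # 75 25 25
```

The security protocols listed below differ only in *who* is trusted to authorize the `peg_out` step, not in this basic accounting.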
- Sidechain: When the security protocol is implemented using Simplified Payment Verification (SPV) proofs (blockchain transaction verification without downloading the entire blockchain), the secondary blockchain is referred to as a Sidechain [65].
- Drivechain: When the security protocol is implemented by giving custody of the BTC to miners (where miners vote on when to unlock BTC and where to send them), the secondary blockchain is referred to as a Drivechain. In this scheme, the miners will sign the block header using a Dynamic Membership Multi-party Signature (DMMS) ([65], [71]).
- Federated Peg/Sidechain: When the security protocol is implemented by having a trusted federation of mutually distrusting functionaries/notaries, the secondary blockchain is referred to as a Federated Peg/Sidechain. In this scheme, the DMMS is replaced with a traditional multisignature scheme ([65], [71]).
- Hybrid Sidechain-Drivechain-Federated Peg: When the security protocol is implemented by SPV proofs going to the secondary blockchain and a dynamic mixture of miner DMMS and functionaries/notaries multisignatures going back to the main Bitcoin blockchain, the secondary blockchain is referred to as a Hybrid Sidechain-Drivechain-Federated Peg ([65], [71], [72]).
The following figure shows an example of a 2WP Bitcoin secondary blockchain using a Hybrid Sidechain-Drivechain-Federated Peg security protocol [65]:
BTC on the main Bitcoin blockchain are locked by using a P2SH transaction, where BTC can be sent to a script hash instead of a public key hash. To unlock the BTC in the P2SH transaction, the recipient must provide a script matching the script hash and data, which makes the script evaluate to true [66].
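The P2SH commit-and-reveal pattern can be sketched as below, using plain SHA-256 as a stand-in for Bitcoin's actual HASH160 (SHA-256 followed by RIPEMD-160) and a boolean flag in place of real script evaluation; `can_unlock` is an invented helper.

```python
import hashlib

def script_hash(script: bytes) -> bytes:
    # Stand-in for Bitcoin's HASH160 (SHA-256 then RIPEMD-160);
    # plain SHA-256 keeps the sketch dependency-free.
    return hashlib.sha256(script).digest()

def can_unlock(candidate_script: bytes, committed_hash: bytes,
               evaluates_true: bool) -> bool:
    """The spender must present a script matching the committed hash,
    plus data that makes that script evaluate to true."""
    return script_hash(candidate_script) == committed_hash and evaluates_true

# Funds are sent to the hash of a redeem script, not to a public key:
redeem = b"OP_2 <pk1> <pk2> <pk3> OP_3 OP_CHECKMULTISIG"
committed = script_hash(redeem)
print(can_unlock(redeem, committed, True))  # True
```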
Who does them?
- RSK (formerly Rootstock) is a 2WP Bitcoin secondary blockchain using a hybrid sidechain-drivechain security protocol. RSK is scalable up to 100 transactions per second (Tx/s) and provides a second-layer scaling solution for Bitcoin, as it can relieve on-chain Bitcoin transactions ([100], [101], [102]).
- Hivemind (formerly Truthcoin) is implementing a Peer-to-Peer Oracle Protocol that absorbs accurate data into a blockchain so that Bitcoin users can speculate in Prediction Markets [67].
- Blockstream is implementing a Federated Sidechain called Liquid, with the functionaries/notaries being made up of participating exchanges and Bitcoin businesses [72].
Strengths
- Permissionless innovation: Anyone can create a new blockchain project that uses the underlying strengths of the main Bitcoin blockchain, using real BTC as the currency [63].
- New features: Sidechains/Drivechains can be used to test or implement new features, such as Schnorr signatures and zero-knowledge proofs, without risk to the main Bitcoin blockchain and without having to change its protocol ([63], [68]).
- Chains-as-a-Service (CaaS): It is possible to create a CaaS with data storage 2WP secondary blockchains [68].
- Smart Contracts: 2WP secondary blockchains make it easier to implement smart contracts [68].
- Scalability: 2WP secondary blockchains can support larger block sizes and more transactions per second, thus scaling the Bitcoin main blockchain [68].
Weaknesses
- Security: Transferring BTC back into the main Bitcoin blockchain is not secure enough and can be manipulated, because Bitcoin does not support SPV from 2WP secondary blockchains [64].
- 51% attacks: 2WP secondary blockchains are hugely dependent on merged mining. Mining power centralization and 51% attacks are thus a real threat, as demonstrated for Namecoin and Huntercoin (refer to Merged Mining Introduction).
- The DMMS provided by mining is not very secure for small systems, while the trust of the federation/notaries is riskier for large systems [71].
Opportunities
2WP secondary blockchains may present interesting opportunities to scale multiple payments that would be associated with multiple non-fungible assets living on a secondary layer. However, take care with privacy and security of funds, as well as with transferring funds into and out of the 2WP secondary blockchains.
Lumino
What is it?
Lumino Transaction Compression Protocol (LTCP) is a technique for transaction compression that allows the processing of a higher volume of transactions, but the storing of much less information. The Lumino Network is a Lightning-like extension of the RSK platform that uses LTCP. Delta (difference) compression of selected fields of transactions from the same owner is done by using aggregate signing of previous transactions, so that previous signatures can be disposed of [103].
Each transaction contains a set of persistent fields called the Persistent Transaction Information (PTI) and a compound record of user transaction data called the SigRec. A Lumino block stores two Merkle trees: one containing all PTIs, and the other containing all transaction IDs (the hash of the signed SigRec). This second Merkle tree is conceptually similar to the Segwit witness tree, thus forming the witness part. Docking is the process whereby SigRec and signature data can be pruned from the blockchain if valid linked PTI information exists [103].
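The delta-compression idea behind LTCP can be sketched as follows. The field names here are invented for the example; real LTCP operates on specific transaction fields and combines the deltas with aggregate signing so that earlier signatures can be discarded.

```python
def delta_compress(prev_tx: dict, tx: dict) -> dict:
    """Store only the fields that differ from the owner's previous transaction."""
    return {k: v for k, v in tx.items() if prev_tx.get(k) != v}

def delta_expand(prev_tx: dict, delta: dict) -> dict:
    """Reconstruct the full transaction from the previous one plus the delta."""
    out = dict(prev_tx)
    out.update(delta)
    return out

prev = {"owner": "addr1", "gas_price": 10, "nonce": 7, "amount": 100}
nxt  = {"owner": "addr1", "gas_price": 10, "nonce": 8, "amount": 25}
d = delta_compress(prev, nxt)
print(d)  # {'nonce': 8, 'amount': 25}
assert delta_expand(prev, d) == nxt
```

For an owner whose transactions repeat most fields (same address, same gas price), only the few changing fields need to be stored, which is where the compression comes from.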
Who does it?
RSK, which launched on MainNet in January 2018. The Lumino Network has yet to be launched on TestNet ([61], [62]).
Strengths
The Lumino Network promises high efficiency in pruning the RSK blockchain.
Weaknesses
- The Lumino Network has not yet been released.
- The white paper is not conclusive about how the Lumino Network will handle payment channels [103].
Opportunities
LTCP pruning may be beneficial to Tari.
Scriptless Scripts
What is it?
The term Scriptless Scripts was coined by mathematician Andrew Poelstra. It entails offering scripting functionality without actual scripts on the blockchain to implement smart contracts. At the time of writing (July 2018) it can only work on the Mimblewimble blockchain, and makes use of a specific Schnorr signature scheme [81] that allows for signature aggregation: mathematically combining several signatures into a single signature, without having to prove Knowledge of Secret Keys (KOSK). This is known as the plain public-key model, where the only requirement is that each potential signer has a public key. The KOSK scheme requires that users prove knowledge (or possession) of the secret key during public key registration with a certification authority, and is one way to generically prevent rogue-key attacks ([78], [79]).
Signature aggregation properties sought here are ([78], [79]):
- must be provably secure in the plain public-key model;
- must satisfy the normal Schnorr equation, whereby the resulting signature can be written as a function of a combination of the public keys;
- must allow for Interactive Aggregate Signatures (IAS), where the signers are required to cooperate;
- must allow for Non-interactive Aggregate Signatures (NAS), where the aggregation can be done by anyone;
- must allow each signer to sign the same message;
- must allow each signer to sign their own message.

This is different from a normal multisignature scheme, where one message is signed by all signers.
Let's say Alice and Bob each need to provide half a Schnorr signature for a transaction, whereby Alice promises to reveal a secret to Bob in exchange for one crypto coin. Alice can calculate the difference between her half Schnorr signature and the Schnorr signature of the secret (the adaptor signature) and hand it over to Bob. Bob can verify the correctness of the adaptor signature without knowing the original signatures. Bob can then provide his half Schnorr signature to Alice so she can broadcast the full Schnorr signature to claim the crypto coin. By broadcasting the full Schnorr signature, Bob gains access to Alice's half Schnorr signature, and because he already knows the adaptor signature, he can calculate the Schnorr signature of the secret, thereby claiming his prize. This is also known as Zero-Knowledge Contingent Payments ([77], [80]).
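The adaptor-signature arithmetic can be walked through numerically with a toy Schnorr group. The parameters below (p = 23, q = 11, g = 2) and all key values are purely illustrative; a production scheme uses an elliptic-curve group of roughly 256-bit order, and the protocol messages are simplified here.

```python
import hashlib

# Toy Schnorr group: p = 23 is a safe prime and g = 2 generates the
# subgroup of prime order q = 11. NOT secure; for illustration only.
p, q, g = 23, 11, 2

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x, k, t = 3, 5, 7                      # Alice's key, nonce, and secret
X, R, T = pow(g, x, p), pow(g, k, p), pow(g, t, p)

e = H(R, "pay one coin")               # challenge
s = (k + e * x) % q                    # full Schnorr signature (R, s)
s_adapt = (s - t) % q                  # adaptor signature: s minus the secret

# Bob verifies the adaptor signature without learning s or t:
# g^s_adapt must equal R * X^e * T^(-1)  (mod p)
lhs = pow(g, s_adapt, p)
rhs = (R * pow(X, e, p) * pow(T, p - 2, p)) % p
print(lhs == rhs)                      # True

# Once Alice broadcasts the full s, Bob recovers the secret by subtraction:
print((s - s_adapt) % q == t)          # True
```

This captures the core trick: the adaptor signature commits to the secret `t`, so publishing the full signature automatically reveals it.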
Who does it?
Mimblewimble is being cited by Andrew Poelstra as being the ultimate Scriptless Script [80].
Strengths
- Data savings. Signature aggregation provides data compression on the blockchain.
- Privacy. Nothing about the Scriptless Script smart contract, other than the settlement transaction, is ever recorded on the blockchain. No one will ever know that an underlying smart contract was executed.
- Multiplicity. Multiple digital assets can be transferred between two parties in a single settlement transaction.
- Implicit scalability. Scalability on the blockchain is achieved by virtue of compressing multiple transactions into a single settlement transaction. Transactions are only broadcast to the blockchain once all preconditions are met.
Weaknesses
Recent work by Maxwell et al. ([78], [79]) showed that a naive implementation of Schnorr multisignatures that satisfies key aggregation is not secure, and that the Bellare and Neven (BN) Schnorr signature scheme loses the key aggregation property in order to gain security in the plain public-key model. They proposed a new Schnorr-based multisignature scheme with key aggregation called MuSig, which is provably secure in the plain public-key model. It has the same key and signature size as standard Schnorr signatures. The joint signature can be verified in exactly the same way as a standard Schnorr signature, with respect to a single "aggregated" public key that can be computed from the individual public keys of the signers. Note that the case of interactive signature aggregation where each signer signs their own message must still be proven by a complete security analysis.
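A compact sketch of MuSig-style key aggregation and signing is shown below, reusing a toy group of order 11. All values are illustrative, and the real scheme includes nonce-commitment rounds that are omitted here; the point is only that the joint signature verifies against a single aggregated key computed from the public keys alone.

```python
import hashlib

# Toy group (NOT secure): p = 23 safe prime, g = 2 of order q = 11.
p, q, g = 23, 11, 2

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def musig_demo(secrets, nonces, msg):
    """Aggregate keys MuSig-style and verify the combined signature."""
    pubs = [pow(g, x, p) for x in secrets]
    L = tuple(sorted(pubs))                    # commitment to the key set
    coeffs = [H(L, X) for X in pubs]           # a_i = H(L, X_i)
    X_agg = 1                                  # X = prod X_i^{a_i}
    for X, a in zip(pubs, coeffs):
        X_agg = (X_agg * pow(X, a, p)) % p
    R = 1                                      # combined nonce R = prod g^{k_i}
    for k in nonces:
        R = (R * pow(g, k, p)) % p
    e = H(X_agg, R, msg)
    # Each signer contributes s_i = k_i + e * a_i * x_i; sum the partials:
    s = sum((k + e * a * x) % q
            for k, a, x in zip(nonces, coeffs, secrets)) % q
    # Verification is a standard Schnorr check against the aggregated key:
    return pow(g, s, p) == (R * pow(X_agg, e, p)) % p

print(musig_demo([3, 5], [2, 6], "settle"))  # True
```

Weighting each key by `a_i = H(L, X_i)`, where `L` commits to the whole key set, is what defeats rogue-key attacks in the plain public-key model.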
Opportunities
Tari plans to implement the Mimblewimble blockchain and should implement Scriptless Scripts together with the MuSig Schnorr signature scheme. However, this in itself will not provide the Layer 2 scaling performance that will be required. Big Neon, the initial business application to be built on top of the Tari blockchain, is required to "facilitate 500 tickets in 4 minutes", i.e. approximately two spectators allowed access every second, with negligible latency.
The Mimblewimble Scriptless Scripts could be combined with a federated node (or specialized masternode), similar to that being developed by Counterparty. The secrets that are revealed by virtue of the MuSig Schnorr signatures can instantiate normal smart contracts inside the federated node, with the final consolidated state update being written back to the blockchain after the event.
Directed Acyclic Graph (DAG) Derivative Protocols
What is it?
In mathematics and computer science, a Directed Acyclic Graph (DAG) is a finite directed graph with no directed cycles. A directed graph is acyclic if and only if it has a topological ordering, i.e. for every directed edge uv from vertex u to vertex v, u comes before v in the ordering (age) [85].
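The acyclicity and topological-ordering property can be checked directly with Python's standard library; the block names and parent links below are arbitrary examples.

```python
from graphlib import TopologicalSorter

# Each block maps to the set of parents it references; an edge u -> v
# means u must come before v in the ordering. TopologicalSorter raises
# CycleError if the graph contains a directed cycle (i.e. is not a DAG).
dag = {
    "genesis": set(),
    "A": {"genesis"},
    "B": {"genesis"},          # A and B were mined in parallel
    "C": {"A", "B"},           # C merges both branches back into the ledger
}
order = list(TopologicalSorter(dag).static_order())
print(order)  # genesis first, C last; A and B may appear in either order
```

A block with two parents, like `C` above, is exactly how DAG protocols fold traditional off-chain ("orphaned") blocks back into the ledger.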
DAGs in blockchain were first proposed as the GHOST protocol ([87], [88]), a version of which is implemented in Ethereum as the Ethash PoW algorithm (based on Dagger-Hashimoto ([102], [103])). Then Braiding ([83], [84]), Jute [86], SPECTRE [89] and PHANTOM [95] were presented. The principle of DAG in blockchain is to present a way to include traditional off-chain blocks into the ledger, which is governed by mathematical rules. A parent that is simultaneously an ancestor of another parent is disallowed:
The main problems to be solved by the DAG derivative protocols are:
- inclusion of orphaned blocks (decrease the negative effect of slow propagation); and
- mitigation against selfish mining attacks.
The underlying concept is still in the research and exploration phase [82].
In most DAG derivative protocols, blocks containing conflicting transactions, i.e. conflicting blocks, are not orphaned. A subsequent block is built on top of both of the conflicting blocks, but the conflicting transactions themselves are thrown out while processing the chain. SPECTRE, for one, provides a scheme whereby blocks vote to decide which transactions are robustly accepted, robustly rejected or stay in an indefinite “pending” state in case of conflicts. Both conflicting blocks become part of the shared history, and both conflicting blocks earn their respective miners a block reward ([82], [93], [94]).
Note: Braiding requires that parents and siblings may not contain conflicting transactions.
Inclusive (DAG derivative) protocols that integrate the contents of traditional offchain blocks into the ledger result in incentives for behavior changes by the nodes, which leads to an increased throughput, and a better payoff for weak miners [88].
DAG derivative protocols are not Layer 2 Scaling solutions, but they offer significant scaling of the primary blockchain.
Who does it?

- The School of Engineering and Computer Science, The Hebrew University of Jerusalem ([87], [88], [89], [93], [94])
  - GHOST, SPECTRE, PHANTOM
- DAGlabs [96] (Note: This is the commercial development chapter.)
  - SPECTRE, PHANTOM
    - SPECTRE provides high throughput and fast confirmation times. Its DAG structure represents an abstract vote regarding the order between each pair of blocks, but this pairwise ordering may not be extendable to a full linear ordering due to possible Condorcet cycles.
    - PHANTOM provides a linear ordering over the blocks of the DAG and can support consensus regarding any general computation (smart contracts), which SPECTRE cannot. In order for a computation or contract to be processed correctly and consistently, the full order of events in the ledger is required, particularly the order of inputs to the contract. However, PHANTOM's confirmation times are much slower than those in SPECTRE.
- Ethereum, as the Ethash PoW algorithm that has been adapted from GHOST
- Dr. Bob McElrath ([83], [84])
  - Braiding
- David Vorick [86]
  - Jute
- Crypto currencies:
Strengths
- Layer 1 scaling: increased transaction throughput on the main blockchain.
- Fairness: better payoff for weak miners.
- Mining centralization mitigation: weaker miners also get profits.
- Transaction confirmation times: confirmation times of several seconds (SPECTRE).
- Smart contracts: support for smart contracts (PHANTOM).
Weaknesses
- Still not 100% proven; development is continuing.
- The DAG derivative protocols differ on important aspects such as miner payment schemes, security models, support for smart contracts, and confirmation times. Thus, all DAG derivative protocols are not created equal, so beware!
Opportunities
Opportunities exist for Tari in applying the basic DAG principles to make a 51% attack harder by virtue of fairness and miner decentralization resistance. Choosing the correct DAG derivative protocol can also significantly improve Layer 1 scaling.
Observations
- Further investigation into the more promising Layer 2 scaling solutions and technologies is required to verify alignment, applicability and usability.
- Although not all technologies covered here are Layer 2 scaling solutions, their strengths should be considered as building blocks for the Tari protocol.
References
[1] "OSI Model" [online]. Available: https://en.wikipedia.org/wiki/OSI_model. Date accessed: 2018‑06‑07.
[2] "Decentralized Digital Identities and Block Chain  The Future as We See It" [online]. Available: https://www.microsoft.com/enus/microsoft365/blog/2018/02/12/decentralizeddigitalidentitiesandblockchainthefutureasweseeit/. Date accessed: 2018‑06‑07.
[3] "Trinity Protocol: The Scaling Solution of the Future?" [Online.] Available: https://www.investinblockchain.com/trinityprotocol. Date accessed: 2018‑06‑08.
[4] J. Poon and V. Buterin, "Plasma: Scalable Autonomous Smart Contracts" [online]. Available: http://plasma.io/plasma.pdf. Date accessed: 2018‑06‑14.
[5] "NEX: A High Performance Decentralized Trade and Payment Platform" [online]. Available: https://nash.io/pdfs/whitepaper_v2.pdf. Date accessed: 2018‑06‑11.
[6] J. Poon and OmiseGO Team, "OmiseGO: Decentralized Exchange and Payments Platform" [online]. Available: https://cdn.omise.co/omg/whitepaper.pdf. Date accessed: 2018‑06‑14.
[7] "The Rise of Masternodes Might Soon be Followed by the Creation of Servicenodes" [online]. Available: https://cointelegraph.com/news/theriseofmasternodesmightsoonbefollowedbythecreationofservicenodes. Date accessed: 2018‑06‑13.
[8] "What are Masternodes? Beginner's Guide" [online]. Available: https://blockonomi.com/masternodeguide/. Date accessed: 2018‑06‑14.
[9] "What the Heck is a DASH Masternode and How Do I get One" [online]. Available: https://medium.com/dashfornewbies/whattheheckisadashmasternodeandhowdoigetone56e24121417e. Date accessed: 2018‑06‑14.
[10] "Payment Channels" [online]. Available: https://en.bitcoin.it/wiki/Payment_channels. Date accessed: 2018‑06‑14.
[11] "Lightning Network" [online]. Available: https://en.wikipedia.org/wiki/Lightning_Network. Date accessed: 2018‑06‑14.
[12] "Bitcoin Isn't the Only Crypto Adding Lightning Tech Now" [online]. Available: https://www.coindesk.com/bitcoinisntcryptoaddinglightningtechnow/. Date accessed: 2018‑06‑14.
[13] "What is Bitcoin Lightning Network? And How Does it Work?" [online]. Available: https://cryptoverze.com/whatisbitcoinlightningnetwork/. Date accessed: 2018‑06‑14.
[14] "OmiseGO" [online]. Available: https://omisego.network/. Date accessed: 2018‑06‑14.
[15] "Everything You Need to Know About Loom Network, All In One Place (Updated Regularly)" [online]. Available: https://medium.com/loomnetwork/everythingyouneedtoknowaboutloomnetworkallinoneplaceupdatedregularly64742bd839fe. Date accessed: 2018‑06‑14.
[16] "Making Sense of Ethereum's Layer 2 Scaling Solutions: State Channels, Plasma, and Truebit" [online]. Available: https://medium.com/l4media/makingsenseofethereumslayer2scalingsolutionsstatechannelsplasmaandtruebit22cb40dcc2f4. Date accessed: 2018‑06‑14.
[17] "Trinity: Universal Off-chain Scaling Solution" [online]. Available: https://trinity.tech. Date accessed: 2018‑06‑14.
[18] "Trinity White Paper: Universal Off-chain Scaling Solution" [online]. Available: https://trinity.tech/#/writepaper. Date accessed: 2018‑06‑14.
[19] J. Coleman, L. Horne, and L. Xuanji, "Counterfactual: Generalized State Channels" [online]. Available at: https://counterfactual.com/statechannels and https://l4.ventures/papers/statechannels.pdf. Date accessed: 2018‑06‑15.
[20] "The Raiden Network" [online]. Available: https://raiden.network/. Date accessed: 2018‑06‑15.
[21] "What is the Raiden Network?" [online]. Available: https://raiden.network/101.html. Date accessed: 2018‑06‑15.
[22] "What are Masternodes? An Introduction and Guide" [online]. Available: https://coincentral.com/whataremasternodesanintroductionandguide/. Date accessed: 2018‑06‑15.
[23] "State Channels in Disguise?" [online]. Available: https://funfair.io/statechannelsindisguise. Date accessed: 2018‑06‑15.
[24] "World Payments Report 2017, © 2017 Capgemini and BNP Paribas" [online]. Available: https://www.worldpaymentsreport.com. Date accessed: 2018‑06‑20.
[25] "VISA" [online]. Available: https://usa.visa.com/visaeverywhere/innovation/contactlesspaymentsaroundtheglobe.html. Date accessed: 2018‑06‑20.
[26] "VisaNet Fact Sheet 2017 Q4" [online]. Available: https://usa.visa.com/dam/VCOM/download/corporate/media/visanettechnology/visanetfactsheet.pdf. Date accessed: 2018‑06‑20.
[27] "With 100% segwit transactions, what would be the max number of transaction confirmation possible on a block?" [online]. Available: https://bitcoin.stackexchange.com/questions/59408/with100segwittransactionswhatwouldbethemaxnumberoftransactionconfi. Date accessed: 2018‑06‑21.
[28] "A Gentle Introduction to Ethereum" [online]. Available: https://bitsonblocks.net/2016/10/02/agentleintroductiontoethereum/. Date accessed: 2018‑06‑21.
[29] "What is the size (bytes) of a simple Ethereum transaction versus a Bitcoin transaction?" [online]. Available: https://ethereum.stackexchange.com/questions/30175/whatisthesizebytesofasimpleethereumtransactionversusabitcointrans?rq=1. Date accessed: 2018‑06‑21.
[30] "What is a Masternode?" [online]. Available: http://dashmasternode.org/whatisamasternode. Date accessed: 2018‑06‑14.
[31] "Counterfactual: Generalized State Channels on Ethereum" [online]. Available: https://medium.com/statechannels/counterfactualgeneralizedstatechannelsonethereumd38a36d25fc6. Date accessed: 2018‑06‑26.
[32] J. Longley and O. Hopton, "FunFair Technology Roadmap and Discussion" [online]. Available: https://funfair.io/wpcontent/uploads/FunFairTechnicalWhitePaper.pdf. Date accessed: 2018‑06‑27.
[33] "0x Protocol Website" [online]. Available: https://0xproject.com/. Date accessed: 2018‑06‑28.
[34] "0x: An open protocol for decentralized exchange on the Ethereum blockchain" [online]. Available: https://0xproject.com/pdfs/0x_white_paper.pdf. Date accessed: 2018‑06‑28.
[35] "NEX/Nash website" [online]. Available: https://nash.io. Date accessed: 2018‑06‑28.
[36] "Frontrunning, Griefing and the Perils of Virtual Settlement (Part 1)" [online]. Available: https://blog.0xproject.com/frontrunninggriefingandtheperilsofvirtualsettlementpart18554ab283e97. Date accessed: 2018‑06‑29.
[37] "Frontrunning, Griefing and the Perils of Virtual Settlement (Part 2)" [online]. Available: https://blog.0xproject.com/frontrunninggriefingandtheperilsofvirtualsettlementpart2921b00109e21. Date accessed: 2018‑06‑29.
[38] "MapReduce: Simplified Data Processing on Large Clusters" [online]. Available: https://storage.googleapis.com/pubtoolspublicpublicationdata/pdf/16cb30b4b92fd4989b8619a61752a2387c6dd474.pdf. Date accessed: 2018‑07‑02.
[39] "The Golem WhitePaper (Future Integrations)" [online]. Available: https://golem.network/crowdfunding/Golemwhitepaper.pdf. Date accessed: 2018‑06‑22.
[40] J. Teutsch, and C. Reitwiessner, "A Scalable Verification Solution for Block Chains" [online]. Available: http://people.cs.uchicago.edu/~teutsch/papers/truebit.pdf. Date accessed: 2018‑06‑22.
[41] "Livepeer's Path to Decentralization" [online]. Available: https://medium.com/livepeerblog/livepeerspathtodecentralizationa9267fd16532. Date accessed: 2018‑06‑22.
[42] "Golem Website" [online]. Available: https://golem.network. Date accessed: 2018‑06‑22.
[43] "TruBit Website" [online]. Available: https://truebit.io. Date accessed: 2018‑06‑22.
[44] "TumbleBit: An Untrusted Bitcoincompatible Anonymous Payment Hub" [online]. Available: http://cspeople.bu.edu/heilman/tumblebit. Date accessed: 2018‑07‑12.
[45] E. Heilman, L. AlShenibr, F. Baldimtsi, A. Scafuro and S. Goldberg, "TumbleBit: An Untrusted Bitcoincompatible Anonymous Payment Hub" [online]. Available: https://eprint.iacr.org/2016/575.pdf. Date accessed: 2018‑07‑08.
[46] "Anonymous Transactions Coming to Stratis" [online]. Available: https://medium.com/@Stratisplatform/anonymoustransactionscomingtostratisfced3f5abc2e. Date accessed: 2018‑07‑08.
[47] "TumbleBit Proof of Concept GitHub Repository" [online]. Available: https://github.com/BUSEC/TumbleBit. Date accessed: 2018‑07‑08.
[48] "NTumbleBit GitHub Repository" [online]. Available: https://github.com/nTumbleBit/nTumbleBit. Date accessed: 2018‑07‑12.
[49] "Breeze Tumblebit Server Experimental Release" [online]. Available: https://stratisplatform.com/2017/07/17/breezetumblebitserverexperimentalrelease. Date accessed: 2018‑07‑12.
[50] "Breeze Wallet with Breeze Privacy Protocol (Dev. Update)" [online]. Available: https://stratisplatform.com/2017/09/20/breezewalletwithbreezeprivacyprotocoldevupdate. Date accessed: 2018‑07‑12.
[51] E. Heilman, F. Baldimtsi and S. Goldberg, "Blindly Signed Contracts: Anonymous On-chain and Off-chain Bitcoin Transactions" [online]. Available: https://eprint.iacr.org/2016/056.pdf. Date accessed: 2018‑07‑12.
[52] E. Heilman and L. AlShenibr, "TumbleBit: An Untrusted Bitcoincompatible Anonymous Payment Hub  08 October 2016", in Conference: Scaling Bitcoin 2016 Milan. Available: https://www.youtube.com/watch?v=8BLWUUPfh2Q&feature=youtu.be&t=1h3m10s. Date accessed: 2018‑07‑13.
[53] "Better Bitcoin Privacy, Scalability: Developers Making TumbleBit a Reality" [online]. Available: https://bitcoinmagazine.com/articles/betterbitcoinprivacyscalabilitydevelopersaremakingtumblebitreality. Date accessed: 2018‑07‑13.
[54] Bitcoinwiki: "Contract" [online]. Available: https://en.bitcoin.it/wiki/Contract. Date accessed: 2018‑07‑13.
[55] "Bitcoin Privacy is a Breeze: TumbleBit Successfully Integrated Into Breeze" [online]. Available: https://stratisplatform.com/2017/08/10/bitcoinprivacytumblebitintegratedintobreeze. Date accessed: 2018‑07‑13.
[56] "TumbleBit Wallet Reaches One Step Forward" [online]. Available: https://www.bitcoinmarketinsider.com/tumblebitwalletreachesonestepforward. Date accessed: 2018‑07‑13.
[57] "A Survey of Second Layer Solutions for Blockchain Scaling Part 1" [online]. Available: https://www.ethnews.com/asurveyofsecondlayersolutionsforblockchainscalingpart1. Date accessed: 2018‑07‑16.
[58] "Second-layer Scaling" [online]. Available: https://lunyr.com/article/SecondLayer_Scaling. Date accessed: 2018‑07‑16.
[59] "RSK Website" [online]. Available: https://www.rsk.co. Date accessed: 2018‑07‑16.
[60] S. D. Lerner, "Lumino Transaction Compression Protocol (LTCP)" [online]. Available: https://uploads.strikinglycdn.com/files/ec5278f8218c407aaf3cab71a910246d/LuminoTransactionCompressionProtocolLTCP.pdf. Date accessed: 2018‑07‑16.
[61] "Bitcoinbased Ethereum Rival RSK Set to Launch Next Month" [online]. Available: https://cryptonewsmonitor.com/2017/11/11/bitcoinbasedethereumrivalrsksettolaunchnextmonth. Date accessed: 2018‑07‑16.
[62] "RSK Blog Website" [online]. Available: https://media.rsk.co/. Date accessed: 2018‑07‑16.
[63] "Drivechain: Enabling Bitcoin Sidechain" [online]. Available: http://www.drivechain.info. Date accessed: 2018‑07‑17.
[64] "Drivechain  The Simple Two Way Peg" [online]. Available: http://www.truthcoin.info/blog/drivechain. Date accessed: 2018‑07‑17.
[65] "Sidechains, Drivechains, and RSK 2Way Peg Design" [online]. Available: https://www.rsk.co/blog/sidechainsdrivechainsandrsk2waypegdesign or https://uploads.strikinglycdn.com/files/27311e59083249b5ab0e2b0a73899561/Drivechains_Sidechains_and_Hybrid_2way_peg_Designs_R9.pdf. Date accessed: 2018‑07‑18.
[66] "Pay to Script Hash" [online]. Available: https://en.bitcoin.it/wiki/Pay_to_script_hash. Date accessed: 2018‑07‑18.
[67] Hivemind Website [online]. Available: http://bitcoinhivemind.com. Date accessed: 2018‑07‑18.
[68] "Drivechains: What do they Enable? Cloud 3.0 Services Smart Contracts and Scalability" [online]. Available: http://drivechains.org/whataredrivechains/whatdoesitenable. Date accessed: 2018‑07‑19.
[69] "Bloq’s Paul Sztorc on the 4 Main Benefits of Sidechains" [online]. Available: https://bitcoinmagazine.com/articles/bloqspaulsztorconthemainbenefitsofsidechains1463417446. Date accessed: 2018‑07‑19.
[70] Blockstream Website [online]. Available: https://blockstream.com/technology. Date accessed: 2018‑07‑19.
[71] A. Back, M. Corallo, L. Dashjr, M. Friedenbach, G. Maxwell, A. Miller, A. Poelstra, J. Timón and P. Wuille, 2014-10-22, "Enabling Blockchain Innovations with Pegged Sidechains" [online]. Available: https://blockstream.com/sidechains.pdf. Date accessed: 2018‑07‑19.
[72] J. Dilley, A. Poelstra, J. Wilkins, M. Piekarska, B. Gorlick and M. Friedenbachet, "Strong Federations: An Interoperable Blockchain Solution to Centralized Third Party Risks" [online]. Available: https://blockstream.com/strongfederations.pdf. Date accessed: 2018‑07‑19.
[73] "CounterpartyXCP/Documentation/Smart Contracts/EVM FAQ" [online]. Available: https://github.com/CounterpartyXCP/Documentation/blob/master/Basics/FAQSmartContracts.md. Date accessed: 2018‑07‑23.
[74] "Counterparty Development 101" [online]. Available: https://medium.com/@droplister/counterpartydevelopment1012f4d9b0c8df3. Date accessed: 2018‑07‑23.
[75] "Counterparty Website" [online]. Available: https://counterparty.io. Date accessed: 2018‑07‑24.
[76] "COVAL Website" [online]. Available: https://coval.readme.io/docs. Date accessed: 2018‑07‑24.
[77] "Scriptless Scripts: How Bitcoin Can Support Smart Contracts Without Smart Contracts" [online]. Available: https://bitcoinmagazine.com/articles/scriptlessscriptshowbitcoincansupportsmartcontractswithoutsmartcontracts. Date accessed: 2018‑07‑24.
[78] "Key Aggregation for Schnorr Signatures" [online]. Available: https://blockstream.com/2018/01/23/musigkeyaggregationschnorrsignatures.html. Date accessed: 2018‑07‑24.
[79] G. Maxwell, A. Poelstra, Y. Seurin and P. Wuille, 20 May 2018, "Simple Schnorr Multi-Signatures with Applications to Bitcoin" [online]. Available: https://eprint.iacr.org/2018/068.pdf. Date accessed: 2018‑07‑24.
[80] A. Poelstra, 4 March 2017, "Scriptless Scripts" [online]. Available: https://download.wpsoftware.net/bitcoin/wizardry/mwslides/201703mitbitcoinexpo/slides.pdf. Date accessed: 2018‑07‑24.
[81] "bipschnorr.mediawiki" [online]. Available: https://github.com/sipa/bips/blob/bipschnorr/bipschnorr.mediawiki. Date accessed: 2018‑07‑26.
[82] "If There is an Answer to Selfish Mining, Braiding could be It" [online]. Available: https://bitcoinmagazine.com/articles/ifthereisananswertoselfishminingbraidingcouldbeit1482876153. Date accessed: 2018‑07‑27.
[83] B. McElrath, "Braiding the Blockchain", in Conference: Scaling Bitcoin, Hong Kong, 7 Dec 2015 [online]. Available: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/2_breaking_the_chain_1_mcelrath.pdf. Date accessed: 2018‑07‑27.
[84] "Braid Examples" [online]. Available: https://rawgit.com/mcelrath/braidcoin/master/Braid%2BExamples.html. Date accessed: 2018‑07‑27.
[85] "Directed Acyclic Graph" [online]. Available: https://en.wikipedia.org/wiki/Directed_acyclic_graph. Date accessed: 2018‑07‑30.
[86] "Braiding Techniques to Improve Security and Scaling" [online]. Available: https://scalingbitcoin.org/milan2016/presentations/D2%20%209%20%20David%20Vorick.pdf. Date accessed: 2018‑07‑30.
[87] Y. Sompolinsky and A. Zohar, "GHOST: Secure High-rate Transaction Processing in Bitcoin" [online]. Available: https://eprint.iacr.org/2013/881.pdf. Date accessed: 2018‑07‑30.
[88] Y. Lewenberg, Y. Sompolinsky and A. Zohar, "Inclusive Blockchain Protocols" [online]. Available: http://fc15.ifca.ai/preproceedings/paper_101.pdf. Date accessed: 2018‑07‑30.
[89] Y. Sompolinsky, Y. Lewenberg and A. Zohar, "SPECTRE: A Fast and Scalable Cryptocurrency Protocol" [online]. Available: http://www.cs.huji.ac.il/~yoni_sompo/pubs/16/SPECTRE_complete.pdf. Date accessed: 2018‑07‑30.
[90] "IOTA Website" [online]. Available: https://www.iota.org/. Date accessed: 2018‑07‑30.
[91] NANO: "Digital Currency for the Real World – the fast and free way to pay for everything in life" [online]. Available: https://nano.org/en. Date accessed: 2018‑07‑30.
[92] "Byteball" [online]. Available: https://byteball.org/. Date accessed: 2018‑07‑30.
[93] "SPECTRE: Serialization of Proof-of-Work Events, Confirming Transactions via Recursive Elections" [online]. Available: https://medium.com/@avivzohar/thespectreprotocol7dbbebb707b. Date accessed: 2018‑07‑30.
[94] Y. Sompolinsky, Y. Lewenberg and A. Zohar, "SPECTRE: Serialization of Proof-of-Work Events: Confirming Transactions via Recursive Elections" [online]. Available: https://eprint.iacr.org/2016/1159.pdf. Date accessed: 2018‑07‑30.
[95] Y. Sompolinsky and A. Zohar, "PHANTOM: A Scalable BlockDAG Protocol" [online]. Available: https://docs.wixstatic.com/ugd/242600_92372943016c47ecb2e94b2fc07876d6.pdf. Date accessed: 2018‑07‑30.
[96] "DAGLabs Website" [online]. Available: https://www.daglabs.com. Date accessed: 2018‑07‑30.
[97] "Beyond Distributed and Decentralized: what is a federated network?" [online]. Available: http://networkcultures.org/unlikeus/resources/articles/whatisafederatednetwork. Date accessed: 2018‑08‑13.
[98] "Federated Byzantine Agreement" [online]. Available: https://towardsdatascience.com/federatedbyzantineagreement24ec57bf36e0. Date accessed: 2018‑08‑13.
[99] "Counterparty Documentation: Frequently Asked Questions" [online]. Available: https://counterparty.io/docs/faq. Date accessed: 2018‑09‑14.
[100] "Counterparty Documentation: Protocol Specification" [online]. Available: https://counterparty.io/docs/protocol_specification. Date accessed: 2018‑09‑14.
[101] "Counterparty News: Why Proof-of-Burn, March 23, 2014" [online]. Available: https://counterparty.io/news/whyproofofburn. Date accessed: 2018‑09‑14.
[102] V. Buterin, "Dagger: A Memory-Hard to Compute, Memory-Easy to Verify Scrypt Alternative" [online]. Available: http://www.hashcash.org/papers/dagger.html. Date accessed: 2019‑02‑12.
[103] T. Dryja, "Hashimoto: I/O Bound Proof of Work" [online]. Available: https://web.archive.org/web/20170810043640/https://pdfs.semanticscholar.org/3b23/7cc60c1b9650e260318d33bec471b8202d5e.pdf. Date accessed: 2019‑02‑12.
Contributors
- https://github.com/hansieodendaal
- https://github.com/Kevoulee
- https://github.com/ksloven
- https://github.com/robbydermody
- https://github.com/anselld
Layer 2 Scaling - Executive Summary
Directed Acyclic Graphs
Background
The principle of Directed Acyclic Graphs (DAGs) in blockchain is to present a way to include traditional off-chain blocks in the ledger, governed by mathematical rules. The main problems to be solved by the DAG derivative protocols are:
- inclusion of orphaned blocks (decreasing the negative effect of slow propagation); and
- mitigation against selfish mining attacks.
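As a toy illustration of the first point (hypothetical block names, not any specific protocol), a DAG block may reference several parents, so a block that would be orphaned in a single chain is instead pulled into the ledger history:

```python
# Toy block DAG: each block lists its parents. In a chain, "B" (mined in
# parallel with "A") would be orphaned; in a DAG, "C" references both tips
# and thereby includes "B" in the ledger.
blocks = {
    "genesis": [],
    "A": ["genesis"],
    "B": ["genesis"],   # mined in parallel with A
    "C": ["A", "B"],    # references all known tips
}

def ancestors(block, dag):
    """All blocks reachable from `block` via parent links."""
    seen, stack = set(), [block]
    while stack:
        b = stack.pop()
        for parent in dag[b]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# B is part of C's history rather than being discarded.
assert ancestors("C", blocks) == {"genesis", "A", "B"}
```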
Braiding the Blockchain
Dr. Bob McElrath
Ph.D. Theoretical Physics
Summary
"Braiding the Blockchain" by Dr. Bob McElrath, Scaling Bitcoin, Hong Kong, December 2015.
This talk discusses the motivation for using Directed Acyclic Graphs (DAGs): orphaned blocks, throughput and more inclusive mining. New terms are defined to make DAGs applicable to blockchain, as the terminology needs to be more specific: Braid vs. DAG, Bead vs. block, Sibling and Incest.
The Braid approach:
- incentivizes miners to quickly transmit beads;
- prohibits parents from containing conflicting transactions (unlike GHOST or SPECTRE);
- constructs beads to be valid Bitcoin blocks if they meet the difficulty target.
Video
Note: Transcripts are available here.
Slides
GHOSTDAG
Aviv Zohar
Prof. at The Hebrew University and Chief Scientist @ QEDit
Summary
"The GHOSTDAG Protocol" by Yonatan Sompolinsky, Scaling Bitcoin, Tokyo, October 2018.
This talk discusses the goal of going from a chain to a DAG: to obtain an ordering of the blocks that does not change when the time between blocks approaches the block propagation time, and security that does not break at higher throughputs. Terms introduced here are Anticone (the set of blocks that are neither in a given block's past nor in its future) and $k$-cluster (a set of blocks, each with an anticone of at most $k$ within the set). The protocol also makes use of a greedy algorithm in order to find the optimal $k$-cluster at each step as it attempts to find the overall optimal $k$-cluster.
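The anticone notion can be sketched on a toy DAG (illustrative block names, not GHOSTDAG's actual data structures):

```python
# Toy block DAG mapping each block to its parents.
dag = {
    "G": [],
    "A": ["G"],
    "B": ["G"],       # mined in parallel with A
    "C": ["A", "B"],
}

def past(b):
    """All ancestors of b (its 'past')."""
    out, stack = set(), list(dag[b])
    while stack:
        x = stack.pop()
        if x not in out:
            out.add(x)
            stack.extend(dag[x])
    return out

def future(b):
    """All descendants of b (its 'future')."""
    return {x for x in dag if b in past(x)}

def anticone(b):
    """Blocks in neither the past nor the future of b."""
    return set(dag) - past(b) - future(b) - {b}

# A and B were mined in parallel, so each lies in the other's anticone.
assert anticone("A") == {"B"}
# {G, A, B, C} is a 1-cluster: every block has at most 1 of the
# set's blocks in its anticone.
```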
Video
Notes:
Slides
SPECTRE
Aviv Zohar
Prof. at The Hebrew University and Chief Scientist @ QEDit
Summary
"Scalability II - GHOST/SPECTRE" by Dr. Aviv Zohar, Technion Summer School on Decentralized Cryptocurrencies and Blockchains, 2017.
This talk discusses the application of DAGs in the SPECTRE protocol. Three insights into the Bitcoin protocol are shared: DAGs are more powerful; Bitcoin is related to voting; and amplification. These are discussed in relation to SPECTRE, while properties sought are consistency, safety and weak liveness. Voting outcomes are strongly rejected, strongly accepted or pending.
Video
PHANTOM
Yonatan Sompolinsky
Founding Scientist, Daglabs
Summary
"BlockDAG Protocols - SPECTRE, PHANTOM" by Yonatan Sompolinsky, Blockchain Protocol Analysis and Security Engineering, Stanford, 2018.
This talk introduces the BlockDAG protocol PHANTOM, and motivates it by virtue of blockchains not being able to scale and BlockDAGs being a generalization of blockchains. The mining protocol references all tips in the DAG (as opposed to the tip of the longest chain) and also publishes all mined blocks as soon as possible (similar to Bitcoin). Blocks honestly created (i.e. honest blocks) will only be unconnected if they were created at approximately the same time. PHANTOM's goal is to recognize honest $k$-clusters, order the blocks within and disregard the rest.
Video
Laser Beam
- Introduction
- Detail Scheme
- Conclusions, Observations and Recommendations
- Appendices
- References
- Contributors
Introduction
Proof-of-Work (PoW) blockchains are notoriously slow, as transactions must be a number of blocks in the past before they are considered confirmed, and they have poor scalability properties. A payment channel is a class of techniques designed to allow two or more parties to make multiple blockchain transactions without committing all of the transactions to the blockchain; the resulting funds can then be committed back to the blockchain. Payment channels allow multiple transactions to be made within off-chain agreements. They keep the operation mode of the blockchain protocol, but change the way in which it is used in order to deal with the challenge of scalability [1].
The Lightning Network is a second-layer payment protocol that was originally designed for Bitcoin, and which enables instant transactions between participating nodes. It features a peer-to-peer system for making micropayments through a network of bidirectional payment channels. The Lightning Network's dispute mechanism requires all users to constantly watch the blockchain for fraud. Various mainnet implementations that support Bitcoin exist and, with small tweaks, some of them are also able to support Litecoin ([2], [3], [4], [10]).
Laser Beam is an adaptation of the Lightning Network for the Mimblewimble protocol, to be implemented for Beam ([5], [6], [7]). At the time of writing this report (November 2019), the specifications were far advanced, but still a work in progress. Beam has a working demonstration in its mainnet repository, which at this stage demonstrates off-chain transactions in a single channel between two parties [8]. According to the Request for Comment (RFC) documents, Beam plans to implement routing across different payment channels in the Lightning Network style.
Detail Scheme
Beam's version of a multisignature (MultiSig) is actually a $2\text{-of-}2$ multiparty Unspent Transaction Output (UTXO), where each party keeps its share of the blinding factor of the Pedersen commitment, $C(v,k_{1}+k_{2})=\Big(vH+(k_{1}+k_{2})G\Big)$, secret. (Refer to Appendix A for notation used.) The multiparty commitment is accompanied by a single multiparty Bulletproof range proof, similar to that employed by Grin, where the individual shares of the blinding factor are used to create the combined range proof [9].
In the equations that follow, Alice's and Bob's contributions are denoted by subscripts $_{a}$ and $_{b}$ respectively; $f$ is the fee and $\mathcal{X}$ is the excess. Note that blinding factors denoted by $\hat{k}$, $k^{\prime}$ and $k^{\prime\prime}$, and values denoted by $v^{\prime}$ and $v^{\prime\prime}$, have a special purpose, discussed later, so that $\hat{k}_{N_{a}} \neq k_{N_{a}} \neq k^{\prime}_{N_{a}} \neq k^{\prime\prime}_{N_{a}}$ and $v_{N_{a}} \neq v^{\prime}_{N_{a}} \neq v^{\prime\prime}_{N_{a}}$.
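The multiparty commitment's homomorphic structure can be sketched in a toy additive group of integers mod a prime. This is emphatically not an elliptic curve (discrete logs are trivial here), and all values are made up; it only illustrates why neither party needs to learn the other's blinding factor share:

```python
# Toy Pedersen commitment C(v, k) = vH + kG in the additive group Z_p.
# Insecure illustration only: in this group the discrete log is trivial.
p = 2**61 - 1
G = 2
H = 3  # in a real system, H = x_H * G with x_H unknowable

def commit(v, k):
    return (v * H + k * G) % p

v = 100            # value locked in the channel (made up)
k_a, k_b = 11, 17  # Alice's and Bob's secret blinding-factor shares (made up)

# The multiparty UTXO commits to the *sum* of the shares: each party can
# fold its own share in without ever learning the counterparty's share.
multi = commit(v, k_a + k_b)
assert multi == (commit(v, k_a) + k_b * G) % p
```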
Funding Transaction
The parties collaborate to create the multiparty UTXO (i.e. commitment and associated multiparty range proof), combined on‑chain funding transaction (Figure 1) and an initial refund transaction for each party (off‑chain). All refund transactions have a relative time lock in their kernel, referencing the kernel of the original combined funding transaction, which has to be confirmed on the blockchain.
The initial funding transaction between Alice and Bob is depicted in (1). The lock height in the signature challenge corresponds to the current blockchain height. Input commitment values and blinding factors are identified by superscript $^{\prime\prime}$.
$$ \begin{aligned} \text{Inputs}(0)+\text{MultiSig}(0)+\text{fee} &= \text{Excess}(0) \\ \Big((v^{\prime\prime}_{0_{a}}H+k^{\prime\prime}_{0_{a}}G)+(v^{\prime\prime}_{0_{b}}H+k^{\prime\prime}_{0_{b}}G)\Big)+\Big(v_{0}H+(k_{0_{a}}+k_{0_{b}})G\Big)+fH &= \mathcal{X}_{0} \end{aligned} \tag{1} $$
Alice and Bob also need to set up their own respective refund transactions so they can be compensated should the channel never be used; this is performed via a refund procedure. A refund procedure (off‑chain) consists of four parts, whereby each user creates two transactions: one kept partially secret (discussed below) and the other shared. Each partially secret transaction creates a different intermediate multiparty UTXO, which is then used as input in the two shared transactions, spending the same set of outputs to each participant.
All consecutive refund procedures work in exactly the same way. In the equations that follow, double subscripts $_{AA}$, $_{AB}$, $_{BA}$ and $_{BB}$ have the following meaning: the first letter and the second letter indicate who controls the transaction and who created the value, respectively. Capitalized use of $R$ and $P$ denotes public nonce and public blinding factor respectively. The $N_{\text{th}}$ refund procedure is as follows:
Refund Procedure
Alice  Part 1
Alice and Bob set up Alice's intermediate MultiSig funding transaction (Figure 2), spending the original funding MultiSig UTXO. The lock height $h_{N}$ corresponds to the current blockchain height.
$$ \begin{aligned} \text{MultiSig}(0)+\text{MultiSig}(N)_{A}+\text{fee} & =\text{Excess}(N)_{A1} \\ \Big(v_{0}H+(k_{0_{a}}+k_{0_{b}})G\Big) + \Big((v_{0}-f)H+(\hat{k}_{N_{a}}+k_{N_{b}})G\Big) + fH &= \mathcal{X}_{N_{A1}} \end{aligned} \tag{2} $$
They collaborate to create $\text{MultiSig}(N)_{A}$, its Bulletproof range proof, the signature challenge and Bob's portion of the signature, $s_{N_{AB1}}$. Alice does not share the final kernel and thus keeps her part of the aggregated signature, $s_{N_{AA1}}$, hidden.
$$ \begin{aligned} \mathcal{X}_{N_{A1}} &= (k_{0_{a}}+\hat{k}_{N_{a}})G+(k_{0_{b}}+k_{N_{b}})G \\ &= P_{N_{AA1}}+P_{N_{AB1}} \end{aligned} \tag{3} $$
$$ \begin{aligned} \text{Challenge:}\quad\mathcal{H}(R_{N_{AA1}}+R_{N_{AB1}}\parallel P_{N_{AA1}}+P_{N_{AB1}}\parallel f\parallel h_{N}) \end{aligned} \tag{4} $$
$$ \begin{aligned} \text{Final signature tuple, kept secret:}\quad(s_{N_{AA1}}+s_{N_{AB1}},R_{N_{AA1}}+R_{N_{AB1}}) \end{aligned} \tag{5} $$
$$ \begin{aligned} \text{Kernel of this transaction:}\quad\mathcal{K}_{N_{AA1}} \end{aligned} \tag{6} $$
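The partial-signature mechanics of (3) to (5) follow the standard two-party Schnorr pattern: each party publishes only its public nonce $R_i = r_i G$ and public blinding factor $P_i = k_i G$, contributes a partial signature $s_i = r_i + e\,k_i$, and the sum verifies against the aggregates. A minimal sketch in the same toy additive group (insecure, with made-up values):

```python
import hashlib

# Toy two-party Schnorr aggregation in the additive group Z_p.
# Insecure illustration only: discrete logs are trivial here.
p = 2**61 - 1
G = 2

def point(k):
    return (k * G) % p

def challenge(R, P, f, h):
    # Stand-in for the challenge H(R_a + R_b || P_a + P_b || f || h_N)
    data = f"{R}|{P}|{f}|{h}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

k_a, r_a = 5, 7     # Alice's secret blinding factor and nonce (made up)
k_b, r_b = 11, 13   # Bob's secret blinding factor and nonce (made up)

R = (point(r_a) + point(r_b)) % p   # aggregate public nonce
P = (point(k_a) + point(k_b)) % p   # aggregate public excess
e = challenge(R, P, f=1, h=100)

s_a = (r_a + e * k_a) % p           # Alice's partial signature
s_b = (r_b + e * k_b) % p           # Bob's partial signature
s = (s_a + s_b) % p                 # aggregated signature

assert point(s) == (R + e * P) % p  # verifies against the aggregates
```

Withholding one partial signature is exactly what allows a party to keep the fully signed kernel secret until it is needed.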
Bob  Part 1
Alice and Bob set up Bob's intermediate MultiSig funding transaction (Figure 2), also spending the original funding MultiSig UTXO. The lock height $h_{N}$ again corresponds to the current blockchain height.
$$ \begin{aligned} \text{MultiSig}(0)+\text{MultiSig}(N)_{B}+\text{fee} & =\text{Excess}(N)_{B1} \\ \Big(v_{0}H+(k_{0_{a}}+k_{0_{b}})G\Big) + \Big((v_{0}-f)H+(k_{N_{a}}+\hat{k}_{N_{b}})G\Big) + fH &= \mathcal{X}_{N_{B1}} \end{aligned} \tag{7} $$
They collaborate to create $\text{MultiSig}(N)_{B}$, its Bulletproof range proof, the signature challenge and Alice's portion of the signature. Bob does not share the final kernel and thus keeps his part of the aggregated signature hidden.
$$ \begin{aligned} \mathcal{X}_{N_{B1}} &= (k_{0_{a}}+k_{N_{a}})G+(k_{0_{b}}+\hat{k}_{N_{b}})G \\ &= P_{N_{BA1}}+P_{N_{BB1}} \end{aligned} \tag{8} $$
$$ \begin{aligned} \text{Challenge:}\quad\mathcal{H}(R_{N_{BA1}}+R_{N_{BB1}}\parallel P_{N_{BA1}}+P_{N_{BB1}}\parallel f\parallel h_{N}) \end{aligned} \tag{9} $$
$$ \begin{aligned} \text{Final signature tuple, kept secret:}\quad(s_{N_{BA1}}+s_{N_{BB1}},R_{N_{BA1}}+R_{N_{BB1}}) \end{aligned} \tag{10} $$
$$ \begin{aligned} \text{Kernel of this transaction:}\quad\mathcal{K}_{N_{BB1}} \end{aligned} \tag{11} $$
Alice  Part 2
Alice and Bob set up a refund transaction (Figure 3), which Alice controls, with the same relative time lock $h_{rel}$ to the intermediate funding transaction's kernel, $\mathcal{K}_{N_{AA1}}$. Output commitment values and blinding factors are identified by superscript $^{\prime}$.
$$ \begin{aligned} \text{MultiSig}(N)_{A}+\text{Outputs}(N)+\text{fee} & =\text{Excess}(N)_{A2} \\ \Big((v_{0}-f)H+(\hat{k}_{N_{a}}+k_{N_{b}})G\Big)+\Big((v_{N_{a}}^{\prime}H+k_{N_{a}}^{\prime}G)+(v_{N_{b}}^{\prime}H+k_{N_{b}}^{\prime}G)\Big)+fH &=\mathcal{X}_{N_{A2}} \end{aligned} \tag{12} $$
They collaborate to create the challenge and the aggregated signature. Because the final kernel $\mathcal{K}_{N_{AA1}}$ is kept secret, only its hash is shared. Alice shares this transaction's final kernel with Bob. The signature challenge is determined as follows:
$$ \begin{aligned} \mathcal{X}_{N_{A2}} &= (\hat{k}_{N_{a}}+k_{N_{a}}^{\prime})G + (k_{N_{b}}+k_{N_{b}}^{\prime})G \\ &= P_{N_{AA2}}+P_{N_{AB2}} \\ \end{aligned} \tag{13} $$
$$ \begin{aligned} \text{Challenge:}\quad\mathcal{H}(R_{N_{AA2}}+R_{N_{AB2}}\parallel P_{N_{AA2}}+P_{N_{AB2}}\parallel f \parallel\mathcal{H}(\mathcal{K}_{N_{AA1}})\parallel h_{rel}) \end{aligned} \tag{14} $$
Bob  Part 2
Alice and Bob set up a refund transaction (Figure 3), which Bob controls, with a relative time lock $h_{rel}$ to the intermediate funding transaction's kernel, $\mathcal{K}_{N_{BB1}}$. Output commitment values and blinding factors are identified by superscript $^{\prime}$.
$$ \begin{aligned} \text{MultiSig}(N)_{B}+\text{Outputs}(N)+\text{fee} & =\text{Excess}(N)_{B2} \\ \Big((v_{0}-f)H+(k_{N_{a}}+\hat{k}_{N_{b}})G\Big)+\Big((v_{N_{a}}^{\prime}H+k_{N_{a}}^{\prime}G)+(v_{N_{b}}^{\prime}H+k_{N_{b}}^{\prime}G)\Big)+fH &=\mathcal{X}_{N_{B2}} \end{aligned} \tag{15} $$
They collaborate to create the challenge and the aggregated signature. Because the final kernel $\mathcal{K}_{N_{BB1}}$ is kept secret, only its hash is shared. Bob shares this transaction's final kernel with Alice. The signature challenge is determined as follows:
$$ \begin{aligned} \mathcal{X}_{N_{B2}} &= (k_{N_{a}}+k_{N_{a}}^{\prime})G + (\hat{k}_{N_{b}}+k_{N_{b}}^{\prime})G \\ &= P_{N_{BA2}}+P_{N_{BB2}} \end{aligned} \tag{16} $$
$$ \begin{aligned} \text{Challenge:}\quad\mathcal{H}(R_{N_{BA2}}+R_{N_{BB2}}\parallel P_{N_{BA2}}+P_{N_{BB2}}\parallel f \parallel\mathcal{H}(\mathcal{K}_{N_{BB1}})\parallel h_{rel}) \end{aligned} \tag{17} $$
Revoke Previous Refund
Whenever the individual balances in the channel change, a new refund procedure is negotiated, revoking previous agreements (Figure 4).
Revoking refund transactions involves revealing blinding factor shares for the intermediate multiparty UTXOs, thereby nullifying their further use. After the four parts of the refund procedure have been concluded successfully, the previous round's blinding factor shares $\hat{k}_{(N-1)}$ are revealed to each other, in order to revoke the previous agreement.
Alice:
$$ \begin{aligned} \text{MultiSig}(N-1)_{A}:\quad\Big((v_{0}-f)H+(\hat{k}_{(N-1)_{a}}+k_{(N-1)_{b}})G\Big) & \quad & \lbrace\text{Alice's commitment}\rbrace \\ \hat{k}_{(N-1)_{a}} & \quad & \lbrace\text{Alice shares with Bob}\rbrace \\ \end{aligned} \tag{18} $$
Bob:
$$ \begin{aligned} \text{MultiSig}(N-1)_{B}:\quad\Big((v_{0}-f)H+(k_{(N-1)_{a}}+\hat{k}_{(N-1)_{b}})G\Big) & \quad & \lbrace\text{Bob's commitment}\rbrace \\ \hat{k}_{(N-1)_{b}} & \quad & \lbrace\text{Bob shares with Alice}\rbrace \end{aligned} \tag{19} $$
Note that although the kernels for transactions (2) and (7) were kept secret, when the Bulletproof range proofs for $\text{MultiSig}(N-1)_{A}$ and $\text{MultiSig}(N-1)_{B}$ were constructed, the resultant values of those MultiSig Pedersen commitments were revealed to the counterparty. Each party is thus able to verify the counterparty's blinding factor share by constructing the counterparty's MultiSig Pedersen commitment.
Alice verifies:
$$ \begin{aligned} \Big((v_{0}-f)H+(k_{(N-1)_{a}}+\hat{k}_{(N-1)_{b}})G\Big) \overset{?}{=} C(v_{0}-f,\ k_{(N-1)_{a}}+\hat{k}_{(N-1)_{b}}) \end{aligned} \tag{20} $$
Bob verifies:
$$ \begin{aligned} \Big((v_{0}-f)H+(\hat{k}_{(N-1)_{a}}+k_{(N-1)_{b}})G\Big) \overset{?}{=} C(v_{0}-f,\ \hat{k}_{(N-1)_{a}}+k_{(N-1)_{b}}) \end{aligned} \tag{21} $$
Although each party will now have its counterparty's blinding factor share in the counterparty's intermediate multiparty UTXO, they will still not be able to spend it, because the corresponding transaction kernel is still kept secret by the counterparty. The previous round's corresponding transaction, (2) or (7), with a fully signed transaction kernel, was never published on the blockchain.
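The verification in (20) and (21) can be sketched with the same toy commitment scheme (insecure additive group; all values made up): the honest party reconstructs the known MultiSig commitment from the revealed share and checks for equality.

```python
# Toy check of a revealed blinding-factor share against the counterparty's
# known MultiSig Pedersen commitment (insecure toy group; made-up values).
p = 2**61 - 1
G, H = 2, 3

def commit(v, k):
    return (v * H + k * G) % p

v0, f = 100, 1
k_a = 21        # Alice's ordinary share, already known to her
k_b_hat = 33    # Bob's special share, revealed only on revocation

# The commitment's value was learned during range-proof construction.
C_known = commit(v0 - f, k_a + k_b_hat)

# Alice reconstructs the commitment from Bob's revealed share:
assert commit(v0 - f, k_a + k_b_hat) == C_known
# A wrong share would not reproduce the commitment:
assert commit(v0 - f, k_a + 34) != C_known
```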
Punishment Transaction
If a counterparty decides to broadcast a revoked set of refund transactions, and if the honest party is actively monitoring the blockchain and able to detect the attempted foul play, a punishment transaction can immediately be constructed before the relative time lock $h_{rel}$ expires. Whenever any of the counterparty's intermediate multiparty UTXOs, $\text{MultiSig}(N-m)\ \text{for}\ 0<m<N$, becomes available on the blockchain, the honest party can spend all the funds to its own output, because it knows the total blinding factor and the counterparty does not.
Channel Closure
Whenever the parties agree to a channel closure, the original on‑chain multiparty UTXO, $\text{MultiSig}(0)$, is spent to their respective outputs in a collaborative transaction. In the event of a single party deciding to close the channel unilaterally for whatever reason, its latest refund transaction is broadcast, effectively closing the channel. In either case, the parties have to wait for the relative time lock $h_{rel}$ to expire before being able to claim their funds.
Opening a channel requires one collaborative funding transaction on the blockchain. Closing a channel involves each party broadcasting its respective portion of the refund transaction to the blockchain, or collaborating to broadcast a single settlement transaction. A full round trip (open channel, multiple off-chain spends, close channel) thus involves, at most, three on-chain transactions.
Conclusions, Observations and Recommendations

MultiSig
The Laser Beam MultiSig corresponds to a Mimblewimble $2\text{-of-}2$ Multiparty Bulletproof UTXO, as described here. There is more than one method for creating the MultiSig's associated Bulletproof range proof, and utilizing the Bulletproofs MPC Protocol will enable wallet reconstruction. It may also make information sharing while constructing the Bulletproof range proof more secure.

Linked Transactions
Part 2 of the refund procedure requires a kernel with a relative time lock to the kernel of its corresponding part 1 refund procedure, when those kernels are not yet available in the base layer, as well as a different, non-similar signature challenge. Metadata about the linked transaction kernel and non-similar signature challenge creation must therefore be embedded within the part 2 refund transaction kernels.

Refund Procedure
In the event that, for round $N$, Alice or Bob decides to stop negotiations after their respective part 1 has been concluded and that transaction has been broadcast, the funding UTXO, $\text{MultiSig}(0)$, would be replaced by the respective $\text{MultiSig}(N)$. The channel will still be open, but the new MultiSig cannot be spent unilaterally, as the blinding factor is shared. Any new updates of the channel will then need to be based on $\text{MultiSig}(N)$ as the funding UTXO. Note that this cannot happen if the counterparties construct transactions for part 1 and part 2, conclude part 2 by signing it, and only then sign part 1.
In the event that for round $N$, Alice or Bob decides to stop negotiations after their respective part 1 and part 2 have been concluded and those transactions have been broadcast, it will effectively be a channel closure.

Revoke Attack Vector
When revoking the previous refund $(N-1)$, Alice can get hold of Bob's blinding factor share $\hat{k}_{(N-1)_{b}}$ (19) and, after verifying that it is correct (20), refuse to give up her own blinding factor share. This will leave Alice with the ability to broadcast any of refund transactions $(N-1)$ and $N$, without fear of Bob broadcasting a punishment transaction. Bob, on the other hand, will only be able to broadcast refund transaction $N$.
Appendices
Appendix A: Notation Used

Let $p$ be a large prime number, $\mathbb{Z}_{p}$ denote the ring of integers $\mathrm{mod} \text{ } p$, and $\mathbb{F}_{p}$ be the group of elliptic curve points.

Let $G\in\mathbb{F}_{p}$ be a random generator point (base point) and let $H\in\mathbb{F}_{p}$ be specially chosen so that the value $x_{H}$ satisfying $H=x_{H}G$ cannot be found, except by solving the Elliptic Curve Discrete Logarithm Problem (ECDLP).

Let commitment to value $v\in\mathbb{Z}_{p}$ be determined by calculating $C(v,k)=(vH+kG)$, which is called the Elliptic Curve Pedersen Commitment (Pedersen Commitment), with $k\in\mathbb{Z}_{p}$ (the blinding factor) a random value.

Let scalar multiplication be depicted by $\cdot$, e.g. $e\cdot(vH+kG)=e\cdot vH+e\cdot kG$.
References
[1] J. A. Odendaal, "Layer 2 Scaling Survey  Tari Labs University" [online]. Available: https://tlu.tarilabs.com/scaling/layer2scalinglandscape/layer2scalingsurvey.html. Date accessed: 2019‑10‑06.
[2] J. Poon and T. Dryja (2016), "The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments" [online]. Available: http://lightning.network/lightningnetworkpaper.pdf. Date accessed: 2019‑07‑04.
[3] "GitHub: lightningnetwork/lightningrfc: Lightning Network Specifications" [online]. Available: https://github.com/lightningnetwork/lightningrfc. Date accessed: 2019‑07‑04.
[4] X. Wang, "What is the situation of Litecoin's Lightning Network now?" [online]. Available: https://coinut.com/blog/whatsthesituationoflitecoinslightningnetworknow. Date accessed: 2019‑09‑11.
[5] The Beam Team, "GitHub: Lightning Network  BeamMW/beam Wiki" [online]. Available: https://github.com/BeamMW/beam/wiki/LightningNetwork. Date accessed: 2019‑07‑05.
[6] F. Jahr, "Beam  Lightning Network Position Paper. (v 1.0)" [online]. Available: https://docs.beam.mw/Beam_lightning_network_position_paper.pdf. Date accessed: 2019‑07‑04.
[7] F. Jahr, "GitHub: fjahr/lightningmw, Lightning Network Specifications" [online]. Available: https://github.com/fjahr/lightningmw. Date accessed: 2019‑07‑04.
[8] The Beam Team, "GitHub: beam/node/laser_beam_demo at master  BeamMW/beam" [online]. Available: https://github.com/BeamMW/beam/tree/master/node/laser_beam_demo. Date accessed: 2019‑07‑05.
[9] The Beam Team, "GitHub: beam/ecc_bulletproof.cpp at mainnet  BeamMW/beam" [online]. Available: https://github.com/BeamMW/beam/blob/mainnet/core/ecc_bulletproof.cpp. Date accessed: 2019‑07‑05.
[10] D. Smith, N. Kohen, and C. Stewart, “Lightning 101 for Exchanges” [online]. Available: https://suredbits.com/lightning101forexchangesoverview. Date accessed: 2019‑11‑06.
Contributors
- https://github.com/hansieodendaal
- https://github.com/anselld
- https://github.com/SWvheerden
- https://github.com/Empiech007
- https://github.com/mikethetike
Merged Mining
Purpose
The purpose of merged mining is to allow the mining of more than one cryptocurrency without necessitating additional Proof-of-Work (PoW) effort [1].
Definitions

From BitcoinWiki: Merged mining is the act of using work done on another block chain (the Parent) on one or more Auxiliary block chains and to accept it as valid on its own chain, using Auxiliary Proof-of-Work (AuxPoW), which is the relationship between two block chains for one to trust the other's work as their own. The Parent block chain does not need to be aware of the AuxPoW logic, as blocks submitted to it are still valid blocks [2].

From CryptoCompare: Merged mining is the process of allowing two different crypto currencies based on the same algorithm to be mined simultaneously. This allows low hash powered crypto currencies to increase the hashing power behind their network by bootstrapping onto more popular crypto currencies [3].
References
[1] "Merged Mining: Curse or Cure?" Conference Paper [online]. Available: https://www.researchgate.net/publication/319647721_Merged_Mining_Curse_or_Cure. Date accessed: 2019‑06‑10.
[2] BitcoinWiki: "Merged Mining Specification" [online]. Available: https://en.bitcoin.it/wiki/Merged_mining_specification. Date accessed: 2019‑06‑10.
[3] CryptoCompare: "What is merged mining – Bitcoin & Namecoin – Litecoin & Dogecoin?" [online]. Available: https://www.cryptocompare.com/mining/guides/whatismergedminingbitcoinnamecoinlitecoindogecoin/. Date accessed: 2019‑06‑10.
Merged Mining Introduction
- What is Merged Mining?
- Merged Mining with Multiple Auxiliary Chains
- Merged Mining - Interesting Facts and Case Studies
- Attack Vectors
- References
- Contributors
What is Merged Mining?
Merged mining is the act of using work done on another blockchain (the Parent) on one or more Auxiliary blockchains and accepting it as valid on their own chains, using Auxiliary Proof-of-Work (AuxPoW), which is the relationship between two blockchains whereby one trusts the other's work as its own. The Parent blockchain does not need to be aware of the AuxPoW logic, as blocks submitted to it are still valid blocks [1].
As an example, the structure of merged mined blocks in Namecoin and Bitcoin is shown here [25]:
A transaction set is assembled for both blockchains. The hash of the AuxPoW block header is then inserted in the "free" bytes region (coinbase field) of the coinbase transaction and submitted to the Parent blockchain's Proof-of-Work (PoW). If the merge miner solves the block at the difficulty level of either blockchain or both blockchains, the respective block(s) are reassembled with the completed PoW and submitted to the correct blockchain. In the case of the Auxiliary blockchain, the Parent's block hash, Merkle tree branch and coinbase transaction are inserted in the Auxiliary block's AuxPoW header. This is to prove that enough work that meets the difficulty level of the Auxiliary blockchain was done on the Parent blockchain ([1], [2], [25]).
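The "solves either or both" step can be sketched as one header hash tested against each chain's own target (SHA256 as a stand-in for each chain's PoW; targets and header bytes here are made up):

```python
import hashlib

# Toy merge-mined PoW check: one unit of work, two targets.
def pow_hash(header: bytes) -> int:
    return int.from_bytes(hashlib.sha256(header).digest(), "big")

TARGET_PARENT = 2**252   # parent chain: harder (lower) target, made up
TARGET_AUX = 2**254      # auxiliary chain: easier target, made up

def chains_solved(header: bytes):
    h = pow_hash(header)
    solved = []
    if h < TARGET_AUX:
        solved.append("auxiliary")   # reassemble and submit the AuxPoW block
    if h < TARGET_PARENT:
        solved.append("parent")      # the same work also solves the parent
    return solved

# Grind nonces until at least the easier, auxiliary target is met.
nonce = 0
while not chains_solved(b"header|" + str(nonce).encode()):
    nonce += 1
```

Because the auxiliary target is easier, any hash that solves the parent necessarily solves the auxiliary chain as well, which is why one round of work can yield blocks on both chains.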
The propagation of Parent and Auxiliary blocks is totally independent and only governed by each chain's difficulty level. As an example, the following diagram shows how this can play out in practice with Namecoin and Bitcoin when the Parent difficulty ($D_{BTC}$) is greater than the Auxiliary difficulty ($D_{NMC}$). Note that BTC block 2' did not become part of the Parent blockchain propagation.
Merged Mining with Multiple Auxiliary Chains
A miner can use a single Parent to perform merged mining on multiple Auxiliary blockchains. The Merkle tree root of a Merkle tree that contains the block hashes of the Auxiliary blocks as leaves must then be inserted in the Parent's coinbase field, as shown in the following diagram. To prevent double-spending attacks, each Auxiliary blockchain must specify a unique ID that can be used to derive the leaf of the Merkle tree where the respective block hash must be located [25].
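The Merkle construction above can be sketched as follows. The slot derivation here is deliberately simplified to `chain_id % size` for illustration; Namecoin's actual merged-mining specification derives the slot from the chain ID and a nonce via a pseudorandom expansion, and the helper names are hypothetical.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Compute a Bitcoin-style Merkle root over 32-byte leaves."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-width levels
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def aux_merkle_root(aux_hashes_by_chain_id, size):
    """Place each auxiliary block hash in the leaf slot derived from its
    chain ID, so no two chains can claim the same position. Empty slots
    are filled with a zero hash."""
    leaves = [b"\x00" * 32] * size
    for chain_id, block_hash in aux_hashes_by_chain_id.items():
        slot = chain_id % size  # simplified slot derivation (assumption)
        assert leaves[slot] == b"\x00" * 32, "slot collision: use a larger tree"
        leaves[slot] = block_hash
    return merkle_root(leaves)
```

Fixing each chain's hash to a deterministic slot is what prevents a miner from presenting the same Merkle branch as proof of work for two different auxiliary chains.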
Merged Mining - Interesting Facts and Case Studies
Namecoin (#307) with Bitcoin (#1)
- Namecoin, the first fork of Bitcoin, introduced merged mining with Bitcoin [1] from block 19,200 onwards [3]. At the time of writing (May 2018), the block height of Namecoin was greater than 400,500 [4].
- Over the five-day period from 23 May 2018 to 27 May 2018, only 226 out of 752 blocks posted transaction values over and above the block reward of 25 NMC, with an average transaction value of 159.231 NMC including the block reward [4].
- Slush Pool, which merge-mines Namecoin with Bitcoin, rewards all miners with BTC equivalent to NMC via an external exchange service [5].
- P2pool, Multipool, Slush Pool, Eligius and F2pool are cited as top Namecoin merged mining pools [6].
| @ 2018-05-30          | Bitcoin [16] | Namecoin [16] | Ratio   |
|-----------------------|--------------|---------------|---------|
| Block time target (s) | 600          | 600           | 100.00% |
| Hash rate (Ehash/s)   | 31.705       | 21.814        | 68.80%  |
| Blocks count          | 525,064      | 400,794       | 76.33%  |
Dogecoin (#37) with Litecoin (#6)
- Dogecoin introduced merged mining with Litecoin [8] from block 371,337 onwards [9]. At the time of writing (May 2018), the block height of Dogecoin was greater than 2,240,000 [10].
- Many in the Dogecoin user community believe merged mining with Litecoin saved Dogecoin from a 51% attack [8].
| @ 2018-05-30          | Litecoin [16] | Dogecoin [16] | Ratio   |
|-----------------------|---------------|---------------|---------|
| Block time target (s) | 150           | 60            | 40.00%  |
| Hash rate (Thash/s)   | 311.188       | 235.552       | 75.69%  |
| Blocks count          | 1,430,517     | 2,241,120     | 156.67% |
Huntercoin (#779) with Bitcoin (#1) or Litecoin (#6)
- Huntercoin was released as a live experimental test to see how blockchain technology could handle full-on game worlds [22].
- Huntercoin was originally designed to be supported for only one year, but development and support will continue [22].
- Players are awarded coins for gaming, making it the world's first human-mineable cryptocurrency.
- Coin distribution: 10 coins per block, nine for the game world and one for the miners [22].
| @ 2018-06-01                      | Huntercoin     |
|-----------------------------------|----------------|
| Block time target (s)             | 120            |
| Blockchain size (GB)              | 17             |
| Pruned blockchain size (GB)       | 0.5            |
| Blocks count                      | 2,291,060      |
| PoW algorithm (for merged mining) | SHA256, Scrypt |
Myriad (#510) with Bitcoin (#1) or Litecoin (#6)
- Myriad is the first currency to support five PoW algorithms and claims its multi-PoW algorithm approach offers exceptional 51% resistance [23].
- Myriad introduced merged mining from block 1,402,791 onwards [24].
| @ 2018-06-01                      | Myriad                       |
|-----------------------------------|------------------------------|
| Block time target (s)             | 60                           |
| Blockchain size (GB)              | 2.095                        |
| Blocks count                      | 2,442,829                    |
| PoW algorithm (for merged mining) | SHA256d, Scrypt              |
| PoW algorithm (others)            | Myr-Groestl, Skein, Yescrypt |

Some solved multi-PoW block examples follow:
Monero (#12)/DigitalNote (#166) + FantomCoin (#1068)
- FantomCoin was the first CryptoNote-based coin to develop merged mining with Monero, but was abandoned until DigitalNote developers became interested in merged mining with Monero and revived FantomCoin in October 2016 ([17], [18], [19]).
FantomCoin Release Notes 2.0.0:
- Fantomcoin 2.0 by XDN-project, major FCN update to the latest cryptonote-foundation codebase
- New FCN+XMR merged mining
- Default block size - 100Kb
DigitalNote Release Notes 4.0.0-beta:
- EmPoWering XDN network security with merged mining with any CryptoNote cryptocurrency
- Second step to the PoA with the new type of PoW merged mining blocks
- DigitalNote and FantomCoin merged mining with Monero are now stuck with the recent CryptoNight-based Monero forks such as Monero Classic and Monero Original after Monero's recent hard fork to CryptoNight v7 (refer to Attack Vectors).
| @ 2018-05-31          | Monero [16] | DigitalNote [16] | Ratio   |
|-----------------------|-------------|------------------|---------|
| Block time target (s) | 120         | 240              | 200.00% |
| Hash rate (Mhash/s)   | 410.804     | 13.86            | 3.37%   |
| Blocks count          | 1,583,869   | 660,075          | 41.67%  |
| @ 2018-05-31          | Monero [16] | FantomCoin [16] | Ratio   |
|-----------------------|-------------|-----------------|---------|
| Block time target (s) | 120         | 60              | 50.00%  |
| Hash rate (Mhash/s)   | 410.804     | 19.29           | 4.70%   |
| Blocks count          | 1,583,869   | 2,126,079       | 134.23% |
Some Statistics
Merge-mined blocks in some cryptocurrencies on 18 June 2017 [24]:
Observations
- The Auxiliary blockchain's target block times can be smaller than, equal to or larger than that of the Parent blockchain.
- The Auxiliary blockchain's hash rate is generally smaller than, but of the same order of magnitude as, that of the Parent blockchain.
- A multi-PoW algorithm approach may further enhance 51% resistance.
Attack Vectors
51% Attacks

51% attacks are real and relevant today. Bitcoin Gold (rank #28 @ 2018-05-29) and Verge (rank #33 @ 2018-05-29) suffered recent attacks, with double-spend transactions following ([11], [12]).

In a conservative analysis, successful attacks on PoW cryptocurrencies are more likely when dishonest entities control more than 25% of the total mining power [24].

Tari tokens are envisaged to be merged mined with Monero [13]. The Monero blockchain security is therefore important to the Tari blockchain.

Monero recently (6 April 2018) introduced a hard fork with upgraded PoW algorithm CryptoNight v7 at block height 1,546,000 to maintain its Application Specific Integrated Circuit (ASIC) resistance and hence guard against 51% attacks. The Monero team proposes changes to their PoW every scheduled fork (i.e. every six months) ([14], [15]).

An interesting question arises regarding what needs to happen to the Tari blockchain if the Monero blockchain is hard forked. Since the CryptoNight v7 hard fork, the network hash rate for Monero has hovered around approximately 500 MH/s, whereas in the two months immediately prior it was approximately 1,000 MH/s [20]. Thus approximately 50% of the hash power can be ascribed to ASICs and botnet miners.
NiceHash statistics for CryptoNight v7 [21] show a lag of two days for approximately 100,600 miners to get up to speed with providing the new hashing power after the Monero hard fork.
The Tari blockchain will have to fork together with or just after a scheduled Monero fork. The Tari blockchain will be vulnerable to ASIC miners until it has been forked.
Double Proof
- A miner could cheat the PoW system by putting more than one Auxiliary block header into one Parent block [7].
- Multiple Auxiliary blocks could be competing for the same PoW, which could subject an Auxiliary blockchain to nothing-at-stake attacks if the chain is forked, maliciously or by accident, with consequent attempts to reverse transactions ([7], [26]).
- More than one Auxiliary blockchain will be merge-mined with Monero.
Analysis of Mining Power Centralization Issues
With reference to [24] and [25]:
- In Namecoin, F2Pool reached and maintained a majority of the mining power for prolonged periods.
- Litecoin has experienced slight centralization since mid-2014, caused by Clevermining and F2Pool, among others.
- In Dogecoin, F2Pool was responsible for generating more than 33% of the blocks per day for significant periods, even exceeding the 50% threshold around the end of 2016.
- Huntercoin was instantly dominated by F2Pool and remained in this state until mid-2016.
- Myriadcoin appears to have experienced only a moderate impact. Multi-merge-mined blockchains allow for more than one parent cryptocurrency and have a greater chance of acquiring a higher difficulty per PoW algorithm than the respective parent blockchain.
- Distribution of overall percentage of days below or above the centralization indicator thresholds on 18 June 2017 was as follows:
Introduction of New Attack Vectors
With reference to [24] and [25]:
- Miners can generate blocks for the merge-mined child blockchains at almost no additional cost, enabling attacks without risking financial losses.
- Merged mining as an attack vector works both ways, as parent cryptocurrencies cannot easily prevent being merge-mined by auxiliary blockchains.
- Merged mining can increase the hash rate of auxiliary blockchains, but it is not conclusively successful as a bootstrapping technique.
- Empirical evidence suggests that only a small number of mining pools are involved in merged mining, and they enjoy block shares beyond the desired security and decentralization goals.
References
[1] "Merged Mining Specification" [online]. Available: https://en.bitcoin.it/wiki/Merged_mining_specification. Date accessed: 2018‑05‑28.
[2] "How does Merged Mining Work?" [online]. Available: https://bitcoin.stackexchange.com/questions/273/howdoesmergedminingwork. Date accessed: 2018‑05‑28.
[3] "MergedMining.mediawiki" [online]. Available: https://github.com/namecoin/wiki/blob/master/MergedMining.mediawiki. Date accessed: 2018‑05‑28.
[4] "Bchain.info  Blockchain Explorer (NMC)" [online]. Available: https://bchain.info/NMC. Date accessed: 2018‑05‑28.
[5] "SlushPool Merged Mining" [online]. Available: https://slushpool.com/help/firstaid/faqmergedmining. Date accessed: 2018‑05‑28.
[6] "5 Best Namecoin Mining Pools of 2018 (Comparison)" [online]. Available: https://www.prooworld.com/namecoin/bestnamecoinminingpools. Date accessed: 2018‑05‑28.
[7] "Alternative Chain" [online]. Available: https://en.bitcoin.it/wiki/Alternative_chain#Protecting_against_double_proof. Date accessed: 2018‑05‑28.
[8] "Merged Mining AMA/FAQ" [online]. Available: https://www.reddit.com/r/dogecoin/comments/22niq9/merged_mining_amafaq. Date accessed: 2018‑05‑29.
[9] "The Forkening is Happening at ~9:00AM EST" [online]. Available: https://www.reddit.com/r/dogecoin/comments/2fyxg1/the_forkening_is_happening_at_900am_est_a_couple. Date accessed: 2018‑05‑29.
[10] "Dogecoin Blockchain Explorer" [online]. Available: https://dogechain.info. Date accessed: 2018‑05‑29.
[11] "Bitcoin Gold Hit by Double Spend Attack, Exchanges Lose Millions" [online]. Available: https://www.ccn.com/bitcoingoldhitbydoublespendattackexchangeslosemillions. Date accessed: 2018‑05‑29.
[12] "Privacy Coin Verge Succumbs to 51% Attack (Again)" [online]. Available: https://www.ccn.com/privacycoinvergesuccumbsto51attackagain. Date accessed: 2018‑05‑29.
[13] "Tari Official Website" [online]. Available: https://www.tari.com. Date accessed: 2018‑05‑29.
[14] "Monero Hard Forks to Maintain ASIC Resistance, but ‘Classic’ Hopes to Spoil the Party" [online]. Available: https://www.ccn.com/monerohardforkstomaintainasicresistancebutclassichopestospoiltheparty. Date accessed: 2018‑05‑29.
[15] "PoW Change and Key Reuse" [online]. Available: https://getmonero.org/2018/02/11/powchangeandkeyreuse.html. Date accessed: 2018‑05‑29.
[16] "BitInfoCharts" [online]. Available: https://bitinfocharts.com. Date accessed: 2018‑05‑30.
[17] "Merged Mining with Monero" [online]. Available: https://minergate.com/blog/mergedminingwithmonero. Date accessed: 2018‑05‑30.
[18] "ANN DigitalNote XDN  ICCO Announce  NEWS" [online]. Available: https://bitcointalk.org/index.php?topic=1082745.msg16615346#msg16615346. Date accessed: 2018‑05‑31.
[19] "DigitalNote xdnproject" [online]. Available: https://github.com/xdnproject. Date accessed: 2018‑05‑31.
[20] "Monero Charts" [online]. Available: https://chainradar.com/xmr/chart. Date accessed: 2018‑05‑31.
[21] "Nicehash Statistics for CryptoNight v7" [online]. Available: https://www.nicehash.com/algorithm/cryptonightv7. Date accessed: 2018‑05‑31.
[22] "Huntercoin: A Blockchain based Game World" [online]. Available: http://huntercoin.org. Date accessed: 2018‑06‑01.
[23] "Myriad: A Coin for Everyone" [online]. Available: http://myriadcoin.org. Date accessed: 2018‑06‑01.
[24] "Merged Mining: Curse or Cure?" [online]. Available: https://eprint.iacr.org/2017/791.pdf. Date accessed: 2019‑02‑12.
[25] "Merged Mining: Analysis of Effects and Implications" [online]. Available: http://repositum.tuwien.ac.at/obvutwhs/download/pdf/2315652. Date accessed: 2019‑02‑12.
[26] "Problems  Consensus  8. Proof of Stake" [online]. Available: https://github.com/ethereum/wiki/wiki/Problems. Date accessed: 2018‑06‑05.
Contributors
Digital Assets
Purpose
"All digital assets provide value, but not all digital assets are valued equally" [1].
Definitions
Digital Assets

From Simplicable: A digital asset is something that has value and can be owned but has no physical presence [2].

From The Digital Beyond: A digital asset is content that is stored in digital form or an online account, owned by an individual. The associated digital data are classified as intangible, personal property, as long as they stay digital, otherwise they quickly become tangible personal property [3].

From Wikipedia: A digital asset, in essence, is anything that exists in a binary format and comes with the right to use. Data that do not possess that right are not considered assets. Digital assets come in many forms and may be stored on many types of digital appliances which are, or will be in existence once technology progresses to accommodate for the conception of new modalities which would be able to carry digital assets; notwithstanding the proprietorship of the physical device onto which the digital asset is located [4].
Non-fungible Tokens

From Wikipedia: A non-fungible token (NFT) is a special type of cryptographic token which represents something unique; non-fungible tokens are thus not interchangeable. This is in contrast to cryptocurrencies like Bitcoin, and many network or utility tokens that are fungible in nature. NFTs are used to create verifiable digital scarcity, for example in several specific applications that require unique digital items like crypto-collectibles and crypto-gaming [5].

From BTC Manager: Non-fungible tokens (NFTs) are blockchain tokens that are designed to be distinguishable from each other. Using unique metadata, avatars, individual token IDs, and custody chains, NFTs are created to ensure that no two NFT tokens are identical. This is because they exist to store information rather than value, unlike their fungible counterparts [6].
References
[1] "What Exactly is a Digital Asset & How to Get the Most Value from Them?" [online]. Available: https://merlinone.com/whatisadigitalasset/. Date accessed: 2019‑06‑10.
[2] "11 Examples of Digital Assets" [online]. Available: https://simplicable.com/new/digitalasset. Date accessed: 2019‑06‑10.
[3] J. Romano, "A Working Definition of Digital Assets" [online]. The Digital Beyond. Available: http://www.thedigitalbeyond.com/2011/09/aworkingdefinitionofdigitalassets/. Date accessed: 2019‑06‑10.
[4] Wikipedia: "Digital Asset" [online]. Available: https://en.wikipedia.org/wiki/Digital_asset. Date accessed: 2019‑06‑10.
[5] Wikipedia: "Non-fungible Token" [online]. Available: https://en.wikipedia.org/wiki/Nonfungible_token. Date accessed: 2019‑06‑10.
[6] BTC Manager: "Non-fungible Tokens: What Are They?" [online]. Available: https://btcmanager.com/nonfungibletokens/. Date accessed: 2019‑06‑10.
Application of Howey to Blockchain Network Token Sales
- Introduction
- What is a Security?
- What is an Investment Contract (i.e. Howey Test)?
- Fourth Prong of Howey Test - Efforts of Others
- Decentralization, Consumptive Purpose and Priming Purchasers' Expectations
- Conclusion
- Footnotes
- References
- Contributors
Introduction
Many blockchain networks use, or intend to use, cryptographic tokens ("tokens" or "digital assets") for various purposes. These purposes include an incentive for network participants to contribute computing power to the network; or to gain access to certain goods, services or other network functionality. Some promoters of such a network sell tokens, or the right to receive tokens once the network has launched. They then use the sales proceeds to finance development and maintenance of the networks.
The U.S. Securities and Exchange Commission (SEC) has determined that many of the token offerings and sales over the past few years have been in violation of federal securities laws. In some cases, the SEC has taken action against the promoters of the offerings.
Refer to the following Orders under the Order Instituting Cease-and-Desist Proceedings Pursuant to [1, Section 8A]:
- Making Findings, and Imposing a Cease-and-Desist Order against Munchee Inc. (the Munchee Order);
- Making Findings, and Imposing Penalties and a Cease-and-Desist Order against CarrierEQ, Inc., D/B/A Airfox (the Airfox Order);
- Making Findings, and Imposing Penalties and a Cease-and-Desist Order against Paragon Coin, Inc. (the Paragon Order); and
- Making Findings, and Imposing a Cease-and-Desist Order against GladiusNetwork LLC (the Gladius Order).
In each case, the SEC's decision to take action has been underpinned by its determination that the digital assets that were offered and sold were securities pursuant to Section 2(a)(1) of the Securities Act of 1933 (the "Securities Act" or "Act") [1]. Under [1, Section 5], it is generally unlawful for any person, directly or indirectly, to offer or sell a security without complying with the registration requirement of Section 5, unless the securities offering qualifies for an exemption from registration.
The sanctions for a violation of [1, Section 5] can be significant. They include a preliminary and permanent injunction; rescission; disgorgement; prejudgment interest; and civil money penalties. It is important that those seeking to promote token offerings and sales do so only after ensuring that the tokens to be sold are not securities under [1], or that the offering, if it involves securities, complies with [1, Section 5], or otherwise qualifies for an exemption from registration thereunder.
While the Securities Act [1]; relevant case law and administrative proceedings; as well as guidance and statements of the SEC and its officials shed much light on many of the considerations involved in an analysis of whether a given digital asset is a security, the facts and circumstances relating to each token, token offering and sale are typically quite unique. Many who seek to undertake token offerings and sales struggle to achieve clarity regarding whether such offerings and sales must be registered under [1], or have to fit within an exemption from the Act's registration requirements. As a result, there is significant regulatory uncertainty surrounding the offering and sale of tokens.
The Commission recognizes the need for more guidance in this area and continues to assist those seeking to achieve a greater understanding of whether the U.S. federal securities laws apply to the offer or sale of a particular digital asset. This report:

Reviews [2] (the "Framework"), which was recently published by the SEC's Strategic Hub for Innovation and Financial Technology, along with previous statements by William Hinman, the Director of the SEC's Division of Corporation Finance. Note: the Framework represents the views of SEC Staff. It is not a rule, regulation or statement of the SEC, and it is not binding on the Divisions of the SEC or the SEC itself.

Explains how [2] is indicative of the SEC's approach to applying the investment contract test laid out in [3] to digital assets. The SEC adopted this test, commonly referred to as the Howey test [4], and has applied the test in subsequent actions involving digital assets.^{f1} Note: [3, Section 21(a)] authorizes the SEC to investigate violations of the federal securities laws and, in its discretion, to "publish information concerning any such violations". The Report does not constitute an adjudication of any fact or issue, nor does it make any findings of violations by any individual or entity.
In particular, this report examines some of the SEC Staff's statements relating to the fourth prong of the Howey test, i.e. the "efforts of others", and discusses factors for promoters of token offerings to consider when assessing whether a token purchaser may have a reasonable expectation of earning profits from the significant efforts of others. It should be noted that this report is in no way intended to constitute or be relied upon as legal advice, or to substitute for obtaining competent legal advice from an experienced attorney.
What is a Security?
Section 2(a)(1) of the Securities Act [1] defines a security as follows:
The term "security" means any note, stock, treasury stock, security future, security-based swap, bond, debenture, evidence of indebtedness, certificate of interest or participation in any profit-sharing agreement, collateral-trust certificate, pre-organization certificate or subscription, transferable share, investment contract, voting-trust certificate, certificate of deposit for a security, fractional undivided interest in oil, gas, or other mineral rights, any put, call, straddle, option, or privilege on any security, certificate of deposit, or group or index of securities (including any interest therein or based on the value thereof), or any put, call, straddle, option, or privilege entered into on a national securities exchange relating to foreign currency, or, in general, any interest or instrument commonly known as a "security", or any certificate of interest or participation in, temporary or interim certificate for, receipt for, guarantee of, or warrant or right to subscribe to or purchase, any of the foregoing.
While blockchain network tokens (as well as a wide variety of other instruments) are not explicitly included in the definition of "security" under the Act, courts have abstracted the common elements of an "investment contract", which is included in the definition of "security" under Section 2(a)(1) (and Section 3(a)(10) of the Exchange Act of 1934, which is contained in [3]), to establish a "flexible rather than a static principle, one that is capable of adaptation to meet the countless and variable schemes devised by those who seek the use of the money of others on the promise of profits" [3, paragraph 299].
Accordingly, a determination of whether an instrument not specifically enumerated under Section 2(a)(1) of the Act may be deemed to be a security that implicates the federal securities laws must include an analysis of whether it is an investment contract under Howey.
What is an Investment Contract (i.e. Howey Test)?
Howey (and its progeny) established the framework of a four-part test to determine whether an instrument is an investment contract and therefore a security subject to the U.S. federal securities laws. Howey provides that an investment contract exists if there is:
- an investment of money;
- into a common enterprise;
- with a reasonable expectation of profits;
- derived from the entrepreneurial or managerial efforts of others.
For an instrument to be an investment contract, each element of the Howey test must be met. If any element of the test is not met, the instrument is not an investment contract.
Fourth Prong of Howey Test - Efforts of Others
While the "efforts of others" prong of the Howey test is, at some level, no more important in an application of Howey than any of the other prongs, it is frequently the prong on which the most uncertainty hangs. The "efforts of others" is often the focus when it comes to public blockchain networks. This is because the decentralization of control that many such projects seek to foster prompts the following question: do the kind and degree of the existing decentralization mean that any expectation of profits token purchasers may have does not come from the efforts of others for purposes of Howey? The determination that token purchasers reasonably expected profits to come from the efforts of a centralized (or at least coordinated) person or group has been central in the SEC's findings that the tokens were securities in each of the recent SEC enforcement actions relating to token offerings and sales.^{f1}
Decentralization, Consumptive Purpose and Priming Purchasers' Expectations
Background
On 14 June 2018, Hinman delivered a speech^{f2} addressing whether "a digital asset that was originally offered in a securities offering [can] ever later be sold in a manner that does not constitute an offering of a security", and noted two cases where he believed this was indeed possible:

"where there is no longer any central enterprise being invested in", e.g. purchases of the digital assets related to a decentralized enterprise or network; and

"where the digital asset is sold only to be used to purchase a good or service available through the network on which it was created", e.g. purchases of digital assets for a consumptive purpose.
He posed a set of six questions directly related to the application of the "efforts of others" prong of the Howey test to offerings and sales of digital assets:
Is there a person or group that has sponsored or promoted the creation and sale of the digital asset, the efforts of whom play a significant role in the development and maintenance of the asset and its potential increase in value?
Has this person or group retained a stake or other interest in the digital asset such that it would be motivated to expend efforts to cause an increase in value in the digital asset? Would purchasers reasonably believe such efforts will be undertaken and may result in a return on their investment in the digital asset?
Has the promoter raised an amount of funds in excess of what may be needed to establish a functional network, and, if so, has it indicated how those funds may be used to support the value of the tokens or to increase the value of the enterprise? Does the promoter continue to expend funds from proceeds or operations to enhance the functionality and/or value of the system within which the tokens operate?
Are purchasers 'investing', that is seeking a return? In that regard, is the instrument marketed and sold to the general public instead of to potential users of the network for a price that reasonably correlates with the market value of the good or service in the network?
Does application of the Securities Act protections make sense? Is there a person or entity others are relying on that plays a key role in the profitmaking of the enterprise such that disclosure of their activities and plans would be important to investors? Do informational asymmetries exist between the promoters and potential purchasers/investors in the digital asset?
Do persons or entities other than the promoter exercise governance rights or meaningful influence?
(He also posed a separate set of seven questions exploring "contractual or technical ways to structure digital assets so they function more like a consumer item and less like a security").
Hinman noted that these questions are useful to consider when assessing the facts and circumstances surrounding offerings and sales of digital assets to determine "whether a third party  be it a person, entity or coordinated group of actors  drives the expectation of a return". If such a party does drive an expectation of a return on a purchased digital asset, i.e. priming purchasers' expectations, there is a greater likelihood that the asset will be deemed to be an investment contract and therefore a security.
In the Framework [2], the SEC Staff similarly focuses attention on whether a purchaser of a digital asset has a reasonable expectation of profits derived from the efforts of others. These three themes: decentralization, consumptive purpose and priming purchasers' expectations, provide useful context for much of its discussion. The following offers for consideration select implications of the Framework's [2] guidance and Hinman's speech, as industry participants seek a greater understanding of whether or not a digital asset is a security.
Decentralization
The Framework [2] reinforces that the degree and nature of a promoter's involvement will have a bearing on the Howey analysis. The facts and circumstances relating to a promoter are key. If a promoter does not play a significant role in the "development, improvement (or enhancement), operation, or promotion of the network" [2, Section 3] underlying the tokens, it cuts against finding the "efforts of others" prong has been met. A promoter may seek to play a more limited role in these areas. Or, along similar lines, influence over the "essential tasks and responsibilities" of the network may be widely dispersed, i.e. decentralized, among many unaffiliated network stakeholders so that there is no identifiable "person or group" that continues to play a significant role, especially as compared to the role that the dispersed stakeholders play [2, Section 4]^{f3}.
The SEC has placed particular attention on promoter efforts to impact a token's supply and/or demand and has also focused on a promoter's efforts to use the proceeds from a token offering to create an ecosystem that will drive demand for the tokens once the network is functional.^{f1} Further, the SEC has singled out promoter efforts to maintain a token's price by intervening in the buying and selling of tokens, separate from developing and maintaining the underlying network.
Further, the SEC's Senior Advisor for Digital Assets, Valerie Szczepanik, noted promoter efforts in this area may implicate U.S. securities law, recently stating she has "seen stablecoins that purport to control price through some kind of pricing mechanism… controlled through supply and demand in some way to keep the price within a certain band", and "[W]here there is one central party controlling the price fluctuation over time, [that] might be getting into the land of securities" [6].
In contrast to the kinds of efforts that meet Howey's fourth prong, to the extent that a token's value is driven by market forces (supply and demand conditions), it suggests that the value of the token is attributable to factors other than the promoter's efforts. This is assuming that the promoter does not put some mechanism in place to manage token supply or demand in order to maintain a token's price.
In addition, decentralization of control calls into question whether the application of the Securities Act [1] makes sense to begin with. A main purpose of [1] is to ensure that securities issuers "tell the public the truth about their businesses, the securities they are selling, and the risks involved in investing" [5]. Typically, the issuer of a security plays a key role in the success of the enterprise. Investors rely on the experience, judgment and skill of the enterprise's management team and board of directors to drive profits. The disclosure requirements of [1] focus primarily on issuers of securities. A decentralized network, where no central person or entity plays a key role, however, challenges both identification of a party who should be subject to the disclosure obligations of [1], and the information expected to be disclosed. Indeed, speaking about Bitcoin, Hinman acknowledged as much, stating he "[does] not see a central third party whose efforts are a key determining factor in the enterprise. The network on which Bitcoin functions is operational and appears to have been decentralized for some time. Applying the disclosure regime of the federal securities laws to the offer and resale of Bitcoin would seem to add little value". Decentralization ultimately invokes consideration of whether application of [1] truly serves its purpose, as well as the practical realities of how doing so would even work.
Consumptive Purpose
Token purchasers who plan to use their tokens to access a network's functionality typically are not speculating that the tokens will increase in value. The finding that a token was offered to potential purchasers in a manner inconsistent with a consumptive purpose has been a factor in several of the SEC's recent orders, for example:

From paragraph 18 of the Munchee Order [1, Section 8A]: "[The] marketing did not use the Munchee App or otherwise specifically target current users of the Munchee App to promote how purchasing MUN tokens might let them qualify for higher tiers and bigger payments on future reviews... Instead, Munchee and its agents promoted the MUN token offering in forums aimed at people interested in investing in Bitcoin and other digital assets."

- From paragraph 16 of the Airfox Order [1, Section 8A]: "AirFox primarily aimed its promotional efforts for the initial coin offering at digital token investors rather than anticipated users of AirTokens."
Accordingly, a promoter may seek to limit the offering and sale of tokens to prospective network users. A promoter may also seek to ensure that the marketing and other communication related to the token offering are, in both content and intended audience, consistent with consumption being the reason for purchasing tokens. The SEC Staff has indicated that the argument that purchasers are acquiring tokens for consumption will be stronger once the network is functional and tokens can actually be used to purchase goods or access services through the network.
For example:

- Reference [2, Section 9] states that it is less likely the Howey test is met if various characteristics are present, including "[h]olders of the digital asset are immediately able to use it for its intended functionality on the network".

- Paragraph 7 of the Airfox Order [1, Section 8A] states: "The terms of AirFox's initial coin offering purported to require purchasers to agree that they were buying AirTokens for their utility as a medium of exchange for mobile airtime, and not as an investment or a security. At the time of the ICO, this functionality was not available… Despite the reference to AirTokens as a medium of exchange, at the time of the ICO, investors purchased AirTokens based upon anticipation that the value of the tokens would rise through AirFox's future managerial and entrepreneurial efforts."
Promoters may also consider whether "restrictions on the transferability of the digital asset are consistent with the asset's use and not facilitating a speculative market" [2, Section 10].
Priming Purchasers' Expectations
The SEC has given much attention to whether token purchasers were led to expect that the promoter would make efforts to increase token value. In this respect, in bringing enforcement actions, the SEC has highlighted statements made by promoters that specifically tout the opportunity to profit from purchasing a token.
For example, from paragraph 17 of the Munchee Order [1, Section 8A]:
Munchee made public statements or endorsed other people's public statements that touted the opportunity to profit. For example… Munchee created a public posting on Facebook, linked to a third-party YouTube video, and wrote "199% GAINS on MUN token at ICO price! Sign up for PRE-SALE NOW!" The linked video featured a person who said "Today we are going to talk about Munchee. Munchee is a crazy ICO… Pretty much, if you get into it early enough, you'll probably most likely get a return on it." This person… "speculate[d]" that a \$1,000 investment could create a \$94,000 return.
Promotional statements explaining that tokens would be listed on digital asset exchanges for secondary trading also draw attention. Refer to the following examples.

- From paragraph 13 of the Munchee Order [1, Section 8A]: Munchee stated it would work to ensure that MUN holders would be able to sell their MUN tokens on secondary markets, saying that "Munchee will ensure that MUN token is available on a number of exchanges in varying jurisdictions to ensure that this is an option for all tokenholders."

- From paragraph 22 of the Gladius Order [1, Section 8A]: During and following the offering, Gladius attempted to make GLA Tokens available for trading on major digital asset trading platforms. On Gladius web pages, Gladius principals and agents stated that "[w]e've been approached by some of the largest exchanges, they're very interested," and represented that the GLA Token would be available to trade on "major" trading platforms after the ICO.

- From [2, page 5], which states that it is more likely that there is a reasonable expectation of profit if various characteristics are present, including if the digital asset is marketed using "[t]he availability of a market for the trading of the digital asset, particularly where the [promoter] implicitly or explicitly promises to create or otherwise support a trading market for the digital asset".
Secondary trading on exchanges provides token holders an alternative to using the token on the network and, especially when facilitated and highlighted by the promoter, can support a finding of investment intent for token purchasers, i.e. buying with an eye on reselling for a profit instead of consuming goods or services through the network.
The Framework [2, Section 6] states that it is more likely there is a reasonable expectation of profit if various characteristics are present, including if "the digital asset is offered broadly to potential purchasers as compared to being targeted to expected users of the goods or services or those who have a need for the functionality of the network", and/or "[t]he digital asset is offered and purchased in quantities indicative of investment intent instead of quantities indicative of a user of the network".
The Framework [2] also notes that purchasers may have an expectation that the promoter will "undertake efforts to promote its own interests and enhance the value of the network or digital assets" [2, Section 5]. Token purchasers who are not aware of the promoter's interest in the network's success, such as through the promoter having retained a stake in the tokens, should not have any reason to expect the promoter to undertake any efforts to drive token value in order to increase its own profits.
Conclusion
Parties seeking to undertake offerings and sales of tokens in compliance with U.S. securities laws may look to the various actions and statements of the SEC for guidance as to whether the token is a security under the Act. As tokens are not specifically enumerated under the definition of "security", the analysis hinges on the application of the Howey test to determine if the token is an "investment contract" under the Act, and therefore a security. The decentralization of control that many public blockchain projects seek to foster makes Howey's "efforts of others" prong a key area of focus. Questions regarding the application of this prong generally relate to:
- decentralization of the enterprise underlying the tokens;
- consumptive purpose of token purchasers; and
- whether token purchasers were led to have an expectation that the promoter would take efforts to drive token value.
This report has offered insight into some of the factors that promoters of token offerings may consider in assessing whether the kind and degree of decentralization that exists means that any expectation of profits token purchasers may have does not come from the efforts of others. It is also indicative of the SEC's approach to evaluating the offer and sale of digital assets.
Footnotes
f1: For examples, refer to paragraph 32 of the Munchee Order and paragraph 21 of the Airfox Order and accompanying text.
Paragraph 32 of the Munchee Order: "MUN token purchasers had a reasonable expectation of profits from their investment in the Munchee enterprise. The proceeds of the MUN token offering were intended to be used by Munchee to build an 'ecosystem' that would create demand for MUN tokens and make MUN tokens more valuable".
Paragraph 21 of the Airfox Order: "AirFox told investors that the company would improve the AirFox App, add new functionality, enter into agreements with third-party telecommunication companies, and take other steps to encourage the use of AirTokens and foster the growth of the ecosystem. Investors reasonably expected they would profit from the success of AirFox's efforts to grow the ecosystem and the concomitant rise in the value of AirTokens".
f2: William Hinman, Director of the Division of Corp. Fin., SEC, Remarks at the Yahoo Finance All Markets Summit: Crypto: Digital Asset Transactions: When Howey Met Gary (Plastic) (14 June 2018).
f3: Reference [2, Section 4] states that it is more likely that the purchaser of a digital asset is relying on the efforts of others if various characteristics are present. This includes "[t]here are essential tasks or responsibilities performed and expected to be performed by [a promoter], rather than an unaffiliated, dispersed community of network users".
References
[1] U.S. Congress, "United States Code: Securities Act of 1933, 15 U.S.C. §§ 77a–77mm" [online]. Available: https://www.loc.gov/item/uscode1934-001015002a/. Date accessed: 2019‑03‑07.
[2] "Framework for 'Investment Contract' Analysis of Digital Assets" [online]. Available: https://www.sec.gov/corpfin/framework-investment-contract-analysis-digital-assets. Date accessed: 2019‑03‑10.
[3] U.S. Congress, "United States Code: Securities Exchanges, 15 U.S.C. §§ 78a–78jj" [online]. Available: https://www.loc.gov/item/uscode1964-003015002b/. Date accessed: 2019‑03‑07.
[4] "SEC v. W. J. Howey Co., 328 U.S. 293 (1946)" [online]. Available: https://en.wikipedia.org/wiki/SEC_v._W._J._Howey_Co. Date accessed: 2019‑04‑05.
[5] "U.S. Securities and Exchange Commission, What We Do" [online]. Available: https://www.sec.gov/Article/whatwedo.html. Date accessed: 2018‑03‑07.
[6] G. Jimenez, "SEC's Crypto Czar: Stablecoins might be Violating Securities Laws", Decrypt (2019) [online]. Available: https://decryptmedia.com/5940/secs-crypto-czar-stablecoins-might-be-violating-securities-laws. Date accessed: 2019‑03‑19.
Contributors
Non-Fungible Tokens
Confidential Assets
- Introduction
- Preliminaries
- Basis of Confidential Assets
- Confidential Asset Implementations
- Conclusions, Observations and Recommendations
- References
- Appendices
- Contributors
Introduction
Confidential assets, in the context of blockchain technology and blockchain-based cryptocurrencies, can have different meanings to different audiences, and can also be something totally different or unique depending on the use case. They are a special type of digital asset and inherit all of its properties, except that they are also confidential. Confidential assets therefore have value and can be owned, but have no physical presence. The confidentiality aspect implies that the amount of assets owned, as well as the asset type that was transacted in, can be confidential. A further classification can be made with regard to whether they are fungible (interchangeable) or non-fungible (unique, not interchangeable). Confidential assets can only exist in the form of a cryptographic token, or a derivative thereof, that is also cryptographically secure, at least under the Discrete Logarithm Problem^{def} (DLP) assumption.
The basis of confidential assets is confidential transactions as proposed by Maxwell [4] and Poelstra et al. [5], where the amounts transferred are kept visible only to participants in the transaction (and those they designate). Confidential transactions succeed in making the transaction amounts private, while still preserving the ability of the public blockchain network to verify that the ledger entries and Unspent Transaction Output (UTXO) set still add up. All amounts in the UTXO set are blinded, while preserving public verifiability. Poelstra et al. [5] showed how the asset types can also be blinded in conjunction with the output amounts. Multiple asset types can be accommodated within single transactions on the same blockchain.
This report investigates confidential assets as a natural progression of confidential transactions.
Preliminaries
This section gives the general notation of mathematical expressions when specifically referenced in the report. This notation is important prerequisite knowledge for the remainder of the report.
- Let $ p $ be a large prime number.
- Let $ \mathbb G $ denote a cyclic group of prime order $ p $.
- Let $ \mathbb Z_p $ denote the ring of integers modulo $ p $.
- Let $ \mathbb F_p $ be a group of elliptic curve points over a finite (prime) field.
- All references to Pedersen Commitment will imply Elliptic Curve Pedersen Commitment.
Basis of Confidential Assets
Confidential transactions, asset commitments and Asset Surjection Proofs (ASPs) form the basis of confidential assets. These concepts are discussed in the following sections.
Confidential Transactions
Confidential transactions are made confidential by replacing each explicit UTXO with a homomorphic commitment, such as a Pedersen Commitment, and are made robust against overflow and inflation attacks by using efficient zero-knowledge range proofs, such as a Bulletproof [1].
Range proofs provide proof that a secret value, which has been encrypted or committed to, lies in a certain interval, e.g. proof that a number $ x \in [0, 2^{64} - 1] $. They prevent any numbers coming near the magnitude of a large prime, say $ 2^{256} $, which could cause wraparound when adding a small number.
Pedersen Commitments are perfectly hiding (an attacker with infinite computing power cannot tell what amount has been committed to) and computationally binding (no efficient algorithm running in a practical amount of time can produce fake commitments, except with small probability). The Elliptic Curve Pedersen Commitment to value $ x \in \mathbb Z_p $ has the following form:
$$ C(x,r) = xH + rG \tag{1} $$
where $ r \in \mathbb Z_p $ is a random blinding factor, $ G \in \mathbb F_p $ is a random generator point and $ H \in \mathbb F_p $ is specially chosen so that the value $ x_H $ satisfying $ H = x_H G $ cannot be found, except if the Elliptic Curve DLP^{def} (ECDLP) is solved. The number $ H $ is what is known as a Nothing Up My Sleeve (NUMS) number. With secp256k1, the value of $ H $ is the SHA256 hash of a simple encoding of the $ x $-coordinate of the generator point $ G $. The Pedersen Commitment scheme is implemented with three algorithms: `Setup()` to set up the commitment parameters $ G $ and $ H $; `Commit()` to commit to the message $ x $ using the commitment parameters $ r $, $ H $ and $ G $; and `Open()` to open and verify the commitment ([5], [6], [7], [8]).
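As an illustration of these three algorithms, the following minimal Python sketch implements the Pedersen Commitment scheme in a toy multiplicative group (the quadratic residues modulo a small safe prime) rather than an elliptic curve group. All parameters are illustrative assumptions chosen for readability; in a real system $ H $ would be a NUMS point whose discrete log relative to $ G $ is unknown.

```python
# Toy Pedersen Commitment scheme in a multiplicative group. Mapping to the
# elliptic curve notation of equation (1): point addition -> modular
# multiplication, scalar multiplication -> modular exponentiation.
P = 2039   # safe prime: P = 2*Q + 1 (illustrative, far too small for use)
Q = 1019   # prime order of the quadratic-residue subgroup
G = 4      # generator of the subgroup (2^2, a quadratic residue)
H = 9      # second generator (3^2); log_G(H) must be unknown in practice

def setup():
    """Return the public commitment parameters."""
    return G, H

def commit(x, r):
    """Commit to value x with blinding factor r: C = H^x * G^r mod P."""
    return (pow(H, x, P) * pow(G, r, P)) % P

def open_commitment(c, x, r):
    """Verify that commitment c opens to (x, r)."""
    return c == commit(x, r)

# Homomorphic property (additive in the exponents):
# C(x1, r1) * C(x2, r2) = C(x1 + x2, r1 + r2)
assert open_commitment(commit(5, 17), 5, 17)
assert (commit(5, 17) * commit(7, 23)) % P == commit(12, 40)
```

The homomorphic property checked in the last line is what allows a verifier to confirm that ledger entries and the UTXO set still add up without learning the committed amounts.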
Mimblewimble ([9], [10]) is based on and achieves confidentiality using these confidential transaction primitives. If confidentiality is not sought, inputs may be given as explicit amounts, in which case the homomorphic commitment to the given amount will have a blinding factor $ r = 0 $.
Asset Commitments and Surjection Proofs
The different assets need to be identified and transacted with in a confidential manner, and proven to not be inflationary. This is made possible by using asset commitments and ASPs ([1], [14]).
Given some unique asset description $ A $, the associated asset tag $ H_A \in \mathbb G $ is calculated using the Pedersen Commitment function `Setup()` with $ A $ as auxiliary input. (Selection of $ A $ is discussed in Asset Issuance.) Consider a transaction with two inputs and two outputs involving two distinct asset types $ A $ and $ B $:
$$ \begin{aligned} in_A = x_1H_A + r_{A_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_A = x_2H_A + r_{A_2}G \\ in_B = y_1H_B + r_{B_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_B = y_2H_B + r_{B_2}G \end{aligned} \tag{2} $$
In the equations that follow, a commitment to the value of $ 0 $ will be denoted by $ (\mathbf{0}) $, i.e. $C(0,r) = 0H + rG = (\mathbf{0})$. For relation (2) to hold, the sum of the outputs minus the sum of the inputs must be a commitment to the value of $ 0 $:
$$ \begin{aligned} (out_A + out_B) - (in_A + in_B) = (\mathbf{0}) \\ (x_2H_A + r_{A_2}G) + (y_2H_B + r_{B_2}G) - (x_1H_A + r_{A_1}G) - (y_1H_B + r_{B_1}G) = (\mathbf{0}) \\ (r_{A_2} + r_{B_2} - r_{A_1} - r_{B_1})G + (x_2 - x_1)H_A + (y_2 - y_1)H_B = (\mathbf{0}) \end{aligned} \tag{3} $$
Since $ H_A $ and $ H_B $ are both NUMS asset tags, the only way relation (3) can hold is if the total input and output amounts of asset $ A $ are equal, and if the same is true for asset $ B $. This concept can be extended to an unlimited number of distinct asset types, as long as each asset tag is a unique NUMS generator. The problem with relation (3) is that the asset type of each output is publicly visible; the assets that were transacted in are thus not confidential. This can be solved by replacing each asset tag with a blinded version of itself. The asset commitment to asset tag $ H_A $ (the blinded asset tag) is then defined as the point
$$ H_{0_A} = H_A + rG \tag{4} $$
Blinding of the asset tag is necessary to make transactions in the asset, i.e. which asset was transacted in, confidential. The blinded asset tag $ H_{0_A} $ will then be used in place of the generator $ H $ in the Pedersen Commitments. Such Pedersen Commitments thus commit to the committed amount as well as to the underlying asset tag. Inspecting the Pedersen Commitment, it is evident that a commitment to the value $ x_1 $ using the blinded asset tag $ H_{0_A} $ is also a commitment to the same value using the asset tag $ H_A $:
$$ x_1H_{0_A} + r_{A_1}G = x_1(H_A + rG) + r_{A_1}G = x_1H_A + (r_{A_1} + x_1r)G \tag{5} $$
Using blinded asset tags, the transaction in relation (2) then becomes:
$$ \begin{aligned} in_A = x_1H_{0_A} + r_{A_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_A = x_2H_{0_A} + r_{A_2}G \\ in_B = y_1H_{0_B} + r_{B_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_B = y_2H_{0_B} + r_{B_2}G \end{aligned} \tag{6} $$
Correspondingly, the zero sum rule translates to:
$$ \begin{aligned} (out_A + out_B) - (in_A + in_B) = (\mathbf{0}) \\ (x_2H_{0_A} + r_{A_2}G) + (y_2H_{0_B} + r_{B_2}G) - (x_1H_{0_A} + r_{A_1}G) - (y_1H_{0_B} + r_{B_1}G) = (\mathbf{0}) \\ (r_{A_2} + r_{B_2} - r_{A_1} - r_{B_1})G + (x_2 - x_1)H_{0_A} + (y_2 - y_1)H_{0_B} = (\mathbf{0}) \end{aligned} \tag{7} $$
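Equation (5) and the zero-sum behavior can be checked numerically. The sketch below again uses a toy multiplicative group (quadratic residues modulo a small safe prime) in place of an elliptic curve, so point addition becomes modular multiplication; all parameters are illustrative assumptions.

```python
# Numerical check of equation (5): a commitment under a blinded asset tag
# is also a commitment to the same value under the plain tag, with an
# adjusted blinding factor. Toy group parameters, for illustration only.
P = 2039   # safe prime; the quadratic-residue subgroup has prime order Q
Q = 1019
G = 4      # blinding generator
H_A = 25   # toy asset tag for asset A (5^2, a quadratic residue)

r = 11                               # asset-tag blinding factor
H0_A = (H_A * pow(G, r, P)) % P      # blinded asset tag: H_A * G^r

x1, rA1 = 9, 31                      # amount and commitment blinding
# A commitment under the blinded tag ...
lhs = (pow(H0_A, x1, P) * pow(G, rA1, P)) % P
# ... equals a commitment under the plain tag with blinding rA1 + x1*r.
rhs = (pow(H_A, x1, P) * pow(G, (rA1 + x1 * r) % Q, P)) % P
assert lhs == rhs

# Single-asset zero-sum check in the spirit of relation (7): with equal
# input and output amounts, out/in collapses to a pure power of G.
rA2 = 47
in_A = (pow(H0_A, x1, P) * pow(G, rA1, P)) % P
out_A = (pow(H0_A, x1, P) * pow(G, rA2, P)) % P
diff = (out_A * pow(in_A, -1, P)) % P   # out_A * in_A^{-1}
assert diff == pow(G, (rA2 - rA1) % Q, P)
```

Note that `pow(in_A, -1, P)` (modular inverse via the built-in `pow`) requires Python 3.8 or later.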
However, using only the zero-sum rule, it is still possible to introduce negative amounts of an asset type. Consider the blinded asset tag
$$ H_{0_A} = -H_A + rG \tag{8} $$
Any amount of this blinded asset tag $ H_{0_A} $ will correspond to a negative amount of asset $ A $, thereby inflating its supply. To solve this problem, an ASP is introduced, which is a cryptographic proof. In mathematics, a surjective function simply means that for every element $ y $ in the codomain $ Y $ of function $ f $, there is at least one element $ x $ in the domain $ X $ of function $ f $ such that $ f(x) = y $.
An ASP scheme provides a proof $ \pi $ for a set of input asset commitments $ [ H_i ] ^n_{i=1} $ and an output commitment $ H = H_{\hat i} + rG $ for some $ \hat i \in \lbrace 1, \dots, n \rbrace $ and blinding factor $ r $. It proves that every output asset type is the same as some input asset type, while blinding which outputs correspond to which inputs. Such a proof $ \pi $ is secure if it is a zero-knowledge proof of knowledge for the blinding factor $ r $. Let $ H_{0_{A1}} $ and $ H_{0_{A2}} $ be blinded asset tags that commit to the same asset tag $ H_A $:
$$ H_{0_{A1}} = H_A + r_1G \mspace{15mu} \mathrm{and} \mspace{15mu} H_{0_{A2}} = H_A + r_2G \tag{9} $$
Then
$$ H_{0_{A1}} - H_{0_{A2}} = (H_A + r_1G) - (H_A + r_2G) = (r_1 - r_2)G \tag{10} $$
will be a signature key with secret key $ r_1 - r_2 $. Thus, for a transaction involving $ n $ distinct asset types, differences can be calculated between each output and all inputs, e.g. $ (out_A - in_A), (out_A - in_B), \dots, (out_A - in_n) $, and so on for all outputs. This has the form of a ring signature: if $ out_A $ has the same asset tag as one of the inputs, the transaction signer will know the secret key corresponding to one of these differences, and will be able to produce the ring signature. The ASP is based on the Back-Maxwell range proof (refer to Definition 9 of [1]), which uses a variation of Borromean ring signatures [18]. The Borromean ring signature, in turn, is a variant of the Abe-Ohkubo-Suzuki (AOS) ring signature [19]. An AOS ASP computes a ring signature that is equal to the proof $ \pi $ as follows:
- Calculate $ n $ differences $ H - H_{\hat i} $ for $ \hat i = 1, \dots, n $, one of which will be equal to $ rG $ with known blinding factor $ r $.
- Calculate a ring signature $ S $ of an empty message using the $ n $ differences.
The resulting ring signature $ S $ is equal to the proof $ \pi $, and the ASP consists of this ring signature $ S $ ([1], [14]).
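The two steps above can be sketched in code. The following is a minimal, illustrative AOS-style ring signature over the same kind of toy multiplicative group used earlier; the construction in [1] uses the Borromean variant over an elliptic curve, and the group parameters, hashing and encodings here are simplifying assumptions.

```python
import hashlib
import random

P, Q, G = 2039, 1019, 4   # toy group: quadratic residues mod safe prime P

def hpoint(msg, elem):
    """Hash the message and a group element to a challenge in Z_Q."""
    data = msg + b"|" + str(elem).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def aos_sign(msg, keys, j, x):
    """AOS ring signature: proves knowledge of x with keys[j] == G^x,
    without revealing which ring member the signer controls."""
    n = len(keys)
    s, e = [None] * n, [None] * n
    k = random.randrange(1, Q)
    e[(j + 1) % n] = hpoint(msg, pow(G, k, P))
    i = (j + 1) % n
    while i != j:                      # walk the ring with random s values
        s[i] = random.randrange(1, Q)
        nxt = (i + 1) % n
        e[nxt] = hpoint(msg, (pow(G, s[i], P) * pow(keys[i], e[i], P)) % P)
        i = nxt
    s[j] = (k - e[j] * x) % Q          # close the ring at the signer
    return e[0], s

def aos_verify(msg, keys, sig):
    """Recompute the challenge chain and check that the ring closes."""
    e0, s = sig
    e = e0
    for i in range(len(keys)):
        e = hpoint(msg, (pow(G, s[i], P) * pow(keys[i], e, P)) % P)
    return e == e0

# ASP usage: three toy input asset tags; the output blinds tags[1].
tags = [25, 49, 81]                    # toy (unblinded) input asset tags
j, r = 1, 13
H0 = (tags[j] * pow(G, r, P)) % P      # blinded output tag: tags[j] * G^r
# Differences H0 / H_i; the one at index j equals G^r with known r.
diffs = [(H0 * pow(t, -1, P)) % P for t in tags]
sig = aos_sign(b"", diffs, j, r)       # ring signature of an empty message
assert aos_verify(b"", diffs, sig)
```

The verifier learns only that some difference has a known discrete log, i.e. that the output's asset tag matches one of the input tags, without learning which one.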
Confidential Asset Scheme
Using the building blocks discussed in Basis of Confidential Assets, asset issuance, asset transactions and asset reissuance can be performed in a confidential manner.
Asset Transactions
Confidential assets propose a scheme in which multiple non-interchangeable asset types can be supported within a single transaction. This all happens within one blockchain and can theoretically improve the value of the blockchain by offering a service to more users. It can also enable extended functionality such as base-layer atomic asset trades. The latter implies Alice can offer Bob $ 100 $ of asset type $ A $ for $ 50 $ of asset type $ B $ in a single transaction, both participants using a single wallet. In this case, no relationship between output asset types can be established or inferred, because all asset tags are blinded. Privacy is also increased: by avoiding multiple single-asset transactions, the blinded asset types add another dimension that must be unraveled in order to obtain user identities and transaction data. Such a confidential asset scheme simplifies verification, reduces complexity and reduces on-chain data. It also prevents censorship of transactions involving specific asset types, and blinds assets with low transaction volume, where users could otherwise be identified very easily [1].
Assets originate in asset-issuance inputs, which take the place of coinbase transactions in confidential transactions. The asset type to pay fees must be revealed in each transaction, but in practice, all fees could be paid in only one asset type, thus preserving privacy. Payment authorization is achieved by means of the input signatures. A confidential asset transaction consists of the following data:

- A list of inputs, each of which can have one of the following forms:
  - a reference to an output of another transaction, with a signature using that output's verification key; or
  - an asset issuance input, which has an explicit amount and asset tag.

- A list of outputs, each of which contains:
  - a signature verification key;
  - an asset commitment $ H_0 $ with an ASP from all input asset commitments to $ H_0 $;
  - a Pedersen Commitment to an amount using generator $ H_0 $ in place of $ H $, with the associated Back-Maxwell range proof.

- A fee, listed explicitly as $ \lbrace (f_i, H_i) \rbrace_{i=1}^n $, where $ f_i $ is a non-negative scalar amount denominated in the asset with tag $ H_i $.
Every output has a range proof and an ASP associated with it, which are proofs of knowledge of the Pedersen Commitment opening information and the asset commitment blinding factor. Every range proof can be considered as being with respect to the underlying asset tag $ H_A $, rather than the asset commitment $ H_0 $. Verification then proceeds as for a confidential transaction restricted to inputs and outputs with asset tag $ H_A $, except that output commitments minus input commitments minus fee commitments must sum to a commitment to $ 0 $ instead of to the point $ 0 $ itself [1].
However, confidential assets come at an additional data cost. For a transaction with $ m $ outputs and $ n $ inputs, relative to the units of space used for confidential transactions, each asset commitment has size $ 1 $, each ASP has size $ n + 1 $, and the transaction outputs therefore have total size $ m(n + 2) $ [1].
Asset Issuance
It is important to ensure that any auxiliary input $ A $ used to create asset tag $ H_A \in \mathbb G $ be used only once, to prevent inflation by means of many independent issuances. Associating a maximum of one issuance with the spending of a specific UTXO can ensure this uniqueness property. Poelstra et al. [5] suggest a Ricardian contract [11] be hashed together with the reference to the UTXO being spent. This hash can then be used to generate the auxiliary input $ A $ as follows. Let $ I $ be the input being spent (an unambiguous reference to a specific UTXO used to create the asset) and let $ \widehat {RC} $ be the issuer-specified Ricardian contract; then the asset entropy $ E $ is defined as
$$ E = \mathrm {Hash} ( \mathrm {Hash} (I) \parallel \mathrm {Hash} (\widehat {RC})) \tag{11} $$
The auxiliary input $ A $ is then defined as
$$ A = \mathrm {Hash} ( E \parallel 0) \tag{12} $$
Note that a Ricardian contract $ \widehat {RC} $ is not crucial for generating the entropy $ E $; it is merely suggested, as some other unique NUMS value could be used in its stead. Ricardian contracts warrant a bit more explanation and are discussed in Appendix B.
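Equations (11) and (12) can be sketched as follows, assuming SHA-256 as the hash function and simple placeholder byte encodings for $ I $, $ \widehat{RC} $ and the suffixes; these encodings are illustrative assumptions, not the exact serialization used in [1].

```python
import hashlib

def H(data: bytes) -> bytes:
    """Hash function; SHA-256 is an illustrative assumption."""
    return hashlib.sha256(data).digest()

# Hypothetical placeholder encodings, for illustration only.
I = b"utxo-reference"        # unambiguous reference to the UTXO being spent
RC = b"ricardian-contract"   # issuer-specified Ricardian contract

E = H(H(I) + H(RC))          # asset entropy, equation (11)
A = H(E + b"\x00")           # auxiliary input for the asset tag, equation (12)
A_hat = H(E + b"\x01")       # auxiliary input for the reissuance capability
assert A != A_hat            # issuance and reissuance inputs are distinct
```

Because $ E $ commits to a specific spent UTXO, the same auxiliary input cannot be reproduced by a second, independent issuance.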
Every non-coinbase transaction input can have up to one new asset issuance associated with it. An asset issuance (or asset definition) transaction input then consists of the UTXO being spent, the Ricardian contract, either an initial issuance explicit value or a Pedersen Commitment, a range proof and a Boolean field indicating whether reissuance is allowed ([1], [13]).
Asset Reissuance
The confidential asset scheme allows the asset owner to later increase or decrease the amount of the asset in circulation, provided that an asset reissuance token is generated together with the initial asset issuance. Given an asset entropy $ E $, the asset reissuance capability is the element (asset tag) $ H_{\hat A} \in \mathbb G $, obtained using an alternative auxiliary input $ \hat A $ defined as
$$ \hat A = \mathrm {Hash} ( E \parallel 1) \tag{13} $$
The resulting asset tag $ H_{\hat A} \in \mathbb G $ is linked to its reissuance capability, and the asset owner can assert their reissuance right by revealing the blinding factor $ r $ for the reissuance capability along with the original asset entropy $ E $. An asset reissuance (or asset definition) transaction input then consists of the spend of a UTXO containing an asset reissuance capability, the original asset entropy, the blinding factor for the asset commitment of the UTXO being spent, either an explicit reissuance amount or Pedersen commitment and a range proof [1].
The same mechanism can be used to manage capabilities for other restricted operations, e.g. to decrease issuance, to destroy the asset, or to make the commitment generator the hash of a script that validates the spending transaction. It is also possible to change the name of the default asset that is created upon blockchain initialization and the default asset used to pay fees on the network ([1], [13]).
Flexibility
ASPs prove that the asset commitments associated with outputs commit to legitimately issued asset tags. This feature allows compatible blockchains to support indefinitely many asset types, which may be added after the chain has been defined. There is room to adapt this scheme for optimal tradeoff between ASP data size and privacy by introducing a global dynamic list of assets, whereby each transaction selects a subset of asset tags for the corresponding ASPs [1].
If all the asset tags are defined at the instantiation of the blockchain, this will be compatible with the Mimblewimble protocol. The range proofs used for the development of this scheme were based on the BackMaxwell range proof scheme (refer to Definition 9 of [1]). Poelstra et al. [1] suggest more efficient range proofs, ASPs and use of aggregate range proofs. It is thus an open question whether Bulletproofs could fulfill this requirement.
Confidential Asset Implementations
Three independent implementations of confidential assets are summarized here. The first two implementations closely resemble the theoretical description given in [1], while the last implementation adds Bulletproofs functionality to confidential assets.
Elements Project
Elements [31] is an open source, sidechain-capable blockchain platform, providing access to advanced features such as Confidential Transactions and Issued Assets in GitHub: ElementsProject/elements [16]. It allows digitizable collectables, reward points and attested assets (e.g. gold coins) to be realized on a blockchain. The main idea behind Elements is to serve as a research platform and testbed for changes to the Bitcoin protocol. The Elements project's implementation of confidential assets is called Issued Assets ([13], [14], [15]) and is based on its formal publication in [1].
The Elements project hosts a working demonstration (shown in Figure 2) of confidential asset transfers involving five parties in GitHub: ElementsProject/confidential-assets-demo [17]. The demonstration depicts a scenario where a coffee shop owner, Dave, charges a customer, Alice, for coffee in an asset called MELON. Alice does not hold enough MELON and needs to convert some AIRSKY into MELON, making use of an exchange operated by Charlie. The coffee shop owner, Dave, has a competitor, Bob, who is trying to gather information about Dave's sales. Due to the blockchain's confidential transactions and assets features, Bob will not be able to see anything useful by processing transactions on the blockchain. Fred is a miner and does not care about the details of the transactions, but he makes blocks on the blockchain when transactions enter his miner mempool. The demonstration also includes generating the different types of assets.
Figure 2: Elements Confidential Assets Transfer Demonstration [17]
Chain Core Confidential Assets
Chain Core [20] is a shared, multi-asset, cryptographic ledger, designed for enterprise financial infrastructure. It supports the coexistence and interoperability of multiple types of assets on the same network in its Confidential Assets framework. Chain Core is based on [1] and is available as an open source project in GitHub: chain/chain [21], which has been archived. It has been succeeded by Sequence, a ledger-as-a-service project, which enables secure tracking and transfer of balances in a token format ([22], [23]). Sequence offers a free plan for up to 1,000,000 transactions per month.
Chain Core implements all native features as defined in [1]. It was also working towards implementing ElGamal commitments into Chain Core to make its Confidential Assets framework quantum secure, but it is unclear if this effort was concluded at the time the project was archived ([24], [25]).
Cloak
Chain/Interstellar [26] introduced Cloak [29], a redesign of Chain Core's Confidential Assets framework to make use of Bulletproof range proofs [27]. It is available as an open source project in GitHub: interstellar/spacesuit [28]. Cloak is all about confidential asset transactions, called cloaked transactions, which exchange values of different asset types, called flavors. The protocol ensures that values are not transmuted to any other asset types, that quantities do not overflow, and that both quantities and asset types are kept secret.
A traditional Bulletproofs implementation converts an arithmetic circuit into a Rank-1 Constraint System (R1CS); Cloak bypasses arithmetic circuits and provides an Application Programming Interface (API) for building a constraint system directly. The R1CS API consists of a hierarchy of task-specific “gadgets”, and is used by the Prover and Verifier alike to allocate variables and define constraints. Cloak uses a collection of gadgets such as “shuffle”, “merge”, “split” and “range proof” to build a constraint system for cloaked transactions. All transactions of the same size are indistinguishable, because the layout of all the gadgets is determined only by the number of inputs and outputs.
At the time of writing this report, the Cloak development was still ongoing.
Conclusions, Observations and Recommendations

- The idea to embed a Ricardian contract in the asset tag creation, as suggested by Poelstra et al. [1], warrants more investigation for a new confidential blockchain protocol such as Tari. Ricardian contracts could be used in asset generation in the probable second layer.

- Asset commitments and ASPs are important cryptographic primitives for confidential asset transactions.

The Elements project implemented a range of useful confidential asset framework features and should be critically assessed for usability in a probable Tari second layer.

Cloak has the potential to take confidential assets implementation to the next level in efficiency and should be closely monitored. Interstellar is in the process of fully implementing and extending Bulletproofs for use in confidential assets.

If confidential assets are to be implemented in a Mimblewimble blockchain, all asset tags must be defined at their instantiation, otherwise they will not be compatible.
References
[1] A. Poelstra, A. Back, M. Friedenbach, G. Maxwell and P. Wuille, "Confidential Assets", Blockstream [online]. Available: https://blockstream.com/bitcoin17final41.pdf. Date accessed: 2018‑12‑25.
[2] Wikipedia: "Discrete Logarithm" [online]. Available: https://en.wikipedia.org/wiki/Discrete_logarithm. Date accessed: 2018‑12‑20.
[3] A. Sadeghi and M. Steiner, "Assumptions Related to Discrete Logarithms: Why Subtleties Make a Real Difference" [online]. Available: http://www.semper.org/sirene/publ/SaSt_01.dhetal.long.pdf. Date accessed: 2018‑12‑24.
[4] G. Maxwell, "Confidential Transactions Write up" [online]. Available: https://people.xiph.org/~greg/confidential_values.txt. Date accessed: 2018‑12‑10.
[5] A. Gibson, "An Investigation into Confidential Transactions", July 2018 [online]. Available: https://github.com/AdamISZ/ConfidentialTransactionsDoc/blob/master/essayonCT.pdf. Date accessed: 2018‑12‑22.
[6] Pedersen-commitment: An Implementation of Pedersen Commitment Schemes [online]. Available: https://hackage.haskell.org/package/pedersencommitment. Date accessed: 2018‑12‑25.
[7] B. Franca, "Homomorphic Mini-blockchain Scheme", April 2015 [online]. Available: http://cryptonite.info/files/HMBC.pdf. Date accessed: 2018‑12‑22.
[8] C. Franck and J. Großschädl, "Efficient Implementation of Pedersen Commitments Using Twisted Edwards Curves", University of Luxembourg [online]. Available: http://orbilu.uni.lu/bitstream/10993/33705/1/MSPN2017.pdf. Date accessed: 2018‑12‑22.
[9] A. Poelstra, "Mimblewimble", October 2016 [online]. Available: http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimbleandytoshidraft20161020.pdf. Date accessed: 2018‑12‑13.
[10] A. Poelstra, "Mimblewimble Explained", November 2016 [online]. Available: https://www.weusecoins.com/mimblewimbleandrewpoelstra/. Date accessed: 2018‑12‑10.
[11] I. Grigg, "The Ricardian Contract", First IEEE International Workshop on Electronic Contracting. IEEE (2004) [online]. Available: http://iang.org/papers/ricardian_contract.html. Date accessed: 2018‑12‑13.
[12] D. Koteshov, "Smart vs. Ricardian Contracts: What’s the Difference?" February 2018 [online]. Available: https://www.elinext.com/industries/financial/trends/smartvsricardiancontracts/. Date accessed: 2018‑12‑13.
[13] Elements by Blockstream: "Issued Assets - You can Issue your own Confidential Assets on Elements" [online]. Available: https://elementsproject.org/features/issuedassets. Date accessed: 2018‑12‑14.
[14] Elements by Blockstream: "Issued Assets - Investigation, Principal Investigator: Andrew Poelstra" [online]. Available: https://elementsproject.org/features/issuedassets/investigation. Date accessed: 2018‑12‑14.
[15] Elements by Blockstream: "Elements Code Tutorial - Issuing your own Assets" [online]. Available: https://elementsproject.org/elementscodetutorial/issuingassets. Date accessed: 2018‑12‑14.
[16] Github: ElementsProject/elements [online]. Available: https://github.com/ElementsProject/elements. Date accessed: 2018‑12‑18.
[17] Github: ElementsProject/confidentialassetsdemo [online]. Available: https://github.com/ElementsProject/confidentialassetsdemo. Date accessed: 2018‑12‑18.
[18] G. Maxwell and A. Poelstra, "Borromean Ring Signatures" (2015) [online]. Available: http://diyhpl.us/~bryan/papers2/bitcoin/Borromean%20ring%20signatures.pdf. Date accessed: 2018‑12‑18.
[19] M. Abe, M. Ohkubo and K. Suzuki, "1-out-of-n Signatures from a Variety of Keys" [online]. Available: https://www.iacr.org/cryptodb/archive/2002/ASIACRYPT/50/50.pdf. Date accessed: 2018‑12‑18.
[20] Chain Core [online]. Available: https://chain.com/docs/1.2/core/getstarted/introduction. Date accessed: 2018‑12‑18.
[21] Github: chain/chain [online]. Available: https://github.com/chain/chain. Date accessed: 2018‑12‑18.
[22] Chain: "Sequence" [online]. Available: https://chain.com/sequence. Date accessed: 2018‑12‑18.
[23] Sequence Documentation [online]. Available: https://dashboard.seq.com/docs. Date accessed: 2018‑12‑18.
[24] O. Andreev, "Hidden in Plain Sight: Transacting Privately on a Blockchain - Introducing Confidential Assets in the Chain Protocol", Chain [online]. Available: https://blog.chain.com/hiddeninplainsighttransactingprivatelyonablockchain835ab75c01cb. Date accessed: 2018‑12‑18.
[25] "Blockchains in a Quantum Future - Protecting Against Post-Quantum Attacks on Cryptography" [online]. Available: https://blog.chain.com/preparingforaquantumfuture45535b316314. Date accessed: 2018‑12‑18.
[26] Inter/stellar website [online]. Available: https://interstellar.com. Date accessed: 2018‑12‑22.
[27] C. Yun, "Programmable Constraint Systems for Bulletproofs" [online]. Available: https://medium.com/interstellar/programmableconstraintsystemsforbulletproofs365b9feb92f7. Date accessed: 2018‑12‑22.
[28] Github: interstellar/spacesuit [online]. Available: https://github.com/interstellar/spacesuit. Date accessed: 2018‑12‑18.
[29] Github: interstellar/spacesuit/spec.md [online]. Available: https://github.com/interstellar/spacesuit/blob/master/spec.md. Date accessed: 2018‑12‑18.
[30] Wikipedia: "Ricardian Contract" [online]. Available: https://en.wikipedia.org/wiki/Ricardian_contract. Date accessed: 2018‑12‑21.
[31] Elements by Blockstream [online]. Available: https://elementsproject.org. Date accessed: 2018‑12‑21.
Appendices
Appendix A: Definition of Terms
Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.
- Discrete Logarithm/Discrete Logarithm Problem (DLP): In the mathematics of real numbers, the logarithm $ \log_b a $ is a number $ x $ such that $ b^x=a $, for given numbers $ a $ and $ b $. Analogously, in any group $ G $, powers $ b^k $ can be defined for all integers $ k $, and the discrete logarithm $ \log_b a $ is an integer $ k $ such that $ b^k=a $. Algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen cyclic finite groups and cyclic subgroups of elliptic curves over finite fields has no efficient solution ([2], [3]).
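A toy brute-force search makes the asymmetry concrete. The prime $ p = 101 $ and base $ g = 2 $ below are arbitrary small choices for illustration; cryptographic groups have order around $ 2^{256} $, where exhaustive search is infeasible.

```python
# Toy discrete logarithm in the multiplicative group of integers mod a
# prime p: given g and a = g^k mod p, recover k. Exhaustive search only
# works because p is tiny; no efficient method is known for large groups.

def discrete_log(g, a, p):
    """Find k such that g^k = a (mod p) by exhaustive search."""
    x = 1
    for k in range(p - 1):
        if x == a:
            return k
        x = (x * g) % p
    return None  # a is not in the subgroup generated by g

p, g = 101, 2          # small prime; 2 generates the full group mod 101
a = pow(g, 37, p)      # "public" value; 37 plays the role of the secret
print(discrete_log(g, a, p))  # 37
```

Note the asymmetry: computing `pow(g, k, p)` is fast even for enormous exponents, while recovering `k` from `a` takes time proportional to the group order.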
Appendix B: Ricardian Contracts vs. Smart Contracts
A Ricardian contract is “a digital contract that defines the terms and conditions of an interaction, between two or more peers, that is cryptographically signed and verified, being both human and machine readable and digitally signed” [12]. With a Ricardian contract, the information from the legal document is placed in a format that can be read and executed by software. The cryptographic identification offers high levels of security. The main properties of a Ricardian contract are (also refer to Figure 1):
- human readable;
- document is printable;
- program parsable;
- all forms (displayed, printed and parsed) are manifestly equivalent;
- signed by issuer;
- can be identified securely, where security means that any attempts to change the linkage between a reference and the contract are impractical.
Figure 1: Bowtie Diagram of a Ricardian Contract [12]
Ricardian contracts are robust (due to identification by cryptographic hash functions), transparent (due to readable text for legal prose) and efficient (due to computer markup language to extract essential information) [30].
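The “identification by cryptographic hash functions” property can be sketched directly. In the toy below (the contract text and issuer name are invented for illustration, and the issuer signature is only a placeholder), the contract document is identified by its SHA-256 hash, so any change to the terms yields a different identifier and breaks the linkage.

```python
import hashlib

# Sketch of a Ricardian contract's secure-identification property:
# the document is identified by its cryptographic hash, so tampering
# with the terms changes the identifier.

def contract_id(document: str) -> str:
    """Derive the contract identifier as the SHA-256 hash of its text."""
    return hashlib.sha256(document.encode("utf-8")).hexdigest()

contract = (
    "Issuer: Example Asset Co.\n"
    "Instrument: 1 EXA token, redeemable for 1 gram of gold.\n"
    "Signed: <issuer signature would go here>\n"
)
original_id = contract_id(contract)
tampered_id = contract_id(contract.replace("1 gram", "2 grams"))

print(original_id == tampered_id)  # False: tampering breaks the linkage
```

In practice, it is this hash that an on-chain asset tag or transaction would reference, tying the machine-executable record back to the human-readable legal prose.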
A smart contract is “a computerized transaction protocol that executes the terms of a contract. The general objectives are to satisfy common contractual conditions” [12]. With smart contracts, digital assets can be exchanged in a transparent and non-conflicting way. They provide trust. The main properties of a smart contract are:
- self-executing (although it does not run unless someone initiates it);
- immutable;
- self-verifying;
- auto-enforcing;
- cost saving;
- removes third parties or escrow agents.
It is possible to implement a Ricardian contract as a smart contract, but not in all instances. A smart contract is a pre-agreed digital agreement that can be executed automatically. A Ricardian contract records the “intentions” and “actions” of a particular contract, whether or not it has been executed. Hashes of Ricardian contracts can refer to documents or executable code ([12], [30]).
The Ricardian contract design pattern has been implemented in several projects and is free of any intellectual property restrictions [30].
Contributors
- https://github.com/hansieodendaal
- https://github.com/philipr-za
- https://github.com/neonknight64
- https://github.com/anselld
Blockchain-related Protocols
Purpose
Users often take blockchain protocols for granted when analyzing the potential of a cryptocurrency. The different blockchain protocols used can play a prominent role in the success of a cryptocurrency [1].
Definitions

From Wikipedia - 1: A blockchain is a growing list of records, called blocks, which are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree root hash) [2].
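The “linked using cryptography” part of this definition can be sketched in a few lines. The following is a minimal illustrative model (not any real blockchain's block format): each block stores the hash of its predecessor, so altering any historical block invalidates every link after it. The transaction data is reduced to a single digest here, standing in for a full Merkle tree root hash.

```python
import hashlib
import json

# Minimal sketch of hash-linked blocks: each block references the hash
# of the previous block, so tampering anywhere breaks the chain.

def block_hash(block: dict) -> str:
    """Hash the block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash: str, transactions: list, timestamp: float) -> dict:
    tx_digest = hashlib.sha256(
        json.dumps(transactions, sort_keys=True).encode("utf-8")
    ).hexdigest()  # stand-in for a Merkle tree root hash
    return {"prev_hash": prev_hash, "tx_root": tx_digest, "time": timestamp}

def valid_chain(chain: list) -> bool:
    """Each block must reference the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = make_block("0" * 64, ["coinbase"], 0.0)
b1 = make_block(block_hash(genesis), ["alice->bob: 5"], 1.0)
b2 = make_block(block_hash(b1), ["bob->carol: 2"], 2.0)
chain = [genesis, b1, b2]
print(valid_chain(chain))  # True
```

Changing even one byte in the genesis block's transactions changes its hash, which no longer matches `b1["prev_hash"]`, so `valid_chain` fails for the whole chain.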

From Wikipedia - 2: A security protocol (cryptographic protocol or encryption protocol) is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used. A sufficiently detailed protocol includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program [3].

From Market Protocol: A protocol can be seen as a methodology agreed upon by two or more parties that establishes a common set of rules under which they can enter into a binding agreement. Protocols are therefore the set of rules that govern the network. Blockchain protocols usually include rules about consensus, transaction validation and network participation [4].

From Encyclopaedia Britannica: Protocol, in computer science, is a set of rules or procedures for transmitting data between electronic devices, such as computers. In order for computers to exchange information, there must be a pre-existing agreement as to how the information will be structured and how each side will send and receive it. Without a protocol, a transmitting computer, for example, could be sending its data in eight-bit packets while the receiving computer might expect the data in 16-bit packets [5].
References
[1] "A List of Blockchain Protocols  Explained and Compared" [online]. Available: https://cryptomaniaks.com/blockchainprotocolslistexplained. Date accessed: 2019‑06‑11.
[2] Wikipedia: "Blockchain" [online]. Available: https://en.wikipedia.org/wiki/Blockchain. Date accessed: 2019‑06‑10.
[3] Wikipedia: "Cryptographic Protocol" [online]. Available: https://en.wikipedia.org/wiki/Cryptographic_protocol. Date accessed: 2019‑06‑11.
[4] L. Jovanovic, "Blockchain Crash Course: Protocols, dApps, APIs and DEXs" [online]. MARKET Protocol. Available: https://en.wikipedia.org/wiki/Digital_asset. Date accessed: 2019‑06‑11.
[5] "Protocol  Computer Science" [online]. Encyclopaedia Britannica. Available: https://www.britannica.com/technology/protocolcomputerscience. Date accessed: 2019‑06‑11.
Mimblewimble
This presentation contains somewhat outdated content. Whilst the description here follows the original Mimblewimble paper quite closely, it no longer precisely describes how Mimblewimble transactions are implemented in, say, Grin or Tari.
The presentation is left here as-is, since it offers a gentle introduction to the math involved, and it may be instructive to see how things used to be before you head off to read a more detailed and up-to-date account.