Introduction
Welcome to Tari Labs University.
Our mission: To be the premier destination for balanced and accessible learning material about blockchain, digital currencies and digital assets.
We hope to make this a learning experience for us at Tari Labs, as a means to grow our knowledge base and internal expertise or as a refresher. We also think this will be an excellent resource for anyone interested in the myriad disciplines required to understand blockchain technology.
We would like this platform to be a place of learning accessible to anyone, irrespective of their degree of expertise. Our aim is to cover a wide range of topics that are relevant to the Tari space, starting at a beginner level and extending down a path of deeper complexity.
You are welcome to contribute to our online content. To help you get started, we've compiled a Style Guide for Tari Labs University reports. By using this Style Guide you can help us to ensure consistency in the content and layout of TLU reports.
Errors, Comments and Contributions
We want this collection of educational presentations and videos to be a collaborative affair.
This extends to our presentations. We are learning along with you. Our content may not be perfect the first time around, so we invite you to alert us to errors and issues or, better yet, if you know how to make a pull request, to write the correction and submit it as a pull request.
As much as this learning platform is called Tari Labs University and will see input from many internal contributors and external experts, we would like you to contribute new material, be it in the form of a suggestion of topics, varying the skill levels of presentations, or posting presentations that you feel will benefit us as a growing community. In the words of Yoda, “Always pass on what you have learned.”
If you are considering contributing content to Tari Labs University, please be aware of our guiding principles.
Guiding Principles
- The topic researched should be potentially relevant to the Tari protocol; chat to us on #tari-research on IRC if you're not sure.
- The topic should be thoroughly researched.
- A critical approach should be taken (in the academic sense), with critiques and commentaries sought out and presented alongside the main topic. Remember that every white paper promises the world, so go and look for counterclaims.
- A recommendation/conclusion section should be included, providing a critical analysis of whether or not the technology/proposal would be useful to the Tari protocol.
- The work presented should be easy to read and understand, distilling complex topics into a form that is accessible to a technical but non-expert audience. Use your own voice.
The Submission Process
This is the basic process we follow within Tari Labs. As an external contributor, we'd appreciate it if you followed the same process.
1. Get some agreement from the community that the topic is of interest.
2. Write up your report.
3. Push a first draft of your report as a Pull Request.
4. The community will peer-review the report, much the same as we would a code Pull Request.
5. The report gets merged into the master branch.
6. Receive the fame and acclaim that is due.
Learning Paths
We have put the presentations and reports into categories of difficulty, interest, and format.
Presentations
- Non-Fungible Tokens - An introduction to non-fungible tokens (NFTs), including the implementation of NFTs, Ethereum standards, and players in the blockchain-based ticketing industry.
- Crypto 101 - An introduction to elliptic curve math and digital signatures.
- Mimblewimble - An introduction to Mimblewimble, a protocol that focuses on scalability and privacy through the implementation of confidential transactions.
- Lightning Network for Dummies - An introduction to the Lightning Network, including examples of its workings, pros and cons.
- Layer 2 Scaling Survey (Part 1) - An overview of different layer 2 scaling solutions being worked on today, as well as a basic SWOT analysis of each.
- Layer 2 Scaling Survey (Part 2) - An overview of different layer 2 scaling solutions being worked on today, as well as a basic strengths, weaknesses, opportunities, and threats (SWOT) analysis of each.
- Layer 2 Scaling Executive Summary - An overview of the scaling landscape, how it will be applicable to Tari, what the scaling context is for Tari, and what viable scaling alternatives exist for Tari.
- RGB Protocol - An introduction to the RGB protocol.
- SPV, Merkle Trees and Bloom Filters - An introduction to Simplified Payment Verification (SPV) and how it is achieved with Merkle trees and Bloom filters.
- Atomic Swaps - An introduction to the basics of Atomic Swaps.
- Byzantine Fault Tolerance and Consensus Mechanisms - Understanding the Byzantine Generals Problem and how consensus is achieved in cryptocurrencies.
- Basics of Scriptless Scripts - An introduction to the basics of Scriptless Scripts.
Reports
- Merged Mining - Provides a fundamental understanding of the concept of merged mining, including definitions, relevant case studies, and attack vectors.
- Layer 2 Scaling Survey (Part 1) - Presents an overview of different layer 2 scaling solutions being worked on today, as well as a basic SWOT analysis of each.
- Layer 2 Scaling Survey (Part 2) - Presents an overview of different layer 2 scaling solutions being worked on today, as well as a basic SWOT analysis of each.
- Atomic Swaps - Presents the basics of Atomic Swaps.
- Basics of Scriptless Scripts - Presents the basics of Scriptless Scripts.
- Introduction to Schnorr Signatures - Presents the basics of Schnorr signatures and signature aggregation.
Beginners
Here we have a set of introductory level presentations:
- Crypto 101 - An introduction to elliptic curve math and digital signatures.
- Mimblewimble - An introduction to Mimblewimble, a protocol that focuses on scalability and privacy through the implementation of confidential transactions.
- Lightning Network for Dummies - An introduction to the Lightning Network, including examples of its workings, pros and cons.
- Non-Fungible Tokens - An introduction to non-fungible tokens (NFTs), including the implementation of NFTs, Ethereum standards, and players in the blockchain-based ticketing industry.
- Byzantine Fault Tolerance and Consensus Mechanisms - Understanding the Byzantine Generals Problem and how consensus is achieved in cryptocurrencies.
Step-up from Beginners
A small jump...
- Basics of Scriptless Scripts - The basics of Scriptless Scripts.
- Introduction to Schnorr Signatures - The basics of Schnorr signatures and signature aggregation.
Lay of the Land
- Layer 2 Scaling Survey (Part 1) - Presents an overview of different layer 2 scaling solutions being worked on today, as well as a basic SWOT analysis of each.
- Layer 2 Scaling Survey (Part 2) - Presents an overview of different layer 2 scaling solutions being worked on today, as well as a basic SWOT analysis of each.
- Layer 2 Scaling Executive Summary - Presents the scaling landscape, how it will be applicable to Tari, what the scaling context is for Tari, and what viable scaling alternatives exist for Tari.
Cryptography
From Wikipedia
Cryptography or cryptology (from Ancient Greek: κρυπτός, kryptós "hidden, secret"; and γράφειν graphein, "to write", or λογία logia, "study", respectively) is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages; various aspects in information security such as data confidentiality, data integrity, authentication, and nonrepudiation are central to modern cryptography.
From SearchSecurity
Cryptography is a method of protecting information and communications through the use of codes so that only those for whom the information is intended can read and process it. The prefix "crypt" means "hidden" or "vault" and the suffix "graphy" stands for "writing."
Elliptic curves 101
Introduction to Schnorr Signatures
- Overview
- Let's get Started
- Basics of Schnorr Signatures
- Schnorr Signatures
- MuSig
- References
- Contributors
Overview
Private-public key pairs are the cornerstone of much of the cryptographic security underlying everything from secure web browsing to banking to cryptocurrencies. Private-public key pairs are asymmetric. This means that given one of the numbers (the private key), it's possible to derive the other one (the public key). However, doing the reverse is not feasible. It's this asymmetry that allows us to share the public key publicly and be confident that no one can figure out our private key (which we keep very secret and secure).
Asymmetric key pairs are employed in two main applications:
- in authentication, where you prove that you have knowledge of the private key; and
- in encryption, where messages can be encoded and only the person possessing the private key can decrypt and read them.
In this introduction to digital signatures, we'll be talking about a particular class of keys: those derived from elliptic curves. There are other asymmetric schemes, not least of which are those based on products of prime numbers, including RSA keys [1].
We're going to assume you know the basics of elliptic curve cryptography (ECC). If not, don't stress, there's a gentle introduction in a previous chapter.
Let's get Started
This is an interactive introduction to digital signatures. It uses Rust code to demonstrate some of the ideas presented here, so you can see them at work. The code for this introduction uses the libsecp256k1-rs library.
That's a mouthful, but secp256k1 is the name of the elliptic curve that secures a lot of things in many cryptocurrencies' transactions, including Bitcoin.
This particular library has some nice features. We've overridden the `+` (addition) and `*` (multiplication) operators so that the Rust code looks a lot more like mathematical formulae. This makes it much easier to play with the ideas we'll be exploring.
WARNING! Don't use this library in production code. It hasn't been battle-hardened, so use this one in production instead.
Basics of Schnorr Signatures
Public and Private Keys
The first thing we'll do is create a public and private key from an elliptic curve.
On secp256k1, a private key is simply a scalar integer value between 0 and ~\(2^{256}\). That's roughly how many atoms there are in the universe, so we have a big sandbox to play in.
We have a special point on the secp256k1 curve called G, which acts as the "origin". A public key is calculated by
adding G on the curve to itself, \( k_a \) times. This is the definition of multiplication by a scalar, and is
written as:
$$
P_a = k_a G
$$
Let's take an example from this post, where it is known that the public key for `1`, when written in uncompressed format, is `0479BE667...C47D08FFB10D4B8`. The following code snippet demonstrates this:
```rust
extern crate libsecp256k1_rs;

use libsecp256k1_rs::{ SecretKey, PublicKey };

#[allow(non_snake_case)]
fn main() {
    // Create the secret key "1"
    let k = SecretKey::from_hex("0000000000000000000000000000000000000000000000000000000000000001").unwrap();
    // Generate the public key, P = k.G
    let pub_from_k = PublicKey::from_secret_key(&k);
    let known_pub = PublicKey::from_hex("0479BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8").unwrap();
    // Compare it to the known value
    assert_eq!(pub_from_k, known_pub);
    println!("Ok")
}
```
Creating a Signature
Approach Taken
Reversing ECC scalar multiplication (i.e. division) is pretty much infeasible when using properly chosen random values for your scalars ([5],[6]). This property is called the Discrete Log Problem, and is the principle underlying much of cryptography, including digital signatures. A valid digital signature is evidence that the person providing the signature knows the private key corresponding to the public key with which the message is associated, or that they have solved the Discrete Log Problem.
The approach to creating signatures always follows this recipe:
- Generate a secret once-off number (called a nonce), r.
- Create a public key, R, from r (where R = r.G).
- Send the following to Bob, your recipient: your message (m), R, and your public key (P = k.G).
The actual signature is created by hashing the combination of all the public information above to create a challenge, e: $$ e = H(R \| P \| m) $$ The hashing function is chosen so that e has the same range as your private keys. In our case, we want something that returns a 256-bit number, so SHA256 is a good choice.
Now the signature is constructed using your private information: $$ s = r + ke $$ Bob can now also calculate e, since he already knows m, R and P. But he doesn't know your private key or your nonce.
Note: When you construct the signature like this, it's known as a Schnorr signature, which is discussed in a following section. There are other ways of constructing s, such as ECDSA [2], which is used in Bitcoin.
But see this:
$$ sG = (r + ke)G $$
Multiply out the right-hand side:
$$ sG = rG + (kG)e $$
Substitute \(R = rG \) and \(P = kG \) and we have: $$ sG = R + Pe $$
So Bob need only calculate the public key corresponding to the signature (s.G) and check that it equals the right-hand side of the last equation above (R + P.e), all of which Bob already knows.
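To make the sign-and-verify recipe concrete, here is a toy sketch of the full flow. It is not secp256k1: a small multiplicative group mod 23 (generator 4, whose subgroup has prime order 11) stands in for the curve, and Rust's `DefaultHasher` stands in for SHA256. All constants and names are illustrative only.

```rust
// Toy Schnorr signature. The subgroup generated by G = 4 mod P = 23 has
// prime order Q = 11, standing in for the secp256k1 group. Toy values only.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const P: u64 = 23; // group modulus
const Q: u64 = 11; // order of the subgroup generated by G
const G: u64 = 4;  // generator, playing the role of the curve point G

// "Scalar multiplication" k.G becomes modular exponentiation G^k mod P
fn mul(base: u64, exp: u64) -> u64 {
    let mut acc = 1;
    for _ in 0..exp {
        acc = acc * base % P;
    }
    acc
}

// Toy challenge hash: e = H(R || P_key || m), reduced into the scalar range
fn challenge(r_pub: u64, p_pub: u64, m: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (r_pub, p_pub, m).hash(&mut h);
    h.finish() % Q
}

fn main() {
    let k = 7;             // private key
    let p_pub = mul(G, k); // public key, P = k.G
    let m = "Meet me at 12";

    // Sign: pick a nonce r, publish R = r.G, then s = r + e.k
    let r = 3;
    let r_pub = mul(G, r);
    let e = challenge(r_pub, p_pub, m);
    let s = (r + e * k) % Q;

    // Verify: s.G must equal R + e.P (in this group: G^s == R * P^e mod P)
    assert_eq!(mul(G, s), r_pub * mul(p_pub, e) % P);
    println!("Toy Schnorr signature verifies");
}
```

Only the group changes in the real scheme; the algebra \( sG = R + eP \) is checked in exactly the same way.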
Why do we Need the Nonce?
Why do we need a nonce in the standard signature?
Let's say we naïvely sign a message m with $$ e = H(P \| m) $$ and then the signature would be \(s = ek \).
Now as before, we can check that the signature is valid: $$ \begin{align} sG &= ekG \\ &= e(kG) = eP \end{align} $$ So far so good. But anyone can read your private key now, because s is a scalar, so \(k = \frac{s}{e} \) is not hard to do. With the nonce, you have to solve \( k = (s - r)/e \), but r is unknown, so this is not a feasible calculation as long as r has been chosen randomly.
We can show that leaving off the nonce is indeed highly insecure:
```rust
extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{ SecretKey, PublicKey, thread_rng, Message };
use secp256k1::schnorr::{ Challenge };

#[allow(non_snake_case)]
fn main() {
    // Create a random private key
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    println!("My private key: {}", k);
    let P = PublicKey::from_secret_key(&k);
    let m = Message::hash(b"Meet me at 12").unwrap();
    // Challenge, e = H(P || m)
    let e = Challenge::new(&[&P, &m]).as_scalar().unwrap();
    // Signature
    let s = e * k;
    // Verify the signature
    assert_eq!(PublicKey::from_secret_key(&s), e * P);
    println!("Signature is valid!");
    // But let's try calculate the private key from known information
    let hacked = s * e.inv();
    assert_eq!(k, hacked);
    println!("Hacked key: {}", k)
}
```
ECDH
How do parties that want to communicate securely generate a shared secret for encrypting messages? One way is called the Elliptic Curve Diffie-Hellman exchange (ECDH), which is a simple method for doing just this.
ECDH is used in many places, including the Lightning Network during channel negotiation [3].
Here's how it works. Alice and Bob want to communicate securely. A simple way to do this is to use each other's public keys and calculate $$ \begin{align} S_a &= k_a P_b \tag{Alice} \\ S_b &= k_b P_a \tag{Bob} \\ \implies S_a = k_a k_b G &\equiv S_b = k_b k_a G \end{align} $$
```rust
extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{ SecretKey, PublicKey, thread_rng, Message };

#[allow(non_snake_case)]
fn main() {
    let mut rng = thread_rng();
    // Alice creates a public-private keypair
    let k_a = SecretKey::random(&mut rng);
    let P_a = PublicKey::from_secret_key(&k_a);
    // Bob creates a public-private keypair
    let k_b = SecretKey::random(&mut rng);
    let P_b = PublicKey::from_secret_key(&k_b);
    // They each calculate the shared secret based only on the other party's public information
    // Alice's version:
    let S_a = k_a * P_b;
    // Bob's version:
    let S_b = k_b * P_a;
    assert_eq!(S_a, S_b, "The shared secret is not the same!");
    println!("The shared secret is identical")
}
```
For security reasons, the private keys are usually chosen at random for each session (you'll see the term ephemeral keys being used), but then we have the problem of not being sure the other party is who they say they are (perhaps due to a man-in-the-middle attack [4]).
Various additional authentication steps can be employed to resolve this problem, which we won't get into here.
Schnorr Signatures
If you follow the crypto news, you'll know that the new hotness in Bitcoin is Schnorr Signatures.
But in fact, they're old news! The Schnorr signature is considered the simplest digital signature scheme to be provably secure in a random oracle model. It is efficient and generates short signatures. It was covered by U.S. Patent 4,995,082, which expired in February 2008 [7].
So why all the Fuss?
What makes Schnorr signatures so interesting, and potentially dangerous, is their simplicity. Schnorr signatures are linear, so you have some nice properties.
Elliptic curves have the multiplicative property. So if you have two scalars x, y with corresponding points X, Y, the following holds: $$ (x + y)G = xG + yG = X + Y $$ Schnorr signatures are of the form \( s = r + e.k \). This construction is linear too, so it fits nicely with the linearity of elliptic curve math.
You saw this property in a previous section, when we were verifying the signature. The linearity of Schnorr signatures makes them very attractive for, among other applications:

- signature aggregation;
- atomic swaps;
- "scriptless" scripts.
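The linearity property is easy to check numerically. This toy sketch again uses the small multiplicative group mod 23 (order-11 subgroup, generator 4) as a stand-in for the elliptic curve, so "adding points" becomes multiplying group elements mod 23; all values are illustrative.

```rust
// Toy check of (x + y)G = xG + yG. Scalar addition on the left becomes
// multiplication of group elements on the right in this stand-in group.
const P: u64 = 23; // group modulus
const Q: u64 = 11; // subgroup order (scalars live mod Q)
const G: u64 = 4;  // generator, playing the role of the curve point G

// "Scalar multiplication" k.G as modular exponentiation G^k mod P
fn mul(base: u64, exp: u64) -> u64 {
    (0..exp).fold(1, |acc, _| acc * base % P)
}

fn main() {
    let (x, y) = (5, 9);
    let lhs = mul(G, (x + y) % Q);       // (x + y)G
    let rhs = mul(G, x) * mul(G, y) % P; // X + Y
    assert_eq!(lhs, rhs);
    println!("(x + y)G == xG + yG holds: {}", lhs);
}
```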
Naïve Signature Aggregation
Let's see how the linearity property of Schnorr signatures can be used to construct a twooftwo multisignature.
Alice and Bob want to cosign something (a Tari transaction, say) without having to trust each other; i.e. they need to be able to prove ownership of their respective keys, and the aggregate signature is only valid if both Alice and Bob provide their part of the signature.
Assume that private keys are denoted \( k_i \) and public keys \( P_i \). If we ask Alice and Bob to each supply a nonce, we can try: $$ \begin{align} P_{agg} &= P_a + P_b \\ e &= H(R_a \| R_b \| P_a \| P_b \| m) \\ s_{agg} &= r_a + r_b + (k_a + k_b)e \\ &= (r_a + k_ae) + (r_b + k_be) \\ &= s_a + s_b \end{align} $$ So it looks like Alice and Bob can supply their own R, and anyone can construct the two-of-two signature from the sum of the Rs and public keys. This does work:
```rust
extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{ SecretKey, PublicKey, thread_rng, Message };
use secp256k1::schnorr::{ Schnorr, Challenge };

#[allow(non_snake_case)]
fn main() {
    // Alice generates some keys
    let (ka, Pa, ra, Ra) = get_keyset();
    // Bob generates some keys
    let (kb, Pb, rb, Rb) = get_keyset();
    let m = Message::hash(b"a multisig transaction").unwrap();
    // The challenge uses both nonce public keys and private keys
    // e = H(Ra || Rb || Pa || Pb || H(m))
    let e = Challenge::new(&[&Ra, &Rb, &Pa, &Pb, &m]).as_scalar().unwrap();
    // Alice calculates her signature
    let sa = ra + ka * e;
    // Bob calculates his signature
    let sb = rb + kb * e;
    // Calculate the aggregate signature
    let s_agg = sa + sb;
    // S = s_agg.G
    let S = PublicKey::from_secret_key(&s_agg);
    // This should equal Ra + Rb + e(Pa + Pb)
    assert_eq!(S, Ra + Rb + e * (Pa + Pb));
    println!("The aggregate signature is valid!")
}

#[allow(non_snake_case)]
fn get_keyset() -> (SecretKey, PublicKey, SecretKey, PublicKey) {
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    let P = PublicKey::from_secret_key(&k);
    let r = SecretKey::random(&mut rng);
    let R = PublicKey::from_secret_key(&r);
    (k, P, r, R)
}
```
But this scheme is not secure!
Key Cancellation Attack
Let's take the previous scenario again, but this time, Bob knows Alice's public key and nonce ahead of time, by waiting until she reveals them.
Now Bob lies and says that his public key is \( P_b' = P_b - P_a \) and his public nonce is \( R_b' = R_b - R_a \).
Note that Bob doesn't know the private keys for these faked values, but that doesn't matter.
Everyone assumes that \( s_{agg}G = R_a + R_b' + e(P_a + P_b') \), as per the aggregation scheme.
But Bob can create this signature himself: $$ \begin{align} s_{agg}G &= R_a + R_b' + e(P_a + P_b') \\ &= R_a + (R_b - R_a) + e(P_a + P_b - P_a) \\ &= R_b + eP_b \\ &= r_bG + ek_bG \\ \therefore s_{agg} &= r_b + ek_b = s_b \end{align} $$
```rust
extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{ SecretKey, PublicKey, thread_rng, Message };
use secp256k1::schnorr::{ Schnorr, Challenge };

#[allow(non_snake_case)]
fn main() {
    // Alice generates some keys
    let (ka, Pa, ra, Ra) = get_keyset();
    // Bob generates some keys as before
    let (kb, Pb, rb, Rb) = get_keyset();
    // ..and then publishes his forged keys
    let Pf = Pb - Pa;
    let Rf = Rb - Ra;
    let m = Message::hash(b"a multisig transaction").unwrap();
    // The challenge uses both nonce public keys and private keys
    // e = H(Ra || Rb' || Pa || Pb' || H(m))
    let e = Challenge::new(&[&Ra, &Rf, &Pa, &Pf, &m]).as_scalar().unwrap();
    // Bob creates a forged signature
    let s_f = rb + e * kb;
    // Check if it's valid
    let sG = Ra + Rf + e * (Pa + Pf);
    assert_eq!(sG, PublicKey::from_secret_key(&s_f));
    println!("Bob successfully forged the aggregate signature!")
}

#[allow(non_snake_case)]
fn get_keyset() -> (SecretKey, PublicKey, SecretKey, PublicKey) {
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    let P = PublicKey::from_secret_key(&k);
    let r = SecretKey::random(&mut rng);
    let R = PublicKey::from_secret_key(&r);
    (k, P, r, R)
}
```
Better Approaches to Aggregation
In the Key Cancellation Attack, Bob didn't know the private keys for his published R and P values. We could defeat Bob by asking him to sign a message proving that he does know the private keys.
This works, but it requires another round of messaging between parties, which is not conducive to a great user experience.
A better approach would be one that incorporates one or more of the following features:
- It must be provably secure in the plain public-key model, without having to prove knowledge of secret keys, as we might have asked Bob to do in the naïve approach.
- It should satisfy the normal Schnorr equation, i.e. the resulting signature can be verified with an expression of the form \( R + eX \).
- It allows for Interactive Aggregate Signatures (IAS), where the signers are required to cooperate.
- It allows for Non-interactive Aggregate Signatures (NAS), where the aggregation can be done by anyone.
- It allows each signer to sign the same message, m.
- It allows each signer to sign their own message, \( m_i \).
MuSig
MuSig is a recently proposed ([8],[9]) simple signature aggregation scheme that satisfies all of the properties in the preceding section.
MuSig Demonstration
We'll demonstrate the interactive MuSig scheme here, where each signatory signs the same message. The scheme works as follows:
- Each signer has a public-private key pair, as before.
- Each signer shares a commitment to their public nonce (we'll skip this step in this demonstration). This step is necessary to prevent certain kinds of rogue key attacks [10].
- Each signer publishes the public key of their nonce, \( R_i \).
- Everyone calculates the same "shared public key", X, as follows:
$$ \begin{align} \ell &= H(X_1 \| \dots \| X_n) \\ a_i &= H(\ell \| X_i) \\ X &= \sum a_i X_i \\ \end{align} $$
Note that in the preceding ordering of public keys, some deterministic convention should be used, such as the lexicographical order of the serialized keys.
- Everyone also calculates the shared nonce, \( R = \sum R_i \).
- The challenge, e, is \( H(R \| X \| m) \).
- Each signer provides their contribution to the signature as:
$$ s_i = r_i + k_i a_i e $$
Notice that the only departure here from a standard Schnorr signature is the inclusion of the factor \( a_i \).
The aggregate signature is the usual summation, \( s = \sum s_i \).
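The deterministic key-ordering convention mentioned above is easy to implement: serialize the public keys and sort the byte strings lexicographically, so every signer derives the same \( \ell \) regardless of the order in which keys arrive. A minimal sketch, using made-up bytes as stand-ins for serialized compressed public keys:

```rust
// Illustration of a canonical key ordering: lexicographic sort of the
// serialized keys. The byte values below are invented for the example.
fn main() {
    // Stand-ins for serialized compressed public keys (made-up bytes)
    let mut keys: Vec<Vec<u8>> = vec![
        vec![0x03, 0x9f, 0x22],
        vec![0x02, 0xc4, 0x01],
        vec![0x03, 0x11, 0xde],
    ];
    // Vec<u8> sorts lexicographically, giving every signer the same order
    keys.sort();
    assert_eq!(keys[0], vec![0x02, 0xc4, 0x01]);
    assert_eq!(keys[1], vec![0x03, 0x11, 0xde]);
    assert_eq!(keys[2], vec![0x03, 0x9f, 0x22]);
    println!("Canonical key order established");
}
```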
Verification is done by confirming that, as usual: $$ sG \equiv R + eX $$ Proof: $$ \begin{align} sG &= \sum s_i G \\ &= \sum (r_i + k_i a_i e)G \\ &= \sum r_iG + k_iG a_i e \\ &= \sum R_i + X_i a_i e \\ &= \sum R_i + e \sum a_i X_i \\ &= R + eX \\ \blacksquare \end{align} $$ Let's demonstrate this using a three-of-three multisig:
```rust
extern crate libsecp256k1_rs as secp256k1;

use secp256k1::{ SecretKey, PublicKey, thread_rng, Message };
use secp256k1::schnorr::{ Challenge };

#[allow(non_snake_case)]
fn main() {
    let (k1, X1, r1, R1) = get_keys();
    let (k2, X2, r2, R2) = get_keys();
    let (k3, X3, r3, R3) = get_keys();
    // I'm setting the order here. In general, they'll be sorted
    let l = Challenge::new(&[&X1, &X2, &X3]);
    // ai = H(l || Xi)
    let a1 = Challenge::new(&[&l, &X1]).as_scalar().unwrap();
    let a2 = Challenge::new(&[&l, &X2]).as_scalar().unwrap();
    let a3 = Challenge::new(&[&l, &X3]).as_scalar().unwrap();
    // X = sum( a_i X_i )
    let X = a1 * X1 + a2 * X2 + a3 * X3;
    let m = Message::hash(b"SomeSharedMultiSigTx").unwrap();
    // Calc shared nonce
    let R = R1 + R2 + R3;
    // e = H(R || X || m)
    let e = Challenge::new(&[&R, &X, &m]).as_scalar().unwrap();
    // Signatures
    let s1 = r1 + k1 * a1 * e;
    let s2 = r2 + k2 * a2 * e;
    let s3 = r3 + k3 * a3 * e;
    let s = s1 + s2 + s3;
    // Verify
    let sg = PublicKey::from_secret_key(&s);
    let check = R + e * X;
    assert_eq!(sg, check, "The signature is INVALID");
    println!("The signature is correct!")
}

#[allow(non_snake_case)]
fn get_keys() -> (SecretKey, PublicKey, SecretKey, PublicKey) {
    let mut rng = thread_rng();
    let k = SecretKey::random(&mut rng);
    let P = PublicKey::from_secret_key(&k);
    let r = SecretKey::random(&mut rng);
    let R = PublicKey::from_secret_key(&r);
    (k, P, r, R)
}
```
Security Demonstration
As a final demonstration, let's show how MuSig defeats the cancellation attack from the naïve signature scheme.
Using the same idea as in the Key Cancellation Attack section, Bob has provided fake values for his nonce and public keys:
$$
\begin{align}
R_f &= R_b - R_a \\
X_f &= X_b - X_a \\
\end{align}
$$
This leads to both Alice and Bob calculating the following "shared" values:
$$
\begin{align}
\ell &= H(X_a \| X_f) \\
a_a &= H(\ell \| X_a) \\
a_f &= H(\ell \| X_f) \\
X &= a_a X_a + a_f X_f \\
R &= R_a + R_f (= R_b) \\
e &= H(R \| X \| m)
\end{align}
$$
Bob then tries to construct a unilateral signature following MuSig:
$$
s_b = r_b + k_s e
$$
Let's assume for now that \( k_s \) doesn't need to be Bob's private key, but that he can derive it using information he knows. For this to be a valid signature, it must verify to \( R + eX \). Therefore:
$$
\begin{align}
s_b G &= R + eX \\
(r_b + k_s e)G &= R_b + e(a_a X_a + a_f X_f) & \text{The first term looks good so far}\\
&= R_b + e(a_a X_a + a_f X_b - a_f X_a) \\
&= (r_b + e a_a k_a + e a_f k_b - e a_f k_a)G & \text{The } r \text{ terms cancel as before} \\
k_s e &= e a_a k_a + e a_f k_b - e a_f k_a & \text{But nothing else is going away}\\
k_s &= a_a k_a + a_f k_b - a_f k_a \\
\end{align}
$$
In the previous attack, Bob had all the information he needed on the right-hand side of the analogous calculation. In MuSig, Bob must somehow know Alice's private key and the faked private key (the terms don't cancel anymore) in order to create a unilateral signature, and so his cancellation attack is defeated.
Replay attacks!
It's critical that a new nonce be chosen for every signing ceremony. The best way to do this is to make use of a cryptographically secure (pseudo)random number generator (CSPRNG).
But even if this is the case, let's say an attacker can trick us into signing a new message by "rewinding" the signing ceremony to the point where partial signatures are generated. At this point, the attacker provides a different message, \( e' = H(...m') \) to sign. Not suspecting any foul play, each party calculates their partial signature:
$$ s'_i = r_i + a_i k_i e' $$ However, the attacker still has access to the first set of signatures: \( s_i = r_i + a_i k_i e \). He now simply subtracts them: $$ \begin{align} s'_i - s_i &= (r_i + a_i k_i e') - (r_i + a_i k_i e) \\ &= a_i k_i (e' - e) \\ \therefore k_i &= \frac{s'_i - s_i}{a_i(e' - e)} \end{align} $$ Everything on the right-hand side of the final equation is known by the attacker, and thus he can trivially extract everybody's private key. It's difficult to protect against this kind of attack. One way is to make it difficult (or impossible) to stop and restart signing ceremonies. If a multisig ceremony gets interrupted, then you need to start from step one again. This is fairly unergonomic, but until a more robust solution comes along, it may be the best we have!
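The subtraction trick can be checked numerically. Here is a toy sketch over scalars mod 11 (tiny illustrative values standing in for 256-bit scalars): one reused nonce plus two different challenges is enough to recover the private key.

```rust
// Toy demonstration of the replay extraction: if the same nonce r and
// MuSig coefficient a are used against two different challenges e1, e2,
// the attacker recovers the private key k. All values are illustrative.
const Q: u64 = 11; // toy scalar field order

// Modular inverse via Fermat's little theorem: x^(Q-2) mod Q
fn inv(x: u64) -> u64 {
    (0..Q - 2).fold(1, |acc, _| acc * x % Q)
}

fn main() {
    let (k, r, a) = (7, 3, 5); // private key, reused nonce, MuSig coefficient
    let (e1, e2) = (4, 9);     // two different challenges, known to the attacker

    // The two partial signatures the victim produces
    let s1 = (r + a * k * e1) % Q;
    let s2 = (r + a * k * e2) % Q;

    // Attacker: k = (s2 - s1) / (a * (e2 - e1)), all arithmetic mod Q
    let num = (s2 + Q - s1) % Q;
    let den = a * ((e2 + Q - e1) % Q) % Q;
    let hacked = num * inv(den) % Q;

    assert_eq!(hacked, k);
    println!("Extracted private key: {}", hacked);
}
```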
References
[1] "RSA (Cryptosystem)", Wikipedia [online]. Available: https://en.wikipedia.org/wiki/RSA_(cryptosystem). Date accessed: 2018-10-11.
[2] "Elliptic Curve Digital Signature Algorithm", Wikipedia [online]. Available: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm. Date accessed: 2018-10-11.
[3] "BOLT #8: Encrypted and Authenticated Transport", Lightning RFC, GitHub [online]. Available: https://github.com/lightningnetwork/lightning-rfc/blob/master/08-transport.md. Date accessed: 2018-10-11.
[4] "Man-in-the-middle Attack", Wikipedia [online]. Available: https://en.wikipedia.org/wiki/Man-in-the-middle_attack. Date accessed: 2018-10-11.
[5] "How does a Cryptographically Secure Random Number Generator Work?", StackOverflow [online]. Available: https://stackoverflow.com/questions/2449594/how-does-a-cryptographically-secure-random-number-generator-work. Date accessed: 2018-10-11.
[6] "Cryptographically Secure Pseudorandom Number Generator", Wikipedia [online]. Available: https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator. Date accessed: 2018-10-11.
[7] "Schnorr Signature", Wikipedia [online]. Available: https://en.wikipedia.org/wiki/Schnorr_signature. Date accessed: 2018-09-19.
[8] "Key Aggregation for Schnorr Signatures", Blockstream [online]. Available: https://blockstream.com/2018/01/23/musig-key-aggregation-schnorr-signatures.html. Date accessed: 2018-09-19.
[9] Gregory Maxwell, Andrew Poelstra, Yannick Seurin and Pieter Wuille, "Simple Schnorr Multi-signatures with Applications to Bitcoin" [online]. Available: https://eprint.iacr.org/2018/068.pdf. Date accessed: 2018-09-19.
[10] Manu Drijvers, Kasra Edalatnejad, Bryan Ford, Eike Kiltz, Julian Loss, Gregory Neven and Igors Stepanovs, "On the Security of Two-Round Multi-Signatures", Cryptology ePrint Archive, Report 2018/417 [online]. Available: https://eprint.iacr.org/2018/417.pdf. Date accessed: 2019-02-21.
Contributors
Introduction to Scriptless Scripts
Definition of Scriptless Scripts
Scriptless Scripts are a means to execute smart contracts offchain, through the use of Schnorr signatures. [1]
The concept of Scriptless Scripts was born from Mimblewimble, which is a blockchain design that, with the exception of kernels and their signatures, does not store permanent data. Fundamental properties of Mimblewimble include both privacy and scaling, both of which require the implementation of Scriptless Scripts. [2]
A brief introduction is also given in Scriptless Scripts, Layer 2 Scaling Survey (Part 2).
Benefits of Scriptless Scripts
The benefits of Scriptless Scripts are functionality, privacy and efficiency.
Functionality
With regard to functionality, Scriptless Scripts are said to increase the range and complexity of smart contracts. Currently, as within Bitcoin Script, limitations stem from the number of OP_CODES that have been enabled by the network. Scriptless Scripts move the specification and execution of smart contracts from the network to a discussion that only involves the participants of the smart contract.
Privacy
With regard to privacy, moving the specification and execution of smart contracts from on-chain to off-chain increases privacy. When on-chain, many details of the smart contract are shared with the entire network. These details include the number and addresses of participants, and the amounts transferred. By moving smart contracts off-chain, the network only knows that the participants agree that the terms of their contract have been satisfied and that the transaction in question is valid.
Efficiency
With regard to efficiency, Scriptless Scripts minimize the amount of data that requires verification and storage on-chain. By moving smart contracts off-chain, there are fewer overheads for full nodes and lower transaction fees for users. [1]
List of Scriptless Scripts
In this report, various forms of scripts will be covered, including [3]:
- Simultaneous Scriptless Scripts
- Adaptor Signatures
- Zero Knowledge Contingent Payments
Role of Schnorr Signatures
To begin with, the fundamentals of Schnorr signatures must be defined. The signer has a private key x and random nonce r. G is the generator of a discrete log hard group, and P is the public key. [4]
The signature s can then be computed as a simple linear equation
$$ s=r+ex $$
Where:
$$ e=H(P || R || message) $$
$$ R=rG $$
$$ P=xG $$
The hash of all the data that needs to be committed to determines the digital signature. The verification equation involves multiplying each term in the equation by G, and it relies on the cryptographic assumption (discrete log) that G can be multiplied in but not divided out, thus preventing deciphering.
$$ sG=rG+exG $$
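The linearity of the signing and verification equations above can be traced with a small numeric sketch. Note the loud caveat: an additive group of integers mod a small prime stands in for the discrete-log-hard group, so this is insecure by construction and only illustrates the algebra, not a real implementation.

```python
import hashlib

# Toy demonstration of Schnorr signing/verification algebra.
# WARNING (assumption): the "group" is integers mod q under addition,
# which is NOT discrete-log hard -- this only illustrates the equations.
q = 101  # small prime "group order"
G = 3    # toy generator

def point(k):
    """Scalar multiplication 'k * G' in the toy group."""
    return (k * G) % q

def H(*parts):
    """Hash to a scalar: e = H(P || R || message)."""
    data = b"".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, r, message):
    P, R = point(x), point(r)
    e = H(P, R, message)
    return R, (r + e * x) % q        # s = r + e*x

def verify(P, R, s, message):
    e = H(P, R, message)
    return point(s) == (R + e * P) % q   # sG == R + eP

x, r = 7, 11                 # private key and nonce (fixed for reproducibility)
P = point(x)
R, s = sign(x, r, "hello")
assert verify(P, R, s, "hello")
```

Multiplying the whole signing equation by G turns the scalar identity s = r + ex into the publicly checkable point identity sG = R + eP, which is exactly what `verify` tests.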
Elliptic Curve Digital Signature Algorithm (ECDSA) signatures (used in Bitcoin) are not linear in x and r, and thus less useful. [2]
Schnorr Multisignatures
A multisignature (multisig) has multiple participants that produce a signature. Every participant might produce a separate signature and concatenate them, forming a multisig.
With Schnorr Signatures, one can have a single public key, which is the sum of many different people's public keys. The resulting key is one against which signatures will be verifiable. [5]
The formulation of a multisig involves taking the sum of all components; thus all nonces and s values are summed, resulting in the multisig. [4]
$$ s=\sum_{i} s_{i} $$
It can therefore be seen that these signatures are essentially Scriptless Scripts. Independent public keys of several participants are joined to form a single key and signature, which, when published, do not divulge the number of participants involved or the original public keys.
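The summation above can be sketched numerically for two signers. As before, this is a toy model (integers mod a small prime stand in for a discrete-log-hard group), and note that this naive key aggregation is exactly what rogue-key attacks exploit, so it must not be used as-is:

```python
import hashlib

# Toy two-party Schnorr multisig: keys, nonces and s-values are simply summed.
# WARNING (assumption): the "group" is integers mod q, which is not DL-hard;
# this only illustrates the linearity that makes signature aggregation work.
q = 101
G = 3

def point(k):
    return (k * G) % q

def H(*parts):
    data = b"".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x1, x2 = 7, 13                     # the two private keys
r1, r2 = 11, 17                    # the two nonces
P = (point(x1) + point(x2)) % q    # aggregate public key P = P1 + P2
R = (point(r1) + point(r2)) % q    # aggregate nonce R = R1 + R2
e = H(P, R, "msg")
s = ((r1 + e * x1) + (r2 + e * x2)) % q   # s = s1 + s2

# The sum verifies exactly like a single-signer Schnorr signature:
assert point(s) == (R + e * P) % q
```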
Adaptor Signatures
This multisig protocol can be modified to produce an adaptor signature, which serves as the building block for all Scriptless Script functions. [5]
Instead of functioning as a full, valid signature on a message with a key, an adaptor signature is a promise that a signature, once agreed to be published, will reveal a secret.
This concept is similar to that of atomic swaps; however, no scripts are implemented. Since this is elliptic curve cryptography, there is only scalar multiplication of elliptic curve points. Fortunately, like a hash function, the mapping from scalar to elliptic curve point is one-way, so an elliptic curve point T can simply be shared, and the secret will be its corresponding private key.
If two parties are considered: rather than providing their nonce R in the multisig protocol, a blinding factor, taken as an elliptic curve point T, is conceived and sent in addition to R (i.e. R+T). So it can be seen that R is not blinded; it has instead been offset by T, which commits to the secret value t.
Here, the Schnorr multisig construction is modified such that the first party generates
$$ T=tG, R=rG $$
where t is the shared secret, G is the generator of the discrete log hard group, and r is the random nonce.
Using this information, the second party generates
$$ H(P || R+T || message)x $$
where the coins to be swapped are contained within message. The first party can now calculate the complete signature s such that
$$ s=r+t+H(P || R+T || message)x $$
The first party then calculates and publishes the adaptor signature s' to the second party (and anyone else listening)
$$ s'=s-t $$
The second party can verify the adaptor signature s' by checking whether
$$ s'G \overset{?}{=} R+H(P || R+T || message)P $$
However, this is not a valid signature, as the hashed nonce point is R+T and not R.
The second party cannot retrieve a valid signature from this, as recovering s = s'+t would require solving the ECDLP, which is infeasible.
After the first party broadcasts s to claim the coins within message, the second party can calculate the secret t from
$$ t=s-s' $$
The above is very general. However, by also attaching auxiliary proofs, an adaptor signature can be derived that allows the translation of correct movement of the auxiliary protocol into a valid signature.
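The full adaptor flow (s = r + t + ex, then s' = s − t, then t = s − s') can be traced numerically. The usual caveat applies: a toy additive group mod a small prime stands in for the elliptic curve, so this is only an algebra check, not a secure implementation.

```python
import hashlib

# Toy adaptor-signature flow from the text: the adaptor s' = s - t is shared
# first, and revealing the full signature s leaks the secret t = s - s'.
# WARNING (assumption): integers mod q stand in for a DL-hard group.
q = 101
G = 3

def point(k):
    return (k * G) % q

def H(*parts):
    data = b"".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x, r, t = 7, 11, 5                  # private key, nonce, shared secret
P, R, T = point(x), point(r), point(t)
e = H(P, (R + T) % q, "swap-msg")   # e = H(P || R+T || message)
s = (r + t + e * x) % q             # full signature scalar
s_adapt = (s - t) % q               # adaptor signature s' = s - t

# Second party checks the adaptor signature: s'G == R + eP
assert point(s_adapt) == (R + e * P) % q
# Once s is broadcast, the secret is recovered: t = s - s'
assert (s - s_adapt) % q == t
```

Note that the adaptor check uses the offset nonce point R+T inside the hash but only R on the right-hand side, which is precisely why s' is a promise rather than a valid signature.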
Simultaneous Scriptless Scripts
Preimages
The execution of separate transactions in an atomic fashion is achieved through preimages. If two transactions require the preimage to the same hash, once one is executed, the preimage is exposed so that the other one can be as well. Atomic swaps and Lightning channels use this construction. [4]
Difference of Two Schnorr Signatures
If we consider the difference of two Schnorr signatures:
$$ d=s-s'=k-k'+ex-e'x' $$
The above equation can be verified in a similar manner to that of a single Schnorr signature, by multiplying each term by G and confirming algebraic correctness.
$$ dG=kG-k'G+exG-e'x'G $$
It must be noted that the difference d is being verified, and not the Schnorr signature itself. d functions as the translating key between two separate independent Schnorr signatures. Given d and either s or s', the other can be computed. So possession of d makes these two signatures atomic. This scheme does not link the two signatures or compromise their security.
For an atomic transaction, during the setup stage, someone provides the opposing party with the value d, and asserts it as the correct value. Once the transaction is signed, it can be adjusted to complete the other transaction. Atomicity is achieved, but can only be used by the person who possesses this d value. Generally, the party that stands to lose money requires the d value.
The d value provides an interesting property with regard to atomicity. It is shared before the signatures are public, which in turn allows the two transactions to be atomic once the transactions are published. By taking the difference of any two Schnorr signatures, one is able to construct transcripts, such as an atomic swap multisig contract.
This is a critical feature for Mimblewimble, which was previously thought to be unable to support atomic swaps or Lightning channels. [4]
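A toy numeric sketch of the translating key d follows, using the same insecure stand-in group (integers mod a small prime) purely to check the algebra of the equations above:

```python
import hashlib

# Toy sketch of the "difference of two Schnorr signatures": d = s - s'
# translates between two otherwise independent signatures.
# WARNING (assumption): integers mod q stand in for a discrete-log-hard group.
q = 101
G = 3

def point(v):
    return (v * G) % q

def H(*parts):
    data = b"".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Two independent signatures with nonces (k, k') and keys (x, x').
k, x = 11, 7
kp, xp = 17, 13
e = H(point(x), point(k), "tx-1")
ep = H(point(xp), point(kp), "tx-2")
s = (k + e * x) % q
sp = (kp + ep * xp) % q

d = (s - sp) % q   # the translating key

# d is verifiable against public data only: dG = kG - k'G + exG - e'x'G
assert point(d) == (point(k) - point(kp) + e * point(x) - ep * point(xp)) % q
# Whoever holds d can compute either signature from the other:
assert (sp + d) % q == s
```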
Atomic (Crosschain Swaps) Example with Adaptor Signatures
Alice has a certain number of coins on a particular blockchain; Bob also has a certain number of coins on another blockchain. Alice and Bob want to engage in an atomic exchange. However, neither blockchain is aware of the other, nor are they able to verify each other's transactions.
The classical way of achieving this involves the use of the blockchain's script system to put a hash preimage challenge and then reveal the same preimage on both sides. Once Alice knows the preimage, she reveals it to take her coins; Bob then copies it off one chain to the other chain to take his coins.
Using adaptor signatures, the same result can be achieved through simpler means. In this case, both Alice and Bob put up their coins on 2-of-2 outputs on each blockchain. They sign the multisig protocols in parallel, where Bob then gives Alice the adaptor signatures for each side using the same value T. This means that for Bob to take his coins, he needs to reveal t; and for Alice to take her coins, she needs to learn t. Bob then completes one of the signatures and publishes it, thereby revealing t and taking his coins. Alice computes t from the final signature, visible on the blockchain, and uses it to complete the other signature, giving her her coins.
Thus it can be seen that atomicity is achieved. One is still able to exchange information, but now there are no explicit hashes or preimages on the blockchain. No script properties are necessary and privacy is achieved. [4]
Zero Knowledge Contingent Payments
Zero Knowledge Contingent Payments (ZKCP) is a transaction protocol. This protocol allows a buyer to purchase information from a seller using coins in a manner that is private, scalable, secure and, importantly, in a trustless environment. The expected information is transferred only when payment is made. The buyer and seller do not need to trust each other or depend on arbitration by a third party. [6]
Mimblewimble's Core Scriptless Script
As previously stated, Mimblewimble is a blockchain design. Built similarly to Bitcoin, every transaction has inputs and outputs. Each input and output has a confidential transaction commitment. Confidential commitments have an interesting property where, in a valid balanced transaction, one can subtract the input from the output commitments, ensuring that all of the Pedersen commitment values balance out. Taking the difference of these inputs and outputs results in the multisig key of the owners of every output and every input in the transaction. This is referred to as the kernel.
Mimblewimble blocks will only have a list of new inputs, a list of new outputs and a list of signatures that are created from the aforementioned excess value. [7]
Since the values are homomorphically encrypted, nodes can verify that no coins are being created or destroyed.
References
[1] "Crypto Innovation Spotlight 2: Scriptless Scripts" [online].
Available: https://medium.com/blockchaincapital/cryptoinnovationspotlight2scriptlessscripts306c4eb6b3a8. Date accessed: 20180227.
[2] Andrew Poelstra, "Mimblewimble and Scriptless Scripts". Presented at Real World Crypto, 2018 [online]. Available: https://www.youtube.com/watch?v=ovCBT1gyk9c&t=0s. Date accessed: 20180111.
[3] Andrew Poelstra, "Scriptless Scripts". Presented at Layer 2 Summit Hosted by MIT DCI and Fidelity Labs on 18 May 2018 [online]. Available: https://www.youtube.com/watch?v=jzoS0tPUAiQ&t=3h36m. Date Accessed: 20180525
[4] Andrew Poelstra, "Mimblewimble and Scriptless Scripts". Presented at MIT Bitcoin Expo 2017 Day 1 [online]. Available: https://www.youtube.com/watch?v=0mVOq1jaR1U&feature=youtu.be&t=39m20. Date accessed: 4 March 2017.
[5] "Flipping the Scriptless Script on Schnorr" [online]. Available: https://joinmarket.me/blog/blog/flippingthescriptlessscriptonschnorr/. Date accessed: November 2017.
[6] "The First Successful Zeroknowledge Contingent Payment" [online]. Available: https://bitcoincore.org/en/2016/02/26/zeroknowledgecontingentpaymentsannouncement/. Date accessed: 20160226.
[7] "What is Mimblewimble?" [Online.] Available: https://www.cryptocompare.com/coins/guides/whatismimblewimble/. Date accessed: 20180630.
Contributors
https://github.com/kevoulee
https://github.com/anselld
The MuSig Schnorr Signature Scheme
Abstract
This report investigates the MuSig Schnorr multi-signature scheme, which makes use of key aggregation and is provably secure in the plain public-key model.
Signature aggregation involves mathematically combining several signatures into a single signature, without having to prove Knowledge of Secret Keys (KOSK). This is known as the plain public-key model, where the only requirement is that each potential signer has a public key. The KOSK scheme requires that users prove knowledge (or possession) of the secret key during public key registration with a certification authority, and is one way to generically prevent rogue-key attacks.
Multisignatures are a form of technology used to add multiple participants to cryptocurrency transactions. A traditional multisignature protocol allows a group of signers to produce a joint multisignature on a common message.
Contents
- The MuSig Schnorr Signature Scheme
Introduction
Schnorr Signatures and their Attack Vectors
Schnorr signatures produce a smaller on-chain size, support faster validation and have better privacy. They natively allow for combining multiple signatures into one through aggregation, and they permit more complex spending policies.
Signature aggregation also has its challenges. These include the rogue-key attack, where a participant steals funds using a specifically constructed key. Although this is easily solved for simple multisignatures through an enrollment procedure that involves the keys signing themselves, supporting it across multiple inputs of a transaction requires plain public-key security, meaning there is no setup.
There is an additional attack, termed the Russell attack after Russell O'Connor, who discovered that for multi-party schemes, a party could claim ownership of someone else's private key and so spend the other outputs.
Wuille P. [1] has been able to address some of these issues and has provided a solution that refines the Bellare-Neven (BN) scheme. He also discussed the performance improvements that were implemented for the scalar multiplication of the BN scheme, and how they enable batch validation on the blockchain. [2]
MuSig
Introduced by Itakura et al. [3], multisignature protocols allow a group of signers (that individually possess their own private/public key pair) to produce a single signature $ \sigma $ on a message $ m $. Verification of the given signature $ \sigma $ can be publicly performed given the message and the set of public keys of all signers.
A simple way to change a standard signature scheme into a multisignature scheme is to have each signer produce a standalone signature for $ m $ with its private key and to then concatenate all individual signatures.
The transformation of a standard signature scheme to a multisignature scheme needs to be useful and practical; thus the resulting multisignature scheme must produce signatures whose size is independent of the number of signers and similar to that of the original signature scheme. [4]
A traditional multisignature scheme is a combination of a signing and verification algorithm, where multiple signers (each with their own private/public key) jointly sign a single message, resulting in a combined signature. This can then be verified by anyone knowing the message and the public keys of the signers, where a trusted setup with KOSK is a requirement.
MuSig is a multisignature scheme that is novel in combining:
- Support for key aggregation;
- Security in the plain public-key model.
There are two provably secure versions of MuSig, which differ in the number of communication rounds:
- Three-round MuSig relies only on the Discrete Logarithm (DL) assumption, on which the Elliptic Curve Digital Signature Algorithm (ECDSA) also relies.
- Two-round MuSig instead relies on the slightly stronger One-More Discrete Logarithm (OMDL) assumption.
Key Aggregation
The term key aggregation refers to multisignatures that look like a single-key signature, but with respect to an aggregated public key that is a function of only the participants' public keys. Thus, verifiers do not require knowledge of the original participants' public keys; they can just be given the aggregated key. In some use cases, this leads to better privacy and performance. MuSig is thus effectively a key aggregation scheme for Schnorr signatures.
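As an illustration of key aggregation, the sketch below computes the MuSig aggregated key $ \tilde{X} = \prod_i X_i^{a_i} $ with per-key coefficients $ a_i = \textrm{H}_{agg}(L, X_i) $. A tiny multiplicative group mod a small prime is assumed purely to show the arithmetic; it is insecure by construction and the encoding of $ L $ is illustrative, not the specified one.

```python
import hashlib

# Sketch of MuSig key aggregation: each key is weighted by a_i = H_agg(L, X_i)
# and the aggregated key is X~ = prod X_i^{a_i}.
# WARNING (assumption): a tiny multiplicative group mod p, insecure by design.
p = 1019          # small prime modulus (toy only)
n = p - 1         # exponent arithmetic is mod p-1 (Fermat's little theorem)
g = 2

def H_agg(L, X):
    """Toy hash of the encoded key set L together with one key X."""
    data = (",".join(map(str, L)) + "|" + str(X)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

x1, x2 = 123, 456
X1, X2 = pow(g, x1, p), pow(g, x2, p)
L = sorted([X1, X2])                  # stand-in for a lexicographic encoding
a1, a2 = H_agg(L, X1), H_agg(L, X2)
X_agg = (pow(X1, a1, p) * pow(X2, a2, p)) % p

# The aggregated key matches the corresponding combination of private keys:
assert X_agg == pow(g, (a1 * x1 + a2 * x2) % n, p)
```

Because each coefficient depends on the full key set $ L $, an attacker can no longer choose a key that cancels the others, which is the point of the MuSig construction.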
To make the traditional approach more effective, and without needing a trusted setup, a multisignature scheme must provide sub-linear signature aggregation, along with the following properties:
- It must be provably secure in the plain public-key model.
- It must satisfy the normal Schnorr equation, whereby the resulting signature can be written as a function of a combination of the public keys.
- It must allow for Interactive Aggregate Signatures (IAS), where the signers are required to cooperate.
- It must allow for Non-interactive Aggregate Signatures (NAS), where the aggregation can be done by anyone.
- It must allow each signer to sign the same message.
- It must allow each signer to sign their own message.
This is different to a normal multisignature scheme where one message is signed by all. MuSig potentially provides all of those properties.
Other multisignature schemes already exist that provide key aggregation for Schnorr signatures; however, they come with some limitations, such as needing to verify that participants actually have the private key corresponding to the public keys that they claim to have. Security in the plain public-key model means that no such limitations exist. All that is needed from the participants is their public keys. [1]
Overview of Multisignatures
Recently, the most obvious use case for multisignatures is with regard to Bitcoin, where they can function as a more efficient replacement of $ n $-of-$ n $ multisig scripts (where the number of signatures required to spend equals the number of signatures possible) and other policies that permit a number of possible combinations of keys.
A key aggregation scheme also lets one reduce the number of public keys per input to one, as a user can send coins to the aggregate of all involved keys rather than including them all in the script. This leads to a smaller on-chain footprint, faster validation and better privacy.
Instead of creating restrictions with one signature per input, one signature can be used for the entire transaction. Traditionally key aggregation cannot be used across multiple inputs, as the public keys are committed to by the outputs, and those can be spent independently. MuSig can be used here (with key aggregation done by the verifier).
No non-interactive aggregation scheme is known that relies only on the DL assumption, but interactive schemes are trivial to construct, where a multisignature scheme has every participant sign the concatenation of all messages. Maxwell G. et al. [4] focused on key aggregation for Schnorr signatures, showed that this is not always a desirable construction, and instead gave an IAS variant of BN with better properties. [1]
Bitcoin $ m $-of-$ n $ Multisignatures
Currently, standard transactions on the Bitcoin network can be referred to as single-signature transactions, as they require only one signature, from the owner of the private key associated with the Bitcoin address. However, the Bitcoin network supports much more complicated transactions, which can require the signatures of multiple people before the funds can be transferred. These are often referred to as $ m $-of-$ n $ transactions, where $ m $ represents the number of signatures required to spend, while $ n $ represents the number of signatures possible. [5]
Use Cases for $ m $-of-$ n $ Multisignatures
When $ m=1 $ and $ n>1 $ it is considered a shared wallet, which could be used for small group funds that do not require much security. It is the least secure multisig option because it is not multifactor. Any compromised individual would jeopardize the entire group. Examples of use cases include funds for a weekend or evening event, or a shared wallet for some kind of game. Besides being convenient to spend from, the only benefit of this setup is that all but one of the backup/password pairs could be lost and all of the funds would be recoverable.
When $ m=n $, it is considered a partner wallet, which brings with it some nervousness as no keys can be lost. As the number of signatures required increases, the risk also increases. This type of multisignature can be considered as a hard multifactor authentication.
When $ m<0.5n $, it is considered a buddy account, which could be used for spending from corporate group funds. The consequence for a colluding minority needs to be greater than the possible benefits. It is considered less convenient than a shared wallet, but much more secure.
When $ m>0.5n $, it is termed a consensus account. The classic multisignature wallet is a 2-of-3, and is a special case of a consensus account. A 2-of-3 scheme has the best characteristics for creating new Bitcoin addresses, and for secure storing and spending. One compromised signatory does not compromise the funds. A single secret key can be lost and the funds can still be recovered. If done correctly, off-site backups are created during wallet setup. The way to recover funds is known by more than one party. The balance of power with a multisignature wallet can be shifted by having one party control more keys than the other parties. If one party controls multiple keys, there is a greater risk of those keys not remaining as multiple factors.
When $ m=0.5n $, it is referred to as a split account. This is an interesting use case, as there could be a 3-of-6, where one person holds three keys and three other people hold one key each. In this way, one person could control their own money, but the funds could still be recoverable even if the primary key holder were to disappear with all of his keys. As $ n $ increases, the level of trust in the secondary parties can decrease. A good use case might be a family savings account that would automatically become an inheritance account if the primary account holder were to die. [5]
Rogue Attacks
Please see Key cancellation attack demonstration in Introduction to Schnorr Signatures.
Rogue-key attacks are a significant concern when implementing multisignature schemes. Here, a subset of corrupted signers manipulate the public keys, computed as functions of the public keys of honest users, allowing them to easily produce forgeries for the set of public keys (despite not knowing the associated secret keys).
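The attack can be made concrete for naive key aggregation ($ \tilde{X} = X_1 X_2 $): the attacker publishes a key chosen to cancel the honest key, leaving an aggregate it fully controls. The sketch below is a toy demonstration in a small multiplicative group, purely illustrative:

```python
# Rogue-key attack on *naive* key aggregation (X~ = X1 * X2), showing why
# simply combining keys is unsafe without KOSK or MuSig-style coefficients.
# WARNING (assumption): toy multiplicative group mod a small prime.
p = 1019
g = 2

x1 = 123                       # honest signer's private key
X1 = pow(g, x1, p)             # honest signer's public key

x_evil = 777                   # attacker picks any key it knows...
# ...then claims the public key X2 = g^x_evil * X1^{-1}, cancelling X1
# (the inverse is computed via Fermat's little theorem):
X2 = (pow(g, x_evil, p) * pow(X1, p - 2, p)) % p

X_naive = (X1 * X2) % p        # naive aggregate of both "signers"

# The aggregate equals a key the attacker fully controls, so the attacker
# can sign alone for the "two-party" key:
assert X_naive == pow(g, x_evil, p)
```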
Initial proposals from [10], [11], [12], [13], [14], [15] and [16] were thus undone before a formal model was put forward, along with a provably secure scheme, by Micali et al. [17]. Unfortunately, despite being provably secure, their scheme is costly and relies on an impractical interactive key generation protocol. [4]
A means of generically preventing rogue-key attacks is to make it mandatory for users to prove knowledge (or possession) of the secret key during public key registration with a certification authority [18]. This setting is known as the KOSK assumption. The pairing-based multisignature schemes of Boldyreva [19] and Lu et al. [20] rely on the KOSK assumption in order to maintain security. However, according to [21] and [18], the complexity and expense of the scheme, and the unrealistic and burdensome assumptions on the Public-key Infrastructure (PKI), make this solution problematic.
As it stands, Bellare M. et al. [21] provide one of the most practical multisignature schemes, based on the Schnorr signature scheme, which is provably secure and does not contain any assumption on the key setup. Since the only requirement of this scheme is that each potential signer has a public key, this setting is referred to as the plain public-key model.
The Micali-Ohta-Reyzin multisignature scheme [17] solves the rogue-key attack using a sophisticated interactive key generation protocol.
Bagherzandi et al. [22] reduced the number of rounds from three to two using a homomorphic commitment scheme. Unfortunately, this increases the signature size and the computational cost of signing and verification.
Ma et al. [23] proposed a signature scheme that involved the "double hashing" technique, which sees the reduction of the signature size compared to Bagherzandi et al. [22] while using only two rounds.
However, neither of these two variants allow for key aggregation.
Multisignature schemes supporting key aggregation are easier to come by in the KOSK model. In particular, Syta et al. [24] proposed the CoSi scheme which can be seen as the naive Schnorr multisignature scheme, where the cosigners are organized in a tree structure for fast signature generation.
Interactive Aggregate Signatures (IAS)
In some situations, it may be useful to allow each participant to sign a different message rather than a single common one. An IAS is one where each signer has its own message $ m_{i} $ to sign, and the joint signature proves that the $ i $-th signer has signed $ m_{i} $. These schemes are considered to be more general than multisignature schemes; however, they are not as flexible as non-interactive aggregate signatures ([25], [26]) and sequential aggregate signatures [27].
According to Bellare M. et al. [21], a generic way to turn any multisignature scheme into an IAS scheme is to have the signers running the multisignature protocol use the tuple of all public key/message pairs involved in the IAS protocol as the message.
For BN's scheme and Schnorr multisignatures, this does not increase the number of communication rounds, as messages can be sent together with the shares $ R_{i} $.
Applications of IAS
With regard to digital currency schemes, where all participants have the ability to validate transactions, these transactions consist of outputs (which have a verification key and amount) and inputs (which are references to outputs of earlier transactions). Each input contains a signature of a modified version of the transaction to be validated with its referenced output's key. Some outputs may require multiple signatures to be spent. Transactions spending such an output are referred to as $ m $-of-$ n $ multisignature transactions [28], and the current implementation corresponds to the trivial way of building a multisignature scheme by concatenating individual signatures. Additionally, a threshold policy can be enforced, where only $ m $ valid signatures out of the $ n $ possible ones are needed to redeem the transaction (again, this is the most straightforward way to turn a multisignature scheme into a basic threshold signature scheme).
While several multisignature schemes could offer an improvement over the currently available method, two properties increase the possible impact:
- The availability of key aggregation removes the need for verifiers to see all the involved keys, improving bandwidth, privacy and validation cost.
- Security under the plain public-key model enables multisignatures across multiple inputs of a transaction, where the choice of signers cannot be committed to in advance. This greatly increases the number of situations in which multisignatures are beneficial.
Native Multisignature Support
An improvement is to replace the need for implementing $ n $-of-$ n $ multisignatures with a constant-size multisignature primitive such as BN. While this is in itself an improvement in terms of size, it still needs to contain all of the signers' public keys. Key aggregation improves upon this further, as a single-key predicate can be used instead, which is both smaller and has a lower computational cost for verification. Predicate encryption is an encryption paradigm that gives a master secret key owner fine-grained control over access to encrypted data [29]. It also improves privacy, as the participant keys and their count remain private to the signers.
When generalizing to the $ m $-of-$ n $ scenario, several options exist. One is to forego key aggregation and still include all potential signer keys in the predicates, while only producing a single signature for the chosen combination of keys. Alternatively, a Merkle tree [30], where the leaves are permitted combinations of public keys (in aggregated form), can be employed. The predicate in this case would take as input an aggregated public key, a signature and a proof. Its validity would depend on the signature being valid with the provided key, and the proof establishing that the key is in fact one of the leaves of the Merkle tree, identified by its root hash. This approach is very generic, as it works for any subset of combinations of keys, and as a result has good privacy, as the exact policy is not visible from the proof.
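The Merkle-tree construction described above can be sketched as follows. The leaf values and helper names are illustrative, not a specification; only the general pattern (commit to permitted key combinations, then prove membership of one of them) comes from the text.

```python
import hashlib

# Sketch of a Merkle tree over permitted (aggregated) public keys: a spend
# reveals one key plus a Merkle proof that it is a leaf of the committed root.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])            # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], index % 2))  # (sibling, is-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Hypothetical aggregated keys for the permitted 2-of-3 signer combinations.
keys = [b"aggkey-AB", b"aggkey-AC", b"aggkey-BC"]
root = merkle_root(keys)               # committed in the output's predicate
proof = merkle_proof(keys, 1)          # spender reveals key AC plus this proof
assert verify_proof(b"aggkey-AC", proof, root)
```

Only the revealed key and a logarithmic-size proof appear on-chain; the other permitted combinations, and their count, stay hidden behind the root hash.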
Some key aggregation schemes that do not protect against rogue-key attacks can be used instead in the above cases, under the assumption that the sender is given a proof of knowledge/possession for the receivers' private keys. However, these schemes are difficult to prove secure, except by using very large proofs of knowledge. As those proofs of knowledge/possession do not need to be seen by verifiers, they are effectively certified by the sender's validation. However, passing them around to senders is inconvenient and easy to get wrong. Using a scheme that is secure in the plain public-key model categorically avoids these concerns.
Another alternative is to use an algorithm whose key generation requires a trusted setup, for example in the KOSK model. While many of these schemes have been proven secure, they rely on mechanisms that are usually not implemented by certification authorities ([18], [19], [20], [21]).
Cross-Input Multisignatures
The previous sections explained how the number of signatures per input can generally be reduced to one, but one can go further and replace it with a single signature per transaction. Doing so requires a fundamental change in validation semantics, as the validity of separate inputs is no longer independent. As a result, the outputs can no longer be modeled as predicates. Instead, they are modeled as functions that return a Boolean (a data type with only two possible values) plus a set of zero or more public keys.
Overall validity requires all returned Booleans to be True, and a multisignature of the transaction with $ L $ the union of all returned keys.
With regard to Bitcoin, this can be implemented by providing an alternative to the signature checking opcode OP_CHECKSIG and related opcodes in the Script language. Instead of returning the result of an actual ECDSA verification, these always return True, but additionally add the public key with which the verification would have taken place to a transaction-wide multiset of keys. Finally, after all inputs are verified, a multisignature present in the transaction is verified against that multiset. In case the transaction spends inputs from multiple owners, they will need to collaborate to produce the multisignature, or choose to only use the original opcodes. Adding these new opcodes is possible in a backward-compatible way. [4]
Protection against Rogue-Key Attacks
In Bitcoin, when taking cross-input signatures into account, there is no published commitment to the set of signers, as each transaction input can independently spend an output that requires authorization from distinct participants. This functionality was not restricted, as doing so would interfere with fungibility improvements such as CoinJoin [31]. Due to the lack of certification, security against rogue-key attacks is of great importance.
If it is assumed that transactions use a single multisignature that is vulnerable to rogue-key attacks, an attacker could identify an arbitrary number of outputs he wants to steal, with public keys $ X_{1},...,X_{n-t} $, and then use the rogue-key attack to determine $ X_{n-t+1},...,X_{n} $ such that he can sign for the aggregated key $ \tilde{X} $. He would then send a small amount of his own money to outputs with predicates corresponding to the keys $ X_{n-t+1},...,X_{n} $. Finally, he can create a transaction that spends all of the victims' coins together with the ones he just created, by forging a multisignature for the whole transaction.
It can be seen that in the case of multisignatures across inputs, theft can occur through the ability to forge a signature over a set of keys that includes at least one key not controlled by the attacker. According to the plain public-key model, this is considered a win for the attacker. This is in contrast to the single-input multisignature case, where theft is only possible by forging a signature for the exact (aggregated) keys contained in an existing output. As a result, it is no longer possible to rely on proofs of knowledge/possession that are private to the signers.
The Formation of MuSig
Preliminaries
Notation Used
The general notation of mathematical expressions, when specifically referenced, is listed here. This notation is important pre-knowledge for the remainder of the report.
- Let $ p $ be a large prime number.
- Let $ \mathbb{G} $ denote a cyclic group of prime order $ p $.
- Let $ \mathbb{Z}_p $ denote the ring of integers $ modulo \mspace{4mu} p $.
- Let a generator of $ \mathbb{G} $ be denoted by $ g $. Thus, there exists a number $ g \in\mathbb{G} $ such that $ \mathbb{G} = \lbrace 1, \mspace{3mu}g, \mspace{3mu}g^2,\mspace{3mu}g^3, ..., \mspace{3mu}g^{p-1} \rbrace $.
- Let $ \textrm{H} $ denote the hash function.
- Let $ S= \lbrace (X_{1}, m_{1}),..., (X_{n}, m_{n}) \rbrace $ be the multiset of all public key/message pairs of all participants, where $ X_{1}=g^{x_{1}} $.
- Let $ \langle S \rangle $ denote a lexicographic encoding of the multiset of public key/message pairs in $ S $.
- Let $ L= \lbrace X_{1}=g^{x_{1}},...,X_{n}=g^{x_{n}} \rbrace $ be the multiset of all public keys.
- Let $ \langle L \rangle $ denote a lexicographic encoding of the multiset of public keys $ L= \lbrace X_{1},...,X_{n} \rbrace $.
- Let $ \textrm{H}_{com} $ denote the hash function used in the commitment phase.
- Let $ \textrm{H}_{agg} $ denote the hash function used to compute the aggregated key.
- Let $ \textrm{H}_{sig} $ denote the hash function used to compute the signature.
- Let $ X_{1} $ and $ x_{1} $ be the public and private key of a specific signer.
- Let $ m $ be the message that will be signed.
- Let $ X_{2},...,X_{n} $ be the public keys of the other cosigners.
Recap on the Schnorr Signature Scheme
The Schnorr signature scheme [6] uses group parameters $ (\mathbb{G},p,g) $ and a hash function $ \textrm{H} $.
A private/public key pair is a pair
$$ (x,X) \in \lbrace 0,...,p-1 \rbrace \mspace{6mu} \times \mspace{6mu} \mathbb{G} $$
where $ X=g^{x} $.
To sign a message $ m $, the signer draws a random integer $ r \in \mathbb{Z}_{p} $ and computes
$$ \begin{aligned} R &= g^{r} \\ c &= \textrm{H}(X,R,m) \\ s &= r+cx \end{aligned} $$
The signature is the pair $ (R,s) $, and its validity can be checked by verifying whether $$ g^{s} = RX^{c} $$
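The signing and verification equations above can be sketched in code. The following is a minimal, illustrative Python implementation over a toy Schnorr group; the small parameters and the use of SHA-256 as $ \textrm{H} $ are assumptions for demonstration only, as real deployments use elliptic-curve groups and constant-time arithmetic.

```python
# Toy key-prefixed Schnorr signatures (illustrative only; not secure parameters).
import hashlib
import secrets

# Hypothetical toy group: q = 2903 and p = 2q + 1 = 5807 are prime,
# and g = 4 generates the order-q subgroup of Z_p^* (q is the group order).
P, Q, G = 5807, 2903, 4

def H(*parts) -> int:
    """Hash arbitrary values to an integer challenge (SHA-256 as a stand-in)."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1          # private key x
    return x, pow(G, x, P)                    # (x, X = g^x)

def sign(x, X, m):
    r = secrets.randbelow(Q - 1) + 1          # random nonce r
    R = pow(G, r, P)                          # public nonce R = g^r
    c = H(X, R, m)                            # key-prefixed challenge c = H(X, R, m)
    s = (r + c * x) % Q                       # s = r + c*x
    return R, s

def verify(X, m, R, s):
    c = H(X, R, m)
    return pow(G, s, P) == (R * pow(X, c, P)) % P   # check g^s == R * X^c

x, X = keygen()
R, s = sign(x, X, "hello")
assert verify(X, "hello", R, s)
assert not verify(X, "tampered", R, s)
```

The verification works because $ g^{s} = g^{r+cx} = g^{r}(g^{x})^{c} = RX^{c} $, matching the equation above.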
This scheme is referred to as the "key-prefixed" variant of the scheme, in which the public key is hashed together with $ R $ and $ m $ [7]. This variant was thought to have a better multi-user security bound than the classic variant [8]; however, in [9] key-prefixing was shown to be unnecessary for good multi-user security of Schnorr signatures.
For the development of the MuSig Schnorr-based multisignature scheme [4], key-prefixing is a requirement for the security proof, despite no concrete attack on the non-prefixed variant being known. This also mirrors practice, as messages signed in Bitcoin always indirectly commit to the public key.
Design of the Schnorr Multi-Signature Scheme
The naive way to design a Schnorr multisignature scheme would be as follows:
A group of $ n $ signers want to cosign a message $ m $. Each cosigner randomly generates and communicates to others a share
$$ R_i = g^{r_{i}} $$
Each of the cosigners then computes:
$$ R = \prod _{i=1}^{n} R_{i} \mspace{30mu} \mathrm{and} \mspace{30mu} c = \textrm{H} (\tilde{X},R,m) $$
where $$ \tilde{X} = \prod_{i=1}^{n}X_{i} $$
is the product of the individual public keys. The partial signature is then given by
$$ s_{i} = r_{i}+cx_{i} $$
All partial signatures are then combined into a single signature $(R,s)$ where
$$ s = \displaystyle\sum_{i=1}^{n}s_i \mod p $$
The validity of a signature $ (R,s) $ on message $ m $ for public keys $ \lbrace X_{1},...,X_{n} \rbrace $ is equivalent to
$$ g^{s} = R\tilde{X}^{c} $$
where
$$ \tilde{X} = \prod_{i=1}^{n} X_{i} \mspace{30mu} \mathrm{and} \mspace{30mu} c = \textrm{H}(\tilde{X},R,m) $$
Note that this is exactly the verification equation for a traditional key-prefixed Schnorr signature with respect to public key $ \tilde{X} $, a property termed key aggregation. However, these protocols are vulnerable to a rogue-key attack ([12], [14], [15] and [17]), where a corrupted signer sets its public key to
$$ X_{1}=g^{x_{1}} \Big(\prod_{i=2}^{n} X_{i}\Big)^{-1} $$
allowing the signer to produce signatures for public keys $ \lbrace X_{1},...,X_{n} \rbrace $ by themselves.
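The rogue-key attack can be demonstrated concretely. In this sketch (same hypothetical toy group as before, two signers), the attacker publishes $ X_{1} = g^{x_{1}}X_{2}^{-1} $, which collapses the naively aggregated key to $ g^{x_{1}} $ and lets him forge a "multisignature" alone:

```python
# Rogue-key attack on naive key aggregation X~ = prod X_i (toy parameters).
import hashlib
import secrets

P, Q, G = 5807, 2903, 4   # hypothetical toy group: p = 2q + 1, g of order q

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

# Honest signer publishes X2 = g^{x2}
x2 = secrets.randbelow(Q - 1) + 1
X2 = pow(G, x2, P)

# Attacker picks x1 and publishes the rogue key X1 = g^{x1} * X2^{-1}
x1 = secrets.randbelow(Q - 1) + 1
X1 = (pow(G, x1, P) * pow(X2, P - 2, P)) % P    # X2^{-1} via Fermat's little theorem

# Naive aggregation collapses to a key the attacker fully controls
X_agg = (X1 * X2) % P
assert X_agg == pow(G, x1, P)

# The attacker forges a signature for {X1, X2} entirely on his own
m = "steal-the-coins"
r = secrets.randbelow(Q - 1) + 1
R = pow(G, r, P)
c = H(X_agg, R, m)
s = (r + c * x1) % Q
assert pow(G, s, P) == (R * pow(X_agg, c, P)) % P   # verifies against X~
```

The forged pair $ (R,s) $ passes ordinary Schnorr verification against $ \tilde{X} $ even though the honest signer never participated.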
Bellare and Neven Signature Scheme
Bellare and Neven [21] proceeded differently in order to avoid any key setup. A group of $ n $ signers want to cosign a message $ m $. Their main idea is to have each cosigner use a distinct "challenge" when computing their partial signature
$$ s_{i} = r_{i}+c_{i}x_{i} $$
defined as
$$ c_{i} = \textrm{H}( \langle L \rangle , X_{i},R,m) $$
where
$$ R = \prod_{i=1}^{n}R_{i} $$
The equation to verify signature $ (R,s) $ on message $ m $ for the public keys $ L $ is
$$ g^s = R\prod_{i=1}^{n}X_{i}^{c_{i}} $$
A preliminary round is also added to the signature protocol, where each signer commits to its share $ R_i $ by sending $ t_i = \textrm{H}^\prime(R_i) $ to other cosigners first.
This stops any cosigner from setting $ R = \prod_{i=1}^{n}R_{i} $ to some maliciously chosen value and also allows the reduction to simulate the signature oracle in the security proof.
Bellare and Neven [21] showed that this yields a multisignature scheme provably secure in the plain public-key model under the Discrete Logarithm assumption, modeling $ \textrm{H} $ and $ \textrm{H}^\prime $ as random oracles. However, this scheme no longer allows key aggregation, since the entire list of public keys is required for verification.
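As an illustrative sketch of the BN verification equation (same hypothetical toy group as before, SHA-256 standing in for $ \textrm{H} $; the commitment round with $ \textrm{H}^\prime $ is elided for brevity), each signer's partial signature uses its own challenge $ c_{i} $:

```python
# Bellare-Neven-style per-signer challenges over a toy group (illustrative only).
import hashlib
import secrets

P, Q, G = 5807, 2903, 4   # hypothetical toy parameters

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

n = 2
keys = [(x := secrets.randbelow(Q - 1) + 1, pow(G, x, P)) for _ in range(n)]
L = sorted(X for _, X in keys)            # stands in for the encoding <L>
m = "message"

# Each signer contributes a nonce share R_i = g^{r_i}; R is their product
rs = [secrets.randbelow(Q - 1) + 1 for _ in range(n)]
R = 1
for r in rs:
    R = (R * pow(G, r, P)) % P

# Distinct challenge per signer: c_i = H(<L>, X_i, R, m)
cs = [H(L, X, R, m) for _, X in keys]
s = sum((r + c * x) % Q for (x, _), r, c in zip(keys, rs, cs)) % Q

# Verification: g^s == R * prod_i X_i^{c_i}
rhs = R
for (_, X), c in zip(keys, cs):
    rhs = (rhs * pow(X, c, P)) % P
assert pow(G, s, P) == rhs
```

Note that the verifier needs every $ X_{i} $ and every $ c_{i} $, which is exactly why key aggregation is lost.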
MuSig Scheme
MuSig is parameterized by group parameters $ (\mathbb{G},p,g) $ and three hash functions $ ( \textrm{H}_{com} , \textrm{H}_{agg} , \textrm{H}_{sig} ) $ from $ \lbrace 0,1 \rbrace ^{*} $ to $ \lbrace 0,1 \rbrace ^{l} $ (constructed from a single hash, using proper domain separation).
Round 1
A group of $ n $ signers want to cosign a message $ m $. Let $ X_1 $ and $ x_1 $ be the public and private key of a specific signer, let $ X_2 , . . . , X_n $ be the public keys of other cosigners and let $ \langle L \rangle $ be the multiset of all public keys involved in the signing process.
For $ i\in \lbrace 1,...,n \rbrace $ , the signer computes the following
$$ a_{i} = \textrm{H}_{agg}(\langle L \rangle,X_{i}) $$
as well as the "aggregated" public key
$$ \tilde{X} = \prod_{i=1}^{n}X_{i}^{a_{i}} $$
Round 2
The signer generates a random private nonce $ r_{1}\leftarrow\mathbb{Z_{\mathrm{p}}} $, computes $ R_{1} = g^{r_{1}} $ (the public nonce) and commitment $ t_{1} = \textrm{H}_{com}(R_{1}) $ and sends $t_{1}$ to all other cosigners.
When receiving the commitments $t_{2},...,t_{n}$ from the other cosigners, the signer sends $R_{1}$ to all other cosigners. This ensures that the public nonce is not exposed until all commitments have been received.
Upon receiving $R_{2},...,R_{n}$ from other cosigners, the signer verifies that $t_{i}=\textrm{H}_{com}(R_{i})$ for all $ i\in \lbrace 2,...,n \rbrace $.
The protocol is aborted if this is not the case.
Round 3
If all commitment and random challenge pairs can be verified with $ \textrm{H}_{com} $, the following is computed:
$$ \begin{aligned} R &= \prod^{n}_{i=1}R_{i} \\ c &= \textrm{H}_{sig} (\tilde{X},R,m) \\ s_{1} &= r_{1} + ca_{1} x_{1} \mod p \end{aligned} $$
Signature $s_{1}$ is sent to all other cosigners. When receiving $ s_{2},...,s_{n} $ from other cosigners, the signer can compute $ s = \sum_{i=1}^{n}s_{i} \mod p$. The signature is $ \sigma = (R,s) $.
In order to verify the aggregated signature $ \sigma = (R,s) $, given a lexicographically encoded multiset of public keys $ \langle L \rangle $ and message $ m $, the verifier computes:
$$ \begin{aligned} a_{i} &= \textrm{H}_{agg}(\langle L \rangle,X_{i}) \mspace{9mu} \textrm {for} \mspace{9mu} i \in \lbrace 1,...,n \rbrace \\ \tilde{X} &= \prod_{i=1}^{n}X_{i}^{a_{i}} \\ c &= \textrm{H}_{sig} (\tilde{X},R,m) \end{aligned} $$
then accepts the signature if
$$ g^{s} = R\prod_{i=1}^{n}X_{i}^{a_{i}c}=R\tilde{X}^{c} $$
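The three rounds can be sketched end-to-end. This toy Python walkthrough runs all signers in one process for clarity; the small group parameters are hypothetical, and $ \textrm{H}_{com} $, $ \textrm{H}_{agg} $ and $ \textrm{H}_{sig} $ are assumed to be SHA-256 with tag prefixes for domain separation:

```python
# Three-round MuSig sketch over a toy group (illustrative only).
import hashlib
import secrets

P, Q, G = 5807, 2903, 4   # hypothetical toy group: p = 2q + 1, g of order q

def H(tag, *parts) -> int:
    """Domain-separated hash: tag plays the role of com/agg/sig separation."""
    data = tag.encode() + b"|" + b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

n = 3
keys = [(x := secrets.randbelow(Q - 1) + 1, pow(G, x, P)) for _ in range(n)]
L = sorted(X for _, X in keys)            # stands in for the encoding <L>
m = "tx-to-sign"

# Round 1: key-aggregation coefficients a_i = H_agg(<L>, X_i) and key X~
a = {X: H("agg", L, X) for _, X in keys}
X_agg = 1
for _, X in keys:
    X_agg = (X_agg * pow(X, a[X], P)) % P

# Round 2: commit to nonces with H_com, then reveal and check
nonces = [secrets.randbelow(Q - 1) + 1 for _ in range(n)]
Rs = [pow(G, r, P) for r in nonces]
ts = [H("com", R) for R in Rs]
assert all(H("com", R) == t for R, t in zip(Rs, ts))   # abort otherwise

# Round 3: challenge, partial signatures, aggregation
R = 1
for Ri in Rs:
    R = (R * Ri) % P
c = H("sig", X_agg, R, m)
s = sum((r + c * a[X] * x) % Q for (x, X), r in zip(keys, nonces)) % Q

# Verify: g^s == R * X_agg^c
assert pow(G, s, P) == (R * pow(X_agg, c, P)) % P
```

The final check holds because $ g^{s} = g^{\sum r_{i}}g^{c\sum a_{i}x_{i}} = R\big(\prod X_{i}^{a_{i}}\big)^{c} = R\tilde{X}^{c} $.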
Revisions
In a previous version of their paper, published on 15 January 2018, Maxwell et al. [4] proposed a 2-round variant of MuSig, in which the initial commitment round is omitted, claiming a security proof under the One More Discrete Logarithm (OMDL) assumption ([32], [33]). Drijvers et al. [34] then discovered a flaw in the security proof and showed, through a meta-reduction, that the initial multisignature scheme cannot be proved secure using an algebraic black-box reduction under the DL or OMDL assumption.
In more detail, it was observed that in the 2-round variant of MuSig, an adversary (controlling public keys $ X_{2},...,X_{n} $) can impose the value of $ R=\prod_{i=1}^{n}R_{i} $ used in signature protocols, since he can choose $ R_{2},...,R_{n} $ after having received $ R_{1} $ from the honest signer (controlling public key $ X_{1}=g^{x_{1}} $). This prevents one from using the initial method of simulating the honest signer in the Random Oracle model without knowing $ x_{1} $, by randomly drawing $ s_{1} $ and $ c $, computing $ R_1=g^{s_1}(X_1)^{-a_1c} $, and later programming $ \textrm{H}_{sig}(\tilde{X}, R, m) := c $, since the adversary might have made the random oracle query $ \textrm{H}_{sig}(\tilde{X}, R, m) $ before engaging in the corresponding signature protocol.
Despite this, no attack is currently known against the 2-round variant of MuSig, and it might be secure, although this is not provable under standard assumptions using existing techniques [4].
Turning BN’s Scheme into a Secure IAS
In order to change the BN multisignature scheme into an IAS scheme, Maxwell et al. [4] proposed the scheme described below, which includes a fix to make the execution of the signing algorithm dependent on the message index.
If $ X = g^{x} $ is the public key of a specific signer and $ m $ the message he wants to sign, and
$$ S^\prime = \lbrace (X^\prime_{1}, m^\prime_{1}),..., (X^\prime_{n-1}, m^\prime_{n-1}) \rbrace $$
is the set of the public key/message pairs of other signers, this specific signer merges $ (X, m) $ and $ S^\prime $ into the ordered set
$$ \langle S \rangle \mspace{6mu} \mathrm{of} \mspace{6mu} S = \lbrace (X_{1}, m_{1}),..., (X_{n}, m_{n}) \rbrace $$
and retrieves the resulting message index $ i $ such that
$$ (X_{i}, m_{i}) = (X, m) $$
Each signer then draws $ r_{i}\leftarrow\mathbb{Z}_{p} $, computes $ R_{i} = g^{r_{i}} $, sends commitment $ t_{i} = H^\prime(R_{i}) $ in a first round and then $ R_{i} $ in a second round, and finally computes
$$ R = \prod_{i=1}^{n}R_{i} $$
The signer with message index $ i $ then computes:
$$ c_{i} = H(R, \langle S \rangle, i) \mspace{30mu} \\ s_{i} = r_{i} + c_{i}x_{i} \mod p $$
and then sends $ s_{i} $ to other signers. All signers can compute
$$ s = \displaystyle\sum_{i=1}^{n}s_{i} \mod p $$
The signature is $ \sigma = (R, s) $.
Given an ordered set $ \langle S \rangle \mspace{6mu} \mathrm{of} \mspace{6mu} S = \lbrace (X_{1}, m_{1}),...,(X_{n}, m_{n}) \rbrace $ and a signature $ \sigma = (R, s) $ then $ \sigma $ is valid for $ S $ when
$$ g^s = R\prod_{i=1}^{n}X_{i} ^{H(R, \langle S \rangle, i)} $$
It must be noted that there is no need to include $ \langle L \rangle $ in the hash computation, nor the public key $ X_i $ of the local signer, since they are already "accounted for" through the ordered set $ \langle S \rangle $ and the message index $ i $.
Note: As at the time of writing of this report, the secure IAS scheme presented here still needs to undergo a complete security analysis.
Conclusions, Observations and Recommendations
- MuSig leads to both native and private multisignature transactions with signature aggregation.
- Signature data for multisignatures can be large and cumbersome. MuSig allows users to create more complex transactions without burdening the network or revealing compromising information.
- The IAS case, where each signer signs their own message, must still be proven by a complete security analysis.
References
[1] P. Wuille, “Key Aggregation for Schnorr Signatures,” 2018. Date accessed: 2019-01-20.
[2] Blockstream, “Schnorr Signatures for Bitcoin - BPASE ’18,” 2018. Date accessed: 2019-01-20.
[3] K. Itakura, “A public-key cryptosystem suitable for digital multisignatures,” NEC J. Res. Dev., vol. 71, 1983. Date accessed: 2019-01-20.
[4] G. Maxwell et al., “Simple Schnorr Multi-Signatures with Applications to Bitcoin,” pp. 1–34, 2018. Date accessed: 2019-01-20.
[5] B. W. Contributors, “Multisignature,” 2017. Date accessed: 2019-01-20.
[6] C. P. Schnorr, “Efficient signature generation by smart cards,” Journal of Cryptology, vol. 4, no. 3, pp. 161–174, 1991. Date accessed: 2019-01-20.
[7] D. J. Bernstein et al., “High-speed high-security signatures,” Journal of Cryptographic Engineering, vol. 2, no. 2, pp. 77–89, 2012. Date accessed: 2019-01-20.
[8] D. J. Bernstein, “Multi-user Schnorr security, revisited,” IACR Cryptology ePrint Archive, vol. 2015, p. 996, 2015. Date accessed: 2019-01-20.
[9] E. Kiltz et al., “Optimal security proofs for signatures from identification schemes,” in Annual Cryptology Conference, pp. 33–61, Springer, 2016. Date accessed: 2019-01-20.
[10] C. M. Li et al., “Threshold-multisignature schemes where suspected forgery implies traceability of adversarial shareholders,” in Workshop on the Theory and Application of Cryptographic Techniques, pp. 194–204, Springer, 1994. Date accessed: 2019-01-20.
[11] L. Harn, “Group-oriented (t, n) threshold digital signature scheme and digital multisignature,” IEE Proceedings - Computers and Digital Techniques, vol. 141, no. 5, pp. 307–313, 1994. Date accessed: 2019-01-20.
[12] P. Horster et al., “Meta-Multisignature schemes based on the discrete logarithm problem,” in Information Security - the Next Decade, pp. 128–142, Springer, 1995. Date accessed: 2019-01-20.
[13] K. Ohta and T. Okamoto, “A digital multisignature scheme based on the Fiat-Shamir scheme,” in International Conference on the Theory and Application of Cryptology, pp. 139–148, Springer, 1991. Date accessed: 2019-01-20.
[14] S. K. Langford, “Weaknesses in some threshold cryptosystems,” in Annual International Cryptology Conference, pp. 74–82, Springer, 1996. Date accessed: 2019-01-20.
[15] M. Michels and P. Horster, “On the risk of disruption in several multiparty signature schemes,” in International Conference on the Theory and Application of Cryptology and Information Security, pp. 334–345, Springer, 1996. Date accessed: 2019-01-20.
[16] K. Ohta and T. Okamoto, “Multisignature schemes secure against active insider attacks,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. 82, no. 1, pp. 21–31, 1999. Date accessed: 2019-01-20.
[17] S. Micali et al., “Accountable-subgroup multisignatures,” in Proceedings of the 8th ACM Conference on Computer and Communications Security, pp. 245–254, ACM, 2001. Date accessed: 2019-01-20.
[18] T. Ristenpart and S. Yilek, “The power of proofs-of-possession: Securing multiparty signatures against rogue-key attacks,” in Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 228–245, Springer, 2007. Date accessed: 2019-01-20.
[19] A. Boldyreva, “Threshold signatures, multisignatures and blind signatures based on the gap-Diffie-Hellman-group signature scheme,” in International Workshop on Public Key Cryptography, pp. 31–46, Springer, 2003. Date accessed: 2019-01-20.
[20] S. Lu et al., “Sequential aggregate signatures and multisignatures without random oracles,” in Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 465–485, Springer, 2006. Date accessed: 2019-01-20.
[21] M. Bellare and G. Neven, “Multi-Signatures in the Plain Public Key Model and a General Forking Lemma,” ACM CCS, pp. 390–399, 2006. Date accessed: 2019-01-20.
[22] A. Bagherzandi et al., “Multisignatures Secure under the Discrete Logarithm Assumption and a Generalized Forking Lemma,” Proceedings of the 15th ACM Conference on Computer and Communications Security - CCS ’08, p. 449, 2008. Date accessed: 2019-01-20.
[23] C. Ma et al., “Efficient discrete logarithm based multisignature scheme in the plain public key model,” Designs, Codes and Cryptography, vol. 54, no. 2, pp. 121–133, 2010. Date accessed: 2019-01-20.
[24] E. Syta et al., “Keeping authorities "honest or bust" with decentralized witness cosigning,” in Security and Privacy (SP), 2016 IEEE Symposium on, pp. 526–545, IEEE, 2016. Date accessed: 2019-01-20.
[25] D. Boneh et al., “Aggregate and verifiably encrypted signatures from bilinear maps,” in International Conference on the Theory and Applications of Cryptographic Techniques, pp. 416–432, Springer, 2003. Date accessed: 2019-01-20.
[26] M. Bellare et al., “Unrestricted aggregate signatures,” in International Colloquium on Automata, Languages, and Programming, pp. 411–422, Springer, 2007. Date accessed: 2019-01-20.
[27] A. Lysyanskaya et al., “Sequential aggregate signatures from trapdoor permutations,” in International Conference on the Theory and Applications of Cryptographic Techniques, pp. 74–90, Springer, 2004. Date accessed: 2019-01-20.
[28] G. Andersen, “M-of-N Standard Transactions,” 2011. Date accessed: 2019-01-20.
[29] E. Shen et al., “Predicate privacy in encryption systems,” in Theory of Cryptography Conference, pp. 457–473, Springer, 2009. Date accessed: 2019-01-20.
[30] R. C. Merkle, “A digital signature based on a conventional encryption function,” in Conference on the Theory and Application of Cryptographic Techniques, pp. 369–378, Springer, 1987. Date accessed: 2019-01-20.
[31] G. Maxwell, “CoinJoin: Bitcoin privacy for the real world,” 2013. Date accessed: 2019-01-20.
[32] M. Bellare and A. Palacio, “GQ and Schnorr identification schemes: Proofs of security against impersonation under active and concurrent attacks,” in Annual International Cryptology Conference, pp. 162–177, Springer, 2002. Date accessed: 2019-01-20.
[33] M. Bellare et al., “The One-More-RSA Inversion Problems and the Security of Chaum’s Blind Signature Scheme,” Journal of Cryptology, vol. 16, no. 3, 2003. Date accessed: 2019-01-20.
[34] M. Drijvers et al., “Okamoto Beats Schnorr: On the Provable Security of Multi-Signatures,” tech. rep., 2018. Date accessed: 2019-01-20.
Contributors
Fraud Proofs and SPV (Lightweight) Clients - Easier Said than Done?
Background
The bitcoin blockchain was, as of June 2018, approximately 173 GB in size [1]. This makes it nearly impossible for everyone to run a full bitcoin node, given the computational power, bandwidth and cost required. Most users will therefore have to use Lightweight/Simplified Payment Verification (SPV) clients.
SPV clients will believe everything miners or nodes tell them, as demonstrated by Peter Todd in the following screenshot of an Android client showing millions of bitcoins. The wallet was sent a transaction with 2.1 million BTC in outputs [17]. Peter Todd modified the code of his node in order to deceive the bitcoin wallet, since such wallets cannot verify coin amounts [27] (the code can be found in the "Quick-n-dirty hack to lie to SPV wallets" branch of his GitHub repository).
Introduction
In the original bitcoin whitepaper [2], Satoshi Nakamoto recognized the limitations described in Background and introduced the concept of Simplified Payment Verification (SPV). This concept allows verification of payments using a lightweight client that does not need to download the entire bitcoin blockchain. Instead, it only downloads the block headers of the longest proof-of-work chain and obtains the Merkle branch linking a transaction to its block [3]. The existence of the Merkle root in the chain, along with blocks added after the block containing the Merkle root, provides confirmation of the legitimacy of that chain.
In this system, full nodes would need to provide an alert (known as a fraud proof) to SPV clients when an invalid block is detected. The SPV clients would then be prompted to download the full block and alerted transactions to confirm the inconsistency [2]. An invalid block need not result from malicious intent; it could also be the result of an accounting error made by accident.
Full Node vs SPV Client
A full bitcoin node contains the following details:
- every block;
- every transaction that has ever been sent;
- all the unspent transaction outputs (UTXOs) [4].
An SPV client, however, contains:
- a block header with transaction data relative to the client, including other transactions required to compute the Merkle root; or
- just a block header with no transactions.
What are Fraud Proofs?
Fraud proofs are a way to improve the security of SPV clients [5] by providing a mechanism for full nodes to prove that a chain is invalid, irrespective of the amount of proof of work it has [5]. Fraud proofs could also help with the bitcoin scaling debate, as SPV clients are easier to run than full nodes and could thus aid bitcoin scalability ([6], [18]).
Fraud Proofs Possible within Existing Bitcoin Protocol
At the time of writing (February 2019), various proofs are needed to prove fraud in the bitcoin blockchain based on various actions. The following are the types of proofs needed to prove fraud based on specific fraud cases within the existing bitcoin protocol [5]:
Invalid Transaction due to Stateless Criteria Violation (Correct Syntax, Input Scripts Conditions Satisfied, etc.)
In the case of an invalid transaction, the fraud proofs consist of:
- the header of the invalid block;
- the invalid transaction;
- the invalid block's Merkle tree containing the minimum number of nodes needed to prove the existence of the invalid transaction in the tree.
Invalid Transaction due to Input Already Spent
In this case, the fraud proof would consist of the following:
- the header of the invalid block;
- the invalid transaction;
- proof that the invalid transaction is within the invalid block;
- the header of the block containing the original spending transaction;
- the original spending transaction;
- proof that the original spending transaction is within the block whose header is provided.
Invalid Transaction due to Incorrect Generation Output Value
In this case, the fraud proof consists of the block itself.
Invalid Transaction if Input does not Exist
In this case, the fraud proof consists of the entire blockchain.
Fraud Proofs Requiring Changes to Bitcoin Protocol
The following fraud proofs would require changes to the bitcoin protocol itself [5]:
Invalid Transaction if Input does not Exist in Old Blocks
In this case, the fraud proof consists of:
- the header of the invalid block;
- the invalid transaction;
- proof that the header of the invalid block contains the invalid transaction;
- proof that the header of the invalid block contains the leaf node corresponding to the nonexistent input;
- the block referenced by the leaf node, if it exists.
Missing Proof Tree Item
In this case, the fraud proof consists of:
- the header of the invalid block;
- the transaction of the missing proof tree node;
- an indication as to which input from the transaction of the missing proof tree node is missing;
- proof that the header of the invalid block contains the transaction of the missing proof tree node;
- proof that the proof tree contains two adjacent leaf nodes.
Universal Fraud Proofs (Suggested Improvement)
As can be seen, requiring different fraud-proof constructions for different fraud cases can get cumbersome. Al-Bassam et al. [26] proposed a general, universal fraud-proof construction for most cases. Their proposition is to generalize the entire blockchain as a state transition system and to represent the entire state as a Merkle root using a Sparse Merkle tree, with each transaction changing the state root of the blockchain. This can be expressed with the function:
transaction(state,tx) = State or Error
In the case of the bitcoin blockchain, representing the entire blockchain as a key-value store Sparse Merkle tree would mean:
Key = UTXO ID
Value = 1 if unspent or 0 if spent
Each transaction will change the state root of the blockchain and can be represented with this function:
TransitionRoot(stateRoot,tx,Witnesses) = stateRoot or Error
In this proposition, a valid fraud proof construction will consist of:
- the transaction;
- the pre-state root;
- the post-state root;
- witnesses (Merkle proofs of all the parts of the state the transaction accesses/modifies).
Also expressed as this function:
rootTransition(stateRoot, tx, witnesses) != stateRoot
So a full node would send a light/SPV client this data to prove a valid fraud proof. The SPV client would compute this function and, if the computed transition root differs from the state root in the block, the block is rejected.
The post-state root can be excluded in order to save block space. However, this does increase the fraud-proof size. This works on the assumption that the SPV client is connected to a minimum of one honest node.
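A minimal sketch of this check follows, with a hash over the sorted key-value store standing in for a real Sparse Merkle root, and with hypothetical helper names (`state_root`, `transition`, `fraud_proof_valid`) introduced for illustration:

```python
# Sketch of the universal fraud-proof check (illustrative stand-ins only;
# a real system uses a Sparse Merkle tree with per-key Merkle-proof witnesses).
import hashlib

def state_root(utxos: dict) -> str:
    """Stand-in for a Sparse Merkle root over (utxo_id -> unspent flag)."""
    enc = "|".join(f"{k}:{v}" for k, v in sorted(utxos.items()))
    return hashlib.sha256(enc.encode()).hexdigest()

def transition(utxos: dict, tx) -> dict:
    """Apply a tx = (inputs, outputs) to the state, or raise on invalid spend."""
    inputs, outputs = tx
    new = dict(utxos)
    for i in inputs:
        if new.get(i) != 1:
            raise ValueError("input missing or already spent")
        new[i] = 0                      # mark input as spent
    for o in outputs:
        new[o] = 1                      # create new unspent output
    return new

def fraud_proof_valid(pre_root, tx, post_root, witness_state) -> bool:
    """Accept the fraud proof iff the claimed transition does NOT hold."""
    if state_root(witness_state) != pre_root:
        return False                    # witnesses don't match the pre-state
    try:
        computed = state_root(transition(witness_state, tx))
    except ValueError:
        return True                     # tx itself is invalid: block fraudulent
    return computed != post_root        # fraudulent iff roots disagree

utxos = {"utxo-1": 1, "utxo-2": 1}
pre = state_root(utxos)
tx = (["utxo-1"], ["utxo-3"])
good_post = state_root(transition(utxos, tx))
assert not fraud_proof_valid(pre, tx, good_post, utxos)   # honest block
assert fraud_proof_valid(pre, tx, "bogus-root", utxos)    # mismatched root
```

A light client running `fraud_proof_valid` only needs the witnesses, not the full state, which is the point of the construction.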
How SPV Clients Work
SPV clients make use of Bloom filters to receive only the transactions that are relevant to the user [7]. Bloom filters are probabilistic data structures used to quickly check the existence of an element in a set, responding with a Boolean answer [9].
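A minimal Bloom filter sketch (hypothetical parameters; SHA-256 with an index prefix standing in for the k independent hash functions) illustrates the Boolean, false-positive-prone membership answer:

```python
# Toy Bloom filter: k hashed bit positions per element; membership queries
# may return false positives but never false negatives.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, k=3):
        self.size, self.k, self.bits = size_bits, k, 0

    def _positions(self, item: bytes):
        # Derive k positions by prefixing the item with the hash index
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest, "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item: bytes):
        # True iff every position bit is set (hence possible false positives)
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add(b"my-address-1")
assert b"my-address-1" in bf      # added items are always reported present
# absent items are *probably* reported absent; collisions cause false positives
```

In BIP37, an SPV client hands such a filter to a full node, which then relays only transactions matching it.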
In addition to Bloom filters, SPV clients rely on Merkle trees [26] - binary structures that hold the list of hashes linking the block header (apex) to a transaction (leaf). With Merkle trees, one only needs to check a small part of the block, headed by the Merkle root, to prove that the transaction has been accepted in the network [8].
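The Merkle-branch check an SPV client performs can be sketched as follows (simplified: Bitcoin uses double SHA-256 over raw serialized transactions and concatenated 32-byte digests, duplicating the last hash at odd levels):

```python
# SPV-style Merkle-branch verification: recompute the root from a leaf
# and its sibling path, without seeing the rest of the block.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])     # duplicate last hash at odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves, index):
    """Collect the sibling hashes proving leaves[index] is in the tree."""
    level = [h(l) for l in leaves]
    branch = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        branch.append((level[sib], sib < index))   # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify_branch(leaf, branch, root):
    node = h(leaf)
    for sib, sib_is_left in branch:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root = merkle_root(txs)                 # this value sits in the block header
proof = merkle_branch(txs, 2)           # O(log n) hashes, not the whole block
assert verify_branch(b"tx-c", proof, root)
assert not verify_branch(b"tx-x", proof, root)
```

The branch is logarithmic in the number of transactions, which is what makes lightweight verification practical.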
Fraud proofs are integral to the security of SPV clients. However, the other components in SPV clients are not without issues.
Security and Privacy Issues with SPV Clients
Weak Bloom Filters and Merkle Tree Designs
In August 2017, a weakness in the bitcoin Merkle tree design was found to reduce the security of SPV clients. This weakness could allow an attacker to simulate a payment of an arbitrary amount to a victim using an SPV wallet, and trick the victim into accepting it as valid [10]. The bitcoin Merkle tree makes no distinction between inner and leaf nodes, and could thus be manipulated by an attack that reinterprets transactions as inner nodes and inner nodes as transactions [11]. This weakness arises because inner nodes have no format, with the only requirement being a length of 64 bytes.
A brute-force attack particularly affects systems that automatically accept SPV proofs, and could be carried out with an investment of approximately USD 3 million. One proposed solution is to ensure that no internal, 64-byte node is ever accepted as a valid transaction by SPV wallets/clients [11].
The BIP37 SPV [13] Bloom filters do not have relevant privacy features [7]. They leak information such as IP addresses of the user, and whether multiple addresses belong to a single owner [12] (if Tor or Virtual Private Networks (VPNs) are not used).
Furthermore, SPV clients pose a denial-of-service risk to full nodes, due to the processing load (80 GB disk reads) incurred when SPV clients sync; and full nodes themselves can cause a denial of service for SPV clients by returning NULL filter responses to requests [14]. Peter Todd [15] aptly demonstrates this denial-of-service risk.
Improvements
To address these issues, a new concept called committed Bloom filters was introduced to improve the performance and security of SPV clients. In this concept, which can be used in lieu of BIP37 [16], a Bloom filter digest (BFD) of every block's inputs, outputs and transactions is created, with a filter size that is small relative to the overall block size [14].
A second Bloom filter is created with all transactions, and a binary comparison is made to determine matching transactions. The BFD allows the caching of filters by SPV clients without the need for recomputation [16]. It also introduces semi-trusted oracles, which improve the security and privacy of SPV clients by allowing them to download block data via any out-of-band method [14].
Examples of SPV Implementations
There are two well-known SPV implementations for bitcoin: bitcoinj and Electrum. The latter performs SPV-level validation, comparing multiple Electrum servers against each other. It has very similar security to bitcoinj, but potentially better privacy [25], due to bitcoinj's implementation of Bloom filters [7].
Other Suggested Fraud-proof Improvements
Erasure Codes
Along with the proposed universal fraud-proof solution, erasure codes have been suggested to address the data availability problem for fraud proofs. Erasure coding allows a piece of data M chunks long to be expanded into a piece of data N chunks long ("chunks" can be of arbitrary size), such that any M of the N chunks can be used to recover the original data. Blocks are then required to commit to the Merkle root of this extended data, and light clients probabilistically check that the majority of the extended data is available [21].
According to the proposed solution, one of three conditions will be true for the SPV client when using erasure codes [20]:
- The entire extended data is available, the erasure code is constructed correctly and the block is valid.
- The entire extended data is available, the erasure code is constructed correctly, but the block is invalid.
- The entire extended data is available, but the erasure code is constructed incorrectly.
In case (1), the block is valid and the light client can accept it. In case (2), it is expected that some other node will quickly construct and relay a fraud proof. In case (3), it is also expected that some other node will quickly construct and relay a specialized kind of fraud proof that shows that the erasure code is constructed incorrectly.
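The M-of-N recovery property can be illustrated with the simplest possible code: M = 2 data chunks extended to N = 3 by adding an XOR parity chunk, so that any two of the three chunks recover the original data (real systems use Reed-Solomon codes with much larger parameters):

```python
# Toy erasure code: 2 data chunks + 1 XOR parity chunk; any 2 of 3 recover.
def encode(c0: bytes, c1: bytes):
    """Extend (c0, c1) with a parity chunk c2 = c0 XOR c1."""
    parity = bytes(a ^ b for a, b in zip(c0, c1))
    return [c0, c1, parity]

def recover(chunks):
    """Recover (c0, c1) from any 2 chunks, given as (index, data) pairs."""
    have = dict(chunks)
    if 0 in have and 1 in have:
        return have[0], have[1]
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    if 0 in have:
        return have[0], xor(have[0], have[2])   # c1 = c0 XOR parity
    return xor(have[1], have[2]), have[1]       # c0 = c1 XOR parity

c0, c1 = b"ABCD", b"WXYZ"
chunks = encode(c0, c1)
assert recover([(0, chunks[0]), (2, chunks[2])]) == (c0, c1)
assert recover([(1, chunks[1]), (2, chunks[2])]) == (c0, c1)
```

A light client sampling random chunks of such extended data only needs a constant number of samples to be confident most of it is available.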
Merklix Trees
Another suggested fraud-proof improvement for the bitcoin blockchain is block sharding and validation using Merklix trees. Merklix trees are essentially Merkle trees that use unordered sets [22]. This approach also assumes that there is at least one honest node per shard. Using Merklix proofs, the following can be proven [23]:
- A transaction is in the block.
- The transaction's inputs and outputs are or are not in the UTXO set.
In this scenario, SPV clients can be made aware of any invalidity in blocks and cannot be lied to about the UTXO set.
Payment Channels
Bitcoin is designed to be resilient to denial-of-service (DoS) attacks; however, the same cannot be said for SPV clients. This could be an issue if malicious alerting nodes spam SPV clients with false fraud proofs. A proposed solution to this is payment channels [6], due to them:
- operating at near-instant speeds, thus allowing quick alerting of fraud proofs;
- facilitating microtransactions;
- being robust to temporary mining failures (as they use long "custodial periods").
In this way, the use of payment channels can help with incentivizing full nodes to issue fraud proofs.
Conclusions, Observations and Recommendations
Fraud proofs can be complex [6] and hard to implement, but they appear to be necessary for the scalability of blockchains and for the security and privacy of SPV clients, since not everyone can, or should want to, run a full node in order to participate in the network. Current SPV implementations are working on improving the security and privacy of SPV clients. Furthermore, for existing blockchains, a hard or soft fork would need to be performed in order to accommodate the extra data in the block headers.
Based on the payment channels fraud proof proposal that suggests some sort of incentive for nodes that issue alert/fraud proofs, it seems likely that some sort of fraud proof provider and consumer marketplace will have to emerge.
Where Tari is concerned, it would appear that the universal fraud proof proposals or something similar would need to be looked into, as undoubtedly endusers of the protocol/network will mostly be using light clients. However, since these fraud proofs work on the assumption of a minimum of one honest node, in the case of a digital issuer (which may be one or more), a fraud proof will not be viable on this assumption, as the digital issuer could be the sole node.
References
[1] "Size of the Bitcoin Blockchain from 2010 to 2018, by Quarter (in Megabytes)" [online]. Available: https://www.statista.com/statistics/647523/worldwide-bitcoin-blockchain-size/. Date accessed: 2018-09-10.
[2] Satoshi Nakamoto, "Bitcoin: A Peer-to-Peer Electronic Cash System" [online]. Available: https://www.bitcoin.com/bitcoin.pdf. Date accessed: 2018-09-10.
[3] "Simple Payment Verification" [online]. Available: http://docs.electrum.org/en/latest/spv.html. Date accessed: 2018-09-10.
[4] "SPV, Bloom Filters and Checkpoints" [online]. Available: https://multibit.org/hd0.4/how-spv-works.html. Date accessed: 2018-09-10.
[5] "Improving the Ability of SPV Clients to Detect Invalid Chains" [online]. Available: https://gist.github.com/justusranvier/451616fa4697b5f25f60. Date accessed: 2018-09-10.
[6] "Meditations on Fraud Proofs" [online]. Available: http://www.truthcoin.info/blog/fraud-proofs/. Date accessed: 2018-09-10.
[7] Arthur Gervais, Ghassan O. Karame, Damian Gruber and Srdjan Capkun, "On the Privacy Provisions of Bloom Filters in Lightweight Bitcoin Clients" [online]. Available: https://eprint.iacr.org/2014/763.pdf. Date accessed: 2018-09-10.
[8] "SPV, Bloom Filters and Checkpoints" [online]. Available: https://multibit.org/hd0.4/how-spv-works.html. Date accessed: 2018-09-10.
[9] "A Case of False Positives in Bloom Filters" [online]. Available: https://medium.com/blockchain-musings/a-case-of-false-positives-in-bloom-filters-da09ec487ff0. Date accessed: 2018-09-11.
[10] "The Design of Bitcoin Merkle Trees Reduces the Security of SPV Clients" [online]. Available: https://media.rsk.co/the-design-of-bitcoin-merkle-trees-reduces-the-security-of-spv-clients/. Date accessed: 2018-09-11.
[11] "Leaf-node Weakness in Bitcoin Merkle Tree Design" [online]. Available: https://bitslog.wordpress.com/2018/06/09/leaf-node-weakness-in-bitcoin-merkle-tree-design/. Date accessed: 2018-09-11.
[12] "Privacy in Bitsquare" [online]. Available: https://bisq.network/blog/privacy-in-bitsquare/. Date accessed: 2018-09-11.
[13] "bip-0037.mediawiki" [online]. Available: https://github.com/bitcoin/bips/blob/master/bip-0037.mediawiki. Date accessed: 2018-09-11.
[14] "Committed Bloom Filters for Improved Wallet Performance and SPV Security" [online]. Available: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012636.html. Date accessed: 2018-09-11.
[15] "Bloom-io-attack" [online]. Available: https://github.com/petertodd/bloom-io-attack. Date accessed: 2018-09-11.
[16] "Committed Bloom Filters versus BIP37 SPV" [online]. Available: https://www.newsbtc.com/2016/05/10/developers-introduce-bloom-filters-improve-bitcoin-wallet-security/. Date accessed: 2018-09-12.
[17] "Fraud Proofs" [online]. Available: https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris/. Date accessed: 2018-09-12.
[18] "New Satoshi Nakamoto Emails Revealed" [online]. Available: https://www.trustnodes.com/2017/08/12/new-satoshi-nakamoto-emails-revealed. Date accessed: 2018-09-12.
[19] Joseph Poon and Vitalik Buterin, "Plasma: Scalable Autonomous Smart Contracts" [online]. Available: https://plasma.io/plasma.pdf. Date accessed: 2018-09-13.
[20] "A Note on Data Availability and Erasure Coding" [online]. Available: https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding. Date accessed: 2018-09-13.
[21] "Vitalik Buterin and Peter Todd Go Head to Head in the Crypto Culture Wars" [online]. Available: https://www.trustnodes.com/2017/08/14/vitalik-buterin-peter-todd-go-head-head-crypto-culture-wars. Date accessed: 2018-09-14.
[22] "Introducing Merklix Tree as an Unordered Merkle Tree on Steroid" [online]. Available: https://www.deadalnix.me/2016/09/24/introducing-merklix-tree-as-an-unordered-merkle-tree-on-steroid/. Date accessed: 2018-09-14.
[23] "Using Merklix Tree to Shard Block Validation" [online]. Available: https://www.deadalnix.me/2016/11/06/using-merklix-tree-to-shard-block-validation/. Date accessed: 2018-09-14.
[24] "Fraud Proofs" [online]. Available: https://bitco.in/forum/threads/fraud-proofs.1617/. Date accessed: 2018-09-18.
[25] "What's the Difference between an API Wallet and a SPV Wallet?" [online]. Available: https://www.reddit.com/r/Bitcoin/comments/3c3zn4/whats_the_difference_between_an_api_wallet_and_a/. Date accessed: 2018-09-21.
[26] Mustafa Al-Bassam, Alberto Sonnino and Vitalik Buterin, "Fraud Proofs: Maximising Light Client Security and Scaling Blockchains with Dishonest Majorities" [online]. Available: https://arxiv.org/pdf/1809.09044.pdf. Date accessed: 2018-10-08.
[27] "Bitcoin Integration/Staging Tree" [online]. Available: https://github.com/petertodd/bitcoin/tree/201602lietospv. Date accessed: 2018-10-12.
Contributors
- https://github.com/ksloven
- https://github.com/CjS77
- https://github.com/hansieodendaal
- https://github.com/anselld
Bulletproofs and Mimblewimble
Introduction
Bulletproofs form part of the family of distinct Zero-knowledge Proof^{def} systems, such as Zero-Knowledge Succinct Non-Interactive ARguments of Knowledge (zk-SNARK), Succinct Transparent ARgument of Knowledge (STARK) and Zero-Knowledge Prover and Verifier for Boolean Circuits (ZKBoo). Zero-knowledge proofs are designed so that a prover can convince a verifier that a statement is true without providing any information beyond the validity of the statement itself; for example, proving knowledge of a number that solves a cryptographic puzzle and fits the hash value, without having to reveal the Nonce^{def}. ([2], [4])
The Bulletproofs technology is a Non-interactive Zero-knowledge (NIZK) proof protocol for general Arithmetic Circuits^{def} with very short proofs (Arguments of Knowledge Systems^{def}) and without requiring a trusted setup. Bulletproofs rely on the Discrete Logarithm^{def} (DL) assumption and are made non-interactive using the Fiat-Shamir Heuristic^{def}. The name 'Bulletproof' originated from a non-technical summary by one of the original authors of the scheme's properties: "Short like a bullet with bulletproof security assumptions". ([1], [29])
Bulletproofs also implement a Multiparty Computation (MPC) protocol, whereby distributed proofs of multiple provers with secret committed values are aggregated into a single proof before the Fiat-Shamir challenge is calculated and sent to the verifier, thereby minimizing rounds of communication. Secret committed values stay secret. ([1], [6])
The essence of Bulletproofs is their inner-product algorithm, originally presented by Groth [13] and then further refined by Bootle et al. [12]. The latter development provided a proof (argument of knowledge) for two independent (not related) binding^{def} vector Pedersen Commitments^{def} that satisfy a given inner-product relation. Bulletproofs build on these techniques, which yield communication-efficient zero-knowledge proofs, and offer a further replacement for the inner-product argument that reduces overall communication by a factor of three. ([1], [29])
Mimblewimble is a blockchain protocol designed for confidential transactions. The essence is that a Pedersen Commitment to $ 0 $ can be viewed as an Elliptic Curve Digital Signature Algorithm (ECDSA) public key, and that for a valid confidential transaction, the difference between outputs, inputs and transaction fees must be $ 0 $. A prover constructing a confidential transaction can therefore sign the transaction with the difference of the outputs and inputs as the public key. This enables a greatly simplified blockchain in which all spent transactions can be pruned, and new nodes can efficiently validate the entire blockchain without downloading any old and spent transactions. The blockchain consists only of block headers, the remaining Unspent Transaction Outputs (UTXOs) with their range proofs, and an unprunable transaction kernel per transaction. Mimblewimble also allows transactions to be aggregated before being committed to the blockchain. ([1], [20])
Contents
- Bulletproofs and Mimblewimble
How do Bulletproofs work?
The basis of confidential transactions is to replace the input and output amounts with Pedersen Commitments^{def}. It is then publicly verifiable that the transactions balance (the sum of the committed inputs is greater than the sum of the committed outputs, and all outputs are positive), while the specific committed amounts are kept hidden. This makes it a zero-knowledge transaction. The transaction amounts must be encoded as $ integers \mod q $, which can overflow; this is prevented by making use of range proofs, and this is where Bulletproofs come in. The essence of Bulletproofs is their ability to calculate proofs, including range proofs, from inner-products.
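The additive homomorphism that lets committed amounts balance publicly can be sketched with toy parameters. The following uses a multiplicative group of integers modulo a prime as a stand-in for an elliptic curve; the modulus and generators are arbitrary illustrative values, not secure parameters:

```python
# Toy Pedersen-style commitment illustrating the additive homomorphism
# behind confidential transaction balancing. NOT a secure construction:
# the prime and generators are illustrative stand-ins for curve points.
p = 2**127 - 1            # a Mersenne prime used as a toy modulus
g, h = 3, 5               # hypothetical "independent" generators

def commit(v, r):
    """C(v, r) = g^v * h^r mod p (multiplicative notation)."""
    return (pow(g, v, p) * pow(h, r, p)) % p

# Two committed inputs (40 and 60) and one committed output (100):
c_inputs = (commit(40, 111) * commit(60, 222)) % p
c_output = commit(100, 333)   # note: blinding factors also add, 111 + 222 = 333

# The product of commitments is a commitment to the sum of the amounts,
# so a verifier can check balance without learning the amounts themselves.
assert c_inputs == c_output
assert c_inputs == commit(40 + 60, 111 + 222)
```

In a real system the verifier never sees the amounts or blinding factors; it only checks that the input and output commitments cancel, which is exactly the property the assertions above demonstrate.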
The prover must convince the verifier that commitment $ C(x,r) = xH + rG $ contains a number such that $ x \in [0,2^n - 1] $. If $ \mathbf {a} = (a_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} a_n) \in \{0,1\}^n $ is the vector containing the bits of $ x $, the basic idea is to hide all the bits of the amount in a single vector Pedersen Commitment. It must then be proven that each bit satisfies $ \omega(\omega - 1) = 0 $, i.e. each $ \omega $ is either $ 0 $ or $ 1 $, and that the bits, weighted by successive powers of two, sum to $ x $. As part of the ensuing protocol, the verifier sends random linear combinations of constraints and challenges $ \in \mathbb{Z_p} $ to the prover. The prover is then able to construct a vectorized inner-product relation containing the elements of $ \mathbf {a} $, the constraints and challenges $ \in \mathbb{Z_p} $, and appropriate blinding vectors $ \in \mathbb Z_p^n $.
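The constraints that the range proof establishes can be checked directly in the clear (a real Bulletproof proves them in zero knowledge). A minimal sketch, using an illustrative 52-bit range and amount:

```python
# The statements a range proof must establish, checked here in the clear.
n = 52                     # bit length of the range [0, 2^52 - 1]
x = 21_000_000 * 10**8     # hypothetical amount (21 million coins in base units)

# Bit vector a of x, least-significant bit first:
a = [(x >> i) & 1 for i in range(n)]

# Constraint 1: each bit w satisfies w*(w - 1) = 0, i.e. w is 0 or 1.
assert all(w * (w - 1) == 0 for w in a)

# Constraint 2: the bits, weighted by powers of two, recompose x.
assert sum(w * 2**i for i, w in enumerate(a)) == x

# Together these imply x lies in the claimed range:
assert 0 <= x < 2**n
```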
These inner-product vectors have size $ n $, which would require many expensive exponentiations. The Pedersen Commitment scheme allows a vector to be cut in half and the two halves compressed together, each time calculating a new set of Pedersen Commitment generators. Applying the same trick repeatedly, $ \log _2 n $ times, produces a single value. This is applied to the inner-product vectors: they are reduced interactively, with a logarithmic number of rounds, by the prover and verifier into a single multi-exponentiation of size $ 2n + 2 \log_2(n) + 1 $. This single multi-exponentiation can then be calculated much faster than $ n $ separate ones. All of this is made non-interactive using the Fiat-Shamir Heuristic^{def}.
Figure 1: Vector Pedersen Commitment Cut and Half ([12], [63])
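The cut-and-half idea can be sketched with scalars over a toy prime field: each round folds the two halves of the vectors together with a challenge, halving their length, while the claimed inner product is updated with the two cross terms. Challenges here are fixed values rather than Fiat-Shamir derived, and the field modulus is illustrative:

```python
# Sketch of the logarithmic inner-product reduction. Each round halves the
# vectors a and b using a challenge c; after log2(n) rounds a single pair
# of scalars remains and the claimed product t still matches.
q = 2**61 - 1   # toy prime modulus (illustrative, not a real group order)

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

def fold(a, b, t, c):
    """One reduction round: halve a and b, update the claimed product t."""
    h = len(a) // 2
    aL, aR, bL, bR = a[:h], a[h:], b[:h], b[h:]
    ci = pow(c, -1, q)                        # modular inverse of the challenge
    L, R = inner(aL, bR), inner(aR, bL)       # cross terms sent to the verifier
    a2 = [(c * x + ci * y) % q for x, y in zip(aL, aR)]
    b2 = [(ci * x + c * y) % q for x, y in zip(bL, bR)]
    t2 = (t + c * c % q * L + ci * ci % q * R) % q
    return a2, b2, t2

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]
t = inner(a, b)
rounds = 0
while len(a) > 1:
    a, b, t = fold(a, b, t, rounds + 2)       # arbitrary nonzero challenges
    rounds += 1

assert rounds == 3                            # log2(8) rounds for n = 8
assert inner(a, b) == t                       # the invariant survives folding
```

The invariant holds because $ \langle a', b' \rangle = \langle a, b \rangle + c^2 L + c^{-2} R $ for the folded vectors, which is exactly how `t` is updated each round.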
Bulletproofs rely only on the discrete logarithm assumption. In practice, this means that Bulletproofs are compatible with any secure elliptic curve, which makes them extremely versatile. The proof sizes are short: only $ [2 \log_2(n) + 9] $ elements are required for range proofs and $ [\log_2(n) + 13] $ elements for arithmetic circuit proofs, with $ n $ denoting the multiplicative complexity. Additionally, the logarithmic proof size enables the prover to aggregate multiple range proofs into a single short proof, as well as to aggregate multiple range proofs from different parties into one proof (see Figure 1). ([1], [3], [5])
Figure 1: Logarithmic Aggregate Bulletproofs Proof Sizes [3]
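The proof-size figures quoted above can be sanity-checked numerically. The aggregation formula used below, $ 2\log_2(mn) + 9 $ elements for $ m $ aggregated $ n $-bit proofs, is the natural extension of the single-proof formula and is assumed here for illustration:

```python
# Numeric illustration of logarithmic Bulletproof range-proof sizes.
import math

def range_proof_elements(n):
    """Group/field elements for a single n-bit range proof: 2*log2(n) + 9."""
    return 2 * int(math.log2(n)) + 9

assert range_proof_elements(64) == 21         # a 64-bit proof needs 21 elements

# Aggregating m proofs of n bits each (assumed formula: 2*log2(m*n) + 9)
# costs far less than m separate proofs:
m, n = 16, 64
aggregated = 2 * int(math.log2(m * n)) + 9
assert aggregated == 29
assert aggregated < m * range_proof_elements(n)   # 29 elements vs 336
```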
If all Bitcoin transactions were confidential, the approximately 50 million UTXOs from approximately 22 million transactions would result in roughly 160GB of range proof data, when using current/linear proof systems and assuming 52 bits are used to represent any value from 1 satoshi up to 21 million bitcoins. Aggregated Bulletproofs would reduce the data storage requirement to less than 17GB. [1]
In Mimblewimble the blockchain grows with the size of the UTXO set. Using Bulletproofs as a dropin replacement for range proofs in confidential transactions, the size of the blockchain would only grow with the number of transactions that have unspent outputs. This is much smaller than the size of the UTXO set. [1]
The implementation of Bulletproofs in Monero on 18 October 2018 saw the average data size on the blockchain per payment reduce by approximately 73%, and the average US$-based fees reduce by approximately 94.5%, for the period 30 August 2018 to 28 November 2018 (Figure 2).
Figure 2: Monero Payment, Block and Data Size Statistics
Applications for Bulletproofs
Bulletproofs were designed for range proofs, but they also generalize to arbitrary arithmetic circuits. In practice, this means that Bulletproofs have wide application and can be efficiently used for many types of proofs. Use cases of Bulletproofs are listed below; the list may not be exhaustive, as use cases for Bulletproofs continue to evolve. ([1], [2], [3], [5], [6], [59])

- **Range proofs**

  Range proofs are proofs that a secret value, which has been encrypted or committed to, lies in a certain interval. It prevents any numbers coming near the magnitude of a large prime, say $ 2^{256} $, that can cause wrap-around when adding a small number, e.g. a proof that $ x \in [0,2^{52} - 1] $.

- **Merkle proofs**

  Hash preimages in a Merkle tree [7] can be leveraged to create zero-knowledge Merkle proofs using Bulletproofs, to create efficient proofs of inclusion in massive data sets.

- **Proof of solvency**

  Proofs of solvency are a specialized application of Merkle proofs; coins can be added into a giant Merkle tree. It can then be proven that some outputs are in the Merkle tree and that those outputs add up to some amount that the cryptocurrency exchange claims it has control over, without revealing any private information. A Bitcoin exchange with 2 million customers needs approximately 18GB to prove solvency in a confidential manner using the Provisions protocol [58]. Using Bulletproofs and its variant protocols proposed in [1], this size could be reduced to approximately 62MB.

- **Multisignatures with deterministic nonces**

  With Bulletproofs, every signatory can prove that their nonce was generated deterministically. A SHA-256 arithmetic circuit could be used in a deterministic way to show that the nonces were generated deterministically. This will still work if one signatory were to leave the conversation and rejoin later, with no memory of interacting with the other parties.

- **Scriptless scripts**

  Scriptless scripts is a way to do smart contracts by exploiting the linear property of Schnorr signatures, using an older form of zero-knowledge proofs called a Sigma protocol. This can all be done with Bulletproofs, which could be extended to allow assets that are functions of other assets, i.e. crypto derivatives.

- **Smart contracts and crypto derivatives**

  Traditionally, a new trusted setup is needed for each smart contract when verifying privacy-preserving smart contracts, but with Bulletproofs no trusted setup is needed. Verification time, however, is linear, and it might be too complex to prove every step in a smart contract. The Refereed Delegation Model [33] has been proposed as an efficient protocol to verify smart contracts with public verifiability in the offline stage, by making use of a specific verification circuit linked to a smart contract.

  A challenger inputs the proof to the verification circuit and gets a binary response as to the validity of the proof. If the challenger believes the proof is invalid, they can complain to the smart contract and send the proof, together with the output from a chosen gate in the verification circuit, to the smart contract. Interactive binary searches are then used to identify the gate where the proof turns invalid; hence the smart contract only has to check a single gate in the verification procedure to decide whether the challenger or the prover was correct. The cost is logarithmic in the number of rounds and the amount of communication, with the smart contract only doing one computation. A Bulletproof can be calculated as a short proof for the arbitrary computation in the smart contract, thereby creating privacy-preserving smart contracts (see Figure 3).
Figure 3: Bulletproofs for Refereed Delegation Model [5]


- **Verifiable shuffles**

  Alice has some computation and wants to prove to Bob that she has done it correctly and has some secret inputs to this computation. It is possible to create a complex function that evaluates to 1 if all secret inputs are correct and to 0 otherwise. Such a function can be encoded in an arithmetic circuit and implemented with Bulletproofs to prove that the transaction is valid.

  A proof that one list of values $ [x_1, ... , x_n] $ is a permutation of a second list of values $ [y_1, ... , y_n] $ is called a verifiable shuffle. It has many applications, for example voting, blind signatures for untraceable payments, and solvency proofs. Currently, the most efficient verifiable shuffle has size $ O(\sqrt{n}) $. Bulletproofs can be used very efficiently to prove verifiable shuffles of size $ O(\log n) $, as shown in Figure 4.
Figure 4: Bulletproofs for Verifiable Shuffles [5] 
  Another potential use case is to verify that two nodes executed the same list of independent instructions $ [x_1,x_4,x_3,x_2] $ and $ [x_1,x_2,x_3,x_4] $, which may be in different order, to arrive at the same next state $ N $. The nodes don't need to share the actual instructions with a Verifier, but the Verifier can show that they executed the same set without having knowledge of the instructions.


- **Batch verifications**

  Batch verifications can be done using one of the Bulletproofs derivative protocols. This has application where the Verifier needs to verify multiple (separate) range proofs at once, for example a blockchain full node receiving a block of transactions needing to verify all transactions as well as their range proofs. This batch verification is then implemented as one large multi-exponentiation; it is applied to reduce the number of expensive exponentiations.
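The batching idea can be sketched with toy parameters. The random-linear-combination trick below is the standard way separate exponentiation checks are merged into one multi-exponentiation; it is illustrative, not the exact Bulletproofs derivative protocol, and the modulus and generator are stand-ins for a real curve group:

```python
# Batch verification sketch: many checks g^{x_i} == y_i are combined, with
# random weights r_i, into one check g^{sum(r_i * x_i)} == prod(y_i^{r_i}).
import secrets

p = 2**127 - 1   # toy prime modulus (illustrative)
g = 7            # toy generator

xs = [123, 456, 789]                  # exponents the prover claims to know
ys = [pow(g, x, p) for x in xs]       # public values claimed to match

# Random weights; a cheating prover passes the combined check only with
# negligible probability over the choice of the r_i:
rs = [secrets.randbelow(p - 1) + 1 for _ in xs]

lhs = pow(g, sum(r * x for r, x in zip(rs, xs)) % (p - 1), p)
rhs = 1
for y, r in zip(ys, rs):
    rhs = rhs * pow(y, r, p) % p
assert lhs == rhs                     # one multi-exponentiation checks all claims

# Tampering with a single claim makes the batch check fail:
ys[1] = ys[1] * g % p
rhs_bad = 1
for y, r in zip(ys, rs):
    rhs_bad = rhs_bad * pow(y, r, p) % p
assert lhs != rhs_bad
```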
Comparison to other Zero-knowledge Proof Systems
The table below ([2], [5]) shows a high-level comparison between Sigma protocols (i.e. interactive public-coin protocols) and the different zero-knowledge proof systems mentioned in this report. (The most desirable outcomes for each measurement are shown in bold italics.) The aim is a proof system that is non-interactive, has short proof sizes, has linear Prover runtime scalability, has efficient (sublinear) Verifier runtime scalability, has no trusted setup, is practical, and is at least DL secure. Bulletproofs are unique in that they are non-interactive, have a short proof size, do not require a trusted setup, have very fast execution times and are practical to implement. These attributes make Bulletproofs extremely desirable for use as range proofs in cryptocurrencies.
| Proof System | Sigma Protocols | zk-SNARK | STARK | ZKBoo | Bulletproofs |
| --- | --- | --- | --- | --- | --- |
| Interactive | yes | ***no*** | ***no*** | ***no*** | ***no*** |
| Proof Size | long | ***short*** | shortish | long | ***short*** |
| Prover Runtime Scalability | ***linear*** | quasilinear | quasilinear (big memory requirement) | ***linear*** | ***linear*** |
| Verifier Runtime Scalability | linear | ***efficient*** | ***efficient (poly-logarithmically)*** | ***efficient*** | linear |
| Trusted Setup | ***no*** | required | ***no*** | ***no*** | ***no*** |
| Practical | ***yes*** | ***yes*** | not quite | somewhat | ***yes*** |
| Security Assumptions | ***DL*** | non-falsifiable, but not on par with DL | quantum-secure One-way Function (OWF) [50], which is better than DL | similar to STARKs | ***DL*** |
Interesting Bulletproofs Implementation Snippets
Bulletproofs development is still evolving, as can be seen by following the different community development projects. Different implementations of Bulletproofs also offer different levels of efficiency, security and functionality. This section describes some of these aspects.
Current & Past Efforts
The initial prototype Bulletproofs implementation was done by Benedikt Bünz in Java, located at GitHub:bbuenz/BulletProofLib [27].
The initial work that provided cryptographic support for a Mimblewimble implementation was mainly done by Pieter Wuille, Gregory Maxwell and Andrew Poelstra in C, located at GitHub:ElementsProject/secp256k1-zkp [25]. This effort was forked as GitHub:apoelstra/secp256k1-mw [26], with the main contributors being Andrew Poelstra, Pieter Wuille and Gregory Maxwell, where Mimblewimble primitives and support for many of the Bulletproof protocols (e.g. zero-knowledge proofs, range proofs and arithmetic circuits) were added. Current efforts also involve MuSig [48] support.
The Grin project (an open-source Mimblewimble implementation in Rust) subsequently forked GitHub:ElementsProject/secp256k1-zkp [25] as GitHub:mimblewimble/secp256k1-zkp [30] and added Rust wrappers to it as GitHub:mimblewimble/rust-secp256k1-zkp [45] for use in its blockchain. The Beam project (another open-source Mimblewimble implementation, in C++) links directly to GitHub:ElementsProject/secp256k1-zkp [25] as its cryptographic submodule. See Mimblewimble-Grin Blockchain Protocol Overview and Grin vs. BEAM, a Comparison for more information about the Mimblewimble implementations of Grin and Beam.
An independent implementation of Bulletproof range proofs was done for the Monero project (an open-source CryptoNote implementation in C++) by Sarang Noether [49] in Java as the precursor, and by moneromooo-monero [46] in C++ as the final implementation. This implementation supports single and aggregate range proofs.
Adjoint, Inc. has also done an independent open-source implementation of Bulletproofs in Haskell at GitHub:adjoint-io/bulletproofs [29]. They have an open-source implementation of a private permissioned blockchain with multiparty workflows, aimed at the financial industry.
Chain/Interstellar has done another independent open-source implementation of Bulletproofs in Rust, from the ground up, at GitHub:dalek-cryptography/bulletproofs [28]. They have implemented parallel Edwards formulas [39] using Intel® Advanced Vector Extensions 2 (AVX2) to accelerate curve operations. Initial testing suggests a speedup of approximately two times (twice as fast) over the original libsecp256k1-based Bulletproofs implementation.
Security Considerations
Real-world implementation of Elliptic-curve Cryptography (ECC) is largely based on official standards that govern the selection of curves in order to try to make the Elliptic-curve Discrete-logarithm Problem (ECDLP) hard to solve, i.e. finding an ECC user's secret key given the user's public key. Many attacks break real-world ECC without solving the ECDLP, due to problems in ECC security, where implementations can produce incorrect results and also leak secret data. Some implementation considerations also favor efficiency over security. Secure implementations of the standards-based curves are theoretically possible, but highly unlikely. ([14], [32])
Grin, Beam and Adjoint use the ECC curve secp256k1 [24] for their Bulletproofs implementations, which fails one of the four ECDLP security criteria and three of the four ECC security criteria. Monero and Chain/Interstellar use the ECC curve Curve25519 [38] for their Bulletproofs implementations, which passes all ECDLP and ECC security criteria. [32]
Chain/Interstellar goes one step further with its use of Ristretto, a technique for constructing prime-order elliptic curve groups with non-malleable encodings, which allows an existing Curve25519 library to implement a prime-order group with only a thin abstraction layer. This makes it possible for systems using Ed25519 signatures to be safely extended with zero-knowledge protocols, with no additional cryptographic assumptions and minimal code changes. [31]
The Monero project has also had security audits done on its Bulletproofs implementation, which resulted in a number of serious and critical bug fixes, as well as some other code improvements. ([8], [9], [11])
Wallet Reconstruction and Switch Commitment - Grin
Grin implemented a switch commitment [43] as part of a transaction output, as a defense mechanism to be ready for the age of quantum adversaries. The original implementation was discarded (completely removed) because it was complex, used a lot of space in the blockchain and allowed the inclusion of arbitrary data. Grin also employed a complex scheme to embed the transaction amount inside a Bulletproof range proof for wallet reconstruction, which was linked to the original switch commitment hash implementation. The latest implementation improved on all those aspects and uses a much simpler method to regain the transaction amount from a Bulletproof range proof.
Initial Implementation
The initial Grin implementation ([21], [34], [35], [54]) hides two things in the Bulletproof range proof: a transaction amount, for wallet reconstruction; and an optional switch commitment hash, to make the transaction perfectly binding^{def} later on, as opposed to currently being perfectly hiding^{def}. Perfect in this sense means that a quantum adversary (an attacker with infinite computing power) cannot tell what amount has been committed to, and is also unable to produce fake commitments. Computational means that no efficient algorithm running in a practical amount of time can reveal the committed amount or produce fake commitments, except with small probability. The Bulletproof range proofs are stored in the transaction kernel and will thus remain persistent in the blockchain.
In this implementation, a Grin transaction output contains the original (Elliptic Curve) Pedersen Commitment^{def} as well as the optional switch commitment hash. The switch commitment hash takes the resultant blinding factor $ b $, a third cyclic group random generator $ J $ and a wallet-seed derived random value $ r $ as input. The transaction output has the following form:
$$ (vG + bH \mspace{3mu} , \mspace{3mu} \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r)) $$
where $ \mathrm{H_{B2}} $ is the BLAKE2 hash function [44] and $ \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) $ the switch commitment hash. In order for such an amount to be spent, the owner needs to reveal $ b , r $ so that the verifier can check the opening of $ \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) $ by confirming that it matches the value stored in the switch commitment hash portion of the transaction output. Grin implemented the BLAKE2 hash function, which outperforms all mainstream hash function implementations in terms of hashing speed, with similar security to the latest Secure Hash Algorithm 3 (SHA-3) standard [44].
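The switch commitment hash and its opening check can be sketched with the standard library's BLAKE2b. The "curve point" $ bJ $ is represented here by a placeholder byte string, and the helper name is illustrative:

```python
# Sketch of the switch commitment hash H_B2(b*J, r) and its opening check.
# The point serialization and seed value below are illustrative stand-ins.
import hashlib

def switch_commit_hash(bJ: bytes, r: bytes) -> bytes:
    """BLAKE2b hash over the point b*J and the wallet-seed derived value r."""
    return hashlib.blake2b(bJ + r, digest_size=32).digest()

bJ = b"\x02" + bytes(32)         # stand-in serialization of the point b*J
r = bytes.fromhex("aa" * 32)     # stand-in wallet-seed derived random value

stored = switch_commit_hash(bJ, r)   # value kept in the transaction output

# Opening: the owner reveals (b*J, r); the verifier recomputes the hash and
# checks it against the stored switch commitment hash.
assert switch_commit_hash(bJ, r) == stored
assert switch_commit_hash(bJ, bytes(32)) != stored   # wrong r fails
```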
In the event of quantum adversaries, the owner of an output can choose to stay anonymous and not claim ownership, or can reveal $ bJ $ and $ r $, whereupon the amount can be moved to the (hopefully by then forked) quantum-resistant blockchain.
In the Bulletproof range proof protocol, two 32-byte scalar nonces $ \tau_1 , \alpha $ (their exact role is not important here) are generated with a secure random number generator. If the seed for the random number generator is known, the scalar values $ \tau_1 , \alpha $ can be recalculated when needed. Sixty-four (64) bytes of message space (out of 674 bytes of range proof) are made available by embedding a message into these variables using a logical $ \mathrm{XOR} $ gate. This message space is used to embed the transaction amount for wallet reconstruction.
To ensure that the transaction amount of the output cannot be spent by only opening the (Elliptic Curve) Pedersen Commitment $ vG + bH $, the switch commitment hash and embedded message are woven into the Bulletproof range proof calculation. The initial part is done by seeding the random number generator used to calculate $ \tau_1 , \alpha $ with the output from a seed function $ \mathrm S $ that takes as input a nonce $ \eta $ (which may be equal to the original blinding factor $ b $), the (Elliptic Curve) Pedersen Commitment^{def} $ P $ and the switch commitment hash:
$$ \mathrm S (\eta \mspace{3mu} , \mspace{3mu} P \mspace{3mu} , \mspace{3mu} \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) ) = \eta \mspace{3mu} \Vert \mspace{3mu} \mathrm{H_{S256}}(P \mspace{3mu} \Vert \mspace{3mu} \mathrm{H_{B2}}(bJ \mspace{3mu} , \mspace{3mu} r) ) $$
where $ \mathrm{H_{S256}} $ is the SHA-256 hash function. The Bulletproof range proof is then calculated with an adapted pair $ \tilde{\alpha} , \tilde{\tau_1} $, using the original $ \tau_1 , \alpha $ and two 32-byte words $ m_{w1} $ and $ m_{w2} $ that make up the 64-byte embedded message, as follows:
$$ \tilde{\alpha} = \mathrm {XOR} ( \alpha \mspace{3mu} , \mspace{3mu} m_{w1}) \mspace{12mu} \mathrm{and} \mspace{12mu} \tilde{\tau_1} = \mathrm {XOR} ( \tau_1 \mspace{3mu} , \mspace{3mu} m_{w2} ) $$
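The embed-and-recover mechanics follow directly from XOR being its own inverse. A minimal sketch, with illustrative stand-in values for the 32-byte scalars and the embedded message:

```python
# Sketch of the XOR message embedding: two 32-byte words of the message are
# folded into the scalars tau_1 and alpha, and recovered later by re-deriving
# the original scalars from the seeded random number generator.
import secrets

def xor32(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Original scalars (in the real protocol these are reproducible from the seed):
alpha, tau1 = secrets.token_bytes(32), secrets.token_bytes(32)

# 64-byte message (e.g. the transaction amount), split into two 32-byte words:
message = b"amount=21000000".ljust(64, b"\x00")   # illustrative payload
m_w1, m_w2 = message[:32], message[32:]

# Embedding: the adapted pair used in the range proof calculation.
alpha_t, tau1_t = xor32(alpha, m_w1), xor32(tau1, m_w2)

# Retrieval: knowing alpha and tau_1 (from the seed) inverts the XOR.
recovered = xor32(alpha_t, alpha) + xor32(tau1_t, tau1)
assert recovered == message
```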
To retrieve the embedded message, the process is simply inverted. Note that the owner of an output needs to keep a record of the blinding factor $ b $, the nonce $ \eta $ (if not equal to the blinding factor $ b $), as well as the wallet-seed derived random value $ r $, to be able to claim such an output.
Improved Implementation
The later Grin implementation ([56], [57]) uses Bulletproof range proof rewinding so that wallets can recognize their own transaction outputs. This negates the requirement to remember the wallet-seed derived random value $ r $, the nonce $ \eta $ for the seed function $ \mathrm S $, and the use of the adapted pair $ \tilde{\alpha} , \tilde{\tau_1} $ in the Bulletproof range proof calculation.
In this implementation, it is no longer necessary to store a hash of the switch commitment as part of the transaction output set, nor for it to be passed around during a transaction. The switch commitment looks exactly like the original (Elliptic Curve) Pedersen Commitment $ vG + bH $, but in this instance the blinding factor $ b $ is tweaked to be $$ b = b^\prime + \mathrm{H_{B2}} ( vG + b^\prime H \mspace{3mu} , \mspace{3mu} b^\prime J ) $$ with $ b^\prime $ being the user-generated blinding factor. The (Elliptic Curve) Pedersen Commitment then becomes $$ vG + b^\prime H + \mathrm{H_{B2}} ( vG + b^\prime H \mspace{3mu} , \mspace{3mu} b^\prime J ) H $$ After activation of the switch commitment in the age of quantum adversaries, users can reveal $ ( vG + b^\prime H \mspace{3mu} , \mspace{3mu} b^\prime J ) $, and verifiers can check whether it is computed correctly and use it as if it were the ElGamal Commitment^{def} $ ( vG + b H \mspace{3mu} , \mspace{3mu} b J ) $.
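The tweak-and-verify flow can be sketched with toy integer arithmetic standing in for elliptic curve points; the "generators", modulus and hash truncation below are illustrative, not the real secp256k1-zkp parameters:

```python
# Sketch of the tweaked blinding factor b = b' + H_B2(vG + b'H, b'J) and the
# post-switch verification. Scalars stand in for curve points (toy values).
import hashlib

q = 2**61 - 1          # toy group order (illustrative)
G, Hg, J = 2, 3, 5     # toy "generators": scalars standing in for points

def H_B2(*parts: int) -> int:
    data = b"".join(p.to_bytes(16, "big") for p in parts)
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big") % q

v, b_prime = 100, 123456789          # amount and user-generated blinding factor

P = (v * G + b_prime * Hg) % q       # vG + b'H
bJ = (b_prime * J) % q               # b'J
b = (b_prime + H_B2(P, bJ)) % q      # tweaked blinding factor
commitment = (v * G + b * Hg) % q    # looks like an ordinary Pedersen commitment

# After the switch, the owner reveals (vG + b'H, b'J); the verifier recomputes
# the tweak and checks that the commitment was formed correctly:
tweak = H_B2(P, bJ)
assert (P + tweak * Hg) % q == commitment
```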
GitHub Extracts
The extracts of the discussions below depict the initial and improved implementations of the switch commitment, and of retrieving transaction amounts from Bulletproofs for wallet reconstruction.
Bulletproofs #273 [35]
{yeastplume } "The only thing I think we're missing here from being able to use this implementation is the ability to store an amount within the range proof (for wallet reconstruction). From conversations with @apoelstra earlier, I believe it's possible to store 64 bytes worth of 'message' (not nearly as much as the current range proofs)."
{apoelstra} "Ok, I can get you 64 bytes without much trouble (xoring them into tau_1
and alpha
which are easy to extract from tau_x
and mu
if you know the original seed used to produce the randomness). I think it's possible to get another 32 bytes into t
but that's way more involved since t
is a big innerproduct."
Message hiding in Bulletproofs #721 [21]
"Breaking out from #273, we need the wind a message into a bulletproof similarly to how it could be done in 'Rangeproof Classic'. This is an absolute requirement as we need to embed an output's SwitchCommitHash
(which is otherwise not committed to) and embed an output amount for wallet reconstruction. We should be able to embed up to 64 bytes of message without too much difficulty, and another 32 with more difficulty (see original issue). 64 should be enough for the time being."
Switch Commits / Bulletproofs  Status #734 [34]
"The prove function takes a value, a secret key (blinding factor in our case), a nonce, optional extra_data and a generator and produces a 674 byte proof. I've also modified it to optionally take a message (more about this in a bit). It creates the Pedersen commitment it works upon internally with these values."
"The verify function takes a proof, a Pedersen commitment and optional extra_data and returns true if proof demonstrates that the value within the Pedersen commitment is in the range [0..2^64] (and the extra_data is correct)."
"Additionally, I've added an unwind function which takes a proof, a Pedersen commitment, optional extra_data and a 32 bit nonce (which needs to be the same as the original nonce used in order to return the same message) and returns the hidden message."
"If you have the correct Pedersen commitment and proof and extra_data, and attempt to unwind a message out using the wrong nonce, the attempt won't fail, you'll get out gibberish or just wildly incorrect values as you parse back the bytes."
"The SwitchCommitHash
is currently a field of an output, and while it is stored in the Txo set and passed around during a transaction, it is not currently included in the output's hash. It is passed in as the extra_data field above, meaning that anyone validating the range proof also needs to have the correct switch commit in order to validate the range proof."
Removed all switch commitment usages, including restore #841 [55]
{ignopeverell} "After some discussion with @antiochp, @yeastplume and @tromp, we decided switch commitments weren't worth the cost of maintaining them and their drawbacks. Removing them."
{ignopeverell} "For reference, switch commitments were found to:
 add a lot of complexity and assumptions
 take additional space for little benefit right now
 allow the inclusion of arbitrary data, potentially for the worst
 provide little to no advantage in case of quantamageddon (as range proofs are still a weakness)"
{apoelstra} "After chatting with @yeastplume on IRC, I realize that we can actually use rangeproof rewinding for wallets to recognize their own outputs, which even avoids the "gap" problem of just scanning for pregenerated keys. With that in mind, it's true that the benefit of switch commitments for MW are not spectacular."
Switch commitment discussion #998 [56]
{antiochp} "Sounds like there is a "zero cost" way of getting switch commitments in as part of the commitment itself, so we would not need to store and maintain a separate "switch commitment" on each output. I saw that switch commitments have been removed for various reasons."
"Let me suggest a variant (idea suggested by Pieter Wuille initially): The switch commitment is (vG + bH), where b = b' + hash(vG + b'H,b'J). (So this "tweaks" the commitment, in a pay-to-contract / taproot style). Before the switch, this is just used like a normal Pedersen Commitment vG + bH. After the switch, users can reveal (vG + b'H, b'J), and verifiers check if it's computed correctly and use it as if it were the ElGamal commitment (vG + bH, bJ)."
{@ignopeverell} modified the milestones: Beta / testnet3, Mainnet on 11 Jul
{@ignopeverell} added the must-have label on 24 Aug
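The variant quoted above can be sketched in a toy multiplicative group, writing the commitment $ vG + bH $ as $ g^v h^b $. This is an illustrative sketch only: the group is tiny, the hash-to-scalar helper is hypothetical, and the generators are derived from known logarithms, which a real deployment must never allow. It shows the tweak $ b = b^\prime + \mathrm{hash}(vG + b^\prime H, b^\prime J) $ and how, after the switch, a verifier can check the tweak without learning $ v $:

```python
import hashlib

# Toy subgroup of prime order q = 11 inside Z_23^* (illustration only).
q, p = 11, 23
g = 2                  # generator of the order-11 subgroup
h = pow(g, 4, p)       # second generator H (toy: its discrete log is known!)
j = pow(g, 5, p)       # third generator J for the ElGamal-style opening

def hash_to_scalar(*elements):
    # Hypothetical helper: hash group elements down to a scalar mod q.
    data = ",".join(str(e) for e in elements).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def switch_commit(v, b_prime):
    """Commit to v with tweaked blinding b = b' + hash(g^v h^b', j^b')."""
    c_prime = (pow(g, v, p) * pow(h, b_prime, p)) % p   # vG + b'H
    j_prime = pow(j, b_prime, p)                        # b'J
    b = (b_prime + hash_to_scalar(c_prime, j_prime)) % q
    C = (pow(g, v, p) * pow(h, b, p)) % p               # ordinary Pedersen before the switch
    return C, (c_prime, j_prime)

def verify_switch(C, opening):
    """After the switch: check the tweak is correct without learning v."""
    c_prime, j_prime = opening
    t = hash_to_scalar(c_prime, j_prime)
    return C == (c_prime * pow(h, t, p)) % p

C, opening = switch_commit(v=7, b_prime=3)
assert verify_switch(C, opening)
```

Before the switch, the commitment is used exactly like a normal Pedersen Commitment; only when the switch is needed does the prover reveal the pair $ (vG + b^\prime H, \mspace{3mu} b^\prime J) $.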
Conclusions, Observations, Recommendations
 Bulletproofs are not Bulletproofs are not Bulletproofs. This is evident by comparing the functionality, security and performance of all the current different Bulletproof implementations as well as the evolving nature of Bulletproofs.
 The security audit instigated by the Monero project on their Bulletproofs implementation and the resulting findings and corrective actions prove that every implementation of Bulletproofs has potential risk. This risk is due to the nature of confidential transactions; transacted values and token owners are not public.
 The growing number of open source Bulletproof implementations should strengthen the development of a new confidential blockchain protocol like Tari.
 In the pure implementation of Bulletproof range proofs, a discrete-log attacker (e.g. a bad actor employing a quantum computer) would be able to exploit Bulletproofs to silently inflate any currency that used them. Bulletproofs are perfectly hiding^{def} (i.e. confidential), but only computationally binding^{def} (i.e. not quantum resistant). Unconditional soundness is lost due to the data compression being employed. ([1], [5], [6] and [10])
 Bulletproofs are not only about range proofs. All the different Bulletproof use cases have a potential implementation in a new confidential blockchain protocol like Tari; in the base layer as well as in the probable 2nd layer.
References
[1] Bulletproofs: Short Proofs for Confidential Transactions and More, Blockchain Protocol Analysis and Security Engineering 2018, Bünz B. et al., http://web.stanford.edu/~buenz/pubs/bulletproofs.pdf, Date accessed: 2018-09-18.
[2] Bullet Proofs (Transcript), Bitcoin Milan Meetup 2018-02-02, Andrew Poelstra, https://diyhpl.us/wiki/transcripts/20180202andrewpoelstraBulletproofs, Date accessed: 2018-09-10.
[3] Bullet Proofs (Slides), Bitcoin Milan Meetup 2018-02-02, Andrew Poelstra, https://drive.google.com/file/d/18OTVGX7COgvnZ7T0keajhMWwOHOWfKV/view, Date accessed: 2018-09-10.
[4] Decoding zk-SNARKs, https://medium.com/wolverineblockchain/decodingzksnarks85e73886a040, Date accessed: 2018-09-17.
[5] Bulletproofs: Short Proofs for Confidential Transactions and More (Slides), Bünz B. et al., https://cyber.stanford.edu/sites/default/files/bpase18.pptx, Date accessed: 2018-09-18.
[6] Bulletproofs: Short Proofs for Confidential Transactions and More (Transcripts), Bünz B. et al., http://diyhpl.us/wiki/transcripts/blockchainprotocolanalysissecurityengineering/2018/Bulletproofs, Date accessed: 2018-09-18.
[7] Merkle Root and Merkle Proofs, https://bitcoin.stackexchange.com/questions/69018/MerklerootandMerkleproofs, Date accessed: 2018-10-10.
[8] Bulletproofs audit: fundraising, https://forum.getmonero.org/22/completedtasks/90007/Bulletproofsauditfundraising, Date accessed: 2018-10-23.
[9] The QuarksLab and Kudelski Security audits of Monero Bulletproofs are Complete, https://ostif.org/thequarkslabandkudelskisecurityauditsofmoneroBulletproofsarecomplete, Date accessed: 2018-10-23.
[10] Bulletproofs presentation at Feb 2 Milan Meetup (Andrew Poelstra), Reddit, https://www.reddit.com/r/Bitcoin/comments/7w72pq/Bulletproofs_presentation_at_feb_2_milan_meetup, Date accessed: 2018-09-10.
[11] The OSTIF and QuarksLab Audit of Monero Bulletproofs is Complete – Critical Bug Patched, https://ostif.org/theostifandquarkslabauditofmoneroBulletproofsiscompletecriticalbugpatched, Date accessed: 2018-10-23.
[12] Efficient zero-knowledge arguments for arithmetic circuits in the discrete log setting, Bootle J. et al., Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 327-357, Springer, 2016, https://eprint.iacr.org/2016/263.pdf, Date accessed: 2018-09-21.
[13] Linear Algebra with Sub-linear Zero-Knowledge Arguments, Groth J., https://link.springer.com/content/pdf/10.1007%2F9783642033568_12.pdf, Date accessed: 2018-09-21.
[14] The XEdDSA and VXEdDSA Signature Schemes, Perrin T., 2016-10-20, https://signal.org/docs/specifications/xeddsa & https://signal.org/docs/specifications/xeddsa/xeddsa.pdf, Date accessed: 2018-10-23.
[15] Confidential Assets, Poelstra A. et al., Blockstream, https://blockstream.com/bitcoin17final41.pdf, Date accessed: 2018-09-25.
[16] Wikipedia: Zero-knowledge Proof, https://en.wikipedia.org/wiki/Zeroknowledge_proof, Date accessed: 2018-09-18.
[17] Wikipedia: Discrete logarithm, https://en.wikipedia.org/wiki/Discrete_logarithm, Date accessed: 2018-09-20.
[18] How to Prove Yourself: Practical Solutions to Identification and Signature Problems, Fiat A. et al., CRYPTO 1986: pp. 186-194, https://link.springer.com/content/pdf/10.1007%2F3540477217_12.pdf, Date accessed: 2018-09-20.
[19] How not to Prove Yourself: Pitfalls of the Fiat-Shamir Heuristic and Applications to Helios, Bernhard D. et al., https://link.springer.com/content/pdf/10.1007%2F9783642349614_38.pdf, Date accessed: 2018-09-20.
[20] Mimblewimble Explained, https://www.weusecoins.com/mimblewimbleandrewpoelstra/, Date accessed: 2018-09-10.
[21] Message hiding in Bulletproofs #721, https://github.com/mimblewimble/grin/issues/721, Date accessed: 2018-09-10.
[22] pedersen-commitment: An implementation of Pedersen Commitment schemes, https://hackage.haskell.org/package/pedersencommitment, Date accessed: 2018-09-25.
[23] Zero Knowledge Proof Standardization - An Open Industry/Academic Initiative, https://zkproof.org/documents.html, Date accessed: 2018-09-26.
[24] SEC 2: Recommended Elliptic Curve Domain Parameters, Standards for Efficient Cryptography, 20 September 2000, http://safecurves.cr.yp.to/www.secg.org/SEC2Ver1.0.pdf, Date accessed: 2018-09-26.
[25] GitHub: ElementsProject/secp256k1-zkp, Experimental Fork of libsecp256k1 with Support for Pedersen Commitments and range proofs, https://github.com/ElementsProject/secp256k1zkp, Date accessed: 2018-09-18.
[26] GitHub: apoelstra/secp256k1-mw, Fork of libsecp256k1-zkp d78f12b to Add Support for Mimblewimble Primitives, https://github.com/apoelstra/secp256k1mw/tree/Bulletproofs, Date accessed: 2018-09-18.
[27] GitHub: bbuenz/BulletProofLib, Library for generating non-interactive zero-knowledge proofs without trusted setup (Bulletproofs), https://github.com/bbuenz/BulletProofLib, Date accessed: 2018-09-18.
[28] GitHub: dalek-cryptography/bulletproofs, A pure-Rust implementation of Bulletproofs using Ristretto, https://github.com/dalekcryptography/Bulletproofs, Date accessed: 2018-09-18.
[29] GitHub: adjoint-io/bulletproofs, Bulletproofs are Short Non-interactive Zero-knowledge Proofs that Require no Trusted Setup, https://github.com/adjointio/Bulletproofs, Date accessed: 2018-09-10.
[30] GitHub: mimblewimble/secp256k1-zkp, Fork of secp256k1-zkp for the Grin/MimbleWimble project, https://github.com/mimblewimble/secp256k1zkp, Date accessed: 2018-09-18.
[31] The Ristretto Group, https://ristretto.group/ristretto.html, Date accessed: 2018-10-23.
[32] SafeCurves: choosing safe curves for elliptic-curve cryptography, http://safecurves.cr.yp.to/, Date accessed: 2018-10-23.
[33] Two 1-Round Protocols for Delegation of Computation, Canetti R. et al., https://eprint.iacr.org/2011/518.pdf, Date accessed: 2018-10-11.
[34] GitHub: mimblewimble/grin, Switch Commits / Bulletproofs - Status #734, https://github.com/mimblewimble/grin/issues/734, Date accessed: 2018-09-10.
[35] GitHub: mimblewimble/grin, Bulletproofs #273, https://github.com/mimblewimble/grin/issues/273, Date accessed: 2018-09-10.
[36] Wikipedia: Commitment scheme, https://en.wikipedia.org/wiki/Commitment_scheme, Date accessed: 2018-09-26.
[37] Cryptography Wikia: Commitment scheme, http://cryptography.wikia.com/wiki/Commitment_scheme, Date accessed: 2018-09-26.
[38] Curve25519: new Diffie-Hellman speed records, Bernstein D.J., https://cr.yp.to/ecdh/curve2551920060209.pdf, Date accessed: 2018-09-26.
[39] Twisted Edwards Curves Revisited, Hisil H. et al., Information Security Institute, Queensland University of Technology, https://iacr.org/archive/asiacrypt2008/53500329/53500329.pdf, Date accessed: 2018-09-26.
[40] Assumptions Related to Discrete Logarithms: Why Subtleties Make a Real Difference, Sadeghi A. et al., http://www.semper.org/sirene/publ/SaSt_01.dhetal.long.pdf, Date accessed: 2018-09-24.
[41] Crypto Wiki: Cryptographic nonce, http://cryptography.wikia.com/wiki/Cryptographic_nonce, Date accessed: 2018-10-08.
[42] Wikipedia: Cryptographic nonce, https://en.wikipedia.org/wiki/Cryptographic_nonce, Date accessed: 2018-10-08.
[43] Switch Commitments: A Safety Switch for Confidential Transactions, Ruffing T. et al., Saarland University, https://people.mmci.unisaarland.de/~truffing/papers/switchcommitments.pdf, Date accessed: 2018-10-08.
[44] BLAKE2 — fast secure hashing, https://blake2.net, Date accessed: 2018-10-08.
[45] GitHub: mimblewimble/rust-secp256k1-zkp, https://github.com/mimblewimble/rustsecp256k1zkp, Date accessed: 2018-11-16.
[46] GitHub: monero-project/monero, https://github.com/moneroproject/monero/tree/master/src/ringct, Date accessed: 2018-11-16.
[47] Wikipedia: Arithmetic circuit complexity, https://en.wikipedia.org/wiki/Arithmetic_circuit_complexity, Date accessed: 2018-11-08.
[48] Simple Schnorr Multi-Signatures with Applications to Bitcoin, Maxwell G. et al., 20 May 2018, https://eprint.iacr.org/2018/068.pdf, Date accessed: 2018-07-24.
[49] GitHub: b-g-goodell/research-lab, https://github.com/bggoodell/researchlab/tree/master/sourcecode/StringCTjava, Date accessed: 2018-11-16.
[50] Wikipedia: One-way function, https://en.wikipedia.org/wiki/Oneway_function, Date accessed: 2018-11-27.
[51] Intensified ElGamal Cryptosystem (IEC), Sharma P. et al., International Journal of Advances in Engineering & Technology, Jan 2012, http://www.eijaet.org/media/58I6IJAET0612695.pdf, Date accessed: 2018-10-09.
[52] On the Security of ElGamal Based Encryption, Tsiounis Y. et al., https://drive.google.com/file/d/16XGAByoXse5NQl57v_GldJwzmvaQlS94/view, Date accessed: 2018-10-09.
[53] Wikipedia: Decisional Diffie–Hellman assumption, https://en.wikipedia.org/wiki/Decisional_Diffie%E2%80%93Hellman_assumption, Date accessed: 2018-10-09.
[54] GitHub: mimblewimble/grin, Bulletproof messages #730, https://github.com/mimblewimble/grin/pull/730, Date accessed: 2018-11-29.
[55] GitHub: mimblewimble/grin, Removed all switch commitment usages, including restore #841, https://github.com/mimblewimble/grin/pull/841, Date accessed: 2018-11-29.
[56] GitHub: mimblewimble/grin, Switch commitment discussion #998, https://github.com/mimblewimble/grin/issues/998, Date accessed: 2018-11-29.
[57] GitHub: mimblewimble/grin, [DNM] Switch commitments #2007, https://github.com/mimblewimble/grin/pull/2007, Date accessed: 2018-11-29.
[58] Provisions: Privacy-preserving proofs of solvency for Bitcoin exchanges, Dagher G. et al., Oct 2015, https://eprint.iacr.org/2015/1008.pdf, Date accessed: 2018-11-29.
[59] Bulletproofs: Faster Rangeproofs and Much More, Poelstra A., February 2018, https://blockstream.com/2018/02/21/bulletproofsfasterrangeproofsandmuchmore/, Date accessed: 2018-11-30.
[60] Homomorphic Mini-blockchain Scheme, Franca B., April 2015, http://cryptonite.info/files/HMBC.pdf, Date accessed: 2018-11-22.
[61] Efficient Implementation of Pedersen Commitments Using Twisted Edwards Curves, Franck C. and Großschädl J., University of Luxembourg, http://orbilu.uni.lu/bitstream/10993/33705/1/MSPN2017.pdf, Date accessed: 2018-11-22.
[62] An investigation into Confidential Transactions, Gibson A., July 2018, https://github.com/AdamISZ/ConfidentialTransactionsDoc/blob/master/essayonCT.pdf, Date accessed: 2018-11-22.
[63] How to do Zero-Knowledge from Discrete-Logs in under 7 kB, Bootle J., October 2016, https://www.benthamsgaze.org/2016/10/25/howtodozeroknowledgefromdiscretelogsinunder7kb, Date accessed: 2019-01-18.
Appendices
Appendix A: Definition of Terms
Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.

 Arithmetic Circuits: An arithmetic circuit $ C $ over a field $ F $ and variables $ (x_1, ..., x_n) $ is a directed acyclic graph whose vertices are called gates. Arithmetic circuits can alternatively be described as a list of addition and multiplication gates with a collection of linear consistency equations relating the inputs and outputs of the gates. The size of an arithmetic circuit is the number of gates in it, with the depth being the length of the longest directed path. Upper bounding the complexity of a polynomial $ f $ is to find any arithmetic circuit that can calculate $ f $, whereas lower bounding is to find the smallest arithmetic circuit that can calculate $ f $. An example of a simple arithmetic circuit with size six and depth two that calculates a polynomial is shown below. ([29], [47])
 Argument of Knowledge System: Proof systems with computational soundness like Bulletproofs are sometimes called argument systems. The terms proof and argument of knowledge have exactly the same meaning and can be used interchangeably. [29]
 Commitment Scheme: A commitment scheme in a Zero-knowledge Proof^{def} is a cryptographic primitive that allows a prover to commit to only a single chosen value/statement from a finite set without the ability to change it later (binding property) while keeping it hidden from a verifier (hiding property). Both binding and hiding properties are then further classified in increasing levels of security to be computational, statistical or perfect. No commitment scheme can at the same time be perfectly binding and perfectly hiding. ([36], [37])
 Discrete Logarithm/Discrete Logarithm Problem (DLP): In the mathematics of real numbers, the logarithm $ \log_b a $ is a number $ x $ such that $ b^x=a $, for given numbers $ a $ and $ b $. Analogously, in any group $ G $, powers $ b^k $ can be defined for all integers $ k $, and the discrete logarithm $ \log_b a $ is an integer $ k $ such that $ b^k=a $. Algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen cyclic finite groups and cyclic subgroups of elliptic curves over finite fields has no efficient solution. ([17], [40])
 Elliptic Curve Pedersen Commitment: An efficient implementation of the Pedersen Commitment ([15], [22]) will use secure Elliptic Curve Cryptography (ECC), which is based on the algebraic structure of elliptic curves over finite (prime) fields. Elliptic curve points are used as basic mathematical objects, instead of numbers. Note that traditionally in elliptic curve arithmetic lower case letters are used for ordinary numbers (integers) and upper case letters for curve points. ([60], [61], [62])

The generalized Elliptic Curve Pedersen Commitment definition follows (refer to Appendix B: Notations Used):

Let $ \mathbb F_p $ be the group of elliptic curve points, where $ p $ is a large prime.

 Let $ G \in \mathbb F_p $ be a random generator point (base point) and let $ H \in \mathbb F_p $ be specially chosen so that the value $ x_H $ satisfying $ H = x_H G $ cannot be found, except if the Elliptic Curve DLP (ECDLP) is solved.

Let $ r $ (the blinding factor) be a random value and element of $ \mathbb Z_p $.

The commitment to value $ x \in \mathbb Z_p $ is then determined by calculating $ C(x,r) = rH + xG $, which is called the Elliptic Curve Pedersen Commitment.


 Elliptic curve point addition is analogous to multiplication in the originally defined Pedersen Commitment. Thus $ g^x $, the number $ g $ multiplied by itself $ x $ times, is analogous to $ xG $, the elliptic curve point $ G $ added to itself $ x $ times. In this context $ xG $ is also a point in $ \mathbb F_p $.

In the Elliptic Curve context $ C(x,r) = rH + xG $ is then analogous to $ C(x,r) = h^r g^x $.

The number $ H $ is what is known as a Nothing Up My Sleeve (NUMS) number. With secp256k1 the value of $ H $ is the SHA256 hash of a simple encoding of the prespecified generator point $ G $.

 Similar to Pedersen Commitments, the Elliptic Curve Pedersen Commitments are also additively homomorphic, such that for messages $ x $, $ x_0 $ and $ x_1 $, blinding factors $ r $, $ r_0 $ and $ r_1 $ and scalar $ k $ the following relations hold: $ C(x_0,r_0) + C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $ and $ C(k \cdot x, k \cdot r) = k \cdot C(x, r) $.

In secure implementations of ECC it is as hard to guess $ x $ from $ xG $ as it is to guess $ x $ from $g^x $. This is called the Elliptic Curve DLP (ECDLP).

 Practical implementations usually consist of three algorithms: `Setup()` to set up the commitment parameters; `Commit()` to commit to the message using the commitment parameters; and `Open()` to open and verify the commitment.

 ElGamal Commitment/Encryption: An ElGamal commitment is a Pedersen Commitment ([15], [22]) with an additional commitment $ g^r $ to the randomness used. The ElGamal encryption scheme is based on the Decisional Diffie-Hellman (DDH) assumption and the difficulty of the DLP for finite fields. The DDH assumption states that it is infeasible for a Probabilistic Polynomial-time (PPT) adversary to solve the DDH problem. (Note: The ElGamal encryption scheme should not be confused with the ElGamal signature scheme.) ([1], [51], [52], [53])
 Fiat–Shamir Heuristic/Transformation: The Fiat–Shamir heuristic is a technique in cryptography to convert an interactive public-coin protocol (Sigma protocol) between a prover and a verifier into a one-message (non-interactive) protocol using a cryptographic hash function. ([18], [19])
 The prover will use a `Prove()` algorithm to calculate a commitment $ A $ with a statement $ Y $ that is shared with the verifier and a secret witness value $ w $ as inputs. The commitment $ A $ is then hashed to obtain the challenge $ c $, which is further processed with the `Prove()` algorithm to calculate the response $ f $. The single message sent to the verifier then contains the challenge $ c $ and response $ f $.
 The verifier is then able to compute the commitment $ A $ from the shared statement $ Y $, challenge $ c $ and response $ f $. The verifier will then use a `Verify()` algorithm to verify the combination of shared statement $ Y $, commitment $ A $, challenge $ c $ and response $ f $.
 A weak Fiat–Shamir transformation can be turned into a strong Fiat–Shamir transformation if the hash function is applied to the commitment $ A $ and shared statement $ Y $ to obtain the challenge $ c $, as opposed to only the commitment $ A $.
 Nonce: In security engineering, nonce is an abbreviation of number used once. In cryptography, a nonce is an arbitrary number that can be used just once. It is often a random or pseudorandom number issued in an authentication protocol to ensure that old communications cannot be reused in replay attacks. ([41], [42])
 Zero-knowledge Proof/Protocol: In cryptography, a zero-knowledge proof/protocol is a method by which one party (the prover) can convince another party (the verifier) that a statement $ Y $ is true, without conveying any information apart from the fact that the prover knows the value of $ Y $. The proof system must be complete, sound and zero-knowledge. ([16], [23])

Complete: If the statement is true and both prover and verifier follow the protocol, the verifier will accept.

Sound: If the statement is false and the verifier follows the protocol, the verifier will not be convinced.

Zero-knowledge: If the statement is true and the prover follows the protocol, the verifier will not learn any confidential information from the interaction with the prover, apart from the fact that the statement is true.

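As a worked sketch of the Fiat–Shamir and zero-knowledge definitions above, the following toy Schnorr proof of knowledge of $ w $ with shared statement $ Y = g^w $ is made non-interactive by deriving the challenge as $ c = H(Y, A) $, the strong variant that hashes the statement as well as the commitment. All parameters here are illustrative and insecure (a tiny subgroup, a hypothetical hash-to-scalar helper):

```python
import hashlib

# Toy subgroup of prime order q = 11 inside Z_p^*, with p = 2q + 1 = 23.
q, p = 11, 23
g = 2                  # generator of the order-11 subgroup

def H(*elems):
    # Hypothetical helper: hash shared values down to a challenge scalar.
    data = ",".join(str(e) for e in elems).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(w, nonce):
    """Non-interactive Schnorr proof of knowledge of w for Y = g^w."""
    Y = pow(g, w, p)               # shared statement
    A = pow(g, nonce, p)           # commitment
    c = H(Y, A)                    # strong Fiat-Shamir: hash Y as well as A
    f = (nonce + c * w) % q        # response
    return Y, (c, f)

def verify(Y, proof):
    """Recompute A = g^f * Y^(-c) and check the challenge matches."""
    c, f = proof
    A = (pow(g, f, p) * pow(Y, (-c) % q, p)) % p
    return c == H(Y, A)

Y, proof = prove(w=7, nonce=5)
assert verify(Y, proof)
```

The single message $ (c, f) $ replaces the interactive commit-challenge-response exchange, which is exactly the transformation Bulletproofs rely on to be non-interactive.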
Appendix B: Notations Used
The general notation of mathematical expressions, when specifically referenced, is listed here, based on [1].
 Let $ p $ and $ q $ be large prime numbers.
 Let $ \mathbb G $ and $ \mathbb Q $ denote cyclic groups of prime order $ p $ and $ q $ respectively.
 Let $ \mathbb Z_p $ and $ \mathbb Z_q $ denote the rings of integers $ modulo \mspace{4mu} p $ and $ modulo \mspace{4mu} q $ respectively.
 Let generators of $ \mathbb G $ be denoted by $ g, h, v, u \in \mathbb G $. In other words, there exists a number $ g \in \mathbb G $ such that $ \mathbb G = \lbrace 1 \mspace{3mu} , \mspace{3mu} g \mspace{3mu} , \mspace{3mu} g^2 \mspace{3mu} , \mspace{3mu} g^3 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g^{p-1} \rbrace \equiv \mathbb Z_p $. Note that not every element of $ \mathbb Z_p $ is a generator of $ \mathbb G $.
 Let $ \mathbb Z_p^* $ denote $ \mathbb Z_p \setminus \lbrace 0 \rbrace $ and $ \mathbb Z_q^* $ denote $ \mathbb Z_q \setminus \lbrace 0 \rbrace $, that is all invertible elements of $ \mathbb Z_p $ and $ \mathbb Z_q $ respectively. This excludes the element $ 0 $ which is not invertible.
Contributors
 https://github.com/hansieodendaal
 https://github.com/CjS77
 https://github.com/SWvheerden
 https://github.com/philiprza
 https://github.com/neonknight64
The Bulletproof Protocols
Introduction
An overview of Bulletproofs has been given in Bulletproofs and Mimblewimble, largely based on the original work by Bünz et al. [1]. They documented a number of different Bulletproof protocols, but not all of them in an obvious manner. This report summarizes and explains the different Bulletproof protocols in as simple terms as possible. It also simplifies the logic and explains the base mathematical concepts in more detail where prior knowledge was assumed. The report concludes with a discussion of an improved Bulletproof zero-knowledge proof protocol proposed by some community members, following an evolutionary approach.
Contents
 The Bulletproof Protocols
 Introduction
 Contents
 Preliminaries
 Bulletproof Protocols
 Inner-product Argument (Protocol 1)
 How the Proof System for Protocol 1 Works, Shrinking by Recursion
 Inner-product Verification through Multi-exponentiation (Protocol 2)
 Range Proof Protocol with Logarithmic Size
 Zero-knowledge Proof for Arithmetic Circuits
 Optimized Verifier using Multi-exponentiation and Batch Verification
 Evolving Bulletproof Protocols
 Conclusions, Observations, Recommendations
 References
 Appendices
 Contributors
Preliminaries
Notations Used
The general notation of mathematical expressions, when specifically referenced, is listed here, based on [1]. These notations are important prior knowledge for the remainder of the report.
 Let $ p $ and $ q $ be large prime numbers.
 Let $ \mathbb G $ and $ \mathbb Q $ denote cyclic groups of prime order $ p $ and $ q $ respectively.
 Let $ \mathbb Z_p $ and $ \mathbb Z_q $ denote the rings of integers $ modulo \mspace{4mu} p $ and $ modulo \mspace{4mu} q $ respectively.
 Let generators of $ \mathbb G $ be denoted by $ g, h, v, u \in \mathbb G $. In other words, there exists a number $ g \in \mathbb G $ such that $ \mathbb G = \lbrace 1 \mspace{3mu} , \mspace{3mu} g \mspace{3mu} , \mspace{3mu} g^2 \mspace{3mu} , \mspace{3mu} g^3 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g^{p-1} \rbrace \equiv \mathbb Z_p $. Note that not every element of $ \mathbb Z_p $ is a generator of $ \mathbb G $.
 Let $ \mathbb Z_p^* $ denote $ \mathbb Z_p \setminus \lbrace 0 \rbrace $ and $ \mathbb Z_q^* $ denote $ \mathbb Z_q \setminus \lbrace 0 \rbrace $, that is all invertible elements of $ \mathbb Z_p $ and $ \mathbb Z_q $ respectively. This excludes the element $ 0 $ which is not invertible.
 Let $ \mathbb G^n $ and $ \mathbb Z^n_p $ be vector spaces of dimension $ n $ over $ \mathbb G $ and $ \mathbb Z_p $ respectively.
 Let $ h^r \mathbf g^\mathbf x = h^r \prod_i g_i^{x_i} \in \mathbb G $ be the vector Pedersen Commitment^{def} with $ \mathbf {g} = (g_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g_n) \in \mathbb G^n $ and $ \mathbf {x} = (x_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} x_n) \in \mathbb Z_p^n $.
 Let $ \mathbf {a} \in \mathbb F^n $ be a vector with elements $ a_1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_n \in \mathbb F $.
 Let $ \langle \mathbf {a}, \mathbf {b} \rangle = \sum _{i=1}^n {a_i \cdot b_i} $ denote the inner-product between two vectors $ \mathbf {a}, \mathbf {b} \in \mathbb F^n $.
 Let $ \mathbf {a} \circ \mathbf {b} = (a_1 \cdot b_1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_n \cdot b_n) \in \mathbb F^n $ denote the entry-wise multiplication of two vectors $ \mathbf {a}, \mathbf {b} \in \mathbb F^n $.
 Let $ \mathbf {A} \circ \mathbf {B} = (a_{11} \cdot b_{11} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{1m} \cdot b_{1m} \mspace{6mu} ; \mspace{6mu} . . . \mspace{6mu} ; \mspace{6mu} a_{n1} \cdot b_{n1} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{nm} \cdot b_{nm} ) $ denote the entry-wise multiplication of two matrices, also known as the Hadamard Product^{def}.
 Let $ \mathbf {a} \parallel \mathbf {b} $ denote the concatenation of two vectors; if $ \mathbf {a} \in \mathbb Z_p^n $ and $ \mathbf {b} \in \mathbb Z_p^m $ then $ \mathbf {a} \parallel \mathbf {b} \in \mathbb Z_p^{n+m} $.
 Let $ p(X) = \sum _{i=0}^d { \mathbf {p_i} \cdot X^i} \in \mathbb Z_p^n [X] $ be a vector polynomial where each coefficient $ \mathbf {p_i} $ is a vector in $ \mathbb Z_p^n $.
 Let $ \langle l(X),r(X) \rangle = \sum _{i=0}^d { \sum _{j=0}^i { \langle l_i,r_j \rangle \cdot X^{i+j}}} \in \mathbb Z_p [X] $ denote the inner-product between two vector polynomials $ l(X),r(X) $.
 Let $ t(X)=\langle l(X),r(X) \rangle $, then the inner-product is defined such that $ t(x)=\langle l(x),r(x) \rangle $ holds for all $ x \in \mathbb{Z_p} $.
 Let $ C = \mathbf g^{\mathbf a} = \prod _{i=1}^n g_i^{a_i} \in \mathbb{G} $ be a binding (but not hiding) commitment to the vector $ \mathbf {a} \in \mathbb Z_p^n $ where $ \mathbf {g} = (g_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} g_n) \in \mathbb G^n $. Given a vector $ \mathbf {b} \in \mathbb Z_p^n $ with non-zero entries, $ C $ can be treated as a new commitment to $ \mathbf {a} \circ \mathbf {b} $. For this, let $ g_i^\backprime = g_i^{(b_i^{-1})} $ such that $ C = \prod _{i=1}^n (g_i^\backprime)^{a_i \cdot b_i} $. The binding property of this new commitment is inherited from the old commitment.
 Let $ \mathbf a _{[:l]} = ( a_1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_l ) \in \mathbb F ^ l$ and $ \mathbf a _{[l:]} = ( a_{l+1} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_n ) \in \mathbb F ^ {n-l} $ be slices of vectors for $ 0 \le l \le n $ (using Python notation).
 Let $ \mathbf {k}^n $ denote the vector containing the first $ n $ powers of $ k \in \mathbb Z_p^* $ such that $ \mathbf {k}^n = (1,k,k^2, \mspace{3mu} ... \mspace{3mu} ,k^{n-1}) \in (\mathbb Z_p^*)^n $.
 Let $ \mathcal{P} $ and $ \mathcal{V} $ denote the prover and verifier respectively.
 Let $ \mathcal{P_{IP}} $ and $ \mathcal{V_{IP}} $ denote the prover and verifier in relation to inner-product calculations respectively.
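A few of the vector operations defined above can be made concrete with a short sketch (plain Python over $ \mathbb Z_p $ with a toy prime; the function names are illustrative):

```python
# Toy implementations of the vector notation above, over Z_p with p = 13.
p = 13

def inner(a, b):
    """Inner-product <a, b> = sum of a_i * b_i, reduced mod p."""
    return sum(x * y for x, y in zip(a, b)) % p

def hadamard(a, b):
    """Entry-wise (Hadamard) product a o b."""
    return [(x * y) % p for x, y in zip(a, b)]

def concat(a, b):
    """Concatenation a || b."""
    return a + b

def powers(k, n):
    """k^n = (1, k, k^2, ..., k^(n-1))."""
    return [pow(k, i, p) for i in range(n)]

a, b = [1, 2, 3], [4, 5, 6]
assert inner(a, b) == (4 + 10 + 18) % p          # 32 mod 13 = 6
assert hadamard(a, b) == [4, 10, 5]              # 18 mod 13 = 5
assert concat(a, b) == [1, 2, 3, 4, 5, 6]
assert powers(2, 4) == [1, 2, 4, 8]
```

These are the primitive operations the inner-product argument below repeatedly folds and shrinks.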
Pedersen Commitments and Elliptic Curve Pedersen Commitments
The basis of confidential transactions is the Pedersen Commitment scheme defined by Pedersen T. [15].
A commitment scheme in a Zero-knowledge Proof^{def} is a cryptographic primitive that allows a prover to commit to only a single chosen value/statement from a finite set without the ability to change it later (the binding property), while keeping it hidden from a verifier (the hiding property). Both the binding and hiding properties are further classified in increasing levels of security as computational, statistical or perfect. At one end of the scale, perfect means that a quantum adversary (an attacker with infinite computing power) cannot tell what amount has been committed to and is also unable to produce fake commitments. Statistical means that the probability for an adversary to do the same in a finite amount of time is negligible. The least secure, computational, means that no efficient algorithm running in a practical amount of time can reveal the commitment amount or produce fake commitments, except with small probability. No commitment scheme can at the same time be perfectly binding and perfectly hiding. ([12], [13])
Two variations of the Pedersen Commitment scheme sharing the same security attributes exist as defined below:
 Pedersen Commitment: The Pedersen Commitment is a system for making a blinded non-interactive commitment to a value. ([1], [3], [8], [14], [15]).

The generalized Pedersen Commitment definition follows (refer to Notations Used):

Let $ q $ be a large prime and $ p $ be a large safe prime such that $ p = 2q + 1 $.

Let $ h $ be a random generator of cyclic group $ \mathbb G $ such that $ h $ is an element of $ \mathbb Z_q^* $.

Let $ a $ be a random value and element of $ \mathbb Z_q^* $ and calculate $ g $ such that $ g = h^a $.

Let $ r $ (the blinding factor) be a random value and element of $ \mathbb Z_p^* $.

The commitment to value $ x $ is then determined by calculating $ C(x,r) = h^r g^x $, which is called the Pedersen Commitment.

 The generator $ h $ and resulting number $ g $ are known as the commitment bases, and should be shared along with $ C(x,r) $ with whoever wishes to open the value.


 Pedersen Commitments are also additively homomorphic, such that for messages $ x_0 $ and $ x_1 $ and blinding factors $ r_0 $ and $ r_1 $ we have $ C(x_0,r_0) \cdot C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $.

 Elliptic Curve Pedersen Commitment: An efficient implementation of the Pedersen Commitment^{def} will use secure Elliptic Curve Cryptography (ECC), which is based on the algebraic structure of elliptic curves over finite (prime) fields. Elliptic curve points are used as basic mathematical objects, instead of numbers. Note that traditionally in elliptic curve arithmetic lower case letters are used for ordinary numbers (integers) and upper case letters for curve points. ([26], [27], [28])

The generalized Elliptic Curve Pedersen Commitment definition follows (refer to Notations Used):

Let $ \mathbb F_p $ be the group of elliptic curve points, where $ p $ is a large prime.

 Let $ G \in \mathbb F_p $ be a random generator point (base point) and let $ H \in \mathbb F_p $ be specially chosen so that the value $ x_H $ satisfying $ H = x_H G $ cannot be found, except if the Elliptic Curve DLP (ECDLP) is solved.

Let $ r $ (the blinding factor) be a random value and element of $ \mathbb Z_p $.

The commitment to value $ x \in \mathbb Z_p $ is then determined by calculating $ C(x,r) = rH + xG $, which is called the Elliptic Curve Pedersen Commitment.


 Elliptic curve point addition is analogous to multiplication in the originally defined Pedersen Commitment^{def}. Thus $ g^x $, the number $ g $ multiplied by itself $ x $ times, is analogous to $ xG $, the elliptic curve point $ G $ added to itself $ x $ times. In this context $ xG $ is also a point in $ \mathbb F_p $.

In the Elliptic Curve context $ C(x,r) = rH + xG $ is then analogous to $ C(x,r) = h^r g^x $.

The point $ H $ is what is known as a Nothing Up My Sleeve (NUMS) point. With secp256k1, the value of $ H $ is derived from the SHA256 hash of a simple encoding of the prespecified generator point $ G $.

Similar to Pedersen Commitments, the Elliptic Curve Pedersen Commitments are also additively homomorphic, such that for messages $ x $, $ x_0 $ and $ x_1 $, blinding factors $ r $, $ r_0 $ and $ r_1 $ and scalar $ k $ the following relations hold: $ C(x_0,r_0) + C(x_1,r_1) = C(x_0+x_1,r_0+r_1) $ and $ C(k \cdot x, k \cdot r) = k \cdot C(x, r) $.

In secure implementations of ECC it is as hard to guess $ x $ from $ xG $ as it is to guess $ x $ from $g^x $. This is called the Elliptic Curve DLP (ECDLP).

Practical implementations usually consist of three algorithms:
- `Setup()` to set up the commitment parameters;
- `Commit()` to commit to the message using the commitment parameters; and
- `Open()` to open and verify the commitment.
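A minimal sketch of this three-algorithm structure is shown below, using the toy multiplicative-group form $ C(x,r) = h^r g^x $ of the commitment defined earlier. All parameter sizes are illustrative, not secure:

```python
# Sketch of the Setup / Commit / Open triple for a Pedersen scheme.
# Toy parameters only: p = 2q + 1 with p, q prime, and g, h are arbitrary
# members of the order-q subgroup (a real setup derives h so its discrete
# log relative to g is unknown).
import secrets

def setup():
    """Fix the public commitment parameters (group and bases g, h)."""
    p, q = 1019, 509
    g, h = 4, 9
    return (p, q, g, h)

def commit(params, x):
    """Commit to x with a fresh random blinding factor r; returns (C, r)."""
    p, q, g, h = params
    r = secrets.randbelow(q)
    C = (pow(h, r, p) * pow(g, x, p)) % p
    return C, r

def open_commitment(params, C, x, r):
    """Verify that (x, r) opens the commitment C."""
    p, q, g, h = params
    return C == (pow(h, r, p) * pow(g, x, p)) % p

params = setup()
C, r = commit(params, 42)
assert open_commitment(params, C, 42, r)        # correct opening verifies
assert not open_commitment(params, C, 43, r)    # wrong value is rejected
```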

Security aspects of (Elliptic Curve) Pedersen Commitments
By virtue of their definition Pedersen Commitments are only computationally binding but perfectly hiding. A simplified explanation follows.
If Alice wants to commit to a value $ x $ that will be sent to Bob, who at a later stage will want Alice to prove that $ x $ is the value that was used to generate the commitment $ C $, then Alice selects a random blinding factor $ r $, calculates $ C(x,r) = h^r g^x $ and sends it to Bob. Later, Alice can reveal $ x $ and $ r $, and Bob can redo the calculation and see that the commitment $ C $ it produces is the same as the one Alice sent earlier.
Suppose Alice or Bob has managed to build a computer that can solve the DLP and, given a commitment, could find in reasonable time alternative openings of $ C(x,r) = h^r g^x $, that is, values $ r^\prime $ and $ x^\prime $ such that $ C(x,r) = h^{r^\prime} g^{x^\prime} $. Even so, Bob cannot know whether any such pair is the specific $ x $ and $ r $ that Alice chose, because there are many pairs that produce the same $ C $. Pedersen Commitments are thus perfectly hiding.
Although the Pedersen Commitment is perfectly hiding, it does rely on the fact that Alice has NOT cracked the DLP and so cannot calculate other pairs of input values that open the commitment to another value when challenged. The Pedersen Commitment is thus only computationally binding.
Bulletproof Protocols
Bulletproof protocols have multiple applications, most of which are discussed in Bulletproofs and Mimblewimble. The list below links these use cases to the different Bulletproof protocols:
- Bulletproofs provide short non-interactive zero-knowledge proofs without a trusted setup. The small size of Bulletproofs reduces overall cost. This has applicability in distributed systems where proofs are transmitted over a network or stored for a long time.
- Range Proof Protocol with Logarithmic Size provides short single and aggregatable range proofs, and can be used with multiple blockchain protocols, including Mimblewimble. These can be applied to normal transactions or to some smart contract where committed values need to be proven to be in a specific range without revealing the values.
- The protocol presented in Aggregating Logarithmic Proofs can be used for Merkle proofs and proofs of solvency without revealing any additional information.
- Range proofs can be compiled for a multi-party single joined confidential transaction for their known outputs using the MPC Protocol for Bulletproofs. Users do not have to reveal their secret transaction values.
- Verifiable shuffles and multi-signatures with deterministic nonces can be implemented with Protocol 3.
- Bulletproofs present an efficient and short zero-knowledge proof for arbitrary Arithmetic Circuits^{def} using Zero-Knowledge Proof for Arithmetic Circuits.
- Various Bulletproof protocols can be applied to scriptless scripts, to make them non-interactive and to avoid having to use Sigma protocols.
- Batch verifications can be done using Optimized Verifier using Multi-Exponentiation and Batch Verification; for example, a blockchain full node receiving a block of transactions needs to verify all transactions as well as their range proofs.
A detailed mathematical discussion of the different Bulletproof protocols follows. Protocols 1, 2 and 3 are numbered consistently with [1], whereas the rest of the protocols are numbered to fit chronologically, marked with an exclamation mark ("!") to differentiate them. Refer to Notations Used.
Note: Full mathematical definitions and terms not defined are available in [1].
Inner-product Argument (Protocol 1)
The first and most important building block of Bulletproofs is an efficient algorithm to calculate an inner-product argument for two independent (not related) binding vector Pedersen Commitments^{def}. Protocol 1 is an argument of knowledge that the prover $ \mathcal{P} $ knows the openings of two binding Pedersen vector commitments that satisfy a given inner-product relation. Let the inputs to the inner-product argument be independent generators $ \mathbf g, \mathbf h \in \mathbb G^n $, a scalar $ c \in \mathbb Z_p $ and $ P \in \mathbb G $. The argument lets the prover $ \mathcal{P} $ convince a verifier $ \mathcal{V} $ that the prover $ \mathcal{P} $ knows two vectors $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ such that $$ P =\mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \mspace{30mu} \mathrm{and} \mspace{30mu} c = \langle \mathbf {a} \mspace{3mu}, \mspace{3mu} \mathbf {b} \rangle $$
$ P $ is referred to as the binding vector commitment to $ \mathbf a, \mathbf b $. The inner product argument is an efficient proof system for the following relation:
$$ { (\mathbf {g},\mathbf {h} \in \mathbb G^n , \mspace{12mu} P \in \mathbb G , \mspace{12mu} c \in \mathbb Z_p ; \mspace{12mu} \mathbf {a}, \mathbf {b} \in \mathbb Z^n_p ) \mspace{3mu} : \mspace{15mu} P = \mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \mspace{3mu} \wedge \mspace{3mu} c = \langle \mathbf {a} \mspace{3mu}, \mspace{3mu} \mathbf {b} \rangle } \mspace{100mu} (1) $$
Relation (1) requires sending $ 2n $ elements to the verifier $ \mathcal{V} $. The inner product $ c = \langle \mathbf {a} \mspace{3mu} , \mspace{3mu} \mathbf {b} \rangle \ $ can be made part of the vector commitment $ P \in \mathbb G $. This will enable sending only $ 2 \log_2 (n) $ elements to the verifier $ \mathcal{V} $ (explained in the next section). For a given $ P \in \mathbb G $ the prover $ \mathcal{P} $ proves that it has vectors $ \mathbf {a}, \mathbf {b} \in \mathbb Z^n_p $ for which $ P =\mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \cdot u^{ \langle \mathbf {a}, \mathbf {b} \rangle } $. Here $ u \in \mathbb G $ is a fixed group element with an unknown discrete log relative to $ \mathbf {g},\mathbf {h} \in \mathbb G^n $. $$ { (\mathbf {g},\mathbf {h} \in \mathbb G^n , \mspace{12mu} u,P \in \mathbb G ; \mspace{12mu} \mathbf {a}, \mathbf {b} \in \mathbb Z^n_p ) : \mspace{15mu} P = \mathbf{g}^\mathbf{a}\mathbf{h}^\mathbf{b} \cdot u^{ \langle \mathbf {a}, \mathbf {b} \rangle } } \mspace{100mu} (2) $$
A proof system for relation (2) gives a proof system for (1) with the same complexity, thus only a proof system for relation (2) is required.
Protocol 1 is then defined as the proof system for relation (2) as shown in Figure 1. The element $ u $ is raised to a random power $ x $ (the challenge) chosen by the verifier $ \mathcal{V} $ to ensure that the extracted vectors $ \mathbf {a}, \mathbf {b} $ from Protocol 2 satisfy $ c = \langle \mathbf {a} , \mathbf {b} \rangle $.
The argument presented in Protocol 1 has the following Commitment Scheme properties:
 Perfect completeness (hiding): Every validity/truth is provable, also see Definition 9 in [1];
 Statistical witness extended emulation (binding): Robust against either extracting a nontrivial discrete logarithm relation between $ \mathbf {g} , \mathbf {h} , u $ or extracting a valid witness $ \mathbf {a}, \mathbf {b} $.
How the Proof System for Protocol 1 Works, Shrinking by Recursion
Protocol 1 uses an inner product argument of two vectors $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ of size $ n $. The Pedersen Commitment scheme allows a vector to be cut in half and then to compress the two halves together. Let $ \mathrm H : \mathbb Z^{2n+1}_p \to \mathbb G $ be a hash function for commitment $ P $, with $ P = \mathrm H(\mathbf a , \mathbf b, \langle \mathbf a, \mathbf b \rangle) $. Note that commitment $ P $ and thus $ \mathrm H $ is additively homomorphic, therefore sliced vectors of $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ can be hashed together with inner product $ c = \langle \mathbf a , \mathbf b \rangle \in \mathbb Z_p$. If $ n ^\prime = n/2 $, starting with relation (2), then
$$ \begin{aligned} \mathrm H(\mathbf a \mspace{3mu} , \mspace{3mu} \mathbf b \mspace{3mu} , \mspace{3mu} \langle \mathbf a , \mathbf b \rangle) &= \mathbf{g} ^\mathbf{a} \mathbf{h} ^\mathbf{b} \cdot u^{ \langle \mathbf a, \mathbf b \rangle} \mspace{20mu} \in \mathbb G \\ \mathrm H(\mathbf a_{[: n ^\prime]} \mspace{3mu} , \mspace{3mu} \mathbf a_{[n ^\prime :]} \mspace{3mu} , \mspace{3mu} \mathbf b_{[: n ^\prime]} \mspace{3mu} , \mspace{3mu} \mathbf b_{[n ^\prime :]} \mspace{3mu} , \mspace{3mu} \langle \mathbf {a}, \mathbf {b} \rangle) &= \mathbf g ^ {\mathbf a_{[: n ^\prime]}} _{[: n ^\prime]} \cdot \mathbf g ^ {\mathbf a_{[n ^\prime :]}} _{[n ^\prime :]} \cdot \mathbf h ^ {\mathbf b_{[: n ^\prime]}} _{[: n ^\prime]} \cdot \mathbf h ^ {\mathbf b_{[n ^\prime :]}} _{[n ^\prime :]} \cdot u^{\langle \mathbf {a}, \mathbf {b} \rangle} \mspace{20mu} \in \mathbb G \end{aligned} $$
The commitment $ P $ can then be sliced, introducing the cross terms $ L $ and $ R $, as follows:
$$ \begin{aligned} P &= \mathrm H(\mspace{3mu} \mathbf a_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \mathbf a_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} \mathbf b_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \mathbf b_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} \langle \mathbf {a}, \mathbf {b} \rangle \mspace{49mu}) \mspace{20mu} \in \mathbb G \\ L &= \mathrm H(\mspace{3mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} \mathbf a_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \mathbf b_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} \langle \mathbf {a_{[: n ^\prime]}} , \mathbf {b_{[n ^\prime :]}} \rangle \mspace{3mu}) \mspace{20mu} \in \mathbb G \\ R &= \mathrm H(\mspace{3mu} \mathbf a_{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} 0 ^ {n ^\prime} \mspace{18mu} , \mspace{6mu} \mathbf b_{[: n ^\prime]} \mspace{6mu} , \mspace{6mu} \langle \mathbf {a_{[n ^\prime :]}} , \mathbf {b_{[: n ^\prime]}} \rangle \mspace{3mu}) \mspace{20mu} \in \mathbb G \end{aligned} $$
The first reduction step is shown below:
- The prover $ \mathcal{P} $ calculates $ L,R \in \mathbb G $ and sends them to the verifier $ \mathcal{V} $.
- The verifier $ \mathcal{V} $ chooses a random $ x \overset{\$}{\gets} \mathbb Z _p $ and sends it to the prover $ \mathcal{P} $.
- The prover $ \mathcal{P} $ calculates $ \mathbf a^\prime , \mathbf b^\prime \in \mathbb Z^{n^\prime}_p $ and sends them to the verifier $ \mathcal{V} $:
$$ \begin{aligned} \mathbf a ^\prime &= x\mathbf a _{[: n ^\prime]} + x^{-1} \mathbf a _{[n ^\prime :]} \in \mathbb Z^{n^\prime}_p \\ \mathbf b ^\prime &= x^{-1}\mathbf b _{[: n ^\prime]} + x \mathbf b _{[n ^\prime :]} \in \mathbb Z^{n^\prime}_p \end{aligned} $$
- The verifier $ \mathcal{V} $ calculates $ P^\prime = L^{x^2} \cdot P \cdot R^{x^{-2}} $ and accepts (verify true) if
$$ P^\prime = \mathrm H ( x^{-1} \mathbf a^\prime \mspace{3mu} , \mspace{3mu} x \mathbf a^\prime \mspace{3mu} , \mspace{3mu} x \mathbf b^\prime \mspace{3mu} , \mspace{3mu} x^{-1} \mathbf b^\prime \mspace{3mu} , \mspace{3mu} \langle \mathbf a^\prime , \mathbf b^\prime \rangle ) \mspace{100mu} (3) $$
So far, the prover $ \mathcal{P} $ has only sent $ n + 2 $ elements to the verifier $ \mathcal{V} $, that is the four-tuple $ ( L , R , \mathbf a^\prime , \mathbf b^\prime ) $, about half the length compared to sending the complete $ \mathbf a, \mathbf b \in \mathbb Z^n_p $. The test in relation (3) is the same as testing that $$ P^\prime = (\mathbf g ^ {x^{-1}} _{[: n ^\prime]} \circ \mathbf g ^ x _{[n ^\prime :]})^{\mathbf a^\prime} \cdot (\mathbf h ^ x _{[: n ^\prime]} \circ \mathbf h ^ {x^{-1}} _{[n ^\prime :]})^{\mathbf b^\prime} \cdot u^{\langle \mathbf a^\prime , \mathbf b^\prime \rangle} \mspace{100mu} (4) $$ Thus, the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ can recursively engage in an inner-product argument for $ P^\prime $ with respect to generators $$ (\mathbf g ^ {x^{-1}} _{[: n ^\prime]} \circ \mathbf g ^ x _{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} \mathbf h ^ x _{[: n ^\prime]} \circ \mathbf h ^ {x^{-1}} _{[n ^\prime :]} \mspace{6mu} , \mspace{6mu} u ) $$ which will result in a $ \log _2 n $ round protocol with $ 2 \log _2 n $ elements in $ \mathbb G $ and $ 2 $ elements in $ \mathbb Z _p $. The prover $ \mathcal{P} $ ends up sending the following terms to the verifier $ \mathcal{V} $: $$ (L_1 , R_1) \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} (L_{\log _2 n} , R _{\log _2 n}) \mspace{3mu} , \mspace{3mu} (a , b) $$ where $ a,b \in \mathbb Z _p $ are only sent right at the end. This protocol can be made non-interactive using the Fiat-Shamir^{def} heuristic.
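The folding step can be checked numerically. The sketch below works with scalars only, modulo an arbitrarily chosen toy prime (the group-element bookkeeping for $ P $, $ L $ and $ R $ is omitted), and verifies the cross-term identity behind $ P^\prime = L^{x^2} \cdot P \cdot R^{x^{-2}} $, namely $ \langle \mathbf a^\prime , \mathbf b^\prime \rangle = x^2 \langle \mathbf a_{[:n^\prime]} , \mathbf b_{[n^\prime:]} \rangle + \langle \mathbf a , \mathbf b \rangle + x^{-2} \langle \mathbf a_{[n^\prime:]} , \mathbf b_{[:n^\prime]} \rangle $:

```python
# One folding round of the inner-product argument over Z_p (scalars only).
# Checks that the folded vectors a', b' preserve the inner product up to the
# two cross terms committed in L and R.
import secrets

p = 2**61 - 1                                    # a prime modulus (toy choice)

def inner(u, v):
    """Inner product <u, v> mod p."""
    return sum(ui * vi for ui, vi in zip(u, v)) % p

def fold(a, b, x):
    """Halve a and b with challenge x: a' = x*a_lo + x^-1*a_hi, b' = x^-1*b_lo + x*b_hi."""
    n2 = len(a) // 2
    xinv = pow(x, -1, p)
    a_p = [(x * a[i] + xinv * a[n2 + i]) % p for i in range(n2)]
    b_p = [(xinv * b[i] + x * b[n2 + i]) % p for i in range(n2)]
    return a_p, b_p

n = 8
a = [secrets.randbelow(p) for _ in range(n)]
b = [secrets.randbelow(p) for _ in range(n)]
x = secrets.randbelow(p - 1) + 1                 # nonzero challenge
n2 = n // 2

a_p, b_p = fold(a, b, x)
cL = inner(a[:n2], b[n2:])                       # cross term committed in L
cR = inner(a[n2:], b[:n2])                       # cross term committed in R
expected = (pow(x, 2, p) * cL + inner(a, b) + pow(x, -2, p) * cR) % p
assert inner(a_p, b_p) == expected
```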
Inner-Product Verification through Multi-Exponentiation (Protocol 2)
The inner product argument to be calculated is that of two vectors $ \mathbf a, \mathbf b \in \mathbb Z^n_p $ of size $ n $. Protocol 2 has a logarithmic number of rounds, and in each round the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ calculate a new set of generators $ ( \mathbf g ^\prime , \mathbf h ^\prime ) $, which would require a total of $ 4n $ computationally expensive exponentiations. Multi-exponentiation is a technique to reduce the number of exponentiations for a given calculation. In Protocol 2 the number of exponentiations is reduced to a single multi-exponentiation by delaying all the exponentiations until the last round. It can also be made non-interactive using the Fiat-Shamir^{def} heuristic, providing a further speed-up.
Let $ g $ and $ h $ be the generators used in the final round of the protocol and $ x_j $ be the challenge from the $ j $th round. In the last round the verifier $ \mathcal{V} $ checks that $ g^a h^b u ^{a \cdot b} = P $, where $ a,b \in \mathbb Z_p $ are given by the prover $ \mathcal{P} $. The final $ g $ and $ h $ can be expressed in terms of the input generators $ \mathbf {g},\mathbf {h} \in \mathbb G^n $ as: $$ g = \prod _{i=1}^n g_i^{s_i} \in \mathbb{G}, \mspace{21mu} h=\prod _{i=1}^n h_i^{1/s_i} \in \mathbb{G} $$
where $ \mathbf {s} = (s_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} s_n) \in \mathbb Z_p^n $ only depends on the challenges $ (x_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} x_{\log_2(n)}) \in \mathbb Z_p^n $. The scalars of $ \mathbf {s} $ are calculated as follows: $$ s_i = \prod ^{\log _2 (n)} _{j=1} x ^{b(i,j)} _j \mspace{15mu} \mathrm {for} \mspace{15mu} i = 1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} n $$ where $$ b(i,j) = \begin{cases} \mspace{12mu} 1 \mspace{18mu} \text{if the} \mspace{4mu} j \text{th bit of} \mspace{4mu} i-1 \mspace{4mu} \text{is} \mspace{4mu} 1 \\ -1 \mspace{15mu} \text{otherwise} \end{cases} $$
The entire verification check in the protocol reduces to a single multi-exponentiation of size $ 2n + 2 \log_2(n) + 1 $: $$ \mathbf g^{a \cdot \mathbf{s}} \cdot \mathbf h^{b \cdot\mathbf{s^{-1}}} \cdot u^{a \cdot b} \mspace{12mu} \overset{?}{=} \mspace{12mu} P \cdot \prod _{j=1}^{\log_2(n)} L_j^{x_j^2} \cdot R_j^{x_j^{-2}} \mspace{100mu} (5) $$
Protocol 2 is shown in Figure 2.
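As a sanity check on these scalars, the sketch below models the generators by their discrete logs (known here purely for testing) and confirms that $ \log_2(n) $ rounds of generator folding produce the same result as one direct multi-exponentiation with the $ s_i $ values. The modulus is a toy choice:

```python
# Check that the multi-exponentiation scalars s_i reproduce the recursive
# generator folding of Protocol 2. Since g'_j = g_j^(x^-1) * g_(j+n')^x acts
# linearly on exponents, the generators are modelled by discrete logs d_i.
import secrets

p = 2**61 - 1                                    # toy prime modulus

def fold_dlogs(d, x):
    """One folding round applied to the exponents of the generator vector."""
    n2 = len(d) // 2
    xinv = pow(x, -1, p)
    return [(xinv * d[i] + x * d[n2 + i]) % p for i in range(n2)]

def s_vector(challenges, n):
    """s_i = prod_j x_j^{b(i,j)}; b(i,j) = +1 iff bit j of i-1 is 1 (j = 1 is the MSB)."""
    logn = len(challenges)
    s = []
    for i in range(n):                           # i here plays the role of i-1
        acc = 1
        for j, x in enumerate(challenges):       # j = 0 is the first round
            bit = (i >> (logn - 1 - j)) & 1
            acc = acc * (x if bit else pow(x, -1, p)) % p
        s.append(acc)
    return s

n = 8
d = [secrets.randbelow(p) for _ in range(n)]
xs = [secrets.randbelow(p - 1) + 1 for _ in range(3)]   # log2(8) challenges

folded = d
for x in xs:
    folded = fold_dlogs(folded, x)               # three rounds: 8 -> 4 -> 2 -> 1

s = s_vector(xs, n)
direct = sum(di * si for di, si in zip(d, s)) % p
assert folded[0] == direct                       # single multi-exponentiation matches
```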
Range Proof Protocol with Logarithmic Size
This protocol provides short and aggregatable range proofs, using the improved inner-product argument from Protocol 1. It is built up in five parts:
- how to construct a range proof that requires the verifier $ \mathcal{V} $ to check an inner product between two vectors;
- how to replace the inner product argument with an efficient inner-product argument;
- how to efficiently aggregate $ m $ range proofs into one short proof;
- how to make interactive public-coin protocols non-interactive, using the Fiat-Shamir^{def} heuristic; and
- how to allow multiple parties to construct a single aggregate range proof.
A diagrammatic overview of a range proof protocol implementation using Elliptic Curve Pedersen Commitments^{def} is given in Figure 3.
Inner-Product Range Proof
This protocol provides the ability to construct a range proof that requires the verifier $ \mathcal{V} $ to check an inner product between two vectors. The range proof is constructed by exploiting the fact that a Pedersen Commitment $ V $ is an element in the same group $ \mathbb G $ that is used to perform the inner product argument. Let $ v \in \mathbb Z_p $ and let $ V \in \mathbb G $ be a Pedersen Commitment to $ v $ using randomness $ \gamma $. The proof system will convince the verifier $ \mathcal{V} $ that commitment $ V $ contains a number $ v \in [0,2^n  1] $ such that
$$ { (g,h \in \mathbb{G} \mspace{3mu} , \mspace{3mu} V \in \mathbb{G} \mspace{3mu} , \mspace{3mu} n \mspace{3mu} ; \mspace{12mu} v, \gamma \in \mathbb{Z_p} ) \mspace{3mu} : \mspace{3mu} V =h^\gamma g^v \mspace{5mu} \wedge \mspace{5mu} v \in [0,2^n - 1] } $$
without revealing $ v $. Let $ \mathbf {a}_L = (a_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} a_n) \in {0,1}^n $ be the vector containing the bits of $ v, $ so that $ \langle \mathbf {a}_L, \mathbf {2}^n \rangle = v $. The prover $ \mathcal{P} $ commits to $ \mathbf {a}_L $ using a constant size vector commitment $ A \in \mathbb{G} $. It will convince the verifier $ \mathcal{V} $ that $ v $ is in $ [0,2^n  1] $ by proving that it knows an opening $ \mathbf {a}_L \in \mathbb Z_p^n $ of $ A $ and $ v, \gamma \in \mathbb{Z_p} $ such that $ V =h^\gamma g^v $ and
$$ \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {2}^n \rangle = v \mspace{20mu} \mathrm{and} \mspace{20mu} \mathbf {a}_R = \mathbf {a}_L - \mathbf {1}^n \mspace{20mu} \mathrm{and} \mspace{20mu} \mathbf {a}_L \circ \mathbf {a}_R = \mathbf{0}^n \mspace{100mu} (6) $$
This proves that $ a_1 \mspace{3mu} , \mspace{3mu} ... \mspace{3mu} , \mspace{3mu} a_n $ are all in $ {0,1} $ and that $ \mathbf {a}_L $ is composed of the bits of $ v $. However, the $ 2n + 1 $ constraints need to be expressed as a single inner-product constraint so that Protocol 1 can be used, by letting the verifier $ \mathcal{V} $ choose a random linear combination of the constraints. To prove that a committed vector $ \mathbf {b} \in \mathbb Z_p^n $ satisfies $ \mathbf {b} = \mathbf{0}^n $, it suffices for the verifier $ \mathcal{V} $ to send a random $ y \in \mathbb{Z_p} $ to the prover $ \mathcal{P} $ and for the prover $ \mathcal{P} $ to prove that $ \langle \mathbf {b}, \mathbf {y}^n \rangle = 0 $, which will convince the verifier $ \mathcal{V} $ that $ \mathbf {b} = \mathbf{0}^n $.
$$ \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {2}^n \rangle = v \mspace{20mu} \mathrm{and} \mspace{20mu} \langle \mathbf {a}_L - \mathbf{1}^n - \mathbf {a}_R \mspace{3mu} , \mspace{3mu} \mathbf {y}^n \rangle=0 \mspace{20mu} \mathrm{and} \mspace{20mu} \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {a}_R \circ \mathbf {y}^n \rangle = 0 $$
Building on this, the verifier $ \mathcal{V} $ chooses a random $ z \in \mathbb{Z_p} $ and lets the prover $ \mathcal{P} $ prove that
$$ z^2 \cdot \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {2}^n \rangle + z \cdot \langle \mathbf {a}_L - \mathbf{1}^n - \mathbf {a}_R \mspace{3mu} , \mspace{3mu} \mathbf {y}^n \rangle + \langle \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {a}_R \circ \mathbf {y}^n \rangle = z^2 \cdot v \mspace{100mu} (7) $$
Relation (7) can be rewritten as
$$ \langle \mathbf {a}_L  z \cdot \mathbf {1}^n \mspace{3mu} , \mspace{3mu} \mathbf {y}^n \circ (\mathbf {a}_R + z \cdot \mathbf {1}^n) +z^2 \cdot \mathbf {2}^n \rangle = z^2 \cdot v + \delta (y,z) \mspace{100mu} (8) $$
where
$$ \delta (y,z) = (z-z^2) \cdot \langle \mathbf {1}^n \mspace{3mu} , \mspace{3mu} \mathbf {y}^n\rangle - z^3 \cdot \langle \mathbf {1}^n \mspace{3mu} , \mspace{3mu} \mathbf {2}^n\rangle \in \mathbb{Z_p} $$
can be easily calculated by the verifier $ \mathcal{V} $. The proof that relation (6) holds has thus been reduced to a single inner-product identity.
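The collapse of the three constraints in relation (7) can be checked numerically. The sketch below, with arbitrarily chosen toy parameters, builds a valid bit decomposition $ \mathbf a_L $ of $ v $ with $ \mathbf a_R = \mathbf a_L - \mathbf 1^n $ and confirms that the random linear combination reduces to $ z^2 \cdot v $ for any challenges $ y, z $:

```python
# Numeric check of relation (7): for a valid bit decomposition a_L of v
# (with a_R = a_L - 1^n), the combined constraint equals z^2 * v mod p.
import secrets

p = 2**61 - 1                                    # toy prime modulus
n = 8                                            # range [0, 2^n - 1]

def inner(u, v):
    """Inner product <u, v> mod p."""
    return sum(ui * vi for ui, vi in zip(u, v)) % p

v = secrets.randbelow(2**n)
aL = [(v >> i) & 1 for i in range(n)]            # bits of v (little-endian)
aR = [(ai - 1) % p for ai in aL]                 # a_R = a_L - 1^n
two_n = [pow(2, i, p) for i in range(n)]         # the vector 2^n
assert inner(aL, two_n) == v                     # <a_L, 2^n> = v

y = secrets.randbelow(p - 1) + 1
z = secrets.randbelow(p - 1) + 1
y_n = [pow(y, i, p) for i in range(n)]           # the vector y^n

lhs = (pow(z, 2, p) * inner(aL, two_n)
       + z * inner([(l - 1 - r) % p for l, r in zip(aL, aR)], y_n)
       + inner(aL, [r * yi % p for r, yi in zip(aR, y_n)])) % p
assert lhs == pow(z, 2, p) * v % p               # relation (7)
```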
Relation (8) cannot be used in its current form without revealing information about $ \mathbf {a}_L $. Two additional blinding vectors $ \mathbf {s}_L , \mathbf {s}_R \in \mathbb Z_p^n $ are introduced, with the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ engaging in the following zero-knowledge protocol (Figure 4):
Two linear vector polynomials $ l(X), r(X) \in \mathbb Z^n_p[X] $ are defined as the inner-product terms for relation (8), also containing the blinding vectors $ \mathbf {s}_L $ and $ \mathbf {s}_R $. A quadratic polynomial $ t(X) \in \mathbb Z_p[X] $ is then defined as the inner product between the two vector polynomials $ l(X), r(X) $ such that
$$ t(X) = \langle l(X) \mspace{3mu} , \mspace{3mu} r(X) \rangle = t_0 + t_1 \cdot X + t_2 \cdot X^2 \mspace{10mu} \in \mathbb {Z}_p[X] $$
The blinding vectors $ \mathbf {s}_L $ and $ \mathbf {s}_R $ ensure that the prover $ \mathcal{P} $ can publish $ l(x) $ and $ r(x) $ for one $ x \in \mathbb Z_p^* $ without revealing any information about $ \mathbf {a}_L $ and $ \mathbf {a}_R $. The constant term $ t_0 $ of the quadratic polynomial $ t(X) $ is then the result of the inner product in relation (8), and the prover $ \mathcal{P} $ needs to convince the verifier $ \mathcal{V} $ that
$$ t_0 = z^2 \cdot v + \delta (y,z) $$
In order to do so, the prover $ \mathcal{P} $ convinces the verifier $ \mathcal{V} $ that it has a commitment to the remaining coefficients of $ t(X) $, namely $ t_1,t_2 \in \mathbb Z_p $ by checking the value of $ t(X) $ at a random point $ x \in \mathbb Z_p^* $. This is illustrated in Figure 5.
The verifier $ \mathcal{V} $ now needs to check that $ l $ and $ r $ are in fact $ l(x) $ and $ r(x) $, and that $ t(x) = \langle l \mspace{3mu} , \mspace{3mu} r \rangle $. A commitment for $ \mathbf {a}_R \circ \mathbf {y}^n $ is needed, and to do so the commitment generators are switched from $ \mathbf h \in \mathbb G^n $ to $ \mathbf h ^\backprime = \mathbf h^{(\mathbf {y}^{-1})}$. Thus $ A $ and $ S $ now become vector commitments to $ ( \mathbf {a}_L \mspace{3mu} , \mspace{3mu} \mathbf {a}_R \circ \mathbf {y}^n ) $ and $ ( \mathbf {s}_L \mspace{3mu} , \mspace{3mu} \mathbf {s}_R \circ \mathbf {y}^n ) $ respectively, with respect to the new generators $ (g, \mathbf h ^\backprime, h) $. This is illustrated in Figure 6.
The range proof presented here has the following Commitment Scheme properties:
 Perfect completeness (hiding): Every validity/truth is provable, also see Definition 9 in [1];
 Perfect special honest verifier zeroknowledge: The verifier $ \mathcal{V} $ behaves according to the protocol, also see Definition 12 in [1];
 Computational witness extended emulation (binding): A witness can be computed in time closely related to time spent by the prover $ \mathcal{P} $, also see Definition 10 in [1].
Logarithmic Range Proof
This protocol replaces the inner product argument with an efficient inner-product argument. In step (63) of Figure 5 the prover $ \mathcal{P} $ transmits $ \mathbf {l} $ and $ \mathbf {r} $ to the verifier $ \mathcal{V} $, but their size is linear in $ n $. To make this efficient, a proof size that is logarithmic in $ n $ is needed. The transfer of $ \mathbf {l} $ and $ \mathbf {r} $ can be eliminated with an inner-product argument. Checking correctness of $ \mathbf {l} $ and $ \mathbf {r} $ (step (67) of Figure 6) and $ \hat {t} $ (step (68) of Figure 6) is the same as verifying that the witness $ \mathbf {l} , \mathbf {r} $ satisfies the inner product of relation (2) on public input $ (\mathbf {g} , \mathbf {h} ^ \backprime , P \cdot h^{-\mu}, \hat t) $. Transmission of the vectors $ \mathbf {l} $ and $ \mathbf {r} $ to the verifier $ \mathcal{V} $ (step (63) of Figure 5) can then be eliminated and the transfer of information limited to the scalars $ ( \tau _x , \mu , \hat t ) $ alone, thereby achieving a proof size that is logarithmic in $ n $.
Aggregating Logarithmic Proofs
This protocol efficiently aggregates $ m $ range proofs into one short proof, with a slight modification to the protocol presented in Inner-Product Range Proof. For aggregate range proofs, the inputs of one range proof do not affect the output of another range proof. Aggregating logarithmic range proofs is especially helpful if a single prover $ \mathcal{P} $ needs to perform multiple range proofs at the same time.
A proof system must be presented for the following relation: $$ { (g,h \in \mathbb{G} \mspace{3mu} , \mspace{9mu} \mathbf {V} \in \mathbb{G}^m \mspace{3mu} ; \mspace{9mu} \mathbf {v}, \mathbf {\gamma} \in \mathbb Z_p^m ) \mspace{6mu} : \mspace{6mu} V_j =h^{\gamma_j} g^{v_j} \mspace{6mu} \wedge \mspace{6mu} v_j \in [0,2^n - 1] \mspace{15mu} \forall \mspace{15mu} j \in [1,m] } \mspace{100mu} (9) $$ The prover $ \mathcal{P} $ should now compute $ \mspace{3mu} \mathbf a_L \in \mathbb Z_p^{n \cdot m} $ as the concatenation of all of the bits for every $ v_j $ such that $$ \langle \mathbf{2}^n \mspace{3mu} , \mspace{3mu} \mathbf a_L[(j-1) \cdot n : j \cdot n-1] \rangle = v_j \mspace{9mu} \forall \mspace{9mu} j \in [1,m] \mspace{3mu} $$ The quantity $ \delta (y,z) $ is adjusted to incorporate the $ n \cdot m $ cross terms, the linear vector polynomials $ l(X), r(X) $ are adjusted to be in $ \mathbb Z^{n \cdot m}_p[X] $ and the blinding factor $ \tau_x $ for the inner product $ \hat{t} $ (step (61) of Figure 5) is adjusted for the randomness of each commitment $ V_j $. The verification check (step (65) of Figure 6) is updated to include all $ V_j $ commitments, and the definition of $ P $ (step (66) of Figure 6) is changed to be a commitment to the new $ r $.
This aggregated range proof, which makes use of the inner-product argument, only uses $ 2 \cdot [ \log _2 (n \cdot m)] + 4 $ group elements and $ 5 $ elements in $ \mathbb Z_p $. The growth in size is limited to an additive term $ 2 \cdot [ \log _2 (m)] $, as opposed to a multiplicative factor $ m $ for $ m $ independent range proofs.
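The size saving can be reproduced with a short calculation, using the element counts quoted above ($ 2 \cdot [\log_2(n \cdot m)] + 4 $ group elements plus $ 5 $ field elements for an aggregated proof, versus $ m $ separate proofs of $ 2 \cdot [\log_2(n)] + 4 $ plus $ 5 $ each):

```python
# Compare the total element count of one aggregated proof for m range proofs
# against m independent proofs, using the counts quoted in the text.
import math

def aggregated_size(n: int, m: int) -> int:
    """Group + field elements of a single aggregated proof over m values."""
    return 2 * math.ceil(math.log2(n * m)) + 4 + 5

def independent_size(n: int, m: int) -> int:
    """Group + field elements of m separate range proofs."""
    return m * (2 * math.ceil(math.log2(n)) + 4 + 5)

# Sixteen aggregated 64-bit range proofs stay far smaller than sixteen
# separate ones (logarithmic vs linear growth in m):
assert aggregated_size(64, 16) < independent_size(64, 16)
```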
The aggregate range proof presented here has the following Commitment Scheme properties:
 Perfect completeness (hiding): Every validity/truth is provable, also see Definition 9 in [1];
 Perfect special honest verifier zeroknowledge: The verifier $ \mathcal{V} $ behaves according to the protocol, also see Definition 12 in [1];
 Computational witness extended emulation (binding): A witness can be computed in time closely related to time spent by the prover $ \mathcal{P} $, also see Definition 10 in [1].
Non-Interactive Proof through Fiat-Shamir Heuristic
So far the verifier $ \mathcal{V} $ behaves as an honest verifier and all messages are random elements from $ \mathbb Z_p^* $. These are the prerequisites needed to convert the protocol presented so far into a non-interactive protocol that is secure and has full zero-knowledge in the random oracle model (thus without a trusted setup), using the Fiat-Shamir heuristic^{def}.
MPC Protocol for Bulletproofs
This protocol allows multiple parties to construct a single, simple and efficient aggregate range proof designed for Bulletproofs. This is valuable when multiple parties want to create a single joined confidential transaction, where each party knows some of the inputs and outputs and needs to create range proofs for their known outputs. In Bulletproofs, $ m $ parties, each having a Pedersen Commitment $ (V_k)_{k=1}^m $, can generate a single Bulletproof that each $ V_k $ commits to a number in some fixed range.
Let the superscript $ (k) $ denote the $ k $th party's share, thus $ A^{(k)} $ is generated using only the inputs of party $ k $. A set of distinct generators $ (g^{(k)}, h^{(k)})^m_{k=1} $ is assigned to each party, and $ \mathbf g,\mathbf h $ are defined as the interleaved concatenation of all $ g^{(k)} , h^{(k)} $ such that
$$ g_i=g_{[{i \over{m}}]}^{((i-1) \mod m+1)} \mspace{15mu} \mathrm{and} \mspace{15mu} h_i=h_{[{i \over{m}}]}^{((i-1) \mod m+1)} $$
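The indexing above can be sketched directly. The helper below (a hypothetical name, purely for illustration) builds the interleaved concatenation for lists of labelled generators:

```python
# Sketch of the interleaved generator assignment: the i-th combined generator
# (1-based) is party ((i-1) mod m) + 1's generator number ceil(i/m).
import math

def interleave(per_party):
    """per_party[k] is party k's generator list; returns the combined vector g."""
    m = len(per_party)
    n = m * len(per_party[0])
    return [per_party[(i - 1) % m][math.ceil(i / m) - 1] for i in range(1, n + 1)]

# Two parties with two labelled generators each: the combined vector
# alternates between the parties, round by round.
gens = [["g1_1", "g1_2"], ["g2_1", "g2_2"]]
assert interleave(gens) == ["g1_1", "g2_1", "g1_2", "g2_2"]
```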
The protocol either uses three rounds with linear communication in both $ m $ and the binary encoding of the range, or it uses a logarithmic number of rounds and communication that is only linear in $ m $. For the linear communication case, the protocol in Inner-Product Range Proof is followed, with the difference that each party generates its part of the proof using its own inputs and generators, that is
$$ A^{(k)} , S^{(k)}; \mspace{15mu} T_1^{(k)} , T_2^{(k)}; \mspace{15mu} \tau_x^{(k)} , \mu^{(k)} , \hat{t}^{(k)} , \mathbf{l}^{(k)} , \mathbf{r}^{(k)} $$
These shares are sent to a dealer (could be anyone, even one of the parties) who adds them homomorphically to generate the respective proof components, for example
$$ A = \prod^m_{k=1} A^{(k)} \mspace{15mu} \mathrm{and} \mspace{15mu} \tau_x = \sum^m_{k=1} \tau_x^{(k)} $$
In each round, the dealer generates the challenges using the Fiat-Shamir^{def} heuristic and the combined proof components, and sends them to each party. In the end each party sends $ \mathbf{l}^{(k)},\mathbf{r}^{(k)} $ to the dealer, who computes $ \mathbf{l},\mathbf{r} $ as the interleaved concatenation of all the shares. The dealer runs the inner product argument (Protocol 1) to generate the final proof. Each proof component is the (homomorphic) sum of each party's proof components, and each share constitutes part of a separate zero-knowledge proof. An example of the MPC protocol implementation using three rounds with linear communication is shown in Figure 7.
The communication can be reduced by running a second MPC protocol for the inner product argument, reducing the rounds to $ \log_2(l) $. Up to the last $ \log_2(l) $ round each party's witnesses are independent, and the overall witness is the interleaved concatenation of the parties' witnesses. The parties compute $ L^{(k)}, R^{(k)} $ in each round, and the dealer computes $ L, R $ as the homomorphic sum of the shares. In the final round the dealer generates the final challenge and sends it to each party, who in turn send their witnesses to the dealer, who completes Protocol 2.
Zero-Knowledge Proof for Arithmetic Circuits
Bulletproofs present an efficient zero-knowledge argument for arbitrary Arithmetic Circuits^{def} with a proof size of $ 2 \cdot [ \log _2 (n)] + 13 $ elements, with $ n $ denoting the multiplicative complexity (number of multiplication gates) of the circuit.
Bootle et al. [2] showed how an arbitrary arithmetic circuit with $ n $ multiplication gates can be converted into a relation containing a Hadamard Product^{def} relation with additional linear consistency constraints. The communication cost of the addition gates in the argument was removed by providing a technique that can directly handle a set of Hadamard products and linear relations together. For a two-input multiplication gate, let $ \mathbf a_L , \mathbf a_R $ be the left and right input vectors respectively; then $ \mathbf a_L \circ \mathbf a_R = \mathbf a_O $, where $ \mathbf a_O $ is the vector of outputs. Let $ Q \leqslant 2 \cdot n $ be the number of linear consistency constraints, $ \mathbf W_{L,q} \mspace{3mu} , \mathbf W_{R,q} \mspace{3mu}, \mathbf W_{O,q} \in \mathbb Z_p^n $ be the gate weights and $ c_q \in \mathbb Z_p $ for all $ q \in [1,Q] $; then the linear consistency constraints have the form
$$ \langle \mathbf W_{L,q}, \mathbf a_L \rangle + \langle \mathbf W_{R,q}, \mathbf a_R \rangle +\langle \mathbf W_{O,q}, \mathbf a_O \rangle = c_q $$
The high-level idea of this protocol is to convert the Hadamard-product relation along with the linear consistency constraints into a single inner-product relation. Pedersen Commitments $ V_j $ are also included as input wires to the arithmetic circuit, which is an important refinement; otherwise the arithmetic circuit would need to implement a commitment algorithm. The linear constraints also include openings $ v_j $ of $ V_j $.
Inner-Product Proof for Arithmetic Circuits (Protocol 3)
Similar to the Inner-Product Range Proof, the prover $ \mathcal{P} $ produces a random linear combination of the Hadamard Product^{def} and linear constraints to form a single inner-product constraint. If the combination is chosen randomly by the verifier $ \mathcal{V} $, then with overwhelming probability the inner-product constraint implies the other constraints. A proof system must be presented for relation (10) below:
$$ \begin{aligned} \mspace{3mu} (g,h \in \mathbb{G} \mspace{3mu} ; \mspace{3mu} \mathbf g,\mathbf h \in \mathbb{G}^n \mspace{3mu} ; \mspace{3mu} \mathbf V \in \mathbb{G}^m \mspace{3mu} ; \mspace{3mu} \mathbf W_{L} , \mathbf W_{R} , \mathbf W_{O} \in \mathbb Z_p^{Q \times n} \mspace{3mu} ; \\ \mathbf W_{V} \in \mathbb Z_p^{Q \times m} \mspace{3mu} ; \mspace{3mu} \mathbf c \in \mathbb Z_p^{Q} \mspace{3mu} ; \mspace{3mu} \mathbf a_L , \mathbf a_R , \mathbf a_O \in \mathbb Z_p^{n} \mspace{3mu} ; \mspace{3mu} \mathbf v , \boldsymbol \gamma \in \mathbb Z_p^{m}) \mspace{3mu} : \mspace{15mu} \\ V_j =h^{\gamma_j} g^{v_j} \mspace{6mu} \forall \mspace{6mu} j \in [1,m] \mspace{6mu} \wedge \mspace{6mu} \mathbf a_L \circ \mathbf a_R = \mathbf a_O \mspace{6mu} \wedge \mspace{50mu} \\ \mathbf W_L \cdot \mathbf a_L + \mathbf W_R \cdot \mathbf a_R + \mathbf W_O \cdot \mathbf a_O = \mathbf W_V \cdot \mathbf v + \mathbf c \mspace{50mu} \end{aligned} \mspace{70mu} (10) $$
Let $ \mathbf W_V \in \mathbb Z_p^{Q \times m} $ be the weights for a commitment $ V_j $. Relation (10) only holds when $ \mathbf W_{V} $ is of rank $ m $, i.e. if the columns of the matrix are all linearly independent.
Part 1 of the protocol is presented in Figure 8 where the prover $ \mathcal{P} $ commits to $ l(X),r(X),t(X) $.
Part 2 of the protocol is presented in Figure 9 where the prover $ \mathcal{P} $ convinces the verifier $ \mathcal{V} $ that the polynomials are well formed and that $ \langle l(X),r(X) \rangle = t(X) $.
The proof system presented here has the following Commitment Scheme properties:
 Perfect completeness (hiding): Every validity/truth is provable; also see Definition 9 in [1];
 Perfect honest-verifier zero-knowledge: The verifier $ \mathcal{V} $ behaves according to the protocol; also see Definition 12 in [1];
 Computational witness-extended emulation (binding): A witness can be computed in time closely related to the time spent by the prover $ \mathcal{P} $; also see Definition 10 in [1].
Logarithmic-Sized Non-Interactive Protocol for Arithmetic Circuits
Similar to the Logarithmic Range Proof, the communication cost of Protocol 3 can be reduced by using the efficient inner-product argument. Transmission of the vectors $ \mathbf {l} $ and $ \mathbf {r} $ to the verifier $ \mathcal{V} $ (step (82) Figure 9) can be eliminated, and the transfer of information limited to the scalars $ ( \tau _x , \mu , \hat t ) $ alone. The prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ engage in an inner-product argument on public input $ (\mathbf {g} , \mathbf {h} ' , P \cdot h^{-\mu}, \hat t) $ to check correctness of $ \mathbf {l} $ and $ \mathbf {r} $ (step (92) Figure 9) and $ \hat {t} $ (step (88) Figure 9); this is the same as verifying that the witness $ \mathbf {l} , \mathbf {r} $ satisfies the inner-product relation. Communication is now reduced to $ 2 \cdot \lceil \log_2(n) \rceil + 8 $ group elements and $ 5 $ elements in $ \mathbb Z_p $ instead of $ 2 \cdot n $ elements, thereby achieving a proof size that is logarithmic in $ n $.
Similar to the Non-Interactive Proof through Fiat-Shamir, the protocol presented so far can be turned into an efficient non-interactive proof that is secure and fully zero-knowledge in the random oracle model (thus without a trusted setup), using the Fiat-Shamir Heuristic^{def}.
The proof system presented here has the following Commitment Scheme properties:
 Perfect completeness (hiding): Every validity/truth is provable; also see Definition 9 in [1];
 Statistical zero-knowledge: The verifier $ \mathcal{V} $ behaves according to the protocol, and $ \mathbf {l} , \mathbf {r} $ can be efficiently simulated;
 Computational soundness (binding): If the generators $ \mathbf {g} , \mathbf {h} , g , h $ are independently generated, then finding a discrete logarithm relation between them is as hard as breaking the Discrete Log Problem.
Optimized Verifier using Multi-Exponentiation and Batch Verification
In many of the Bulletproofs' Use Cases, the verifier's runtime is of particular interest. This protocol presents optimizations for a single range proof that are also extendable to aggregate range proofs and the arithmetic circuit protocol.
Multi-exponentiation
In Protocol 2, verification of the inner-product is reduced to a single multi-exponentiation. This can be extended to verify the whole range proof using a single multi-exponentiation of size $ 2n + \log_2(n) + 7 $. In Protocol 2, the Bulletproof verifier $ \mathcal{V} $ only performs two checks, that is step (68) Figure 6 and step (16) Figure 2.
In the protocol presented in Figure 10, that is processed by the verifier $ \mathcal{V} $, $ x_u $ is the challenge from Protocol 1, $ x_j $ the challenge from round $ j $ of Protocol 2, and $ L_j , R_j $ the $ L , R $ values from round $ j $ of Protocol 2.
A further idea is that the multi-exponentiations (steps (98) and (105) in Figure 10) be delayed until those checks are performed, and that they are combined into a single check using a random value $ c \xleftarrow{\$} \mathbb Z_p $. This follows from the fact that if $ A^cB = 1 $ for a random $ c $, then with high probability $ A = 1 \mspace{3mu} \wedge \mspace{3mu} B = 1 $. Various algorithms are known to compute the multi-exponentiations and scalar quantities (steps (101) and (102) in Figure 10) efficiently (sub-linearly), thereby further improving the speed and efficiency of the protocol.
Batch verification
A further important optimization concerns the verification of multiple proofs. The essence of the verification is to calculate a large multi-exponentiation. Batch verification is applied in order to reduce the number of expensive exponentiations. This is based on the observation that checking $ g^x = 1 \mspace{3mu} \wedge \mspace{3mu} g^y = 1 $ can be done by drawing a random scalar $ \alpha $ from a large enough domain and checking that $ g^{\alpha x + y} = 1 $. With high probability, the latter equation implies the two original ones. When applied to multi-exponentiations, $ 2n $ exponentiations can be saved per additional proof. Verifying $ m $ distinct range proofs of size $ n $ only requires a single multi-exponentiation of size $ 2n+2+m \cdot (2 \cdot \log (n) + 5 ) $, along with $ O ( m \cdot n ) $ scalar operations.
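The trick above can be sketched numerically. The following is a minimal example, assuming a toy prime-order subgroup with hypothetical parameters ($ p = 2q + 1 $, far too small for real use):

```python
import secrets

# Toy prime-order group: p = 2q + 1 with q prime; g = 4 generates the
# order-q subgroup of Z_p^*. (Hypothetical toy parameters, not secure sizes.)
p, q, g = 2039, 1019, 4

def is_identity(x):
    """Check g^x = 1, i.e. x = 0 (mod q)."""
    return pow(g, x % q, p) == 1

x, y = 0, 0                            # honest exponents: both checks hold
alpha = secrets.randbelow(q - 1) + 1   # random scalar from a large domain

# One combined exponentiation replaces two separate checks.
assert pow(g, (alpha * x + y) % q, p) == 1

# A false claim (x != 0 mod q) is caught for every nonzero alpha in this
# toy group, and with overwhelming probability in a cryptographic group.
assert pow(g, (alpha * 42 + y) % q, p) != 1
```

The same idea extends to merging the multi-exponentiations of many proofs into one.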
Evolving Bulletproof Protocols
Interstellar [24] recently introduced the Programmable Constraint Systems for Bulletproofs [23], an evolution of the Zero-Knowledge Proof for Arithmetic Circuits, extending it to support proving arbitrary statements in zero-knowledge using a constraint system, bypassing arithmetic circuits altogether. They provide an Application Programming Interface (API) for building a constraint system directly, without the need to construct arithmetic expressions and then transform them into constraints. The Bulletproof constraint system proofs are then used as building blocks for a confidential assets protocol called Cloak.
The constraint system has three kinds of variables:
 High-level witness variables:
 Known only to the prover $ \mathcal{P} $, as external inputs to the constraint system;
 Represented as individual Pedersen Commitments to the external variables in Bulletproofs.
 Low-level witness variables:
 Known only to the prover $ \mathcal{P} $, as internal to the constraint system;
 Representing the inputs and outputs of the multiplication gates.
 Instance variables:
 Known to both the prover $ \mathcal{P} $ and the verifier $ \mathcal{V} $, as public parameters;
 Represented as a family of constraint systems parameterized by public inputs (compatible with Bulletproofs);
 Folding all instance variables into a single constant parameter internally.
Instance variables can select the constraint system out of a family for each proof. The constraint system becomes a challenge from a verifier $ \mathcal{V} $ to a prover $ \mathcal{P} $, where some constraints are generated randomly in response to the prover's $ \mathcal{P} $ commitments. Using challenges to parameterize constraint systems makes the resulting proof smaller; in the case of verifiable shuffles, it requires only $ O(n) $ multiplications, compared to the $ O(n^2) $ needed with a static constraint system.
Merlin transcripts [25] employing the Fiat-Shamir Heuristic^{def} are used to generate the challenges. The challenges are bound to the high-level witness variables (the external inputs to the constraint system), which are added to the transcript before any of the constraints are created. The prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ can then compute weights for some constraints with the use of the challenges.
Because the challenges are not bound to low-level witness variables, the resulting construction can be unsafe. Interstellar is working on an improvement to the protocol that would allow challenges to be bound to a subset of the low-level witness variables, and on a safer API using features of Rust’s type system.
The resulting API provides a single code path used by both the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $ to allocate variables and define constraints. This is organized into a hierarchy of task-specific gadgets, which manage allocation, assignment and constraints on the variables, ensuring that all variables are constrained. Gadgets interact with mutable constraint system objects, which are specific to the prover $ \mathcal{P} $ and verifier $ \mathcal{V} $. They also receive secret variables and public parameters, and generate challenges.
The Bulletproofs library [22] does not provide any standard gadgets, but only an API for the constraint system. Each protocol built on top of the Bulletproofs library must create its own collection of gadgets to enable building a complete constraint system out of them. The Interstellar Bulletproof zero-knowledge proof protocol built with their programmable constraint system is shown in Figure 11.
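To make the gadget/constraint-system idea concrete, here is a hypothetical, heavily simplified sketch. It is not the dalek Bulletproofs API and performs no proving; it only shows how multiplication gates and linear constraints on their outputs might be accumulated by gadget code and checked against an assignment:

```python
# A minimal, hypothetical constraint-system sketch (not the dalek API):
# multiplication gates produce low-level variables, and linear constraints
# relate them (in this toy, constraints apply to gate outputs only).
class ConstraintSystem:
    def __init__(self, p=1019):
        self.p = p
        self.a_L, self.a_R, self.a_O = [], [], []
        self.constraints = []          # list of ({var_index: weight}, constant)

    def multiply(self, left, right):
        """Allocate a multiplication gate; return the output variable index."""
        i = len(self.a_L)
        self.a_L.append(left % self.p)
        self.a_R.append(right % self.p)
        self.a_O.append(left * right % self.p)
        return i

    def constrain(self, weights, constant):
        self.constraints.append((weights, constant % self.p))

    def is_satisfied(self):
        for weights, constant in self.constraints:
            total = sum(w * self.a_O[i] for i, w in weights.items()) % self.p
            if total != constant:
                return False
        return True

# A "gadget" asserting x * y == 35 (here with the witness x=5, y=7).
cs = ConstraintSystem()
out = cs.multiply(5, 7)
cs.constrain({out: 1}, 35)
assert cs.is_satisfied()
```

In the real library the same gadget code path would commit to the witness and drive the proving or verifying constraint system object, rather than check values directly.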
Conclusions, Observations, Recommendations
 Bulletproofs have many potential use cases or applications, but are still under development. A new confidential blockchain protocol like Tari should carefully consider expanded use of Bulletproofs to maximally leverage functionality of the code base.
 Bulletproofs are not done yet, as illustrated in Evolving Bulletproof Protocols, and their further development and efficient implementation have a lot of traction in the community.
 Bünz et al. [1] proposed that the switch commitment scheme defined by Ruffing et al. [10] can be used for Bulletproofs if doubts in the underlying cryptographic hardness (discrete log) assumption arise in future. The switch commitment scheme allows for a blockchain with proofs that are currently only computationally binding to later switch to a proof system that is perfectly binding and secure against quantum adversaries; this will weaken the perfectly hiding property as a drawback, and slow down all proof calculations. In the Bünz et al. [1] proposal, all Pedersen Commitments will be replaced with ElGamal Commitments^{def} to move from computational binding to perfect binding. They also gave further ideas about how the ElGamal commitments can possibly be enhanced to improve the hiding property to be statistical or perfect. (See the Grin project's implementation here.)
 It is important that developers understand more about the fundamental underlying mathematics when implementing something like Bulletproofs, even if they just reuse libraries developed by someone else.
References
[1] Bulletproofs: Short Proofs for Confidential Transactions and More, Blockchain Protocol Analysis and Security Engineering 2018, Bünz B., Bootle J., Boneh D., Poelstra A., Wuille P. and Maxwell G., http://web.stanford.edu/~buenz/pubs/bulletproofs.pdf, Date accessed: 2018-09-18.
[2] Efficient Zero-knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting, Bootle J., Cerulli A., Chaidos P., Groth J. and Petit C., Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 327-357, Springer, 2016, https://eprint.iacr.org/2016/263.pdf, Date accessed: 2018-09-21.
[3] Confidential Assets, Poelstra A., Back A., Friedenbach M., Maxwell G. and Wuille P., Blockstream, https://blockstream.com/bitcoin17-final41.pdf, Date accessed: 2018-09-25.
[4] Wikipedia: Zero-knowledge Proof, https://en.wikipedia.org/wiki/Zero-knowledge_proof, Date accessed: 2018-09-18.
[5] Wikipedia: Discrete Logarithm, https://en.wikipedia.org/wiki/Discrete_logarithm, Date accessed: 2018-09-20.
[6] How to Prove Yourself: Practical Solutions to Identification and Signature Problems, Fiat A. and Shamir A., CRYPTO 1986, pp. 186-194, https://link.springer.com/content/pdf/10.1007%2F3540477217_12.pdf, Date accessed: 2018-09-20.
[7] How not to Prove Yourself: Pitfalls of the Fiat-Shamir Heuristic and Applications to Helios, Bernhard D., Pereira O. and Warinschi B., https://link.springer.com/content/pdf/10.1007%2F9783642349614_38.pdf, Date accessed: 2018-09-20.
[8] Pedersen-commitment: An Implementation of Pedersen Commitment Schemes, https://hackage.haskell.org/package/pedersen-commitment, Date accessed: 2018-09-25.
[9] Zero Knowledge Proof Standardization - An Open Industry/Academic Initiative, https://zkproof.org/documents.html, Date accessed: 2018-09-26.
[10] Switch Commitments: A Safety Switch for Confidential Transactions, Ruffing T. and Malavolta G., https://eprint.iacr.org/2017/237.pdf, Date accessed: 2018-09-26.
[11] GitHub: adjoint-io/bulletproofs, Bulletproofs are Short Non-interactive Zero-knowledge Proofs that Require no Trusted Setup, https://github.com/adjoint-io/bulletproofs, Date accessed: 2018-09-10.
[12] Wikipedia: Commitment Scheme, https://en.wikipedia.org/wiki/Commitment_scheme, Date accessed: 2018-09-26.
[13] Cryptography Wikia: Commitment Scheme, http://cryptography.wikia.com/wiki/Commitment_scheme, Date accessed: 2018-09-26.
[14] Adjoint Inc. Documentation: Pedersen Commitment Scheme, https://www.adjoint.io/docs/cryptography.html#pedersen-commitment-scheme, Date accessed: 2018-09-27.
[15] Non-interactive and Information-theoretic Secure Verifiable Secret Sharing, Pedersen T., https://www.cs.cornell.edu/courses/cs754/2001fa/129.pdf, Date accessed: 2018-09-27.
[16] Assumptions Related to Discrete Logarithms: Why Subtleties Make a Real Difference, Sadeghi A. and Steiner M., http://www.semper.org/sirene/publ/SaSt_01.dhetal.long.pdf, Date accessed: 2018-09-24.
[17] Intensified ElGamal Cryptosystem (IEC), Sharma P., Gupta A. and Sharma S., International Journal of Advances in Engineering & Technology, Jan 2012, http://www.eijaet.org/media/58I6IJAET0612695.pdf, Date accessed: 2018-10-09.
[18] On the Security of ElGamal Based Encryption, Tsiounis Y. and Yung M., https://drive.google.com/file/d/16XGAByoXse5NQl57v_GldJwzmvaQlS94/view, Date accessed: 2018-10-09.
[19] Wikipedia: Decisional Diffie–Hellman Assumption, https://en.wikipedia.org/wiki/Decisional_Diffie%E2%80%93Hellman_assumption, Date accessed: 2018-10-09.
[20] Wikipedia: Arithmetic Circuit Complexity, https://en.wikipedia.org/wiki/Arithmetic_circuit_complexity, Date accessed: 2018-11-08.
[21] Wikipedia: Hadamard Product (Matrices), https://en.wikipedia.org/wiki/Hadamard_product_(matrices), Date accessed: 2018-11-12.
[22] Dalek Cryptography - Crate bulletproofs, https://doc.dalek.rs/bulletproofs/index.html, Date accessed: 2018-11-12.
[23] Programmable Constraint Systems for Bulletproofs, https://medium.com/interstellar/programmable-constraint-systems-for-bulletproofs-365b9feb92f7, Date accessed: 2018-11-22.
[24] Inter/stellar website, https://interstellar.com, Date accessed: 2018-11-22.
[25] Dalek Cryptography - Crate merlin, https://doc.dalek.rs/merlin/index.html, Date accessed: 2018-11-22.
[26] Homomorphic Mini-blockchain Scheme, Franca B., April 2015, http://cryptonite.info/files/HMBC.pdf, Date accessed: 2018-11-22.
[27] Efficient Implementation of Pedersen Commitments Using Twisted Edwards Curves, Franck C. and Großschädl J., University of Luxembourg, http://orbilu.uni.lu/bitstream/10993/33705/1/MSPN2017.pdf, Date accessed: 2018-11-22.
[28] An Investigation into Confidential Transactions, Gibson A., July 2018, https://github.com/AdamISZ/ConfidentialTransactionsDoc/blob/master/essayonCT.pdf, Date accessed: 2018-11-22.
Appendices
Appendix A: Definition of Terms
Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.
 Arithmetic Circuits: An arithmetic circuit $ C $ over a field $ F $ and variables $ (x_1, ..., x_n) $ is a directed acyclic graph whose vertices are called gates. Arithmetic circuits can alternatively be described as a list of addition and multiplication gates with a collection of linear consistency equations relating the inputs and outputs of the gates. The size of an arithmetic circuit is the number of gates in it, with the depth being the length of the longest directed path. Upper bounding the complexity of a polynomial $ f $ is to find any arithmetic circuit that can calculate $ f $, whereas lower bounding is to find the smallest arithmetic circuit that can calculate $ f $. An example of a simple arithmetic circuit with size six and depth two that calculates a polynomial is shown below. ([11], [20])
 Discrete Logarithm/Discrete Logarithm Problem (DLP): In the mathematics of real numbers, the logarithm $ \log_b a $ is a number $ x $ such that $ b^x=a $, for given numbers $ a $ and $ b $. Analogously, in any group $ G $, powers $ b^k $ can be defined for all integers $ k $, and the discrete logarithm $ \log_b a $ is an integer $ k $ such that $ b^k=a $. Algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen cyclic finite groups and cyclic subgroups of elliptic curves over finite fields has no efficient solution. ([5], [16])
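For illustration, a brute-force search solves the discrete logarithm easily in a tiny group (hypothetical toy parameters); the security assumption is precisely that no comparably cheap method exists at cryptographic group sizes:

```python
# Brute-force discrete log in a toy group: feasible only because the
# group is tiny; real groups make this search infeasible.
p, g = 2039, 7

def dlog(a):
    """Return the smallest k >= 0 with g^k = a (mod p)."""
    k, acc = 0, 1
    while acc != a:
        acc = acc * g % p
        k += 1
    return k

a = pow(g, 123, p)   # "public key" a = g^123
assert dlog(a) == 123
```

Doubling the bit-length of the group roughly squares the cost of this naive search, while exponentiation itself stays cheap; that asymmetry is what DLP-based schemes rely on.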
 ElGamal Commitment/Encryption: An ElGamal commitment is a Pedersen Commitment^{def} with an additional commitment $ g^r $ to the randomness used. The ElGamal encryption scheme is based on the Decisional Diffie-Hellman (DDH) assumption and the difficulty of the DLP for finite fields. The DDH assumption states that it is infeasible for a Probabilistic Polynomial-time (PPT) adversary to solve the DDH problem. (Note: The ElGamal encryption scheme should not be confused with the ElGamal signature scheme.) ([1], [17], [18], [19])
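The relationship between the two commitments can be sketched as follows (hypothetical toy parameters; in practice the generators live in an elliptic-curve group, and $ h $ must be derived so that $ \log_g h $ is unknown):

```python
import secrets

# Toy Pedersen vs ElGamal commitments in a prime-order subgroup of Z_p^*.
# (Hypothetical small parameters; not secure sizes.)
p, q = 2039, 1019
g, h = 4, 9           # generators of the order-q subgroup (toy choice only)

def pedersen(v, r):
    """Pedersen commitment h^r * g^v: perfectly hiding, computationally binding."""
    return pow(h, r, p) * pow(g, v, p) % p

def elgamal(v, r):
    """ElGamal commitment: Pedersen commitment plus g^r, exposing the
    randomness in the exponent; binding becomes perfect, hiding only
    computational (under DDH)."""
    return (pedersen(v, r), pow(g, r, p))

v, r = 42, secrets.randbelow(q)
c1, c2 = elgamal(v, r)
assert c1 == pedersen(v, r)
assert c2 == pow(g, r, p)
```

This is the structure behind the proposed "switch": the extra $ g^r $ component is what allows a later move to perfectly binding verification.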
 Fiat–Shamir Heuristic/Transformation: The Fiat–Shamir heuristic is a technique in cryptography to convert an interactive public-coin protocol (Sigma protocol) between a prover and a verifier into a one-message (non-interactive) protocol using a cryptographic hash function. ([6], [7])

The prover will use a `Prove()` algorithm to calculate a commitment $ A $, with a statement $ Y $ that is shared with the verifier and a secret witness value $ w $ as inputs. The commitment $ A $ is then hashed to obtain the challenge $ c $, which is further processed with the `Prove()` algorithm to calculate the response $ f $. The single message sent to the verifier then contains the challenge $ c $ and response $ f $.
The verifier is then able to compute the commitment $ A $ from the shared statement $ Y $, challenge $ c $ and response $ f $. The verifier will then use a `Verify()` algorithm to verify the combination of shared statement $ Y $, commitment $ A $, challenge $ c $ and response $ f $.
A weak Fiat–Shamir transformation can be turned into a strong Fiat–Shamir transformation if the hashing function is applied to both the commitment $ A $ and the shared statement $ Y $ to obtain the challenge $ c $, as opposed to only the commitment $ A $.
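The transformation can be sketched with a Schnorr proof of knowledge of a discrete logarithm, using the strong variant in which both the statement $ Y $ and the commitment $ A $ are hashed (hypothetical toy group parameters, far too small for real use):

```python
import hashlib, secrets

# Non-interactive Schnorr proof via the (strong) Fiat-Shamir transformation,
# in a toy prime-order group: p = 2q + 1, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(*vals):
    """Hash role of the random oracle: challenge derived from the transcript."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(w, Y):
    a = secrets.randbelow(q)
    A = pow(g, a, p)        # commitment
    c = H(Y, A)             # strong variant: hash statement AND commitment
    f = (a + c * w) % q     # response
    return A, c, f

def verify(Y, A, c, f):
    # Recompute the challenge, then check g^f = A * Y^c (mod p).
    return c == H(Y, A) and pow(g, f, p) == A * pow(Y, c, p) % p

w = secrets.randbelow(q)    # secret witness
Y = pow(g, w, p)            # shared statement
assert verify(Y, *prove(w, Y))
```

The weak variant would compute `c = H(A)` only, which is what the pitfalls in [7] exploit.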

 Hadamard Product: In mathematics, the Hadamard product is a binary operation that takes two matrices $ \mathbf {A} , \mathbf {B} $ of the same dimensions, and produces another matrix of the same dimensions where each element $ i,j $ is the product of elements $ i,j $ of the original two matrices. The Hadamard product $ \mathbf {A} \circ \mathbf {B} $ is different from normal matrix multiplication most notably because it is also commutative $ [ \mathbf {A} \circ \mathbf {B} = \mathbf {B} \circ \mathbf {A} ] $ along with being associative $ [ \mathbf {A} \circ ( \mathbf {B} \circ \mathbf {C} ) = ( \mathbf {A} \circ \mathbf {B} ) \circ \mathbf {C} ] $ and distributive over addition $ [ \mathbf {A} \circ ( \mathbf {B} + \mathbf {C} ) = \mathbf {A} \circ \mathbf {B} + \mathbf {A} \circ \mathbf {C} ] $. ([21])
$$ \mathbf {A} \circ \mathbf {B} = \mathbf {C} = (a_{11} \cdot b_{11} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{1m} \cdot b_{1m} \mspace{6mu} ; \mspace{6mu} . . . \mspace{6mu} ; \mspace{6mu} a_{n1} \cdot b_{n1} \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} a_{nm} \cdot b_{nm} ) $$
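A minimal sketch of the entrywise product and its commutativity:

```python
# Hadamard (entrywise) product of two same-sized matrices.
def hadamard(A, B):
    return [[a * b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert hadamard(A, B) == [[5, 12], [21, 32]]
assert hadamard(A, B) == hadamard(B, A)   # commutative, unlike matrix product
```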
 Zero-knowledge Proof/Protocol: In cryptography, a zero-knowledge proof/protocol is a method by which one party (the prover) can convince another party (the verifier) that a statement $ Y $ is true, without conveying any information apart from the fact that the prover knows the value of $ Y $. The proof system must be complete, sound and zero-knowledge. ([4], [9])

 Complete: If the statement is true and both prover and verifier follow the protocol, the verifier will accept.
 Sound: If the statement is false and the verifier follows the protocol, the verifier will not be convinced.
 Zero-knowledge: If the statement is true and the prover follows the protocol, the verifier will not learn any confidential information from the interaction with the prover, apart from the fact that the statement is true.

Contributors
 https://github.com/hansieodendaal
 https://github.com/neonknight64
 https://github.com/CjS77
 https://github.com/philiprza
Pure-Rust Elliptic Curve Cryptography
Summary
Fast, Safe, Pure-Rust Elliptic Curve Cryptography by Isis Lovecruft & Henry De Valence
This talk discusses the design and implementation of curve25519-dalek, a pure-Rust implementation of operations on the elliptic curve known as Curve25519.
They discuss the goals of the library and give a brief overview of the implementation strategy.
They also discuss features of the Rust language that allow them to achieve competitive performance without sacrificing safety or readability, and future features that could allow them to achieve more safety and more performance.
Finally, they will discuss how the dalek library makes it easy to implement complex cryptographic primitives, such as zero-knowledge proofs.
The content in this video is not complicated, but it is an advanced topic, since it dives deep into the internals of how cryptographic operations are implemented on computers.
Consensus Mechanisms
From Investopedia
A consensus mechanism is a fault-tolerant mechanism that is used in computer and blockchain systems to achieve the necessary agreement on a single data value or a single state of the network among distributed processes or multi-agent systems.
From KPMG
Consensus mechanism: A method of authenticating and validating a value or transaction on a Blockchain or a distributed ledger without the need to trust or rely on a central authority. Consensus mechanisms are central to the functioning of any blockchain or distributed ledger.
BFT Consensus Mechanisms
Having trouble viewing this presentation?
View it in a separate window.
Introduction to Applications of Byzantine Consensus Mechanisms
When considering how Tari will potentially build its second layer, an analysis of the most promising Byzantine Consensus Mechanisms and their applications was undertaken.
Important to consider is the 'scalability trilemma', a phrase coined by Vitalik Buterin, which takes into account the potential trade-offs between decentralization, security and scalability. [19]
Decentralization: a core principle on which the majority of these systems are built, taking into account censorship-resistance and ensuring that everyone, without prejudice, is permitted to take part in the decentralized system.
Scalability: encompasses the ability of the network to process transactions. Thus, if a public block chain is deemed to be efficient, effective and usable, it should be designed to handle millions of users on the network.
Security: refers to the immutability of the ledger and takes into account threats of 51% attacks, Sybil attacks, DDoS attacks, etc.
Through the recent development of this ecosystem, most block chains have focused on two of the three factors, namely decentralization and security, at the expense of scalability. The primary reason for this is that nodes must reach consensus before transactions can be processed. [19]
This report examines proposals for Byzantine Fault Tolerant (BFT) consensus mechanisms and considers their feasibility and efficiency in meeting the characteristics of scalability, decentralization and security. In each instance, the protocol assumptions, reference implementations, and whether the protocol may be used for Tari as a means to maintain the distributed asset state, are assessed.
This report discusses several terms and concepts related to consensus mechanisms; these include definitions of Consensus, Binary Consensus, Byzantine Fault Tolerance, Practical Byzantine Fault Tolerant Variants, Deterministic and Non-Deterministic Protocols, and the Scalability-Performance Trade-off. An important characteristic of consensus mechanisms is their degree of synchrony, which ranges over Synchrony, Partial Synchrony, Weak Synchrony, Random Synchrony and Asynchrony, as well as the Problem with Timing Assumptions. Definitions of Denial of Service Attack, the FLP Impossibility and Randomized Agreement are also provided.
A brief survey of Byzantine Fault Tolerant Consensus Mechanisms
Many peer-to-peer online real-time strategy games use a modified Lockstep protocol as a consensus protocol in order to manage the game state between players in a game. Each game action results in a game state delta broadcast to all other players in the game, along with a hash of the total game state. Each player validates the change by applying the delta to their own game state and comparing the game state hashes. If the hashes do not agree, then a vote is cast, and those players whose game state is in the minority are disconnected and removed from the game (known as a desync). [21]
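A minimal sketch of this desync check, with hypothetical game state and deltas:

```python
import hashlib

# Sketch of the lockstep consensus check: every player applies the same
# delta and compares state hashes; minority hashes are "desynced".
def state_hash(state):
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def apply_delta(state, delta):
    new = dict(state)
    new.update(delta)
    return new

start = {"gold": 100, "units": 5}
delta = {"gold": 90}

honest = apply_delta(start, delta)
cheater = apply_delta(start, {"gold": 9000})   # divergent local state

hashes = [state_hash(honest), state_hash(honest), state_hash(cheater)]
majority = max(set(hashes), key=hashes.count)
desynced = [i for i, h in enumerate(hashes) if h != majority]
assert desynced == [2]   # player 2 is voted out of the game
```

Note that, unlike BFT protocols, this simple majority-of-hashes vote offers no protection when the faulty players form the majority.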
Permissioned Byzantine Fault Tolerant Protocols
Introduction
Byzantine agreement schemes are considered well suited for permissioned block chains, where the identity of the participants is known. Examples include Hyperledger and Tendermint. Here the Federated Consensus Algorithm is implemented. [9]
Hyperledger Fabric (HLF)
HLF began as a project under the Linux Foundation in early 2016 [13], with the aim of creating an open-source, cross-industry standard platform for distributed ledgers. HLF is an implementation of a distributed ledger platform for running smart contracts, leveraging familiar and proven technologies, with a modular architecture allowing pluggable implementations of various functions. The distributed ledger protocol of the fabric is run on the peers. The fabric distinguishes between validating peers (they run the consensus algorithm, thus validating the transactions) and non-validating peers (they act as a proxy that helps in connecting clients to validating peers). The validating peers run a BFT consensus protocol for executing a replicated state machine that accepts deploy, invoke and query transactions as operations. [11]
The block chain's hash chain is computed based on the executed transactions and resulting persistent state. The replicated execution of chaincode (the transaction that involves accepting the code of the smart contract to be deployed) is used for validating the transactions. It is assumed that among n validating peers, at most f<n/3 (where f is the number of faulty nodes and n is the number of nodes present in the network) may behave arbitrarily, while the others will execute correctly, thus adopting the concept of BFT consensus. Since HLF proposes to follow PBFT, the chaincode transactions must be deterministic in nature, otherwise different peers might have different persistent states. The SIEVE protocol is used to filter out the non-deterministic transactions, thus assuring a unique persistent state among peers. [11]
While being redesigned for a v1.0 release, the fabric's goal was to achieve extensibility. This version allowed modules such as membership and the consensus mechanism to be exchanged. Being permissioned, this consensus mechanism is mainly responsible for receiving the transaction requests from the clients and establishing a total execution order. So far, these pluggable consensus modules include a centralized, single orderer for testing purposes and a crash-tolerant ordering service based on Apache Kafka. [9]
Tendermint
Tendermint Core is a BFT Proof-of-Stake (PoS) protocol that is composed of two protocols in one: a consensus algorithm and a peer-to-peer networking protocol. Jae Kwon and Ethan Buchman, inspired by the design goals behind Raft, specified Tendermint as an easy-to-understand, developer-friendly algorithm, while doing algorithmically complex systems engineering. [34]
Tendermint is modeled as a deterministic protocol, live under partial synchrony, which achieves throughput within the bounds of the latency of the network and individual processes themselves.
Tendermint rotates through the validator set in a weighted round-robin fashion: the higher the stake (i.e. voting power) a validator possesses, the greater its weighting, and the proportionally more times it will be elected as leader. Thus, if one validator has the same amount of voting power as another validator, they will both be elected by the protocol an equal number of times. [34]
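A simplified sketch of such stake-weighted round-robin leader election follows (this mimics the idea rather than Tendermint's exact proposer-selection algorithm; validator names and stakes are hypothetical):

```python
# Simplified sketch of stake-weighted round-robin proposer selection:
# priorities accumulate by voting power each round, and the elected
# proposer "pays back" the total power, so election frequency is
# proportional to stake.
def proposers(powers, rounds):
    total = sum(powers.values())
    priority = {v: 0 for v in powers}
    chosen = []
    for _ in range(rounds):
        for v in priority:
            priority[v] += powers[v]
        leader = max(priority, key=lambda v: (priority[v], v))
        priority[leader] -= total
        chosen.append(leader)
    return chosen

# A validator with twice the stake is elected twice as often.
schedule = proposers({"a": 2, "b": 1}, 300)
assert schedule.count("a") == 200 and schedule.count("b") == 100
```

The pay-back step is what keeps the schedule deterministic and fair over time, rather than simply electing the richest validator every round.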
Critics have argued that Tendermint is not decentralized, in that one can distinguish and target its leadership, launching a DDoS attack against it and stifling the progression of the chain. Although the Sentry Architecture (containing Sentry Nodes) has been implemented in Tendermint, the degree of decentralization achieved is still questionable.
Sentry Nodes
Sentry Nodes are guardians of a validator node and provide the validator nodes with access to the rest of the network. Sentry nodes are well connected to other full nodes on the network. Sentry nodes may be dynamic, but should maintain persistent connections to some evolving random subset of each other. They should always expect to have direct incoming connections from the validator node and its backup(s). They do not report the validator node's address in the Peer Exchange Reactor (PEX) and they may be more strict about the quality of peers they keep.
Sentry nodes belonging to validators that trust each other may wish to maintain persistent connections via Virtual Private Network (VPN) with one another, but only report each other sparingly in the PEX. [44]
Permissionless Byzantine Fault Tolerant Protocols (Part 1)
Introduction
BFT protocols face several limitations when utilized in permissionless block chains. They do not scale well with the number of participants, resulting in performance deterioration for the targeted network sizes. In addition, they are not well established in this setting, thus they are prone to security issues, e.g. Sybil attacks. Currently, there are approaches that attempt to circumvent or solve this problem. [9]
Paxos
The Paxos family of protocols includes a spectrum of trade-offs between the number of processors, number of message delays before learning the agreed value, the activity level of individual participants, number of messages sent, and types of failures. Although the FLP theorem states that there is no deterministic fault-tolerant consensus protocol that can guarantee progress in an asynchronous network, Paxos guarantees safety (consistency), and the conditions that could prevent it from making progress are difficult to provoke [29].
Paxos achieves consensus as long as there are at most f failures, where f < (n-1)/2. These failures cannot be Byzantine (otherwise the BFT proof would be violated), so it is assumed that messages are never corrupted and that nodes do not collude to subvert the system.
Paxos proceeds through a set of negotiation rounds, with one node having 'Leadership' status. Progress will stall if the leader becomes unreliable, until a new leader is elected, or if an old leader suddenly comes back online and a dispute between the two leader nodes arises.
Chandra-Toueg
The Chandra–Toueg consensus algorithm was published by Tushar Deepak Chandra and Sam Toueg in 1996. It relies on a special node that acts as a failure detector. In essence, it pings other nodes to make sure they're still responsive.
This implies that the detector stays online and that the detector must continuously be made aware when new nodes join the network.
The algorithm itself is similar to the Paxos algorithm, which also relies on failure detectors and as such requires f < n/2, where n is the total number of processes. [27]
Raft
Raft is a consensus algorithm designed as an alternative to Paxos. It was meant to be more understandable than Paxos by means of separation of logic, but it is also formally proven safe and offers some additional features [28].
Raft achieves consensus via an elected leader. Each follower has a timeout in which it expects the heartbeat from the leader. It is thus a synchronous protocol. If the leader fails, an election is held to find a new leader. This entails nodes nominating themselves on a first-come, first-served basis. Hung votes require the election to be scrapped and restarted. This suggests that a high degree of cooperation is required by nodes and that malicious nodes could easily collude to disrupt a leader and then prevent a new leader from being elected. Raft is a simple algorithm but is clearly unsuitable for consensus in cryptocurrency applications.
While Paxos, Raft and many other well-known protocols tolerate crash faults, Byzantine fault tolerant protocols, beginning with PBFT, tolerate even arbitrarily corrupted nodes. Many subsequent protocols offer improved performance, often through optimistic execution that provides excellent performance when there are no faults, clients do not contend much and the network is well behaved, while guaranteeing at least some progress otherwise.
In general, BFT systems are evaluated in deployment scenarios where latency and CPU are the bottleneck, thus the most effective protocols reduce the number of rounds and minimize expensive cryptographic operations.
Clement et al. [40] initiated a recent line of work by advocating improvement of the worst-case performance, providing service quality guarantees even when the system is under attack, even if this comes at the expense of performance in the optimistic case. However, although the "robust BFT protocols in this vein gracefully tolerate compromised nodes, they still rely on timing assumptions about the underlying network". Thus focus has shifted to asynchronous networks. [6]
HashGraph
The Hashgraph consensus algorithm [30] was released in 2016. It claims Byzantine fault tolerance under complete asynchrony assumptions, no leaders, no round robin, no proof-of-work, eventual consensus with probability one, and high speed in the absence of faults.
It is based on the gossip protocol, which is a fairly efficient distribution strategy that entails nodes randomly sharing information with each other, similar to how human beings gossip with each other.
Nodes jointly build a hash graph reflecting all of the gossip events. This allows Byzantine agreement to be achieved through virtual voting. Alice does not send Bob a vote over the Internet. Instead, Bob calculates what vote Alice would have sent, based on his knowledge of what Alice knows.
HashGraph uses digital signatures to prevent undetectable changes to transmitted messages.
HashGraph does not violate the FLP theorem, since it is nondeterministic.
The Hash graph has some similarities to a block chain. To quote the white paper: "The HashGraph consensus algorithm is equivalent to a block chain in which the 'chain' is constantly branching, without any pruning, where no blocks are ever stale, and where each miner is allowed to mine many new blocks per second, without proof-of-work" [30].
Because each node keeps track of the hash graph, there is no need to have voting rounds in HashGraph; each node already knows what all of its peers will vote for and thus consensus is reached purely by analyzing the graph.
The Gossip Protocol
The gossip protocol works like this:

Alice selects a random peer node, say Bob, and sends him everything she knows. She then selects another random node and repeats the process indefinitely.

Bob, on receiving Alice's information, marks this as a gossip event and fills in any gaps in his knowledge from Alice's information. Once done, he continues gossiping with his updated information.
The basic idea behind the gossip protocol is the following: a node wants to share some information with the other nodes in the network. Periodically, it randomly selects a node from the set of nodes and exchanges the information with it. The node that receives the information then randomly selects another node from the set of nodes and exchanges the information, and so on. The information is periodically sent to N targets, where N is the fanout. [45]
The cycle is the number of rounds to spread the information. The fanout is the number of nodes a node gossips with in each cycle.
With a fanout of 1, O(log N) cycles are necessary for the update to reach all the nodes.
In this way, information spreads throughout the network in an exponential fashion. [30]
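The exponential spread described above can be illustrated with a toy push-gossip simulation (a sketch only; the fanout-1 model, seed and node count are illustrative assumptions, not taken from any particular implementation):

```python
import math
import random

def gossip_cycles(n: int, fanout: int = 1, seed: int = 42) -> int:
    """Count cycles until a rumor started by node 0 reaches all n nodes."""
    rng = random.Random(seed)
    informed = {0}
    cycles = 0
    while len(informed) < n:
        cycles += 1
        # every currently informed node pushes the rumor to `fanout` random peers
        for _ in range(len(informed) * fanout):
            informed.add(rng.randrange(n))
    return cycles

# The informed set can at most double each cycle (fanout = 1), so at
# least log2(n) cycles are needed; random collisions add a few more.
print(gossip_cycles(1000), math.ceil(math.log2(1000)))
```

Running this for growing n shows the cycle count tracking log2(n) rather than n, which is the exponential spread the protocol relies on.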
Figure 1: Gossip Protocol Directed Graph
The gossip history can be represented as a directed graph, as in Figure 1.
HashGraph introduces a few important concepts that are used repeatedly in later BFT consensus algorithms: famous witnesses, and strongly seeing.
Ancestors
If an event x1 comes before another event x2, and they are connected by a line, the older event x1 is an ancestor of x2.
If both events were created by the same node, then x1 is a self-ancestor of x2.
Note: The gossip protocol defines an event as being a (self-)ancestor of itself!
Seeing
If an event x1 is an ancestor of x2, then we say that x2 sees x1, as long as the node is not aware of any forks from x1.
So in the absence of forks, all events will see all of their ancestors.
   +--> y
   |
x--+
   |
   +--> z
In the example above, x is an ancestor of both y and z. However, if y and z were created by the same node, they constitute a fork; once a node is aware of this fork, the seeing condition fails, so y cannot see x, and z cannot see x.
It may take time before nodes in the protocol detect the fork. For instance, Bob may create z and y, but share z only with Alice and y only with Charlie. Both Alice and Charlie will eventually learn about the deception, but until that point, Alice will believe that y sees x, and Charlie will believe that z sees x.
This is where the concept of strongly seeing comes in.
Strongly seeing
If a node examines its hash graph and notices that an event z sees an event x, and moreover that it can draw an ancestor relationship (usually via multiple routes) through a supermajority of peer nodes such that a different event from each of those nodes also sees x, then it is said that, according to this node, z strongly sees x.
The following example comes from [30]:
Figure 2: Illustration of StronglySeeing
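The seeing and strongly-seeing definitions above can be sketched in code. The following toy model is an illustration only (the class names and the four-node example are assumptions, not the HashGraph reference implementation), and it omits fork detection for brevity:

```python
from itertools import count

class Event:
    """A toy gossip event: a creator plus zero or more parent events."""
    _ids = count()
    def __init__(self, creator, parents=()):
        self.id = next(Event._ids)
        self.creator = creator
        self.parents = list(parents)

def ancestors(e):
    """All ancestors of e, including e itself (as the protocol defines)."""
    seen, stack, out = set(), [e], []
    while stack:
        ev = stack.pop()
        if ev.id not in seen:
            seen.add(ev.id)
            out.append(ev)
            stack.extend(ev.parents)
    return out

def sees(a, b):
    """a sees b if b is an ancestor of a (fork detection omitted here)."""
    return any(ev.id == b.id for ev in ancestors(a))

def strongly_sees(z, x, n_nodes):
    """z strongly sees x if events created by a supermajority (> 2/3)
    of the nodes lie among z's ancestors and themselves see x."""
    creators = {ev.creator for ev in ancestors(z) if sees(ev, x)}
    return 3 * len(creators) > 2 * n_nodes
```

With four nodes A to D, an event z whose ancestors include x-seeing events from all four creators strongly sees x, whereas an event whose ancestors reach x through only two creators does not.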
The Construct of Gossiping
The main consensus algorithm loop consists of every node (Alice), selecting a random peer node (Bob) and sharing their graph history. Now Alice and Bob have the same graph history.
Alice and Bob both create a new event with the new knowledge they have just learnt from their peer.
Alice repeats this process continuously.
Internal consensus
After a sync, a node will determine the order for as many events as possible, using three procedures. The algorithm uses constant n (the number of nodes) and a small constant value c>2.
in parallel:
    loop
        sync all known events to a random member
    end loop
    loop
        receive a sync
        create a new event
        call divideRounds
        call decideFame
        call findOrder
    end loop
Here we have the Swirlds HashGraph consensus algorithm. Each member runs this in parallel. Each sync brings in new events, which are then added to the hash graph. All known events are then divided into rounds. Then the first events in each round are decided as being famous or not (through purely local Byzantine agreement with virtual voting). Then the total order is found on those events for which enough information is available. If two members independently assign a position in history to an event, they are guaranteed to assign the same position, and guaranteed to never change it, even as more information comes in. Furthermore, each event is eventually assigned such a position, with probability one. [30]
for each event x
    r ← max round of parents of x (or 1 if none exist)
    if x can strongly see more than 2/3 * n round r witnesses
        x.round ← r + 1
    else
        x.round ← r
    x.witness ← (x has no self-parent) OR (x.round > x.selfParent.round)
The above is deemed the divideRounds procedure. As soon as an event x is known, it is assigned a round number x.round, and the boolean value x.witness is calculated, indicating whether it is the first event that a member created in that round. [30]
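A direct transcription of the divideRounds procedure might look as follows. This is a sketch under stated assumptions: the minimal event structure and the injected strongly_sees predicate are illustrative, and events must be supplied in topological order (parents before children):

```python
class Ev:
    """Minimal event: a self-parent and an optional other-parent."""
    def __init__(self, self_parent=None, other_parent=None):
        self.self_parent = self_parent
        self.parents = [p for p in (self_parent, other_parent) if p]
        self.round = None
        self.witness = None

def divide_rounds(events, n, strongly_sees):
    """Assign x.round and x.witness to each event, per the pseudocode."""
    for x in events:
        # r <- max round of parents of x (or 1 if none exist)
        r = max((p.round for p in x.parents), default=1)
        round_r_witnesses = [w for w in events
                             if w.round == r and w.witness]
        # if x can strongly see more than 2/3 * n round r witnesses
        if 3 * sum(strongly_sees(x, w) for w in round_r_witnesses) > 2 * n:
            x.round = r + 1
        else:
            x.round = r
        # witness: first event a member created in that round
        x.witness = (x.self_parent is None) or (x.round > x.self_parent.round)
```

With a trivial strongly_sees that always returns False, every event stays in round 1 and only events with no self-parent are witnesses, matching the base case of the pseudocode.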
for each event x in order from earlier rounds to later
    x.famous ← UNDECIDED
    for each event y in order from earlier rounds to later
        if x.witness and y.witness and y.round > x.round
            d ← y.round - x.round
            s ← the set of witness events in round y.round - 1 that y can strongly see
            v ← majority vote in s (is TRUE for a tie)
            t ← number of events in s with a vote of v
            if d = 1 // first round of the election
                y.vote ← can y see x ?
            else if d mod c > 0 // this is a normal round
                if t > 2*n/3 // if supermajority, then decide
                    x.famous ← v
                    y.vote ← v
                    break // y loop
                else // else, just vote
                    y.vote ← v
            else if t > 2*n/3 // this is a coin round
                y.vote ← v
            else // else flip a coin
                y.vote ← middle bit of y.signature
This is the decideFame procedure. For each witness event (i.e., an event x where x.witness is true), decide whether it is famous (i.e., assign a boolean to x.famous). This decision is done by a Byzantine agreement protocol based on virtual voting. Each member runs it locally, on their own copy of the hashgraph, with no additional communication. It treats the events in the hashgraph as if they were sending votes to each other, though the calculation is purely local to a member’s computer. The member assigns votes to the witnesses of each round, for several rounds, until more than 2/3 of the population agrees. [30]
Criticisms
Several criticisms of HashGraph have been raised, and an attempt to address some of them has been presented [31]:
 The HashGraph protocol is patented and is not open source.
 In addition, the HashGraph white paper assumes that n, the number of nodes in the network, is constant. In practice, n can increase, but performance likely degrades badly as n becomes large. [32]
 HashGraph is not as "fair" as claimed in their paper, with at least one attack being proposed. [33]
SINTRA
SINTRA is a Secure Intrusion-Tolerant Replication Architecture used for coordination in asynchronous networks subject to Byzantine faults. It consists of a collection of protocols, implemented in Java, that provide secure replication and coordination among a group of servers connected by a wide-area network, such as the Internet. For a group consisting of n servers, it tolerates up to $t<n/3$ servers failing in arbitrary, malicious ways, which is optimal for the given model. The servers are connected only by asynchronous point-to-point communication links. Thus, SINTRA automatically tolerates timing failures as well as attacks that exploit timing. The SINTRA group model is static, which means that failed servers must be recovered by mechanisms outside of SINTRA, and the group must be initialized by a trusted process.
The protocols exploit randomization, which is needed to solve Byzantine agreement in such asynchronous distributed systems. Randomization is provided by a threshold-cryptographic pseudorandom generator, a coin-tossing protocol based on the Diffie-Hellman problem. Threshold cryptography is a fundamental concept in SINTRA, as it allows the group to perform a common cryptographic operation for which the secret key is shared among the servers in such a way that no single server or small coalition of corrupted servers can obtain useful information about it. SINTRA provides threshold-cryptographic schemes for digital signatures, public-key encryption, and unpredictable pseudorandom number generation (coin-tossing). It contains broadcast primitives for reliable and consistent broadcasts, which provide agreement on individual messages sent by distinguished senders. However, these primitives cannot guarantee a total order for a stream of multiple messages delivered by the system, which is needed to build fault-tolerant services using the state machine replication paradigm. This is the problem of atomic broadcast and requires more expensive protocols based on Byzantine agreement. SINTRA provides multiple randomized Byzantine agreement protocols, for binary and multivalued agreement, and implements an atomic broadcast channel on top of agreement. An atomic broadcast that also maintains a causal order in the presence of Byzantine faults is provided by the secure causal atomic broadcast channel. [51]
SINTRA is designed in a modular way as shown in Figure 3. Modularity greatly simplifies the construction and analysis of the complex protocols needed to tolerate Byzantine faults.
Figure 3: The Design of SINTRA
Permissionless Byzantine Fault Tolerant Protocols (Part 2)
 HoneyBadgerBFT
 Stellar Consensus Protocol
 LinBFT
 Algorand
 Thunderella
 Snowflake to Avalanche
 PARSEC
 Democratic BFT
HoneyBadgerBFT
HoneyBadgerBFT was released in November 2016 and is seen as the first practical asynchronous BFT consensus algorithm. It was designed with cryptocurrencies in mind, where bandwidth is considered scarce but an abundance of CPU power is available. Thus, the protocol implements public-private key encryption to increase the efficiency of establishing consensus. The protocol works with a fixed set of servers to run the consensus; however, this leads to centralization and allows an attacker to specifically target these servers. [9]
In its threshold encryption scheme, any one party can encrypt a message using a master public key, and it requires f+1 correct nodes to compute and reveal decryption shares for a ciphertext before the plaintext can be recovered.
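The f+1 threshold property can be illustrated with toy Shamir secret sharing over a prime field. This is a sketch only: HoneyBadgerBFT's actual threshold encryption is a pairing-based scheme, not this construction, and the prime and polynomial coefficients below are arbitrary demo values.

```python
P = 2_147_483_647  # prime modulus for the demo field

def make_shares(secret, coeffs, n):
    """Evaluate the degree-f polynomial secret + c1*x + c2*x^2 + ...
    at x = 1..n. Any f+1 of the n shares recover the secret; f do not."""
    poly = [secret] + list(coeffs)  # len(coeffs) == f
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x = 0 to get the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With f = 2 and five nodes, shares from any three nodes reconstruct the secret, mirroring the f+1 decryption shares needed above; two shares interpolate the wrong polynomial and reveal nothing about the secret.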
The work of HoneyBadgerBFT is closely related to SINTRA, which, as mentioned before, is a system implementation based on the asynchronous atomic broadcast protocol from Cachin et al. [41] This protocol consists of a reduction from Atomic Broadcast Channel (ABC) to Asynchronous Common Subset (ACS), as well as a reduction from ACS to Multi-Value Validated Agreement (MVBA).
HoneyBadger offers a novel reduction from ABC to ACS that provides better efficiency (by an O(N) factor) through batching, while using threshold encryption to preserve censorship resilience. Better efficiency is also obtained by cherry-picking improved instantiations of subcomponents. For example, the expensive MVBA is circumvented by using an alternative ACS along with an efficient reliable broadcast (RBC). [6]
Stellar Consensus Protocol
Stellar Consensus Protocol (SCP) is an asynchronous protocol proposed by David Mazieres. It is considered a global consensus protocol consisting of a nomination protocol and a ballot protocol, and is said to be BFT by bringing with it the concepts of quorum slices and federated Byzantine fault tolerance. [11]
Each participant forms a quorum of other users, thus creating a trust hierarchy, which requires complex trust decisions. [9]
Initially, the nomination protocol is run. During this phase, new values, called candidate values, are proposed for agreement. Each node receiving these values votes for a single value among them. Eventually, this results in unanimously selected candidate values for that slot.
After successful execution of the nomination protocol, the nodes deploy the ballot protocol. This involves federated voting to either commit or abort the values resulting from the nomination protocol, which results in externalizing the ballot for the current slot; aborted ballots are declared irrelevant. However, stuck states can arise, in which nodes cannot conclude whether to abort or commit a value. This situation is avoided by moving the value to a higher-valued ballot and considering it in a new ballot protocol execution, which helps in the case where a node believes that the stuck ballot was committed. SCP thus assures avoidance and management of stuck states, and thereby provides liveness.
The concept of quorum slices in SCP provides asymptotic security and flexible trust, making it more acceptable than earlier consensus algorithms utilizing Federated BFT, like the Ripple consensus protocol. [14] Here, the user is given more independence in deciding whom to trust. [15]
The SCP protocol claims to be free of blocked states and to provide decentralized control, asymptotic security, flexible trust and low latency. However, it does not guarantee safety at all times: if a user node chooses an inefficient quorum slice, security is not guaranteed, and in the event of partitions or misbehaving nodes, SCP halts progress of the network until consensus can be reached.
LinBFT
LinBFT is a Byzantine fault tolerance protocol for block chain systems that achieves an amortized communication volume per block of O(n) under reasonable conditions (where n is the number of participants), while satisfying deterministic guarantees on safety and liveness. It satisfies liveness in a partially synchronous network.
LinBFT cuts down its O(n^{4}) complexity by implementing changes, each saving a factor of O(n): linear view change, threshold signatures and verifiable random functions.
This is clearly optimal, in the sense that disseminating a block already takes O(n) transmissions.
LinBFT is designed to be implemented for permissionless, public block chain systems and takes into account anonymous participants without a public-key infrastructure, PoS, rotating leaders and a dynamic participant set. [16]
For instance, participants can be anonymous, without a centralized public key infrastructure (PKI): they need not exchange public keys among themselves, nor participate in a distributed key generation (DKG) protocol required to create threshold signatures, both of which are communication-heavy processes.
LinBFT is compatible with proof-of-stake, which counters Sybil attacks and deters dishonest behavior through slashing. [16]
Algorand
The Algorand white paper was released in May 2017. Algorand is a synchronous BFT consensus mechanism, where blocks get added at a minimum rate. [25]
Algorand allows participants to privately check whether they are chosen for consensus participation and requires only one message per user, thus limiting possible attacks. [9]
Algorand scales up to 500 000 users by employing Verifiable Random Functions, which are pseudorandom functions able to provide verifiable proofs that the output of said function is correct. [9]
It introduces the concept of a concrete coin. Most of these BFT algorithms require some sort of randomness oracle, but all nodes need to see the same value when the oracle is consulted. This had previously been achieved through the common coin idea; the concrete coin uses a much simpler approach, but only returns a binary value. [25]
Thunderella
Thunderella implements an asynchronous strategy, with a synchronous strategy used as a fallback in the event of a malfunction [26]; it thus achieves both robustness and speed.
It can be applied in permissionless networks using proof-of-work. Network robustness and "instant confirmations" require both 75% of the network to be honest and the presence of a leader node.
Snowflake to Avalanche
This consensus protocol was first seen in the white paper entitled "Snowflake to Avalanche". The paper outlines four protocols that are building blocks forming a protocol family. These leaderless Byzantine fault tolerance protocols are built on a metastable mechanism and are referred to as Slush, Snowflake, Snowball and Avalanche.
The protocols published by Team Rocket differ from the traditional consensus protocols and the Nakamoto consensus protocols by not requiring an elected leader, but instead the protocol simply guides all the nodes to consensus.
These four protocols are described as a new family of protocols due to this concept of metastability: a means to establish consensus by guiding all nodes towards an emerging consensus without requiring leaders, while still maintaining the same level of security and achieving speeds that exceed those of current protocols.
This is achieved through the formation of 'sub-quorums', which are small randomized samples from nodes on the network. This allows for greater throughputs and sees parallel consensuses running before they merge to form the overarching consensus, which can be seen as similar in nature to the gossip protocol.
With regards to safety, throughput (the number of transactions per second) and scalability (the number of people supported by the network) Slush, Snowflake, Snowball and Avalanche seem to be able to achieve all three. They impart a probabilistic safety guarantee in the presence of Byzantine adversaries and achieve a high throughput and scalability due to their concurrent nature. A synchronous network is assumed.
This is the current problem facing the design of BFT protocols: a system can be very fast when a small number of nodes are active, since there are fewer decisions to make, but when there are many users and an increase in transactions, that performance cannot be maintained.
Unlike the PoW implementation, which requires constant active participation from the miners, Avalanche can function even when nodes are dormant.
While traditional consensus protocols require O(n^{2}) communication, their communication complexity ranges from O(kn log n) to O(kn) for some security parameter k<<n. In a sense, Team Rocket highlight that the communication complexity of their protocols is less intensive than that of O(n^{2}) communications, thus making these protocols faster and more scalable.
To backtrack a bit, Big O notation is used in computer science to describe the performance or complexity of an algorithm. It describes the worst-case scenario and can be used to describe the execution time required by an algorithm [49]. In the case of consensus algorithms, O describes a finite expected number of steps or operations [50]. For example, O(1) describes an algorithm that will always execute in the same time regardless of the size of the input data set; O(n) describes an algorithm whose performance grows linearly, in direct proportion to the size of the input data set; and O(n^{2}) represents an algorithm whose performance is directly proportional to the square of the size of the input data set.
The reason for this is that O(n^{2}) means the rate of growth of the function is determined by n^{2}, where n is the number of people on the network. Thus, adding a person increases the time taken to disseminate the information on the network quadratically, since traditional consensus protocols require everyone to communicate with one another, making it a laborious process. [18]
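The difference is easy to see numerically. The sketch below compares the all-to-all message count with the O(k·n·log n) bound quoted above (k = 10 is an arbitrary illustrative security parameter, not a value from the paper):

```python
import math

def all_to_all(n: int) -> int:
    """Traditional consensus: everyone talks to everyone, O(n^2) messages."""
    return n * n

def sampled(n: int, k: int = 10) -> int:
    """Avalanche-style bound: O(k * n * log n) messages for parameter k << n."""
    return int(k * n * math.log2(n))

# Doubling n quadruples all-to-all traffic, but only slightly more
# than doubles the sampled-gossip traffic.
for n in (1_000, 2_000, 4_000):
    print(n, all_to_all(n), sampled(n))
```

At n = 4 000 the all-to-all count is already two orders of magnitude larger than the sampled bound, which is why the sub-quorum approach scales further.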
Despite assuming a synchronous network, which is susceptible to the DoS attacks, this new family of protocols "reaches a level of security that is simply good enough while surging forward with other advancements". [18]
PARSEC
PARSEC is a Byzantine fault tolerant consensus algorithm possessing weak synchrony assumptions (highly asynchronous, assuming random delays with finite expected value).
Similar to HashGraph, it has no leaders, no round robin, no proof of work, and reaches eventual consensus with probability one. It differs from HashGraph in that it provides high speed in both the absence and the presence of faults. It thus avoids the structure of delegated PoS (DPoS), which requires a trusted set of leaders, and does not have a round robin (where a permissioned set of miners sign each block).
It is fully open, unlike HashGraph, which is patented and closed source. The reference implementation of PARSEC, written in Rust, was released a few weeks after the white paper. ([1], [37])
The general problem of reaching Byzantine agreement on any value is reduced to the simple problem of reaching binary Byzantine agreement on the nodes participating in each decision. This has allowed PARSEC to reuse the binary Byzantine agreement protocol (Signature-Free Asynchronous Byzantine Consensus) after adapting it to the gossip protocol. [5]
Similar to HoneyBadgerBFT, this protocol is composed by combining interesting ideas presented in the literature.
Like HashGraph and Avalanche, a gossip protocol is used to allow efficient communication between nodes. [1]
Finally, the need for a trusted leader or a trusted setup phase implied in Mostefaoui et al. [2] is removed by porting the key ideas to an asynchronous setting [3].
The network is composed of N instances of the algorithm communicating via randomly synchronous connections.
Due to random synchrony, all users can reach an agreement on what is going on, but there is no guarantee for nodes on the timing of when they should receive messages, and up to t Byzantine (arbitrary) failures are allowed, where 3t < N. The instances where no failures have occurred are deemed correct or honest, while the failed instances are termed faulty or malicious. Since a Byzantine failure model allows for malicious behavior, any set of instances containing more than 2N/3 of them is referred to as a supermajority.
When a node receives a gossip request, it creates a new event and sends a response back (in HashGraph, the response was optional). Each gossip event contains [35]:
 The data being transmitted
 The selfparent (the hash of another gossip event created by the same node)
 The otherparent (a hash of another gossip event created by a different node)
 The cause for creation, which can either be a Request for information, a Response to another node's request, or an Observation. An observation is when a node creates a gossip event to record an observation that the node made itself.
 Creator ID (public key)
 Signature – signing the above information.
The self-parent and other-parent prevent tampering because they are signed and related to other gossip events [35].
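The event fields listed above can be sketched as a small data structure. This is an illustration only: the field names follow the list above, but the SHA-256 digest standing in for a real signature is an assumption, not the PARSEC wire format.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import hashlib
import json

@dataclass(frozen=True)
class GossipEvent:
    data: str                    # the data being transmitted
    self_parent: Optional[str]   # hash of this node's previous event
    other_parent: Optional[str]  # hash of an event by a different node
    cause: str                   # "Request", "Response" or "Observation"
    creator_id: str              # creator's public key (placeholder)

    def digest(self) -> str:
        """Stand-in for the signature: any change to the fields above
        (including the parent hashes) yields a different digest."""
        body = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()
```

Because each event's digest covers its parents' digests, altering any past event changes every descendant's digest, which is the tamper-evidence property described above.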
As with HashGraph, it is difficult for adversaries to interfere with the consensus algorithm because all voting is virtual and done without sharing details of votes cast; each node figures out what other nodes would have voted based on their copy of the gossip graph.
PARSEC also uses the concept of a concrete coin from Algorand, which is used to break ties, particularly in cases where an adversary is carefully managing communication between nodes in order to maintain a deadlock on votes.
First, nodes try to converge on a 'true' result for the set of results. If this is not achieved, they move on to step 2, which is trying to converge on a 'false' result. If consensus still cannot be reached, a coin flip is made and they go back to step 1 in another voting round.
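As a toy illustration of this loop (a sketch only, not the actual PARSEC binary agreement protocol, which is considerably more involved): try 'true', then 'false', then fall back to a shared coin.

```python
import random

def binary_round(votes, rng):
    """One round: decide if more than 2/3 of votes agree on a value,
    checking True first, then False; otherwise every undecided node
    adopts a shared coin flip for the next round."""
    n = len(votes)
    for target in (True, False):          # step 1, then step 2
        if 3 * sum(v == target for v in votes) > 2 * n:
            return target, votes          # supermajority reached
    coin = rng.random() < 0.5             # shared coin breaks the deadlock
    return None, [coin] * n

def agree(votes, rng):
    """Run rounds until a value is decided."""
    decided = None
    while decided is None:
        decided, votes = binary_round(votes, rng)
    return decided
```

A 7-of-10 majority decides immediately; a 5/5 deadlock returns no decision, and the coin makes all votes identical so the following round decides.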
Democratic BFT
This is a deterministic Byzantine consensus algorithm that relies on a new weak coordinator. The protocol is implemented in the Red Belly Block chain and is said to achieve 30 000 transactions per second on Amazon Cloud trials [36]. Through coupling with an optimized variant of the reduction of multivalued to binary consensus from Ben-Or et al., the Democratic BFT (DBFT) consensus algorithm was generated, which terminates in four message delays in the good case, when all non-faulty processes propose the same value. [17]
The term weak coordinator describes the ability of the algorithm to terminate in the presence of a faulty or slow coordinator, unlike previous algorithms, which cannot terminate in that case. The fundamental idea here is to allow processes to complete asynchronous rounds as soon as they receive a threshold of messages, instead of having to wait for a message from a coordinator that may be slow.
The resulting algorithm assumes partial synchrony, is resilience optimal, time optimal and does not require signatures.
To move past the impossibility of solving consensus in asynchronous message-passing systems where processes can be faulty or Byzantine, the technique of randomization or additional synchrony is adopted.
Randomized algorithms can use per-process "local" coins or a shared common coin to solve consensus probabilistically among n processes despite $t<n/3$ Byzantine processes. When based on local coins, the existing algorithms converge in O(n^{2.5}) expected time.
A recent randomized algorithm that does not use signatures solves binary consensus in O(1) expected time under a fair scheduler.
To solve the consensus problem deterministically and avoid the use of a common coin, researchers have assumed partial or eventual synchrony. These solutions require a unique non-faulty coordinator process, referred to as the leader. There are both advantages and disadvantages to this technique. The advantage is that if the coordinator is non-faulty and messages are delivered in a timely manner in an asynchronous round, then the coordinator broadcasts its proposal to all processes and this value is decided after a constant number of message delays. However, a faulty coordinator can dramatically impact the algorithm's performance by leveraging the power it has in a round and imposing its value on all; non-faulty processes then have no choice but to decide nothing in this round.
This protocol uses a weak coordinator, which allows the introduction of a new deterministic Byzantine consensus algorithm that is time optimal, resilience optimal and does not require the use of signatures. Unlike the classic strong coordinator, the weak coordinator does not impose its value. It allows non-faulty processes to decide a value quickly, without the coordinator's help, while helping the algorithm to terminate if non-faulty processes know that they proposed distinct values that might all be decided. In addition, the presence of a weak coordinator allows rounds to be executed optimistically, without waiting for a specific message. This is unlike classic BFT algorithms, which have to wait for a particular message from their coordinator and occasionally have to recover from a slow network or faulty coordinator.
With regard to the problem of a slow or Byzantine coordinator, the weak coordinator helps agreement by contributing a value while still allowing termination in a constant number of message delays. It is thus unlike the classic coordinator or the eventual leader, which cannot be implemented in the Binary Byzantine Consensus Algorithm, BAMP_{n,t}[t<n/3].
Validation of the protocol was conducted similarly to that of the HoneyBadger block chain, using "Coin", the randomization algorithm from Mostefaoui et al. [38]. Using 100 Amazon Virtual Machines located in five data centers on different continents, it was shown that the DBFT algorithm outperforms "Coin", which is known to terminate in O(1) rounds in expectation. In addition, since Byzantine behaviors have been seen to severely affect the performance of strong-coordinator-based consensus, four different Byzantine attacks were tested in the validation.
Summary of Findings
Here is a table highlighting characteristics of the above-mentioned BFT protocols. Asymptotic Security, Permissionless Protocol, Timing Assumptions, Decentralized Control, Low Latency and Flexible Trust form part of the value system.
 Asymptotic Security: This depends only on digital signatures (and hash functions) for security
 Permissionless Protocol: This allows anybody to create an address and begin interacting with the protocol.
 Timing Assumptions: Please see Many Forms of Timing Assumptions (Degrees of Synchrony)
 Decentralized Control: Consensus is achieved and defended by protecting the identity of that node until their job is done, through a leaderless nodes.
 Low Latency: This describes a computer network that is optimized to process a very high volume of data messages with minimal delay. Flexible Trust: Where users have the freedom to trust any combinations of parties they see fit.
Important characteristics of each protocol are summarized in the table below.
| Protocol | Permissionless Protocol | Timing Assumptions | Decentralized Control | Low Latency | Flexible Trust | Asymptotic Security |
| --- | --- | --- | --- | --- | --- | --- |
| Hyperledger Fabric (HLF) | | Partially synchronous | ✓ | ✓ | | |
| Tendermint | | Partially synchronous | ✓ | ✓ | ✓ | |
| Paxos | ✓ | Partially synchronous | ✓ | ✓ | ✓ | |
| Chandra-Toueg | ✓ | Partially synchronous | ✓ | ✓ | | |
| Raft | ✓ | Weakly synchronous | ✓ | ✓ | ✓ | |
| HashGraph | ✓ | Asynchronous | ✓ | ✓ | ✓ | |
| SINTRA | ✓ | Asynchronous | ✓ | ✓ | | |
| HoneyBadgerBFT | ✓ | Asynchronous | ✓ | ✓ | ✓ | ✓ |
| Stellar Consensus Protocol | ✓ | Asynchronous | ✓ | ✓ | ✓ | ✓ |
| LinBFT | ✓ | Partially synchronous | ✓ | ✓ | | |
| Algorand | ✓ | Synchronous | ✓ | ✓ | ✓ | |
| Thunderella | ✓ | Synchronous | ✓ | ✓ | ✓ | |
| Avalanche | ✓ | Synchronous | ✓ | ✓ | ✓ | |
| PARSEC | ✓ | Weakly synchronous | ✓ | ✓ | | |
| Democratic BFT | ✓ | Partially synchronous | ✓ | ✓ | ✓ | |
BFT consensus protocols have been considered here as a means to disseminate and validate information. Whether Schnorr multisignatures can perform the same function in validating information, through the action of signing, will form part of the next review.
References
[1] Protocol for Asynchronous, Reliable, Secure and Efficient Consensus (PARSEC) White Paper, Chevalier et al., http://docs.maidsafe.net/Whitepapers/pdf/PARSEC.pdf, Date accessed: 2018-08-30.
[2] Signature-Free Asynchronous Byzantine Consensus with t < n/3 and O(n²) Messages, Mostefaoui et al., https://hal.inria.fr/hal00944019v2/document, Date accessed: 2018-08-30.
[3] Byzantine Agreement Made Trivial, Micali, https://people.csail.mit.edu/silvio/Selected%20Scientific%20Papers/Distributed%20Computation/BYZANTYNE%20AGREEMENT%20MADE%20TRIVIAL.pdf, Date accessed: 2018-08-30.
[4] Gossip Protocol, Wikipedia, https://en.wikipedia.org/wiki/Gossip_protocol, Date accessed: 2018-09-07.
[5] Project Spotlight: Maidsafe and PARSEC Part 1, https://medium.com/@flatoutcrypto/projectspotlightmaidsafeandparsecpart14830cec8d9e3, Date accessed: 2018-08-30.
[6] The Honey Badger of BFT Protocols White Paper, Miller et al., https://eprint.iacr.org/2016/199.pdf, Date accessed: 2018-08-30.
[7] Blockchain, Cryptography and Consensus, Cachin, https://cachin.com/cc/talks/20170622blockchainice.pdf, Date accessed: 2018-09-04.
[8] Comments from Medium: I don't see how it's plausible for parallel forks of the hash chain to be finalized concurrently, https://medium.com/@shelby_78386/idontseehowitsplausibleforparallelforksofthehashchaintobefinalizedconcurrentlycb57afe9dd0a, Date accessed: 2018-09-14.
[9] High-Performance Consensus Mechanisms for Blockchains, Rusch, http://conferences.inf.ed.ac.uk/EuroDW2018/papers/eurodw18Rusch.pdf, Date accessed: 2018-08-30.
[10] Untangling Blockchain: A Data Processing View of Blockchain Systems, Dinh et al., https://arxiv.org/pdf/1708.05665.pdf, Date accessed: 2018-08-30.
[11] Survey of Consensus Protocols of Blockchain Applications, Sankar et al., https://ieeexplore.ieee.org/document/8014672/, Date accessed: 2018-08-30.
[12] The Stellar Consensus Protocol: A Federated Model for Internet-level Consensus, Mazières, https://www.stellar.org/papers/stellarconsensusprotocol.pdf, Date accessed: 2018-08-30.
[13] Architecture of the Hyperledger Blockchain Fabric, Cachin, https://www.zurich.ibm.com/dccl/papers/cachin_dccl.pdf, Date accessed: 2018-09-16.
[14] The Ripple Protocol Consensus Algorithm, Schwartz et al., https://ripple.com/files/ripple_consensus_whitepaper.pdf, Date accessed: 2018-09-13.
[15] Tendermint: Consensus without Mining, Kwon, https://tendermint.com/static/docs/tendermint.pdf, Date accessed: 2018-09-20.
[16] LinBFT: Linear-Communication Byzantine Fault Tolerance for Public Blockchains, Yang, https://arxiv.org/pdf/1807.01829.pdf, Date accessed: 2018-09-20.
[17] DBFT: Efficient Byzantine Consensus with a Weak Coordinator and its Application to Consortium Blockchains, Crain et al., http://gramoli.redbellyblockchain.io/web/doc/pubs/DBFTpreprint.pdf, Date accessed: 2018-09-30.
[18] Protocol Spotlight: Avalanche Part 1, https://flatoutcrypto.com/home/avalancheprotocol, Date accessed: 2018-09-09.
[19] Breaking down the Blockchain Scalability Trilemma, Asolo, https://bitcoinist.com/breakingdownthescalabilitytrilemma/, Date accessed: 2018-10-01.
[20] Byzantine Fault Tolerance, Demicoli, https://blog.cdemi.io/byzantinefaulttolerance/, Date accessed: 2018-10-01.
[21] Consensus Mechanisms, Wikipedia, https://en.wikipedia.org/wiki/Consensus_(computer_science), Date accessed: 2018-10-01.
[22] Impossibility of Distributed Consensus with One Faulty Process, Fischer et al., https://groups.csail.mit.edu/tds/papers/Lynch/jacm85.pdf, Date accessed: 2018-09-30.
[23] A Brief Tour of FLP Impossibility, https://www.thepapertrail.org/post/20080813abrieftourofflpimpossibility/, Date accessed: 2018-09-30.
[24] Demystifying HashGraph: Benefits and Challenges, Jia, https://hackernoon.com/demystifyinghashgraphbenefitsandchallengesd605e5c0cee5, Date accessed: 2018-09-09.
[25] Algorand White Paper, Chen and Micali, https://arxiv.org/pdf/1607.01341.pdf, Date accessed: 2018-09-13.
[26] Thunderella: Blockchains with Optimistic Instant Confirmation, Pass and Shi, https://eprint.iacr.org/2017/913.pdf, Date accessed: 2018-09-13.
[27] Chandra-Toueg Consensus Algorithm, Wikipedia, https://en.wikipedia.org/wiki/Chandra%E2%80%93Toueg_consensus_algorithm, Date accessed: 2018-09-13.
[28] Raft, Wikipedia, https://en.wikipedia.org/wiki/Raft_(computer_science), Date accessed: 2018-09-13.
[29] Paxos, Wikipedia, https://en.wikipedia.org/wiki/Paxos_(computer_science), Date accessed: 2018-10-01.
[30] The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance, Baird, https://www.swirlds.com/downloads/SWIRLDSTR201601.pdf, Date accessed: 2018-09-30.
[31] Swirlds and Sybil Attacks, Baird, http://www.swirlds.com/downloads/SwirldsandSybilAttacks.pdf, Date accessed: 2018-09-30.
[32] Demystifying HashGraph: Benefits and Challenges, Jia, https://hackernoon.com/demystifyinghashgraphbenefitsandchallengesd605e5c0cee5, Date accessed: 2018-09-30.
[33] HashGraph: A White Paper Review, Graczyk, https://medium.com/opentoken/hashgraphawhitepaperreviewf7dfe2b24647, Date accessed: 2018-09-30.
[34] Tendermint Explained: Bringing BFT-based PoS to the Public Blockchain Domain, https://blog.cosmos.network/tendermintexplainedbringingbftbasedpostothepublicblockchaindomainf22e274a0fdb, Date accessed: 2018-09-30.
[35] Project Spotlight: Maidsafe and PARSEC Part 2, https://flatoutcrypto.com/home/maidsafeparsecexplanationpt2, Date accessed: 2018-09-18.
[36] Red Belly Blockchain, https://www.ccn.com/tag/redbellyblockchain/, Date accessed: 2018-10-10.
[37] Protocol for Asynchronous, Reliable, Secure and Efficient Consensus, https://github.com/maidsafe/parsec, Date accessed: 2018-10-22.
[38] Signature-Free Asynchronous Byzantine Consensus with t < n/3 and O(n²) Messages, https://hal.inria.fr/hal00944019v2/document, Date accessed: 2018-10-22.
[39] Byzantine Fault Tolerance, Wikipedia, https://en.wikipedia.org/wiki/Byzantine_fault_tolerance, Date accessed: 2018-10-22.
[40] Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults, Clement et al., https://www.usenix.org/legacy/event/nsdi09/tech/full_papers/clement/clement.pdf, Date accessed: 2018-10-22.
[41] Secure and Efficient Asynchronous Broadcast Protocols, Cachin et al., https://www.shoup.net/papers/ckps.pdf, Date accessed: 2018-10-22.
[42] Asynchronous Secure Computations with Optimal Resilience, Ben-Or et al., https://dl.acm.org/citation.cfm?id=198088, Date accessed: 2018-10-22.
[43] Asynchronous Secure Computations with Optimal Resilience, Ben-Or et al., https://dl.acm.org/citation.cfm?id=198088, Date accessed: 2018-10-22.
[44] Tendermint Peer Discovery, https://github.com/tendermint/tendermint/blob/master/docs/spec/p2p/node.md, Date accessed: 2018-10-22.
[45] Just My Thoughts: Introduction to Gossip, https://managementfromscratch.wordpress.com/2016/04/01/introductiontogossip/, Date accessed: 2018-10-22.
[46] Atomic Broadcast, Wikipedia, https://en.wikipedia.org/wiki/Atomic_broadcast, Date accessed: 2018-10-22.
[47] Liveness, Wikipedia, https://en.wikipedia.org/wiki/Liveness, Date accessed: 2018-10-22.
[48] Stellar Consensus Protocol Developer Guides, https://www.stellar.org/developers/guides/concepts/scp.html, Date accessed: 2018-10-22.
[49] A Beginner's Guide to Big O Notation, https://robbell.net/2009/06/abeginnersguidetobigonotation/, Date accessed: 2018-10-22.
[50] Fast Randomized Consensus using Shared Memory, Aspnes et al., http://www.cs.yale.edu/homes/aspnes/papers/jalg90.pdf, Date accessed: 2018-10-22.
[51] Secure Intrusion-tolerant Replication on the Internet, Cachin et al., https://cachin.com/cc/papers/sintra.pdf, Date accessed: 2018-10-22.
Appendix
 Terminology
 Consensus
 Binary Consensus
 Byzantine Fault Tolerance
 Practical Byzantine Fault Tolerant Variants
 Deterministic and Non-deterministic Protocols
 Scalability-Performance Trade-off
 Many Forms of Timing Assumptions (Degrees of Synchrony)
 The Problem with Timing Assumptions
 The FLP Impossibility
 Randomized Agreement
Terminology
In order to gain a full understanding of the field of consensus mechanisms, specifically BFT consensus mechanisms, certain terms and concepts need to be defined and fleshed out.
Consensus
Distributed agents (these could be computers, generals coordinating an attack, or sensors in a nuclear plant) that communicate via a network (be it digital, courier or mechanical) need to agree on facts in order to act as a coordinated whole.
When all non-faulty agents agree on a given fact, the network is said to be in consensus; that is, consensus is achieved when all non-faulty agents agree on a prescribed fact.
There are a host of formal requirements which a consensus protocol may adhere to; these include:
- Agreement: All correct processes agree on the same fact.
- Weak Validity: For all correct processes, the output must be the input of some correct process.
- Strong Validity: If all correct processes receive the same input value, they must all output that value.
- Termination: All processes must eventually decide on an output value. [21]
Binary Consensus
A unique case of the consensus problem, referred to as binary consensus, restricts the input, and hence the output domain, to a single binary digit {0,1}.
When the input domain is large relative to the number of processes (for instance, an input set of all the natural numbers), it can be shown that consensus is impossible in a synchronous message-passing model. [21]
Byzantine Fault Tolerance
Byzantine failures are considered the most general and most difficult class of failures among the failure modes. The so-called fail-stop failure mode occupies the simplest end of the spectrum. Whereas the fail-stop failure mode simply means that the only way to fail is a node crash, detected by other nodes, Byzantine failures imply no restrictions, which means that the failed node can generate arbitrary data, pretending to be a correct one. Thus, Byzantine failures can confuse failure-detection systems, which makes fault tolerance difficult. [39]
Several papers in the literature contextualize the problem using generals at different camps, situated outside the enemy castle, needing to decide whether or not to attack. A consensus algorithm that fails might see one general attack while all the others stay back, leaving the first general vulnerable.
One key property of a blockchain system is that the nodes do not trust each other, meaning that some may behave in a Byzantine manner. The consensus protocol must therefore tolerate Byzantine failures.
A network is Byzantine Fault Tolerant when it can provide service and reach a consensus despite faults or failures of the system. The processes use a protocol for consensus or atomic broadcast (a broadcast where all correct processes in a system of multiple processes receive the same set of messages in the same order; that is, the same sequence of messages [46]) to agree on a common sequence of operations to execute. [20]
The literature on distributed consensus is vast, and there are many variants of previously proposed protocols being developed for blockchains. They can be largely classified along a spectrum. One extreme consists of purely computation-based protocols, which use proof of computation to randomly select a node that single-handedly decides the next operation. The other extreme is purely communication-based protocols, in which nodes have equal votes and go through multiple rounds of communication to reach consensus; Practical Byzantine Fault Tolerance (PBFT), a replication algorithm designed to be BFT, is the prime example. [10]
For systems with n nodes, of which f are Byzantine, it has been shown that no algorithm exists that solves the consensus problem when f ≥ n/3; in other words, consensus requires n ≥ 3f + 1. [21]
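The n ≥ 3f + 1 bound can be sketched in a couple of lines (a minimal illustration of the arithmetic, not of any particular protocol):

```python
def max_byzantine(n: int) -> int:
    """Largest number of Byzantine nodes f that a network of n nodes can
    tolerate, from the requirement n >= 3f + 1."""
    return (n - 1) // 3

# A network of 4 nodes tolerates 1 Byzantine node; 100 nodes tolerate 33.
assert max_byzantine(4) == 1
assert max_byzantine(100) == 33
```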
So how, then, does the Bitcoin protocol get away with needing only 51% honest nodes to reach consensus?
Strictly speaking, Bitcoin is not a BFT consensus mechanism, because there is never absolute finality in the Bitcoin ledger; there is always a chance (however small) that someone can 51%-attack the network and rewrite the entire history. Bitcoin's consensus is probabilistic, rather than deterministic.
Practical Byzantine Fault Tolerant Variants
PoW suffers from non-finality, that is, a block appended to a blockchain is not confirmed until it is extended by many other blocks. Even then, its existence in the blockchain is only probabilistic. For example, eclipse attacks on Bitcoin exploit this probabilistic guarantee to allow double spending. In contrast, the original PBFT protocol is deterministic. [10]
Deterministic and Non-deterministic Protocols
Deterministic, bounded Byzantine agreement relies on consensus being finalized for each epoch before moving to the next one, ensuring that there is some safety about a consensus reference point prior to continuing. If, instead, an unbounded number of consensus agreements is allowed within the same epoch, then there is no overall consensus reference point with which to declare finality, and thus safety is compromised. [8]
For nondeterministic or probabilistic protocols, the probability that an honest node is undecided after r rounds approaches zero as r approaches infinity.
Nondeterministic protocols which solve consensus under the purely asynchronous case potentially rely on random oracles and generally incur high message complexity overhead, as they depend on reliable broadcasting for all communication.
Protocols like HoneyBadger BFT fall into this class of nondeterministic protocols under asynchrony. Normally, they require three instances of reliable broadcast for a single round of communication. [34]
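The geometric decay of the undecided probability can be illustrated with a toy simulation (illustrative only; `p_decide` stands in for the per-round agreement probability of a common-coin protocol):

```python
import random

def rounds_to_decide(rng: random.Random, p_decide: float = 0.5) -> int:
    """Rounds until an undecided node terminates, when each round of a
    common-coin protocol reaches agreement with probability p_decide."""
    r = 1
    while rng.random() >= p_decide:
        r += 1
    return r

rng = random.Random(42)
samples = [rounds_to_decide(rng) for _ in range(10_000)]
average = sum(samples) / len(samples)
# P(undecided after r rounds) = (1 - p_decide)**r, which goes to zero as r
# grows, so the expected number of rounds here is 1 / p_decide = 2.
```

Running this, `average` comes out close to 2, matching the O(1) expected-round behavior mentioned above.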
Scalability-Performance Trade-off
As briefly mentioned in the Introduction, the scalability of BFT protocols with respect to the number of participants is highly limited, and the performance of most protocols deteriorates as the number of involved replicas increases. This effect is especially problematic for BFT deployment in permissionless blockchains. [7]
The problem of BFT scalability is twofold: a high throughput as well as a large consensus group with good reconfigurability that can tolerate a high number of failures are both desirable properties in BFT protocols, but are often in direct conflict.
Bitcoin mining, for example, supports thousands of participants, offers good reconfigurability (i.e. nodes can join or leave the network at any time) and can tolerate a high number of failures; however, it is only able to process a severely limited number of transactions per second. Most BFT protocols achieve significantly higher throughput, but are limited to small groups of fewer than 20 participants, and group reconfiguration is not easily achievable.
Several approaches have been employed to remedy these problems, e.g. threshold cryptography, creating new consensus groups for every round, or limiting the number of necessary messages to reach consensus. [9]
Many Forms of Timing Assumptions (Degrees of Synchrony)
Synchrony
Here, the time for nodes to wait and receive information is predefined. If a node has not received an input within the predefined time structure, there is a problem. [5]
In synchronous systems, it is assumed that all communications proceed in rounds. In one round, a process may send all the messages it requires while receiving all messages from other processes. In this manner, no message from one round may influence any messages sent within the same round. [21]
A ΔT-synchronous network guarantees that every message sent is delivered after at most a delay of ΔT (where ΔT is a measure of real time). [6] Synchronous protocols come to a consensus after ΔT. [5]
Partial Synchrony
Here, the network retains some form of a predefined timing structure; however, it can operate without knowing how fast nodes can exchange messages over the network. Instead of pushing out a block every x seconds, a partially synchronous blockchain would gauge the limit, with messages always being sent and received within the unknown deadline.
Partially synchronous protocols come to a consensus in an unknown, but finite period. [5]
Unknown-ΔT Model
The protocol is unable to use the delay bound as a parameter. [6]
Eventually Synchronous
The message delay bound ΔT is only guaranteed to hold after some unknown instant, called the "Global Stabilization Time" (GST). [6]
Weak Synchrony
Most existing Byzantine fault-tolerant systems, even those called 'robust', assume some variation of weak synchrony, where messages are guaranteed to be delivered after a certain bound ΔT, but ΔT may be time-varying or unknown to the protocol designer.
However, the liveness properties of weakly synchronous protocols can fail completely when the expected timing assumptions are violated (e.g. due to a malicious network adversary). In general, liveness refers to a set of properties of concurrent systems that require a system to make progress, despite the fact that its concurrently executing components may have to "take turns" in critical sections, i.e. parts of the program that cannot be run simultaneously by multiple components. [47]
Even when the weak synchrony assumptions are satisfied in practice, weakly synchronous protocols degrade significantly in throughput when the underlying network is unpredictable. Unfortunately, weakly synchronous protocols require timeout parameters that are difficult to tune, especially in cryptocurrency application settings; and when the chosen timeout values are either too long or too short, throughput can be hampered.
In terms of feasibility, weak and partially synchronous protocols are equivalent: a protocol that succeeds in one setting can be systematically adapted for the other. In terms of concrete performance, however, adjusting for weak synchrony means gradually increasing the timeout parameter over time (e.g. by an exponential back-off policy). This results in delays when recovering from transient network partitions. Protocols typically manifest these assumptions in the form of a timeout event. For example, if parties detect that no progress has been made within a certain interval, they take a corrective action such as electing a new leader. Asynchronous protocols do not rely on timers, and make progress whenever messages are delivered, regardless of actual clock time. [6]
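A minimal sketch of such an exponential back-off timeout policy (function and parameter names are illustrative, not taken from any particular protocol):

```python
def next_timeout(current: float, progress: bool,
                 base: float = 1.0, factor: float = 2.0, cap: float = 60.0) -> float:
    """Adjust a view-change timeout: reset to the base value on progress,
    back off exponentially (up to a cap) when no progress was made within
    the current interval."""
    if progress:
        return base
    return min(current * factor, cap)

t = 1.0
for _ in range(3):                  # three silent intervals in a row
    t = next_timeout(t, progress=False)
# t has doubled to 8.0; after a transient partition heals, the protocol
# still waits this long before acting, which is the recovery delay
# described above.
```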
Random Synchrony
Messages are delivered with random delays, such that the average delay is finite. There may be periods of arbitrarily long delays (this is a weaker assumption than weak synchrony, and only a bit stronger than full asynchrony, where the only guarantee is that messages are eventually delivered). It is impossible to tell whether an instance has failed by completely stopping, or if there is just a delay in message delivery. [1]
Asynchrony
In an asynchronous network, the adversary can deliver messages in any order and at any time; however, messages must eventually be delivered between correct nodes. Nodes in an asynchronous network effectively have no use for real-time clocks, and can only take actions based on the ordering of messages they receive. [6] The speed is determined by the speed at which the network communicates, instead of a fixed limit of x seconds.
An asynchronous protocol requires a different means to decide when all nodes are able to come to a consensus.
As will be discussed in The FLP Impossibility, the FLP result rules out the possibility of deterministic asynchronous protocols for atomic broadcast and many other tasks. A deterministic protocol must therefore make some stronger timing assumptions. [6]
Counting rounds in asynchronous networks
Although the guarantee of eventual delivery is decoupled from notions of 'real time', it is nonetheless desirable to characterize the running time of asynchronous protocols. The standard approach is for the adversary to assign each message a virtual round number, subject to the condition that every (r − 1) message between correct nodes must be delivered before any (r + 1) message is sent.
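This delivery condition can be expressed as a small checker (the dictionary layout and virtual timestamps are illustrative):

```python
def valid_round_assignment(messages) -> bool:
    """Check the adversary's round labels: every round (r - 1) message
    between correct nodes must be delivered before any round (r + 1)
    message is sent. Each message is a dict with 'round', 'sent' and
    'delivered' virtual timestamps."""
    for a in messages:
        for b in messages:
            # b is two or more rounds later than a, yet was sent before
            # a was delivered: the labelling is invalid.
            if b["round"] >= a["round"] + 2 and b["sent"] < a["delivered"]:
                return False
    return True

ok = valid_round_assignment([
    {"round": 1, "sent": 0, "delivered": 2},
    {"round": 2, "sent": 1, "delivered": 4},
    {"round": 3, "sent": 3, "delivered": 5},  # sent after round-1 delivery
])
# ok is True: no round-3 message was sent before the round-1 message arrived.
```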
The Problem with Timing Assumptions
The problem with both synchronous and partially synchronous assumptions is that "the protocols based on timing assumptions are unsuitable for decentralized, cryptocurrency settings, where network links can be unreliable, network speeds change rapidly, and network delays may even be adversarially induced."[6]
Denial of Service Attack
Basing a protocol on timing exposes the network to Denial of Service (DoS) attacks. A synchronous protocol will be deemed unsafe if a DoS attack slows down the network sufficiently. Even though a partially synchronous protocol would be safe, it would be unable to operate, as the messages would be exposed to interference.
An asynchronous protocol would be able to function under a DoS attack; however, it would be difficult to reach consensus, as it is impossible to know whether the network is under attack or a particular message is merely delayed.
The FLP Impossibility
The paper, 'Impossibility of Distributed Consensus with One Faulty Process' by Fischer et al. [22], mapped out what is possible to achieve with distributed processes in an asynchronous environment.
The result, referred to as the FLP result, raised the problem of consensus, that is, getting a distributed network of processors to agree on a common value. This problem was known to be solvable in a synchronous setting, where processes could proceed in simultaneous steps. The synchronous solution was seen as resilient to faults, where processors crash and take no further part in the computation. Synchronous models allow failures to be detected by waiting one entire step length for a reply from a processor, and presuming that it has crashed if no reply is received.
This kind of failure detection is not possible in an asynchronous setting, as there are no bounds on the amount of time a processor might take to complete its work and then respond. The FLP result shows that in an asynchronous setting, where only one processor might crash, there is no distributed algorithm that solves the consensus problem. [23]
Randomized Agreement
The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. Every protocol for this problem has the possibility of non-termination. [22] While the vast majority of PBFT protocols steer clear of this impossibility result by making timing assumptions, randomness (and, in particular, cryptography) provides an alternative route. Asynchronous BFT protocols have been used for a variety of tasks, such as binary agreement (ABA), reliable broadcast (RBC) and more. [6]
Layer 2 Scaling
From Tari Labs
Analogous to the OSI layers for communication, in blockchain technology, decentralized Layer 2 protocols, also commonly referred to as Layer 2 scaling, refer to transaction throughput scaling solutions. Decentralized Layer 2 protocols run on top of the main blockchain (off-chain), while preserving the attributes of the main blockchain (e.g. crypto-economic consensus). Instead of each transaction, only the result of a number of transactions is embedded on-chain.
From CryptoDigest
Layer 2 scaling solutions utilize an interesting approach to scale block chains while keeping them decentralized. Layer 2 solutions are protocols built “on top” of block chains and sacrifice some security in order to potentially match VISA transaction levels. Layer 2 solutions are built with the intention of keeping the user in control at all times.
Layer 2 Scaling Survey
What is Layer 2 Scaling?
In the blockchain and cryptocurrency world, transaction processing scaling is a tough problem to solve. It is limited by the average block creation time, the block size limit, and the number of newer blocks needed to confirm a transaction (confirmation time). These factors make 'over-the-counter' type transactions, similar to MasterCard or Visa, nearly impossible if done on the main blockchain (on-chain).
Let's postulate that blockchain and cryptocurrency "take over the world" and are responsible for all global non-cash transactions performed, i.e. 433.1 billion in 2014 to 2015 [24]. This means 13,734 transactions per second (tx/s) on average! (To put this into perspective, VisaNet currently processes 160 billion transactions per year [25] and is capable of handling more than 65,000 transaction messages per second [26].) This means that if all of those were simple single-input, single-output non-cash transactions and performed on:

- SegWit-enabled Bitcoin-like blockchains that can theoretically handle ~21.31 tx/s, we would need ~644 parallel versions and, with a SegWit transaction size of 190 bytes [27], the combined blockchain growth would be ~210 GB per day!

- Ethereum-like blockchains, taking current gas prices into account, that can theoretically process ~25.4 tx/s, ~541 parallel versions would be needed and, with a transaction size of 109 bytes ([28], [29]), the combined blockchain growth would be ~120 GB per day!
This is why we need a proper scaling solution that would not bloat the blockchain.
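The arithmetic above can be reproduced in a few lines; note that the quoted "GB per day" figures match a calculation in GiB (2^30 bytes):

```python
TX_PER_YEAR = 433.1e9                 # global non-cash transactions, 2014-2015
SECONDS_PER_YEAR = 365 * 24 * 3600

tps = TX_PER_YEAR / SECONDS_PER_YEAR  # ~13,734 tx/s on average

def parallel_chains(chain_tps: float) -> int:
    """Number of parallel chains needed to absorb the global rate."""
    return round(tps / chain_tps)

def daily_growth_gib(tx_bytes: int) -> float:
    """Combined chain growth per day, in GiB (2**30 bytes)."""
    return tps * tx_bytes * 86400 / 2**30

assert round(tps) == 13734
assert parallel_chains(21.31) == 644   # SegWit Bitcoin-like chains
assert parallel_chains(25.4) == 541    # Ethereum-like chains
assert round(daily_growth_gib(190)) == 210
assert round(daily_growth_gib(109)) == 120
```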
The Open Systems Interconnection (OSI) model defines seven layers for communication functions of a computing system. Layer 1 refers to the physical layer and Layer 2 to the data link layer. Layer 1 is never concerned with functions of Layer 2 and up; it just delivers transmission and reception of raw data. In turn, Layer 2 only knows about Layer 1 and defines the protocols that deliver nodetonode data transfer. [1]
Analogous to the OSI layers for communication, in blockchain technology, decentralized Layer 2 protocols, also commonly referred to as Layer 2 scaling, refer to transaction throughput scaling solutions. Decentralized Layer 2 protocols run on top of the main blockchain (off-chain), while preserving the attributes of the main blockchain (e.g. crypto-economic consensus). Instead of each transaction, only the result of a number of transactions is embedded on-chain. [2]
Also:
- Does every transaction need every parent blockchain node in the world to verify it?
- Would I be willing to have (temporarily) lower security guarantees for most of my day-to-day transactions if I could get them validated (whatever we take that to mean) near-instantly?
If you can answer 'no' and 'yes', then you're looking for a Layer 2 scaling solution.
How will this be Applicable to Tari?
Tari is a high-throughput protocol that will need to handle real-world transaction volumes. For example, Big Neon, the initial business application to be built on top of the Tari blockchain, requires high-volume transactions in a short time, especially when ticket sales open and when tickets are redeemed at an event. Imagine filling an 85,000-seat stadium with 72 entrance queues on match days. Serialized real-world scanning boils down to ~500 tickets in four minutes, or ~2 spectators allowed access per second per queue.
This would be impossible to do with parent blockchain scaling solutions.
Layer 2 Scaling Current Initiatives
Micropayment Channels
What are they?
Micropayment channels are a class of techniques designed to allow users to make multiple Bitcoin transactions without committing all of the transactions to the Bitcoin blockchain. In a typical payment channel, only two transactions are added to the blockchain, but an unlimited or nearly unlimited number of payments can be made between the participants. [10]
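A toy sketch of this two-transactions-on-chain pattern (an illustrative class, not a real channel implementation; no signatures or dispute handling are modelled):

```python
class PaymentChannel:
    """Toy two-party channel: one funding transaction on-chain, any number
    of off-chain balance updates, one settlement transaction on-chain."""

    def __init__(self, alice_deposit: int, bob_deposit: int):
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.onchain_txs = 1            # the funding transaction

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount   # off-chain: nothing is broadcast

    def close(self) -> dict:
        self.onchain_txs += 1           # the settlement transaction
        return dict(self.balances)

channel = PaymentChannel(alice_deposit=100, bob_deposit=0)
for _ in range(50):                     # fifty micropayments, zero on-chain cost
    channel.pay("alice", "bob", 1)
final = channel.close()                 # only 2 transactions ever hit the chain
```

However many payments are made, only the funding and settlement transactions reach the blockchain.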
Several channel designs have been proposed or implemented over the years, including:
- Nakamoto high-frequency transactions;
- Spillman-style payment channels;
- CLTV-style payment channels;
- Poon-Dryja payment channels;
- Decker-Wattenhofer duplex payment channels;
- Decker-Russell-Osuntokun eltoo channels;
- Hashed Time-Locked Contracts (HTLCs).
With specific focus on Hashed Time-Locked Contracts: this technique allows payments to be securely routed across multiple payment channels. HTLCs are integral to the design of more advanced payment channels, such as those used by the Lightning Network.
The Lightning Network is a second-layer payment protocol that operates on top of a blockchain. It enables instant transactions between participating nodes. The Lightning Network features a peer-to-peer system for making micropayments of digital cryptocurrency through a network of bidirectional payment channels, without delegating custody of funds and while minimizing the trust required of third parties. [11]
Normal use of the Lightning Network consists of opening a payment channel by committing a funding transaction to the relevant blockchain. This is followed by making any number of Lightning transactions that update the tentative distribution of the channel's funds without broadcasting to the blockchain; and optionally followed by closing the payment channel by broadcasting the final version of the transaction to distribute the channel's funds.
Who uses them?
The Lightning Network is spreading across the cryptocurrency landscape. It was originally designed for Bitcoin. However, Litecoin, Zcash, Ethereum, and Ripple are just a few of the many cryptocurrencies planning to implement or test some form of the network. [12]
Strengths
- Micropayment channels are one of the leading solutions presented to scale Bitcoin that do not require a change to the underlying protocol.
- Transactions are processed instantly, the account balances of the nodes are updated, and the money is immediately accessible to the new owner.
- Transaction fees are a fraction of the transaction cost. [13]
Weaknesses
- Micropayment channels are not suitable for making bulk payments, as the intermediate nodes in the multichannel payment network may not be loaded with enough money to move the funds along.
- Recipients cannot receive money unless their node is connected and online at the time of the transaction. At the time of writing (July 2018), channels were only bilateral.
Opportunities
Opportunities are fewer than expected, as Tari's ticketing use case requires many fast transactions with many parties, not many fast transactions with a single party. Non-fungible assets must be "broadcast", but state channels are private between two parties.
State Channels
What are they?
State channels are the more general form of micropayment channels. They can be used not only for payments, but for any arbitrary "state update" on a blockchain, such as changes inside a smart contract. [16]
State channels allow multiple transactions to be made within offchain agreements with very fast processing, and the final settlement onchain. They keep the operation mode of blockchain protocol, but change the way it is used so as to deal with the challenge of scalability.
Any change of state within a state channel requires explicit cryptographic consent from all parties designated as "interested" in that part of the state. [19]
Who uses them?
On Ethereum:
 Raiden Network ([16], [21])

Researches state channel technology, defines protocols and develops reference implementations.

State channels work with any ERC20-compatible token.

State updates between two parties are done via digitally signed and hash-locked transfers, called balance proofs, which serve as the consensus mechanism and are also secured by a timeout. These can be settled on the Ethereum blockchain at any time. The Raiden Network uses HTLCs in exactly the same manner as the Lightning Network.

 Counterfactual ([16], [19], [31])
 Uses state channels as a generalized framework for the integration of native state channels into Ethereum-based decentralized applications.
 A generalized state channel framework is one where state is deposited once, and is then used afterwards by any application or set of applications.
 Counterfactual instantiation means to instantiate a contract without actually deploying it on-chain. It is achieved by having users sign and share commitments to the multisig wallet.
 When a contract is counterfactually instantiated, all parties in the channel act as though it has been deployed, even though it has not.
 A global registry is introduced. This is an on-chain contract that maps unique deterministic addresses for any Counterfactual contract to actual on-chain deployed addresses. The hashing function used to produce the deterministic address can be any function that takes into account the bytecode, its owner (i.e. the multisig wallet address) and a unique identifier.
 A typical Counterfactual state channel is composed of counterfactually instantiated objects.
 Funfair ([16], [23], [32])
 Uses state channels as a decentralized slot machine gambling platform, but still using centralized server-based random number generation.
 Instantiates a normal "Raiden-like" state channel (called a fate channel) between the player and the casino. Final states are submitted to the blockchain after the betting game is concluded.
 Investigating the use of threshold cryptography such as Boneh-Lynn-Shacham (BLS) signature schemes to enable truly secure random number generation by a group of participants.
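The hash-locked transfers used by Raiden (and Lightning's HTLCs) reduce to a simple preimage condition. A minimal sketch, not the actual wire format; SHA-256 and the timeout handling are illustrative:

```python
import hashlib
import time

def hashlock(secret: bytes) -> bytes:
    # The payer commits to H(secret); the payee must reveal `secret` to claim.
    return hashlib.sha256(secret).digest()

def claim(commitment: bytes, revealed: bytes, now: float, deadline: float) -> bool:
    # A claim succeeds only with the correct preimage, before the timeout.
    return now < deadline and hashlib.sha256(revealed).digest() == commitment

secret = b"open sesame"
commitment = hashlock(secret)
deadline = time.time() + 3600

ok = claim(commitment, secret, time.time(), deadline)            # correct preimage
bad = claim(commitment, b"wrong secret", time.time(), deadline)  # rejected
```

The timeout is what makes the transfer safe: if the secret is never revealed, the locked funds revert to the sender after the deadline.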
On NEO:
 Trinity ([3], [17], [18])
 Trinity is an open-source network protocol based on NEP-5 smart contracts.
 Trinity is to NEO what the Raiden Network is to Ethereum.
 Trinity uses the same consensus mechanism as the Raiden Network.
 A new token, TNC, has been introduced to fund the Trinity network, but NEO, NEP-5 and TNC tokens are supported.
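Counterfactual's deterministic contract addresses, described above, can be sketched as follows (SHA-256 and the field layout are illustrative stand-ins for the registry's actual hashing function; the names are invented):

```python
import hashlib

def counterfactual_address(bytecode: bytes, owner: str, nonce: int) -> str:
    # Any hash of (bytecode, owner, unique identifier) gives every party the
    # same address without the contract ever being deployed on-chain.
    h = hashlib.sha256(bytecode + owner.encode() + nonce.to_bytes(8, "big"))
    return "0x" + h.hexdigest()[:40]

# The global registry: maps deterministic addresses to real deployed addresses.
registry = {}

addr = counterfactual_address(b"\x60\x80", "0xMultisigWallet", 1)
# All channel parties can reference `addr` immediately; only if a dispute
# forces actual deployment does the registry gain an entry:
registry[addr] = "0xActualOnChainAddress"
```

Because the address is a pure function of its inputs, every participant derives the same value independently, which is what lets the channel treat the contract as if it were deployed.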
Strengths
 Allows payments and changes to smart contracts.
 State channels have strong privacy properties. Everything is happening "inside" a channel between participants.
 State channels have instant finality. As soon as both parties sign a state update, it can be considered final.
Weaknesses
 State channels rely on availability; both parties must be online.
Opportunities
Opportunities are fewer than expected, as Tari's ticketing use case requires many fast transactions with many parties, not many fast transactions with a single party. Non-fungible assets must be "broadcast", but state channels are private between two parties.
Offchain Matching Engines
What are they?
Orders are matched off-chain in a matching engine and fulfilled on-chain. This allows complex orders, supports cross-chain transfers, and maintains a public record of orders as well as a deterministic specification of behavior. Off-chain matching engines make use of a token representation smart contract that converts global assets into smart contract tokens and vice versa. [5]
Who uses them?

Neon Exchange (NEX) ([5], [35])
 NEX uses a NEO decentralized application (dApp) with tokens.
 Initial support is planned for NEO, ETH, NEP-5 and ERC20 tokens.
 Cross-chain support is planned for trading BTC, LTC and RPX on NEX.
 The NEX off-chain matching engine will be scalable, distributed, fault-tolerant, and will function continuously without downtime.
 Consensus is achieved using cryptographically signed requests; publicly specified deterministic off-chain matching engine algorithms; and public ledgers of transactions and rewards for foul play. The trade method of the exchange smart contract will only accept orders signed by a private key held by the matching engine.
 The matching engine matches the orders and submits them to the respective blockchain smart contract for execution.
 A single invocation transaction on NEO can contain many smart contract calls. Batch commit of matched orders in one on-chain transaction is possible.

0x ([33], [34])
 An Ethereum ERC20-based smart contract token (ZRX).
 Provides an open-source protocol to exchange ERC20-compliant tokens on the Ethereum blockchain, using off-chain matching engines in the form of dApps (Relayers) that facilitate transactions between Makers and Takers.
 Off-chain order relay + on-chain settlement.
 A Maker chooses a Relayer, specifies the token exchange rate, expiration time and fees to satisfy the Relayer's fee schedule, and signs the order with their private key.
 Consensus is governed with the publicly available DEX smart contract: addresses, token balances, token exchange, fees, signatures, order status and final transfer.
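A stripped-down illustration of off-chain matching with a single batched on-chain settlement (the order format and matching rules are simplified; real engines also handle signatures, fee schedules and many trading pairs):

```python
# Minimal off-chain order book: match buy/sell orders off-chain, then emit
# the matched trades as one batch for a single on-chain settlement call.
def match_orders(buys, sells):
    # buys/sells: lists of (price, amount); highest bid vs. lowest ask.
    buys = sorted(buys, key=lambda o: -o[0])
    sells = sorted(sells, key=lambda o: o[0])
    trades = []
    while buys and sells and buys[0][0] >= sells[0][0]:
        bid, ask = buys[0], sells[0]
        qty = min(bid[1], ask[1])
        trades.append({"price": ask[0], "amount": qty})
        buys[0] = (bid[0], bid[1] - qty)
        sells[0] = (ask[0], ask[1] - qty)
        if buys[0][1] == 0:
            buys.pop(0)
        if sells[0][1] == 0:
            sells.pop(0)
    return trades  # submitted on-chain as one batched transaction

batch = match_orders(buys=[(102, 5), (99, 3)], sells=[(100, 4), (101, 4)])
# batch: [{'price': 100, 'amount': 4}, {'price': 101, 'amount': 1}]
```

The point of the batch is throughput: many matches amortize into a single on-chain invocation, which is exactly the property NEX exploits with NEO's multi-call invocation transactions.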
Strengths
 Performance {NEX, 0x}:
 off-chain request/order;
 off-chain matching.
 NEX-specific:
 batched on-chain commits;
 cross-chain transfers;
 support of national currencies;
 public JavaScript Object Notation (JSON) Application Programming Interface (API) and web extension API for third-party applications to trade tokens;
 development environment: Elixir on top of Erlang, to enable a scalable, distributed and fault-tolerant matching engine;
 Cure53 full security audit on the web extension;
 NEX tokens will be regulated as registered European securities.
 0x-specific:
 open-source protocol to enable creation of independent off-chain dApp matching engines (Relayers);
 totally transparent matching of orders with no single point of control;
 a Maker's order only enters a Relayer's order book if the fee schedule is adhered to;
 an exchange can only happen if a Taker is willing to accept;
 consensus and settlement governed by the publicly available DEX smart contract.
Weaknesses
 At the time of writing (July 2018), both NEX and 0x were still in development.
 NEX-specific:
 a certain level of trust is required, similar to a traditional exchange;
 closed liquidity pool.
 0x-specific:
 susceptible to front-running and order griefing ([36], [37]).
Opportunities
 Matching engines in general have opportunities for Tari; the specific scheme is to be investigated further.
Masternodes
What are they?
A masternode is a server on a decentralized network. It is utilized to complete unique functions in ways ordinary mining nodes cannot, e.g. direct send, instant transactions and private transactions. Because of their increased capabilities, masternodes typically require an investment in order to run. Masternode operators are incentivized and are rewarded by earning portions of block rewards in the cryptocurrency they are facilitating. Masternodes get the standard return on their stakes, but are also entitled to a portion of the transaction fees, allowing for a greater return on investment (ROI). ([7], [9])
Dash Example [30]
Dash was the first cryptocurrency to implement the masternode model in its protocol. Under what Dash calls its proof-of-service algorithm, a second-tier network of masternodes exists alongside a first-tier network of miners to achieve distributed consensus on the blockchain. This two-tiered system ensures that proof of service and proof of work perform symbiotic maintenance of Dash's network. Dash masternodes also enable a decentralized governance system that allows node operators to vote on important developments within the blockchain. A masternode for Dash requires a stake of 1,000 DASH. Masternodes and miners each receive 45% of the block reward; the other 10% goes to the blockchain's treasury fund. Operators are in charge of voting on proposals for how these funds will be allocated to improve the network.
Dash Deterministic Ordering
A special deterministic algorithm is used to create a pseudo-random ordering of the masternodes. By using the hash from the proof of work for each block, security of this functionality is provided by the mining network.
Dash Trustless Quorums
The Dash masternode network is trustless, in that no single entity can control the outcome. N pseudo-random masternodes (Quorum A) are selected from the total pool to act as an oracle for N pseudo-random masternodes (Quorum B) that are selected to perform the actual task. Quorum A comprises the nodes mathematically closest to the current block hash, while Quorum B comprises the furthest. This process is repeated for each new block in the blockchain.
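The ordering and quorum selection can be sketched as follows (SHA-256 and a simple integer score are illustrative stand-ins for Dash's actual hashing and distance metric):

```python
import hashlib

def score(node_id: str, block_hash: str) -> int:
    # Deterministic "distance" of a masternode to the current block hash;
    # every node computes the same ordering without any coordination.
    digest = hashlib.sha256((node_id + block_hash).encode()).digest()
    return int.from_bytes(digest, "big")

def select_quorums(masternodes, block_hash, n):
    ranked = sorted(masternodes, key=lambda m: score(m, block_hash))
    quorum_a = ranked[:n]    # closest to the block hash: act as the oracle
    quorum_b = ranked[-n:]   # furthest from it: perform the actual task
    return quorum_a, quorum_b

nodes = [f"mn{i}" for i in range(10)]
qa, qb = select_quorums(nodes, block_hash="00000abc", n=3)
assert set(qa).isdisjoint(qb)  # with enough nodes, the quorums never overlap
```

Because the block hash changes every block, the quorums are re-shuffled each time, which is what prevents any fixed set of operators from capturing a task.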
Dash Proof of Service
Bad actors could also run masternodes. To reduce the possibility of bad acting, nodes must ping the rest of the network to ensure they remain active. All masternode verification is done randomly via the Quorum system by the masternode network itself. Approximately 1% of the network is verified each block. This results in the entire masternode network being verified approximately six times per day. Six consecutive violations result in the deactivation of a masternode.
Who uses them?
 Block, Bata, Crown, Chaincoin, Dash, Diamond, ION, Monetary Unit, Neutron, PIVX, Vcash, and XtraBytes. [8]
Strengths

Masternodes help to sustain and take care of the ecosystem and can protect blockchains from network attacks.

Masternodes can perform decentralized governance of miners by having the power to reject or orphan blocks if required. ([22], [30])

Masternodes can support decentralized exchanges by overseeing transactions and offering fiat currency gateways.

Masternodes can be used to facilitate smart contracts such as instant transactions, anonymous transactions and decentralized payment processing.

Masternodes can facilitate a decentralized marketplace such as the blockchain equivalent of peerrun commerce sites such as eBay. [22]

Masternodes compensate for Proof of Work's limitations; they avoid mining centralization and consume less energy. [22]

Masternodes promise enhanced stability and network loyalty, as larger dividends and high initial investment costs make it less likely that operators will abandon their position in the network. [22]
Weaknesses
 Maintaining masternodes can be a long and arduous process.
 ROI is not guaranteed and is inconsistent. In some applications, masternodes are only rewarded if they mine a block and if they are randomly chosen to get paid.
 In general, a masternode's IP address is publicized and thus open to attacks.
Opportunities

Masternodes do not have a specific standard or protocol; many different implementations exist. If the Tari protocol employs masternodes, they can be used to facilitate smart contracts off-chain and to enhance the security of the primary blockchain.

Masternodes increase the incentives for people to be involved with Tari.
Plasma
What is it?
Plasma blockchains are a chain within a blockchain, with state transitions enforced by bonded (time to exit) fraud proofs (block header hashes) submitted on the root chain. It enables management of a tiered blockchain without a full persistent record of the ledger on the root blockchain, and without giving custodial trust to any third party. The fraud proofs enforce an interactive protocol of rapid fund withdrawals in case of foul play such as block withholding, and in cases where bad actors in a lower-level tier want to commit blocks to the root chain without broadcasting this to the higher-level tiers. [4]
Plasma is a framework for incentivized and enforced execution of smart contracts, scalable to a significant number of state updates per second, enabling the root blockchain to represent a significant number of dApps, each employing its own blockchain in a tree format. [4]
Plasma relies on two key parts, namely reframing all blockchain computations into a set of MapReduce functions, and an optional method to do Proof of Stake (PoS) token bonding on top of existing blockchains (enforced in an on-chain smart contract). Nakamoto Consensus incentives discourage block withholding or other Byzantine behavior. If a chain is Byzantine, it has the option of going to any of its parents (including the root blockchain) to continue operation or exit with the current committed state. [4]
MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key [38]. The Plasma MapReduce includes commitments on data to computation as input in the map phase, and a Merkleized proof of state transition in the reduce step when returning the result. [4]
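The MapReduce pattern itself can be illustrated with a word count, where a plain hash commitment in the reduce step stands in for Plasma's Merkleized proof of state transition:

```python
import hashlib
from collections import defaultdict

def map_phase(records):
    # Map: emit intermediate (key, value) pairs, here (word, 1) per occurrence.
    for record in records:
        for word in record.split():
            yield word, 1

def reduce_phase(pairs):
    # Reduce: merge all values per intermediate key, then commit to the result.
    # In Plasma the commitment would be a Merkleized proof of state transition.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    commitment = hashlib.sha256(repr(sorted(totals.items())).encode()).hexdigest()
    return dict(totals), commitment

counts, proof = reduce_phase(map_phase(["plasma chain", "plasma tree"]))
# counts: {'plasma': 2, 'chain': 1, 'tree': 1}
```

The commitment lets a parent chain verify that the reduce output corresponds to the claimed inputs without replaying the whole computation.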
Who uses it?

Loom Network, using Delegated Proof of Stake (DPoS) consensus and validation, enabling scalable Application-Specific Side Chains (DAppChains), running on top of Ethereum. ([4], [15])

OMG Network (OmiseGO), using PoS consensus and validation, a Plasma blockchain scaling solution for finance running on top of Ethereum. ([6], [14])
Strengths
 Not all participants need to be online to update state.
 Participants do not need a record of entry on the parent blockchain to enable their participation in a Plasma blockchain.
 Minimal data is needed on the parent blockchain to confirm transactions when constructing Plasma blockchains in a tree format.
 Private blockchain networks can be constructed, enforced by the root blockchain. Transactions may occur on a local private blockchain and have financial activity bonded by a public parent blockchain.
 Rapid exit strategies in case of foul play.
Weaknesses
At the time of writing (July 2018), Plasma still needed to be proven on other networks apart from Ethereum.
Opportunities

Has opportunities for Tari as an L2 scaling solution.

Possibility to create a Tari ticketing Plasma dAppChain running off Monero, without creating a Tari-specific root blockchain?
Note: This will make the Tari blockchain dependent on another blockchain.

The Loom Network's Software Development Kit (SDK) makes it extremely easy for anyone to create a new Plasma blockchain. In less than a year, a number of successful and diverse dAppChains have launched. The next one could easily be for ticket sales...
TrueBit
What is it?
TrueBit is a protocol that provides security and scalability by enabling trustless smart contracts to perform and offload complex computations. This makes it different from state channels and Plasma, which are more useful for increasing the total transaction throughput of the Ethereum blockchain. TrueBit relies on solvers (akin to miners), who have to stake deposits in a smart contract, solve a computation and, if correct, get their deposits back. If the computation is incorrect, the solver loses the deposit. TrueBit uses an economic mechanism called the "verification game", where an incentive is created for other parties, called challengers, to check the solvers' work. ([16], [40], [43])
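The verification game can be thought of as a binary search over a computation trace: the disputed interval is repeatedly halved until a single step remains, which the blockchain can re-execute cheaply. A toy sketch; the real protocol's incentives and Merkle commitments [40] are omitted:

```python
def verification_game(honest_trace, solver_trace):
    # Both traces list intermediate states of the same computation.
    # Bisect to the first step where the solver's trace diverges; only
    # that single step must be re-executed on-chain to settle the dispute.
    lo, hi = 0, len(honest_trace) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if solver_trace[mid] == honest_trace[mid]:
            lo = mid   # agreement up to mid; the dispute lies above it
        else:
            hi = mid   # divergence at or before mid
    return hi  # index of the single disputed step

honest = [0, 1, 2, 3, 4, 5]
cheat = [0, 1, 2, 9, 10, 11]  # solver diverges at step 3
disputed_step = verification_game(honest, cheat)  # 3
```

Because only one step ever reaches the chain, verifying an arbitrarily long computation costs O(log n) interactive rounds plus one on-chain re-execution.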
Who uses it?
Golem cites TrueBit as a verification mechanism for its forthcoming outsourced computation network, and Livepeer, a video streaming platform, intends to use it as well. ([39], [41], [42])
Strengths
 Outsourced computation: anyone in the world can post a computational task, and anyone else can receive a reward for completing it. [40]
 Scalable: by decoupling verification from miners into a separate protocol, TrueBit can achieve high transaction throughput without facing a Verifier's Dilemma. [40]
Weaknesses
At the time of writing (July 2018), TrueBit had not yet been fully tested.
Opportunities
Nothing at the moment, as Tari wouldn't be doing heavy/complex computation, at least not in the short term.
Observations
 Further investigation into the more promising layer 2 scaling solutions and technologies is required to verify alignment, applicability, and usability.
 An overview of Counterparty, Rootstock, Drivechains, and Scriptless scripts must still be added.
References
[1] "OSI Model" [online]. Available: https://en.wikipedia.org/wiki/OSI_model. Date accessed: 2018-06-07.
[2] "Decentralized Digital Identities and Block Chain - The Future as We See It" [online]. Available: https://www.microsoft.com/enus/microsoft365/blog/2018/02/12/decentralizeddigitalidentitiesandblockchainthefutureasweseeit/. Date accessed: 2018-06-07.
[3] "Trinity Protocol: The Scaling Solution of the Future?" [online]. Available: https://www.investinblockchain.com/trinityprotocol. Date accessed: 2018-06-08.
[4] J. Poon and V. Buterin, "Plasma: Scalable Autonomous Smart Contracts" [online]. Available: http://plasma.io/plasma.pdf. Date accessed: 2018-06-14.
[5] "NEX: A High Performance Decentralized Trade and Payment Platform" [online]. Available: https://nash.io/pdfs/whitepaper_v2.pdf. Date accessed: 2018-06-11.
[6] J. Poon and OmiseGO Team, "OmiseGO: Decentralized Exchange and Payments Platform" [online]. Available: https://cdn.omise.co/omg/whitepaper.pdf. Date accessed: 2018-06-14.
[7] "The Rise of Masternodes Might Soon be Followed by the Creation of Servicenodes" [online]. Available: https://cointelegraph.com/news/theriseofmasternodesmightsoonbefollowedbythecreationofservicenodes. Date accessed: 2018-06-13.
[8] "What are Masternodes? Beginner's Guide" [online]. Available: https://blockonomi.com/masternodeguide/. Date accessed: 2018-06-14.
[9] "What the Heck is a DASH Masternode and How Do I Get One" [online]. Available: https://medium.com/dashfornewbies/whattheheckisadashmasternodeandhowdoigetone56e24121417e. Date accessed: 2018-06-14.
[10] "Payment Channels" [online]. Available: https://en.bitcoin.it/wiki/Payment_channels. Date accessed: 2018-06-14.
[11] "Lightning Network" [online]. Available: https://en.wikipedia.org/wiki/Lightning_Network. Date accessed: 2018-06-14.
[12] "Bitcoin Isn't the Only Crypto Adding Lightning Tech Now" [online]. Available: https://www.coindesk.com/bitcoinisntcryptoaddinglightningtechnow/. Date accessed: 2018-06-14.
[13] "What is Bitcoin Lightning Network? And How Does it Work?" [online]. Available: https://cryptoverze.com/whatisbitcoinlightningnetwork/. Date accessed: 2018-06-14.
[14] "OmiseGO" [online]. Available: https://omisego.network/. Date accessed: 2018-06-14.
[15] "Everything You Need to Know About Loom Network, All In One Place (Updated Regularly)" [online]. Available: https://medium.com/loomnetwork/everythingyouneedtoknowaboutloomnetworkallinoneplaceupdatedregularly64742bd839fe. Date accessed: 2018-06-14.
[16] "Making Sense of Ethereum's Layer 2 Scaling Solutions: State Channels, Plasma, and Truebit" [online]. Available: https://medium.com/l4media/makingsenseofethereumslayer2scalingsolutionsstatechannelsplasmaandtruebit22cb40dcc2f4. Date accessed: 2018-06-14.
[17] "Trinity: Universal Offchain Scaling Solution" [online]. Available: https://trinity.tech. Date accessed: 2018-06-14.
[18] "Trinity White Paper: Universal Offchain Scaling Solution" [online]. Available: https://trinity.tech/#/writepaper. Date accessed: 2018-06-14.
[19] J. Coleman, L. Horne and L. Xuanji, "Counterfactual: Generalized State Channels" [online]. Available: https://counterfactual.com/statechannels and https://l4.ventures/papers/statechannels.pdf. Date accessed: 2018-06-15.
[20] "The Raiden Network" [online]. Available: https://raiden.network/. Date accessed: 2018-06-15.
[21] "What is the Raiden Network?" [online]. Available: https://raiden.network/101.html. Date accessed: 2018-06-15.
[22] "What are Masternodes? An Introduction and Guide" [online]. Available: https://coincentral.com/whataremasternodesanintroductionandguide/. Date accessed: 2018-06-15.
[23] "State Channels in Disguise?" [online]. Available: https://funfair.io/statechannelsindisguise. Date accessed: 2018-06-15.
[24] "World Payments Report 2017, © 2017 Capgemini and BNP Paribas" [online]. Available: https://www.worldpaymentsreport.com. Date accessed: 2018-06-20.
[25] "VISA" [online]. Available: https://usa.visa.com/visaeverywhere/innovation/contactlesspaymentsaroundtheglobe.html. Date accessed: 2018-06-20.
[26] "VisaNet Fact Sheet 2017 Q4" [online]. Available: https://usa.visa.com/dam/VCOM/download/corporate/media/visanettechnology/visanetfactsheet.pdf. Date accessed: 2018-06-20.
[27] "With 100% segwit transactions, what would be the max number of transaction confirmation possible on a block?" [online]. Available: https://bitcoin.stackexchange.com/questions/59408/with100segwittransactionswhatwouldbethemaxnumberoftransactionconfi. Date accessed: 2018-06-21.
[28] "A Gentle Introduction to Ethereum" [online]. Available: https://bitsonblocks.net/2016/10/02/agentleintroductiontoethereum/. Date accessed: 2018-06-21.
[29] "What is the size (bytes) of a simple Ethereum transaction versus a Bitcoin transaction?" [online]. Available: https://ethereum.stackexchange.com/questions/30175/whatisthesizebytesofasimpleethereumtransactionversusabitcointrans?rq=1. Date accessed: 2018-06-21.
[30] "What is a Masternode?" [online]. Available: http://dashmasternode.org/whatisamasternode. Date accessed: 2018-06-14.
[31] "Counterfactual: Generalized State Channels on Ethereum" [online]. Available: https://medium.com/statechannels/counterfactualgeneralizedstatechannelsonethereumd38a36d25fc6. Date accessed: 2018-06-26.
[32] J. Longley and O. Hopton, "FunFair Technology Roadmap and Discussion" [online]. Available: https://funfair.io/wpcontent/uploads/FunFairTechnicalWhitePaper.pdf. Date accessed: 2018-06-27.
[33] "0x Protocol Website" [online]. Available: https://0xproject.com/. Date accessed: 2018-06-28.
[34] "0x: An Open Protocol for Decentralized Exchange on the Ethereum Blockchain" [online]. Available: https://0xproject.com/pdfs/0x_white_paper.pdf. Date accessed: 2018-06-28.
[35] "NEX/Nash Website" [online]. Available: https://nash.io. Date accessed: 2018-06-28.
[36] "Frontrunning, Griefing and the Perils of Virtual Settlement (Part 1)" [online]. Available: https://blog.0xproject.com/frontrunninggriefingandtheperilsofvirtualsettlementpart18554ab283e97. Date accessed: 2018-06-29.
[37] "Frontrunning, Griefing and the Perils of Virtual Settlement (Part 2)" [online]. Available: https://blog.0xproject.com/frontrunninggriefingandtheperilsofvirtualsettlementpart2921b00109e21. Date accessed: 2018-06-29.
[38] "MapReduce: Simplified Data Processing on Large Clusters" [online]. Available: https://storage.googleapis.com/pubtoolspublicpublicationdata/pdf/16cb30b4b92fd4989b8619a61752a2387c6dd474.pdf. Date accessed: 2018-07-02.
[39] "The Golem WhitePaper (Future Integrations)" [online]. Available: https://golem.network/crowdfunding/Golemwhitepaper.pdf. Date accessed: 2018-06-22.
[40] J. Teutsch and C. Reitwiessner, "A Scalable Verification Solution for Block Chains" [online]. Available: http://people.cs.uchicago.edu/~teutsch/papers/truebit.pdf. Date accessed: 2018-06-22.
[41] "Livepeer's Path to Decentralization" [online]. Available: https://medium.com/livepeerblog/livepeerspathtodecentralizationa9267fd16532. Date accessed: 2018-06-22.
[42] "Golem Website" [online]. Available: https://golem.network. Date accessed: 2018-06-22.
[43] "TrueBit Website" [online]. Available: https://truebit.io. Date accessed: 2018-06-22.
Contributors
 https://github.com/hansieodendaal
 https://github.com/Kevoulee
 https://github.com/ksloven
 https://github.com/anselld
Layer 2 Scaling Survey (Part 2)
Introduction
This report provides a survey of TumbleBit, Counterparty, 2-Way Pegged Secondary Blockchains, Lumino, Scriptless Scripts and Directed Acyclic Graph (DAG) Derivative Protocols as layer 2 scaling alternatives, building on Layer 2 Scaling Survey (Part 1).
Layer 2 Scaling Current Initiatives (Updated)
TumbleBit
What is it?
The TumbleBit protocol was invented at Boston University. It is a unidirectional, unlinkable payment hub that is fully compatible with the Bitcoin protocol. TumbleBit allows parties to make fast, anonymous, off-chain payments through an untrusted intermediary called the Tumbler. No one, not even the Tumbler, can tell which payer paid which payee during a TumbleBit epoch, i.e. a time period of significance.
Two modes of operation are supported:
 a classic mixing/tumbling/washing mode; and
 a fully fledged payment hub mode.
TumbleBit consists of two interleaved fair-exchange protocols that rely on the Rivest-Shamir-Adleman (RSA) cryptosystem's blinding properties to prevent bad acting from either users or Tumblers, and to ensure anonymity:
 RSA-Puzzle-Solver Protocol; and
 Puzzle-Promise Protocol.
TumbleBit also supports anonymizing through Tor to ensure that the Tumbler server can operate as a hidden service. ([1], [2], [8], [9], [10])
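The RSA blinding property these protocols rely on can be demonstrated with textbook-sized, completely insecure parameters: the Tumbler produces a valid signature on a message without ever seeing the message itself.

```python
# Toy RSA blinding with tiny, insecure textbook parameters (n = 61 * 53).
# The Tumbler holds the private d; the payer only needs the public (n, e).
n, e, d = 3233, 17, 2753
m = 42   # the message (e.g. a puzzle) the payer wants signed
r = 7    # payer's secret blinding factor, coprime to n

blinded = (m * pow(r, e, n)) % n       # payer blinds m with r^e
blind_sig = pow(blinded, d, n)         # Tumbler signs, learning nothing about m
sig = (blind_sig * pow(r, -1, n)) % n  # payer removes the blinding factor

assert sig == pow(m, d, n)             # identical to a direct signature on m
assert pow(sig, e, n) == m             # and it verifies under the public key
```

This works because (m·rᵉ)ᵈ = mᵈ·r (mod n), so multiplying by r⁻¹ leaves exactly mᵈ; real deployments use full-size keys and padded messages.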
TumbleBit combines off-chain cryptographic computations with standard on-chain Bitcoin scripting functionalities to realize smart contracts [11] that are not dependent on Segwit. The most important Bitcoin functionality used here includes hashing conditions, signing conditions, conditional execution, 2-of-2 multisignatures and time-locking. [2]
Who does it?
Boston University provided a proof-of-concept and reference implementation alongside the white paper [4]. NTumbleBit [5] is being developed as a C# production implementation of the TumbleBit protocol, which at the time of writing (July 2018) was being deployed by Stratis with its Breeze implementation [6], at alpha/experimental release level in testnet.
"NTumbleBit will be a cross-platform framework, server and client for the TumbleBit payment scheme. TumbleBit is separated into two modes, tumbler mode and payment hub mode. The tumbler mode improves transaction fungibility and offers risk free unlinkable transactions. Payment hub mode is a way of making off-chain payments possible without requiring implementations like Segwit or the lightning network." [3]
Strengths

Anonymity properties. TumbleBit provides unlinkability without the need to trust the Tumbler service, i.e. untrusted intermediary. [2]

Denial of Service (DoS) and Sybil protection. "TumbleBit uses transaction fees to resist DoS and Sybil attacks." [2]

Balance. "The system should not be exploited to print new money or steal money, even when parties collude." [2]

As a classic tumbler. TumbleBit can also be used as a classic Bitcoin tumbler. [2]

Bitcoin compatibility. TumbleBit is fully compatible with the Bitcoin protocol. [2]

Scalability. Each TumbleBit user only needs to interact with the Tumbler and the corresponding transaction party; this lack of coordination between all TumbleBit users makes scalability possible for the tumbler mode. [2]

Batch processing. TumbleBit supports one-to-one, many-to-one, one-to-many and many-to-many transactions in payment hub mode. [2]

Masternode compatibility. The TumbleBit protocol can be fully implemented as a service in a Masternode. "The Breeze Wallet is now fully capable of providing enhanced privacy to bitcoin transactions through a secure connection. Utilizing Breeze Servers that are pre-registered on the network using a secure, trustless registration mechanism that is resistant to manipulation and censorship." ([6], [7], [12])

Nearly production ready. The NTumbleBit and Breeze implementations have gained testnet status.
Weaknesses
 Privacy is not 100% proven. Payees have better privacy than payers and, theoretically, payees could collude with the Tumbler to discover the identity of the payer. [13]
 The Tumbler service is not distributed. More work needs to be done to ensure a persistent transaction state in case a Tumbler server goes down.
 Equal denominations are required. The TumbleBit protocol can only support a common denominator unit value. [2]
Opportunities
TumbleBit has benefits for Tari as a trustless Masternode matching/batch processing engine with strong privacy features.
Counterparty
What is it?
Counterparty is NOT a blockchain. Counterparty is a token protocol released in January 2014 that operates on Bitcoin. It has a fully functional Decentralized Exchange (DEX), as well as several hardcoded smart contracts defined that include contracts for difference and binary options ("bets"). To operate, Counterparty utilizes "embedded consensus", which means that a Counterparty transaction is created and embedded into a Bitcoin transaction, using encoding such as 1-of-3 multisignature (multisig), Pay-to-Script-Hash (P2SH) or Pay-to-Public-Key-Hash (P2PKH). Counterparty nodes, i.e. nodes that run both the `bitcoind` and `counterparty-server` applications, will receive Bitcoin transactions as normal (from `bitcoind`). The `counterparty-server` will then scan each, and decode and parse any embedded Counterparty transactions it finds. In effect, Counterparty is a ledger within the larger Bitcoin ledger, and the functioning of embedded consensus can be thought of as similar to the fitting of one Russian stacking doll inside another. ([30], [31], [32])
Embedded consensus also means that the nodes maintain identical ledgers without using a separate peer-to-peer network, solely using the Bitcoin blockchain for all communication (i.e. timestamping, transaction ordering and transaction propagation). Unlike Bitcoin, which has the concept of both a soft fork and a hard fork, a change to the protocol or "consensus" code of Counterparty always has the potential to create a hard fork. In practice, this means that each Counterparty node must run the same version of `counterparty-server` (or at least the same minor version, e.g. the "3" in 2.3.0) so that the protocol code matches up across all nodes. ([56], [57])
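Embedded consensus can be illustrated with a toy encoder and parser. The `CNTRPRTY` magic prefix is the real one; the payload layout and the parsing logic here are invented for illustration:

```python
PREFIX = b"CNTRPRTY"  # magic bytes identifying Counterparty data inside a
                      # Bitcoin transaction (real prefix; payload simplified)

def embed(payload: bytes) -> bytes:
    # Data a Counterparty client would place inside a Bitcoin transaction
    # output, e.g. via OP_RETURN or a 1-of-3 multisig encoding.
    return PREFIX + payload

def parse(tx_data: bytes):
    # What counterparty-server does for every Bitcoin transaction it sees:
    # extract the payload if the magic prefix is present, otherwise ignore.
    if tx_data.startswith(PREFIX):
        return tx_data[len(PREFIX):]
    return None  # ordinary Bitcoin transaction; not part of the embedded ledger

tx = embed(b"send 10 XCP to 1Bob")
payload = parse(tx)                 # the embedded Counterparty instruction
ignored = parse(b"plain bitcoin tx")  # None: not a Counterparty transaction
```

Since every node applies the same parsing rules to the same Bitcoin blockchain, all nodes derive identical Counterparty ledgers without exchanging a single message among themselves.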
Unlike Bitcoin's UTXO model, the Counterparty token protocol utilizes an accounts system where each Bitcoin address is an account, and Counterparty credit and debit transactions for a specific token type affect account balances of that token for that given address. The decentralized exchange allows for lowfriction exchanging of different tokens between addresses, utilizing the concept of "orders", which are individual transactions made by Counterparty clients, and "order matches", which the Counterparty protocol itself will generate as it parses new orders that overlap existing active orders in the system. It is the Counterparty protocol code itself that manages the escrow of tokens when an order is created, the exchange of tokens between two addresses that have overlapping orders, and the release of those assets from escrow postexchange.
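The accounts model can be sketched as a simple credit/debit ledger keyed on (address, token) pairs (illustrative only; the addresses and amounts are invented):

```python
from collections import defaultdict

# Toy Counterparty-style accounts ledger: each (address, token) pair has a
# balance; a send is a debit plus a matching credit, unlike Bitcoin's UTXOs.
balances = defaultdict(int)

def credit(address, token, amount):
    balances[(address, token)] += amount

def debit(address, token, amount):
    assert balances[(address, token)] >= amount, "insufficient balance"
    balances[(address, token)] -= amount

def send(sender, receiver, token, amount):
    debit(sender, token, amount)
    credit(receiver, token, amount)

credit("1Alice", "XCP", 100)        # e.g. awarded through proof of burn
send("1Alice", "1Bob", "XCP", 40)
# 1Alice now holds 60 XCP; 1Bob holds 40 XCP
```

Escrow for DEX orders works the same way: the protocol debits the maker's balance into an escrow account when the order is created, and credits the counterparties when an order match settles.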
Counterparty uses its own token, XCP, which was created through a "proof of burn" process during January 2014 [58]. In that month, over 2,000 bitcoins were destroyed by various individuals sending them to an unspendable address on the Bitcoin network (`1CounterpartyXXXXXXXXXXXXXXXUWLpVr`), which caused the Counterparty protocol to award the sending addresses with a corresponding amount of XCP. XCP is used for payment of asset creation fees, as collateral for contracts for difference/binary options, and often as a base token in decentralized exchange transactions (largely due to the complexities of using Bitcoin (BTC) in such trades).
Support for the Ethereum Virtual Machine (EVM) was implemented, but never included on the mainnet version [30]. With the Counterparty EVM implementation, all published Counterparty smart contracts "live" at Bitcoin addresses that start with a `C`. Counterparty is used to broadcast an `execute` transaction to call a specific function or method in the smart contract code. Once an execution transaction is confirmed by a Bitcoin miner, the Counterparty federated nodes will receive the request and execute that method. The contract state is modified as the smart contract code executes, and is stored in the Counterparty database. [56]
General consensus is that a federated network is a distributed network of centralized networks. The Ripple blockchain implements a Federated Byzantine Agreement (FBA) consensus mechanism. Federated sidechains implement a security protocol using a trusted federation of mutually distrusting functionaries/notaries. Counterparty utilizes a "full stack" packaging system for its components and all dependencies, called the "federated node" system. However, this usage refers to federated in the general definition, i.e. "set up as a single centralized unit within which each state or division keeps some internal autonomy". ([54], [55], [28])
Who uses it?
The most notable projects built on top of Counterparty are Age of Chains, Age of Rust, Augmentors, Authparty, Bitcorns, Blockfreight™, Blocksafe, BTCpaymarket.com, CoinDaddy, COVAL, FoldingCoin, FootballCoin, GetGems, IndieBoard, LTBCoin (Letstalkbitcoin.com), Mafia Wars, NVO, Proof of Visit, Rarepepe.party, SaruTobi Island, Spells of Genesis, Takara, The Scarab Experiment, Token.FM, Tokenly, TopCoin and XCP DEX. [32]
In the past, projects such as Storj and SWARM also built on Counterparty.
COVAL is being developed with the primary purpose of moving value using "off-chain" methods. It uses its own set of node runners to manage various off-chain distributed ledgers and ledger-assigned wallets, implementing an extended transaction value system whereby tokens, as well as containers of tokens, can be securely transacted. Scaling within the COVAL ecosystem is thus achievable, because it does not rely solely on the Counterparty federated nodes to execute smart contracts. [33]
Strengths
 Counterparty provides a simple way to add "layer 2" functionality, i.e. hardcoded smart contracts, to an already existing blockchain implementation that supports basic data embedding.
 Counterparty's embedded consensus model utilizes "permissionless innovation", meaning that even the Bitcoin core developers could not stop the use of the protocol layer without seriously crippling the network.
Weaknesses
 Embedded consensus requires lockstep upgrades from network nodes to avoid forks.
 Embedded consensus imposes limitations on the ability of the secondary layer to interact with the primary layer's token. Counterparty, for example, cannot manipulate BTC balances or otherwise directly utilize BTC.
 With embedded consensus, nodes maintain identical ledgers without using a peer-to-peer network. One could argue that this hampers the flexibility of the protocol. It also limits the speed of the protocol to the speed of the underlying blockchain.
Opportunities
 Nodes can implement improved consensus models such as Federated Byzantine Agreement. [55]
 Refer to Scriptless Scripts.
2-way Pegged Secondary Blockchains
What are they?
A 2-way peg (2WP) allows the "transfer" of BTC from the main Bitcoin blockchain to a secondary blockchain, and vice versa, at a fixed rate by making use of an appropriate security protocol. The "transfer" actually involves BTC being locked on the main Bitcoin blockchain and unlocked/made available on the secondary blockchain. The 2WP promise is concluded when an equivalent number of tokens on the secondary blockchain are locked (in the secondary blockchain) so that the original bitcoins can be unlocked. ([22], [28])
 Sidechain: When the security protocol is implemented using Simplified Payment Verification (SPV) proofs, i.e. blockchain transaction verification without downloading the entire blockchain, the secondary blockchain is referred to as a Sidechain. [22]
 Drivechain: When the security protocol is implemented by giving custody of the BTC to miners, where miners vote on when to unlock BTC and where to send them, the secondary blockchain is referred to as a Drivechain. In this scheme, the miners sign the block header using a Dynamic Membership Multi-party Signature (DMMS). ([22], [28])
 Federated Peg/Sidechain: When the security protocol is implemented by having a trusted federation of mutually distrusting functionaries/notaries, the secondary blockchain is referred to as a Federated Peg/Sidechain. In this scheme, the DMMS is replaced with a traditional multisignature scheme. ([22], [28])
 Hybrid Sidechain-Drivechain-Federated Peg: When the security protocol is implemented with SPV proofs going to the secondary blockchain, and a dynamic mixture of miner DMMS and functionary/notary multisignatures going back to the main Bitcoin blockchain, the secondary blockchain is referred to as a Hybrid Sidechain-Drivechain-Federated Peg. ([22], [28], [29])
The following figure shows an example of a 2WP Bitcoin secondary blockchain using a Hybrid Sidechain-Drivechain-Federated Peg security protocol [22]:
BTC on the main Bitcoin blockchain are locked by means of a P2SH transaction, whereby BTC can be sent to a script hash instead of a public key hash. To unlock the BTC in the P2SH transaction, the recipient must provide a script matching the script hash, together with data that makes the script evaluate to true. [23]
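The P2SH locking condition can be sketched as follows. Python stands in for Bitcoin Script, and SHA-256 stands in for Bitcoin's HASH160 (SHA-256 followed by RIPEMD-160); the redeem script bytes are illustrative:

```python
import hashlib

# The output commits only to the hash of a redeem script; spending
# requires presenting a script that matches the committed hash AND
# evaluates to true (evaluation reduced to a boolean here).
def script_hash(script: bytes) -> bytes:
    return hashlib.sha256(script).digest()  # stand-in for HASH160

redeem_script = b"2 <pubA> <pubB> <pubC> 3 OP_CHECKMULTISIG"  # illustrative
committed_hash = script_hash(redeem_script)  # stored in the P2SH output

def can_spend(provided_script: bytes, script_evaluates_true: bool) -> bool:
    return script_hash(provided_script) == committed_hash and script_evaluates_true

assert can_spend(redeem_script, True)
assert not can_spend(b"some other script", True)  # wrong script
assert not can_spend(redeem_script, False)        # script evaluated false
```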
Who does them?

RSK (formerly Rootstock) is a 2WP Bitcoin secondary blockchain using a hybrid sidechain-drivechain security protocol. RSK is scalable up to 100 transactions per second (Tx/s) and provides a second-layer scaling solution for Bitcoin, as it can relieve on-chain Bitcoin transactions. ([14], [15], [16])

Hivemind (formerly Truthcoin) is implementing a Peer-to-Peer Oracle Protocol that absorbs accurate data into a blockchain so that Bitcoin users can speculate in Prediction Markets. [24]

Blockstream is implementing a Federated Sidechain called Liquid, with the functionaries/notaries being made up of participating exchanges and Bitcoin businesses. [29]
Strengths
 Permissionless innovation: Anyone can create a new blockchain project that uses the underlying strengths of the main Bitcoin blockchain using real BTC as the currency. [20]
 New features: Sidechains/Drivechains can be used to test or implement new features without risk to the main Bitcoin blockchain or without having to change its protocol, such as Schnorr signatures and zeroknowledge proofs. ([20], [25])
 Chains-as-a-Service (CaaS): It is possible to create a CaaS using data storage 2WP secondary blockchains. [25]
 Smart Contracts: 2WP secondary blockchains make it easier to implement smart contracts. [25]
 Scalability: 2WP secondary blockchains can support larger block sizes and more transactions per second, thus scaling the main Bitcoin blockchain. [25]
Weaknesses
 Security: Transferring BTC back into the main Bitcoin blockchain is not secure enough and can be manipulated, because Bitcoin does not support SPV from 2WP secondary blockchains. [21]
 51% attacks: 2WP secondary blockchains are hugely dependent on merged mining. Mining power centralization and 51% attacks are thus a real threat, as demonstrated for Namecoin and Huntercoin (refer to Merged Mining Introduction).
 The DMMS provided by mining is not very secure for small systems, while the trust of the federation/notaries is riskier for large systems. [28]
Opportunities
2WP secondary blockchains may present interesting opportunities to scale multiple payments associated with multiple non-fungible assets living on a secondary layer. However, care must be taken with the privacy and security of funds, as well as with transferring funds into and out of the 2WP secondary blockchains.
Lumino
What is it?
Lumino Transaction Compression Protocol (LTCP) is a technique for transaction compression that allows the processing of a higher volume of transactions while storing much less information. The Lumino network is a Lightning-like extension of the RSK platform that uses LTCP. Delta (difference) compression of selected fields of transactions from the same owner is performed using aggregate signing of previous transactions, so that previous signatures can be disposed of. [17]
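The field-delta idea can be illustrated as follows. This is a simplification for intuition only, assuming plain dictionaries for transactions; it is not the LTCP wire format:

```python
# Delta compression: keep only the fields that changed relative to
# the owner's previous transaction, as described above.
def delta(prev_tx: dict, new_tx: dict) -> dict:
    return {k: v for k, v in new_tx.items() if prev_tx.get(k) != v}

def apply_delta(prev_tx: dict, d: dict) -> dict:
    # Reconstruct the full transaction from the previous one plus the delta.
    return {**prev_tx, **d}

prev_tx = {"from": "addr1", "to": "addr2", "amount": 10, "nonce": 7}
new_tx  = {"from": "addr1", "to": "addr2", "amount": 12, "nonce": 8}

d = delta(prev_tx, new_tx)
assert d == {"amount": 12, "nonce": 8}          # unchanged fields dropped
assert apply_delta(prev_tx, d) == new_tx        # lossless reconstruction
```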
Each transaction contains a set of persistent fields called the Persistent Transaction Information (PTI) and a compound record of user transaction data called the SigRec. A Lumino block stores two Merkle trees: one containing all PTIs, and the other containing all transaction IDs (the hash of the signed SigRec). This second Merkle tree is conceptually similar to the Segwit witness tree, thus forming the witness part. Docking is the process whereby SigRec and signature data can be pruned from the blockchain if valid linked PTI information exists. [17]
Who does it?
RSK, which launched on mainnet in January 2018. The Lumino Network has yet to be launched on testnet. ([18], [19])
Strengths
The Lumino Network promises high efficiency in pruning the RSK blockchain.
Weaknesses
 The Lumino Network has not yet been released.
 The white paper is not conclusive about how the Lumino Network will handle payment channels. [17]
Opportunities
LTCP pruning may be beneficial to Tari.
Scriptless Scripts
What is it?
The term Scriptless Scripts was coined by mathematician Andrew Poelstra. It entails offering scripting functionality without actual scripts on the blockchain, in order to implement smart contracts. At the time of writing (July 2018), it can only work on the Mimblewimble blockchain and makes use of a specific Schnorr signature scheme [38] that allows for signature aggregation, i.e. mathematically combining several signatures into a single signature, without having to prove Knowledge of Secret Keys (KOSK). This is known as the plain public-key model, where the only requirement is that each potential signer has a public key. The KOSK scheme requires that users prove knowledge (or possession) of the secret key during public key registration with a certification authority, and is one way to generically prevent rogue-key attacks. ([35], [36])
Signature aggregation properties sought here are ([35], [36]):
 must be provably secure in the plain public-key model;
 must satisfy the normal Schnorr equation, whereby the resulting signature can be written as a function of a combination of the public keys;
 must allow for Interactive Aggregate Signatures (IAS), where the signers are required to cooperate;
 must allow for Non-interactive Aggregate Signatures (NAS), where the aggregation can be done by anyone;
 must allow each signer to sign the same message;
 must allow each signer to sign their own message.
This differs from a normal multisignature scheme, where one message is signed by all.
Let's say Alice and Bob each need to provide half a Schnorr signature for a transaction, whereby Alice promises to reveal a secret to Bob in exchange for one crypto coin. Alice can calculate the difference between her half of the Schnorr signature and the Schnorr signature of the secret (the adaptor signature) and hand it over to Bob. Bob can verify the correctness of the adaptor signature without knowing the original signatures. Bob then provides his half of the Schnorr signature to Alice, so that she can broadcast the full Schnorr signature to claim the crypto coin. By virtue of the broadcast of the full Schnorr signature, Bob gains access to Alice's half of the signature, and since he already knows the adaptor signature, he can calculate the Schnorr signature of the secret, thereby claiming his prize. This is also known as Zero-Knowledge Contingent Payments. ([34], [37])
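The exchange above can be sketched with toy numbers. The snippet below is purely illustrative: a small multiplicative group with insecure parameters stands in for the elliptic-curve Schnorr setting, and all names are hypothetical. It shows the two key steps, verifying the adaptor signature without knowing the originals, and recovering the secret once the full signature is broadcast:

```python
import hashlib
import random

# Toy group parameters (NOT secure; illustrative only).
q = 1000003          # small prime modulus
g = 2                # group element used as generator
n = q - 1            # exponent arithmetic is done modulo the group order

def challenge(msg: bytes) -> int:
    # Toy Fiat-Shamir challenge derived from the message.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Alice's key pair and nonce.
a = random.randrange(1, n); A = pow(g, a, q)
r = random.randrange(1, n); R = pow(g, r, q)

# The secret Alice promises to reveal, with its public value T = g^t.
t = random.randrange(1, n); T = pow(g, t, q)

e = challenge(b"pay 1 coin to Alice")
s = (r + e * a) % n          # Alice's (half) Schnorr signature
adaptor = (s - t) % n        # adaptor signature handed to Bob

# Bob verifies the adaptor without learning s or t:
# g^adaptor * T should equal R * A^e, since g^(s-t) * g^t = g^s.
assert (pow(g, adaptor, q) * T) % q == (R * pow(A, e, q)) % q

# Once the full signature s is broadcast, Bob recovers the secret:
recovered = (s - adaptor) % n
assert recovered == t
```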
Who does it?
Mimblewimble is being cited by Andrew Poelstra as being the ultimate Scriptless Script. [37]
Strengths
 Data savings: Signature aggregation provides data compression on the blockchain.
 Privacy: Nothing about the Scriptless Script smart contract, other than the settlement transaction, is ever recorded on the blockchain. No one will ever know that an underlying smart contract was executed.
 Multiplicity: Multiple digital assets can be transferred between two parties in a single settlement transaction.
 Implicit scalability: Scalability on the blockchain is achieved by virtue of compressing multiple transactions into a single settlement transaction. Transactions are only broadcast to the blockchain once all preconditions are met.
Weaknesses
Recent work by Maxwell et al. ([35], [36]) showed that a naive implementation of Schnorr multisignatures that satisfies key aggregation is not secure, and that the Bellare and Neven (BN) Schnorr signature scheme loses the key aggregation property in order to gain security in the plain public-key model. They proposed a new Schnorr-based multisignature scheme with key aggregation called MuSig, which is provably secure in the plain public-key model. It has the same key and signature size as standard Schnorr signatures. The joint signature can be verified in exactly the same way as a standard Schnorr signature, with respect to a single "aggregated" public key that can be computed from the individual public keys of the signers. Note that the case of interactive signature aggregation, where each signer signs their own message, must still be proven by a complete security analysis.
Opportunities
Tari plans to implement the Mimblewimble blockchain, and should implement Scriptless Scripts together with the MuSig Schnorr signature scheme.
However, this in itself will not provide the Layer 2 scaling performance that will be required. Big Neon, the initial business application to be built on top of the Tari blockchain, needs to "facilitate 500 tickets in 4 minutes", i.e. approximately two spectators allowed access every second, with negligible latency.
The Mimblewimble Scriptless Scripts could be combined with a federated node (or specialized masternode), similar to that being developed by Counterparty. The secrets that are revealed by virtue of the MuSig Schnorr signatures can instantiate normal smart contracts inside the federated node, with the final consolidated state update being written back to the blockchain after the event.
Directed Acyclic Graph (DAG) Derivative Protocols
What is it?
In mathematics and computer science, a Directed Acyclic Graph (DAG) is a finite directed graph with no directed cycles. A directed graph is acyclic if and only if it has a topological ordering, i.e. for every directed edge (u, v) from vertex u to vertex v, u comes before v in the ordering. [42]
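The equivalence between acyclicity and the existence of a topological ordering can be demonstrated with Kahn's algorithm, which orders all vertices exactly when the graph is a DAG:

```python
from collections import deque

def topological_order(vertices, edges):
    """Return a topological ordering of the graph, or None if it has a cycle."""
    indegree = {v: 0 for v in vertices}
    for u, v in edges:
        indegree[v] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for a, b in edges:
            if a == u:
                indegree[b] -= 1
                if indegree[b] == 0:
                    queue.append(b)
    # Every vertex placed => acyclic; leftover vertices => a cycle exists.
    return order if len(order) == len(vertices) else None

# A small block-DAG: edge (u, v) means u comes before v in the ordering.
assert topological_order("abcd", [("a","b"), ("a","c"), ("b","d"), ("c","d")]) is not None
assert topological_order("ab", [("a","b"), ("b","a")]) is None  # cycle: not a DAG
```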
DAGs in blockchain were first proposed as the GHOST protocol ([44], [45]), a version of which is implemented in Ethereum as the Ethash PoW algorithm (based on Dagger-Hashimoto ([59], [60])). Thereafter, Braiding ([40], [41]), Jute [43], SPECTRE [46] and PHANTOM [52] were presented. The principle of DAG in blockchain is to present a way to include traditional off-chain blocks into the ledger, governed by mathematical rules. A parent that is simultaneously an ancestor of another parent is disallowed:
The main problems to be solved by the DAG derivative protocols are:
 inclusion of orphaned blocks (decrease the negative effect of slow propagation); and
 mitigation against selfish mining attacks.
The underlying concept is still in the research and exploration phase. [39]
In most DAG derivative protocols, blocks containing conflicting transactions, i.e. conflicting blocks, are not orphaned. A subsequent block is built on top of both of the conflicting blocks, but the conflicting transactions themselves are thrown out while processing the chain. SPECTRE, for one, provides a scheme whereby blocks vote to decide which transactions are robustly accepted, robustly rejected or stay in an indefinite “pending” state in case of conflicts. Both conflicting blocks become part of the shared history, and both conflicting blocks earn their respective miners a block reward. ([39], [50], [51])
Note: Braiding requires that parents and siblings may not contain conflicting transactions.
Inclusive (DAG derivative) protocols that integrate the contents of traditional off-chain blocks into the ledger result in incentives for behavior changes by the nodes, which leads to increased throughput and a better payoff for weak miners. [45]
DAG derivative protocols are not Layer 2 Scaling solutions, but they offer significant scaling of the primary blockchain.
Who does it?
 The School of Engineering and Computer Science, The Hebrew University of Jerusalem ([44], [45], [46], [50], [51])
 GHOST, SPECTRE, PHANTOM
 DAGlabs [53] (Note: This is the commercial development chapter.)
 SPECTRE, PHANTOM
 SPECTRE provides high throughput and fast confirmation times. Its DAG structure represents an abstract vote regarding the order between each pair of blocks, but this pairwise ordering may not be extendable to a full linear ordering due to possible Condorcet cycles.
 PHANTOM provides a linear ordering over the blocks of the DAG and can support consensus regarding any general computation (smart contracts), which SPECTRE cannot. In order for a computation or contract to be processed correctly and consistently, the full order of events in the ledger is required, particularly the order of inputs to the contract. However, PHANTOM's confirmation times are much slower than those in SPECTRE.
 Ethereum, with the Ethash PoW algorithm that was adapted from GHOST
 Dr. Bob McElrath ([40], [41])
 Braiding
 David Vorick [43]
 Jute
 Crypto currencies: IOTA [47], NANO [48], Byteball [49]
Strengths
 Layer 1 scaling: Increased transaction throughput on the main blockchain.
 Fairness: Better payoff for weak miners.
 Decentralization: Mining centralization is mitigated, as weaker miners also get profits.
 Transaction confirmation times: Confirmation times of several seconds (SPECTRE).
 Smart contracts: Support smart contracts (PHANTOM).
Weaknesses
 Not yet 100% proven; development is continuing.
 The DAG derivative protocols differ on important aspects such as miner payment schemes, security models, support for smart contracts, and confirmation times. All DAG derivative protocols are therefore not created equal - beware!
Opportunities
Opportunities exist for Tari in applying the basic DAG principles to make a 51% attack harder, by virtue of fairness and resistance to mining centralization. Choosing the correct DAG derivative protocol could also significantly improve Layer 1 scaling.
Observations
Although not all technologies covered here are Layer 2 scaling solutions, their strengths should be considered as building blocks for the Tari protocol.
References
[1] "TumbleBit: An Untrusted Bitcoin-compatible Anonymous Payment Hub" [online]. Available: http://cspeople.bu.edu/heilman/tumblebit. Date accessed: 20180712.
[2] Ethan Heilman, Leen AlShenibr, Foteini Baldimtsi, Alessandra Scafuro and Sharon Goldberg, "TumbleBit: An Untrusted Bitcoin-compatible Anonymous Payment Hub" [online]. Available: https://eprint.iacr.org/2016/575.pdf. Date accessed: 20180708.
[3] "Anonymous Transactions Coming to Stratis" [online]. Available: https://medium.com/@Stratisplatform/anonymoustransactionscomingtostratisfced3f5abc2e. Date accessed: 20180708.
[4] TumbleBit Proof of Concept GitHub Repository [online]. Available: https://github.com/BUSEC/TumbleBit. Date accessed: 20180708.
[5] NTumbleBit GitHub Repository [online]. Available: https://github.com/nTumbleBit/nTumbleBit. Date accessed: 20180712.
[6] "Breeze Tumblebit Server Experimental Release" [online]. Available: https://stratisplatform.com/2017/07/17/breezetumblebitserverexperimentalrelease. Date accessed: 20180712.
[7] "Breeze Wallet with Breeze Privacy Protocol (Dev. Update)" [online]. Available: https://stratisplatform.com/2017/09/20/breezewalletwithbreezeprivacyprotocoldevupdate. Date accessed: 20180712.
[8] Ethan Heilman, Foteini Baldimtsi and Sharon Goldberg, "Blindly Signed Contracts: Anonymous On-chain and Off-chain Bitcoin Transactions" [online]. Available: https://eprint.iacr.org/2016/056.pdf. Date accessed: 20180712.
[9] Ethan Heilman and Leen AlShenibr, "TumbleBit: An Untrusted Bitcoin-compatible Anonymous Payment Hub", 08 October 2016, in Conference: Scaling Bitcoin 2016 Milan. Available: https://www.youtube.com/watch?v=8BLWUUPfh2Q&feature=youtu.be&t=1h3m10s. Date accessed: 20180713.
[10] "Better Bitcoin Privacy, Scalability: Developers Making TumbleBit a Reality" [online]. Available: https://bitcoinmagazine.com/articles/betterbitcoinprivacyscalabilitydevelopersaremakingtumblebitreality. Date accessed: 20180713.
[11] Bitcoinwiki: Contract [online]. Available: https://en.bitcoin.it/wiki/Contract. Date accessed: 20180713.
[12] "Bitcoin Privacy is a Breeze: TumbleBit Successfully Integrated Into Breeze" [online]. Available: https://stratisplatform.com/2017/08/10/bitcoinprivacytumblebitintegratedintobreeze. Date accessed: 20180713.
[13] "TumbleBit Wallet Reaches One Step Forward" [online]. Available: https://www.bitcoinmarketinsider.com/tumblebitwalletreachesonestepforward. Date accessed: 20180713.
[14] "A Survey of Second Layer Solutions for Blockchain Scaling Part 1" [online]. Available: https://www.ethnews.com/asurveyofsecondlayersolutionsforblockchainscalingpart1. Date accessed: 20180716.
[15] "Second-layer Scaling" [online]. Available: https://lunyr.com/article/SecondLayer_Scaling. Date accessed: 20180716.
[16] RSK Website [online]. Available: https://www.rsk.co. Date accessed: 20180716.
[17] S. D. Lerner, "Lumino Transaction Compression Protocol (LTCP)" [online]. Available: https://uploads.strikinglycdn.com/files/ec5278f8218c407aaf3cab71a910246d/LuminoTransactionCompressionProtocolLTCP.pdf. Date accessed: 20180716.
[18] "Bitcoinbased Ethereum Rival RSK Set to Launch Next Month" [online]. Available: https://cryptonewsmonitor.com/2017/11/11/bitcoinbasedethereumrivalrsksettolaunchnextmonth. Date accessed: 20180716.
[19] RSK Blog Website [online]. Available: https://media.rsk.co/. Date accessed: 20180716.
[20] "Drivechain: Enabling Bitcoin Sidechain" [online]. Available: http://www.drivechain.info. Date accessed: 20180717.
[21] "Drivechain: The Simple Two Way Peg" [online]. Available: http://www.truthcoin.info/blog/drivechain. Date accessed: 20180717.
[22] "Sidechains, Drivechains, and RSK 2Way Peg Design" [online]. Available: https://www.rsk.co/blog/sidechainsdrivechainsandrsk2waypegdesign or https://uploads.strikinglycdn.com/files/27311e59083249b5ab0e2b0a73899561/Drivechains_Sidechains_and_Hybrid_2way_peg_Designs_R9.pdf. Date accessed: 20180718.
[23] "Pay to Script Hash"[online]. Available: https://en.bitcoin.it/wiki/Pay_to_script_hash. Date accessed: 20180718.
[24] Hivemind Website [online]. Available: http://bitcoinhivemind.com. Date accessed: 20180718.
[25] "Drivechains: What do they Enable? Cloud 3.0 Services Smart Contracts and Scalability" [online]. Available: http://drivechains.org/whataredrivechains/whatdoesitenable. Date accessed: 20180719.
[26] "Bloq’s Paul Sztorc on the 4 Main Benefits of Sidechains" [online]. Available: https://bitcoinmagazine.com/articles/bloqspaulsztorconthemainbenefitsofsidechains1463417446. Date accessed: 20180719.
[27] Blockstream Website [online]. Available: https://blockstream.com/technology. Date accessed: 20180719.
[28] Adam Back, Matt Corallo, Luke Dashjr, Mark Friedenbach, Gregory Maxwell, Andrew Miller, Andrew Poelstra, Jorge Timón and Pieter Wuille, 20141022. "Enabling Blockchain Innovations with Pegged Sidechains" [online]. Available: https://blockstream.com/sidechains.pdf. Date accessed: 20180719.
[29] Johnny Dilley, Andrew Poelstra, Jonathan Wilkins, Marta Piekarska, Ben Gorlick and Mark Friedenbachet, "Strong Federations: An Interoperable Blockchain Solution to Centralized Third Party Risks" [online]. Available: https://blockstream.com/strongfederations.pdf. Date accessed: 20180719.
[30] CounterpartyXCP/Documentation/Smart Contracts/EVM FAQ [online]. Available: https://github.com/CounterpartyXCP/Documentation/blob/master/Basics/FAQSmartContracts.md. Date accessed: 20180723.
[31] Counterparty Development 101 [online]. Available: https://medium.com/@droplister/counterpartydevelopment1012f4d9b0c8df3. Date accessed: 20180723.
[32] Counterparty Website [online]. Available: https://counterparty.io. Date accessed: 20180724.
[33] COVAL Website [online]. Available: https://coval.readme.io/docs. Date accessed: 20180724.
[34] "Scriptless Scripts: How Bitcoin Can Support Smart Contracts Without Smart Contracts" [online]. Available: https://bitcoinmagazine.com/articles/scriptlessscriptshowbitcoincansupportsmartcontractswithoutsmartcontracts. Date accessed: 20180724.
[35] "Key Aggregation for Schnorr Signatures" [online]. Available: https://blockstream.com/2018/01/23/musigkeyaggregationschnorrsignatures.html. Date accessed: 20180724.
[36] Gregory Maxwell, Andrew Poelstra, Yannick Seurin and Pieter Wuille, "Simple Schnorr Multisignatures with Applications to Bitcoin" [online]. Available: 20 May 2018, https://eprint.iacr.org/2018/068.pdf. Date accessed: 20180724.
[37] A. Poelstra, 4 March 2017, "Scriptless Scripts" [online]. Available: https://download.wpsoftware.net/bitcoin/wizardry/mwslides/201703mitbitcoinexpo/slides.pdf. Date accessed: 20180724.
[38] bipschnorr.mediawiki [online]. Available: https://github.com/sipa/bips/blob/bipschnorr/bipschnorr.mediawiki. Date accessed: 20180726.
[39] "If There is an Answer to Selfish Mining, Braiding could be It" [online]. Available: https://bitcoinmagazine.com/articles/ifthereisananswertoselfishminingbraidingcouldbeit1482876153. Date accessed: 20180727.
[40] B. McElrath, "Braiding the Blockchain", in Conference: Scaling Bitcoin, Hong Kong, 7 Dec 2015 [online]. Available: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/2_breaking_the_chain_1_mcelrath.pdf. Date accessed: 20180727.
[41] "Braid Examples" [online]. Available: https://rawgit.com/mcelrath/braidcoin/master/Braid%2BExamples.html. Date accessed: 20180727.
[42] "Directed Acyclic Graph" [online]. Available: https://en.wikipedia.org/wiki/Directed_acyclic_graph. Date accessed: 20180730.
[43] "Braiding Techniques to Improve Security and Scaling" [online]. Available: https://scalingbitcoin.org/milan2016/presentations/D2%20%209%20%20David%20Vorick.pdf. Date accessed: 20180730.
[44] Yonatan Sompolinsky and Aviv Zohar, "GHOST: Secure High-rate Transaction Processing in Bitcoin" [online]. Available: https://eprint.iacr.org/2013/881.pdf. Date accessed: 20180730.
[45] Yoad Lewenberg, Yonatan Sompolinsky and Aviv Zohar, "Inclusive Blockchain Protocols" [online]. Available: http://fc15.ifca.ai/preproceedings/paper_101.pdf. Date accessed: 20180730.
[46] Yonatan Sompolinsky, Yoad Lewenberg and Aviv Zohar, "SPECTRE: A Fast and Scalable Cryptocurrency Protocol" [online]. Available: http://www.cs.huji.ac.il/~yoni_sompo/pubs/16/SPECTRE_complete.pdf. Date accessed: 20180730.
[47] IOTA Website [online]. Available: https://www.iota.org/. Date accessed: 20180730.
[48] NANO: "Digital Currency for the Real World – the fast and free way to pay for everything in life" [online]. Available: https://nano.org/en. Date accessed: 20180730.
[49] Byteball [online]. Available: https://byteball.org/. Date accessed: 20180730.
[50] "SPECTRE: Serialization of Proof-of-work Events, Confirming Transactions via Recursive Elections" [online]. Available: https://medium.com/@avivzohar/thespectreprotocol7dbbebb707b. Date accessed: 20180730.
[51] Yonatan Sompolinsky, Yoad Lewenberg and Aviv Zohar, "SPECTRE: Serialization of Proof-of-work Events: Confirming Transactions via Recursive Elections" [online]. Available: https://eprint.iacr.org/2016/1159.pdf. Date accessed: 20180730.
[52] Yonatan Sompolinsky and Aviv Zohar, "PHANTOM: A Scalable BlockDAG Protocol" [online]. Available: https://docs.wixstatic.com/ugd/242600_92372943016c47ecb2e94b2fc07876d6.pdf. Date accessed: 20180730.
[53] DAGLabs Website [online]. Available: https://www.daglabs.com. Date accessed: 20180730.
[54] "Beyond Distributed and Decentralized: what is a federated network?" [Online.] Available: http://networkcultures.org/unlikeus/resources/articles/whatisafederatednetwork. Date accessed: 20180813.
[55] "Federated Byzantine Agreement" [online]. Available: https://towardsdatascience.com/federatedbyzantineagreement24ec57bf36e0. Date accessed: 20180813.
[56] "Counterparty Documentation: Frequently Asked Questions" [online]. Available: https://counterparty.io/docs/faq. Date accessed: 20180914.
[57] "Counterparty Documentation: Protocol Specification" [online]. Available: https://counterparty.io/docs/protocol_specification. Date accessed: 20180914.
[58] "Counterparty News: Why Proof-of-Burn, March 23, 2014" [online]. Available: https://counterparty.io/news/whyproofofburn. Date accessed: 20180914.
[59] Vitalik Buterin, "Dagger: A Memory-Hard to Compute, Memory-Easy to Verify Scrypt Alternative" [online]. Available: http://www.hashcash.org/papers/dagger.html. Date accessed: 20190212.
[60] Thaddeus Dryja, "Hashimoto: I/O Bound Proof of Work" [online]. Available: https://web.archive.org/web/20170810043640/https://pdfs.semanticscholar.org/3b23/7cc60c1b9650e260318d33bec471b8202d5e.pdf. Date accessed: 20190212.
Contributors
 https://github.com/hansieodendaal
 https://github.com/ksloven
 https://github.com/robbydermody
 https://github.com/anselld
Layer 2 Scaling  Executive Summary
Merged Mining
From Bitcoin Wiki
Merged mining is the act of using work done on another block chain (the Parent) on one or more Auxiliary block chains and to accept it as valid on its own chain, using Auxiliary ProofofWork (AuxPoW), which is the relationship between two block chains for one to trust the other's work as their own. The Parent block chain does not need to be aware of the AuxPoW logic as blocks submitted to it are still valid blocks.
From CryptoCompare
Merged mining is the process of allowing two different crypto currencies based on the same algorithm to be mined simultaneously. This allows low hash powered crypto currencies to increase the hashing power behind their network by bootstrapping onto more popular crypto currencies.
Merged Mining Introduction
Merged Mining with Multiple Auxiliary Chains
Merged Mining  Interesting Facts and Case Studies
 Namecoin (#307) with Bitcoin (#1)
 Dogecoin (#37) with Litecoin (#6)
 Huntercoin (#779) with Bitcoin (#1) or Litecoin (#6)
 Myriad (#510) with Bitcoin (#1) or Litecoin (#6)
 Monero (#12)/DigitalNote (#166) + FantomCoin (#1068)
 Some Statistics
 Observations
 51% Attacks
 Double Proof
 Analysis of Mining Power Centralization Issues
 Introduction of New Attack Vectors
What is Merged Mining?
Merged mining is the act of using work done on another blockchain (the Parent) on one or more Auxiliary blockchains, and accepting it as valid on its own chain, using Auxiliary Proof-of-Work (AuxPoW), which is the relationship between two blockchains for one to trust the other's work as their own. The Parent blockchain does not need to be aware of the AuxPoW logic, as blocks submitted to it are still valid blocks. [1]
As an example, the structure of merged mined blocks in Namecoin and Bitcoin is shown here [25]:
A transaction set is assembled for both blockchains. The hash of the AuxPoW block header is then inserted in the "free" bytes region (coinbase field) of the coinbase transaction and submitted to the Parent blockchain's Proof-of-Work (PoW). If the merge miner solves the block at the difficulty level of either blockchain or both blockchains, the respective block(s) are reassembled with the completed PoW and submitted to the correct blockchain. In the case of the Auxiliary blockchain, the Parent's block hash, Merkle tree branch and coinbase transaction are inserted in the Auxiliary block's AuxPoW header. This proves that enough work meeting the difficulty level of the Auxiliary blockchain was done on the Parent blockchain. ([1], [2], [25])
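The dual-difficulty check above can be sketched as follows. A single PoW attempt on the Parent header is compared against each chain's target; meeting either target yields a valid block for that chain. The header bytes and targets below are illustrative values only:

```python
import hashlib

# Double-SHA256 of the block header, interpreted as an integer,
# as in Bitcoin-style proof of work.
def pow_hash(header: bytes) -> int:
    return int.from_bytes(hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")

TARGET_PARENT = 2 ** 252  # harder target (higher difficulty, e.g. Bitcoin)
TARGET_AUX = 2 ** 255     # easier target (lower difficulty, e.g. Namecoin)

h = pow_hash(b"parent header embedding the AuxPoW hash in its coinbase, nonce=42")
solves_parent = h < TARGET_PARENT  # submit the reassembled block to the Parent chain
solves_aux = h < TARGET_AUX        # submit the AuxPoW block to the Auxiliary chain

# Because the Auxiliary target is easier, any Parent solution is also an Auxiliary solution.
assert solves_aux or not solves_parent
```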
The propagation of Parent and Auxiliary blocks is totally independent, and only governed by each chain's difficulty level. As an example, the following diagram shows how this can play out in practice with Namecoin and Bitcoin when the Parent difficulty (D_{BTC}) is greater than the Auxiliary difficulty (D_{NMC}). Note that BTC block 2' did not become part of the Parent blockchain propagation.
Merged Mining with Multiple Auxiliary Chains
A miner can use a single Parent to perform merged mining on multiple Auxiliary blockchains. The Merkle root of a Merkle tree that contains the block hashes of the Auxiliary blocks as leaves must then be inserted in the Parent's coinbase field, as shown in the following diagram. To prevent double-spending attacks, each Auxiliary blockchain must specify a unique ID that can be used to derive the leaf of the Merkle tree where the respective block hash must be located. [25]
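The scheme above can be sketched as follows. The slot-derivation rule used here (a hash of the chain ID modulo the tree size) is a deliberate simplification; real AuxPoW implementations use their own derivation functions, and the chain IDs and block hashes are illustrative:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin-style double SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def leaf_slot(chain_id: int, n_leaves: int) -> int:
    # Derive a deterministic leaf index from the unique chain ID (simplified rule).
    return int.from_bytes(sha256d(chain_id.to_bytes(4, "little")), "little") % n_leaves

def merkle_root(leaves):
    # Assumes a power-of-two number of leaves.
    level = list(leaves)
    while len(level) > 1:
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

n_leaves = 4
slots = [b"\x00" * 32] * n_leaves
aux_blocks = {7: sha256d(b"aux block A"), 13: sha256d(b"aux block B")}  # chain ID -> block hash
for chain_id, block_hash in aux_blocks.items():
    slots[leaf_slot(chain_id, n_leaves)] = block_hash

root = merkle_root(slots)  # this 32-byte digest is embedded in the Parent's coinbase field
```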
Merged Mining  Interesting Facts and Case Studies
Namecoin (#307) with Bitcoin (#1)
- Namecoin, the first fork of Bitcoin, introduced merged mining with Bitcoin [1] from block 19,200 onwards [3]. At the time of writing (May 2018), the block height of Namecoin was > 400,500 [4].
- Over the five-day period from 23 May 2018 to 27 May 2018, only 226 of 752 blocks posted transaction values over and above the block reward of 25 NMC, with an average transaction value of 159.231 NMC, including the block reward. [4]
- Slush Pool, which merge-mines Namecoin with Bitcoin, rewards all miners in BTC equivalent to the NMC earned, via an external exchange service. [5]
- P2pool, Multipool, Slush Pool, Eligius and F2pool are cited as top Namecoin merged mining pools. [6]
| @ 2018-05-30 | Bitcoin [16] | Namecoin [16] | Ratio |
|---|---|---|---|
| Block time target (s) | 600 | 600 | 100.00% |
| Hash rate (Ehash/s) | 31.705 | 21.814 | 68.80% |
| Blocks count | 525,064 | 400,794 | 76.33% |
Dogecoin (#37) with Litecoin (#6)
- Dogecoin introduced merged mining with Litecoin [8] from block 371,337 onwards [9]. At the time of writing (May 2018), the block height of Dogecoin was > 2,240,000 [10].
- Many in the Dogecoin user community believe merged mining with Litecoin saved Dogecoin from a 51% attack. [8]
| @ 2018-05-30 | Litecoin [16] | Dogecoin [16] | Ratio |
|---|---|---|---|
| Block time target (s) | 150 | 60 | 40.00% |
| Hash rate (Thash/s) | 311.188 | 235.552 | 75.69% |
| Blocks count | 1,430,517 | 2,241,120 | 156.67% |
Huntercoin (#779) with Bitcoin (#1) or Litecoin (#6)
- Huntercoin was released as a live experimental test to see how blockchain technology could handle full-on game worlds. [22]
- Huntercoin was originally designed to be supported for only one year, but development and support will continue. [22]
- Players are awarded coins for gaming, making Huntercoin the world's first human-mineable cryptocurrency.
- Coin distribution: 10 coins per block; nine for the game world and one for the miners. [22]
| @ 2018-06-01 | Huntercoin |
|---|---|
| Block time target (s) | 120 |
| Blockchain size (GB) | 17 |
| Pruned blockchain size (GB) | 0.5 |
| Blocks count | 2,291,060 |
| PoW algorithm (for merged mining) | SHA256, Scrypt |
Myriad (#510) with Bitcoin (#1) or Litecoin (#6)
- Myriad is the first currency to support five PoW algorithms and claims that its multi-PoW algorithm approach offers exceptional 51% resistance. [23]
- Myriad introduced merged mining from block 1,402,791 onwards. [24]
| @ 2018-06-01 | Myriad |
|---|---|
| Block time target (s) | 60 |
| Blockchain size (GB) | 2.095 |
| Blocks count | 2,442,829 |
| PoW algorithm (for merged mining) | SHA256d, Scrypt |
| PoW algorithm (others) | Myr-Groestl, Skein, Yescrypt |

Some solved multi-PoW block examples follow:
Monero (#12)/DigitalNote (#166) + FantomCoin (#1068)

FantomCoin was the first CryptoNote-based coin to implement merged mining with Monero, but it was abandoned until the DigitalNote developers, interested in merged mining with Monero, revived it in October 2016. ([17], [18], [19])

FantomCoin release notes 2.0.0:
- Fantomcoin 2.0 by XDN-project, major FCN update to the latest cryptonote-foundation codebase
- New FCN+XMR merged mining
- Default block size 100Kb

DigitalNote release notes 4.0.0-beta:
- EmPoWering XDN network security with merged mining with any CryptoNote cryptocurrency
- Second step to the PoA with the new type of PoW merged mining blocks

Merged mining of DigitalNote and FantomCoin with Monero is now stuck on the CryptoNight-based Monero forks, such as Monero Classic and Monero Original, after Monero's recent hard fork to CryptoNight v7. (Refer to Attack Vectors.)
| @ 2018-05-31 | Monero [16] | DigitalNote [16] | Ratio |
|---|---|---|---|
| Block time target (s) | 120 | 240 | 200.00% |
| Hash rate (Mhash/s) | 410.804 | 13.86 | 3.37% |
| Blocks count | 1,583,869 | 660,075 | 41.67% |

| @ 2018-05-31 | Monero [16] | FantomCoin [16] | Ratio |
|---|---|---|---|
| Block time target (s) | 120 | 60 | 50.00% |
| Hash rate (Mhash/s) | 410.804 | 19.29 | 4.70% |
| Blocks count | 1,583,869 | 2,126,079 | 134.23% |
Some Statistics

Merge-mined blocks in some cryptocurrencies on 18 June 2017 [24]:
Observations
- The Auxiliary blockchain's target block time can be smaller than, equal to, or larger than that of the Parent blockchain.
- The Auxiliary blockchain's hash rate is generally smaller than, but of the same order of magnitude as, that of the Parent blockchain.
- A multi-PoW algorithm approach may further enhance 51% resistance.
Attack Vectors
51% Attacks

51% attacks are real and relevant today. Bitcoin Gold (rank #28 @ 2018-05-29) and Verge (rank #33 @ 2018-05-29) recently suffered such attacks, followed by double-spend transactions. ([11], [12])

In a conservative analysis, successful attacks on PoW cryptocurrencies are more likely when dishonest entities control more than 25% of the total mining power. [24]

Tari tokens are envisaged to be merge-mined with Monero [13]. The security of the Monero blockchain is therefore important to the Tari blockchain.

Monero recently (6 April 2018) introduced a hard fork with an upgraded PoW algorithm, CryptoNight v7, at block height 1,546,000 to maintain its Application Specific Integrated Circuit (ASIC) resistance and hence guard against 51% attacks. The Monero team proposes changes to its PoW at every scheduled fork (i.e. every six months). ([14], [15])

An interesting question arises regarding what needs to happen to the Tari blockchain if the Monero blockchain is hard forked. Since the CryptoNight v7 hard fork, the network hash rate for Monero has hovered around approximately 500 MH/s, whereas in the two months immediately prior it was approximately 1,000 MH/s [20]. Thus approximately 50% of the previous hash power can be ascribed to ASICs and botnet miners.
NiceHash statistics for CryptoNight v7 [21] show a lag of two days for approximately 100,600 miners to get up to speed with providing the new hashing power after the Monero hard fork.
The Tari blockchain will have to fork together with, or just after, a scheduled Monero fork. Until it has forked, the Tari blockchain will be vulnerable to ASIC miners.
Double Proof
- A miner could cheat the PoW system by putting more than one Auxiliary block header into one Parent block. [7]
- Multiple Auxiliary blocks could then compete for the same PoW, which could subject an Auxiliary blockchain to nothing-at-stake attacks if the chain is forked, maliciously or by accident, with consequent attempts to reverse transactions. ([7], [26])
- More than one Auxiliary blockchain will be merge-mined with Monero.
Analysis of Mining Power Centralization Issues
With reference to [24] and [25]:
- In Namecoin, F2Pool reached and maintained a majority of the mining power for prolonged periods.
- Litecoin has experienced slight centralization since mid-2014, caused by Clevermining and F2Pool, among others.
- In Dogecoin, F2Pool was responsible for generating more than 33% of the blocks per day for significant periods, even exceeding the 50% threshold around the end of 2016.
- Huntercoin was instantly dominated by F2Pool and remained in this state until mid-2016.
- Myriadcoin appears to have experienced only a moderate impact. Multi-merge-mined blockchains allow for more than one parent cryptocurrency and have a greater chance of acquiring a higher difficulty per PoW algorithm than the respective parent blockchain.
- The distribution of the overall percentage of days below or above the centralization indicator thresholds on 18 June 2017 was as follows:
Introduction of New Attack Vectors
With reference to [24] and [25]:
- Miners can generate blocks for the merge-mined child blockchains at almost no additional cost, enabling attacks without risking financial losses.
- Merged mining as an attack vector works both ways, as parent cryptocurrencies cannot easily prevent being merge-mined by auxiliary blockchains.
- Merged mining can increase the hash rate of auxiliary blockchains, but it is not conclusively successful as a bootstrapping technique.
- Empirical evidence suggests that only a small number of mining pools are involved in merged mining, and that they enjoy block shares beyond the desired security and decentralization goals.
References
[1] Merged Mining Specification [online]. Available: https://en.bitcoin.it/wiki/Merged_mining_specification. Date accessed: 2018-05-28.
[2] How does Merged Mining Work? [online]. Available: https://bitcoin.stackexchange.com/questions/273/howdoesmergedminingwork. Date accessed: 2018-05-28.
[3] MergedMining.mediawiki [online]. Available: https://github.com/namecoin/wiki/blob/master/MergedMining.mediawiki. Date accessed: 2018-05-28.
[4] Bchain.info - Blockchain Explorer (NMC) [online]. Available: https://bchain.info/NMC. Date accessed: 2018-05-28.
[5] SlushPool Merged Mining [online]. Available: https://slushpool.com/help/firstaid/faqmergedmining. Date accessed: 2018-05-28.
[6] 5 Best Namecoin Mining Pools of 2018 (Comparison) [online]. Available: https://www.prooworld.com/namecoin/bestnamecoinminingpools. Date accessed: 2018-05-28.
[7] Alternative Chain [online]. Available: https://en.bitcoin.it/wiki/Alternative_chain#Protecting_against_double_proof. Date accessed: 2018-05-28.
[8] Merged Mining AMA/FAQ [online]. Available: https://www.reddit.com/r/dogecoin/comments/22niq9/merged_mining_amafaq. Date accessed: 2018-05-29.
[9] The Forkening is Happening at ~9:00AM EST [online]. Available: https://www.reddit.com/r/dogecoin/comments/2fyxg1/the_forkening_is_happening_at_900am_est_a_couple. Date accessed: 2018-05-29.
[10] Dogecoin Blockchain Explorer [online]. Available: https://dogechain.info. Date accessed: 2018-05-29.
[11] Bitcoin Gold Hit by Double Spend Attack, Exchanges Lose Millions [online]. Available: https://www.ccn.com/bitcoingoldhitbydoublespendattackexchangeslosemillions. Date accessed: 2018-05-29.
[12] Privacy Coin Verge Succumbs to 51% Attack [Again] [online]. Available: https://www.ccn.com/privacycoinvergesuccumbsto51attackagain. Date accessed: 2018-05-29.
[13] Tari Official Website [online]. Available: https://www.tari.com. Date accessed: 2018-05-29.
[14] Monero Hard Forks to Maintain ASIC Resistance, but ‘Classic’ Hopes to Spoil the Party [online]. Available: https://www.ccn.com/monerohardforkstomaintainasicresistancebutclassichopestospoiltheparty. Date accessed: 2018-05-29.
[15] PoW Change and Key Reuse [online]. Available: https://getmonero.org/2018/02/11/powchangeandkeyreuse.html. Date accessed: 2018-05-29.
[16] BitInfoCharts [online]. Available: https://bitinfocharts.com. Date accessed: 2018-05-30.
[17] Merged Mining with Monero [online]. Available: https://minergate.com/blog/mergedminingwithmonero. Date accessed: 2018-05-30.
[18] ANN DigitalNote XDN - ICCO Announce - NEWS [online]. Available: https://bitcointalk.org/index.php?topic=1082745.msg16615346#msg16615346. Date accessed: 2018-05-31.
[19] DigitalNote xdnproject [online]. Available: https://github.com/xdnproject. Date accessed: 2018-05-31.
[20] Monero Charts [online]. Available: https://chainradar.com/xmr/chart. Date accessed: 2018-05-31.
[21] NiceHash Statistics for CryptoNight v7 [online]. Available: https://www.nicehash.com/algorithm/cryptonightv7. Date accessed: 2018-05-31.
[22] Huntercoin: A Blockchain-based Game World [online]. Available: http://huntercoin.org. Date accessed: 2018-06-01.
[23] Myriad: A Coin for Everyone [online]. Available: http://myriadcoin.org. Date accessed: 2018-06-01.
[24] Merged Mining: Curse or Cure? [online]. Available: https://eprint.iacr.org/2017/791.pdf. Date accessed: 2019-02-12.
[25] Merged Mining: Analysis of Effects and Implications [online]. Available: http://repositum.tuwien.ac.at/obvutwhs/download/pdf/2315652. Date accessed: 2019-02-12.
[26] Problems - Consensus - 8. Proof of Stake [online]. Available: https://github.com/ethereum/wiki/wiki/Problems. Date accessed: 2018-06-05.
Digital Assets
From Simplicable
A digital asset is something that has value and can be owned but has no physical presence.
From The Digital Beyond
A digital asset is content that is stored in digital form or an online account, owned by an individual. The associated digital data are classified as intangible, personal property, as long as they stay digital, otherwise they quickly become tangible personal property.
From Wikipedia
A digital asset, in essence, is anything that exists in a binary format and comes with the right to use. Data that do not possess that right are not considered assets. Digital assets comes in many forms and may be stored on many types of digital appliances which are, or will be in existence once technology progresses to accommodate for the conception of new modalities which would be able to carry digital assets; notwithstanding the proprietorship of the physical device onto which the digital asset is located.
Non-fungible Tokens
From Wikipedia
A non-fungible token (NFT) is a special type of cryptographic token which represents something unique; non-fungible tokens are thus not interchangeable. This is in contrast to cryptocurrencies like Bitcoin, and many network or utility tokens that are fungible in nature. NFTs are used to create verifiable digital scarcity, for example in several specific applications that require unique digital items like crypto-collectibles and crypto-gaming.
From BTC Manager
Non-fungible tokens (NFTs) are blockchain tokens that are designed to be wholly unique and distinguishable from each other. Using unique metadata, avatars, individual token IDs and custody chains, NFTs are created to ensure that no two NFT tokens are identical. This is because they exist to store information rather than value, unlike their fungible counterparts.
Application of Howey to Blockchain network token sales
Introduction
Many blockchain networks use, or intend to use, cryptographic tokens ("tokens" or "digital assets") for various purposes. These purposes include as an incentive for network participants to contribute computing power to the network; or to gain access to certain goods, services or other network functionality. Some promoters of such a network sell tokens, or the right to receive tokens once the network has launched. They then use the sales proceeds to finance development and maintenance of the networks.
The U.S. Securities and Exchange Commission (SEC) has determined that many of the token offerings and sales over the past few years have been in violation of federal securities laws. In some cases, the SEC has taken action against the promoters of the offerings.
Refer to the following Orders under the Order Instituting Cease-and-Desist Proceedings Pursuant to [1, Section 8A]:
- Making Findings, and Imposing a Cease-and-Desist Order against Munchee Inc. (the Munchee Order);
- Making Findings, and Imposing Penalties and a Cease-and-Desist Order against CarrierEQ, Inc., D/B/A Airfox (the Airfox Order);
- Making Findings, and Imposing Penalties and a Cease-and-Desist Order against Paragon Coin, Inc. (the Paragon Order); and
- Making Findings, and Imposing a Cease-and-Desist Order against Gladius Network LLC (the Gladius Order).
In each case, the SEC's decision to take action has been underpinned by its determination that the digital assets that were offered and sold were securities pursuant to Section 2(a)(1) of the Securities Act of 1933 (the "Securities Act" or "Act") [1]. Under [1, Section 5], it is generally unlawful for any person, directly or indirectly, to offer or sell a security without complying with the registration requirement of Section 5, unless the securities offering qualifies for an exemption from registration.
The sanctions for a violation of [1, Section 5] can be significant. They include a preliminary and permanent injunction; rescission; disgorgement; prejudgment interest; and civil money penalties. It is important that those seeking to promote token offerings and sales do so only after ensuring that the tokens to be sold are not securities under [1], or that the offering, if it involves securities, complies with [1, Section 5], or otherwise qualifies for an exemption from registration thereunder.
While the Securities Act [1]; relevant case law and administrative proceedings; as well as guidance and statements of the SEC and its officials shed much light on many of the considerations involved in an analysis of whether a given digital asset is a security, the facts and circumstances relating to each token, token offering and sale are typically quite unique. Many who seek to undertake token offerings and sales struggle to achieve clarity regarding whether such offerings and sales must be registered under [1] or have to fit within an exemption from the Act's registration requirements. As a result, there is significant regulatory uncertainty surrounding the offering and sale of tokens.
The Commission recognizes the need for more guidance in this area and continues to assist those seeking to achieve a greater understanding of whether the U.S. federal securities laws apply to the offer or sale of a particular digital asset. This report:
- Reviews [2] (the "Framework"), which was recently published by the SEC's Strategic Hub for Innovation and Financial Technology, along with previous statements by William Hinman, the Director of the SEC's Division of Corporation Finance. Note: the Framework represents the views of SEC Staff. It is not a rule, regulation or statement of the SEC, and it is not binding on the Divisions of the SEC or the SEC itself.
- Explains how [2] is indicative of the SEC's approach to applying the investment contract test laid out in [3] to digital assets. The SEC adopted this test, commonly referred to as the Howey test [4], and has applied the test in subsequent actions involving digital assets.^{f1} Note: [3, Section 21(a)] authorizes the SEC to investigate violations of the federal securities laws and, in its discretion, to "publish information concerning any such violations". The Report does not constitute an adjudication of any fact or issue, nor does it make any findings of violations by any individual or entity.
In particular, this report examines some of the SEC Staff's statements relating to the fourth prong of the Howey test, i.e. the "efforts of others", and discusses factors for promoters of token offerings to consider when assessing whether a token purchaser may have a reasonable expectation of earning profits from the significant efforts of others. It should be noted that this report is in no way intended to constitute or be relied upon as legal advice or to substitute for obtaining competent legal advice from an experienced attorney.
What is a Security?
Section 2(a)(1) of the Securities Act [1] defines a security as follows:
The term "security" means any note, stock, treasury stock, security future, securitybased swap, bond, debenture, evidence of indebtedness, certificate of interest or participation in any profitsharing agreement, collateraltrust certificate, preorganization certificate or subscription, transferable share, investment contract, votingtrust certificate, certificate of deposit for a security, fractional undivided interest in oil, gas, or other mineral rights, any put, call, straddle, option, or privilege on any security, certificate of deposit, or group or index of securities (including any interest therein or based on the value thereof), or any put, call, straddle, option, or privilege entered into on a national securities exchange relating to foreign currency, or, in general, any interest or instrument commonly known as a "security", or any certificate of interest or participation in, temporary or interim certificate for, receipt for, guarantee of, or warrant or right to subscribe to or purchase, any of the foregoing.
While blockchain network tokens (as well as a wide variety of other instruments) are not explicitly included in the definition of "security" under the Act, courts have abstracted the common elements of an "investment contract", which is included in the definition of "security" under Section 2(a)(1) (and Section 3(a)(10) of the Exchange Act of 1934, which is contained in [3]), to establish a "flexible rather than a static principle, one that is capable of adaptation to meet the countless and variable schemes devised by those who seek the use of the money of others on the promise of profits" [3, paragraph 299].
Accordingly, a determination of whether an instrument not specifically enumerated under Section 2(a)(1) of the Act may be deemed to be a security that implicates the federal securities laws must include an analysis of whether it is an investment contract under Howey.
What is an Investment Contract (i.e. Howey Test)?
Howey (and its progeny) established the framework of a fourpart test to determine whether an instrument is an investment contract and therefore a security subject to the U.S. federal securities laws. Howey provides that an investment contract exists if there is:
- an investment of money;
- into a common enterprise;
- with a reasonable expectation of profits;
- derived from the entrepreneurial or managerial efforts of others.
For an instrument to be an investment contract, each element of the Howey test must be met. If any element of the test is not met, the instrument is not an investment contract.
Fourth Prong of Howey Test  Efforts of Others
While the "efforts of others" prong of the Howey test is, at some level, no more important in an application of Howey than any of the other prongs, it is frequently the prong on which the most uncertainty hangs. The "efforts of others" is often the focus when it comes to public blockchain networks. This is because the decentralization of control that many such projects seek to foster prompts the following question: does the kind and degree of decentralization that exists mean that any expectation of profits token purchasers may have does not come from the efforts of others for purposes of Howey? The determination that token purchasers reasonably expected profits to come from the efforts of a centralized (or at least coordinated) person or group has been central in the SEC's findings that the tokens were securities in each of the recent SEC enforcement actions relating to token offerings and sales.^{f1}
Decentralization, Consumptive Purpose and Priming Purchasers' Expectations
Background
On 14 June 2018, Hinman delivered a speech^{f2} addressing whether "a digital asset that was originally offered in a securities offering [can] ever later be sold in a manner that does not constitute an offering of a security", and noted two cases where he believed this was indeed possible:
- "where there is no longer any central enterprise being invested in", e.g. purchases of the digital assets related to a decentralized enterprise or network; and
- "where the digital asset is sold only to be used to purchase a good or service available through the network on which it was created", e.g. purchases of digital assets for a consumptive purpose.
He posed a set of six questions directly related to the application of the "efforts of others" prong of the Howey test to offerings and sales of digital assets:
- Is there a person or group that has sponsored or promoted the creation and sale of the digital asset, the efforts of whom play a significant role in the development and maintenance of the asset and its potential increase in value?
- Has this person or group retained a stake or other interest in the digital asset such that it would be motivated to expend efforts to cause an increase in value in the digital asset? Would purchasers reasonably believe such efforts will be undertaken and may result in a return on their investment in the digital asset?
- Has the promoter raised an amount of funds in excess of what may be needed to establish a functional network, and, if so, has it indicated how those funds may be used to support the value of the tokens or to increase the value of the enterprise? Does the promoter continue to expend funds from proceeds or operations to enhance the functionality and/or value of the system within which the tokens operate?
- Are purchasers "investing", that is, seeking a return? In that regard, is the instrument marketed and sold to the general public instead of to potential users of the network for a price that reasonably correlates with the market value of the good or service in the network?
- Does application of the Securities Act protections make sense? Is there a person or entity others are relying on that plays a key role in the profit-making of the enterprise such that disclosure of their activities and plans would be important to investors? Do informational asymmetries exist between the promoters and potential purchasers/investors in the digital asset?
- Do persons or entities other than the promoter exercise governance rights or meaningful influence?
(He also posed a separate set of seven questions exploring "contractual or technical ways to structure digital assets so they function more like a consumer item and less like a security").
Hinman noted that these questions are useful to consider when assessing the facts and circumstances surrounding offerings and sales of digital assets to determine "whether a third party  be it a person, entity or coordinated group of actors  drives the expectation of a return". If such a party does drive an expectation of a return on a purchased digital asset, i.e. priming purchasers' expectations, there is a greater likelihood that the asset will be deemed to be an investment contract and therefore a security.
In the Framework [2], the SEC Staff similarly focuses attention on whether a purchaser of a digital asset has a reasonable expectation of profits derived from the efforts of others. These three themes: decentralization, consumptive purpose and priming purchasers' expectations, provide useful context for much of its discussion. The following offers for consideration select implications of the Framework's [2] guidance and Hinman's speech, as industry participants seek a greater understanding of whether or not a digital asset is a security.
Decentralization
The Framework [2] reinforces that the degree and nature of a promoter's involvement will have a bearing on the Howey analysis. The facts and circumstances relating to a promoter are key. If a promoter does not play a significant role in the "development, improvement (or enhancement), operation, or promotion of the network" [2, Section 3] underlying the tokens, it cuts against finding the "efforts of others" prong has been met. A promoter may seek to play a more limited role in these areas. Or, along similar lines, influence over the "essential tasks and responsibilities" of the network may be widely dispersed, i.e. decentralized, among many unaffiliated network stakeholders so that there is no identifiable "person or group" that continues to play a significant role, especially as compared to the role that the dispersed stakeholders play [2, Section 4]^{f3}.
The SEC has placed particular attention on promoter efforts to impact a token's supply and/or demand and has also focused on a promoter's efforts to use the proceeds from a token offering to create an ecosystem that will drive demand for the tokens once the network is functional.^{f1} Further, the SEC has singled out promoter efforts to maintain a token's price by intervening in the buying and selling of tokens, separate from developing and maintaining the underlying network.
Further, the SEC's Senior Advisor for Digital Assets, Valerie Szczepanik, noted promoter efforts in this area may implicate U.S. securities law, recently stating she has "seen stablecoins that purport to control price through some kind of pricing mechanism… controlled through supply and demand in some way to keep the price within a certain band", and "[W]here there is one central party controlling the price fluctuation over time, [that] might be getting into the land of securities" [6].
In contrast to the kinds of efforts that meet Howey's fourth prong, to the extent that a token's value is driven by market forces (supply and demand conditions), it suggests that the value of the token is attributable to factors other than the promoter's efforts. This is assuming that the promoter does not put some mechanism in place to manage token supply or demand in order to maintain a token's price.
In addition, decentralization of control calls into question whether the application of the Securities Act [1] makes sense to begin with. A main purpose of [1] is to ensure that securities issuers "tell the public the truth about their businesses, the securities they are selling, and the risks involved in investing" [5]. Typically, the issuer of a security plays a key role in the success of the enterprise. Investors rely on the experience, judgment and skill of the enterprise's management team and board of directors to drive profits. The disclosure requirements of [1] focus primarily on issuers of securities. A decentralized network, where no central person or entity plays a key role, however, challenges both identification of a party who should be subject to the disclosure obligations of [1], and the information expected to be disclosed. Indeed, speaking about Bitcoin, Hinman acknowledged as much, stating he "[does] not see a central third party whose efforts are a key determining factor in the enterprise. The network on which Bitcoin functions is operational and appears to have been decentralized for some time. Applying the disclosure regime of the federal securities laws to the offer and resale of Bitcoin would seem to add little value". Decentralization ultimately invokes consideration of whether application of [1] truly serves its purpose, as well as the practical realities of how doing so would even work.
Consumptive Purpose
Token purchasers who plan to use their tokens to access a network's functionality typically are not speculating that the tokens will increase in value. The finding that a token was offered to potential purchasers in a manner inconsistent with a consumptive purpose has been a factor in several of the SEC's recent orders, for example:
- From paragraph 18 of the Munchee Order [1, Section 8A]: "[The] marketing did not use the Munchee App or otherwise specifically target current users of the Munchee App to promote how purchasing MUN tokens might let them qualify for higher tiers and bigger payments on future reviews... Instead, Munchee and its agents promoted the MUN token offering in forums aimed at people interested in investing in Bitcoin and other digital assets."
- From paragraph 16 of the Airfox Order [1, Section 8A]: "AirFox primarily aimed its promotional efforts for the initial coin offering at digital token investors rather than anticipated users of AirTokens."
Accordingly, a promoter may seek to limit the offering and sale of tokens to prospective network users. A promoter may also seek to ensure that, in both its content and intended audience, the marketing, and other communication related to the token offering, is consistent with consumption being the reason for purchasing tokens. The SEC Staff has indicated that the argument that purchasers are acquiring tokens for consumption will be stronger once the network is functional and tokens can actually be used to purchase goods or access services through the network.
For example:
- Reference [2, Section 9] states that it is less likely the Howey test is met if various characteristics are present, including "[h]olders of the digital asset are immediately able to use it for its intended functionality on the network".
- Paragraph 7 of the Airfox Order [1, Section 8A] states: "The terms of AirFox's initial coin offering purported to require purchasers to agree that they were buying AirTokens for their utility as a medium of exchange for mobile airtime, and not as an investment or a security. At the time of the ICO, this functionality was not available… Despite the reference to AirTokens as a medium of exchange, at the time of the ICO, investors purchased AirTokens based upon anticipation that the value of the tokens would rise through AirFox's future managerial and entrepreneurial efforts."
Promoters may also consider whether "restrictions on the transferability of the digital asset are consistent with the asset's use and not facilitating a speculative market" [2, Section 10].
Priming Purchasers' Expectations
The SEC has given much attention to whether token purchasers were led to expect that the promoter would make efforts to increase token value. In this respect, in bringing enforcement actions, the SEC has highlighted statements made by promoters that specifically tout the opportunity to profit from purchasing a token.
For example, from paragraph 17 of the Munchee Order [1, Section 8A]:
Munchee made public statements or endorsed other people's public statements that touted the opportunity to profit. For example… Munchee created a public posting on Facebook, linked to a third-party YouTube video, and wrote '199% GAINS on MUN token at ICO price! Sign up for PRESALE NOW!' The linked video featured a person who said 'Today we are going to talk about Munchee. Munchee is a crazy ICO… Pretty much, if you get into it early enough, you'll probably most likely get a return on it.' This person… 'speculate[d]' that a \$1,000 investment could create a \$94,000 return.
Promotional statements explaining that tokens would be listed on digital asset exchanges for secondary trading also draw attention. Refer to the following examples.

From paragraph 13 of the Munchee Order [1, Section 8A]: Munchee stated it would work to ensure that MUN holders would be able to sell their MUN tokens on secondary markets, saying that 'Munchee will ensure that MUN token is available on a number of exchanges in varying jurisdictions to ensure that this is an option for all token holders.'

From paragraph 22 of the Gladius Order [1, Section 8A]: During and following the offering, Gladius attempted to make GLA Tokens available for trading on major digital asset trading platforms. On Gladius Web Pages, Gladius principals and agents stated that '[w]e've been approached by some of the largest exchanges, they're very interested,' and represented that the GLA Token would be available to trade on 'major' trading platforms after the ICO.

From [2, page 5], which states that it is more likely that there is a reasonable expectation of profit if various characteristics are present, including if the digital asset is marketed using "[t]he availability of a market for the trading of the digital asset, particularly where the [promoter] implicitly or explicitly promises to create or otherwise support a trading market for the digital asset".
Secondary trading on exchanges provides token holders with an alternative to using the token on the network and, especially when facilitated and highlighted by the promoter, can support a finding of investment intent on the part of token purchasers, i.e. buying with an eye to reselling for a profit instead of consuming goods or services through the network.
The Framework [2, Section 6] states that it is more likely there is a reasonable expectation of profit if various characteristics are present, including if "the digital asset is offered broadly to potential purchasers as compared to being targeted to expected users of the goods or services or those who have a need for the functionality of the network", and/or "[t]he digital asset is offered and purchased in quantities indicative of investment intent instead of quantities indicative of a user of the network".
The Framework [2] also notes that purchasers may have an expectation that the promoter will "undertake efforts to promote its own interests and enhance the value of the network or digital assets" [2, Section 5]. Token purchasers who are not aware of the promoter's interest in the network's success, such as through the promoter having retained a stake in the tokens, should not have any reason to expect the promoter to undertake any efforts to drive token value in order to increase its own profits.
Conclusion
Parties seeking to undertake offerings and sales of tokens in compliance with U.S. securities laws may look to the various actions and statements of the SEC for guidance as to whether the token is a security under the Act. As tokens are not specifically enumerated under the definition of "security", the analysis hinges on the application of the Howey test to determine if the token is an "investment contract" under the Act, and therefore a security. The decentralization of control many public blockchain projects seek to foster places Howey's "efforts of others" prong as a key area of focus. Questions regarding the application of this prong generally relate to:
 decentralization of the enterprise underlying the tokens;
 consumptive purpose of token purchasers; and
 whether token purchasers were led to have an expectation that the promoter would take efforts to drive token value.
This report has offered insight into some of the factors that promoters of token offerings may consider in assessing whether the kind and degree of decentralization that exists means that any expectation of profits token purchasers may have does not come from the efforts of others. It is also indicative of the SEC's approach to evaluating the offer and sale of digital assets.
Footnotes
f1: For examples, refer to paragraph 32 of the Munchee Order and paragraph 21 of the Airfox Order and accompanying text.
Paragraph 32 of the Munchee Order: "MUN token purchasers had a reasonable expectation of profits from their investment in the Munchee enterprise. The proceeds of the MUN token offering were intended to be used by Munchee to build an 'ecosystem' that would create demand for MUN tokens and make MUN tokens more valuable".
Paragraph 21 of the Airfox Order: "AirFox told investors that the company would improve the AirFox App, add new functionality, enter into agreements with third-party telecommunication companies, and take other steps to encourage the use of AirTokens and foster the growth of the ecosystem. Investors reasonably expected they would profit from the success of AirFox's efforts to grow the ecosystem and the concomitant rise in the value of AirTokens".
f2: William Hinman, Director of the Division of Corp. Fin., SEC, Remarks at the Yahoo Finance All Markets Summit: Crypto: Digital Asset Transactions: When Howey Met Gary (Plastic) (14 June 2018).
f3: Reference [2, Section 4] states that it is more likely that the purchaser of a digital asset is relying on the efforts of others if various characteristics are present. This includes "[t]here are essential tasks or responsibilities performed and expected to be performed by [a promoter], rather than an unaffiliated, dispersed community of network users".
References
[1] "U.S. Congress. United States Code: Securities Act, 15 U.S.C. §§ 77a-77mm (1934)" [online]. Available: https://www.loc.gov/item/uscode1934001015002a/. Date accessed: 2019-03-07.
[2] "Framework for 'Investment Contract' Analysis of Digital Assets" [online]. Available: https://www.sec.gov/corpfin/framework-investment-contract-analysis-digital-assets. Date accessed: 2019-03-10.
[3] "U.S. Congress. United States Code: Securities Exchanges, 15 U.S.C. §§ 78a-78jj (1964)" [online]. Available: https://www.loc.gov/item/uscode1964003015002b/. Date accessed: 2019-03-07.
[4] "SEC v. W. J. Howey Co., 328 U.S. 293 (1946)" [online]. Available: https://en.wikipedia.org/wiki/SEC_v._W._J._Howey_Co. Date accessed: 2019-04-05.
[5] "U.S. Securities and Exchange Commission, What We Do" [online]. Available: https://www.sec.gov/Article/whatwedo.html. Date accessed: 2018-03-07.
[6] Guillermo Jimenez, "SEC's Crypto Czar: Stablecoins might be violating securities laws", Decrypt (2019) [online]. Available: https://decryptmedia.com/5940/secscryptoczarstablecoinsmightbeviolatingsecuritieslaws. Date accessed: 2019-03-19.
Contributors
Non-Fungible Tokens
Having trouble viewing this presentation?
View it in a separate window.
Confidential Assets
Introduction
Confidential assets, in the context of blockchain technology and blockchain-based cryptocurrencies, can mean different things to different audiences, and can also be something entirely different depending on the use case. A confidential asset is a special type of digital asset and inherits all its properties, except that it is also confidential. Confidential assets therefore have value and can be owned, but have no physical presence. The confidentiality aspect implies that the amount of assets owned, as well as the asset type that was transacted in, can be confidential. A further classification can be made as to whether an asset is fungible (interchangeable) or non-fungible (unique, not interchangeable). Confidential assets can only exist in the form of a cryptographic token, or derivative thereof, that is itself cryptographically secure, at least under the Discrete Logarithm Problem^{def} (DLP) assumption.
The basis of confidential assets is confidential transactions, as proposed by Maxwell [4] and Poelstra et al. [5], where the amounts transferred are kept visible only to participants in the transaction (and those they designate). Confidential transactions succeed in making the transaction amounts private, while still preserving the ability of the public blockchain network to verify that the ledger entries and the Unspent Transaction Output (UTXO) set still add up. All amounts in the UTXO set are blinded, while public verifiability is preserved. Poelstra et al. [5] showed how the asset types can also be blinded in conjunction with the output amounts, so that multiple asset types can be accommodated within single transactions on the same blockchain.
This report investigates confidential assets as a natural progression of confidential transactions.
Contents
 Confidential Assets
Preliminaries
The general notation of mathematical expressions, where specifically referenced, is listed here. This notation is important pre-knowledge for the remainder of the report.
 Let $ p $ be a large prime number.
 Let $ \mathbb G $ denote a cyclic group of prime order $ p $.
 Let $ \mathbb Z_p $ denote the ring of integers modulo $ p $.
 Let $ \mathbb F_p $ be a group of elliptic curve points over a finite (prime) field.
 All references to Pedersen Commitment will imply Elliptic Curve Pedersen Commitment.
The Basis of Confidential Assets
Confidential transactions, asset commitments and Asset Surjection Proofs (ASP) form the basis of confidential assets. These concepts are discussed below.
Confidential Transactions
Confidential transactions are made confidential by replacing each explicit UTXO with a homomorphic commitment, such as a Pedersen Commitment, and are made robust against overflow and inflation attacks by using efficient zero-knowledge range proofs, such as Bulletproofs [1].
Range proofs are proofs that a secret value, which has been encrypted or committed to, lies in a certain interval. They prevent any numbers coming near the magnitude of a large prime, say $ 2^{256} $, that can cause wrap-around when adding a small number, e.g. a proof that a number $ x \in [0, 2^{64} - 1] $.
Pedersen Commitments are perfectly hiding (an attacker with infinite computing power cannot tell what amount has been committed to) and computationally binding (no efficient algorithm running in a practical amount of time can produce fake commitments, except with small probability). The Elliptic Curve Pedersen Commitment to value $ x \in \mathbb Z_p $ has the following form:
$$
C(x,r) = xH + rG
$$
where $ r \in \mathbb Z_p $ is a random blinding factor, $ G \in \mathbb F_p $ is a random generator point and $ H \in \mathbb F_p $ is specially chosen so that the value $ x_H $ satisfying $ H = x_H G $ cannot be found, except if the Elliptic Curve DLP^{def} (ECDLP) is solved. The point $ H $ is what is known as a Nothing Up My Sleeve (NUMS) number. With secp256k1, the value of $ H $ is the SHA256 hash of a simple encoding of the $ x $-coordinate of the generator point $ G $. The Pedersen Commitment scheme is implemented with three algorithms: `Setup()` to set up the commitment parameters $ G $ and $ H $; `Commit()` to commit to the message $ x $ using the commitment parameters $ r $, $ H $ and $ G $; and `Open()` to open and verify the commitment. ([5], [6], [7], [8])
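As an illustration, the three algorithms can be sketched in Python over a small multiplicative group (a toy Schnorr group standing in for the elliptic curve; the parameters below are illustrative only and far too small for real use, and the second generator would in practice be derived by hashing, as with the NUMS point above):

```python
# Toy Pedersen commitment in a Schnorr group: the order-Q subgroup of
# squares mod a safe prime P. Written multiplicatively, so C = H^x * G^r
# plays the role of xH + rG. Illustrative parameters only.
import secrets

P = 2039   # safe prime, P = 2*Q + 1
Q = 1019   # prime order of the subgroup of squares mod P
G = 4      # generator of the order-Q subgroup
H = 9      # second generator; assume log_G(H) is unknown (NUMS assumption)

def setup():
    """Return the public commitment parameters (G, H)."""
    return G, H

def commit(x, r=None):
    """Commit to x with blinding factor r: C = H^x * G^r (mod P)."""
    if r is None:
        r = secrets.randbelow(Q)
    return (pow(H, x, P) * pow(G, r, P)) % P, r

def open_commitment(c, x, r):
    """Verify that (x, r) opens the commitment c."""
    return c == (pow(H, x, P) * pow(G, r, P)) % P

# The commitment is homomorphic: C(x1, r1) * C(x2, r2) = C(x1 + x2, r1 + r2).
c1, r1 = commit(5)
c2, r2 = commit(7)
assert open_commitment((c1 * c2) % P, 12, (r1 + r2) % Q)

# Without a range proof, amounts wrap around mod Q: a commitment to
# Q - 1 plus a commitment to 2 opens as a commitment to 1 (inflation).
cw, rw = commit(Q - 1, 1)
cx, rx = commit(2, 1)
assert open_commitment((cw * cx) % P, 1, 2)
```

The final two assertions show both the homomorphism that makes the zero-sum check possible and the overflow behaviour that range proofs exist to prevent.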
Mimblewimble ([9], [10]) is based on, and achieves confidentiality using, these confidential transaction primitives. If confidentiality is not sought, inputs may be given as explicit amounts, in which case the homomorphic commitment to the given amount will have a blinding factor $ r = 0 $.
Asset Commitments and Surjection Proofs
The different assets need to be identified and transacted in a confidential manner, and proven not to be inflationary. This is made possible by using asset commitments and ASPs. ([1], [14])
Given some unique asset description $ A $, the associated asset tag $ H_A \in \mathbb G $ is calculated using the Pedersen Commitment function `Setup()` with $ A $ as auxiliary input. (Selection of $ A $ is discussed in Asset Issuance.) Consider a transaction with two inputs and two outputs involving two distinct asset types $ A $ and $ B $:
$$
\begin{aligned}
in_A = x_1H_A + r_{A_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_A = x_2H_A + r_{A_2}G \\
in_B = y_1H_B + r_{B_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_B = y_2H_B + r_{B_2}G
\end{aligned}
\mspace{70mu} (1)
$$
For relation (1) to hold, the sum of the outputs minus the sum of the inputs must be zero:
$$
\begin{aligned}
(out_A + out_B) - (in_A + in_B) = 0 \\
(x_2H_A + r_{A_2}G) + (y_2H_B + r_{B_2}G) - (x_1H_A + r_{A_1}G) - (y_1H_B + r_{B_1}G) = 0 \\
(r_{A_2} + r_{B_2} - r_{A_1} - r_{B_1})G + (x_2 - x_1)H_A + (y_2 - y_1)H_B = 0
\end{aligned}
\mspace{70mu} (2)
$$
Since $ H_A $ and $ H_B $ are both NUMS asset tags, the only way relation (2) can hold is if the total input and output amounts of asset $ A $ are equal, and the same is true for asset $ B $. This concept can be extended to an unlimited number of distinct asset types, as long as each asset tag is a unique NUMS generator. The problem with relation (2) is that the asset type of each output is publicly visible, thus the assets that were transacted in are not confidential. This can be solved by replacing each asset tag with a blinded version of itself. The asset commitment to asset tag $ H_A $ (the blinded asset tag) is then defined as the point
$$
H_{0_A} = H_A + rG
$$
Blinding of the asset tag is necessary to make transactions in the asset, i.e. which asset was transacted in, confidential. The blinded asset tag $ H_{0_A} $ is then used in place of the generator $ H $ in the Pedersen Commitments. Such Pedersen Commitments thus commit to the committed amount as well as to the underlying asset tag. Inspecting the Pedersen Commitment, it is evident that a commitment to the value $ x_1 $ using the blinded asset tag $ H_{0_A} $ is also a commitment to the same value using the asset tag $ H_A $:
$$
x_1H_{0_A} + r_{A_1}G = x_1(H_A + rG) + r_{A_1}G = x_1H_A + (r_{A_1} + x_1r)G
$$
Using blinded asset tags, the transaction in relation (1) then becomes:
$$
\begin{aligned}
in_A = x_1H_{0_A} + r_{A_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_A = x_2H_{0_A} + r_{A_2}G \\
in_B = y_1H_{0_B} + r_{B_1}G \mspace{15mu} \mathrm{,} \mspace{15mu} out_B = y_2H_{0_B} + r_{B_2}G
\end{aligned}
$$
Correspondingly, the zero-sum rule translates to:
$$
\begin{aligned}
(out_A + out_B) - (in_A + in_B) = 0 \\
(x_2H_{0_A} + r_{A_2}G) + (y_2H_{0_B} + r_{B_2}G) - (x_1H_{0_A} + r_{A_1}G) - (y_1H_{0_B} + r_{B_1}G) = 0 \\
(r_{A_2} + r_{B_2} - r_{A_1} - r_{B_1})G + (x_2 - x_1)H_{0_A} + (y_2 - y_1)H_{0_B} = 0
\end{aligned}
$$
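The identity that a commitment under a blinded asset tag is also a commitment under the plain tag can be checked numerically. The sketch below uses a toy multiplicative group (illustrative parameters; point addition becomes multiplication and scalar multiplication becomes exponentiation):

```python
# Numeric check that a commitment under the blinded tag H0 = H_A * G^r
# is also a commitment to the same value under the plain tag H_A,
# with adjusted blinding factor rA1 + x1*r. Toy Schnorr group.
import secrets

P, Q, G = 2039, 1019, 4   # illustrative safe prime, subgroup order, generator
H_A = 9                   # asset tag for asset A (NUMS assumption)

r = secrets.randbelow(Q)              # tag-blinding factor
H0 = (H_A * pow(G, r, P)) % P         # blinded asset tag

x1 = 100                              # committed amount
rA1 = secrets.randbelow(Q)            # amount-blinding factor

lhs = (pow(H0, x1, P) * pow(G, rA1, P)) % P
rhs = (pow(H_A, x1, P) * pow(G, (rA1 + x1 * r) % Q, P)) % P
assert lhs == rhs
```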
However, using only the sum-to-zero rule, it is still possible to introduce negative amounts of an asset type. Consider the blinded asset tag
$$
H_{0_A} = -H_A + rG
$$
Any amount of blinded asset tag $ H_{0_A} $ will correspond to a negative amount of asset $ A $, thereby inflating its supply. To solve this problem, an ASP, which is a cryptographic proof, is introduced. In mathematics, a surjective function simply means that for every element $ y $ in the codomain $ Y $ of function $ f $, there is at least one element $ x $ in the domain $ X $ of function $ f $ such that $ f(x) = y $.
An ASP scheme provides a proof $ \pi $ for a set of input asset commitments $ [ H_i ] ^n_{i=1} $, an output commitment $ H = H_{\hat i} + rG $ for some $ \hat i = 1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} n $ and blinding factor $ r $. It proves that every output asset type is the same as some input asset type, while blinding which outputs correspond to which inputs. Such a proof $ \pi $ is secure if it is a zero-knowledge proof of knowledge for the blinding factor $ r $. Let $ H_{0_{A1}} $ and $ H_{0_{A2}} $ be blinded asset tags that commit to the same asset tag $ H_A $:
$$
H_{0_{A1}} = H_A + r_1G \mspace{15mu} \mathrm{and} \mspace{15mu} H_{0_{A2}} = H_A + r_2G
$$
then
$$
H_{0_{A1}} - H_{0_{A2}} = (H_A + r_1G) - (H_A + r_2G) = (r_1 - r_2)G
$$
will be a signature key with secret key $ r_1 - r_2 $. Thus, for a transaction with $ n $ distinct asset types, differences can be calculated between each output and all inputs, e.g. $ (out_A - in_A) , (out_A - in_B) \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} (out_A - in_n) $, and so on for all outputs. This has the form of a ring signature: if $ out_A $ has the same asset tag as one of the inputs, the transaction signer will know the secret key corresponding to one of these differences, and will be able to produce the ring signature. The ASP is based on the Back-Maxwell range proof (refer to Definition 9 of [1]), which uses a variation of Borromean ring signatures [18]. The Borromean ring signature in turn is a variant of the Abe-Ohkubo-Suzuki (AOS) ring signature [19]. An AOS ASP computes a ring signature that is equal to the proof $ \pi $ as follows:
 Calculate $ n $ differences $ H - H_{\hat i } $ for $ \hat i = 1 \mspace{3mu} , \mspace{3mu} . . . \mspace{3mu} , \mspace{3mu} n $, one of which will be equal to $ rG $, with the blinding factor $ r $ as its secret key;
 Calculate a ring signature $ S $ of an empty message using the $ n $ differences.
The resulting ring signature $ S $ is equal to the proof $ \pi $, and the ASP consists of this ring signature $ S $. ([1], [14])
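The first step can be illustrated numerically: for an output that commits to one of the input asset tags, the signer knows the discrete log of exactly one output-minus-input difference. The following is a toy sketch in multiplicative notation with illustrative parameters (a real ASP would then sign a ring over all such differences, which this sketch omits):

```python
# The "differences" step behind an asset surjection proof: for an
# output tag H = H_i + rG (here written multiplicatively as
# out_tag = H_A * G^r_out), exactly one difference out_tag / in_tag
# equals G^(secret), and the prover knows that secret.
import secrets

P, Q, G = 2039, 1019, 4   # illustrative toy Schnorr group
H_A, H_B = 9, 16          # two distinct asset tags (NUMS assumption)

# Blinded input tags for assets A and B.
rA, rB = secrets.randbelow(Q), secrets.randbelow(Q)
in_tags = [(H_A * pow(G, rA, P)) % P, (H_B * pow(G, rB, P)) % P]

# Output tag commits to asset A with fresh blinding factor r_out.
r_out = secrets.randbelow(Q)
out_tag = (H_A * pow(G, r_out, P)) % P

# The prover knows r_out - rA, the discrete log of out_tag / in_tags[0].
diff_secret = (r_out - rA) % Q
diff = (out_tag * pow(in_tags[0], P - 2, P)) % P   # division mod prime P
assert diff == pow(G, diff_secret, P)
```

The corresponding difference against `in_tags[1]` involves the unknown quantity $ \log_G(H_A/H_B) $, which is why the signer can only complete the ring at the matching input.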
The Confidential Asset Scheme
Using the building blocks discussed in The Basis of Confidential Assets, asset issuance, asset transactions and asset reissuance can be performed in a confidential manner.
Asset Transactions
Confidential assets propose a scheme in which multiple non-interchangeable asset types can be supported within a single transaction. This all happens within one blockchain, and can theoretically improve the value of the blockchain by offering a service to more users, while also enabling extended functionality such as base-layer atomic asset trades. The latter implies that Alice can offer Bob $ 100 $ units of asset type $ A $ for $ 50 $ units of asset type $ B $ in a single transaction, with each participant using a single wallet. In this case, no relationship between output asset types can be established or inferred, because all asset tags are blinded. Privacy is also increased: the blinded asset types add another dimension that needs to be unraveled in order to obtain user identity and transaction data, and users avoid issuing multiple single-asset transactions. Such a confidential asset scheme simplifies verification, reduces complexity and reduces on-chain data. It also prevents censorship of transactions involving specific asset types, and blinds assets with low transaction volume, where users could otherwise be identified very easily. [1]
Assets originate in asset-issuance inputs, which take the place of coinbase transactions in confidential transactions. The asset type used to pay fees must be revealed in each transaction, but in practice all fees could be paid in only one asset type, thus preserving privacy. Payment authorization is achieved by means of the input signatures. A confidential asset transaction consists of the following data:
 A list of inputs, each of which can have one of the following forms:
 A reference to an output of another transaction, with a signature using that output's verification key; or
 An asset issuance input, which has an explicit amount and asset tag.
 A list of outputs, each of which contains:
 A signature verification key;
 An asset commitment $ H_0 $ with an ASP from all input asset commitments to $ H_0 $;
 A Pedersen Commitment to an amount using generator $ H_0 $ in place of $ H $, with the associated Back-Maxwell range proof.
 A fee, listed explicitly as $ \{ (f_i , H_i) \}_{i=1}^n $, where $ f_i $ is a non-negative scalar amount denominated in the asset with tag $ H_i $.
Every output has a range proof and ASP associated with it, which are proofs of knowledge of the Pedersen Commitment opening information and the asset commitment blinding factor. Every range proof can be considered as being with respect to the underlying asset tag $ H_A $, rather than the asset commitment $ H_0 $. Verification then proceeds as for a confidential transaction restricted to inputs and outputs with asset tag $ H_A $, except that output commitments minus input commitments minus fee must sum to a commitment to $ 0 $, instead of to the point $ 0 $ itself. [1]
However, confidential assets come at an additional data cost. For a transaction with $ m $ outputs and $ n $ inputs, in relation to the units of space used for confidential transactions, the asset commitment has size $ 1 $, the ASP has size $ n + 1 $, and the additional data in the entire transaction therefore has size $ m(n + 2) $. [1]
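A direct reading of this size claim can be captured in a small helper (the function name and the unit convention are illustrative, following the per-output accounting above):

```python
def confidential_asset_overhead(m: int, n: int) -> int:
    """Additional data, in the units of space of [1], for a confidential
    asset transaction with m outputs and n inputs: each output carries
    one asset commitment (size 1) and one ASP over the n inputs
    (size n + 1), giving m * (n + 2) in total."""
    asset_commitment = 1
    asp = n + 1
    return m * (asset_commitment + asp)

# A two-input, two-output transaction carries 2 * (2 + 2) = 8 units.
assert confidential_asset_overhead(2, 2) == 8
```

The linear growth in the number of inputs is what motivates the trade-offs discussed under Flexibility below.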
Asset Issuance
It is important to ensure that any auxiliary input $ A $ used to create asset tag $ H_A \in \mathbb G $ is only used once, to prevent inflation by means of many independent issuances. Associating a maximum of one issuance with the spending of a specific UTXO can ensure this uniqueness property. Poelstra et al. [5] suggest the use of a Ricardian contract [11], to be hashed together with the reference to the UTXO being spent. This hash can then be used to generate the auxiliary input $ A $ as follows. Let $ I $ be the input being spent (an unambiguous reference to a specific UTXO used to create the asset) and let $ \widehat {RC} $ be the issuer-specified Ricardian contract; then the asset entropy $ E $ is defined as
$$
E = \mathrm {Hash} ( \mathrm {Hash} (I) \parallel \mathrm {Hash} (\widehat {RC}))
$$
The auxiliary input $ A $ is then defined as
$$
A = \mathrm {Hash} ( E \parallel 0)
$$
Note that a Ricardian contract $ \widehat {RC} $ is not crucial for generating the entropy $ E $; it is only a suggestion, and some other unique NUMS value could be used in its stead. Ricardian contracts warrant a bit more explanation and are discussed in Appendix B.
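These derivations can be sketched with SHA256 standing in for $ \mathrm{Hash} $. The exact hash function and the serialization of $ I $, $ \widehat{RC} $ and the suffix bytes are defined in [1]; this sketch makes illustrative choices. The reissuance auxiliary input $ \hat A = \mathrm{Hash}(E \parallel 1) $, used in Asset Reissuance below, is included alongside:

```python
# Sketch of the issuance entropy derivation, assuming SHA-256 and
# simple byte-string inputs; the real serialization is defined in [1].
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def asset_entropy(utxo_ref: bytes, ricardian_contract: bytes) -> bytes:
    """E = Hash(Hash(I) || Hash(RC))."""
    return H(H(utxo_ref) + H(ricardian_contract))

def auxiliary_input(entropy: bytes) -> bytes:
    """A = Hash(E || 0): auxiliary input for the issuance asset tag."""
    return H(entropy + b"\x00")

def reissuance_input(entropy: bytes) -> bytes:
    """A_hat = Hash(E || 1): auxiliary input for the reissuance capability."""
    return H(entropy + b"\x01")

# The two auxiliary inputs derived from the same entropy are distinct,
# so the asset tag and its reissuance capability are distinct generators.
E = asset_entropy(b"txid:0", b"contract text")
assert auxiliary_input(E) != reissuance_input(E)
```

Because $ E $ commits to a specific spent UTXO, the same auxiliary input cannot be produced twice without a double-spend, which is what enforces the uniqueness property.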
Every noncoinbase transaction input can have up to one new asset issuance associated with it. An asset issuance (or asset definition) transaction input then consists of the UTXO being spent, the Ricardian contract, either an initial issuance explicit value or a Pedersen commitment, a range proof and a Boolean field indicating whether reissuance is allowed. ([1], [13])
Asset Reissuance
The confidential asset scheme allows the asset owner to later increase or decrease the amount of the asset in circulation, provided that an asset reissuance token is generated together with the initial asset issuance. Given an asset entropy $ E $, the asset reissuance capability is the element (asset tag) $ H_{\hat A} \in \mathbb G $ obtained using an alternate auxiliary input $ \hat A $, defined as
$$
\hat A = \mathrm {Hash} ( E \parallel 1)
$$
The resulting asset tag $ H_{\hat A} \in \mathbb G $ is linked to its reissuance capability, and the asset owner can assert the reissuance right by revealing the blinding factor $ r $ for the reissuance capability, along with the original asset entropy $ E $. An asset reissuance (or asset definition) transaction input then consists of the spend of a UTXO containing an asset reissuance capability, the original asset entropy, the blinding factor for the asset commitment of the UTXO being spent, either an explicit reissuance amount or a Pedersen Commitment, and a range proof. [1]
The same mechanism can be used to manage capabilities for other restricted operations, for example to decrease issuance, destroy the asset, or make the commitment generator the hash of a script that validates the spending transaction. It is also possible to change the name of the default asset that is created upon blockchain initialization, as well as the default asset used to pay fees on the network. ([1], [13])
Flexibility
ASPs prove that the asset commitments associated with outputs commit to legitimately issued asset tags. This feature allows compatible blockchains to support indefinitely many asset types, which may be added after the chain has been defined. There is room to adapt this scheme for an optimal trade-off between ASP data size and privacy, by introducing a global dynamic list of assets whereby each transaction selects a subset of asset tags for the corresponding ASPs. [1]
If all the asset tags are defined at the instantiation of the blockchain, the scheme will be compatible with the Mimblewimble protocol. The range proofs used for the development of this scheme were based on the Back-Maxwell range proof scheme (refer to Definition 9 of [1]). Poelstra et al. [1] suggest more efficient range proofs, ASPs and the use of aggregate range proofs. It is thus an open question whether Bulletproofs could fulfill this requirement.
Confidential Asset Implementations
Three independent implementations of confidential assets are summarized here. The first two implementations closely resemble the theoretical description in [1], with the last implementation adding Bulletproofs functionality to confidential assets.
Elements Project
Elements is an open source, sidechain-capable blockchain platform, providing access to advanced features such as Confidential Transactions and Issued Assets (Github: ElementsProject/elements [16]). It allows digitizable collectables, reward points and attested assets (for example gold coins) to be realized on a blockchain. The main idea behind Elements is to serve as a research platform and testbed for changes to the Bitcoin protocol. Its implementation of confidential assets is called Issued Assets ([13], [14], [15]) and is based on the formal publication in [1].
The Elements project hosts a working demonstration (Figure 2) of confidential asset transfers (Github: ElementsProject/confidentialassetsdemo [17]) involving five parties. The demonstration depicts a scenario in which a coffee shop owner, Dave, charges a customer, Alice, for coffee in an asset called MELON. Alice does not hold enough MELON, and needs to convert some AIRSKY into MELON, making use of an exchange operated by Charlie. Dave has a competitor, Bob, who is trying to gather information about Dave's sales. Due to the blockchain's confidential transactions and assets features, he will not be able to see anything useful by processing transactions on the blockchain. Fred is a miner, and does not care about the details of the transactions, but he makes blocks on the blockchain when transactions enter his mempool. The demonstration also includes generating the different types of assets.
Figure 2: Elements Confidential Assets Transfer Demonstration [17]
Chain Core Confidential Assets
Chain Core [20] is a shared, multi-asset, cryptographic ledger, designed for enterprise financial infrastructure. It supports the coexistence and interoperability of multiple types of assets on the same network in its Confidential Assets framework. Chain Core is based on [1] and is available as an open source project in Github: chain/chain [21], which has since been archived. It has been succeeded by Sequence, a ledger-as-a-service project that enables secure tracking and transfer of balances in a token format ([22], [23]). A free plan is offered for up to 1,000,000 transactions per month.
Chain Core implements all native features as defined in [1]. Work was also underway to implement ElGamal commitments in Chain Core to make its Confidential Assets framework quantum secure, but it is unclear whether this effort was concluded by the time the project was archived. ([24], [25])
Cloak
Chain/Interstellar [26] introduced Cloak [29], a redesign of Chain Core's Confidential Assets framework to make use of Bulletproofs range proofs [27]. It is available as an open source project in Github: interstellar/spacesuit [28]. Cloak is concerned with confidential asset transactions, called cloaked transactions, which exchange values of different asset types, called flavors. The protocol ensures that values are not transmuted to any other asset types, that quantities do not overflow, and that both quantities and asset types are kept secret.
A traditional Bulletproofs implementation converts an arithmetic circuit into a Rank-1 Constraint System (R1CS); Cloak bypasses arithmetic circuits and provides an Application Programming Interface (API) for building a constraint system directly. The R1CS API consists of a hierarchy of task-specific “gadgets”, and is used by the Prover and Verifier alike to allocate variables and define constraints. Cloak uses a collection of gadgets such as “shuffle”, “merge”, “split” and “range proof” to build a constraint system for cloaked transactions. All transactions of the same size are indistinguishable, because the layout of all the gadgets is determined only by the number of inputs and outputs.
The Cloak development is still ongoing.
Conclusions, Observations, Recommendations
 The idea to embed a Ricardian contract in the asset tag creation, as suggested by Poelstra et al. [1], warrants more investigation for a new confidential blockchain protocol such as Tari; Ricardian contracts could be used in asset generation in the probable second layer.
 Asset commitments and ASPs are important cryptographic primitives for confidential asset transactions.
 The Elements project implemented a range of useful confidential asset framework features, and should be critically assessed for usability in a probable Tari second layer.
 Cloak has the potential to take confidential asset implementation to the next level of efficiency, and should be closely monitored. Interstellar is in the process of fully implementing and extending Bulletproofs for use in confidential assets.
 If confidential assets are to be implemented in a Mimblewimble blockchain, all asset tags must be defined at its instantiation, otherwise it will not be compatible.
References
[1] Confidential Assets, Poelstra A., Back A., Friedenbach M., Maxwell G. and Wuille P., Blockstream, https://blockstream.com/bitcoin17final41.pdf, Date accessed: 2018-09-25.
[2] Wikipedia: Discrete Logarithm, https://en.wikipedia.org/wiki/Discrete_logarithm, Date accessed: 2018-09-20.
[3] Assumptions Related to Discrete Logarithms: Why Subtleties Make a Real Difference, Sadeghi A. and Steiner M., http://www.semper.org/sirene/publ/SaSt_01.dhetal.long.pdf, Date accessed: 2018-09-24.
[4] Confidential Transactions Write-up, Maxwell G., https://people.xiph.org/~greg/confidential_values.txt, Date accessed: 2018-12-10.
[5] An Investigation into Confidential Transactions, Gibson A., July 2018, https://github.com/AdamISZ/ConfidentialTransactionsDoc/blob/master/essayonCT.pdf, Date accessed: 2018-11-22.
[6] pedersencommitment: An Implementation of Pedersen Commitment Schemes, https://hackage.haskell.org/package/pedersencommitment, Date accessed: 2018-09-25.
[7] Homomorphic Mini-blockchain Scheme, Franca B., April 2015, http://cryptonite.info/files/HMBC.pdf, Date accessed: 2018-11-22.
[8] Efficient Implementation of Pedersen Commitments Using Twisted Edwards Curves, Franck C. and Großschädl J., University of Luxembourg, http://orbilu.uni.lu/bitstream/10993/33705/1/MSPN2017.pdf, Date accessed: 2018-11-22.
[9] Mimblewimble, Poelstra A., October 2016, http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimbleandytoshidraft20161020.pdf, Date accessed: 2018-12-??.
[10] Mimblewimble Explained, Poelstra A., November 2016, https://www.weusecoins.com/mimblewimbleandrewpoelstra/, Date accessed: 2018-09-10.
[11] The Ricardian Contract, Grigg I., First IEEE International Workshop on Electronic Contracting, IEEE (2004), http://iang.org/papers/ricardian_contract.html, Date accessed: 2018-12-13.
[12] Smart vs. Ricardian Contracts: What’s the Difference?, Koteshov D., February 2018, https://www.elinext.com/industries/financial/trends/smartvsricardiancontracts/, Date accessed: 2018-12-13.
[13] Issued Assets - You can issue your own Confidential Assets on Elements, Elements by Blockstream, https://elementsproject.org/features/issuedassets, Date accessed: 2018-12-14.
[14] Issued Assets - Investigation, Principal Investigator: Andrew Poelstra, Elements by Blockstream, https://elementsproject.org/features/issuedassets/investigation, Date accessed: 2018-12-14.
[15] Elements Code Tutorial - Issuing your own Assets, Elements by Blockstream, https://elementsproject.org/elementscodetutorial/issuingassets, Date accessed: 2018-12-14.
[16] Github: ElementsProject/elements, https://github.com/ElementsProject/elements, Date accessed: 2018-12-18.
[17] Github: ElementsProject/confidentialassetsdemo, https://github.com/ElementsProject/confidentialassetsdemo, Date accessed: 2018-12-18.
[18] Borromean Ring Signatures (2015), Maxwell G. and Poelstra A., http://diyhpl.us/~bryan/papers2/bitcoin/Borromean%20ring%20signatures.pdf, Date accessed: 2018-12-18.
[19] 1-out-of-n Signatures from a Variety of Keys, Abe M., Ohkubo M. and Suzuki K., https://www.iacr.org/cryptodb/archive/2002/ASIACRYPT/50/50.pdf, Date accessed: 2018-12-18.
[20] Chain Core, https://chain.com/docs/1.2/core/getstarted/introduction, Date accessed: 2018-12-18.
[21] Github: chain/chain, https://github.com/chain/chain, Date accessed: 2018-12-18.
[22] Chain: Sequence, https://chain.com/sequence, Date accessed: 2018-12-18.
[23] Sequence Documentation, https://dashboard.seq.com/docs, Date accessed: 2018-12-18.
[24] Hidden in Plain Sight: Transacting Privately on a Blockchain - Introducing Confidential Assets in the Chain Protocol, https://blog.chain.com/hiddeninplainsighttransactingprivatelyonablockchain835ab75c01cb, Date accessed: 2018-12-??.
[25] Blockchains in a Quantum Future - Protecting Against Post-Quantum Attacks on Cryptography, https://blog.chain.com/preparingforaquantumfuture45535b316314, Date accessed: 2018-12-??.
[26] Inter/stellar Website, https://interstellar.com, Date accessed: 2018-11-22.
[27] Programmable Constraint Systems for Bulletproofs, https://medium.com/interstellar/programmableconstraintsystemsforbulletproofs365b9feb92f7, Date accessed: 2018-11-22.
[28] Github: interstellar/spacesuit, https://github.com/interstellar/spacesuit/blob/master/spec.md, Date accessed: 2018-12-??.
[29] Github: interstellar/spacesuit/spec.md, https://github.com/interstellar/spacesuit/blob/master/spec.md, Date accessed: 2018-12-18.
[30] Wikipedia: Ricardian contract, https://en.wikipedia.org/wiki/Ricardian_contract, Date accessed: 20181221.
Appendices
Appendix A: Definition of Terms
Definitions of terms presented here are high level and general in nature. Full mathematical definitions are available in the cited references.
 Discrete Logarithm/Discrete Logarithm Problem (DLP): In the mathematics of real numbers, the logarithm $ \log_b a $ is a number $ x $ such that $ b^x=a $, for given numbers $ a $ and $ b $. Analogously, in any group $ G $, powers $ b^k $ can be defined for all integers $ k $, and the discrete logarithm $ \log_b a $ is an integer $ k $ such that $ b^k=a $. Algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen cyclic finite groups and cyclic subgroups of elliptic curves over finite fields has no efficient solution. ([2], [3])
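To make the definition concrete, here is a small illustrative sketch (not from the cited references): in a tiny multiplicative group the discrete logarithm can be found by brute force, which is exactly what carefully chosen large groups make infeasible.

```python
# Brute-force discrete logarithm in the tiny group Z_101* (illustrative only).
# Real cryptographic groups have on the order of 2^256 elements, making this
# search infeasible -- that infeasibility is the DLP assumption.

def discrete_log(b, a, p):
    """Return k such that b^k = a (mod p), by exhaustive search."""
    x = 1
    for k in range(p - 1):
        if x == a:
            return k
        x = (x * b) % p
    return None

p, b = 101, 2        # toy prime modulus and generator
a = pow(b, 47, p)    # a = b^47 mod p
assert discrete_log(b, a, p) == 47
```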
Appendix B: Ricardian Contracts vs. Smart Contracts
A Ricardian contract is “a digital contract that defines the terms and conditions of an interaction, between two or more peers, that is cryptographically signed and verified, being both human and machine readable and digitally signed” [12]. With a Ricardian contract, the information from the legal document is placed in a format that can be read and executed by software. The cryptographic identification offers high levels of security. The main properties of a Ricardian contract are listed below (also see Figure 1):
 Human readable;
 Document is printable;
 Program parsable;
 All forms (displayed, printed, parsed) are manifestly equivalent;
 Signed by issuer;
 Can be identified securely, where security means that any attempts to change the linkage between a reference and the contract are impractical.
Figure 1: Bowtie Diagram of a Ricardian Contract [12]
Ricardian contracts are robust (due to identification by cryptographic hash functions), transparent (due to readable text for legal prose) and efficient (due to computer markup language to extract essential information). [30]
A smart contract is “a computerized transaction protocol that executes the terms of a contract. The general objectives are to satisfy common contractual conditions” [12]. With smart contracts, digital assets can be exchanged in a transparent and non-conflicting way, which provides trust. The main properties of a smart contract are listed below:

 Self-executing (though they don't run unless someone initiates them);
 Immutable;
 Self-verifying;
 Auto-enforcing;
 Cost saving;
 Removes third parties or escrow agents.
It is possible to implement a Ricardian contract as a smart contract, but not in all instances. A smart contract is a pre-agreed digital agreement that can be executed automatically. A Ricardian contract records the “intentions” and “actions” of a particular contract, whether or not it has been executed. Hashes of Ricardian contracts can refer to documents or executable code. ([12], [30])
The Ricardian Contract design pattern has been implemented in several projects and is free of any intellectual property restrictions. [30]
Contributors
Block Chain Related Protocols
From Wikipedia  1
A block chain is a growing list of records, called blocks, which are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree root hash).
From Wikipedia  2
A security protocol (cryptographic protocol or encryption protocol) is an abstract or concrete protocol that performs a securityrelated function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used. A sufficiently detailed protocol includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program.
From Market Protocol
A protocol can be seen as a set of rules agreed upon by two or more parties so that they can enter into a binding agreement. Protocols are therefore the set of rules that govern the network. Block chain protocols usually include rules about consensus, transaction validation, and network participation.
A protocol, in computer science, is a set of rules or procedures for transmitting data between electronic devices, such as computers. In order for computers to exchange information, there must be a pre-existing agreement as to how the information will be structured and how each side will send and receive it. Without a protocol, a transmitting computer could, for example, be sending its data in 8-bit packets while the receiving computer expects the data in 16-bit packets.
Mimblewimble
This presentation contains somewhat outdated content. While the description here follows the original Mimblewimble paper quite closely, it no longer precisely describes how Mimblewimble transactions are implemented in, say, Grin or Tari.
The presentation is left here as-is, since it offers a gentle introduction to the math involved, and it may be instructive to see how things used to be before you head off to read a more detailed and up-to-date account.
Having trouble viewing this presentation?
View it in a separate window.
Mimblewimble transactions explained
A highlevel overview
Mimblewimble is a privacyoriented cryptocurrency technology. It differs from Bitcoin in some key areas:
 No addresses. The concept of Mimblewimble addresses does not exist.
 Completely private. Every transaction is confidential.
 Compact blockchain. Mimblewimble uses a different set of security guarantees from Bitcoin, which leads to a far more compact blockchain.
Transactions explained
Confidential transactions [1] were invented by Dr. Adam Back and are employed in several cryptocurrency projects, including Monero and Tari (by way of Mimblewimble).
Recipients of Tari create the private keys for receiving coins on the fly. Therefore they must be involved in the construction of a Tari transaction.
This doesn't necessarily mean that recipients have to be online. But they do need to be able to communicate, whether it be by email, IM, or carrier pigeon.
The basic transaction
We'll explain how Alice can send Tari to Bob using a twoparty protocol for Mimblewimble. Multiparty transactions are similar, but the flow of information is a bit different and takes place over additional communication rounds.
Let's say Alice has 300 µT and she wants to send 200 µT to Bob.
Here’s the basic transaction:
| Inputs       | Outputs |        |     |
|--------------|---------|--------|-----|
| 300          | 200     | 90     | 10  |
| Alice's UTXO | To Bob  | Change | Fee |
If we write this as a mathematical equation, where outputs are positive and inputs are negative, we should be able to balance things out so that there's no creation of coins out of thin air:
$$ -300 + 200 + 90 + 10 = 0 $$
This is basically the information that sits in the Bitcoin blockchain. Anyone can audit anyone else's transactions simply by inspecting the global ledger's transaction history. This isn't great for privacy.
Here is where confidential transactions come in. We can start by multiplying both sides of the equation above by some generator point H on the elliptic curve (for an introduction to elliptic curve math, see this presentation):
$$ 300.H = 200.H + 90.H + 10.H $$
Since H is a constant, the math above still holds, so we can validate that Alice is not stealing by checking that
$$ (300.H) - (200.H) - (90.H) - (10.H) \equiv 0.H = 0 $$
Notice that we only see public keys and thus the values are hidden. Cool!
There's a catch though. If H is constant and known (it is), couldn't someone just pre-calculate $ n.H $ for all reasonable values of $ n $ and scan the blockchain for those public keys?^{1}
In short, yes. So we're not done yet.
1. This is called a preimage attack.
Blinding factors
To prevent a preimage attack from unblinding all the values in Tari transactions, we need to add randomness to each input and output. The basic idea is to do this by adding a second public key to each transaction output.
So what if we rewrite the inputs and outputs as follows:
$$ C_i = n_i.H + k_i.G $$
where G is another generator on the same curve.
This completely blinds the inputs and outputs, so that no preimage attack is possible. This formulation is called a Pedersen commitment [3].
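As a sketch of what a Pedersen commitment gives us, the toy example below uses a multiplicative group of integers modulo a prime instead of an elliptic curve, so $ C = g^k \cdot h^v \bmod p $ plays the role of $ C = k.G + v.H $. All parameters are illustrative, not from any real deployment.

```python
# Toy Pedersen commitment in the multiplicative group Z_p* (illustrative parameters).
# C = g^k * h^v mod p corresponds to the additive notation C = k.G + v.H above.
import secrets

p = 2**31 - 1             # small Mersenne prime; real systems use ~256-bit groups
g = 7                     # assumed generator (illustrative)
h = pow(g, 1234567, p)    # second generator; this exponent must be unknowable in practice

def commit(v, k):
    """Commit to value v with blinding factor k."""
    return (pow(g, k, p) * pow(h, v, p)) % p

k1, k2 = secrets.randbelow(p - 1), secrets.randbelow(p - 1)
c1, c2 = commit(300, k1), commit(200, k2)

# Homomorphic property: multiplying commitments commits to the sum of the values.
assert (c1 * c2) % p == commit(500, (k1 + k2) % (p - 1))
```

This homomorphic property is what lets a verifier check that a transaction balances without ever seeing the values themselves.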
The two generators, H and G, must be selected such that it is impossible to convert values from one generator to the other [2]. Specifically, if G is the base generator, then there exists some k such that $$ H = k.G $$
If anyone is able to figure out this k, the whole security of Confidential Transactions falls apart. It's left as an exercise for the reader to figure out why.
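Here is a hint, as a toy sketch in the same illustrative modular-arithmetic setting as before (not a real parameter set): if the discrete log t relating the generators (h = g^t) leaks, a commitment stops being binding and can be opened to any value.

```python
# If the discrete log t between the two generators (h = g^t) is known, commitments
# are no longer binding: any commitment can be opened to any value. Toy demo in Z_p*.
p = 2**31 - 1
g = 7
t = 1234567              # the "trapdoor" discrete log, assumed leaked
h = pow(g, t, p)

def commit(v, k):
    return (pow(g, k, p) * pow(h, v, p)) % p

c = commit(300, 42)      # honest commitment to the value 300

# With t known, we can forge an opening of the same commitment to the value 999:
v_forged = 999
k_forged = (42 + t * (300 - v_forged)) % (p - 1)
assert commit(v_forged, k_forged) == c
```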
For a semigentle introduction to these concepts, Adam Gibson's paper on the subject is excellent [5].
Alice prepares a transaction
Alice can now start to build a transaction.
| Type          | Formula                   | Name |
|---------------|---------------------------|------|
| Input         | $$ -(300.H + k_1.G) $$    | -C1  |
| Change output | $$ 90.H + k_2.G $$        | C2   |
| Fee           | $$ 10.H + 0.G $$          | f    |
| Total spent   | $$ 200.H + 0.G $$         | C*   |
| Sum           | $$ 0.H + (k_2 - k_1).G $$ | X    |
The $ k_i $ values, $ k_1, k_2 $, are the spending keys for those outputs.
The only pieces of information needed to spend a Tari output are its spending key (also called a blinding factor) and its associated value.
Therefore, for this transaction, Alice's wallet, which tracks all of her unspent Tari outputs, would have provided the blinding factor and the value 300 to complete the commitment C1.
Notice that when all the inputs and outputs are summed (including the fee), all the values cancel to zero, as they should. Notice also that the only term left is multiplied by the point G; all the H terms are gone. We call this sum the public excess for Alice's part of the transaction.
We define the excess of the transaction to be
$$ x_s = k_2 - k_1 $$
A simple way for Alice to calculate her excess (and how the Tari wallet software does it) is to sum her output blinding factors and subtract the sum of her input blinding factors.
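This bookkeeping can be checked numerically, again in an illustrative modular-arithmetic group rather than on a real curve (commitments written multiplicatively, mirroring the table above: input 300, change 90, fee 10, amount to Bob 200):

```python
# Sketch of how Alice's public excess cancels the value terms (toy Z_p* arithmetic).
p = 2**31 - 1
g = 7
h = pow(g, 1234567, p)   # illustrative second generator

def commit(v, k):
    return (pow(g, k, p) * pow(h, v, p)) % p

k1, k2 = 11111, 22222                     # input and change blinding factors
c1 = commit(300, k1)                      # Alice's input
c2 = commit(90, k2)                       # change output
fee = commit(10, 0)                       # fee, blinding factor zero
c_star = commit(200, 0)                   # amount to Bob, no blinding yet

x_s = (k2 - k1) % (p - 1)                 # Alice's excess

# Outputs + fee - input leaves only the G-term: the public excess g^x_s.
assert (c2 * fee * c_star * pow(c1, -1, p)) % p == pow(g, x_s, p)
```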
Let's say Alice was trying to create some money for herself and made the change 100 µT instead of 90. In this instance, the sum of the outputs and inputs would not cancel on H and we would have,
$$X^* = 10.H + x_s.G$$
We'll see in a bit how the Mimblewimble protocol catches Alice out if she tries to pull shenanigans like this.
Alice actually also prepares a range proof for each output, which is a proof that the value of the output lies between zero and $ 2^{64} $ µT. Without range proofs, Alice could send negative amounts to people, enriching herself without breaking any of the accounting of Tari.
In Tari and Grin, the excess value is actually split into two values for added privacy. The Grin team has a good explanation of why this offset value is necessary [4]. We leave off this step to keep the explanation simple(r).
Alice also chooses a random nonce,
$$ r_a $$
and calculates the corresponding public nonce,
$$ R_a = r_a.G $$
Alice then sends the following info to Bob:
| Field              | Value     |
|--------------------|-----------|
| Inputs             | C1        |
| Outputs            | C2        |
| Fee                | 10        |
| Amount paid to Bob | 200       |
| Public nonce       | $$ R_a $$ |
| Public excess      | X         |
| Metadata           | m         |
The message metadata consists of some extra bits of information that pertain to the transaction (such as when the output can be spent, for example).
Bob prepares his response
Bob receives this information and then sets about completing his part of the transaction.
First, he can verify that Alice has sent over the correct information by checking that the public excess, X, is correct, following the same procedure that Alice used to calculate it above. This step isn't strictly necessary, since he doesn't have enough information to detect any fraud at this point.
He then builds a commitment from the amount field that Alice is sending him:
$$ C_b = 200.H + k_b.G $$
where \(k_b\) is Bob's private spend key. He calculates
$$P_b = k_b.G$$
and generates a range proof for the commitment.
Bob then needs to sign that he's happy that everything is complete to his satisfaction. He creates a partial Schnorr Signature with the challenge,
$$ e = H(R_a + R_b \Vert X + P_b \Vert f \Vert m) $$
and the signature is given by
$$ s_b = r_b + ek_b $$
Bob sends back
| Field                               | Value     |
|-------------------------------------|-----------|
| Output (commitment and range proof) | $$ C_b $$ |
| Public nonce                        | $$ R_b $$ |
| Signature                           | $$ s_b $$ |
| Public key                          | $$ P_b $$ |
Alice completes and broadcasts the transaction
After hearing back from Bob, Alice can wrap things up.
First, she can now use Bob's public nonce and public key to independently calculate the same challenge that Bob signed:
$$ e = H(R_a + R_b \Vert X + P_b \Vert f \Vert m) $$
Alice then creates both her own partial signature,
$$ s_a = r_a + e.x_s $$
and the combined aggregate signature, $$ s = s_a + s_b, R = R_a + R_b $$.
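The aggregation above can be sketched in an illustrative multiplicative group, where $ g^s = R \cdot P^e $ mirrors $ s.G = R + e.P $ (toy numbers, not the real Tari implementation; the fee and metadata are omitted from the challenge for brevity):

```python
# Sketch of two-party Schnorr aggregation in a toy Z_p* group.
import hashlib

p = 2**31 - 1
q = p - 1                # exponent group order (toy; real systems use prime-order groups)
g = 7

def H(*parts):
    """Hash group elements into a challenge scalar (fee/metadata omitted for brevity)."""
    data = b''.join(x.to_bytes(8, 'big') for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % q

# Alice's excess and nonce, Bob's key and nonce (illustrative values)
x_s, r_a = 1234, 555
k_b, r_b = 6789, 777
X, P_b = pow(g, x_s, p), pow(g, k_b, p)
R_a, R_b = pow(g, r_a, p), pow(g, r_b, p)

# Both parties derive the same challenge from the aggregate nonce and keys.
e = H(R_a * R_b % p, X * P_b % p)

s_a = (r_a + e * x_s) % q    # Alice's partial signature
s_b = (r_b + e * k_b) % q    # Bob's partial signature
s, R = (s_a + s_b) % q, (R_a * R_b) % p

# The aggregate signature verifies against the combined public excess.
assert pow(g, s, p) == (R * pow(X * P_b, e, p)) % p
```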
Alice can now broadcast this transaction to the network. The final transaction looks as follows:
| Transaction Kernel   |               |
|----------------------|---------------|
| Public excess        | $$ X + P_b $$ |
| Signature            | $$ (s, R) $$  |
| Fee                  | 10            |
| Transaction metadata | m             |

| Transaction Body          |                  |
|---------------------------|------------------|
| Inputs with range proofs  | $$ [C_1] $$      |
| Outputs with range proofs | $$ [C_2, C_b] $$ |
Transaction verification and propagation
When a full node receives Alice's transaction, it needs to verify that it's on the level before sending it on to its peers. The node wants to check the following:
 All inputs come from the current UTXO set;
 All outputs have a valid range proof;
 The values balance;
 The signature in the kernel is valid;
 Various other consensus checks pass (such as whether the fee is greater than the minimum).
All inputs come from the current UTXO set
All full nodes keep track of the set of unspent outputs, so the node will check that C1 is in that list.
All outputs have a valid range proof
The range proof is verified against its matching commitment.
The values balance
In this check, the node wants to make sure that no coins are created or destroyed in the transaction. But how can it do this if the values are blinded?
Recall that in an honest transaction, all the values (which are multiplied by H) cancel, and you're left with the sum of the public keys of the outputs minus the public keys of the inputs. Non-coincidentally, this happens to be the same value that is stored in the kernel as the public excess.
The summed public nonce, R, is also stored in the kernel, so this allows the node to verify the signature by checking
$$ s.G \stackrel{?}{=} R + e(X + P_b) $$
where the challenge e is calculated as before.
Therefore by validating the kernel signature, we also prove to ourselves that the transaction accounting is correct.
If all these checks pass, then the node will forward the transaction onto its peers and it will eventually be mined and be added to the blockchain.
Stopping fraud
Now let's say Alice tried to be sneaky and used \( X^* \) as her excess; the one where she gave herself 100 µT change instead of 90 µT. Now the values won't balance. The sum of outputs, inputs and fees will look something like
$$ 10.H + (x_s + k_b).G $$
So now when a full node checks the signature:
$$ \begin{align} R + e(X^* + P_b) &\stackrel{?}{=} s.G \\ R + e(10.H + x_s.G + P_b) &\stackrel{?}{=} (r_a + r_b + e(x_s + k_b)).G \\ R + e(10.H + X + P_b) &\stackrel{?}{=} (r_a + r_b).G + e(x_s + k_b).G \\ \mathbf{10e.H} + R + e(X + P_b) &\stackrel{?}{=} R + e(X + P_b) \\ \end{align} $$
The signature doesn't verify! The node can't tell exactly what is wrong with the transaction, but it knows something is up, and so it will just drop the transaction silently and get on with its life.
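The failing check can be sketched numerically in the same illustrative modular-arithmetic setting used earlier (toy parameters; the real protocol uses elliptic-curve points):

```python
# Toy demonstration that an unbalanced transaction fails signature verification.
# If Alice's claimed excess X* hides 10 extra coins (X* = 10.H + x_s.G), the kernel
# signature built from the blinding factors alone cannot verify against it.
import hashlib

p = 2**31 - 1
q = p - 1
g = 7
h = pow(g, 1234567, p)   # second generator (illustrative)

def H(*parts):
    data = b''.join(x.to_bytes(8, 'big') for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % q

x_s, r_a, k_b, r_b = 1234, 555, 6789, 777
X_star = (pow(h, 10, p) * pow(g, x_s, p)) % p    # dishonest excess: 10 extra coins
P_b = pow(g, k_b, p)
R = (pow(g, r_a, p) * pow(g, r_b, p)) % p

e = H(R, X_star * P_b % p)
s = (r_a + r_b + e * (x_s + k_b)) % q            # signature over the blinding factors

# The h^(10e) term is left over, so verification fails:
assert pow(g, s, p) != (R * pow(X_star * P_b, e, p)) % p
```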
Transaction summary
To sum up: a Tari/Mimblewimble transaction includes the following:
 From Alice, a set of inputs that reference and spend a set of previous outputs.
 From Alice and Bob, a set of new outputs that include:
 A value and a blinding factor (which is just a new private key);
 A range proof that shows that the value is non-negative.
 The transaction fee, in cleartext.
 The public excess, which is the sum of all blinding factors used in the transaction.
 Transaction metadata.
 The excess blinding value, used as the private key to sign a message attesting to the transaction metadata and the public excess.
References
[1] Confidential Transactions, https://www.mycryptopedia.com/whatareconfidentialtransactions/. Date accessed: 9 April 2019.
[2] Nothing-Up-My-Sleeve Numbers, https://en.wikipedia.org/w/index.php?title=Nothingupmysleeve_number&oldid=889582749. Date accessed: 9 April 2019.
[3] Commitment Schemes, Wikipedia, https://en.wikipedia.org/wiki/Commitment_scheme. Date accessed: 9 April 2019.
[4] Kernel Offsets, in Introduction to Mimblewimble and Grin, https://github.com/mimblewimble/grin/blob/master/doc/intro.md#kerneloffsets. Date accessed: 9 April 2019.
[5] Gibson A., "From Zero-knowledge to Bulletproofs", https://joinmarket.me/static/FromZK2BPs_v1.pdf. Date accessed: 10 April 2019.
MimblewimbleGrin Blockchain Protocol Overview
Introduction
Depending on who you ask, Mimblewimble is either a tongue-tying curse or a blockchain protocol designed to be private and scalable. Mimblewimble transactions are derived from the confidential transactions of Greg Maxwell [1], which are in turn based on the Pedersen commitment scheme. On 19 July 2016, someone using the name Tom Elvis Jedusor left a white paper on the Tor network describing how Mimblewimble could work. As its potential was realized, work was done to make it a reality. One of these projects is Grin, a minimalistic implementation of Mimblewimble. Further information on Grin can be found in Grin vs. BEAM, a Comparison [2] and Grin Design Choice Criticisms - Truth or Fiction [3].
Contents
Mimblewimble protocol overview
Commitments
Mimblewimble publishes all transactions as confidential transactions. All inputs, outputs and change are expressed in the following form:
$ r \cdot G + v \cdot H $
where $ G $ and $ H $ are generator points on an elliptic curve, $ r $ is a private key used as a blinding factor, $ v $ is the value, and "$ \cdot $" denotes elliptic-curve cryptography (ECC) multiplication.
An example transaction can be expressed as input = output + change.
$ (r_i \cdot G + v_i \cdot H) = (r_o \cdot G + v_o \cdot H) + (r_c \cdot G + v_c \cdot H) $
But this requires that
$ r_i = r_o + r_c $
A more detailed explanation of how Mimblewimble works can be found in the Grin GitHub documents [4].
Cutthrough and pruning
Cutthrough
Grin implements what is called cut-through in the transaction pool: outputs in the pool that are already spent as new inputs are removed, using the fact that every transaction in a block should sum to zero. This is shown below:
$ \text{outputs} - \text{inputs} = \text{kernel excess} + \text{(part of) kernel offset} $
The kernel offset is used to hide which kernel belongs to which transaction and we only have a summed kernel offset stored in the header of each block.
We don't have to record these transactions inside the block, although we still have to record the kernel, as the kernel proves transfer of ownership and ensures that the whole block sums to zero. This is expressed in the formula below:
$ \text{sum(outputs)} - \text{sum(inputs)} = \text{sum(kernel excess)} + \text{kernel offset} $
An example of cutthrough can be seen below:
Before cut-through:

    I1(x1)    +---> O1
              +---> O2
    I2(x2,O2) +---> O3
    I3(O3)    +---> O4
              +---> O5

After cut-through:

    I1(x1)    +---> O1
    I2(x2)    +---> O4
              +---> O5

In the diagrams, I denotes new inputs, x denotes inputs from previous blocks, and O denotes outputs.
This causes Mimblewimble blocks to be much smaller than normal Bitcoin blocks, as the cut-through transactions are no longer listed under inputs and outputs. In practice, we can still see that a transaction happened, because the kernel excess remains, but the actual hidden values are not recorded.
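The cut-through rule can be sketched as a simple set operation over the example transactions above (a hypothetical helper, not Grin's actual implementation):

```python
# Sketch of cut-through in a transaction pool: outputs that are spent again as
# inputs within the same set of transactions cancel out.
def cut_through(transactions):
    """Each transaction is (inputs, outputs); returns the net inputs and outputs."""
    all_inputs, all_outputs = [], []
    for inputs, outputs in transactions:
        all_inputs.extend(inputs)
        all_outputs.extend(outputs)
    # Anything appearing on both sides is an intermediate output: remove it.
    spent = set(all_inputs) & set(all_outputs)
    net_inputs = [i for i in all_inputs if i not in spent]
    net_outputs = [o for o in all_outputs if o not in spent]
    return net_inputs, net_outputs

# The example from the diagrams above:
pool = [
    (["x1"], ["O1", "O2"]),        # I1
    (["x2", "O2"], ["O3"]),        # I2
    (["O3"], ["O4", "O5"]),        # I3
]
assert cut_through(pool) == (["x1", "x2"], ["O1", "O4", "O5"])
```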
Pruning
Pruning takes this same concept and applies it to past blocks: if an output in a previous block gets spent, it is removed from that block. Pruning also removes the spent leaves from the Merkle Mountain Range (MMR), thus allowing the ledger to stay small and scalable. According to the Grin team [3], assuming 10 million transactions with 100 000 unspent outputs, the ledger will be roughly 130 GB, which can be divided into the following parts:
 128 GB of transaction data (inputs and outputs).
 1 GB of transaction proof data.
 250 MB of block headers.
The total storage requirements can be reduced if cut-through and pruning are applied; the ledger will then shrink to approximately 1.8 GB and consist of the following:
 1 GB of transaction proof data.
 UTXO size of 520 MB.
 250 MB of block headers.
Grin blocks
The Grin block contains the following data:
 Transaction outputs, which include for each output:
 A Pedersen commitment (33 bytes).
 A range proof (over 5KB at this time).
 Transaction inputs, which are just output references (32 bytes).
 Transaction fees, in clear text
 Transaction "proofs", which include for each transaction:
 The excess commitment sum for the transaction (33 bytes).
 A signature generated with the excess (71 bytes average).
 A block header that includes Merkle trees and proof of work (about 250 bytes).
The Grin header:

| Header field        | Description                               |
|---------------------|-------------------------------------------|
| Hash                | Unique hash of block                      |
| Version             | Grin version                              |
| Previous block      | Unique hash of previous block             |
| Age                 | Time the block was mined                  |
| Cuckoo solution     | The winning Cuckoo solution               |
| Difficulty          | Difficulty of the solved Cuckoo solution  |
| Target difficulty   | Difficulty of this block                  |
| Total difficulty    | Total difficulty of mined chain up to age |
| Total kernel offset | Kernel offset                             |
| Nonce               | Random number for Cuckoo solution         |
| Block reward        | Coinbase + fee reward for block           |
The rest of the block contains a list of kernels, inputs and outputs. An example of a Grin block is shown in Appendix A.
Trustless transactions
Schnorr signatures have been covered in Tari Labs University (TLU); please have a look here for a more detailed explanation [7].
Since Grin transactions are obscured by Pedersen commitments, there is no proof that money was actually transferred. To solve this problem, we require the receiver to collaborate with the sender in building the transaction and, more specifically, the kernel signature [4].
When Alice wants to pay Bob, the transaction will be performed using the following steps:

1. Alice selects her inputs and her change. The sum of all blinding factors (change output minus inputs) is $ r_s $.
2. Alice picks a random nonce $ k_s $ and sends her partial transaction, $ k_s\cdot G $ and $ r_s\cdot G $, to Bob.
3. Bob picks his own random nonce $ k_r $ and the blinding factor for his output, $ r_r $. Using $ r_r $, Bob adds his output to the transaction.
4. Bob computes the message $ M = fee \Vert lock\_height $, the Schnorr challenge $ e = SHA256(M \Vert k_r\cdot G + k_s\cdot G \Vert r_r\cdot G + r_s\cdot G) $ and, finally, his side of the signature, $ s_r = k_r + e\cdot r_r $.
5. Bob sends $ s_r $, $ k_r\cdot G $ and $ r_r\cdot G $ to Alice.
6. Alice computes $ e $ just as Bob did, and can check that $ s_r\cdot G = k_r\cdot G + e\cdot r_r\cdot G $.
7. Alice sends her side of the signature, $ s_s = k_s + e\cdot r_s $, to Bob.
8. Bob validates $ s_s\cdot G $ just as Alice did for $ s_r\cdot G $ in step 6, and can produce the final signature $ s = (s_s + s_r, \ k_s\cdot G + k_r\cdot G) $, as well as the final transaction kernel, including $ s $ and the public key $ r_r\cdot G + r_s\cdot G $.
Contracts
Timelocked
Absolute
In a normal Grin transaction, just the fee gets signed as the message [4]. To get an absolute time-locked transaction, the message is modified to take the block height and append the fee to it. A block with a kernel that includes a lock height greater than the current block height is then rejected.
$ M = fee \Vert h $
Relative
Taking into account how an absolute time-locked transaction is constructed, the same idea can be extended by using the relative block height instead of the absolute height, and also adding a specific kernel commitment. In this way, the signature references a specific block as height. The same principle applies as with absolute time-locked transactions: a block with a kernel containing a relative time lock that has not yet passed is rejected.
$ M = fee \Vert h \Vert c $
Multisig
Multi-signatures (multisigs) are also known as N-of-M signatures: N out of M peers need to agree before a transaction can be spent.
When Bob and Alice [6] want to set up a 2-of-2 multisig contract, it can be done with the following steps:
 Bob picks a blinding factor $ r_b $ and sends $ r_b\cdot G $ to Alice.
 Alice picks a blinding factor $ r_a $ and builds the commitment $ C= r_a\cdot G + r_b\cdot G + v\cdot H $, she sends the commitment to Bob.
 Bob creates a range proof for $ v $ using $ C $ and $ r_b $ and sends it to Alice.
 Alice generates her own range proof, aggregates it with Bob's, and finalizes the multiparty output $ O_{ab} $.
 The kernel is built following the same procedure as used with Trustless Transactions.
We observe that the output $ O_{ab} $ is unknown to both parties, because neither knows the whole blinding factor. To be able to build a transaction spending $ O_{ab} $, someone would need to know $ r_a + r_b $ to produce a kernel signature. To produce the original spending kernel, Alice and Bob therefore need to collaborate.
Atomic swaps
Atomic swaps can be used to exchange coins from different blockchains in a trustless environment. In the Grin documentation, this is handled at length in the contracts documentation [6] and in the contract ideas documentation [8]. In practice, there has already been an atomic swap between Grin and Ethereum [9], but this used the Grin testnet with a modified Grin implementation, as the release version of Grin did not yet support the required contracts. TLU has a section about atomic swaps [7].
Atomic swaps work with 2-of-2 multisig contracts: one public key is Alice's, and the second is the hash of a preimage that Bob has to reveal. Consider the public key derivation $ x\cdot G $ to be the hash function; by revealing $ x $, Bob enables Alice to produce an adequate signature, proving she knows $ x $ (in addition to her own private key).
Alice will swap Grin with Bob for Bitcoin. We assume Bob created an output on the Bitcoin blockchain that allows spending by Alice if she learns a hash preimage $ x $, or by Bob after time $ T_b $ . Alice is ready to send her Grin to Bob if he reveals $ x $.
Alice will send her Grin to a multiparty time-locked contract with a refund time $ T_a < T_b $. To send the 2-of-2 output to Bob and execute the swap, Alice and Bob start as if they were building a normal trustless transaction.
 Alice picks a random nonce $ k_s $ and her blinding sum $ r_s $, and sends $ k_s\cdot G $ and $ r_s\cdot G $ to Bob.
 Bob picks a random blinding factor $ r_r $ and a random nonce $ k_r $. However, this time, instead of simply sending $ s_r = k_r + e\cdot r_r $ with his $ r_r\cdot G $ and $ k_r\cdot G $, Bob sends $ s_r' = k_r + x + e\cdot r_r $ as well as $ x\cdot G $.
 Alice can validate that $ s_r'\cdot G = k_r\cdot G + x\cdot G + e\cdot r_r\cdot G $. She can also check that Bob has money locked with $ x\cdot G $ on the other chain.
 Alice sends back her $ s_s = k_s + e\cdot r_s $ as she normally would, now that she can also compute $ e = SHA256(M \Vert k_s\cdot G + k_r\cdot G) $.
 To complete the signature, Bob computes $ s_r = k_r + e\cdot r_r $, and the final signature is $ (s_r + s_s, \ k_r\cdot G + k_s\cdot G) $.
 As soon as Bob broadcasts the final transaction to get his Grin, Alice can compute $ s_r' - s_r $ to get $ x $.
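The last step is the heart of the swap: subtracting Bob's two signature shares reveals x. A minimal numeric sketch (toy scalar arithmetic with illustrative values):

```python
# Toy sketch of the preimage extraction: Bob's earlier value s_r' differs from his
# final signature share s_r by exactly x, so Alice recovers x by subtraction.
q = 2**31 - 2            # toy exponent group order (illustrative)

k_r, r_r, x, e = 424242, 987654, 31337, 7777   # illustrative values

s_r_prime = (k_r + x + e * r_r) % q   # what Bob sent earlier in the protocol
s_r = (k_r + e * r_r) % q             # Bob's real share, revealed on broadcast

assert (s_r_prime - s_r) % q == x     # Alice recovers the secret x
```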
Prior to completing the atomic swap, Bob needs to know Alice's public key. Bob would then create an output on the Bitcoin blockchain with a 2-of-2 multisig similar to `alice_pubkey secret_pubkey 2 OP_CHECKMULTISIG`. This should be wrapped in an `OP_IF` so that Bob can get his money back after an agreed-upon time, and all of this can even be wrapped in a Pay-to-Script-Hash (P2SH). Here, `secret_pubkey` is $ x\cdot G $ from the previous section.
To verify the output, Alice would take $ x\cdot G $, recreate the Bitcoin script, hash it and check that her hash matches what's in the P2SH (step 2 in the previous section). Once she gets $ x $ (step 6), she can build the two signatures necessary to spend the 2-of-2 output, having both private keys, and get her bitcoin.
References
[1] Maxwell, G. (2017). Confidential Transactions. Available at: https://people.xiph.org/~greg/confidential_values.txt (Accessed: 24 October 2018).
[2] Robinson, P. et al. (2018). Grin vs. BEAM, a Comparison. Available at: https://tarilabs.github.io/tariuniversity/protocols/grinbeamcomparison/MainReport.html#grinvsbeamacomparison (Accessed: 8 October 2018).
[3] Roodt, Y. et al. (2018). Grin Design Choice Criticisms - Truth or Fiction. Available at: https://tarilabs.github.io/tariuniversity/protocols/grindesignchoicecriticisms/MainReport.html (Accessed: 8 October 2018).
[4] Simon, B. et al. (2018). Grin Document Structure. Available at: https://github.com/mimblewimble/grin/blob/master/doc/table_of_contents.md (Accessed: 24 October 2018).
[5] Peverell, I. et al. (2016). Pruning Blockchain Data. Available at: https://github.com/mimblewimble/grin/blob/master/doc/pruning.md (Accessed: 26 October 2018).
[6] Peverell, I. et al. (2018). Contracts. Available at: https://github.com/mimblewimble/grin/blob/master/doc/contracts.md (Accessed: 26 October 2018).
[7] Tari Labs (2018). Tari Labs University. Available at: https://tarilabs.github.io/tariuniversity/ (Accessed: 27 October 2018).
[8] Le Sceller, Q. (2018). Contract Ideas. Available at: https://github.com/mimblewimble/grin/blob/master/doc/contract_ideas.md (Accessed: 27 October 2018).
[9] Jasper (2018). First Grin Atomic Swap! Available at: https://medium.com/grinswap/firstgrinatomicswapa16b4cc19196 (Accessed: 27 October 2018).
Contributors
https://github.com/SWvheerden
https://github.com/neonknight64
https://github.com/hansieodendaal
Appendices
Appendix A: Example of Grin Block
| Header field        | Value |
|---------------------|-------|
| Hash                | 02cb5e810857266609511699c8d222ed4e02883c6b6d3405c05a3caea9bb0f64 |
| Version             | 1 |
| Previous Block      | 0343597fe7c69f497177248913e6e485f3f23bb03b07a0b8a5b54f68187dbc1d |
| Age                 | 2018-10-23, 08:03:46 UTC |
| Cuckoo Solution     | Size: |
| Difficulty          | 37,652 |
| Target Difficulty   | 17,736 |
| Total Difficulty    | 290,138,524 |
| Total Kernel Offset | b52ccdf119fe18d7bd12bcdf0642fcb479c6093dca566e0aed33eb538f410fb5 |
| Nonce               | 7262225957146290317 |
| Block Reward        | 60 grin |
| Fees                | 14 mg |
Inputs (4)
Commit 

0898a4b53964ada66aa16de3d44ff02228c168a23c0bd71b162f4366ce0dae24b0 
09a173023e9c39c923e626317ffd384c7bce44109fea91a9c142723bfa700fce27 
086e0d164fe92d837b5365465a6b37af496a4f8520a2c1fccbb9f736521631ba96 
087a00d61f8ada399f170953c6cc7336c6a0b22518a8b02fd8dd3e28c01ee51fdb 
Outputs (5)
Output Type  Commit  Spent 

Transaction  09eac33dfdeb84da698c6c17329e4a06020238d9bb31435a4abd9d2ffc122f6879  False 
Transaction  0860e9cf37a94668c842738a5acc8abd628c122608f48a50bbb7728f46a3d50673  False 
Coinbase  08324cdbf7443b6253bb0cdf314fce39117dcafbddda36ed37f2c209fc651802d6  False 
Transaction  0873f0da4ce164e2597800bf678946aad1cd2d7e2371c4eed471fecf9571942b4f  False 
Transaction  09774ee77edaaa81b3c6ee31f471f014db86c4b3345f739472cb12ecc8f40401df  False 
Kernels (3)
Features  Fee  Lock Height 

DEFAULT_KERNEL  6 mg  7477 
DEFAULT_KERNEL  8 mg  7477 
COINBASE_KERNEL  0 grin  7482 
Apart from the header information, we can only see that this block contains two transactions, inferred from the two DEFAULT_KERNEL entries. Between those two transactions we only know that there were four inputs and four outputs (the fifth output is the coinbase). Because of the way Mimblewimble obfuscates transactions, we don't know the values or which inputs and outputs belong to which transaction.
Grin vs. BEAM, a Comparison
Introduction
Grin and BEAM are two open-source cryptocurrency projects based on the Mimblewimble protocol. The Mimblewimble protocol was first proposed by an anonymous user under the pseudonym Tom Elvis Jedusor (the French translation of Voldemort's name from the Harry Potter series of books). This user logged onto a Bitcoin research IRC channel and posted a link to a text article hosted on a Tor hidden service [1]. The article provided the basis for a new way to construct blockchain-style transactions with inherent privacy and the ability to dramatically reduce the size of the blockchain by compressing the chain's transaction history. This initial article presented the main ideas of the protocol, but it left out a number of critical elements required for a practical implementation, and even contained a mistake in the cryptographic formulation. Andrew Poelstra published a follow-up paper that addresses many of these issues and refines the core concepts of Mimblewimble [2], which have been applied in the practical implementations of the protocol in both the Grin [3] and BEAM [4] projects.
The Mimblewimble protocol describes how transacting parties interactively build a valid transaction using their public/private key pairs, which prove ownership of transaction outputs, and interactively chosen blinding factors. These blinding factors obfuscate the participants' public keys from everyone, including each other, and hide the value of the transaction from everyone except the counterparty in that specific transaction. The protocol also performs a process called cut-through, which condenses transactions by eliminating intermediary transactions. This improves privacy and compresses the amount of data maintained on the blockchain [3]. The cut-through process precludes general-purpose scripting systems like those found in Bitcoin. However, Andrew Poelstra proposed the concept of Scriptless scripts, which use Schnorr signatures to build adaptor signatures that can encode many of the behaviours scripts are traditionally used to achieve. Scriptless scripts enable functionality such as atomic swaps and Lightning Network-style payment channels [5].
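The blinding-factor bookkeeping at the heart of Mimblewimble can be sketched with toy Pedersen commitments. A real implementation uses elliptic-curve points (secp256k1 in both Grin and BEAM); here a multiplicative group modulo a prime, with arbitrary toy generators, stands in for the curve arithmetic purely to show the homomorphic balance check.

```python
# Toy Pedersen commitments in a prime-modulus multiplicative group.
# Real Mimblewimble uses elliptic-curve points; the homomorphic
# bookkeeping shown here is the same idea.
P = 2**127 - 1          # a Mersenne prime as the group modulus (toy choice)
G, H = 3, 7             # two "generators" (toy values)

def commit(value: int, blinding: int) -> int:
    # C = G^v * H^r mod P  (stands in for C = v*G + r*H on a curve)
    return (pow(G, value, P) * pow(H, blinding, P)) % P

# A 10-coin input spent into a 4-coin and a 6-coin output.
r_in, r_out1, r_out2 = 11, 5, 9
c_in = commit(10, r_in)
c_out = (commit(4, r_out1) * commit(6, r_out2)) % P

# Because the values balance, outputs/inputs collapses to H^(excess):
# the "kernel excess" that the transaction signature commits to.
excess = r_out1 + r_out2 - r_in
assert (c_out * pow(c_in, -1, P)) % P == pow(H, excess, P)
```

Verifiers never learn the values 10, 4 and 6; they only check that the commitments cancel out to a valid excess.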
Grin and BEAM both implement the Mimblewimble protocol, but each has been built from scratch: Grin is written in Rust and BEAM in C++. The remainder of this report focuses on describing some of the aspects of each project that set them apart. Both projects are still early in their development cycle and many of these details are changing daily. Furthermore, the BEAM project documentation is still mostly available only in Russian, so as of the writing of this report not all the technical details are available to English readers. As such, the discussion in this report will most likely become outdated as the respective projects evolve.
The remainder of this report is structured as follows. Firstly, some implementation details and unique features of each project are discussed. Secondly, we examine the differences in the proof-of-work algorithms employed, and finally we discuss the different governance models the projects use.
Comparison of Features and Implementation in Grin vs BEAM
The two projects are being independently built from scratch by different teams in different languages (Rust [6] and C++ [7]), so there will be many differences in the raw implementations. For example, Grin uses LMDB for its embedded database and BEAM uses SQLite; these have performance differences but are functionally similar. Grin uses a Directed Acyclic Graph (DAG) to represent its mempool to avoid transaction reference loops [8], while BEAM uses a multiset key-value data structure with logic to enable some of its extended features [7].
From a feature perspective, the two projects exhibit all the features inherent to Mimblewimble. Grin's stated goal is to produce a simple and easy-to-maintain implementation of the Mimblewimble protocol [3]. BEAM's implementation, however, contains a number of modifications to the Mimblewimble approach, with the aim of providing some unique features. Before we get into the features and design elements that set the two projects apart, let's discuss an interesting feature that both projects have implemented.
Both Grin and BEAM have incorporated a version of the Dandelion relay protocol that supports transaction aggregation. One of the major outstanding privacy challenges cryptocurrencies face is that transactions can be tracked as they are added to the mempool and propagate across the network, and linked to their originating IP addresses. This information can be used to deanonymize users, even on networks with strong transaction privacy. The Dandelion network propagation scheme was proposed to improve privacy during the propagation of transactions to the network [9]. In this scheme, transactions are propagated in two phases: the anonymity phase (or "stem" phase) and the spreading phase (or "fluff" phase), as illustrated in Figure 1. In the stem phase, a transaction is propagated to only a single peer, selected at random from the node's peer list. After a random number of hops along the network, each hop propagating to only a single random peer, the process enters the second phase. During the fluff phase, the transaction is propagated using the full flood/diffusion method found in most networks. Because the transaction first travels to a random point in the network before flooding, its origin becomes much more difficult to track.
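The two-phase relay can be illustrated with a small simulation. The ring topology, hop count and helper names below are illustrative only, not either project's actual networking parameters.

```python
import random

def dandelion_propagate(peers, origin, stem_hops, rng):
    """Toy Dandelion relay: a stem phase of single-peer hops followed
    by a fluff (flood) phase. `peers` maps node -> list of neighbours."""
    # Stem phase: forward to one randomly chosen peer per hop.
    node = origin
    stem_path = [origin]
    for _ in range(stem_hops):
        node = rng.choice(peers[node])
        stem_path.append(node)
    # Fluff phase: full flood/diffusion from wherever the stem ended.
    seen, frontier = {node}, [node]
    while frontier:
        frontier = [n for cur in frontier for n in peers[cur] if n not in seen]
        seen.update(frontier)
    return stem_path, seen

# A small ring network: every node reachable from every other.
peers = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
rng = random.Random(42)
path, reached = dandelion_propagate(peers, origin=0, stem_hops=3, rng=rng)
assert len(path) == 4            # origin plus 3 stem hops
assert reached == set(range(6))  # fluff floods the whole network
```

An observer watching the flood phase sees the transaction first appear at the end of the stem path, not at the true origin.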
Both projects have adapted this approach to work with Mimblewimble transactions. Grin's implementation allows for transaction aggregation and cut-through in the stem phase of propagation, which provides even greater anonymity to the transactions before they spread during the fluff phase [10]. In addition to transaction aggregation and cut-through, BEAM introduces "dummy" transactions in the stem phase to compensate for situations when real transactions are not available [33].
Grin unique features
Grin aims to be a simple and minimal reference implementation of a Mimblewimble blockchain, so it does not set out to include many features beyond the core Mimblewimble functionality discussed above. However, the Grin implementation does include some interesting implementation choices, which are documented in depth on the project's growing GitHub repository wiki.
Grin has implemented a method for a node to sync the blockchain very quickly by downloading only a partial history [11]. A new node entering the network queries the current head block of the chain and then requests the block header at a horizon; in the example given, the horizon is initially set at 5,000 blocks before the current head. The node then checks whether there is enough data to confirm consensus; if there isn't, it increases its horizon until consensus is reached. At that point it downloads the full UTXO set of the horizon block. This approach introduces a few security risks, but mitigations are provided, and the result is that a node can sync to the network with an order of magnitude less data.
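The sync loop described above might look roughly like the following sketch. Every helper here (`fetch_header`, `have_consensus`, `fetch_utxo_set`) is a hypothetical mock of the node's real networking calls, and the horizon-doubling retry policy is an illustrative choice, not Grin's specified one.

```python
# Sketch of a Grin-style horizon sync against a mocked chain.
HEAD_HEIGHT = 20_000
CONSENSUS_DEPTH = 8_000    # toy rule: how far back peers agree

def fetch_header(height):              # mocked network call
    return {"height": height}

def have_consensus(header):            # mocked check: enough peer agreement?
    return HEAD_HEIGHT - header["height"] >= CONSENSUS_DEPTH

def fetch_utxo_set(header):            # mocked download of the full UTXO set
    return {"utxo_snapshot_at": header["height"]}

def horizon_sync(head_height, horizon=5_000):
    # Ask for the header `horizon` blocks behind the head; if there is
    # not enough agreement, push the horizon further back and retry.
    while True:
        header = fetch_header(max(head_height - horizon, 0))
        if have_consensus(header):
            return header, fetch_utxo_set(header)
        horizon *= 2

header, utxos = horizon_sync(HEAD_HEIGHT)
assert header["height"] == 10_000   # the 5,000 horizon failed, 10,000 passed
```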
Since the initial writing of this article (October 2018), BEAM has published its solution for fast node synchronization, which uses macroblocks. A macroblock is a complete state of all UTXOs, periodically created by BEAM nodes [12].
BEAM unique features
BEAM has set out to extend the Mimblewimble feature set in a number of ways. BEAM supports setting an explicit incubation period on a UTXO, which prevents it from being spent until a specified number of blocks have passed after its creation [13]. This is different from a timelock, which prevents a transaction from being added to a block before a certain time. BEAM also supports the traditional timelock feature, but adds the ability to specify an upper time limit after which the transaction can no longer be included in a block [13]. This means a party can be sure that if a transaction is not included in a block on the main blockchain within a certain time, it will never appear.
Another unique feature of BEAM is an implementation of an auditable wallet. For a business to operate in a given regulatory environment, it needs to demonstrate its compliance to the relevant authorities. BEAM has proposed a wallet, designed for compliant businesses, that generates additional public/private key pairs specifically for audit purposes. Transactions are tagged with signatures from these keys so that only the auditing authority that is given the public key can identify those transactions on the blockchain, but cannot create transactions with this tag itself. This allows a business to provide visibility of its transactions to a given authority without compromising its privacy to the public [14].
BEAM has also proposed a feature aimed at keeping the blockchain as compact as possible. In Mimblewimble, as transactions are added, cut-through is performed, eliminating all intermediary transaction commitments [3]. However, the transaction kernels for every transaction are never removed. BEAM has proposed a scheme to reuse these kernels to validate subsequent transactions [13]. In order to consume an existing kernel without compromising the transaction irreversibility principle, BEAM proposes that the same user who has visibility of the old kernel applies a multiplier to it for use in a new transaction. To incentivize transactions being built this way, BEAM includes a fee refund model for these types of transactions. This feature will not be part of the initial release.
When constructing a valid Mimblewimble transaction, the parties involved need to collaborate in order to choose blinding factors that balance. This interactive negotiation requires a number of steps and implies that the parties need to be in communication to finalize the transaction. Grin facilitates this process by having the two parties connect directly to each other over a socket-based channel for a "real-time" session. This means both parties need to be online simultaneously. BEAM has implemented a Secure Bulletin Board System (SBBS), run on BEAM full nodes, to allow for asynchronous negotiation of transactions [30], [31].
Requiring the interactive participation of both parties in constructing a transaction can be a point of friction when using a Mimblewimble blockchain. In addition to the secure BBS communication channel, BEAM also plans to support one-sided transactions, in which the payee, who expects to be paid a certain amount, constructs their half of the transaction and sends this half-constructed transaction to the payer. The payer then finishes constructing the transaction and publishes it to the blockchain. Under the normal Mimblewimble system this is not possible, because it would involve revealing your blinding factor to the counterparty. BEAM solves this problem with a process it calls kernel fusion, whereby a kernel can include a reference to another kernel such that it is only valid if both kernels are present in the transaction. In this way the payee can build their half of the transaction with a secret blinding factor and a kernel that compensates for their blinding factor, which must be included when the payer completes the transaction [13]. BEAM has indicated that this feature will be part of the initial release.
Both projects make use of a number of Merkle tree structures to keep track of various aspects of the respective blockchains. The exact trees and what they record are documented for both projects [16], [17]. BEAM, however, uses a Radix-Hash tree structure for some of its trees: a modified Merkle tree that is also a binary search tree. This provides a number of features that standard Merkle trees do not have, which BEAM exploits in its implementation [17].
The features discussed here can all be seen in the code at the time of writing, though that is no guarantee that they are working. A couple of features mentioned in the literature as planned for the future have not yet been implemented. These include embedding signed textual content into transactions, which can be used to record contract text [13], and the issuing of confidential assets [18].
Proof of Work Mining Algorithm
BEAM has announced that it will employ the Equihash Proof of Work (PoW) mining algorithm with the parameters set to n = 150, k = 5 [32]. Equihash was proposed in 2016 as a memory-hard PoW algorithm that relied heavily on memory usage to achieve Application-Specific Integrated Circuit (ASIC) resistance [19]. The goal was to produce an algorithm that would be more efficient to run on consumer GPUs, as opposed to the growing field of ASIC miners, at the time mainly produced by Bitmain. It was hoped this would help decentralise the mining power of cryptocurrencies using the algorithm. The idea behind Equihash's ASIC resistance was that, at the time, implementing memory in an ASIC was expensive, so GPUs were more efficient at calculating the Equihash PoW. This ASIC resistance lasted for a while, but in early 2018 Bitmain released an Equihash ASIC that was significantly more efficient than GPUs for the configurations used by Zcash, Bitcoin Gold and ZenCash, to name a few. It is possible to tweak the parameters of the Equihash algorithm to make it more memory intensive, and thus make current ASICs and older GPU mining farms obsolete, but it remains to be seen whether BEAM will do this. No block time had been published as of the writing of this report.
Grin initially opted to use the new Cuckoo Cycle PoW algorithm, also purported to be ASIC-resistant due to being memory-latency bound [20]. This means the algorithm is bound by memory latency rather than raw processor speed, with the hope that this will make mining possible on commodity hardware.
In August 2018, the Grin team announced that they had become aware that an ASIC for the Cuckoo Cycle algorithm would likely be available at the launch of their mainnet [21]. While they acknowledge that ASIC mining is inevitable, they are concerned that the current ASIC market is very centralized (i.e. Bitmain), and they want to foster a grassroots GPU mining community in the early days of Grin. Grin aims to foster this community for two years, by which time they hope ASICs will have become more of a commodity and thus decentralized.
To address this, it was proposed to use two PoW algorithms initially: one ASIC-Friendly (AF) and one ASIC-Resistant (AR), and then to select which PoW is used per block so as to balance the mining rewards between the two algorithms over a 24-hour period. The governance committee resolved on 25 September 2018 to go ahead with this approach, using a modified version of the Cuckoo Cycle algorithm called Cuckatoo Cycle. The AF algorithm at launch will be Cuckatoo32+, which will gradually increase its memory requirements to make older single-chip ASICs obsolete over time. The AR algorithm is still not defined [23].
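One way such per-block balancing could work is sketched below. The selection rule and the 10% AR target are assumptions chosen purely for illustration, not Grin's published design (which balances rewards between the two families over a 24-hour window).

```python
# Toy sketch: pick the PoW family per block so that the ASIC-resistant
# (AR) share of a recent window tracks a target fraction; the rest of
# the blocks go to the ASIC-friendly (AF) family.
def pick_algorithm(recent_blocks, ar_target=0.1):
    """Choose AR whenever its share of the recent window is below target."""
    if not recent_blocks:
        return "AR"
    ar_share = recent_blocks.count("AR") / len(recent_blocks)
    return "AR" if ar_share < ar_target else "AF"

window = []
for _ in range(1440):                 # ~24 hours of one-minute blocks
    algo = pick_algorithm(window[-1440:])
    window.append(algo)

ar_share = window.count("AR") / len(window)
assert 0.05 < ar_share < 0.15         # hovers around the 10% target
```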
Governance Models and Monetary Policy
Both the Grin and BEAM projects are open source and available on GitHub [6], [7]. The Grin project has 75 contributors, of which 8 have contributed the vast majority of the code; BEAM has 10 contributors, of which 4 have contributed the vast majority of the code (at the time of writing). The two projects have opted for different models of governance. BEAM has opted to set up a foundation, of which the core developers are members, to manage the project; this is the route taken by the majority of cryptocurrency projects in this space. The Grin community has decided against setting up a central foundation and has compiled an interesting discussion of the pros and cons of a centralized foundation [22]. This document weighs up the various governance functions a foundation might serve and evaluates each use case in depth. The Grin community came to the conclusion that, while foundations are useful, they do not represent the only solution to governance problems, and has opted to remain a completely decentralized, community-driven project. Currently, decisions are made in periodic governance meetings convened on Gitter with community members, where an agenda is discussed and decisions are ratified. The meeting agendas and minutes can be found in the governance section of the Grin Forums; an example of the outcomes of such a meeting can be seen in [23].
Neither project will engage in an ICO or premine, but the two projects have different funding models. BEAM set up an LLC and attracted investors for its initial round of funding; for sustainability, it will put 20% of each block's mining reward into a treasury used to fund further development and promotion of BEAM, as well as to fund a non-profit Beam Foundation that will take over management of the protocol during the first year after launch [24]. The goal of the Foundation will be to support maintenance and further development of Beam, promote relevant cryptographic research, support awareness and education in the areas of financial privacy, and support academic work in adjacent areas. In the industry this treasury mechanism is called a dev tax. Grin will not levy a dev tax on mining rewards and will instead rely on community participation and community funding. The Grin project does accept financial support, but these funding campaigns are conducted according to the project's "Community Funding Principles" [25] on a need-by-need basis. A campaign specifies the particular need it aims to fulfil (e.g. "hosting fees for X for the next year"), and the funding is received by the community member who ran the campaign, providing full visibility of who is responsible for the received funds. An example is the Developer Funding Campaign run by Yeastplume to fund his full-time involvement in the project from October 2018 to February 2019 [26].
In terms of monetary policy, BEAM has stated that it will use a deflationary model with periodic halving of the mining reward and a maximum supply of approximately 262 million coins. BEAM will start with 100 coins emitted per block; the first halving will occur after one year, and halving will then happen every four years [32]. Grin has opted for an inflationary model in which the block reward remains constant; the project makes its arguments for this approach in [27]. This approach asymptotically tends towards zero percent dilution as the supply increases, instead of enforcing a fixed supply [28]. Grin has not finalized its mining reward or fee structure yet, but based on its current documentation Grin is planning a reward of 60 Grin per block. Neither project has made a final decision on how to structure fees, but the Grin project has started to explore setting a fee baseline using a metric of "fees per reward per minute" [28].
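The ~262 million maximum supply can be checked from the figures quoted above (100 coins per block, first halving after one year, then every four years). One-minute blocks are assumed here; BEAM's block time had not been published at the time of writing, but this assumption reproduces the quoted cap.

```python
# Assumes one-minute blocks (~525,600 per year) -- an assumption, since
# BEAM's block time was not yet published, but it reproduces the quoted
# ~262 million maximum supply.
BLOCKS_PER_YEAR = 60 * 24 * 365

def beam_total_supply():
    # 100 coins/block for year one, then the reward halves every
    # four years: 50, 25, 12.5, ... summed to exhaustion.
    total = BLOCKS_PER_YEAR * 100      # year 1
    reward = 50.0
    while reward > 1e-9:
        total += 4 * BLOCKS_PER_YEAR * reward
        reward /= 2
    return total

supply = beam_total_supply()
assert 262e6 < supply < 264e6          # ~262.8 million coins
```

The geometric tail (50 + 25 + ...) sums to 100 coins per block over four-year eras, which is why the total lands at roughly five times the first year's emission.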
Conclusions, Observations, Recommendations
In summary, Grin and BEAM are two open-source projects implementing the Mimblewimble blockchain scheme. Both projects are building from scratch; Grin is using Rust while BEAM is using C++, so there are many technical differences in their designs and implementations. From a functional perspective, however, both projects will support all the core Mimblewimble functionality. Each project does contain some unique functionality, but as Grin's goal is to produce a minimalistic implementation of Mimblewimble, the majority of the unique features that extend Mimblewimble lie in the BEAM project. The list below summarizes the functional similarities and differences between the two projects.
- Similarities:
  - Core Mimblewimble feature set
  - Dandelion relay protocol
- Grin unique features:
  - Partial history syncing
  - DAG representation of the mempool to prevent duplicate UTXOs and cyclic transaction references
- BEAM unique features:
  - Both confidential and non-confidential transactions
  - Explicit UTXO incubation period
  - Timelocks with a minimum and maximum threshold
  - Auditable transactions
  - Secure BBS system hosted on the nodes for non-interactive transaction negotiation
  - One-sided transaction construction
  - Incentives to consume old UTXOs in order to keep the blockchain compact
  - Use of Radix-Hash trees

These projects are still very young; as of the writing of this report, both are still in the testnet phase, and many of their core design choices have not yet been built or tested. Much of the BEAM wiki is still in Russian, so there are likely details there that we are not yet privy to. It will be interesting to keep an eye on these projects to see how their various decisions play out, both technically and in terms of their monetary policy and governance models.
References
[1] T.E. Jedusor, "Mimblewimble", https://download.wpsoftware.net/bitcoin/wizardry/mimblewimble.txt, Date accessed: 2018-09-30.
[2] A. Poelstra, "Mimblewimble", https://download.wpsoftware.net/bitcoin/wizardry/mimblewimble.pdf, Date accessed: 2018-09-30.
[3] "Introduction to Mimblewimble and Grin", https://github.com/mimblewimble/grin/blob/master/doc/intro.md, Date accessed: 2018-09-30.
[4] "BEAM: The Scalable Confidential Cryptocurrency", https://docs.wixstatic.com/ugd/87affd_3b032677d12b43ceb53fa38d5948cb08.pdf, Date accessed: 2018-09-28.
[5] A. Gibson, "Flipping the scriptless script on Schnorr", https://joinmarket.me/blog/blog/flipping-the-scriptless-script-on-schnorr/, Date accessed: 2018-09-30.
[6] Grin GitHub repository, https://github.com/mimblewimble/grin, Date accessed: 2018-09-30.
[7] BEAM GitHub repository, https://github.com/BeamMW/beam, Date accessed: 2018-09-30.
[8] "Grin - Transaction Pool", https://github.com/mimblewimble/grin/blob/master/doc/internal/pool.md, Date accessed: 2018-10-22.
[9] S. Bojja Venkatakrishnan, G. Fanti and P. Viswanath, "Dandelion: Redesigning the Bitcoin Network for Anonymity", Proc. ACM Meas. Anal. Comput. Syst. 1, 1, 2017.
[10] "Dandelion in Grin: Privacy-Preserving Transaction Aggregation and Propagation", https://github.com/mimblewimble/grin/blob/master/doc/dandelion/dandelion.md, Date accessed: 2018-09-30.
[11] "Grin - Blockchain Syncing", https://github.com/mimblewimble/grin/blob/master/doc/chain/chain_sync.md, Date accessed: 2018-10-22.
[12] "BEAM Node initial synchronization", https://github.com/BeamMW/beam/wiki/Node-initial-synchronization, Date accessed: 2018-12-24.
[13] "BEAM description. Comparison with classical MW", https://www.scribd.com/document/385080303/BEAM-Description-Comparison-With-Classical-MW, Date accessed: 2018-10-18.
[14] "BEAM - Wallet Audit", https://github.com/BeamMW/beam/wiki/Wallet-audit, Date accessed: 2018-09-30.
[15] "Beam's offline transactions using Secure BBS system", https://www.reddit.com/r/beamprivacy/comments/9fqbfg/beams_offline_transactions_using_secure_bbs_system/, Date accessed: 2018-10-22.
[16] "Grin - Merkle Structures", https://github.com/mimblewimble/grin/blob/master/doc/merkle.md, Date accessed: 2018-10-22.
[17] "BEAM - Merkle Trees", https://github.com/BeamMW/beam/wiki/Merkle-trees, Date accessed: 2018-10-22.
[18] "BEAM - Confidential Assets", https://github.com/BeamMW/beam/wiki/Confidential-assets, Date accessed: 2018-10-22.
[19] A. Biryukov and D. Khovratovich, "Equihash: Asymmetric Proof-of-Work Based on the Generalized Birthday Problem", Proceedings of NDSS, 2016.
[20] "Cuckoo Cycle", https://github.com/tromp/cuckoo, Date accessed: 2018-09-30.
[21] I. Peverell, "Proof of work update", https://www.grin-forum.org/t/proof-of-work-update/713.
[22] "Regarding Foundations", https://github.com/mimblewimble/docs/wiki/Regarding-Foundations, Date accessed: 2018-09-30.
[23] "Meeting Notes: Governance, Sep 25 2018", https://www.grin-forum.org/t/meeting-notes-governance-sep-25-2018/874, Date accessed: 2018-09-30.
[24] "BEAM Features", https://www.beam-mw.com/features, Date accessed: 2018-09-30.
[25] "Grin's Community Funding Principles", https://grin-tech.org/funding.html, Date accessed: 2018-09-28.
[26] "Oct 2018 - Feb 2019 Developer Funding - Yeastplume", https://grin-tech.org/yeastplume.html, Date accessed: 2018-09-30.
[27] "Monetary Policy", https://github.com/mimblewimble/docs/wiki/Monetary-Policy, Date accessed: 2018-09-30.
[28] "Economic Policy: Fees and Mining Reward", https://github.com/mimblewimble/grin/wiki/fees-mining, Date accessed: 2018-09-30.
[29] "Grin's Proof-of-Work", https://github.com/mimblewimble/grin/blob/master/doc/pow/pow.md, Date accessed: 2018-09-30.
[30] R. Lahat, "The Secure Bulletin Board System (SBBS) implementation in Beam", https://medium.com/beam-mw/the-secure-bulletin-board-system-sbbs-implementation-in-beam-a01b91c0e919, Date accessed: 2018-12-24.
[31] "Secure bulletin board system (SBBS)", https://github.com/BeamMW/beam/wiki/Secure-bulletin-board-system-(SBBS), Date accessed: 2018-12-24.
[32] "Beam's mining specification", https://github.com/BeamMW/beam/wiki/BEAM-Mining, Date accessed: 2018-12-24.
[33] "Beam's transaction graph obfuscation", https://github.com/BeamMW/beam/wiki/Transaction-graph-obfuscation, Date accessed: 2018-12-24.
Contributors
- https://github.com/philiprza
- https://github.com/hansieodendaal
- https://github.com/SWvheerden
- Some clarifications by BEAM CEO Alexander Zaidelson (https://github.com/azaidelson)
Appendices
This section contains further details on some topics discussed above that are not directly relevant to the Grin vs. BEAM comparison.
Appendix A: Cuckoo/Cuckatoo Cycle PoW algorithm
The Cuckoo Cycle algorithm is based on finding cycles of a certain length among the edges of a bipartite graph of N nodes and M edges. The graph is bipartite because it consists of two separate groups of nodes, with edges only connecting nodes from one set to the other. As an example, consider nodes with even indices to be in one group and nodes with odd indices in a second group. Figure 2 shows eight nodes with four randomly placed edges (N = 8 and M = 4). If we are looking for cycles of length 4, we can easily confirm that none exist in Figure 2. By adjusting the number of edges in the graph relative to the number of nodes, we can control the probability that a cycle of a certain length exists in the graph. In the 4/8 (M/N) graph of Figure 2, all four edges would need to be randomly chosen to form an exact cycle for a cycle of length 4 to exist [29].
If we increase the number of edges in the graph relative to the number of nodes, we adjust the probability of a cycle occurring in the randomly chosen set of edges. Figure 3 shows an example of the M = 7 and N = 8 case, in which a four-edge cycle appears. Thus we can control the probability of a cycle of a given length occurring by adjusting the ratio M/N [29].
Detecting whether a cycle of a certain length has occurred in a graph with randomly selected edges becomes significantly more difficult as the graph gets larger. Figure 4 shows a 22-node graph with 14 random edges. Can you determine whether a cycle of eight edges is present? [29]
The Cuckoo Cycle PoW algorithm is built around this problem. The bipartite graph that is analyzed is called a "Cuckoo hashtable", in which a key is inserted into two arrays, each with its own hash function, at a location based on the hash of the key. Each key inserted in this way produces an edge between the locations generated by the two hash functions. Nonces are enumerated for the hash functions until a cycle of the desired length is detected. The algorithm has two main parameters that control its difficulty: the M/N ratio and the number of nodes in the graph. There are a number of variants of this algorithm that make different speed/memory trade-offs [20]. Grin introduced a third difficulty parameter to more finely tune the difficulty of the PoW algorithm and ensure a one-minute block time in the face of changing network hash rates: a Blake2b hash of the set of nonces must lie above a difficulty threshold [29].
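The cycle-finding idea can be illustrated in a few lines. For cycles of length 4 in a bipartite graph, it suffices to find two nodes on one side that share at least two neighbours on the other side; the graph sizes and edge counts below echo the M/N examples above but are otherwise illustrative.

```python
import random
from itertools import combinations

def has_four_cycle(edges):
    """A bipartite graph has a 4-cycle iff two left-side nodes share
    at least two right-side neighbours (the four shared edges form the
    cycle)."""
    neighbours = {}
    for left, right in edges:
        neighbours.setdefault(left, set()).add(right)
    return any(len(neighbours[a] & neighbours[b]) >= 2
               for a, b in combinations(neighbours, 2))

# Figure-2-style sparse graph (M/N = 4/8): no 4-cycle here.
assert not has_four_cycle([(0, 1), (2, 3), (4, 5), (6, 7)])

# Denser graph: nodes 0 and 2 both connect to 1 and 3, giving a 4-cycle.
assert has_four_cycle([(0, 1), (0, 3), (2, 1), (2, 3), (4, 5)])

# Raising M/N makes 4-cycles appear more often in random graphs.
rng = random.Random(1)
def random_graph(m):
    # Left nodes are even (0,2,4,6), right nodes odd (1,3,5,7).
    return [(2 * rng.randrange(4), 2 * rng.randrange(4) + 1) for _ in range(m)]

sparse = sum(has_four_cycle(random_graph(4)) for _ in range(500))
dense = sum(has_four_cycle(random_graph(7)) for _ in range(500))
assert dense > sparse
```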
Grin Design Choice Criticisms - Truth or Fiction
Introduction
Grin is a cryptocurrency implemented in Rust that makes use of Mimblewimble transactions and the Cuckatoo algorithm to perform Proof-of-Work (PoW) calculations. The main design goals of the Grin project are privacy, transaction scaling and design simplicity, to promote long-term maintenance of the Grin source code [1].
During the development of the Grin project, the developers have received criticism from the community on a number of their design and implementation decisions. This report looks at some of these criticisms and considers whether there is truth to the concerns, or whether they are unwarranted or invalid. Some suggestions are made as to how these problems could be improved or addressed.
This report also investigates Grin's selected emission scheme, PoW algorithm, choice of key store library and choice of cryptographic curve used for signatures. Each of these topics is discussed in detail, starting with the selected emission scheme.
Monetary Policy Due to Static Emission Scheme
Bitcoin has a limited and finite supply of coins. It makes use of 10-minute block times, and the initial reward for solving the first block was 50 BTC. This reward is halved every four years until a maximum of 21 million coins is in circulation [2]. During this process, transaction fees and newly minted coins are paid to miners as an incentive to maintain the blockchain. Once all 21 million bitcoins have been released, only transaction fees will be paid to miners. Many fear that paying miners transaction fees alone will not be sufficient to sustain a large network of miners, and that it will result in centralisation of the network, as only large mining farms will be able to mine profitably. Others believe that in time mining fees will increase and miners' hardware costs will decrease, keeping the mining and maintenance of the Bitcoin blockchain lucrative and profitable [3].
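The 21 million cap follows directly from the halving schedule: 50 BTC per block, halved every 210,000 blocks (roughly four years at 10-minute blocks), with sub-satoshi amounts rounded down.

```python
def bitcoin_total_supply():
    # 50 BTC initial reward, halved every 210,000 blocks, with the
    # reward tracked in satoshis so rounding-down matches consensus.
    satoshis, reward = 0, 50 * 100_000_000
    while reward > 0:
        satoshis += 210_000 * reward
        reward //= 2          # integer halving: sub-satoshi amounts vanish
    return satoshis / 100_000_000

supply = bitcoin_total_supply()
assert 20_999_999 < supply < 21_000_000   # just under 21 million BTC
```

The geometric series 50 + 25 + 12.5 + ... sums to 100 BTC per block position across all eras, giving 210,000 x 100 = 21 million as the limit; integer rounding leaves the true total slightly below it.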
Grin has decided on a different approach, in which the number of coins is not capped at a fixed supply. It makes use of a static emission rate, where a constant 60 Grin is released as the reward for solving every block. The algorithm uses a block time goal of 60 seconds, which results in roughly one coin being created every second for as long as the blockchain is maintained [4].
Their primary motivations for selecting a static emission rate are:
 there will be no upper limit on the number of coins that can be created,
 the percentage of newly created coins compared to the total coins in circulation will tend toward zero,
 it will mitigate the effect of orphaned and lost coins,
 it will encourage spending rather than holding of coins.
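The inflation figures quoted in this section follow directly from the one-Grin-per-second emission: after $t$ years the supply is proportional to $t$, so annual inflation is simply $1/t$. A quick check in Python:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# 60 Grin per 60-second block works out to 1 Grin per second.
ANNUAL_ISSUANCE = SECONDS_PER_YEAR  # in Grin

def inflation_pct(years: int) -> float:
    """Annual issuance as a percentage of the total supply after `years` years."""
    supply = ANNUAL_ISSUANCE * years
    return ANNUAL_ISSUANCE / supply * 100

for years in (1, 10, 50):
    print(f"Year {years}: {inflation_pct(years):.1f}% inflation")
```

This reproduces the figures discussed below: roughly 10% inflation at year 10, dropping to 2% only around year 50.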
The selected emission rate means that Grin will be a high-inflation currency, with more than 10% inflation for the first 10 years, which is higher than most competing cryptocurrencies or successful fiat systems. This is in contrast to cryptocurrencies such as Monero, which will have less than 1% inflation after its first 8 years in circulation, and a decreasing 0.87% inflation rate once its tail emission starts [5]. Monero will thus have better potential as a Store of Value (SoV) in the long run.
The static emission rate of Grin, on the other hand, will limit its use as a SoV, as the currency will experience constant price pressure; this might make it difficult for Grin to maintain a high value initially, while the inflation rate remains high. The high inflation rate might instead encourage the use of Grin as a Medium of Exchange (MoE) [6], as it will take approximately 50 years for the inflation rate to drop below 2%. The Grin team argues that the inflation rate is not as high as it appears, because many coins are lost and become unusable on a blockchain. These lost coins, which they estimate could be as much as 2% of the total supply per year, should in their view be excluded from the inflation calculation [7]. The total percentage of lost transactional coins is difficult to estimate [8], and this value seems to be higher for low-value coins than for high-value coins, where users tend to be more careful. The Grin team believes that selecting a high inflation rate will improve the distribution of coins, as holding of coins will be discouraged. They also hope that a high inflation rate will produce natural pricing and limit price manipulation by large coin holders [7].
Most economists agree that, in traditional fiat systems, deflation is bad because it increases debt, while some inflation is good because it stimulates the economy of a country [9]. With inflation, the purchasing power of savings decreases over time, which encourages the purchase of goods and services and results in the currency being used as a MoE rather than a SoV. People with debt, such as study, vehicle and home loans, also benefit from inflation, as it erodes the total debt over long repayment periods. Currently, this benefit does not apply to cryptocurrencies, since little debt exists in them: the anonymous nature of cryptocurrencies makes it difficult to maintain successful borrower-lender relationships [10].
Deflation in traditional fiat systems, on the other hand, produces an increase in purchasing power over time, which encourages saving and discourages debt, resulting in the currency being used as a SoV. Unfortunately, this comes with the negative side effect that people stop purchasing goods and services. Bitcoin can be considered deflationary, as people would rather buy and hold Bitcoins because the price per coin might increase over time; this limits its use as a MoE. High deflation can also cause a deflationary spiral: people with debt end up owing more in real terms, while people with money start hoarding it because it might be worth more later [11]. Deflation in traditional fiat systems typically only occurs in times of economic crisis and recession, and is managed by introducing inflation through monetary policy [12].
As most inflationary fiat systems are government backed, governments are able to control the amount of inflation to help alleviate government debt and finance budget deficits [13]. This can result in hyperinflation, where the devaluation of the currency occurs at an extreme pace, causing many people to lose their savings and pensions [14]. Cryptocurrencies, on the other hand, provide transparent algorithmic monetary inflation that is not controlled by a central authority or government, limiting its misuse.
Finding a good balance between being a SoV and a MoE is an important issue in developing a successful currency. A balance between deflation and inflation needs to be found to motivate both saving and spending. A low-inflation model, where inflation is algorithmically maintained and not controlled by a single authority, seems like the safest choice, but only time will tell whether the high-inflation model proposed by Grin will have the desired effect.
From ASIC Resistant to ASIC Friendly
Initially, the Grin team proposed using two Application-Specific Integrated Circuit (ASIC) resistant algorithms: Cuckoo Cycle and a high-memory-requirement Equihash algorithm called Equigrin. These algorithms were selected to encourage mining decentralisation. ASIC resistance was obtained through the high memory requirements of the PoW algorithms, limiting their calculation to Central Processing Units (CPUs) and high-end Graphics Processing Units (GPUs) [15]. The plan was to adjust the parameters of these PoW algorithms every six months to deter stealth ASIC mining, and eventually to move over to Cuckoo Cycle as the only PoW algorithm.
Recently, the Grin team proposed switching to a new dual PoW system, in which one PoW algorithm is ASIC friendly and the other is not. Grin will now make use of the new Cuckatoo Cycle algorithm, but details of the second PoW algorithm remain vague. The Cuckatoo PoW algorithm is a variation of Cuckoo Cycle that aims to be more ASIC friendly [16]. This is achieved by using plain bits for ternary counters and requiring large amounts of Static Random-Access Memory (SRAM) to speed up the memory-latency-bound access of random node bits. SRAM tends to be limited on GPUs and CPUs, but increasing SRAM on ASIC processors is much easier to implement [17].
ASIC miners tend to be specialised hardware that is very efficient at calculating and solving specific PoW algorithms. Encouraging ASIC miners on a network might not seem like a bad idea, as the mining network will have a higher hash rate, making it more difficult to attack, and it will use less electrical power than a network of primarily CPU- and GPU-based miners.
Unfortunately, a negative side effect of running an ASIC-friendly PoW algorithm is that the network of miners becomes more centralised. General consumers have neither access to nor a need for this type of hardware, limiting the use of ASIC miners primarily to enthusiasts and large corporations establishing mining farms. Having the majority of the network's hash rate localised in large mining farms makes the blockchain more vulnerable to potential 51% attacks [18], especially when specific ASIC manufacturers recommend or enforce the use of specific mining pools, controlled by single bodies, with their hardware.
Using general-purpose, multi-use hardware such as CPUs and GPUs, which are primarily used for gaming and large workstations, ensures that the network of miners is more widely distributed and not controlled by a single potential bad actor. This makes it more difficult for a single entity to control more than 50% of the network's hash rate or total computational power, limiting the potential for double spends.
Choosing to be ASIC resistant or ASIC friendly is an important decision that can affect the security of the blockchain. The Grin team's choice to support the ASIC community while trying to balance an ASIC-friendly and an ASIC-resistant PoW algorithm will be interesting to watch, with many potential pitfalls.
Choice of Cryptographic Elliptic Curve - secp256k1
Elliptic curve cryptography is used to generate private and public key pairs, which can be used for digital signatures as well as authorisation of individuals and transactions. It is much more secure, and requires smaller keys for similar security, than other public-key cryptography techniques such as RSA [19].
Secp256k1 is an elliptic curve defined in the Standards for Efficient Cryptography [20], and is used for digital signatures in a number of cryptocurrencies such as Bitcoin, Ethereum, EOS and Litecoin [21]. Grin makes use of this same elliptic curve [22]. Some security experts recommend against using the secp256k1 curve, as some issues have been uncovered, although not necessarily exploited. One of these problems is that the complex-multiplication field discriminant is not high enough to be secure. This could enable future exploits, as curves with a low complex-multiplication field discriminant tend to be easier to break [23].
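As an aside, the secp256k1 domain parameters are published in SEC 2, and it is easy to sanity-check that the standard base point lies on the curve $y^2 = x^3 + 7$ over the prime field. This is only an illustration, not a substitute for an audited cryptographic library:

```python
# secp256k1 domain parameters, as published in SEC 2.
p = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def on_curve(x: int, y: int) -> bool:
    """Check that a point satisfies the secp256k1 equation y^2 = x^3 + 7 (mod p)."""
    return (y * y - (x**3 + 7)) % p == 0

assert on_curve(Gx, Gy)  # the standard base point lies on the curve
```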
Starting a project with a potentially compromised curve does not seem like a good idea, especially when other curves with better security properties and characteristics exist. A number of alternative curves could be used to improve security, such as Curve25519, which can be used with the improved Ed25519 public-key signature system. The Ed25519 signature scheme makes use of the Edwards-curve Digital Signature Algorithm (EdDSA), and uses SHA-512 and Curve25519 [24] to build a fast signature scheme without sacrificing security.
Many additional alternatives exist, and platforms such as SafeCurves, maintained by Daniel J. Bernstein and Tanja Lange, can help with the investigation and selection of an alternative secure curve. The SafeCurves platform makes it easier to evaluate the security properties and potential vulnerabilities of many cryptographic curves [25].
Selection of Keystore Library
Grin originally made use of RocksDB [26] as an internal key-value store, but received some criticism for this decision. A number of alternatives with different performance and security characteristics exist, such as LevelDB [27], HyperLevelDB [28] and the Lightning Memory-Mapped Database (LMDB) [29]. Selecting the "best" key-value store library for blockchain applications remains a difficult problem, as many online sources with conflicting information exist.
Based on the contested results of a number of online benchmarks, some of these alternatives appear to have better performance, such as producing smaller database sizes and performing faster queries [30]. For example, RocksDB and LevelDB may incorrectly appear to be better alternatives to LMDB, as they produced the fastest reads and deletes, as well as some of the smallest databases, in comparisons with other database libraries [31]. This is not entirely accurate, as some mistakes were made during the testing process. Howard Chu wrote an article entitled "Lies, Damn Lies, Statistics, and Benchmarks" that exposes some of these issues and shows LMDB to be the best key-value store library [32]. Other benchmarks performed by Symas Corp support this claim, with LMDB outperforming all the tested key-store libraries [33].
Grin later replaced RocksDB with LMDB to maintain the state of Grin wallets [34], [35]. This switch looks to be a good idea, as LMDB seems to be the best key-value store library for blockchain-related applications.
Conclusions, Observations, Recommendations
 Selecting the correct emission rate to create a sustainable monetary policy is an important decision, and care should be taken to find the right balance between being a SoV and/or a MoE.
 The benefits and potential issues of being ASIC friendly compared to ASIC resistant need to be carefully weighed.
 Tools such as SafeCurves can be used to select a secure elliptic curve for an application. Cryptographic curves with even potential security vulnerabilities should rather be avoided.
 Care should be taken when using online benchmarks to select libraries for a project, as the results might be misleading.
References
[1] Grin: a lightweight implementation of the MimbleWimble protocol, Mattia Franzoni, https://medium.com/novamining/grintestnetislive98b0f8cd135d, Date accessed: 2018-10-05.
[2] Bitcoin: A Peer-to-Peer Electronic Cash System, Satoshi Nakamoto, https://bitcoin.org/bitcoin.pdf, Date accessed: 2018-10-05.
[3] What Happens to Bitcoin After All 21 Million are Mined?, Nathan Reiff, https://www.investopedia.com/tech/whathappensbitcoinafter21millionmined/, Date accessed: 2018-10-07.
[4] Emission rate of Grin, https://www.grinforum.org/t/emmissionrateofgrin/171, Date accessed: 2018-10-15.
[5] Coin Emission and Block Reward Schedules: Bitcoin vs. Monero, https://www.reddit.com/r/Monero/comments/512kwh/useful_for_learning_about_monero_coin_emission/d78tpgi, Date accessed: 2018-10-15.
[6] On Grin, MimbleWimble, and Monetary Policy, https://www.reddit.com/r/grincoin/comments/91g1nx/on_grin_mimblewimble_and_monetary_policy/, Date accessed: 2018-10-07.
[7] Grin - Monetary Policy, https://github.com/mimblewimble/docs/wiki/MonetaryPolicy, Date accessed: 2018-10-08.
[8] Exclusive: Nearly 4 Million Bitcoin Lost Forever, New Study Says, Jeff J. Roberts and Nicolas Rapp, http://fortune.com/2017/11/25/lostbitcoins/, Date accessed: 2018-10-08.
[9] How Inflationary should Cryptocurrency really be?, Andrew Ancheta, https://cryptobriefing.com/howinflationaryshouldcryptocurrencybe/, Date accessed: 2018-11-06.
[10] Debtcoin: Credit, debt, and cryptocurrencies, Landon Mutch, https://cryptoinsider.21mil.com/debtcoincreditdebtandcryptocurrencies/, Date accessed: 2018-11-06.
[11] Inflation vs Deflation: A Guide to Bitcoin & Cryptocurrencies Deflationary Nature, Brian Curran, https://blockonomi.com/bitcoindeflation/, Date accessed: 2018-11-06.
[12] Why Is Deflation Bad for the Economy?, Adam Hayes, https://www.investopedia.com/articles/personalfinance/030915/whydeflationbadeconomy.asp, Date accessed: 2018-11-06.
[13] Inflation and Debt, John H. Cochrane, https://www.nationalaffairs.com/publications/detail/inflationanddebt, Date accessed: 2018-11-07.
[14] Think Piece: Fighting Hyperinflation with Cryptocurrencies, Lucia Ziyuan, https://medium.com/@Digix/thinkpiecefightinghyperinflationwithcryptocurrenciesa08fe86bb66a, Date accessed: 2018-11-07.
[15] Grin - Proof of Work update, https://www.grinforum.org/t/proofofworkupdate/713, Date accessed: 2018-10-15.
[16] Grin - Meeting Notes: Governance, Sep 25 2018, https://www.grinforum.org/t/meetingnotesgovernancesep252018/874, Date accessed: 2018-10-15.
[17] Cuck(at)oo Cycle, https://github.com/tromp/cuckoo, Date accessed: 2018-10-15.
[18] 51% Attack, https://www.investopedia.com/terms/1/51attack.asp, Date accessed: 2018-10-11.
[19] What is the math behind elliptic curve cryptography?, Hans Knutson, https://hackernoon.com/whatisthemathbehindellipticcurvecryptographyf61b25253da3, Date accessed: 2018-10-14.
[20] Standards for Efficient Cryptography Group, http://www.secg.org/, Date accessed: 2018-10-11.
[21] Secp256k1, https://en.bitcoin.it/wiki/Secp256k1, Date accessed: 2018-10-15.
[22] Grin - Schnorr signatures in Grin & information, https://www.grinforum.org/t/schnorrsignaturesingrininformation/730, Date accessed: 2018-10-08.
[23] SafeCurves - CM field discriminants, http://safecurves.cr.yp.to/disc.html, Date accessed: 2018-10-15.
[24] Curve25519: New Diffie-Hellman Speed Records, Daniel J. Bernstein, https://cr.yp.to/ecdh/curve2551920060209.pdf, Date accessed: 2018-10-15.
[25] SafeCurves - choosing safe curves for elliptic-curve cryptography, http://safecurves.cr.yp.to/, Date accessed: 2018-10-10.
[26] RocksDB, https://rocksdb.org/, Date accessed: 2018-10-10.
[27] LevelDB, http://leveldb.org/, Date accessed: 2018-10-15.
[28] HyperLevelDB, http://hyperdex.org/, Date accessed: 2018-10-15.
[29] LMDB, https://github.com/LMDB, Date accessed: 2018-10-29.
[30] Benchmarking LevelDB vs. RocksDB vs. HyperLevelDB vs. LMDB Performance for InfluxDB, Paul Dix, https://www.influxdata.com/blog/benchmarkingleveldbvsrocksdbvshyperleveldbvslmdbperformanceforinfluxdb/, Date accessed: 2018-10-15.
[31] Lmdbjava - benchmarks, Ben Alex, https://github.com/lmdbjava/benchmarks/blob/master/results/20160630/README.md, Date accessed: 2018-10-14.
[32] Lies, Damn Lies, Statistics, and Benchmarks, Howard Chu, https://www.linkedin.com/pulse/liesdamnstatisticsbenchmarkshowardchu, Date accessed: 2018-10-29.
[33] HyperDex Benchmark, Symas Corp, http://www.lmdb.tech/bench/hyperdex/, Date accessed: 2018-10-29.
[34] Grin - Basic Wallet, https://github.com/mimblewimble/grin/blob/master/doc/wallet/usage.md, Date accessed: 2018-10-15.
[35] Progress update May - Sep 2018, Yeastplume, https://www.grinforum.org/t/yeastplumeprogressupdatethreadmaysept2018/361/12, Date accessed: 2018-10-28.
Contributors
 https://github.com/neonknight64
 https://github.com/hansieodendaal
 https://github.com/SWvheerden
 https://github.com/philiprza
 https://github.com/kim0
Atomic swaps
What are Atomic Swaps?
Atomic swaps, or cross-chain atomic swaps [1], are in a nutshell decentralized exchanges, but only for cryptocurrencies. They allow multiple parties to exchange two different cryptocurrencies in a trustless environment. If one party defaults on or fails the transaction, neither party can "run off" with anyone's money. For this to work, two technologies are required: a payment channel and Hashed Timelock Contracts. An implementation of a payment channel is the Lightning Network.
Hashed Timelock Contracts
Hashed Timelock Contracts (HTLCs) [2] are one of the most important technologies required for atomic swaps. An HTLC is a payment class that uses hashlocks and timelocks to require certain public knowledge before a payment is made, otherwise the payment is reversed. HTLCs are also crucial in the Lightning Network [3].
Here is a quick example of how a HTLC works:
In this example, Alex wants to pay Carla, but he does not have an open payment channel to her. He does, however, have an open channel to Bart, who has an open channel to Carla.
 Carla generates a random number and gives the hash of the number to Alex.
 Alex pays Bart, but adds the condition that if Bart wants to claim the payment, he has to provide the random number that generated the hash Carla gave to Alex.
 Bart pays Carla, but adds the same condition to the payment.
 Carla claims the payment by providing the random number, and thus exposing the random number to Bart.
 Bart uses the random number to claim the payment from Alex.
If the payment to Carla does not go through, the timelock in the contract will reverse all the transactions.
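The hashlock-plus-timelock logic above can be sketched in a few lines of Python. This is purely illustrative: real HTLCs are enforced by on-chain script or contract code, not by application logic like this.

```python
import hashlib
import secrets
import time

# Carla generates a secret and shares only its hash (the hashlock).
preimage = secrets.token_bytes(32)
hashlock = hashlib.sha256(preimage).digest()

def try_claim(candidate: bytes, lock: bytes, deadline: float) -> str:
    """Claim succeeds only with the correct preimage before the timelock
    expires; after the deadline, the payment is refunded to the sender."""
    if time.time() > deadline:
        return "refunded"
    if hashlib.sha256(candidate).digest() == lock:
        return "claimed"
    return "pending"

deadline = time.time() + 3600  # one-hour timelock
assert try_claim(preimage, hashlock, deadline) == "claimed"
assert try_claim(b"wrong secret", hashlock, deadline) == "pending"
assert try_claim(preimage, hashlock, time.time() - 1) == "refunded"
```

Note how revealing the preimage to claim one hop (Carla from Bart) automatically gives the next hop (Bart) what he needs to claim from Alex.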
Atomic vs Etomic
For an atomic swap transaction to happen, both cryptocurrencies must use the same hashing function, as this is crucial for the HTLC to function. Etomic swaps were created in an attempt to make atomic swaps happen between Bitcoin-based and Ethereum-based tokens.
Examples of current atomic swaps and implementations
#1 Manual method
An article posted on Hackernoon [3] shows the exact steps required to perform an atomic swap using the command-line interface (CLI).
The requirements for this method can be listed as follows:
 Full nodes run by both parties.
 Atomic swap package [4].
 Use of supported coins (UTXO-based protocol coins, e.g. Bitcoin, Litecoin, Viacoin).
 Power users.
#2 Atomic Wallet
Atomic Wallet [5] is an atomic swap exchange. It allows two parties to trade with it as a third party. The process is as follows:
 Party A selects an order from the BitTorrent order book.
 Party A enters an amount of coins to swap or coins to receive.
 Party A confirms the swap.
 Party B receives a notification.
 Party B confirms the swap.
 Both parties' Atomic Wallets check the contracts.
 Both parties receive their coins.
#3 BarterDEX
BarterDEX is a decentralized exchange created by Komodo [6], which works with Electrum servers or native nodes. At its core, BarterDEX is more like an auction system than a true decentralized exchange. It also uses a security deposit, in the form of Zcredits, to perform swaps without waiting for confirmations.
BarterDEX also supports etomic swaps. These work by keeping the payments locked in an etomic blockchain, which acts as a third party. Although swaps have been done, this is stated as not yet being production ready [7]. Currently (July 2018), it is only possible to use BarterDEX from the CLI [8]. BarterDEX charges a 0.1287% fee for a swap [9].
References
[1] Sudhir Khatwani (2018) What Is Atomic Swap and Why It Matters?, Coinsutra. Available at: https://coinsutra.com/atomicswap/ (Accessed: 12 July 2018).
[2] Vohra, A. (2016) What Are Hashed Timelock Contracts (HTLCs)? Application In Lightning Network & Payment Channels, Hackernoon. Available at: https://hackernoon.com/whatarehashedtimelockcontractshtlcsapplicationinlightningnetworkpaymentchannels14437eeb9345 (Accessed: 12 July 2018).
[3] Poon, J. and Dryja, T. (2016) The Bitcoin Lightning Network: Scalable OffChain Instant Payments v0.5.9.2. Available at: https://lightning.network/lightningnetworkpaper.pdf.
[3] Hotshot (2018) So how do I really do an atomic swap, Hackernoon. Available at: https://hackernoon.com/sohowdoireallydoanatomicswapf797852c7639 (Accessed: 13 July 2018).
[4] Open source (ISC) (2018) ‘viacoin/atomicswap’. GitHub. Available at: https://github.com/viacoin/atomicswap.
[5] Atomic (2018) Atomic wallet. Available at: https://atomicwallet.io/ (Accessed: 13 July 2018).
[6] Komodo (2018) BarterDEX. Available at: https://komodoplatform.com/decentralizedexchange/ (Accessed: 13 July 2018).
[7] Artemii235 (2018) ‘etomicswap’. GitHub. Available at: https://github.com/artemii235/etomicswap.
[8] Komodo (2018) ‘Barterdex’. GitHub. Available at: https://github.com/KomodoPlatform/KomodoPlatform/wiki/InstallingandUsingKomodoPlatform(barterDEX).
[9] Komodo and Hossain, S. (2017) barterDEX Whitepaper v2. Available at: https://github.com/KomodoPlatform/KomodoPlatform/wiki/barterDEXWhitepaperv2.
Contributors
 https://github.com/SWvheerden
Lightning Network for Dummies
Having trouble viewing this presentation?
View it in a separate window.
Introduction to SPV, Merkle Trees and Bloom Filters
Having trouble viewing this presentation?
View it in a separate window.
The RGB Protocol  An Introduction
Having trouble viewing this presentation?
View it in a separate window.
Distributed Hash Tables
Introduction
A hash table is a data structure that maps keys to values. A hashing function is used to compute keys that are inserted into a table from which values can later be retrieved. As the name suggests, a distributed hash table (DHT) is a hash table that is distributed across many linked nodes, which cooperate to form a single cohesive hash table service. Nodes are linked in what is called an overlay network. An overlay network is simply a communication network built on top of another network. The Internet is an example, as it began as an overlay network on the public switched telephone network.
$\langle key, value \rangle$ pairs are stored on a subset of the network, usually by some notion of "closeness" to that key.
A DHT network design allows the network to tolerate nodes coming and going without failure, and allows the network size to increase indefinitely.
Image from XKCD #350: Network (used under license).
The need for DHTs arose from early file-sharing networks such as Gnutella, Napster, FreeNet and BitTorrent, which were able to make use of distributed resources across the Internet to provide a single cohesive service [1].
These systems employed different methods of locating resources on the network:
 Gnutella searches were inefficient, because queries would result in messages flooding the network.
 Napster used a central index server, which was a single point of failure and left it vulnerable to attacks.
 FreeNet used key-based routing. However, it was less structured than a DHT and did not guarantee that data could be found.
In 2001, four DHT projects were introduced: CAN, Chord, Pastry and Tapestry. They aimed to have a lookup efficiency ($O(log(n))$) similar to that of a centralized index, while having the benefits of a decentralized network.
DHTs use varying techniques to achieve this, depending on the given algorithm. However, they have a number of aspects in common:
 Each participant has some unique network identifier.
 They perform peer lookup, data storage and retrieval services.
 There is some implicit or explicit joining procedure.
 Communication need only occur between neighbors that are decided on by some algorithm.
In this report, we'll go over some of the aspects common to all DHTs and dive deeper into a popular DHT implementation called Kademlia.
Characterization of DHT networks
Peer Discovery
Peer discovery is the process of locating nodes in a distributed network for data communication. This is facilitated by every node maintaining a list of peers and sharing that list with other nodes on the network. A new participant seeks out peers on the network by first contacting a set of predefined bootstrap nodes. These nodes are normal network participants who happen to be part of some dynamic or static list. It is the job of every node on the network to facilitate peer discovery.
As peers come and go, these lists are repeatedly updated to ensure network integrity.
Scalability and Fault Tolerance
A DHT network efficiently distributes responsibility for the replicated storage and retrieval of routing information and data. This distribution allows nodes to join and leave with minimal or no disruption. The network can have a massive number of nodes (in the case of BitTorrent, millions of nodes) without each node having to know about every other participant in the network.
In this way, DHTs are inherently more resilient against hostile attackers than a typical centralized system [1].
Distributed Data Storage
Arbitrary data may be stored and replicated by a subset of nodes for later retrieval. Data is hashed using a consistent hashing function (such as SHA-256) to produce a key for the data. That data is propagated and eventually stored on the node or nodes whose node IDs are "closer" to the key for that data, according to some distance function.
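As a small sketch (the node IDs here are made up, and the choice of SHA-256 with XOR closeness is one common Kademlia-style combination, not the only option):

```python
import hashlib

def content_key(data: bytes) -> int:
    """Derive a 256-bit DHT key by hashing the content (consistent hashing)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Hypothetical node IDs; the nodes "closest" to the key under the
# chosen distance function are selected to store the value.
node_ids = [0x1F, 0x2A, 0x77, 0xC3]
key = content_key(b"some file chunk")
closest = min(node_ids, key=lambda n: n ^ key)  # XOR distance
assert closest in node_ids
```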
Partitioned data storage has limited usefulness to a typical blockchain, as each full node is required to keep a copy of all transactions and blocks for verification.
DHT Algorithms
Overview
The following table is replicated and simplified from [8]. Degree is the number of neighbors with which a node must maintain contact.
| Parameter | CAN | Chord | Kademlia | Koorde | Pastry | Tapestry | Viceroy |
|-----------|-----|-------|----------|--------|--------|----------|---------|
| Foundation | $d$-dimensional torus | Circular space | XOR metric | de Bruijn graph | Plaxton-style mesh | Plaxton-style mesh | Butterfly network |
| Routing function | Map key-value pairs to coordinate space | Matching key to node ID | Matching key to node ID | Matching key to node ID | Matching key and prefix in node ID | Suffix matching | Levels of tree, vicinity search |
| Routing performance (network size $n$) | $O(dn^{(2/d)})$ | $O(log(n))$ | $O(log(n)) + c$ ($c$ is small) | Between $O(log(log(n)))$ and $O(log(n))$ | $O(log(n))$ | $O(log(n))$ | $O(log(n))$ |
| Degree | $2d$ | $O(log(n))$ | $O(log(n))$ | Between constant and $log(n)$ | $O(2log(n))$ | $O(log(n))$ | Constant |
| Join/Leaves | $2d$ | $log(n)^2$ | $O(log(n)) + c$ ($c$ is small) | $O(log(n))$ | $O(log(n))$ | $O(log(n))$ | $O(log(n))$ |
| Implementations | | OpenChord, OverSim | Ethereum, Mainline DHT (BitTorrent), I2P, Kad Network | | FreePastry | OceanStore, Mnemosyne | |
The popularity of Kademlia over other DHTs is likely due to its relative simplicity and performance. The rest of this section dives deeper into Kademlia.
Kademlia
Kademlia is designed to be an efficient means of storing and finding content in a distributed peer-to-peer (P2P) network. It has a number of core features that are not simultaneously offered by other DHTs [2], such as:
 The number of messages necessary for nodes to learn about each other is minimized.
 Nodes have enough information to route traffic through low-latency paths.
 Parallel and asynchronous queries are made to avoid timeout delays from failed nodes.
 The node existence algorithm resists certain basic distributed denial-of-service (DDoS) attacks.
Node ID
A node selects an $n$-bit ID, which is opaque to other nodes on the network. The network design relies on node IDs being uniformly distributed by some random procedure. A node's position is determined by the shortest unique prefix of its ID, which forms a tree structure with node IDs as leaves [2]. This ID should be reused when the node rejoins the network. The following figure shows a binary tree structure in a three-bit key space:
Bootstrapping a Node
A node wishing to join the network for the first time has no contacts in its $k$-buckets. In order to establish itself on the network, it must contact one or more bootstrap nodes. These nodes are not special in any way, other than being listed in some predefined list. They simply serve as a first point of contact through which the joining node becomes known to more of the network and finds its closest peers.
There are a number of ways that bootstrap nodes can be obtained, including adding addresses to a configuration and using DNS seeds.
The joining process is described as follows [2]:
 A joining node generates a random ID.
 It contacts a few nodes it knows about.
 It sends a `FIND_NODE` lookup request with its newly generated node ID.
 The contacted nodes return the closest nodes they know about. The newly discovered nodes are added to the joining node's routing table.
 The joining node then contacts some of the new nodes it knows about. The process continues iteratively until the joining node is unable to locate any closer nodes.
This self-lookup has two effects: it allows the node to learn about nodes closer to itself, and it populates other nodes' routing tables with the node's ID [1].
XOR Metric
The Kademlia paper, published in 2002 [2], offered the novel idea of using the XOR operator ($\oplus$) to determine the distance, and therefore the arrangement, of peers within the network. It is defined as:
$$ distance(a, b) = a \oplus b$$
This works because XOR exhibits the same mathematical properties as any distance function.
Specifically [1]:
 $a \oplus a = 0$
 $a \oplus b > 0$ for $a \neq b$
 $a \oplus b = b \oplus a$
 Triangle property: $a \oplus b + b \oplus c \geq a \oplus c$
The XOR metric implicitly captures a notion of distance in the preceding tree structure [2].
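These properties are cheap to verify exhaustively over a small ID space; a quick Python check:

```python
def distance(a: int, b: int) -> int:
    """Kademlia's XOR distance between two node IDs."""
    return a ^ b

ids = [0b000, 0b011, 0b101, 0b110]
for a in ids:
    assert distance(a, a) == 0                      # identity
    for b in ids:
        assert distance(a, b) == distance(b, a)     # symmetry
        if a != b:
            assert distance(a, b) > 0
        for c in ids:
            # triangle property
            assert distance(a, b) + distance(b, c) >= distance(a, c)
```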
Protocol
Kademlia is a relatively simple protocol consisting of only four remote procedure call (RPC) messages that facilitate two independent concerns: peer discovery and data storage/retrieval.
The following RPC messages are part of the Kademlia protocol:
 Peer discovery
   `PING`/`PONG` - used to determine liveness of a peer.
   `FIND_NODE` - returns at most $k$ nodes, which are closer to a given query value.
 Data storage and retrieval
   `STORE` - a request to store a $\langle key, value \rangle$ pair.
   `FIND_VALUE` - behaves the same as `FIND_NODE` by returning the $k$ closest nodes. If a node has the requested $\langle key, value \rangle$ pair, it instead returns the stored value.
Notably, there is no `JOIN` message, because there is no explicit join in Kademlia. Each peer has a chance of being added to the routing table of another node whenever an RPC message is sent or received between them [2]. In this way, the node becomes known to the network.
Lookup Procedure
The lookup procedure allows nodes to locate other nodes, given a node ID. The procedure begins with the initiator concurrently querying the $\alpha$ (a concurrency parameter) closest nodes to the target node ID that it knows about. Each queried node returns the $k$ closest nodes it knows about. The querying node then proceeds in rounds, querying closer and closer nodes until it has found the target node. In the process, both the querying node and the intermediate nodes learn about each other.
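The iterative lookup can be simulated with a toy in-memory network. This sketch ignores the concurrency parameter $\alpha$, timeouts and real RPCs; each dictionary lookup stands in for a `FIND_NODE` round trip:

```python
def lookup(network: dict, start: int, target: int, k: int = 2) -> list:
    """Iteratively query ever-closer nodes for the k closest IDs to `target`.

    `network` maps each node ID to the set of peer IDs it knows about.
    """
    def dist(n: int) -> int:
        return n ^ target  # Kademlia's XOR distance

    queried = set()
    shortlist = sorted(network[start], key=dist)
    while True:
        candidates = [n for n in shortlist if n not in queried]
        if not candidates:
            return sorted(shortlist, key=dist)[:k]
        for node in candidates:
            queried.add(node)
            # The queried node replies with the k closest peers it knows about.
            for peer in sorted(network[node], key=dist)[:k]:
                if peer not in shortlist:
                    shortlist.append(peer)
        # Keep only the closest entries between rounds.
        shortlist = sorted(shortlist, key=dist)[:k * 2]

# A tiny hand-built topology: node 0b0000 locates node 0b1111.
network = {
    0b0000: {0b0001, 0b1000},
    0b0001: {0b0000, 0b0011},
    0b0011: {0b0001, 0b0111},
    0b0111: {0b0011, 0b1111},
    0b1000: {0b0000, 0b1111},
    0b1111: {0b0111, 0b1000},
}
closest = lookup(network, 0b0000, 0b1111)
assert closest[0] == 0b1111
```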
Data Storage and Retrieval Procedure
The storage and retrieval procedure ensures that $\langle key, value \rangle$ pairs are reliably stored and able to be
retrieved by participants in the network.
The storage procedure uses the lookup procedure to locate the closest nodes to the key, at which point it issues a `STORE` RPC message to those nodes. Each node republishes the $\langle key, value \rangle$ pairs to increase the availability of the data. Depending on the implementation, the data may eventually expire (say, after 24 hours). Therefore, the original publisher may be required to republish the data before that period expires.
The retrieval procedure follows the same logic as storage, except that a `FIND_VALUE` RPC is issued and the data is received.
Routing Table
Each node organizes its contacts into a list called a routing table. A routing table is a binary tree where the leaves are buckets that contain a maximum of $k$ nodes, aptly named $k$-buckets. These hold nodes with some common node ID prefix, which is captured by the XOR metric.
For instance, given node $A(1100)$ with peers $B(1110)$, $C(1101)$, $D(0111)$ and $E(0101)$:
The distances from node $A$ would be:

- $A \oplus B = 0010 \ (2)$
- $A \oplus C = 0001 \ (1)$
- $A \oplus D = 1011 \ (11)$
- $A \oplus E = 1001 \ (9)$
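Since the XOR metric is simply the bitwise XOR of the node IDs interpreted as integers, these distances can be verified with a few lines of Python:

```python
# Node IDs from the example above, written as 4-bit integers.
A, B, C, D, E = 0b1100, 0b1110, 0b1101, 0b0111, 0b0101

assert A ^ B == 0b0010  # distance 2
assert A ^ C == 0b0001  # distance 1
assert A ^ D == 0b1011  # distance 11
assert A ^ E == 0b1001  # distance 9
```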
$A$, $B$ and $C$ share the same prefix up to the first two most significant bits (MSBs). However, $A$ and $D$ share no prefix bits and are therefore further apart. In this example, $B$ and $C$ would be in the same bucket of $A$'s routing table, and $D$ and $E$ in their own bucket.
Initially, a node's routing table is not populated with $k$-buckets, but may contain a single node in a single $k$-bucket. As more nodes become known, they are added to the $k$-bucket until it is full. At this point, the node splits the bucket in two: one for nodes that share the same prefix as itself and one for all the others.
This guarantees that for bucket $j$, where $0 \leq j < k$, there is at least one node $N$ in node $A$'s routing table for which
$$ 2^j \leq distance(A, N) < 2^{j+1} $$
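In other words, the bucket index for a peer is the position of the most significant differing bit, i.e. one less than the bit length of the XOR distance. A minimal sketch (the function name `bucket_index` is ours):

```python
def bucket_index(a: int, b: int) -> int:
    """Return the index j of the bucket for peer b in node a's routing
    table, i.e. the j satisfying 2^j <= (a XOR b) < 2^(j+1)."""
    d = a ^ b
    if d == 0:
        raise ValueError("a node does not appear in its own routing table")
    return d.bit_length() - 1

# Using the earlier example, node E (0101) lands in bucket 3 of node
# A (1100), since 2^3 <= 9 < 2^4.
```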
$k$-bucket Ordering
Peers within $k$-buckets are sorted from least to most recently seen. Once a node receives a request or reply from a peer, it checks whether the peer is contained in the appropriate $k$-bucket. Depending on whether or not the peer already exists, the entry is either moved or appended to the tail of the list (most recently seen). If a particular bucket is already at size $k$, the node tries to `PING` the first peer in the list (least recently seen). If that peer does not respond, it is evicted and the new peer is appended to the bucket; otherwise, the new peer is discarded. In this way, the algorithm is biased towards peers that are long-lived and highly available.
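The update rule above can be sketched with a double-ended queue. This is a simplified sketch; the `ping` callback stands in for a `PING` RPC and returns whether the peer responded:

```python
from collections import deque

def update_bucket(bucket: deque, peer, k: int, ping) -> None:
    """Apply the k-bucket update rule on seeing a message from `peer`.
    The bucket is ordered from least (head) to most (tail) recently seen."""
    if peer in bucket:
        bucket.remove(peer)
        bucket.append(peer)        # existing peer: move to the tail
    elif len(bucket) < k:
        bucket.append(peer)        # room available: append as most recent
    elif ping(bucket[0]):
        pass                       # oldest peer is alive: discard new peer
    else:
        bucket.popleft()           # oldest peer unresponsive: evict it
        bucket.append(peer)        # and append the new peer
```

The bias towards long-lived peers falls out of the last branch: a full bucket only changes when its least recently seen member fails to respond.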
Kademlia Attacks
Some notable attacks on the Kademlia scheme:
Node Insertion Attack
Since there is no verification of a node's ID, an attacker can select their ID to occupy a particular keyspace in the network. Once an attacker has inserted themselves in this way, they may censor or manipulate content in that keyspace, or eclipse nodes [9].
Eclipse Attack
An attacker takes advantage of the fact that, in practice, there are relatively few nodes in most parts of a 160-bit keyspace. An attacker injects themselves closer to the target than other peers and could eventually achieve a dominating position. This can be done cheaply if the network rules allow many peers to come from the same IP address.
DHT Vulnerabilities and Attacks
Eclipse Attack
An Eclipse attack allows adversarial nodes to isolate the victim from the rest of its peers and to filter its view of the rest of the network. If the attacker is able to occupy all peer connections, the victim is eclipsed.
The cost of executing an Eclipse attack is highly dependent on the architecture of the network and can range from a small number of machines (e.g. with hundreds of node instances on a single machine) to requiring a full-fledged botnet. Reference [6] shows that an Eclipse attack on Ethereum's Kademlia-based DHT can be executed using as few as two nodes.
Mitigations include:
- Identities must be obtained independently from some random oracle.
- Nodes maintain contact with nodes outside of their current network placement.
Sybil Attack
Sybil attacks are an attempt by colluding nodes to gain disproportionate control of a network, and they are often used as a vector for other attacks. Many, if not all, DHTs have been designed under the assumption that a low fraction of nodes are malicious. A Sybil attack attempts to break this assumption by increasing the number of malicious nodes.
Mitigations include:
- Associating a cost with adding new identifiers to the network.
- Reliably binding real-world identifiers (IP address, MAC address, etc.) to the node identifier, and rejecting a threshold of duplicates.
- Having a trusted central authority that issues identities.
- Using social information and trust relationships.
Adaptive Join-Leave Attack
An adversary wants to populate a particular keyspace interval $I$ with bad nodes in order to prevent a particular file from being shared. Suppose that we have a network in which node IDs are chosen completely at random through some random oracle. An adversary starts by executing joins and leaves until it has nodes in that keyspace interval. It then proceeds in rounds, keeping the nodes that are in $I$ and rejoining the nodes that are not, until control is gained over the interval.
It should be noted that if there is a large enough cost to rejoining the network, there is a disincentive for this attack. In the absence of this disincentive, the cuckoo rule is proposed as a defense.
Cuckoo Rule
Consider a network that is partitioned into groups or intervals, in which nodes are positioned uniformly at random. As described in Adaptive Join-Leave Attack, adversaries may proceed in rounds, continuously rejoining nodes from the least faulty group until control is gained over one or more groups.
The cuckoo rule is a join rule that moves (cuckoos) nodes in the same group as the joining node to random locations outside of the group. It is shown that this can prevent adaptive join-leave attacks with high probability, i.e. with probability $1 - 1/N$, where $N$ is the size of the network.
Given:

- $I$ - a keyspace group in $[0,1)$;
- $n$ - the number of honest nodes;
- $\epsilon n$ - the number of adversarial nodes, for constant $\epsilon < 1$;
- therefore, the network size $N$ is $n + \epsilon n$;
- a $k$-region is a region in $[0,1)$ of size $k/n$;
- $R_k(x)$ is the unique $k$-region containing $x$.

And with the following two conditions:

- Balancing Condition - the interval $I$ contains at least $O(\log(n))$ nodes.
- Majority Condition - honest nodes are in the majority in $I$.
The cuckoo rule states:
If a new node $v$ wants to join the system, pick a random $x \in [0, 1)$. Place $v$ into $x$ and move all nodes in $R_k(x)$ to points in $[0, 1)$ chosen uniformly and independently at random (without replacing any further nodes) [5].
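A toy sketch of the join rule in Python, under the simplifying assumption that the $k$-regions partition $[0,1)$ into aligned intervals of size $k/n$; the function and parameter names are ours:

```python
import random

def cuckoo_join(positions: dict, new_node, n: int, k: float, rng=random) -> None:
    """Place a joining node at a random x in [0, 1) and move ('cuckoo')
    every node currently inside the k-region containing x to fresh random
    points. `positions` maps node -> point in [0, 1); n is the number of
    honest nodes, so each k-region has size k / n."""
    x = rng.random()
    region_size = k / n
    region_start = (x // region_size) * region_size  # start of R_k(x)
    for node, pos in list(positions.items()):
        if region_start <= pos < region_start + region_size:
            # Evicted nodes are re-placed without cuckooing further nodes.
            positions[node] = rng.random()
    positions[new_node] = x
```

The point of the rule is that an adversary cannot choose where a rejoining node lands, and each join also scatters whatever cluster happened to occupy the target region.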
It is concluded that for a constant fraction of adversarial peers $\epsilon < 1 - 1/k$, any constant $k > 1$ is sufficient to prevent adaptive join-leave attacks with high probability.
Sen and Freedman [7] modeled and analyzed the cuckoo rule and found that, in practice, it tolerates very few adversarial nodes.
Figure (cuckoo rule, from [7]): Minimum group size needed to tolerate different $\epsilon$ for 100,000 rounds; groups must be large (i.e. hundreds to thousands of nodes) to guarantee correctness.
Figure (cuckoo rule, from [7]): Number of rounds the system maintained correctness with an average group size of 64 nodes, as $\epsilon$ was varied; the simulation was halted after 100,000 rounds. Failure rates drop dramatically past a certain threshold for different $N$.
Notably, they show that the number of rounds to failure (i.e. until more than one-third of the nodes in a given group are adversarial) decreases dramatically with an increasing, but still small, global fraction of adversarial nodes. An amendment rule is proposed, which allows smaller group sizes while maintaining Byzantine correctness. Reference [7] warrants more investigation, but is out of the scope of this report.
Conclusion
DHTs are a proven solution to distributed storage and discovery. Kademlia, in particular, has been successfully implemented and sustained in filesharing and blockchain networks with participants in the millions. As with every network, it is not without its flaws, and careful network design is required to mitigate attacks.
Novel research exists that proposes schemes for protecting networks against control by adversaries. This research becomes especially important when control of a network may mean monetary losses, loss of privacy or denial of service.
References
[1] Wikipedia: "Distributed Hash Table" [online]. Available: https://en.wikipedia.org/wiki/Distributed_hash_table. Date accessed: 2019-03-08.
[2] "Kademlia: A Peer-to-Peer Information System" [online]. Available: https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf. Date accessed: 2019-03-08.
[3] Ethereum Wiki [online]. Available: https://github.com/ethereum/wiki/wiki/Kademlia-Peer-Selection#lookup. Date accessed: 2019-03-12.
[4] Wikipedia: "Tapestry (DHT)" [online]. Available: https://www.wikiwand.com/en/Tapestry_(DHT). Date accessed: 2019-03-12.
[5] "Towards a Scalable and Robust DHT" [online]. Available: http://www.cs.jhu.edu/~baruch/RESEARCH/Research_areas/Peer-to-Peer/2006_SPAA/virtual5.pdf. Date accessed: 2019-03-12.
[6] "Low-resource Eclipse Attacks on Ethereum's Peer-to-Peer Network" [online]. Available: https://www.cs.bu.edu/~goldbe/projects/eclipseEth.pdf. Date accessed: 2019-03-15.
[7] "Commensal Cuckoo: Secure Group Partitioning for Large-scale Services" [online]. Available: http://sns.cs.princeton.edu/docs/ccuckoo-ladis11.pdf. Date accessed: 2019-03-15.
[8] "Overlay and P2P Networks" [online]. Available: https://www.cs.helsinki.fi/webfm_send/1339. Date accessed: 2019-04-04.
[9] "Poisoning the Kad Network" [online]. Available: https://www.net.t-labs.tu-berlin.de/~stefan/icdcn10.pdf. Date accessed: 2019-04-04.
Contributors
- https://github.com/sdbondi
- https://github.com/hansieodendaal
- https://github.com/anselld
TLU Labs
This chapter contains various new features and demos that we're thinking of adding to make the TLU experience better. Since this is experimental, things might not work 100% here.
Mermaid Demo
TLU can now support mermaid diagrams! Flowcharts, sequence diagrams and more!
How to write mermaid diagrams:

- RTFM.
- Wrap your mermaid code in `<div>` tags and add the `class="mermaid"` attribute to the tag, so your code will look like:

      <div class="mermaid">
      graph LR
      ...
      </div>

- Note: You can't have blank lines in your diagrams, unfortunately, because the markdown renderer will interpret this as a new paragraph and break your diagram. However, you can sort of work around this by putting a `#` as a spacer (see the first example).
Sequence diagram example
<div class="mermaid">
sequenceDiagram
Alice->>Bob: Hello Bob, how are you?
Bob-->>John: How about you John?
Bob--x Alice: I am good thanks!
Bob-x John: I am good thanks!
#
Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.
Bob-->Alice: Checking with John...
Alice->John: Yes... John, how are you?
</div>
Flowchart example
<div class="mermaid">
graph LR
A[Hard edge] -->|Link text| B(Round edge)
B --> C{Decision}
C -->|One| D[Result one]
C -->|Two| E[Result two]
</div>
Gantt Chart example
<div class="mermaid">
gantt
dateFormat YYYY-MM-DD
title Adding GANTT diagram functionality to mermaid
section A section
Completed task :done, des1, 2014-01-06,2014-01-08
Active task :active, des2, 2014-01-09, 3d
Future task : des3, after des2, 5d
Future task2 : des4, after des3, 5d
section Critical tasks
Completed task in the critical line :crit, done, 2014-01-06,24h
Implement parser and jison :crit, done, after des1, 2d
Create tests for parser :crit, active, 3d
Future task in critical line :crit, 5d
Create tests for renderer :2d
Add to mermaid :1d
section Documentation
Describe gantt syntax :active, a1, after des1, 3d
Add gantt diagram to demo page :after a1, 20h
Add another diagram to demo page :doc1, after a1, 48h
section Last section
Describe gantt syntax :after doc1, 3d
Add gantt diagram to demo page :20h
Add another diagram to demo page :48h
</div>
Notes
<div class="note">
Give your reports a bit of pizazz with Notes!
</div>
Info boxes
<div class="note info">
Highlight interesting information and asides in an info box.
</div>
Warnings
<div class="note warning">
A highly visual and high contrast warning box will really get your message out there!
</div>
Style Guide
Purpose
The purpose of this Style Guide is to provide contributors to the Tari Labs University (TLU) reports with standards for content and layout. The intention is to improve communication and provide a highquality learning resource for users. Use of this Style Guide can assist in ensuring consistency in the content and layout of TLU reports.
TLU content is created in Markdown format and is rendered using mdBook.
Standards for Content
Spelling
As per the United States (US) spelling standard. The applicable dictionary is Merriam-Webster Online [1].
Quotation Marks
As per the American style. Use double quotation marks for a first quotation and single quotation marks for a quotation within a quotation.
Punctuation
As per the United Kingdom (UK) punctuation standard. Place commas and full stops outside the closing quotation marks as advised in [4].
Units of Measure
Use the internationally agreed ISO standards [3] for expressing units of measure.
Example
min = minute, s = second, h = hour, g = gram.
Date and Time
- Date format: yyyy-mm-dd (year-month-date).
- Date format when written in text: "The document was submitted for approval on 10 March 2019".
- Time format (international): 11:00; 15:00.
Abbreviations

- If it is necessary to use abbreviations in a report, write the abbreviation out in full at its first occurrence in the text, followed by the abbreviation in brackets. Thereafter, use the abbreviation only.

  Example

  Tari Labs University (TLU), graphical user interface (GUI).

- Abbreviations of units should be consistent and not changed in the plural.

  Example

  10h and not 10hrs; 5min and not 5mins.
Spacing
Note: Due to limitations in Markdown, we deviate from the ISO convention, which requires a space between numbers and units of measure, and also a space as a thousands separator.

- Use of a non-breaking space (`&nbsp;`) can improve readability in the rendered mdBook where required.
- Indicate clearly to which unit a number belongs:

  Incorrect
  11 x 11 x 11mm

  Correct
  11mm x 11mm x 11mm

- Use "to" rather than a dash to indicate a range of values:

  Incorrect
  1 - 10cm

  Correct
  1cm to 10cm

- Use a comma to indicate thousands.

  Example
  1,000; 20,000,000; 250,000.
Mathematical operators should usually be wrapped inside equation tags. In plain text, leave a space on either side of signs such as + (plus), - (minus), = (equal to), > (greater than) and < (less than).
Decimals and Numbers
- Use the decimal point and not the decimal comma.
- Write out numbers from one to nine in full in text; use Arabic numerals for 10 onwards.
List Types
TLU uses unordered lists (refer to the first example under List Punctuation) and ordered lists (refer to the second example under List Punctuation).
List Punctuation
Where a list is a continuation of the preceding text, which is followed by a colon, use a semicolon between each bullet point and end the list with a full stop.
Example
Their primary motivations for selecting a static emission rate are:
- there will be no upper limit on the amount of coins that can be created;
- the percentage of newly created coins compared to the total coins in circulation will tend toward zero;
- it will mitigate the effect of orphaned and lost coins;
- it will encourage spending rather than holding of coins.
Where a list contains complete sentences, each item in the list is followed by a full stop.
Example
According to the proposed solution, one of three conditions will be true to the SPV client when using erasure codes:
- The entire extended data is available, the erasure code is constructed correctly and the block is valid.
- The entire extended data is available, the erasure code is constructed correctly, but the block is invalid.
- The entire extended data is available, but the erasure code is constructed incorrectly.
Where a list is not a sentence and does not complete a preceding part of a sentence, use no punctuation.
Example
Refer to the list of contents at the start of this Style Guide.
Cross-referencing

- Insert cross-references between the referenced information in the text and the list of references.
- Text references appear in square brackets in the text and are listed under "References" at the end of each chapter.
- If a text reference appears at the end of a paragraph, it appears after the full stop at the end of the paragraph.
- Please be specific when referring to figures, tables and sections of text. For clarity, if using figure and table numbering, avoid referring to "below" or "above". Rather give a specific figure or table number. In the case of text references, include a link. For more information, please refer to the Markdown Links section in this Style Guide.
Case Formatting
Appendix A contains a list of lowercase words used in title case formatting in headings and captions (if used).
Terminology
With new concepts being formed daily and words changing over time, it is useful to establish terminology conventions. Different conventions (uppercase, lowercase, one word, two words, etc.) are used by different sources. Appendix B contains suggested terminology for use in TLU reports.
Standards for Layout
Proposed Layout
This section gives the proposed layout for TLU reports. The following headings are provided as a guide to heading levels and content:
- Title (as heading level 1)

  Contents List (as embedded links).

- Introduction/Purpose/Background/Overview (as heading level 2)

  This section explains the aim of the report and prepares the reader for the content.

- Other headings as appropriate (as heading level 2 and lower)

  Structure the body of your report by using headings and subheadings. Ordering these headings logically helps you to present your information effectively. Headings make it easier for readers to find specific information.

  - Numbered Lists: Use numbered lists when the order of the items in the list is important, such as in procedures.
  - Bulleted Lists: Use bulleted lists when the order of the items in the list is not important.

- Conclusions, Observations, Recommendations (as heading level 2)

  The conclusion complements the purpose of the report. It concisely summarizes the findings of the report and gives a future strategy, if required.

- References (as heading level 2)

  References acknowledge the work of others and help readers to find sources. Refer to Referencing of Source Material.

- Appendices (as heading level 2)

  Appendices contain supplementary information that supports the main report.

  - Appendix A: Name (as heading level 3)

    Rather than inserting an entire supporting document into an appendix, provide a text reference and list the reference in the references section.

  - Appendix B: Name (as heading level 3)

    If figure and table numbers are used in the report, the figure and table numbering in the appendices follows on from the figure and table numbers used in the report.

- Contributors (as heading level 2)

  Refer to List of Contributors.
Line Widths
Try to keep line widths to a maximum of 120 characters for ease of GitHub reviews. In Markdown, a single line break does not constitute the start of a new paragraph.
Example
This text, which is split over four lines:
Probatum fuit hujusmodi Testamentum apud London decimo Octavo die mensis Septembris Anno Domini Millesimo
Septingentesimo vicesimo tertio Coram venerabili viro Gulielmo Strahan. In cuius rei testimonium sigillum nostrum
presentibus apposuimus ad duos anni terminos videlicet ad festa Sancti Michaelis Archangeli et Annunciationis beate
Marie virginis.
will look like a single paragraph, as follows:
Probatum fuit hujusmodi Testamentum apud London decimo Octavo die mensis Septembris Anno Domini Millesimo Septingentesimo vicesimo tertio Coram venerabili viro Gulielmo Strahan. In cuius rei testimonium sigillum nostrum presentibus apposuimus ad duos anni terminos videlicet ad festa Sancti Michaelis Archangeli et Annunciationis beate Marie virginis.
Bulleted List of Contents
Every chapter in a TLU report should start with a bulleted list of all the headings in that chapter (with embedded links), for quick reference and consistency. This is optional for chapters that have five or fewer lower-level headings.
Example
Refer to the contents listed at the start of this Style Guide. The heading "Contents" is not inserted before this list.
Headings
- Do not include paragraph numbers in headings.
- For consistency, upper- and lowercase (title case) letters are used for headings at all levels.
Incorrect
## 2. OVERVIEW
Correct
## Overview
Also refer to Appendix A.
Figures and Tables
The use of captions, as well as figure and table numbering, is optional. If you choose to use numbering and captions, these guidelines will help to promote consistency in TLU layout:
- Number figures and tables in each section sequentially, with the table caption above the table and the figure caption below the figure.
- Type figure and table captions in upper- and lowercase (title case).
- Type "Figure x:" or "Table x:" before the caption, as applicable.
- Center figures and tables on the page.
- Place figures and tables as soon as possible after they are first referred to in the text. The text reference, if figure and table numbering is not used, would then be "the following figure..." or "the following table...". This helps to avoid confusion.
Equations
mdBook has optional support for math equations through MathJax. In addition to the delimiters `\[` and `\]`, TLU also supports the delimiters `$` and `$$`.
Examples

- Example of an inline equation: $ h \in \mathbb G $
- Example of a display equation:

$$ \mathbb s = \prod _{i=0}^n s(i) $$
Note: MathJax rendering in mdBook has some caveats to take note of:

- Subscripts

  When using two or more subscripts in inline or display equations, stipulated by a preceding underscore (`_`), the equation rendering does not work as expected. This is because `_` is a special character for Markdown, indicating text in italics. The way around this is to escape each underscore used in the equation as follows: (`\_`). An example of this is:

  Rendering correctly

  $ \mathbf a _{[:l]} = ( a_1 , ... , a_l ) \in \mathbb F ^ l \mspace{12mu} \text{and} \mspace{12mu} \mathbf a _{[l:]} = ( a_{l+1} , ... , a_n ) \in \mathbb F ^ {n-l} $

  as

  $ \mathbf a \_{[:l]} = ( a_1 , ... , a_l ) \in \mathbb F ^ l \mspace{12mu} \text{and} \mspace{12mu} \mathbf a \_{[l:]} = ( a_{l+1} , ... , a_n ) \in \mathbb F ^ {n-l} $

  Rendering incorrectly

  $ \mathbf a _{[:l]} = ( a_1 , ... , a_l ) \in \mathbb F ^ l \mspace{12mu} \text{and} \mspace{12mu} \mathbf a {[l:]} = ( a{l+1} , ... , a_n ) \in \mathbb F ^ {n-l} $

  as

  $ \mathbf a _{[:l]} = ( a_1 , ... , a_l ) \in \mathbb F ^ l \mspace{12mu} \text{and} \mspace{12mu} \mathbf a _{[l:]} = ( a_{l+1} , ... , a_n ) \in \mathbb F ^ {n-l} $

  Notice that this part of the (failed) formula, `_{[l:]} = ( a_`, is rendered in italics.

- Superscripts and subscripts order

  Sometimes, swapping the order in which an expression's superscript text and subscript text appear may fix rendering issues, for example:

  $ s_i = \prod ^{\log _2 (n)} _{j=1} x ^{b(i,j)} _j $

  vs.

  $ s_i = \prod _{j=1} ^{\log _2 (n)} x _j ^{b(i,j)} $
Referencing of Source Material
Referencing Standard
TLU uses the IEEE standard [2] as a guide for referencing publications.
List references in the following order, as applicable:

- Author(s) initials or first name and surname (note the punctuation in the following example).
- Title of the report, between double quotation marks. If it is an online report, state this in square brackets, as shown in the following example.
- Title of the journal, in italics (if applicable).
- Publication information (volume, number, etc.).
- Page range (if applicable).
- URL address, if an online publication. Provide this information as shown in the following example: "Available: ...".
- Date you accessed the article, if it is an online publication (yyyy-mm-dd), as shown in the following example.
Example
[1] M. Abe, M. Ohkubo and K. Suzuki, "1-out-of-n Signatures from a Variety of Keys" [online]. Available: https://www.iacr.org/cryptodb/archive/2002/ASIACRYPT/50/50.pdf. Date accessed: 2018-12-18.
Please note the use of punctuation and full stops in the example.
Markdown Links
There are two types of Markdown links: inline links and reference links.
The inline link under the Equations heading was created as follows:
- Insert identifying link text within a set of square brackets (refer to the following example).
- Create an inline link by placing a set of parentheses (round brackets) immediately after the closing square bracket of the link text (refer to the following example).
- Insert the relevant URL link inside the parentheses (round brackets) (refer to the following example).
mdBook has optional support for math equations through MathJax.
Example
A reference link has two parts. The first part of a reference link has two sets of square brackets. Inside the inner (second) set of square brackets, insert a label to identify the link.

Example

Under the heading Spelling, the text reference is "The applicable dictionary is Merriam-Webster Online [1]". In the Markdown text, note the double square brackets and the label `1`. The rendered text shows [1].

The second part of a reference link is inserted under the heading References, and appears as follows:

[1] Merriam-Webster Online Dictionary [online]. Available: https://www.merriam-webster.com/. Date accessed: 2019-02-01.

The full online reference is inserted after `[[1]]`; and the pop-up text link (which can be seen by hovering your cursor over the text reference in Spelling) is inserted after `[1]:`.
For assistance with the layout of references, refer to Referencing Standard.
List of Contributors
The contributors are listed in a bulleted list via their GitHub account URLs. The author is listed first, followed by any reviewers or people who contributed via pull requests. Refer to Contributors for an example.
References
[1] Merriam-Webster Online Dictionary [online]. Available: https://www.merriam-webster.com/. Date accessed: 2019-02-01.
[2] Citing and Referencing: IEEE [online]. Available: https://guides.lib.monash.edu/citing-referencing/ieee. Date accessed: 2019-02-01.
[3] A. Thompson and B. N. Taylor, "Guide for the Use of the International System of Units (SI)" (1995) - NIST Special Publication 811, 2008 Edition [online]. Available: https://physics.nist.gov/cuu/pdf/sp811.pdf. Date accessed: 2019-02-04.
[4] The Oxford Guide to Style [online]. Available: http://www.englang.co.uk/ogs.htm. Date accessed: 2019-02-04.
Appendices
Appendix A: Lowercase Words used in Title Case Formatting
| Case Word | Case Word | Case Word | Case Word |
|-----------|-----------|-----------|-----------|
| a | each | less | therefore |
| about | either | lesser | these |
| above | equal | low | they |
| after | ever | made | this |
| against | every | make | those |
| ahead | exclude | means | through |
| am | excluding | more | throughout |
| an | follow | most | thus |
| and | following | neither | to |
| any | follows | next | top |
| are | for | no | towards |
| as | from | nor | under |
| at | further | not | up |
| be | give | of | upper |
| been | given | on | use |
| before | go | only | use |
| behind | good | or | used |
| below | greater | order | used |
| beside | had | our | using |
| besides | has | out | versus |
| best | have | outer | very |
| better | how | outside | via |
| between | however | over | were |
| bottom | i.e. | provide | what |
| but | in | regard | when |
| by | include | since | where |
| can | including | such | which |
| consist | inner | than | while |
| consistent | inside | that | who |
| consistently | instead | the | with |
| consists | is | their | within |
| does | it | them | without |
| down | its | then | worst |
| e.g. | least | there | |
Appendix B: Tari Labs University Terminology Conventions
With new concepts being formed daily and words changing over time, it is useful to establish terminology conventions. Different conventions (uppercase, lowercase, one word, two words, etc.) are used by different sources. This appendix contains suggested terminology for use in TLU reports.
- blockchain
- Mimblewimble