Plonky2 is a zkSNARK protocol that combines a Plonk-style polynomial IOP (PIOP) with a FRI-based polynomial commitment scheme. Its primary goal is to improve the efficiency of traditional zkSNARKs in recursive proof scenarios while offering plausible post-quantum security. The core idea is to use FRI (Fast Reed-Solomon Interactive Oracle Proof of Proximity) for efficient polynomial verification and to use random sampling to strengthen the soundness and security of the protocol.
In this blog, we will analyze the main steps and construction logic of the Plonky2 protocol. We will also explore how the verifier's random challenges can be replaced by a collision-resistant hash function (the Fiat-Shamir transform) to turn the protocol into a non-interactive proof system.
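As a preview of that transformation, here is a minimal Fiat-Shamir-style sketch in Python. The hash (SHA-256), the transcript bytes, and the `challenge` helper are assumptions made for illustration; Plonky2's own challenger is built around an algebraic hash such as Poseidon, and a production implementation must also handle reduction bias and domain separation carefully.

```python
import hashlib

# Minimal Fiat-Shamir-style sketch (illustrative only): each verifier
# challenge is derived by hashing the transcript so far, so the prover
# cannot adapt its messages to a challenge it has already seen.
# SHA-256 and the transcript bytes are assumptions for this sketch.

GOLDILOCKS = 2**64 - 2**32 + 1  # the prime field Plonky2 works over

def challenge(transcript: bytes, modulus: int) -> int:
    """Derive one field-element challenge from the transcript bytes."""
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") % modulus

transcript = b"round 1: commitment to the witness polynomials"  # placeholder
beta = challenge(transcript, GOLDILOCKS)
gamma = challenge(transcript + beta.to_bytes(8, "big"), GOLDILOCKS)
print(beta, gamma)
```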
In the preprocessing phase, the Prover (P) and Verifier (V) jointly construct the circuit and compute the polynomial vectors \( \vec{C} \) and \( \vec{\sigma} \). These polynomials encode each gate's constants and the extended wire permutation, capturing the relationships and operations within the circuit.
Specifically, \( \vec{C} \) encodes the constraints on gate constants and intermediate variables (the Gate Constraints), while \( \vec{\sigma} \) encodes the extended permutation that expresses the Copy Constraints. Both play a key role in the permutation argument and the rest of the proof.
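As a deliberately simplified illustration of what the copy constraints express, the sketch below encodes equality classes of wire slots as cycles of a permutation `sigma` and checks that a witness respects them. The slot indices, equality classes, and helper names are hypothetical and are not Plonky2's actual wire layout.

```python
# Toy illustration of copy constraints as a permutation (not Plonky2's
# actual wire layout): wire slots that must carry equal values form one
# cycle of sigma, so "w_i == w_{sigma(i)} for all i" is exactly the
# statement that all copy constraints hold.

# Six wire slots; slots {0, 3} must be equal and slots {1, 4, 5} must be equal.
copy_classes = [{0, 3}, {1, 4, 5}, {2}]

# Build sigma by rotating each equality class one step.
sigma = {}
for cls in copy_classes:
    cycle = sorted(cls)
    for i, slot in enumerate(cycle):
        sigma[slot] = cycle[(i + 1) % len(cycle)]

def copy_constraints_hold(wire_values, sigma):
    """Every wire must equal the wire its sigma entry points to."""
    return all(wire_values[i] == wire_values[sigma[i]] for i in sigma)

print(copy_constraints_hold([7, 2, 9, 7, 2, 2], sigma))  # True
print(copy_constraints_hold([7, 2, 9, 8, 2, 2], sigma))  # False: slot 3 != slot 0
```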
P and V hash the evaluations of \( \vec{C} \) and \( \vec{\sigma} \) to build a Merkle tree. This tree not only compresses the committed data into a short digest but also provides an efficient authentication structure for the later polynomial consistency checks.
P stores the entire Merkle tree as the proving key, used for subsequent polynomial commitments and verification operations.
V stores \(Com(\vec{C}, \vec{\sigma})\) as the verification key, which is used to check the consistency between the polynomials and the circuit in later steps.
Optimization: in implementations, a Merkle cap (the layer of nodes at a fixed depth below the root) is typically used instead of the Merkle root, which shortens the Merkle authentication paths in the final proof.
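The following sketch shows the Merkle cap idea under simplifying assumptions: SHA-256 instead of Plonky2's algebraic hash, a power-of-two number of leaves, and byte-encoded toy evaluations. The cap is simply the layer of nodes `cap_height` levels below the root, so every authentication path gets `cap_height` hashes shorter while the commitment grows to `2^cap_height` digests.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_layers(leaves):
    """All layers of a Merkle tree over a power-of-two list of leaves."""
    layers = [[h(leaf) for leaf in leaves]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers

def merkle_cap(layers, cap_height):
    """The 2^cap_height nodes sitting cap_height levels below the root.

    Authentication paths stop at the cap, so each opening proof is
    cap_height hashes shorter, at the cost of a commitment of
    2^cap_height digests instead of a single root."""
    return layers[len(layers) - 1 - cap_height]

# Commit to 8 toy evaluations, encoded as bytes.
evals = [int.to_bytes(v, 8, "little") for v in [3, 1, 4, 1, 5, 9, 2, 6]]
layers = merkle_layers(evals)
root = merkle_cap(layers, 0)  # cap_height = 0 is the ordinary Merkle root
cap = merkle_cap(layers, 2)   # 4 nodes; every Merkle path is 2 hashes shorter
print(len(root), len(cap))    # 1 4
```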
The Prover generates the witness vector \( \vec{w} \) (the private inputs to the circuit) and interpolates it into polynomial form \(w(x)\). The Prover then commits to the polynomial as in the FRI protocol, by Merkle-hashing its evaluations to obtain \(Com(\vec{w})\), and sends the commitment to the Verifier.
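The sketch below illustrates this commitment step at toy scale: interpolate a witness column over a small subgroup \(H\) and evaluate its low-degree extension on a larger domain; those extended evaluations are what would then be Merkle-hashed. The prime 97, the subgroup sizes, and the naive Lagrange interpolation are assumptions for the sketch; Plonky2 works over the 64-bit Goldilocks field, uses FFTs, and evaluates on a coset of the larger domain.

```python
# Toy version of the commitment step (illustrative parameters only).
P = 97                 # 96 = 2^5 * 3, so subgroups of order 4 and 8 exist
g = 5                  # a generator of the multiplicative group of F_97
H_ORDER, LDE_ORDER = 4, 8

omega_H = pow(g, (P - 1) // H_ORDER, P)    # generator of the size-4 subgroup H
omega_D = pow(g, (P - 1) // LDE_ORDER, P)  # generator of the size-8 LDE domain

def interpolate(xs, ys):
    """Lagrange interpolation over F_P; returns a function x -> f(x)."""
    def f(x):
        total = 0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            num, den = 1, 1
            for j, xj in enumerate(xs):
                if j != i:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total
    return f

witness = [11, 22, 33, 44]                        # toy witness column
H = [pow(omega_H, i, P) for i in range(H_ORDER)]
w = interpolate(H, witness)                       # w(H[i]) == witness[i]
lde = [w(pow(omega_D, i, P)) for i in range(LDE_ORDER)]
print(lde)  # the 8 values that would be hashed into Com(w)
```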
The Verifier randomly selects \(\beta_{1}, \ldots, \beta_{r}, \gamma_{1}, \ldots, \gamma_{r}\) in the finite field \(F_p\). These random parameters are used in the subsequent permutation argument to ensure the soundness of the protocol.
P sends \( Com(\vec{Z}, \vec{\pi}) \), the commitment to the permutation polynomial \(Z(x)\) and the product polynomials \(\pi(x)\), which are used to enforce the copy constraints.
The permutation polynomial verifies the consistency between variables (i.e., that the Copy Constraints hold), while the product polynomials carry the intermediate partial products of that argument, which keeps polynomial degrees low and allows the computation to be parallelized for efficiency.
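Here is a hedged sketch of the grand-product idea behind \(Z(x)\), in the spirit of Plonk's permutation argument: with random \(\beta, \gamma\), identity labels, and permuted labels, the running product telescopes back to 1 exactly when the copy constraints hold (except with small probability over the challenges). The field, the labels, and the helper function are toy choices, not Plonky2's exact formulation.

```python
import random

# Toy grand-product check: the running product
#   prod_i (w_i + beta*id_i + gamma) / (w_i + beta*sigma_i + gamma)
# equals 1 exactly when the copy constraints hold (w.h.p. over beta, gamma).

P = 2**61 - 1  # toy prime field, not the Goldilocks field Plonky2 uses

def grand_product_is_one(w, identity, sigma, beta, gamma):
    z = 1
    for wi, idi, si in zip(w, identity, sigma):
        num = (wi + beta * idi + gamma) % P
        den = (wi + beta * si + gamma) % P
        z = z * num * pow(den, P - 2, P) % P
    return z == 1  # Z(x) "wraps around" to 1 after a full pass

# Slots 0 and 2 must be equal, so sigma swaps labels 0 and 2.
identity = [0, 1, 2, 3]
sigma = [2, 1, 0, 3]
beta, gamma = random.randrange(1, P), random.randrange(1, P)

print(grand_product_is_one([9, 5, 9, 7], identity, sigma, beta, gamma))  # True
print(grand_product_is_one([9, 5, 8, 7], identity, sigma, beta, gamma))  # False w.h.p.
```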
The Verifier samples another set of random parameters \( \alpha_{1}, \ldots, \alpha_{r} \), which are used to merge multiple constraints into one through a random linear combination. This simplifies the verification process that follows.
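The sketch below shows why a single random linear combination can stand in for many separate constraint checks: if any constraint evaluation is nonzero, the \(\alpha\)-combination is nonzero except with probability at most (number of constraints)/|F|. The field and helper are illustrative assumptions only.

```python
import random

# Folding several constraint evaluations into one random linear combination.
P = 2**61 - 1

def combine(constraint_values, alphas):
    """Return sum_i alpha_i * c_i over the toy field."""
    return sum(a * c for a, c in zip(alphas, constraint_values)) % P

alphas = [random.randrange(P) for _ in range(3)]
print(combine([0, 0, 0], alphas))  # all constraints satisfied -> 0
print(combine([0, 5, 0], alphas))  # one violated constraint -> nonzero w.h.p.
```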
The Prover computes the quotient polynomials \(q_i(x)\) and commits to them with a Merkle tree, as in the earlier FRI-style commitments, producing \(Com(\vec{q})\). Each quotient polynomial takes the following form:
\[q_i(x) = \frac{C_i(x)}{Z_H(x)},\]
where \(C_i(x)\) is the combined constraint polynomial obtained from the random linear combination above, and \(Z_{H}(x)=x^{n}-1\) is the vanishing polynomial over the domain \(H\). Because \(C_i(x)\) vanishes on \(H\), it is divisible by \(Z_H(x)\), so the quotient \(q_i(x)\) is itself a polynomial.
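To make the divisibility concrete, here is a toy example over \(F_{97}\) with \(|H| = 4\): a polynomial built to vanish on \(H\) is divided by \(Z_H(x) = x^4 - 1\), leaves a zero remainder, and the quotient is what would be committed. The field size and the school-book division routine are assumptions for the sketch, not Plonky2's actual arithmetic.

```python
# Toy divisibility check; polynomials are little-endian coefficient lists.
P = 97

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def polydivmod(num, den):
    """School-book polynomial division over F_P: returns (quotient, remainder)."""
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    inv_lead = pow(den[-1], P - 2, P)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] * inv_lead % P
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * d) % P
    return q, num[: len(den) - 1]

Z_H = [P - 1, 0, 0, 0, 1]            # Z_H(x) = x^4 - 1
C = polymul(Z_H, [5, 3])             # C(x) = (x^4 - 1)(3x + 5) vanishes on H

q, r = polydivmod(C, Z_H)
print(q)  # [5, 3]        -> q(x) = 3x + 5
print(r)  # [0, 0, 0, 0]  -> zero remainder, so Z_H(x) divides C(x)
```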
The Verifier selects a new random point \( \zeta \in F_p \) to verify the consistency between the quotient polynomial and the constraint polynomials at this point.
The Prover computes and sends the evaluations at \(\zeta\) of all committed polynomials, denoted \(\vec{P}(\zeta)\), where \(\vec{P}\) collects the following polynomials:
\[\vec{P}=(\vec{C},\vec{\sigma},\vec{w},\vec{Z},\vec{\pi},\vec{q}).\]
This step compresses verification into openings of all committed polynomials at a single point, avoiding the overhead of checking each polynomial separately.
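The reason a single evaluation point suffices is the standard opening argument: a claimed value \(v = p(\zeta)\) is correct exactly when \(p(x) - v\) is divisible by \(x - \zeta\), so it is enough to show that \((p(x) - v)/(x - \zeta)\) is a low-degree polynomial. The toy field and the synthetic-division helper below are illustrative assumptions.

```python
# Opening argument sketch over F_97 (illustrative helpers only).
P = 97

def evaluate(coeffs, x):
    """Horner evaluation of a little-endian coefficient list at x."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, zeta):
    """Synthetic division: p(x) = q(x) * (x - zeta) + remainder."""
    q = [0] * (len(coeffs) - 1)
    acc = 0
    for i in range(len(coeffs) - 1, 0, -1):
        acc = (coeffs[i] + acc * zeta) % P
        q[i - 1] = acc
    rem = (coeffs[0] + acc * zeta) % P
    return q, rem

p = [4, 0, 7, 2]                     # p(x) = 2x^3 + 7x^2 + 4
zeta = 13
v = evaluate(p, zeta)

_, rem = divide_by_linear([(p[0] - v) % P] + p[1:], zeta)
print(rem)                           # 0: honest opening, exact division

_, rem_bad = divide_by_linear([(p[0] - (v + 1)) % P] + p[1:], zeta)
print(rem_bad)                       # nonzero: a wrong claimed value leaves a remainder
```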
The Prover and Verifier then jointly run the FRI protocol in batch mode (the Batch FRI protocol). The goal is to verify that the following quotient polynomials are all of low degree, which establishes the claimed openings:
\[\vec{B}=\left(\frac{p_{t}(x)-p_{t}(\zeta)}{x-\zeta}\;\middle|\;p_{t}\in\vec{P}\right).\]
In this step, FRI combines the elements of \(\vec{B}\) with random coefficients and folds the result over multiple rounds, confirming that each is (close to) a low-degree polynomial and hence that the claimed evaluations at \(\zeta\) are consistent with the commitments.
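Here is a minimal sketch of that folding under toy assumptions: a Mersenne prime field, coefficient form rather than the evaluation/codeword form FRI actually commits to, and no Merkle queries. Each round writes \(f(x) = f_e(x^2) + x\,f_o(x^2)\) and folds with a verifier challenge into \(f'(y) = f_e(y) + r\,f_o(y)\), halving the degree; the verifier can cross-check a fold from the two symmetric evaluations \(f(x)\) and \(f(-x)\).

```python
import random

# One round of FRI-style folding on coefficient lists (toy sketch).
P = 2**61 - 1

def fold(coeffs, r):
    even, odd = coeffs[0::2], coeffs[1::2]
    odd += [0] * (len(even) - len(odd))
    return [(e + r * o) % P for e, o in zip(even, odd)]

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

f = [random.randrange(P) for _ in range(8)]          # degree < 8
r1, r2, r3 = (random.randrange(P) for _ in range(3))
g = fold(fold(fold(f, r1), r2), r3)
print(len(f), len(g))                                # 8 1: three folds reach a constant

# Per-query consistency check: f(x) and f(-x) determine f_e(x^2) and f_o(x^2).
x = random.randrange(1, P)
fe = (evaluate(f, x) + evaluate(f, P - x)) * pow(2, P - 2, P) % P
fo = (evaluate(f, x) - evaluate(f, P - x)) * pow(2 * x % P, P - 2, P) % P
assert (fe + r1 * fo) % P == evaluate(fold(f, r1), pow(x, 2, P))
print("fold consistency check passed")
```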
The Verifier receives the values \(\vec{P}(\zeta)\), recomputes the merged constraint polynomial at \(\zeta\), and checks that \( C_i(\zeta) = q_i(\zeta)\,Z_H(\zeta) \) holds.
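Continuing the toy numbers from the quotient sketch above, the final spot check is a single field identity involving only evaluations at the random point (here \(\zeta = 29\), an arbitrary choice). In the protocol, the left-hand side is recomputed from the opened wire and selector values rather than from the coefficients of \(C\).

```python
# Final spot check at zeta, reusing F_97, Z_H(x) = x^4 - 1, q(x) = 3x + 5.
P = 97
zeta = 29

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

C = [92, 94, 0, 0, 5, 3]                   # (x^4 - 1)(3x + 5) mod 97
q = [5, 3]                                 # 3x + 5
Z_H_at_zeta = (pow(zeta, 4, P) - 1) % P

lhs = evaluate(C, zeta)                    # constraint side
rhs = evaluate(q, zeta) * Z_H_at_zeta % P  # quotient side
print(lhs == rhs)                          # True: C(zeta) == q(zeta) * Z_H(zeta)
```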
Compared with traditional Plonk, Plonky2 commits to the permutation (product) polynomials and the quotient polynomial in lower-degree pieces, which reduces polynomial degrees and lets the prover parallelize the computation while still verifying the polynomial relationships within the circuit efficiently. Combined with batch verification through FRI, this significantly reduces the computational overhead on the verifier side. The advantages of the protocol are mainly reflected in the following aspects:
The introduction of FRI technology makes polynomial verification more efficient, especially in recursive zero-knowledge proof and post-quantum security scenarios.
Plonky2 strengthens the protocol by introducing multiple random parameters (\( \beta, \gamma, \alpha \), etc.) throughout, supporting both the zero-knowledge property and soundness.
Plonky2 is designed to perform exceptionally well in recursive scenarios, making it suitable for complex circuit proofs and multi-layer verification structures.
Subscribe to the ZKM Blog to stay updated with the latest research by ZKM. If you have any questions, you can contact the ZKM team on discord: discord.com/zkm