ZKM Prover: Cross-Table Lookups

In zero-knowledge proofs, the lookup operation verifies relationships between multiple tables. First, the data from the tables is aggregated to form a query condition. Then the lookup searches the target table for records that satisfy the condition. Finally, a zero-knowledge proof is generated to attest to the correctness of the query result without revealing any specific data.

1. Cross-Table Lookup

1.1 Aggregation of Multiple Tables

In zero-knowledge proofs, we may have multiple tables, each containing different data. There may be certain relationships between these tables that need to be aggregated for joint verification. The purpose of this process is to combine the relevant information from multiple tables into a single lookup operation, and then validate it.

For example, suppose we have two tables, T1 and T2, where T1 stores some data, and T2 stores query conditions or verification information related to this data. The goal of cross-table lookup is to find rows in T2 that match the conditions based on T1 data.
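To make this concrete, here is a minimal sketch of the lookup relation itself (all names and the boolean-filter representation are ours, not the prover's code): every filtered entry of the looking table T1 must appear in the looked table T2.

```rust
// Hypothetical illustration of the cross-table lookup relation: every
// filtered value of the looking table T1 must appear in the looked table T2.

/// Collect the values of rows whose filter bit is set.
fn filtered_entries(rows: &[(u64, bool)]) -> Vec<u64> {
    rows.iter().filter(|(_, f)| *f).map(|(v, _)| *v).collect()
}

/// Check that every filtered looking entry occurs in the looked table.
fn all_in_looked(looking: &[(u64, bool)], looked: &[u64]) -> bool {
    filtered_entries(looking).iter().all(|v| looked.contains(v))
}

fn main() {
    let looked = vec![3, 5, 8, 13];                        // T2: target entries
    let looking = vec![(5, true), (9, false), (13, true)]; // T1: (value, filter)
    assert!(all_in_looked(&looking, &looked));             // 9 is filtered out
    println!("lookup relation holds");
}
```

The real prover does not compare tables element by element; it proves this sub-multiset relation succinctly with the polynomial machinery described below.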

1.2 Lookup Operation for Searching

Once the aggregation of multiple tables is completed, a lookup must be performed on a “looked table.” This “looked table” stores the final validation results and can be considered the final target table. The lookup operation checks whether certain data exists in the looked table.

Specifically, through the lookup operation, the system will check if any values in the given looked table match the query conditions previously computed (i.e., the aggregation from multiple tables). If there are matching records, the proof process will succeed; otherwise, the proof fails.

In this process, a “filter” is typically used to determine which rows meet the conditions. This filter is expressed through a linear combination of columns, with the coefficients of the linear combination being related to the data in different columns of the tables.

1.3 Proof Using lookup in the Looked Table

After the lookup operation is completed, the final goal is to prove the correctness of the query, and this process is carried out using lookup in the looked table. During this process, the zero-knowledge proof generates a proof that verifies the correctness of the lookup results under certain conditions, without revealing any specific information about the contents of the tables.

In STARK, the lookup operation typically uses the Z(x) polynomial to construct the proof. The Z(x) polynomial is generated through multiple linear combinations and filter operations to represent the validity and constraints of the query result.

2. Cross-Table Lookup Process

2.1 Aggregation of Table Columns and Filters

First, we need to aggregate the columns and filters from multiple tables. The columns from each table can form new columns through linear combinations, while the filters determine which rows will be included in the computation.

  • Linear Combination:

\[\sum_{i=1}^{n} f_i \cdot a_i\]

Where \(f_i\) is the coefficient associated with each column.

  • Filter: The filter is a function \(f(x)\) that evaluates to 0 or 1 on each row, indicating whether that row is selected. If \(f(x) = 1\), the column values for that row are included in the calculation.
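The two bullets can be combined in a small sketch. The toy prime field and the Horner-style combination with a challenge beta and offset gamma are our assumptions; the real `challenge.combine` may differ in detail.

```rust
// Toy prime field (p = 1_000_003) -- an illustration, not the field used by the prover.
const P: u64 = 1_000_003;

/// Horner-style random linear combination: gamma + sum_i beta^i * v_i.
fn combine(vals: &[u64], beta: u64, gamma: u64) -> u64 {
    let mut acc: u64 = 0;
    for &v in vals.iter().rev() {
        acc = ((acc as u128 * beta as u128 + v as u128) % P as u128) as u64;
    }
    (acc + gamma) % P
}

/// Apply a binary filter: rows with filter 0 are excluded from the lookup.
fn filtered_combine(row: &[u64], filter: u64, beta: u64, gamma: u64) -> Option<u64> {
    match filter {
        1 => Some(combine(row, beta, gamma)),
        0 => None,
        _ => panic!("Non-binary filter?"),
    }
}

fn main() {
    // gamma + v0 + beta * v1 = 7 + 2 + 10 * 3 = 39
    assert_eq!(combine(&[2, 3], 10, 7), 39);
    assert_eq!(filtered_combine(&[2, 3], 0, 10, 7), None);
    println!("{:?}", filtered_combine(&[2, 3], 1, 10, 7)); // Some(39)
}
```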

Code Analysis:

pub(crate) fn get_helper_cols<F: Field>(
    trace: &[PolynomialValues<F>],  // Input trace data
    degree: usize,  // Degree of the trace
    columns_filters: &[ColumnFilter<F>],  // Tuple of columns and filters
    challenge: GrandProductChallenge<F>,  // Challenge to combine columns
    constraint_degree: usize,  // Constraint degree
) -> Vec<PolynomialValues<F>> {  // Return multiple helper columns
    let num_helper_columns = ceil_div_usize(columns_filters.len(), constraint_degree - 1);  // Calculate required helper columns
    let mut helper_columns = Vec::with_capacity(num_helper_columns);  // Initialize vector for helper columns

    // Process columns and filters in batches of constraint_degree - 1
    for mut cols_filts in &columns_filters.iter().chunks(constraint_degree - 1) {
        let (first_col, first_filter) = cols_filts.next().unwrap();  // Get the first column and filter in the current batch

        let mut filter_col = Vec::with_capacity(degree);  // Store filter values for each row
        // Process each row, combining filter and column
        let first_combined = (0..degree)
            .map(|d| {
                // Calculate filter value for each row
                let f = if let Some(filter) = first_filter {
                    let f = filter.eval_table(trace, d);  // Evaluate filter for the current row
                    filter_col.push(f);  // Store the filter value for the current row
                    f
                } else {
                    filter_col.push(F::ONE);  // Return 1 if no filter is present
                    F::ONE
                };
                // If filter value is 1, return the combined column value
                if f.is_one() {
                    let evals = first_col
                        .iter()
                        .map(|c| c.eval_table(trace, d))  // Evaluate column for the current row
                        .collect::<Vec<F>>();
                    challenge.combine(evals.iter())  // Combine columns using the challenge
                } else {
                    assert_eq!(f, F::ZERO, "Non-binary filter?");  // Panic if the filter is neither 0 nor 1
                    F::ONE  // Placeholder so batch inversion stays well-defined; zeroed out below
                }
            })
            .collect::<Vec<F>>();  // Collect results for the current batch of rows

        let mut acc = F::batch_multiplicative_inverse(&first_combined);  // Batch invert the combined results
        // If the filter value is 0, set the current row's value to 0
        for d in 0..degree {
            if filter_col[d].is_zero() {
                acc[d] = F::ZERO;
            }
        }

        // Process the remaining columns in the same way
        for (col, filt) in cols_filts {
            let mut filter_col = Vec::with_capacity(degree);  // Store filter values for each row
            let mut combined = (0..degree)
                .map(|d| {
                    let f = if let Some(filter) = filt {
                        let f = filter.eval_table(trace, d);  // Evaluate the current row's filter value
                        filter_col.push(f);  // Store filter value
                        f
                    } else {
                        filter_col.push(F::ONE);  // Return 1 if no filter is present
                        F::ONE
                    };
                    if f.is_one() {
                        let evals = col
                            .iter()
                            .map(|c| c.eval_table(trace, d))  // Evaluate the column for the current row
                            .collect::<Vec<F>>();
                        challenge.combine(evals.iter())  // Combine columns using the challenge
                    } else {
                        assert_eq!(f, F::ZERO, "Non-binary filter?");  // Panic if the filter is neither 0 nor 1
                        F::ONE  // Placeholder so batch inversion stays well-defined; zeroed out below
                    }
                })
                .collect::<Vec<F>>();  // Collect per-row results for this column

            combined = F::batch_multiplicative_inverse(&combined);  // Batch invert the combined columns

            // If the filter value is 0, set the current row's value to 0
            for d in 0..degree {
                if filter_col[d].is_zero() {
                    combined[d] = F::ZERO;
                }
            }

            batch_add_inplace(&mut acc, &combined);  // Add the current column's result to the accumulated result
        }

        helper_columns.push(acc.into());  // Store the computed result as a helper column
    }

    assert_eq!(helper_columns.len(), num_helper_columns);  // Ensure the correct number of helper columns

    helper_columns  // Return all helper columns
}

  • The get_helper_cols function combines each column with its filter via the random challenge, batch-inverts the combined values, and sums the inverses within each batch to produce the helper columns.

2.2 Construction of Helper Columns

Helper columns are constructed by combining each column with its filter, inverting the combined values, and summing the inverses. Each batch of columns produces one new helper column.

  • Construction of Helper Columns:

\[h_i(X) = \sum_{j=1}^{N} \frac{1}{x + a_j}\]
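As a toy numeric check of this sum of inverses (the small prime field p = 101 is purely our illustrative choice):

```rust
// Evaluate h(x) = sum_j 1/(x + a_j) in the toy field F_101.
const P: u64 = 101;

/// Modular exponentiation by squaring.
fn powm(mut b: u64, mut e: u64) -> u64 {
    let mut r = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { r = r * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    r
}

/// Multiplicative inverse via Fermat's little theorem: a^(p-2) mod p.
fn inv(a: u64) -> u64 { powm(a, P - 2) }

/// One helper-column value: the sum of inverses 1/(x + a_j).
fn helper_value(x: u64, a: &[u64]) -> u64 {
    a.iter().fold(0, |acc, &aj| (acc + inv((x + aj) % P)) % P)
}

fn main() {
    // 1/(3+2) + 1/(3+4) = 1/5 + 1/7 = 81 + 29 = 110 = 9 (mod 101)
    assert_eq!(helper_value(3, &[2, 4]), 9);
    println!("h(3) = {}", helper_value(3, &[2, 4]));
}
```

In the real prover the inverses are computed with `batch_multiplicative_inverse`, which amortizes one field inversion over the whole column.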

2.3 Computation of Z Polynomial

The \(Z(x)\) polynomial is computed through a recursive relation that ensures consistency of lookup entries across tables. Its computation starts from the initial condition \(Z(1) = 0\) and successively adds the row-wise values of the helper columns to the running sum.

\[Z(w) = Z(g \cdot w) + \sum_i h_i(w) - m(w) \cdot g(w)\]

Where \(h_i\) denotes the helper columns, and \(m\) gives the multiplicity (frequency) of each element in the looked table.

fn partial_sums<F: Field>(
    trace: &[PolynomialValues<F>],  // Input trace data
    columns_filters: &[ColumnFilter<F>],  // Tuple of columns and filters
    challenge: GrandProductChallenge<F>,  // Challenge to combine columns
    constraint_degree: usize,  // Constraint degree
) -> Vec<PolynomialValues<F>> {  // Return helper columns and Z polynomial
    let degree = trace[0].len();  // Get the degree of the trace
    let mut z = Vec::with_capacity(degree);  // Vector to store Z polynomial

    let mut helper_columns = get_helper_cols(trace, degree, columns_filters, challenge, constraint_degree);  // Get helper columns

    // Compute the last term of the Z polynomial
    let x = helper_columns
        .iter()
        .map(|col| col.values[degree - 1])  // Get the last term from each column
        .sum::<F>();
    z.push(x);  // Add the last term to the Z polynomial

    // Iteratively accumulate the remaining terms of the Z polynomial, from the last row backwards
    for i in (0..degree - 1).rev() {
        let x = helper_columns.iter().map(|col| col.values[i]).sum::<F>();
        z.push(z[z.len() - 1] + x);  // Add the current term to the Z polynomial
    }
    z.reverse();  // Reverse the Z polynomial so that Z(1) = 0

    // Process Z polynomial and return
    if columns_filters.len() > 1 {
        helper_columns.push(z.into());
    } else {
        helper_columns = vec![z.into()];
    }

    helper_columns  // Return computed helper columns
}

  • The partial_sums function first builds the helper columns and then accumulates their row-wise sums into the final Z polynomial.
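The running sum that partial_sums builds can be sketched over a toy modulus (our simplification): starting from the last row, each entry adds the row-wise sum of the helper columns, so \(Z(\text{row } i) = Z(\text{row } i+1) + \sum_j h_j(\text{row } i)\).

```rust
/// Build the running-sum column: z[i] = z[i + 1] + sum_j helpers[j][i],
/// accumulated backwards from the last row (a sketch of `partial_sums`).
fn running_z(helpers: &[Vec<u64>], p: u64) -> Vec<u64> {
    let degree = helpers[0].len();
    let mut z = Vec::with_capacity(degree);
    for i in (0..degree).rev() {
        let row_sum: u64 = helpers.iter().map(|h| h[i]).sum::<u64>() % p;
        let prev = *z.last().unwrap_or(&0); // 0 before the last row is pushed
        z.push((prev + row_sum) % p);
    }
    z.reverse(); // order the values from the first row to the last
    z
}

fn main() {
    let helpers = vec![vec![1, 2, 3], vec![4, 5, 6]];
    // row sums are [5, 7, 9]; suffix sums give Z = [21, 16, 9]
    let z = running_z(&helpers, 101);
    assert_eq!(z, vec![21, 16, 9]);
    // transition check: Z(row i) - Z(row i + 1) = row sum at row i
    assert_eq!(z[0] - z[1], 5);
    println!("{:?}", z);
}
```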

2.4 Logup Lookup Verification

The purpose of Logup is to verify whether the entries between multiple lookup tables are consistent, ensuring that cross-table lookup constraints are satisfied. In the Logup process, the verification is done by comparing the auxiliary columns with the \(Z(x)\) polynomial. Specifically, during the verification of each row’s lookup entries, the computed auxiliary columns and the \(Z(x)\) polynomial are utilized.

Formula:

  • Lookup entry verification formula:

\[\sum_{i=1}^{n} \frac{1}{x+a_i} = Z(x)\]

  • where \(Z(x)\) is the running-sum polynomial and the terms \(a_i\) come from the looked-up entries.
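A toy numeric check of why this identity captures multiplicities (field p = 101 and all names are our assumptions): grouping the left-hand side by distinct entries turns it into a multiplicity-weighted sum over the looked table.

```rust
// Check the logup identity in the toy field F_101:
//   sum_i 1/(alpha + a_i)  =  sum_j m_j / (alpha + t_j)
// where m_j counts how often the looked entry t_j is queried.
const P: u64 = 101;

/// Modular exponentiation by squaring.
fn powm(mut b: u64, mut e: u64) -> u64 {
    let mut r = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { r = r * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    r
}

/// Multiplicative inverse via Fermat's little theorem.
fn inv(a: u64) -> u64 { powm(a, P - 2) }

/// Sum of m / (alpha + t) over (t, m) pairs.
fn logup_sum(alpha: u64, terms: &[(u64, u64)]) -> u64 {
    terms.iter()
        .fold(0, |acc, &(t, m)| (acc + (m % P) * inv((alpha + t) % P) % P) % P)
}

fn main() {
    let alpha = 10;
    // looking side: 5 is queried twice, 13 once (each term has multiplicity 1)
    let lhs = logup_sum(alpha, &[(5, 1), (13, 1), (5, 1)]);
    // looked side: the same entries weighted by their multiplicities
    let rhs = logup_sum(alpha, &[(5, 2), (13, 1), (8, 0)]);
    assert_eq!(lhs, rhs);
    println!("logup identity holds: {} = {}", lhs, rhs);
}
```

Because alpha is a random challenge unknown when the tables were committed, equality of these two sums implies, with overwhelming probability, that the queried entries really are a sub-multiset of the looked table.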

Code Analysis:

pub(crate) fn eval_cross_table_lookup_checks<F, FE, P, S, const D: usize, const D2: usize>( 
    vars: &S::EvaluationFrame<FE, P, D2>,  // Evaluation frame, containing the current and next state values
    ctl_vars: &[CtlCheckVars<F, FE, P, D2>],  // Cross-table lookup variables
    consumer: &mut ConstraintConsumer<P>,  // Constraint consumer for consuming constraints
    constraint_degree: usize,  // Constraint degree
) where
    F: RichField + Extendable<D>,  // Supported field type
    FE: FieldExtension<D2, BaseField = F>,  // Field extension
    P: PackedField<Scalar = FE>,  // Packed field type
    S: Stark<F, D>,  // Stark configuration type
{
    let local_values = vars.get_local_values();  // Get the local values for the current state
    let next_values = vars.get_next_values();  // Get the values for the next state

    // Iterate through all the cross-table lookup variables (`ctl_vars`)
    for lookup_vars in ctl_vars {
        let CtlCheckVars {
            helper_columns,  // Auxiliary columns
            local_z,  // Current Z value
            next_z,  // Next Z value
            challenges,  // Challenge values
            columns,  // Set of columns
            filter,  // Set of filters
        } = lookup_vars;

        // Compute the linear combination of all columns for the current table
        let evals = columns
            .iter()
            .map(|col| {
                col.iter()
                    .map(|c| c.eval_with_next(local_values, next_values))  // Evaluate each column
                    .collect::<Vec<_>>()
            })
            .collect::<Vec<_>>();

        // Use auxiliary columns for constraint validation
        eval_helper_columns(
            filter,  // Filters
            &evals,  // Evaluated columns
            local_values,  // Current state values
            next_values,  // Next state values
            helper_columns,  // Auxiliary columns
            constraint_degree,  // Constraint degree
            challenges,  // Challenge values
            consumer,  // Constraint consumer
        );

        // If there are auxiliary columns, validate the Z polynomial values
        if !helper_columns.is_empty() {
            let h_sum = helper_columns.iter().fold(P::ZEROS, |acc, x| acc + *x);  // Sum of the auxiliary columns
            // Check `Z(g^(n-1))`
            consumer.constraint_last_row(*local_z - h_sum);
            // Check `Z(w) = Z(gw) + \sum h_i`
            consumer.constraint_transition(*local_z - *next_z - h_sum);
        } else if columns.len() > 1 {
            // No helper columns and exactly two column sets: combine them directly
            let combin0 = challenges.combine(&evals[0]);
            let combin1 = challenges.combine(&evals[1]);

            let f0 = if let Some(filter0) = &filter[0] {
                filter0.eval_filter(local_values, next_values)  // Compute the value for filter 0
            } else {
                P::ONES
            };
            let f1 = if let Some(filter1) = &filter[1] {
                filter1.eval_filter(local_values, next_values)  // Compute the value for filter 1
            } else {
                P::ONES
            };

            // Constraint for the last row
            consumer.constraint_last_row(combin0 * combin1 * *local_z - f0 * combin1 - f1 * combin0);
            // Constraint for the transition row
            consumer.constraint_transition(
                combin0 * combin1 * (*local_z - *next_z) - f0 * combin1 - f1 * combin0,
            );
        } else {
            // If there is only one column, directly verify that column
            let combin0 = challenges.combine(&evals[0]);
            let f0 = if let Some(filter0) = &filter[0] {
                filter0.eval_filter(local_values, next_values)  // Compute the value for the filter
            } else {
                P::ONES
            };

            // Constraint for the last row
            consumer.constraint_last_row(combin0 * *local_z - f0);
            // Constraint for the transition row
            consumer.constraint_transition(combin0 * (*local_z - *next_z) - f0);
        }
    }
}

  • eval_cross_table_lookup_checks: this function first evaluates each column set, then enforces the helper-column constraints, and finally checks the Z-polynomial transition and last-row constraints for each lookup via the constraint consumer (consumer).

Subscribe to the ZKM Blog to stay updated with the latest research by ZKM. If you have any questions about the article, you can contact the ZKM team on discord: discord.com/zkm
