Amherst Labs is committed to developing a prover system that generates zero-knowledge proofs harnessing the scaling benefits of both SNARKs and STARKs. The Geregè Protocol is built on the Cairo architecture, which offers a high degree of customisation to enhance its performance and adaptability.

Constructing a zero-knowledge proving system requires two essential components:

Cairo does not have built-in support for state transitions or state management the way Solidity does. To maintain and update state within a Cairo program, it must be handled manually, using Cairo's memory model to store the protocol's state.
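The pattern can be illustrated in Python (a hypothetical sketch, not Cairo code; the `State` fields and the `apply_transfer` transition are invented for illustration): each transition maps an immutable state to a fresh one, mirroring Cairo's write-once memory model, in which state is threaded through the program rather than mutated in place.

```python
# Hypothetical sketch (Python, not Cairo): state is an immutable value,
# and each transition produces a new state instead of mutating the old
# one, mirroring Cairo's write-once memory model.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class State:
    balance: int
    nonce: int

def apply_transfer(state: State, amount: int) -> State:
    # Each transition returns a fresh State; the old one is untouched.
    assert amount <= state.balance, "insufficient balance"
    return replace(state, balance=state.balance - amount, nonce=state.nonce + 1)

def run_transitions(initial: State, amounts: list) -> State:
    # Fold a sequence of transitions over the initial state.
    state = initial
    for a in amounts:
        state = apply_transfer(state, a)
    return state
```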

The number of state transitions that can be processed in a single Cairo program depends on the complexity of the computation and the hardware resources available to the prover. Cairo is designed for high efficiency and scalability, allowing it to handle large-scale computations and generate proofs efficiently.

Recursion is vital to Geregè's scaling, as it reduces data size and verification times. Instead of KZG (Kate-Zaverucha-Goldberg) commitments as the commitment scheme, Geregè opted for FRI, which is typically associated with STARKs.

FRI (Fast Reed-Solomon Interactive Oracle Proof of Proximity) offers an interesting tradeoff between speed and proof size. With one choice of parameters it enables fast proof generation, albeit with large proofs, which can be expensive and cumbersome to post on Ethereum mainnet; with another, it generates smaller proofs, but at a slower pace.

Geregè leverages this tradeoff by using large proofs when speed is crucial and smaller proofs when size is essential. The protocol optimises the parameters of each step in the recursion process, taking full advantage of FRI's time-space tradeoff. This flexibility makes Geregè a versatile solution for a range of deployments, benefiting from the Cairo architecture's customisability.
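The time-space tradeoff can be made concrete with a back-of-the-envelope model (illustrative assumptions, not Geregè's actual parameters): a larger blowup factor raises prover work roughly linearly, but each query then contributes more soundness, so fewer queries, and hence a smaller proof, are needed for the same security level.

```python
import math

# Back-of-the-envelope model of FRI's time-space tradeoff.
# All constants and the first-order approximations are illustrative
# assumptions, not the actual parameters of any deployed prover.
def fri_estimate(blowup: int, security_bits: int = 100, domain_log2: int = 20):
    # Each query contributes roughly log2(blowup) bits of soundness.
    queries = math.ceil(security_bits / math.log2(blowup))
    # Proof size grows with the query count times the Merkle path length.
    proof_size_units = queries * domain_log2
    # Prover time grows roughly linearly with the blowup factor.
    prover_time_units = blowup * domain_log2
    return queries, proof_size_units, prover_time_units
```

Under this model, raising the blowup from 2 to 16 quarters the query count (smaller, slower proofs), while lowering it does the opposite, which is the dial Geregè turns per recursion step.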

Hardware

We focus on optimising proof generation by collaborating with Snarkify on arkmsm, an MSM (multi-scalar multiplication) library that achieves up to a 2x speedup on CPUs. Recently the Snarkify team implemented four algorithmic optimisations on top of Arkworks:

Overall, they achieved a 1.6x to 2.0x speedup for input sizes scaling from 2^8 to 2^18, with each optimisation contributing an average improvement of 10% to 20%.


Batch addition of affine points was used to reduce the addition cost in both bucket accumulation and bucket reduction. You can read more here.
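A key ingredient of batch affine addition is sharing one field inversion across the whole batch (Montgomery's batch-inversion trick), since each affine point addition requires a modular inverse. A minimal Python sketch over a toy prime field (the modulus is illustrative, not a curve field actually used):

```python
# Montgomery's batch-inversion trick: invert n field elements with a
# single modular inverse plus O(n) multiplications. This is the core
# idea that makes batch affine addition cheap.
P = 2**61 - 1  # toy prime modulus, chosen only for the example

def batch_inverse(xs: list) -> list:
    if not xs:
        return []
    # Prefix products: pre[i] = xs[0] * ... * xs[i] mod P.
    pre, acc = [], 1
    for x in xs:
        acc = acc * x % P
        pre.append(acc)
    # One inversion for the whole batch (Fermat's little theorem).
    inv = pow(acc, P - 2, P)
    # Sweep backwards, peeling off one inverse at a time.
    out = [0] * len(xs)
    for i in range(len(xs) - 1, 0, -1):
        out[i] = inv * pre[i - 1] % P
        inv = inv * xs[i] % P
    out[0] = inv
    return out
```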

In bucket accumulation, instead of a tree-based addition, a list-based approach was used for a more straightforward implementation and constant extra memory usage.
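The overall bucket (Pippenger) method can be sketched as follows, with the additive group of integers standing in for elliptic-curve points (a toy model, not the arkmsm implementation): scalars are split into c-bit windows, points are accumulated into one bucket per scalar digit, and a running-sum pass reduces the buckets.

```python
# Toy sketch of the bucket (Pippenger) MSM method. Integers under
# addition stand in for curve points; window size c is illustrative.
def bucket_msm(scalars: list, points: list, c: int = 4) -> int:
    max_bits = max(s.bit_length() for s in scalars)
    windows = (max_bits + c - 1) // c
    total = 0
    for w in reversed(range(windows)):
        # "Double" the running total c times (a shift, in the toy group).
        total = total * (1 << c)
        # Bucket accumulation: list-based, one bucket per nonzero digit.
        buckets = [0] * (1 << c)
        for s, p in zip(scalars, points):
            digit = (s >> (w * c)) & ((1 << c) - 1)
            if digit:
                buckets[digit] += p
        # Bucket reduction: a running sum yields sum(d * buckets[d]).
        running, acc = 0, 0
        for b in reversed(buckets[1:]):
            running += b
            acc += running
        total += acc
    return total
```

The list-based accumulation loop above adds each point into exactly one bucket per window with no auxiliary tree, which is what keeps the extra memory constant.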
