Memory-Hard Mining Algorithms: Keeping Crypto Decentralized
Memory-hard mining algorithms resist ASIC dominance by requiring large amounts of RAM, keeping cryptocurrency mining accessible. Learn how Scrypt, Ethash, Equihash, RandomX, and KeccakCollision work.
The Problem with Mining Centralization
When Bitcoin launched in 2009, anyone with a laptop could mine BTC profitably. By 2012, GPUs had taken over. By 2013, the first ASICs (Application-Specific Integrated Circuits) appeared — custom chips designed exclusively to compute SHA-256 hashes. Today, Bitcoin mining is dominated by industrial operations running warehouses full of ASIC hardware manufactured by a handful of companies, primarily Bitmain, MicroBT, and Canaan.
This progression from laptops to warehouses illustrates a fundamental tension in proof-of-work cryptocurrency: if the mining algorithm can be dramatically accelerated by custom hardware, mining will inevitably centralize around those who can afford or manufacture that hardware.
Memory-hard mining algorithms are the primary defense against this centralization. By requiring large amounts of memory (RAM) to solve mining puzzles, these algorithms neutralize the advantage of custom silicon and keep mining accessible to participants using commodity hardware.
What Memory-Hardness Actually Means
To understand memory-hardness, you first need to understand why ASICs are so effective at traditional hash-based mining.
A SHA-256 hash computation is pure arithmetic — it consists of bitwise operations (AND, OR, XOR, shifts, rotations) and additions. These operations can be implemented in custom circuits that are thousands of times more efficient than a general-purpose CPU. An ASIC designer can pack millions of hash cores onto a single chip, each running at minimal power, achieving hashrates that no GPU or CPU could match.
Memory-hard algorithms disrupt this advantage by making the bottleneck memory access rather than computation. Here is why this matters:
Computation is cheap to parallelize in custom hardware. Adding more arithmetic units to an ASIC is relatively straightforward and inexpensive. A chip that costs $100 to manufacture might contain thousands of hash cores.
Memory is expensive to scale. RAM requires physical space on silicon, consumes significant power, and cannot be miniaturized as aggressively as logic circuits. A GB of SRAM costs orders of magnitude more than a GB of DRAM, and both are dramatically more expensive per unit than simple logic gates.
Memory bandwidth is a physical constraint. Even if you can fit more memory on a chip, the speed at which data can be read from and written to that memory is limited by fundamental physics — bus width, signal propagation, and thermal constraints.
A memory-hard algorithm forces miners to repeatedly read from and write to a large dataset during the mining process. The speed of mining becomes limited by memory bandwidth rather than computational throughput. Since commodity DRAM is already optimized for bandwidth (it is a massive market driven by consumer electronics, servers, and gaming), custom hardware offers little advantage over standard GPUs or CPUs equipped with standard memory.
Formal Definition
In computational complexity theory, a memory-hard function is one where:
- Computing the function requires memory proportional to a parameter n
- Computing it with substantially less memory forces a super-linear increase in computation time (a time-memory tradeoff)
- Verification of the result requires significantly less memory than computation
The third property is essential for blockchain applications. Miners need lots of memory to find solutions, but every node on the network needs to verify those solutions efficiently. If verification also required gigabytes of RAM, running a full node would be impractical.
Major Memory-Hard Mining Algorithms
Scrypt
Used by: Litecoin, Dogecoin, and hundreds of other coins
Introduced: 2009 (adapted for cryptocurrency by Litecoin in 2011)
Memory requirement: 128 KB in Litecoin's configuration
Scrypt was originally designed by Colin Percival as a password hashing function. Its key innovation is a large lookup table that must be stored in memory and accessed randomly during computation.
The algorithm works in three phases:
- Initialization: Fill a large array with pseudorandom data derived from the input
- Mixing: Repeatedly read random positions in the array, mixing the data
- Finalization: Produce the output hash from the mixed data
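The three phases above mirror Scrypt's core ROMix construction. The following is a purely illustrative Python sketch: real Scrypt uses PBKDF2 and a Salsa20/8-based BlockMix rather than the BLAKE2b stand-in here, and production memory parameters are larger.

```python
import hashlib

def toy_romix(data: bytes, n: int = 1024) -> bytes:
    """Toy sketch of Scrypt's ROMix: fill a table, then read it randomly."""
    h = lambda x: hashlib.blake2b(x, digest_size=64).digest()

    # Phase 1 (initialization): fill a table with a sequential hash chain.
    table = []
    x = h(data)
    for _ in range(n):
        table.append(x)
        x = h(x)

    # Phase 2 (mixing): n data-dependent random reads into the table.
    for _ in range(n):
        j = int.from_bytes(x[:8], "big") % n  # index depends on current state
        x = h(bytes(a ^ b for a, b in zip(x, table[j])))

    # Phase 3 (finalization): derive the output from the mixed state.
    return h(x)

digest = toy_romix(b"block header || nonce")
```

Because each read index `j` depends on the evolving state, the whole table must stay resident; a miner who discards entries is forced to recompute the hash chain, which is exactly the time-memory tradeoff memory-hardness relies on.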
The tradeoff in practice: Litecoin configured Scrypt with a relatively small memory parameter (128 KB) to keep verification fast. This turned out to be small enough that ASIC manufacturers eventually built Scrypt ASICs with enough on-chip memory. By 2014, Scrypt ASICs were available, and Litecoin mining centralized in a pattern similar to Bitcoin.
The lesson from Scrypt is that memory-hardness requires sufficient memory requirements. A few hundred kilobytes is not enough — modern algorithms target hundreds of megabytes to multiple gigabytes.
Ethash (and Etchash)
Used by: Ethereum (before The Merge), Ethereum Classic
Introduced: 2015
Memory requirement: 1-4+ GB DAG (grows over time)
Ethash was designed specifically for Ethereum with stronger memory-hardness than Scrypt. Its defining feature is the DAG (Directed Acyclic Graph), a large dataset that miners must store in memory and access during mining.
The algorithm works as follows:
- Seed generation: A seed hash is derived from the current epoch (changes every 30,000 blocks)
- Cache generation: A 16-64 MB cache is computed from the seed
- DAG generation: A full DAG (starting at ~1 GB and growing by ~8 MB per epoch, roughly half a gigabyte per year) is generated from the cache
- Mining: For each nonce attempt, the algorithm makes 64 lookups into the DAG, each accessing a 128-byte page at a pseudo-random location
- Verification: Verifiers can recompute the needed DAG pages from the cache, requiring only the smaller cache in memory
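The steps above can be sketched in miniature. This toy uses SHA3-256 and a simplified mixing step in place of Ethash's actual Keccak variants and FNV mixing, and the sizes are tiny; it is meant only to show the cache-to-DAG structure and the 64-lookup inner loop.

```python
import hashlib

H = lambda x: hashlib.sha3_256(x).digest()  # stand-in for Ethash's Keccak variants

def make_cache(seed: bytes, size: int = 1024) -> list:
    # Small cache: a sequential hash chain derived from the epoch seed.
    cache, x = [], H(seed)
    for _ in range(size):
        cache.append(x)
        x = H(x)
    return cache

def dag_item(cache: list, i: int) -> bytes:
    # Each full-DAG item is recomputable on demand from the cache alone;
    # this is what lets verifiers skip storing the full DAG.
    return H(cache[i % len(cache)] + i.to_bytes(8, "big"))

def hashimoto(header: bytes, nonce: int, cache: list,
              dag_items: int = 8192, lookups: int = 64) -> bytes:
    mix = H(header + nonce.to_bytes(8, "big"))
    for _ in range(lookups):  # 64 pseudo-random DAG reads per attempt
        i = int.from_bytes(mix[:8], "big") % dag_items
        mix = H(mix + dag_item(cache, i))  # a miner reads a precomputed page here
    return H(mix)

cache = make_cache(b"epoch seed")
result = hashimoto(b"block header", nonce=42, cache=cache)
```

A miner precomputes all `dag_items` pages once per epoch and serves the 64 reads from memory, so bandwidth dominates; a verifier recomputes only the handful of pages a given nonce touches, needing just the small cache.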
The growing DAG size was a deliberate design choice. As the DAG grows, lower-memory GPUs are gradually excluded, but commodity GPUs continue to have sufficient VRAM. This created a moving target that made ASIC design more difficult, though some Ethash ASICs were eventually produced.
Ethereum Classic continues to use a variant called Etchash with similar properties.
Equihash
Used by: Zcash, Horizen, Beam
Introduced: 2016
Memory requirement: ~144 MB (for standard parameters)
Equihash takes a different approach to memory-hardness. Instead of a large lookup table, it requires solving a generalized birthday problem — finding collisions among hash outputs where certain conditions are met.
The standard Equihash parameter set (n=200, k=9) requires:
- Generating 2^(n/(k+1)+1) = 2^21 hash outputs (about 2 million)
- Storing these in memory (approximately 144 MB)
- Finding 2^k = 512 of these outputs that XOR to zero (the birthday collision)
The collision search uses Wagner's algorithm, a multi-step procedure that repeatedly sorts and filters the dataset in memory.
Equihash’s memory-hardness comes from the requirement to store and sort millions of hash outputs. Reducing memory usage forces exponentially more computation, maintaining the time-memory tradeoff.
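One round of a Wagner-style search can be sketched as follows: bucket the hash outputs by their leading bits, then pair up items within a bucket so their XOR cancels those bits. The BLAKE2b stand-in, the 4-byte digests, and the tiny item count are illustrative choices, far smaller than Equihash's real parameters.

```python
import hashlib
from collections import defaultdict

def wagner_round(items, prefix_bits=8):
    """One round of a Wagner-style search: pair items whose hashes agree
    on the leading prefix_bits, so their XOR cancels those bits."""
    buckets = defaultdict(list)
    for idx, h in items:
        key = int.from_bytes(h, "big") >> (len(h) * 8 - prefix_bits)
        buckets[key].append((idx, h))
    out = []
    for group in buckets.values():
        for a in range(len(group)):
            for b in range(a + 1, len(group)):
                (i, hi), (j, hj) = group[a], group[b]
                out.append((i | j, bytes(x ^ y for x, y in zip(hi, hj))))
    return out

# Hash outputs indexed by a bitmask, so we can later recover which
# original indices combined into each partial collision.
seed = b"challenge"
items = [(1 << i, hashlib.blake2b(seed + i.to_bytes(4, "big"),
                                  digest_size=4).digest())
         for i in range(4096)]

paired = wagner_round(items)  # every pair's XOR has a zero leading byte
```

The memory cost is visible here: every hash output must stay resident for the bucketing, and later rounds repeat the process on the surviving pairs until a tuple XORs to zero on all bits.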
Despite its design goals, Bitmain released an Equihash ASIC (the Antminer Z9 Mini) in 2018, demonstrating that even well-designed memory-hard algorithms face pressure from custom hardware. Some Equihash coins, such as Bitcoin Gold, responded by moving to modified parameter sets, while Zcash ultimately chose not to change its algorithm; either way, the episode highlighted the ongoing arms race.
RandomX
Used by: Monero
Introduced: 2019
Memory requirement: ~2 GB dataset (fast mode); ~256 MB cache (light mode)
RandomX represents a different philosophy: instead of being memory-hard, it is CPU-hard. The algorithm dynamically generates random programs that execute in a virtual machine, leveraging the full complexity of a modern CPU — branch prediction, cache hierarchies, floating-point units, and superscalar execution.
While not purely memory-hard, RandomX requires a 2 GB dataset for its read-only lookup operations, and the random program execution pattern makes ASIC implementation extremely difficult. An ASIC for RandomX would essentially need to be a general-purpose CPU, at which point it offers no advantage over existing CPUs.
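The "random programs in a virtual machine" idea can be illustrated with a toy: derive a pseudo-random instruction sequence from a seed and execute it over a small register file. Real RandomX programs additionally use floating-point math, branches, and reads into the 2 GB dataset; the four-operation instruction set below is an invented simplification.

```python
import hashlib

OPS = ["add", "xor", "mul", "rot"]

def gen_program(seed: bytes, length: int = 64):
    """Derive a pseudo-random instruction list from a seed (toy RandomX)."""
    stream = hashlib.shake_256(seed).digest(length * 3)
    prog = []
    for i in range(length):
        op, dst, src = stream[3 * i], stream[3 * i + 1] % 8, stream[3 * i + 2] % 8
        prog.append((OPS[op % len(OPS)], dst, src))
    return prog

def execute(prog, regs):
    """Run the program on an 8-register file of 64-bit integers."""
    M = (1 << 64) - 1
    for op, d, s in prog:
        if op == "add":
            regs[d] = (regs[d] + regs[s]) & M
        elif op == "xor":
            regs[d] ^= regs[s]
        elif op == "mul":
            regs[d] = (regs[d] * (regs[s] | 1)) & M
        else:  # rot: rotate left by a data-dependent amount
            r = regs[s] % 64
            regs[d] = ((regs[d] << r) | (regs[d] >> (64 - r))) & M
    return regs

seed = b"block template"
init = [int.from_bytes(hashlib.sha3_256(seed + bytes([i])).digest()[:8], "big")
        for i in range(8)]
regs = execute(gen_program(seed), init)
```

Because each block's seed yields a different instruction mix, a fixed-function circuit cannot be specialized for the workload in advance; only hardware that executes arbitrary instruction sequences well (i.e., a CPU) runs it efficiently.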
RandomX has proven remarkably ASIC-resistant since its deployment. Monero mining remains dominated by consumer CPUs, validating the design approach.
KeccakCollision
Used by: EVMORE
Introduced: 2025
Memory requirement: Configurable (tunable puzzle parameters)
KeccakCollision is a newer algorithm built on the Keccak hash function, the primitive NIST standardized as SHA-3. It combines Keccak's well-studied security properties with a collision-finding puzzle that requires significant memory.
The algorithm works as follows:
- Challenge derivation: A mining challenge is derived from the current blockchain state
- Collision search: Miners must find multiple values (K=4 in EVMORE’s configuration) where the keccak256 hash of the challenge concatenated with each value shares matching bit patterns in the first N bits (N=16 in EVMORE’s configuration)
- Memory requirement: Efficiently searching for collisions requires building and querying large lookup tables in memory, similar to birthday attack optimizations
- On-chain verification: Solutions are verified directly in smart contracts, with the verification process requiring only the candidate values (not the full search table)
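The search-and-verify flow described above can be sketched as follows. This is a general illustration of the scheme the steps describe, not EVMORE's actual implementation; it uses `hashlib.sha3_256` as a stand-in (Ethereum's keccak256 uses different padding than standardized SHA3-256), and the small `n_bits` value is chosen only so the demo runs quickly.

```python
import hashlib
from collections import defaultdict

def prefix(challenge: bytes, value: int, n_bits: int) -> int:
    """First n_bits of the hash of challenge || value."""
    h = hashlib.sha3_256(challenge + value.to_bytes(32, "big")).digest()
    return int.from_bytes(h, "big") >> (256 - n_bits)

def mine(challenge: bytes, n_bits: int, k: int) -> list:
    """Search for K values whose hash prefixes all match.
    The bucket table is the memory-heavy part of the search."""
    buckets = defaultdict(list)  # prefix -> candidate values (the lookup table)
    value = 0
    while True:
        p = prefix(challenge, value, n_bits)
        buckets[p].append(value)
        if len(buckets[p]) == k:
            return buckets[p]
        value += 1

def verify(challenge: bytes, solution: list, n_bits: int, k: int) -> bool:
    """Cheap check: recompute K hashes and compare their prefixes."""
    if len(solution) != k or len(set(solution)) != k:
        return False
    return len({prefix(challenge, v, n_bits) for v in solution}) == 1

sol = mine(b"chain state", n_bits=12, k=4)  # small n_bits keeps the demo fast
```

The asymmetry is visible in the code: `mine` builds and queries a growing table across many candidate values, while `verify` computes exactly K hashes, which is why the check is cheap enough to run in a smart contract.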
The key properties that make KeccakCollision suitable for mining:
Asymmetric difficulty. Finding collisions requires extensive memory and computation. Verifying them requires only computing a few hashes and comparing bit patterns. This asymmetry is ideal for blockchain mining where verification must be cheap.
Configurable parameters. The N (bit pattern length) and K (number of required values) parameters can be adjusted to tune difficulty and memory requirements. This gives the protocol flexibility to respond to hardware developments.
Well-studied cryptographic foundation. Keccak has been extensively analyzed by the cryptographic community since its selection as the SHA-3 standard. Using a well-known primitive reduces the risk of hidden weaknesses that have affected some novel mining algorithms.
Smart contract compatibility. Because verification involves only standard Keccak operations, solutions can be verified on-chain in Ethereum smart contracts. This enables mining to be fully transparent and auditable without relying on off-chain verification.
How Memory-Hard Algorithms Compare
| Algorithm | Memory Needed | ASIC Resistance (2026) | Verification Cost | Hash Foundation | Primary Weakness |
|---|---|---|---|---|---|
| SHA-256 | Minimal | None (fully ASIC) | Very low | SHA-2 | Complete ASIC dominance |
| Scrypt | 128 KB | None (ASICs exist) | Low | Salsa20/8 | Insufficient memory |
| Ethash | 1-4+ GB | Moderate | Low (cache-based) | Keccak-256/512 | Growing DAG excludes devices |
| Equihash | ~144 MB | Low (ASICs exist) | Moderate | Blake2b | Memory footprint too small |
| RandomX | 2 GB | High | High (VM execution) | AES/Blake2b | CPU-specific optimization |
| KeccakCollision | Configurable | High | Low (hash comparison) | Keccak-256 | Newer, less battle-tested |
The Tradeoffs of Memory-Hardness
Memory-hard algorithms are not without costs. Understanding these tradeoffs is important for evaluating any mining-based cryptocurrency.
Verification Overhead
Traditional hash-based mining (SHA-256) has near-zero verification cost — just compute one hash and check if it meets the target. Memory-hard algorithms generally have higher verification costs, which can impact block propagation time and node resource requirements.
Well-designed algorithms mitigate this through asymmetric verification, where the verification path is much cheaper than the search path. Ethash achieves this through its cache-based verification. KeccakCollision achieves it by requiring only K hash computations for verification versus a large search space for mining.
Hardware Lifecycle
Memory-hard algorithms that target GPUs tie mining to the consumer graphics card market. This has benefits (broad hardware availability, competitive pricing) but also drawbacks. GPU shortages — like those during the 2020-2021 chip crisis — can impact mining accessibility and drive up hardware costs for gamers and professionals.
CPU-targeted algorithms like RandomX avoid this issue but may be more susceptible to botnet mining, where attackers use compromised machines for unauthorized mining.
Difficulty in Formal Security Proofs
Proving that a function is memory-hard is more complex than proving computational hardness. Several algorithms initially believed to be memory-hard were later found to have time-memory tradeoff vulnerabilities that allowed mining with less memory than intended. Formal verification of memory-hardness properties remains an active area of cryptographic research.
Energy Distribution
Memory-hard mining uses energy differently from compute-hard mining. A significant portion of the energy goes to powering memory modules rather than compute cores. The total energy consumption per unit of security is not necessarily lower — the energy is simply distributed differently across hardware components.
The Arms Race Continues
The history of mining algorithm design is an ongoing arms race between algorithm designers and hardware manufacturers. Every algorithm eventually faces optimization pressure, whether from purpose-built ASICs, FPGAs, or novel computational techniques.
The trend in 2026 is toward algorithms that:
- Leverage existing hardware optimizations. Rather than trying to create puzzles that no hardware can solve efficiently, modern algorithms target the strengths of commodity hardware (CPUs, GPUs with standard DRAM) that are already mass-produced and optimized.
- Use well-studied cryptographic primitives. Novel hash functions or puzzles carry hidden risks. Algorithms built on SHA-3, AES, or other heavily scrutinized primitives benefit from decades of analysis.
- Support on-chain verification. As smart contract platforms mature, the ability to verify mining solutions on-chain becomes increasingly valuable. This enables transparent, trustless mining where anyone can audit every solution.
- Maintain configurable parameters. Fixed parameters become optimization targets. Algorithms with adjustable difficulty, memory, and puzzle parameters can adapt to hardware developments without requiring hard forks.
Why ASIC Resistance Matters for Decentralization
The connection between mining hardware and network decentralization is direct and measurable.
Bitcoin mining is concentrated among a small number of operators in a few geographic regions, primarily due to ASIC manufacturing centralization. Three companies produce virtually all Bitcoin ASICs, and a handful of mining pools control the majority of hashrate. While Bitcoin’s security remains strong due to the sheer scale of investment, the network’s decentralization has degraded significantly from its early years.
By contrast, Monero (using RandomX) maintains a more distributed mining network. Without specialized hardware requirements, mining occurs on consumer devices across the globe. No single entity or small group can dominate through hardware advantages.
For new projects launching in 2026, memory-hard and ASIC-resistant algorithms represent the most practical path to decentralized mining. EVMORE’s choice of KeccakCollision reflects this reality — by building on Keccak’s well-studied foundation with memory-hard collision search requirements, the algorithm aims to keep mining accessible to anyone with standard computing hardware.
Conclusion
Memory-hard mining algorithms exist to preserve one of cryptocurrency’s core promises: that anyone can participate. When mining centralizes around specialized hardware, the network’s censorship resistance, geographic distribution, and economic fairness all degrade.
The evolution from Scrypt’s early attempts through Ethash and RandomX to newer approaches like KeccakCollision shows a maturing understanding of what memory-hardness requires. Modern algorithms combine well-studied cryptographic foundations with configurable parameters and efficient on-chain verification.
For miners and cryptocurrency users, understanding the algorithm behind a project is not just a technical detail — it directly predicts how decentralized the network will remain over time. Projects that invest in genuine ASIC resistance are making a long-term commitment to accessible, decentralized participation. In an industry increasingly dominated by institutional players and specialized hardware, that commitment matters more than ever.