pqc ml-dsa nist cryptography

ML-DSA Explained: What NIST FIPS 204 Actually Is

CRYSTALS-Dilithium is now ML-DSA under NIST FIPS 204. Here is what the standard actually specifies, why lattice-based cryptography resists quantum attacks, and what the practical trade-offs are.

QNTM Team | April 10, 2026 | 12 min read

In August 2024, NIST finalized its first post-quantum cryptographic standards after an eight-year competition. The most consequential for digital signatures is FIPS 204, which standardizes an algorithm called ML-DSA. If you have been following PQC literature, you may know it as CRYSTALS-Dilithium. They are the same algorithm. The name changed when it became a federal standard.

This article is a technical explanation of what ML-DSA actually is: where it came from, why the math makes it quantum-resistant, what the parameter sets mean, and what you give up by using it instead of ECDSA.

The NIST PQC Competition (2016-2024)

NIST opened its post-quantum cryptography standardization process in 2016 with a call for proposals. The goal was explicit: identify algorithms capable of resisting both classical and quantum attacks, suitable for widespread standardization.

Eighty-two algorithms were submitted from research teams worldwide. NIST ran three evaluation rounds over eight years, applying cryptanalysis, performance benchmarking, and implementation security review. The evaluation criteria included security against quantum adversaries, concrete classical and quantum security levels, key and signature sizes, computational performance, and side-channel resistance.

In August 2024, NIST published FIPS 203, 204, and 205 as final standards, with FIPS 206 following on a later schedule:

  • FIPS 203: ML-KEM (Module Lattice Key Encapsulation Mechanism), formerly CRYSTALS-Kyber. For key exchange.
  • FIPS 204: ML-DSA (Module Lattice Digital Signature Algorithm), formerly CRYSTALS-Dilithium. For digital signatures.
  • FIPS 205: SLH-DSA (Stateless Hash-based Digital Signature Algorithm), formerly SPHINCS+. Hash-based signatures.
  • FIPS 206: FN-DSA (FFT-over-NTRU-Lattice-Based Digital Signature Algorithm), formerly FALCON. Compact lattice signatures.

ML-DSA and FN-DSA are both signature algorithms. QNTM uses ML-DSA as its primary signature scheme, with FN-DSA available for contexts where signature size is the binding constraint.

Why Classical Signatures Break Under Quantum Attack

To understand why ML-DSA matters, you need to understand what Shor’s algorithm actually does.

ECDSA’s security rests on the Elliptic Curve Discrete Logarithm Problem. The private key k and public key Q are related by Q = k * G, where G is a known generator point on the curve. Classical computers cannot invert this operation efficiently: the best known classical attacks (Pollard’s rho and its variants) run in time proportional to the square root of the group order, which for a 256-bit curve means roughly 2^128 operations.
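To make the asymmetry concrete, here is a toy sketch of Q = k * G on a deliberately tiny curve (the textbook example y^2 = x^3 + 2x + 2 over F_17 with generator (5, 1), not a real curve): computing Q from k is fast even for enormous k, while recovering k from Q requires walking the group.

```python
# Toy elliptic-curve discrete log over a 17-element field. Illustrative only:
# real ECDSA uses ~256-bit fields, where the brute-force loop below would
# take roughly 2^128 steps even with the best classical algorithms.
P_MOD, A = 17, 2                 # curve y^2 = x^3 + 2x + 2 over F_17
G = (5, 1)                       # generator point of prime order 19

def ec_add(p, q):
    """Affine point addition; None represents the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None              # p + (-p) = infinity
    if p == q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mul(k, point):
    """Compute k * point with double-and-add -- cheap even for huge k."""
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, point)
        point = ec_add(point, point)
        k >>= 1
    return acc

k = 13                           # "private key"
Q = scalar_mul(k, G)             # "public key" -- fast to derive

# Recovering k from Q means stepping through the group one addition at a
# time: exponential in the field size. Feasible here only because F_17 is tiny.
guess, running = 1, G
while running != Q:
    running = ec_add(running, G)
    guess += 1
print(guess)                     # prints 13
```

Shor’s algorithm collapses exactly this asymmetry: on a quantum computer the recovery loop becomes polynomial-time regardless of field size.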

Shor’s algorithm, published in 1994, solves discrete logarithm and integer factorization problems in polynomial time on a quantum computer. For a 256-bit elliptic curve, Shor’s algorithm reduces the attack complexity from 2^128 to polynomial in the number of bits. The security does not degrade gracefully. It collapses entirely.

RSA has the same problem. RSA’s security rests on integer factorization, which Shor’s algorithm also solves in polynomial time.

The correct framing: ECDSA and RSA have exponential security against classical adversaries and polynomial (effectively zero) security against quantum adversaries with sufficient hardware. There is no middle ground.

Lattice Cryptography: The Hard Problems That Quantum Cannot Break

Lattice-based cryptography builds on mathematical problems that resist both classical and quantum attacks. The foundation is the Learning With Errors (LWE) problem, introduced by Oded Regev in 2005.

A lattice is a set of points in n-dimensional space with regular spacing, defined by a set of basis vectors. Lattice problems involve finding short vectors or solving equations defined over these structures. The hardest lattice problems are believed to require exponential time on both classical and quantum computers.

LWE works as follows. Given a secret vector s, a matrix A chosen uniformly at random, and a small error vector e, the problem is to recover s from pairs (A, b) where b = As + e (mod q). The error term e is small, drawn from a discrete Gaussian distribution. Without e, you would just solve a linear system. With e, the problem becomes hard in a way that resists known quantum algorithms.
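The construction above can be sketched in a few lines. This is a toy instance with illustrative sizes (real schemes use dimensions in the hundreds and proper discrete Gaussian sampling, not the tiny uniform error used here):

```python
# Toy LWE instance: b = A*s + e (mod q). Parameters are far too small to be
# secure; they only show the shape of the problem.
import random

q, n, m = 97, 4, 8               # modulus, secret length, number of samples
random.seed(1)

s = [random.randrange(q) for _ in range(n)]                      # secret vector
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # uniform matrix
e = [random.choice([-2, -1, 0, 1, 2]) for _ in range(m)]         # small error
                                                                 # (stand-in for a
                                                                 # discrete Gaussian)
# Each sample b_i is a noisy linear equation in s.
b = [(sum(a_ij * s_j for a_ij, s_j in zip(row, s)) + e_i) % q
     for row, e_i in zip(A, e)]

# Without e, Gaussian elimination on (A, b) would recover s in polynomial
# time. The small error breaks that: (A, b) is conjectured indistinguishable
# from uniform, for classical and quantum attackers alike.
print(b)
```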

Why does quantum fail here? Shor’s algorithm exploits periodic structure in the mathematical problems it solves. Discrete logarithm and factoring both have exploitable periodicity that quantum Fourier transforms can detect. LWE has no such structure. The best quantum algorithms for lattice problems (primarily algorithms based on quantum random walks) provide only polynomial or small polynomial speedups over classical algorithms, insufficient to break properly sized parameters.

The formal connection is important: LWE’s hardness can be reduced to the hardness of worst-case lattice problems, specifically GapSVP (the decision version of the shortest vector problem) and SIVP (the shortest independent vectors problem). This means an efficient algorithm for average-case LWE would yield an efficient algorithm for the hardest instances of these fundamental lattice problems, which researchers have studied for decades without finding one.

ML-DSA Specifically: From LWE to Signatures

ML-DSA uses the “Module” variant of LWE (MLWE), which works over polynomial rings rather than plain integers. This is more efficient than plain LWE while maintaining the same hardness assumptions.

The signing process uses a technique called Fiat-Shamir with Aborts. The signer generates a random commitment, creates a challenge hash, computes a response, and checks whether the response is “short enough” to be published without leaking information about the secret key. If the response is too large, the process aborts and restarts. This rejection sampling ensures the signature distribution reveals nothing about the private key.
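The abort loop has a simple shape, sketched below. This is a schematic with made-up toy values, not the FIPS 204 polynomial arithmetic: BOUND stands in for the norm bound (gamma_1 minus beta in the standard), and the secret is a single integer rather than a vector of ring elements.

```python
# Shape of Fiat-Shamir with Aborts: sample a random mask, derive a challenge
# by hashing, compute a response, and publish it only if it lies in a "safe"
# range that leaks nothing about the secret. Otherwise, abort and retry.
import hashlib
import random

BOUND = 1000          # toy stand-in for the FIPS 204 norm bound
SECRET = 37           # toy stand-in for a secret-key coefficient

def toy_sign(message: bytes):
    attempts = 0
    while True:
        attempts += 1
        y = random.randrange(-2000, 2001)                     # random mask
        h = hashlib.shake_128(message + str(y).encode()).digest(2)
        challenge = int.from_bytes(h, "big") % 3 - 1          # toy challenge in {-1,0,1}
        z = y + challenge * SECRET                            # response
        if abs(z) <= BOUND:                                   # rejection step:
            return z, attempts                                # safe to publish
        # z too large -> it could leak SECRET, so abort and restart

z, attempts = toy_sign(b"hello")
print(abs(z) <= BOUND)   # True: only responses inside the bound are ever published
```

The published distribution of z is independent of SECRET precisely because out-of-range responses are discarded; in ML-DSA the expected number of restarts is a small constant (a handful of iterations on average).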

The verification process checks that the signature satisfies a linear equation over the ring, computed using the public key and the message hash. It is fast: a single matrix-vector multiplication and norm check.

Parameter Sets: ML-DSA-44, ML-DSA-65, ML-DSA-87

FIPS 204 defines three parameter sets corresponding to three security levels:

Parameter Set   Security Level   Security Bits (Classical)   Security Bits (Quantum)
ML-DSA-44       NIST Level 2     128-bit                     64-bit
ML-DSA-65       NIST Level 3     192-bit                     96-bit
ML-DSA-87       NIST Level 5     256-bit                     128-bit

NIST security levels are defined by comparison to symmetric primitives: Level 2 is “at least as hard to break as finding a SHA-256 collision,” Level 3 “as hard as recovering an AES-192 key,” Level 5 “as hard as recovering an AES-256 key.” The quantum security column reflects that Grover’s algorithm provides a quadratic speedup on symmetric key search, halving the effective key length against quantum adversaries.

The module dimension k x l differs across parameter sets (4x4, 6x5, 8x7 for 44/65/87 respectively), which drives the size and performance differences.

Key and Signature Sizes: The Real Trade-off

This is where the honest engineering conversation begins.

Algorithm                    Public Key              Private Key   Signature
ECDSA (secp256k1, 256-bit)   33 bytes (compressed)   32 bytes      64 bytes
ML-DSA-44                    1,312 bytes             2,560 bytes   2,420 bytes
ML-DSA-65                    1,952 bytes             4,032 bytes   3,309 bytes
ML-DSA-87                    2,592 bytes             4,896 bytes   4,627 bytes

ML-DSA-65 signatures are 3,309 bytes versus ECDSA’s 64 bytes. That is a roughly 52x increase. On a blockchain that processes thousands of transactions per second, this is not a negligible overhead.

This is the trade that the PQC transition demands. You pay in bandwidth and storage to gain security that holds against quantum adversaries. The mathematics does not offer a cheaper path: smaller lattice parameters mean easier attacks. The size floor is set by the hardness of the underlying problem.

To put the blockchain impact in context: a Bitcoin block with 2,000 ECDSA transactions carries roughly 128 KB of signature data (2,000 x 64 bytes). The same block with ML-DSA-65 signatures carries roughly 6.6 MB of signature data. That is the cost of quantum resistance, paid in bytes.
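The arithmetic is worth spelling out. The sketch below uses the final FIPS 204 signature sizes (which differ slightly from the round-3 Dilithium figures sometimes quoted) and the 2,000-transaction block from the example above:

```python
# Back-of-envelope signature overhead per block, using FIPS 204 final sizes.
SIG_BYTES = {"ECDSA": 64, "ML-DSA-44": 2420, "ML-DSA-65": 3309, "ML-DSA-87": 4627}
TXS_PER_BLOCK = 2000

for name, size in SIG_BYTES.items():
    total_mb = size * TXS_PER_BLOCK / 1e6
    print(f"{name:10s} {total_mb:6.2f} MB of signatures per block")
# prints ~0.13 MB for ECDSA and ~6.62 MB for ML-DSA-65
```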

QNTM accounts for this at the protocol level: block size parameters, propagation limits, and state storage are all calibrated for ML-DSA-65 signatures from genesis. A classical chain migrating to PQC signatures mid-life faces a fork that changes every assumption about transaction throughput and block economics. QNTM has no such legacy to work around.

Performance: Keygen, Sign, Verify

On commodity x86-64 hardware (reference implementation, not AVX2-optimized):

Operation        ML-DSA-44   ML-DSA-65   ML-DSA-87   ECDSA (secp256k1)
Key generation   ~0.06 ms    ~0.10 ms    ~0.13 ms    ~0.05 ms
Signing          ~0.08 ms    ~0.11 ms    ~0.14 ms    ~0.05 ms
Verification     ~0.05 ms    ~0.07 ms    ~0.10 ms    ~0.07 ms

Latency is comparable to ECDSA. At the operation level, ML-DSA is not meaningfully slower than ECDSA. The performance difference is dominated by the size difference: larger signatures mean more bytes to transmit, store, and hash. But the per-operation compute cost is in the same order of magnitude.

With AVX2 vectorization, ML-DSA performance improves by 3-5x. The reference package from the Dilithium team includes an optimized AVX2 implementation, and production C and Rust libraries ship comparable vectorized paths.

FIPS 204 Standard Requirements

FIPS 204 specifies the algorithm deterministically. Key generation, signing, and verification are all defined by explicit polynomial arithmetic over the ring Z_q[X]/(X^256 + 1) with q = 8380417 (= 2^23 - 2^13 + 1, a prime chosen to support fast NTT-based multiplication). The standard includes:

  • Complete pseudocode for all three operations
  • Known Answer Tests (KAT vectors) for all three parameter sets
  • Requirements for randomness: key generation requires a cryptographically secure RNG; signing supports a default “hedged” (randomized) mode and an optional deterministic mode
  • Side-channel guidance: implementations should run in constant time to prevent timing attacks. FIPS 204 flags specific operations where variable-time implementations are dangerous.

Implementations must pass the KAT vectors exactly. This is verifiable: any compliant ML-DSA implementation produces identical outputs for identical inputs and randomness. QNTM’s ML-DSA implementation is tested against FIPS 204 KAT vectors in CI on every commit.
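The ring arithmetic that underlies all three operations can be sketched directly from the definition above. This is a schoolbook version for clarity; real implementations use the NTT, which is why q was chosen as an NTT-friendly prime:

```python
# Multiplication in the FIPS 204 ring Z_q[X]/(X^256 + 1), q = 8380417.
# Schoolbook negacyclic convolution: X^256 wraps around to -1.
Q = 8380417            # = 2^23 - 2^13 + 1
N = 256

def ring_mul(a, b):
    """Multiply two degree-<256 polynomials (coefficient lists) in the ring."""
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % Q
            else:                       # X^k = -X^(k-N), since X^256 = -1
                res[k - N] = (res[k - N] - ai * bj) % Q
    return res

# Sanity check: X^255 * X = X^256 = -1 in this ring.
x255 = [0] * N; x255[255] = 1
x1   = [0] * N; x1[1] = 1
product = ring_mul(x255, x1)
print(product[0] == Q - 1)   # True: the constant term is -1 mod q
```

The NTT replaces this O(n^2) loop with an O(n log n) transform, which is where the AVX2 speedups discussed earlier come from.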

Why ML-DSA-65 and Not Level 5

QNTM uses ML-DSA-65 (NIST Level 3, targeting 192-bit classical / 96-bit quantum security). Level 5 would provide 256-bit classical security but at larger key and signature sizes with no known threat that would justify it.

The quantum security argument: the best known quantum attack on ML-DSA requires finding short vectors in a module lattice using quantum algorithms. Current analysis suggests quantum adversaries achieve no better than 96-bit security against ML-DSA-65, which means breaking it would require approximately 2^96 quantum operations. Building a quantum computer capable of executing 2^96 operations is not a near-term engineering problem. The additional margin of Level 5 (2^128 quantum operations) exceeds any plausible threat horizon by orders of magnitude.

The engineering argument: every transaction on QNTM carries an ML-DSA signature. Choosing Level 5 over Level 3 increases signature size from 3,309 to 4,627 bytes, a 40% increase in signature data per transaction. For a network targeting high throughput, that is a significant and unnecessary overhead for security that exceeds the threat model by a factor of 2^32.

The NIST guidance is consistent with this reasoning: Level 3 is appropriate for long-term security where protection must hold for decades. Level 5 is reserved for the highest-value targets where resource-unlimited adversaries are assumed.

Summary

ML-DSA is a mature, well-analyzed signature algorithm that NIST standardized after an eight-year competition. Its security rests on lattice problems that quantum computers cannot solve efficiently. The cost is larger keys and signatures. For any system that must survive past the CRQC threshold, that cost is not a choice.


Key Takeaways

  • NIST published its first finalized PQC standards in August 2024. ML-DSA (FIPS 204) is the primary digital signature standard.
  • ECDSA and RSA have zero security against a quantum adversary running Shor’s algorithm. The transition is not optional.
  • Lattice problems (LWE, MLWE) have no known quantum speedup beyond polynomial. The hardness is believed to be exponential for both classical and quantum adversaries.
  • ML-DSA-65 signatures are 3,309 bytes vs. ECDSA’s 64 bytes. This is the honest engineering trade for quantum resistance.
  • Performance is comparable to ECDSA at the operation level. Overhead is driven by data size, not compute time.
  • QNTM uses ML-DSA-65 (NIST Level 3), calibrated for threat models that include future quantum adversaries without over-engineering for implausible threat scenarios.