The IETF TLS Working Group has been in an extended, sometimes heated debate over whether to publish draft-ietf-tls-mlkem, a specification that would register pure ML-KEM codepoints for TLS 1.3 key exchange. The draft went through Working Group Last Call, got pushed back, went through a second WGLC, and is still unresolved. The mailing list thread runs to 70+ messages. Formal verification researchers, former IETF chairs, and engineers from Google, Akamai, Cloudflare, SandboxAQ, and Zscaler have weighed in.
The debate has been framed as a binary: hybrid PQC or pure PQC. That framing is wrong, and the evidence in the thread itself shows why. The mailing list debate is happening in a vacuum of deployment reality. The numbers tell a story that neither side fully reckons with.
What’s Actually Being Proposed
Deirdre Connolly (SandboxAQ) authored the draft. It registers three pure ML-KEM NamedGroup values—MLKEM512 (0x0200), MLKEM768 (0x0201), MLKEM1024 (0x0202)—for TLS 1.3. All three are marked Recommended: N in the IANA registry. The intended RFC status is Informational, not Standards Track.
Compare this with the hybrid codepoints already in massive production deployment. X25519MLKEM768 (0x11EC) is the default key exchange in Chrome, Firefox, and essentially every PQC-capable TLS client shipping today. A separate draft by Muhammad Usama Sardar (TU Dresden) is pushing to flip the hybrid codepoints to Recommended: Y.
The standards process is encoding a clear hierarchy: hybrid is recommended, pure is available but not recommended. The question is whether “available but not recommended” should even be published.
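The hierarchy above can be made concrete as data. The codepoints are from the two drafts and the Recommended flags mirror the IANA Supported Groups registry as described here; the `group` struct and `registry` slice are illustrative scaffolding, not any real library's API:

```go
package main

import "fmt"

// group mirrors one row of the IANA TLS Supported Groups registry:
// the NamedGroup name, its codepoint, and the Recommended column.
type group struct {
	name        string
	codepoint   uint16
	recommended bool
}

// registry lists the codepoints at issue: three pure ML-KEM entries
// (Recommended: N) and the hybrid default. The hybrid is also N today;
// a separate draft proposes flipping it to Y.
var registry = []group{
	{"MLKEM512", 0x0200, false},
	{"MLKEM768", 0x0201, false},
	{"MLKEM1024", 0x0202, false},
	{"X25519MLKEM768", 0x11EC, false}, // proposed Recommended: Y
}

func main() {
	for _, g := range registry {
		fmt.Printf("%-15s 0x%04X recommended=%v\n", g.name, g.codepoint, g.recommended)
	}
}
```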
The Case Against
Nadim Kobeissi made the most visible case against the draft, both on the mailing list and on LinkedIn, arguing that hybrid constructions provide compositional security that pure ML-KEM discards for no tangible benefit. But the deeper technical objections came from contributors like Joshua, who systematically addressed each motivation the revised draft offered—regulatory frameworks, smaller key sizes, simplicity—and found none of them justify preferring standalone over hybrid. The key size overhead of adding X25519 to ML-KEM-768 is 2.9%. The cycle count difference is microseconds. "We'll eventually need pure ML-KEM anyway" doesn't explain why you'd skip the seatbelt now. And the revised Security Considerations text, which was supposed to resolve the first WGLC's objections, largely restated the draft's purpose rather than engaging the counterarguments.
Kobeissi framed it in political terms—comparing the regulatory justification to Kazakhstan's 2015 attempt to install a MITM root certificate. The technical objectors framed it in engineering terms: the draft's motivation section is circular, and none of its stated benefits survive scrutiny when weighed against the compositional security guarantee that hybrid provides.
The Case For
The responses split three ways.
Nick Sullivan made the procedural point cleanly: this is an Informational document registering codepoints with Recommended: N. Treating the WGLC as though it were recommending pure ML-KEM for deployment conflates two different questions—whether the codepoint should exist versus whether anyone should use it.
Sophie Schmieg (Google) wrote an entire post debunking concerns about ML-KEM, arguing that the practical effect of the pure codepoint is that the NSA gets slower handshakes. Everyone else keeps using the hybrid. TLS has negotiation mechanisms; if one side believes ML-KEM is insufficiently secure on its own, they negotiate the hybrid instead.
Filippo Valsorda was blunter, supporting publication “both on the technical merits and to end the DoS on the resources of this WG.”
Where It Landed
Benjamin Kaduk (Akamai, former IESG Area Director) cut through the noise with a structural observation: the TLS WG needs a consistent position across documents being published contemporaneously. You can’t publish a hybrid draft that says hybrids are generally preferred while simultaneously publishing a pure draft that doesn’t acknowledge this preference. The two documents need to cohere.
Ilari Liusvaara proposed language that distills the emerging consensus. Pure ML-KEM codepoints should exist, but the draft should carry a SHOULD-level requirement to use hybrid, with three carved-out exceptions:
First, you’re following a security profile standard that accepts the risk—CNSA 2.0 being the obvious case.
Second, a cryptographically relevant quantum computer has rendered traditional cryptography moot—the ANSSI phase 3 endgame.
Third, you’re in a constrained environment where the hybrid overhead is genuinely unacceptable—IoT devices where every byte matters, and where ML-KEM being broken is far from the worst security risk.
Kaduk endorsed this direction. Connolly began drafting revised Security Considerations text. The draft status moved to “Revised I-D Needed—Issue raised by WGLC.”
What the Wire Actually Shows
On the client side, adoption looks dramatic. Cloudflare Radar reports over 60% of TLS 1.3 traffic to its network now includes post-quantum key agreement, up from under 3% at the start of 2024. The inflection was Apple’s iOS 26 release in September 2025, which enabled hybrid PQC by default. Four days after launch, PQ support from iOS devices surged from under 2% to 11%. By December, over 25% of iOS requests used post-quantum encryption. Chrome and Firefox were already there. The browser auto-update cycle did what years of standards advocacy couldn’t.
Every single one of those connections negotiates X25519MLKEM768—the hybrid. Zero pure ML-KEM. That’s not a choice; it’s the only option that exists.
Now look at the origin side, which is where the harvest-now-decrypt-later risk actually lives. Cloudflare’s automated TLS scanner, launched in February 2026, reports approximately 10% of customer origin servers support X25519MLKEM768. That’s a 10x increase from under 1% at the start of 2025, driven largely by library defaults—Go 1.24+, OpenSSL 3.5.0+, and GnuTLS 3.8.9+ all enabled hybrid PQC by default. Upgrade your TLS library, get PQC for free. But 10% is still 10%. Ninety percent of origin servers negotiate zero post-quantum key exchange.
This is the gap that matters. The 60% client-side number measures browser market share, not security posture. A Chrome user connecting to Cloudflare’s edge with X25519MLKEM768 gets hybrid PQC for the last mile—but if Cloudflare’s fetch to the origin falls back to X25519, the full path isn’t protected. The traffic Cloudflare reports as “post-quantum” is post-quantum to the edge. The origin connection—the one carrying the actual application data through infrastructure you control—is classical in 90% of cases.
This is what pqprobe scans for. Not the edge. The origin. Not the browser’s capability. The server’s. And not just TLS on port 443—pqprobe checks 20+ protocols across 60+ port variations: SSH, SMTP, IMAP, RDP, database protocols, Kafka, healthcare protocols. The attack surface for harvest-now-decrypt-later extends across every encrypted channel an organization operates. A server with hybrid TLS on port 443 and classical SSH on port 22 has a partial migration, and a partial migration is a measurable trajectory.
On the authentication side, the picture is even starker. Zero PQC certificates are being served in production TLS. No Certificate Authority has issued ML-DSA certificates yet—Cloudflare’s own team expects the earliest CA issuance around 2026, with broader adoption in 2027 at the earliest, bottlenecked by HSM hardware support, FIPS audits, and CA/Browser Forum approval. The Reddy/Wing migration guidance draft maps three paths for authentication—composite certificates, dual certificates, PQC-only—and every path leads to pure PQC as the endpoint. But Reddy’s Section 9.3 explains why hybrid is architecturally temporary: once the traditional component breaks, you lose Strong Unforgeability. Hybrid mechanisms degrade and must eventually be retired.
So the full picture is: key exchange is partially migrated to hybrid at the edge, barely migrated at the origin, and zero migrated to pure. Authentication hasn’t started. The pure-vs-hybrid debate on the mailing list is about the end state of a migration that most of the internet hasn’t begun.
The False Binary
The real question was never “hybrid or pure.” It was “where are you on the migration curve, and is your trajectory correct?” The IETF is answering this by publishing both hybrid and pure specifications, marking hybrid as recommended and pure as available but not recommended, and requiring the pure draft to explicitly defer to hybrid as the current default.
That’s the IETF building the road before the traffic needs it, which is what standards bodies are supposed to do. Treating the publication of a non-recommended codepoint as a weakening of TLS is where the objection goes wrong, for two reasons.
The Kazakhstan analogy collapses under this framing. Kazakhstan wanted to intercept TLS traffic. CNSA 2.0 wants to use the algorithm everyone agrees is the destination without waiting for the rest of the ecosystem to catch up. You can disagree with the NSA’s confidence in lattice math without treating their timeline as an attack on the protocol.
A codepoint in the registry doesn’t weaken your connection any more than the existence of TLS_RSA_WITH_RC4_128_SHA forces you to negotiate RC4. TLS has a negotiation mechanism. If you don’t trust pure ML-KEM, you don’t offer it.
Meanwhile, ninety percent of origin servers still negotiate zero PQC key exchange. PQC authentication hasn’t started. The mailing list is arguing about whether to build the exit ramp while most of the traffic hasn’t found the on-ramp. The hybrid-vs-pure debate is about the 2035 destination. Most organizations haven’t met the 2027 starting line.