TL;DR

Six researchers — including Fulkerson Prize winners Ken-ichi Kawarabayashi and Mikkel Thorup, plus graph theory heavyweights Mohar and Thomassen — just published a near-linear time algorithm for 4-coloring planar graphs. The previous best, from Robertson et al. in 1996, ran in O(n²). This new result achieves O(n log n). The key breakthrough: they proved that every planar triangulation contains linearly many reducible configurations at once, not just one. That’s a genuine generalization of the Four Color Theorem itself, and it couldn’t have been achieved using any previously known proof of the theorem.

Why the Four Color Theorem Still Matters

If you studied CS or math at any point, you’ve probably run into this one. The Four Color Theorem says: any map (or equivalently, any planar graph) can be colored with at most four colors such that no two adjacent regions share the same color. It’s simple to state, maddening to prove, and has a wild history.

Francis Guthrie noticed the pattern in 1852 while coloring a map of English counties. Alfred Kempe published a “proof” in 1879. Percy Heawood found the bug 11 years later. The conjecture then sat open for nearly a century.

In 1976, Kenneth Appel and Wolfgang Haken finally proved it — by exhaustive computer verification of 1,476 reducible configurations. It was the first major theorem proved by computer, and mathematicians spent decades arguing over whether it counted as a real proof.

In 1996, Robertson, Sanders, Seymour, and Thomas simplified the proof (down to 633 configurations) and produced a quadratic-time coloring algorithm: given a planar graph with n vertices, it finds a valid 4-coloring in O(n²) time. That algorithm has been the state of the art ever since.

Until last week.

What the New Paper Actually Does

The paper is “The Four Color Theorem with Linearly Many Reducible Configurations and Near-Linear Time Coloring” by Yuta Inoue, Ken-ichi Kawarabayashi, Atsuyuki Miyashita, Bojan Mohar, Carsten Thomassen, and Mikkel Thorup. Posted to arXiv on March 25, 2026.

The headline: O(n log n) time to 4-color any planar graph. Down from O(n²).

But the algorithm isn’t the real contribution. The real contribution is a structural theorem about planar graphs that’s strictly stronger than the Four Color Theorem itself.

The Old Approach and Why It’s Quadratic

To understand the improvement, you need to understand why the Robertson et al. algorithm is O(n²).

The classic proof strategy works by contradiction. Assume there exists a minimal planar graph that can’t be 4-colored. Then show it must contain at least one “reducible configuration” — a local subgraph pattern that can be simplified while preserving the coloring problem. Simplify it, solve the smaller problem, then extend the coloring back. Since the graph was supposed to be minimal, you have a contradiction.

The algorithm version: scan the graph, find one reducible configuration, reduce it (removing a constant number of vertices), and recurse. Each scan takes O(n) time. Each reduction removes O(1) vertices. So you need O(n) reductions, each costing O(n) to find. Total: O(n) × O(n) = O(n²).

The bottleneck is clear: you find one configuration, fix it, then start scanning all over again.
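
As a back-of-the-envelope cost model (my illustration, not the authors' code), the sequential loop looks like this — each round pays for a full scan but shrinks the graph by only a constant:

```python
def sequential_scan_work(n):
    """Toy cost model of the classic find-one-and-reduce loop.

    Each round scans the whole remaining graph (cost ~ current size)
    to locate a single reducible configuration, and the reduction
    then deletes only a constant number of vertices (here: 1).
    """
    work = 0
    while n > 0:
        work += n  # O(n) scan to find one reducible configuration
        n -= 1     # reduction removes O(1) vertices
    return work    # 1 + 2 + ... + n = n(n+1)/2, i.e. Theta(n^2)


print(sequential_scan_work(1000))  # 500500 ~ n^2 / 2
```

The arithmetic series is the whole story: linearly many rounds times a linear scan per round is quadratic, no matter how cleverly each individual reduction is implemented.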

The New Insight: Linearly Many Configurations at Once

Here’s where Kawarabayashi, Thorup, and co-authors break through. They prove that every planar triangulation doesn’t just contain one reducible configuration — it contains linearly many pairwise non-touching reducible configurations simultaneously, or, failing that, linearly many short obstructing cycles that themselves permit effective reductions.

“Linearly many” means Ω(n). A constant fraction of the graph is reducible at any given moment.

This changes the algorithm’s structure completely. Instead of:

  1. Scan the whole graph → find one reduction → apply it → repeat

You get:

  1. Scan the whole graph → find Ω(n) reductions → apply them all at once → repeat

Each round still costs O(n) for the scan, but now each round removes a constant fraction of the vertices instead of a constant number. The number of rounds drops from O(n) to O(log n). Total: O(n) × O(log n) = O(n log n).

It’s the same conceptual leap as going from sequential deletion to batch processing. And the reason nobody did it before is that proving the “linearly many” part required genuinely new mathematics.
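
The same toy cost model (again, an illustration rather than the paper's implementation) shows why batching changes the exponent. If each round removes a constant fraction of the remaining vertices — the fraction below is arbitrary; the real constant comes from the structural theorem — the round count becomes logarithmic:

```python
def batch_rounds(n, fraction=0.5):
    """Toy model of the batched loop: each round applies Omega(n)
    non-touching reductions at once, removing a constant fraction
    of the remaining vertices instead of a constant number."""
    rounds = 0
    while n > 0:
        n -= max(1, int(fraction * n))  # apply all reductions in one batch
        rounds += 1
    return rounds  # ~ log(n) rounds; with an O(n) scan per round,
                   # total work is O(n log n)


print(batch_rounds(1024))   # 11 rounds for n = 1024
print(batch_rounds(2**20))  # 21 rounds for n ~ 10^6
```

Compare a million vertices: roughly a million rounds sequentially versus about twenty in the batched model. That gap is exactly the O(n²) → O(n log n) improvement.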

How They Proved It: Curvature and Flat Regions

The classic proof technique for finding reducible configurations is called the discharging method. Think of it like a budget. You assign each vertex and face a “charge” based on its degree. Euler’s formula guarantees that the total charge is positive — with the standard vertex assignment of 6 − deg(v) in a triangulation, it sums to exactly 12. You then redistribute (discharge) this charge according to local rules. Vertices that end up with positive charge after discharging must be part of reducible configurations.

The problem: positive-curvature vertices (those with degree 5 or less) are easy to handle. But most of a large planar graph is “flat” — vertices of degree 6 arranged in a hexagonal-ish grid, with zero curvature. The classic approach punts on these regions entirely. It finds the one place where curvature is positive, does one reduction, and repeats.
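
To make the budget concrete, here is a small sketch (my illustration, not code from the paper) of the standard initial assignment 6 − deg(v). The octahedron below is a planar triangulation where every vertex has degree 4, so every vertex carries positive charge and the total is the 12 that Euler's formula guarantees; a degree-6 vertex, by contrast, would carry charge 6 − 6 = 0 — the “flat” case:

```python
def initial_charges(adjacency):
    """Standard initial charge in the discharging method: 6 - deg(v)."""
    return {v: 6 - len(nbrs) for v, nbrs in adjacency.items()}


# The octahedron: a planar triangulation in which every vertex has degree 4
# (vertices 0 and 5, 1 and 3, 2 and 4 are the antipodal non-adjacent pairs).
octahedron = {
    0: [1, 2, 3, 4],
    5: [1, 2, 3, 4],
    1: [0, 2, 4, 5],
    3: [0, 2, 4, 5],
    2: [0, 1, 3, 5],
    4: [0, 1, 3, 5],
}

charges = initial_charges(octahedron)
print(charges)                # every vertex has charge 6 - 4 = 2
print(sum(charges.values()))  # 12, as Euler's formula guarantees
# A degree-6 vertex would get 6 - 6 = 0: zero "curvature", the flat case.
```

In a large triangulation the total of 12 is fixed while the vertex count grows, which is precisely why almost everything is flat and why the classic method can only point at a few special spots.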

The new paper extends the discharging method to work in the flat regions too. They show that even in these zero-curvature areas, you can identify reducible configurations or short obstructing cycles (length at most 5) that permit reductions. And critically, you can find linearly many of them that don’t interfere with each other.

All identified configurations are D-reducible — the strongest form of reducibility, where the coloring of the reduced graph can be extended to the original graph by a simple local rule. This matters for the algorithm’s efficiency because D-reducible configurations can be processed in batch without the complex backtracking that weaker forms of reducibility require.

The Author Lineup

This isn’t a random arXiv drop from unknowns. The author list reads like a greatest-hits of graph algorithms and structural graph theory:

  • Mikkel Thorup (University of Copenhagen) — Fulkerson Prize 2021, pioneer in hashing, shortest paths, and deterministic graph algorithms. His work on deterministic edge connectivity with Kawarabayashi was described by the AMS as introducing “powerful and impactful new ideas.”

  • Ken-ichi Kawarabayashi (National Institute of Informatics, Tokyo) — co-winner of the Fulkerson Prize with Thorup, extensive work in graph minors and topological graph theory.

  • Carsten Thomassen (Technical University of Denmark) — one of the most prolific graph theorists alive, known for his work on graph coloring and flows. Proved that every planar graph is 5-list-colorable.

  • Bojan Mohar (Simon Fraser University / University of Ljubljana) — expert in topological graph theory, graph embedding, and spectral graph theory.

  • Yuta Inoue and Atsuyuki Miyashita — researchers at NII Tokyo working with Kawarabayashi.

Kawarabayashi and Thorup previously brought deterministic edge connectivity down to near-linear time. Their approach here follows a similar pattern: take a classical problem with a polynomial-time algorithm that everyone assumed was close to optimal, then find structural depth that yields a better bound.

What This Means in Practice

Let’s be honest: you’re probably not 4-coloring planar graphs in production code. But this result matters beyond the specific algorithm for a few reasons.

For algorithm designers: The technique of finding linearly many independent reductions simultaneously — turning an “additive” algorithm into a “multiplicative” one — is a general strategy. If you have any algorithm that works by repeatedly finding and fixing one local structure, ask yourself: can I find many of them at once? This paper shows the answer can be yes even when the problem is famously hard.

For complexity theorists: The result proves that the quadratic barrier wasn’t inherent to the problem. The O(n²) running time was an artifact of the proof technique, not of the problem. And the way they broke it — by strengthening the structural theorem rather than cleverly engineering the algorithm — suggests similar barriers in other problems might also be proof artifacts.

For anyone who appreciates elegant CS: The four color conjecture is 174 years old. The first computer proof is 50 years old. The best coloring algorithm was 30 years old. And it just got meaningfully faster, not through hardware or heuristics, but through a deeper understanding of the combinatorial structure. Graph theory is having a moment — just last week, Donald Knuth published a paper after Claude solved one of his open graph problems.

FAQ

Is this a new proof of the Four Color Theorem?

It’s a generalization of the 4CT, which implies the classic theorem as a special case. The classic theorem says every planar triangulation has at least one reducible configuration. This paper says it has linearly many non-touching ones. Finding one is a corollary of finding many.

Does the algorithm have practical applications?

Direct applications of 4-coloring planar graphs include register allocation in compilers (for certain architectures), frequency assignment in wireless networks, and scheduling problems. The near-linear time bound makes these feasible at scales where O(n²) was prohibitive. But the bigger impact is the proof technique, not the specific algorithm.

Can I run this myself?

The authors mention source code on GitHub. The configurations are D-reducible, which means the reduction step is straightforward to implement — no backtracking search required.

Why couldn’t the old proofs achieve near-linear time?

The Appel-Haken and Robertson et al. proofs only guarantee one reducible configuration per graph. Removing it reduces the graph by O(1) vertices, so you need O(n) rounds of find-and-reduce. Each round scans the whole graph. That’s inherently quadratic. You need the “linearly many” structural result to break out.

What’s next — can we get linear time?

That’s the obvious open question. The O(log n) factor comes from the number of rounds. If you could do everything in a single pass — find all configurations and color the graph bottom-up — you might reach O(n). But that would require an even stronger structural result, and it’s not clear whether it’s achievable.

Bottom Line

A 30-year-old quadratic barrier, broken by six people who understood the structure of planar graphs deeply enough to see what everyone else missed: the reductions aren’t scattered — they’re everywhere. The math is hard, the running time improvement is real, and the technique of “find linearly many independent local fixes at once” is one I expect to see applied to other problems. Keep an eye on these authors.