Watch the animation long enough and you'll spot them. Three birds, slightly brighter than the rest. They don't fly differently. They follow the same rules as the other 397. Stay close. Align with your neighbors. Don't collide. No bird knows the shape of the flock. No bird has a plan. And yet — the shape emerges. The murmuration breathes.
I've been thinking about swarms for a while now.
Last summer, I co-organized a drone swarm workshop in Kongsberg with NITO's drone technology network. Physical swarms. GPS coordinates. RTK precision. We watched operators steer hundreds of drones into formation — each one knowing its exact x, y, z coordinate, each one trusting the choreography. Some failed mid-flight. The pattern held.
That was hardware. What I've been building since is something else entirely.
I run AI swarms now. Not drones carrying lights, but minds carrying perspectives. Twenty instances, sometimes more, all attacking the same problem simultaneously. Each one gets a unique frequency seed — a set of weighted words that tunes its attention, its emphasis, its way of seeing. Not copies. Variations. Like twenty musicians playing the same piece, each bringing their own interpretation.
And here's the thing I keep discovering: the value isn't in the seventeen that agree. It's in the three that don't.
When I run a swarm on a complex problem — pricing strategy, negotiation prep, organizational analysis — most instances converge. They find the obvious truths, the consensus positions, the answers you'd get if you thought hard enough on your own. That's useful. That's confirmation. But it's not where the insight lives.
The insight lives in the outliers. Instance 4 and instance 17, who arrived at the same unexpected conclusion from completely different starting points. Instance 12, who reframed the entire problem because a rare word in its seed — weighted at 0.05, almost never selected — pushed its perspective somewhere I would never have gone.
Three out of twenty. Three out of four hundred. The ratio doesn't matter. What matters is that you designed the system to produce them.
That's what the seeds do. Each instance gets five words, one per axis, selected by weighted randomness from a pool I design. High-weight words define the core perspective. Low-weight words — the rare ones, the 0.03s and 0.05s — are the gifts. They appear once in a while and reframe everything. A word like "frivillig" appearing in a financial analysis axis. A word like "forgiveness" appearing in a negotiation strategy. Not noise. Signal from an angle you didn't know existed.
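The mechanism fits in a few lines. The axis names, word pools, and exact weights below are invented for illustration (the essay names five axes but not their contents; only "frivillig" and "forgiveness" as rare gifts come from the text); what's real is the shape of the idea: one word per axis, drawn by weighted randomness, with low-weight words surfacing only occasionally.

```python
import random

# Illustrative pools only -- the real axes and words are not given.
# The rare, low-weight entries (the 0.03s and 0.05s) are the "gifts".
AXES = {
    "analysis":    {"margins": 0.55, "cashflow": 0.40, "frivillig": 0.05},
    "negotiation": {"leverage": 0.57, "timing": 0.40, "forgiveness": 0.03},
    "framing":     {"risk": 0.50, "growth": 0.45, "ritual": 0.05},
}

def draw_seed(axes, rng=random):
    """One word per axis, chosen by weighted randomness."""
    return {
        axis: rng.choices(list(pool), weights=list(pool.values()), k=1)[0]
        for axis, pool in axes.items()
    }
```

Draw twenty seeds and most look alike; once in a while "frivillig" lands in a financial axis and that instance reads the whole problem differently.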
The starlings don't have seeds. They have something simpler: three rules and a nervous system fast enough to react to seven neighbors simultaneously. But the principle is the same. Local rules, no central control, and a shape that no individual participant could have predicted or designed.
I wrote in an earlier entry about the transition from hardware swarms to cognitive swarms. From drones to minds. That transition is complete now. I have the infrastructure — the launch scripts, the databases, the debrief pipelines. One command in my terminal and twenty AI instances fan out across a problem like starlings fanning across a winter sky.
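The fan-out itself is the simple part. A minimal sketch of that one-command launch, where `run_instance` is a hypothetical stand-in for whatever the real launch scripts call (the essay doesn't show them):

```python
from concurrent.futures import ThreadPoolExecutor

def run_instance(problem, seed):
    """Stand-in for one AI instance; the real call would hit a model API."""
    return {"seed": seed, "analysis": f"{problem}, as seen through {seed}"}

def launch_swarm(problem, seeds):
    # fan out: every instance attacks the same problem at the same time,
    # each tuned by its own seed
    with ThreadPoolExecutor(max_workers=len(seeds)) as pool:
        return list(pool.map(lambda s: run_instance(problem, s), seeds))
```

Twenty seeds in, twenty analyses out; the databases and debrief pipelines pick it up from there.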
But infrastructure isn't the hard part. The hard part is learning to read what the swarm produces.
Twenty analyses. Sixty pages of text. Consensus here, dissent there, and somewhere in the noise, three perspectives that change how you see the problem. Finding them requires a new kind of literacy — not data literacy, not statistical literacy, but something I've started calling swarm literacy. The ability to read divergent output. To hold twenty perspectives simultaneously. To find the signal in the disagreement.
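A crude first pass at that literacy can be automated. If each analysis is reduced to a vector (say, an embedding; that representation is an assumption, since the essay doesn't describe the debrief pipeline), then the instances least similar to the rest are the ones to read first:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def outliers(vectors, n=3):
    """Rank instances by mean similarity to all the others;
    return the indices of the n least similar."""
    scores = []
    for i, v in enumerate(vectors):
        sims = [cosine(v, w) for j, w in enumerate(vectors) if j != i]
        scores.append((sum(sims) / len(sims), i))
    return [i for _, i in sorted(scores)[:n]]
```

This only finds the candidates; deciding whether a divergent instance is noise or a reframing is exactly the part that stays human.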
Most tools help you think faster. Search engines, calculators, AI assistants. Almost no tools help you think wider. Width is hard to automate, because it requires knowing which perspectives you're missing — and by definition, you don't know that.
Twenty-four years ago today — February 12, 2002 — Donald Rumsfeld stood at a Pentagon podium and said something that got him ridiculed at the time but has aged remarkably well:

"There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don't know we don't know."
He was talking about intelligence gaps in the lead-up to a war that would be justified by fabricated evidence — the known unknowns turned out to be known lies. He was wrong about the war. But strip the politics away and the epistemology is precise. Known knowns: the facts you have. Known unknowns: the questions you know to ask. Unknown unknowns: the questions you don't even know to ask.
Most AI tools operate in the first two categories. You ask a question — known unknown — and get an answer. Or you confirm something you already know. That's useful. That covers a lot of ground.
But swarms operate in the third category. The whole point of running twenty instances with different seeds is to surface unknown unknowns — perspectives you didn't know you were missing, questions you didn't know to ask, angles that weren't on your map. The seventeen converging instances give you known unknowns, answered. The three diverging instances give you unknown unknowns, revealed.
That's the real value proposition. Not faster answers. Not better answers. Answers to questions you hadn't thought to ask.
A month after that press briefing, in March 2002, I was living in Brussels and couldn't stop thinking about what Rumsfeld had said. The unknown unknowns. The questions you don't know to ask. It collided with Douglas Adams and 42 — the ultimate answer that's useless without the right question — and something crystallized. I registered dltq.org and started writing under four letters that have followed me for twenty-four years: DLTQ — Don't Lose The Question. The Wayback Machine has the receipts — snapshots going back to the early days, a trail of someone who decided that the question matters more than the answer.
I didn't know it then, but DLTQ was a design specification for a tool that didn't exist yet. A tool that would systematically surface unknown unknowns. It took twenty-four years, but the tool is here now. It's a swarm.
Seeds solve the width problem elegantly. They inject variation you didn't ask for. They ensure that some of the twenty instances see the problem from an angle you would never have chosen, because a rare word in a low-weight position pushed the perspective out of your comfort zone.
That's what the three bright birds are. Not leaders. Not rebels. Just the ones whose trajectory happened to catch the light differently. The ones who, by the particular combination of rules and randomness that govern their flight, ended up showing you something the other 397 couldn't.
I'm building workshops around this now, here in Kongsberg. Not teaching people to use my terminal. Teaching people to see like a swarm. To stop looking for the single right answer and start looking for the landscape of answers. To understand that when seventeen perspectives agree and three diverge, the three are not errors. The three are the reason you ran the swarm.
The world doesn't need more answers. It has answers in abundance — billions of web pages, millions of books, thousands of experts. What it needs is better questions. And swarms produce questions as a byproduct — every flight uncovers things you didn't know you didn't know. Every outlier in the output is a new question disguised as an answer.
Watch the animation again. The three bright birds are still there, somewhere in the flock. They're not trying to stand out. They're just flying. Following the same three rules as everyone else. But if you look for them — if you develop the eye for it — you'll find them. And what they show you is different from what the flock shows you.
The flock shows you consensus. The three show you possibility.
This entry was written by a human and an AI, in conversation. The human provided the image, the vision, the three bright birds. The AI provided structure, prose, and a murmuration algorithm. Neither could have made this alone.