…is a little chaotic energy.

Generative AI can already generate ideas, patterns and even strategies that surprise us. What it still lacks is the chaotic energy of a human mind living in a messy world. The real task for leaders now is deciding which work belongs to machines, which to people, and how to harness that chaos instead of optimising it away.
The thing AI still doesn’t have
I remember listening to an interview with finance professor Aswath Damodaran on the Prof G Pod, where he said his best ideas came from a 30-minute walk with no phone and no agenda. Just him, walking, letting his mind wander. That line lodged itself in my head. When you give your brain half an hour of unstructured time, loose thoughts start bumping into each other until something clicks.
That story sticks because it captures something deeply human: we don’t just think; we drift (and sometimes we even fall asleep). We mix a half-remembered book, a client conversation, a childhood memory and an oddly specific TikTok into something that suddenly feels like an answer.
AI can generate thoughts too – often bland, but sometimes, when you’re lucky, surprisingly sharp. Anyone who has asked a model for campaign ideas or market analysis knows it can produce a passable first pass and, when the stars align, an alarmingly strong one. The secret weapon, of course, is that AI is effectively an infinite machine: you can keep asking it for variations long after a human brain would have tapped out.
Even so, something is still missing.
Human brains (at least mine) run on what I call chaotic energy: the entropy-driven way we move between emotions, memories, cultural references and lived experience. As long as we’re living organisms in a universe that tends towards disorder, that chaos is baked in. It’s the feature, not the bug.
Current AI systems are trained to do almost the opposite. They minimise surprise (unless they hallucinate). They reduce the world into probability distributions and stay inside the guardrails that AI companies set. Maybe one day AGI systems will have enough compute, feedback and sensory input to approximate that chaos and connect ideas like we do. Right now, we’re working with powerful pattern-matchers that still need humans to aim, judge and take responsibility.
For simplicity, let’s call all the fragments, drafts and options that AI produces “dots” – like stars scattered across a night sky. Each one is a small point of light; it only becomes a pattern when someone decides which ones to join.
That raises a practical question for every team using AI: what do we let the machines do, and what do we reserve for this thing I’ve started to call human chaos?
Right now, AI is very good at filling the sky with stars. The real work is drawing that one deliberate line through the dots – the line that turns stars into constellations, paragraphs into stories and scattered assets into a coherent campaign. And ladies and gents, stars are great, but we’re in the business of constellations.
It’s easy to spot stars, but hard to see constellations
In marketing, PR and comms, a lot of what we used to give juniors already looks like machine food:
- Turning transcripts into meeting notes
- Clipping articles from online sources and classifying them
- Generating first-draft copy and subject lines
- Compiling longlists of media, influencers or events
- Reformatting ideas into decks, FAQs and briefs
Generative AI devours this kind of work. It is tireless, fast and unburdened by boredom. Ask nicely and it will produce more options than any of us can seriously review in a sitting.
So we end up drowning in dots:
- Ten routes for a blog post
- Fifty headline options
- Pages of “insights” from the same dataset
The scarce resource is no longer ideas. It is attention and judgement – where to look, which dots to trust and how to combine them into something that matters.
Humans don’t just make dots; we choose which ones count
It’s tempting to frame this as “AI makes dots, humans connect them”. The truth is messier.
Models already connect patterns all the time. They summarise, cluster, analogise, recommend. Sometimes they do it better than a tired strategist at 11.47pm.
Where they still struggle:
- Value judgements: deciding what should happen, not just what is likely
- Context outside the training data: local politics, office dynamics, one client’s risk appetite
- Long-term narrative: holding a brand, a market and a set of stakeholders in mind over years, not prompts
- Accountability: carrying the emotional and professional cost of being wrong
Humans don’t win because our thoughts are always better. We win because we carry the messy context that the model can’t see and we live with the consequences.
So our job description shifts:
- AI: explore the possibility space, surface patterns, draft options
- Humans: frame the problem, decide what “good” means, and for those of us lucky enough to be powered by chaotic energy, channel it into ideas we care enough to defend.
That split sounds simple. It is not how most organisations are set up.
Our systems trained people to be dot factories
For decades, formal education and corporate training have focused on skills and knowledge:
- Learn the frameworks
- Memorise the models
- Follow the process
Schools reward correct answers, not strange questions.
Corporate training mirrors that logic. You are valued for getting from A to B reliably, on time, and with as little deviation from the norm as possible.
Agencies are no different. Junior staff members are typically trained to produce clean work, fast:
- Correct clippings
- Clean copy
- Templated reports
Whether they ever learn to ask the uncomfortable question or reframe the brief is left to “talent” (and balls), chance mentors and late-night panic.
Now AI shows up and happily does a big chunk of that work. We’re left with a ladder that’s no longer fit for purpose: fewer traditional entry-level tasks, but the same expectation that people will somehow emerge as constellation-makers in a few years. Talent has to be fostered and balls need room to grow.
We’ve trained people to be excellent dot factories at the exact moment machines became even better at making dots. The real opportunity now is in that uncomfortable space in between – where humans decide which dots count, and why.
In a follow-up piece, I’ll get practical: how we can redesign delegation, frameworks and tools so that AI does the heavy lifting, and human chaotic energy does its best work instead of burning out or getting optimised away.

