AI Whitepaper: Parallel Futures – Branching Technology to Protect Society
- pmontwill
- Aug 5
- 10 min read
Updated: Aug 13
We Need to Talk NOW About Branching Our Technology into Two Streams
We are at an inflection point. Artificial intelligence is accelerating innovation faster than any technology in history. The potential benefits are enormous — cures for disease, solutions to climate change, breakthroughs in science and engineering. But the risks are equally enormous, and they are arriving faster than our ability to manage them.
If we do not act now, we risk placing our most critical systems — our electricity grids, banks, food supply, healthcare, and transportation — in the blast radius of inevitable failures, cyberattacks, and unintended consequences. The time for cautious observation has passed. The time for structural change is here.
The Dual Sphere System
Imagine standing on a high ridge, looking down at two valleys. In one, everything moves fast — self-driving cars hum through neon-lit streets, drones dart overhead delivering meals, algorithms optimise every heartbeat of the city. This is the Accelerated Sphere: the branch of technology where innovation runs at full speed, driven by competition, curiosity, and the promise of transformative breakthroughs.
In the other valley, the pace is calmer. Trains run on fixed schedules. The power grid hums steadily. Hospitals are stocked, staffed, and run on systems that haven’t changed in years — not because they can’t, but because they don’t need to. This is the Human-Paced Sphere: the branch designed for resilience and safety, where essential systems are shielded from the vulnerabilities of rapid change.
The Accelerated Sphere is where AI experiments, quantum computing breakthroughs, and frontier biotech happen. It’s high-risk, high-reward. Here, creativity and ambition are unleashed — but the cost is exposure to cyberattacks, unpredictable failures, and unintended consequences. That’s acceptable in consumer apps, entertainment platforms, and research labs, where mistakes can be absorbed.
The Human-Paced Sphere is different. It doesn’t chase the newest model or the fastest processor. It’s built on proven, stable technology that’s deliberately insulated from the internet, large-scale automation, and AI decision-making. Its purpose is to protect the infrastructure we can’t afford to lose — electricity, water, healthcare, food distribution, financial systems, and transportation.
The two spheres aren’t in competition. They are symbiotic. The Accelerated Sphere pushes the boundaries of what’s possible; the Human-Paced Sphere preserves what’s essential. Lessons from the fast lane can be adapted and transferred to the safe lane — but only after rigorous testing, years of validation, and multiple layers of safeguards.
What do we need to talk about?
I have spent 18 months researching this whitepaper, completing four Research Volumes along the way (more details below), and I feel strongly that we need to talk about the following:
Of the two valleys described above, do we want to live in the first, in the second, or in a society that mixes both? My research suggests we are driving into the first valley and leaving the second behind.
Is it wise to place our essential services in the first valley? My research suggests this is a high-risk choice, given how much society depends on their stability.
The Human-Paced Sphere (HPS) is NOT new technology
We already have the capability to build this second sphere today. It’s not about inventing a new technology — it’s about removing the pieces that make existing systems vulnerable. By doing so, we could create a zone of guaranteed stability, immune to the instability of the ever-faster technological race.
The HPS could even be an economy in itself
And perhaps most importantly, building and maintaining the Human-Paced Sphere could become one of the defining industries of the 21st century — a place to redeploy workers displaced by automation into work that is meaningful, secure, and essential to society’s survival.
The Case for Branching
In the technical world, there’s a term for what I’m suggesting: branching of technology. It’s the idea of taking an existing system and splitting it into two streams, each with its own rules, risks, and purposes. In software development, branching allows engineers to preserve the stability of one version while experimenting freely with another. In our current moment, I believe society needs to apply that same thinking — not to code, but to civilisation itself.
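The software analogy can be made concrete. Below is a minimal, illustrative git sketch (repository and branch names are invented for this example): one branch stays stable and slow-moving, while another absorbs risky experimentation.

```shell
# Illustrative only: how version-control branching isolates a stable line
# from experiments. Branch names 'stable' and 'experimental' are invented
# to mirror the two spheres described in this whitepaper.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial system"
git branch stable                  # the proven line: changes only after validation
git checkout -q -b experimental    # the fast line: free to take risks
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "risky new feature"
# Work on 'experimental' never touches 'stable' until deliberately merged.
git branch --list
```

Changes proven on the experimental branch can later be merged into the stable one after review, mirroring the idea that lessons from the fast lane transfer to the safe lane only after validation.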
The first branch would be the technology we already have, but strengthened with rigorous safeguards. It would continue evolving, but under tighter rules designed to limit exposure to the most dangerous capabilities.
The second branch — what I call the Human-Paced Sphere — would be a hardened, deliberately low-risk environment for our most critical systems. This is where our electricity grids, banks, food supply chains, hospitals, and transport networks would live. It wouldn’t need to match the cutting-edge pace of the first branch. In fact, its safety would come from deliberate restraint. We already have the core technologies to build it; we simply need to strip out the elements that make it vulnerable.
European Roll-out
When, in Research Volume 4 (see below), I asked ChatGPT to help me imagine how such a system could be rolled out, an obvious starting point emerged: Europe’s existing drive to safeguard its sovereignty. AI threats overlap with cybersecurity threats — and both overlap with the broader need for national and regional security. That means the same investments, budgets, and infrastructure projects could serve multiple purposes if planned carefully.
This approach could also address another looming issue: the displacement of jobs by AI. If millions of roles are automated away, why not redeploy people to design, build, and maintain this new safe stream? It could become an industry in its own right — and perhaps even a cultural shift, as more people choose the slower, safer, more human-centred side of technology.
I don’t want to halt progress. I want to protect what’s best about our world while still making space for breakthroughs — from curing cancer to advancing clean energy. I want to imagine a future where we value people as much as inventions, where daily routines are satisfying rather than frantic, and where everyone is kept usefully and meaningfully busy. Branching of technology, if done right, could be the first step toward that balance.
Conclusions based on 18 Months of Research
Over the past 18 months, I have conducted deep research into AI’s impact on society, using ChatGPT as a research partner and collaborator. That research became a series of four Research Volumes:
The Birth – The Next Intelligence
Traced the origins of modern AI, beginning with the events around Sam Altman’s firing from OpenAI, and examined how corporate ambition and scientific discovery have combined to form an unstoppable global force.
The Mind of Tomorrow – The Next Intelligence
Explored how AI could reshape the human mind, society, and power structures over the next 100 years — with timelines that recognise how quickly some regions will change, and how slowly others will adapt.
The Future We Cannot Control – The Next Intelligence
Focused on AI’s most dangerous trajectories, dividing them into likely and unlikely risks. Highlighted the unprecedented speed at which small groups — even individuals — can now create powerful, potentially harmful technologies.
Parallel Futures – The Next Intelligence
Proposed a structural solution: the “branching” of technology into two parallel spheres, one for high-speed innovation and one for critical infrastructure protected from AI’s most volatile risks.
These Research Volumes document the rapid evolution of AI, the risks that come with it, and the necessity of creating a safe, parallel track for our most important systems.
Next Steps for the Reader
If the ideas in this whitepaper resonate with you — if you see the need for urgent, structural action to safeguard civilisation’s critical systems — I strongly recommend reading my fourth Research Volume, Parallel Futures – The Next Intelligence.
While this whitepaper summarises the core concept of branching technology into two spheres, Parallel Futures takes the discussion further, offering:
A detailed blueprint for how the Human-Paced Sphere could be designed, governed, and maintained.
Practical integration strategies for merging this initiative with Europe’s current sovereignty, cybersecurity, and infrastructure agendas.
Economic pathways showing how building and running the Human-Paced Sphere could create a sustainable, large-scale employment sector.
Cultural scenarios that imagine a future where people can choose between fast-paced innovation and a stable, secure daily life.
A fictionalised rollout journey that makes the concept tangible, showing how it could evolve from an idea to a global reality.
This research volume is entirely fictional and unrealistic in its overall scenario, but its purpose is to spark the important conversations we need to have now. Many of the concepts it presents — such as safeguarding our essential services — are highly realistic.
Others, like creating a new Human-Paced Sphere (HPS) world for people to live in, are less likely, as they would depend on consumers both demanding and paying for it, given that companies operate in consumer-driven markets. However, if mass unemployment from AI leads to widespread loss of purpose, public demand may grow for a return to a slower, more familiar world — one that restores the stability and human connection people feel they’ve lost.
Still, by starting the conversation today, we can be better prepared to respond to unexpected developments — especially those driven by rogue AI implementers.
The aim of Parallel Futures is not only to warn, but to inspire. It presents a vision of a balanced future — one where technological progress and human security grow together, rather than in opposition.
Reading it will give you the depth, context, and strategic detail that a whitepaper cannot fully cover, while also sparking ideas for how you, your organisation, or your government could begin implementing this model today.
Critical Systems for the Human-Paced Sphere
To protect society from systemic collapse in the event of AI misuse, cyberattacks, or catastrophic technical failure, the Human-Paced Sphere must safeguard the following systems:
Energy Infrastructure
Electricity generation, transmission, and distribution
Gas pipelines and storage facilities
Renewable energy systems and battery storage
Financial Systems
Banking networks and payment systems
Central bank operations and monetary controls
Stock exchanges and clearing houses
Food & Water Supply
Agricultural production and distribution
Food processing facilities
National and regional water treatment and distribution systems
Healthcare Systems
Hospitals, clinics, and emergency services
Medical supply chains (including pharmaceuticals and equipment)
National health records and critical patient databases
Transport & Logistics
Rail, road, air, and maritime networks
Public transport systems
Supply chain hubs and distribution centres
Communications Infrastructure
Telecommunications networks (landline, mobile, satellite)
Emergency communication systems
Broadcasting infrastructure for public information
Emergency & Security Services
Police, fire, and rescue networks
Civil defence coordination
Disaster response centres
Government & Civic Systems
Legislative and judicial infrastructure
Identity and citizenship databases
Electoral systems and processes
Background Research & Conclusions
This whitepaper is the product of an intensive 18-month research project into the societal implications of artificial intelligence. Over that period, I produced four Research Volumes, each building upon the insights and unanswered questions of the last. Together, they chart a progression from observation, to projection, to risk assessment, and finally, to structural solutions.
Research Volume 1 – The Birth – The Next Intelligence
I began by investigating the origins of the current AI wave, focusing on the period around Sam Altman’s dismissal and reinstatement at OpenAI. My goal was to understand how modern AI emerged from a combination of open research and commercial imperatives — and how that mixture created a force too powerful to stop.
Key takeaway: The trajectory of AI development is shaped by both economic incentives and the curiosity of scientists. Once momentum builds in both areas, the pace of progress is near impossible to slow.
One revelation that truly struck me was how OpenAI’s safety teams were handled. When the operations teams alerted the safety team that the company was developing something far riskier than intended, the safety team took action, only to be replaced by another team aligned with the company’s new corporate goals. This suggests that “safety” will continue to be discussed publicly to create the impression of control, while in practice remaining ineffective — and perhaps giving the public a dangerous false sense of security.
Research Volume 2 – The Mind of Tomorrow – The Next Intelligence
With a foundation in AI’s origins, I shifted to projecting its influence over the next century. I asked ChatGPT to model a future where technological adoption rates varied by region and sector, acknowledging that some societies evolve quickly while others change slowly. The result was a 100-year roadmap broken into phases, each showing how AI could reshape cognition, identity, and governance.
Key takeaway: The spread of AI will be uneven, but its influence will be total. Even slow adopters will eventually be transformed by its reach into education, healthcare, governance, and personal life.
Research Volume 3 – The Future We Cannot Control – The Next Intelligence
I then turned my focus to risks, asking ChatGPT to separate dangerous societal changes into likely and unlikely scenarios. This stage revealed that dangerous technologies could now be developed not just by nations or corporations, but by small groups or even individuals — in weeks rather than decades.
Key takeaway: The barrier to creating disruptive or harmful technology has collapsed. We can no longer assume that dangerous innovations require massive resources or long lead times.
In the past, breakthroughs might occur once in a decade. Soon, they will arrive on a weekly — or even daily — basis. What truly shocked me was realising that a single teenager, anywhere in the world, could create a technology in just hours that might have a profound and destabilising effect on society.
Research Volume 4 – Parallel Futures – The Next Intelligence
Finally, I sought solutions. My conclusion was that we must branch technology into two streams:
The Accelerated Sphere, where innovation moves at full speed but is closely monitored for risks.
The Human-Paced Sphere, where critical infrastructure is insulated from rapid change and AI-driven volatility.
This model would protect our most vital systems while still enabling breakthroughs in less critical domains.
Key takeaway: We already have the technology to build the Human-Paced Sphere today. The challenge is political will, global coordination, and reframing security budgets to include AI risk mitigation.
Overall Conclusions After Four Stages
AI progress is unstoppable in the short term — but its risks can be managed if addressed structurally, not reactively.
The danger is in the speed: disruptive technologies can now appear suddenly, from unexpected sources.
A dual-sphere approach is viable today using existing technologies, governance structures, and funding streams.
Public and political awareness is lagging — policymakers and citizens alike must be brought into the discussion urgently.
The journey from The Birth to Parallel Futures was not simply a research project — it was a process of moving from observation to action. This whitepaper represents the actionable blueprint that emerged from that work.