On November 30, 2022, a simple social media post invited people to interact with an AI chatbot. It marked the day OpenAI released ChatGPT to the public. Within five days, the chatbot gained one million users, and within two months it crossed 100 million, making it the fastest-adopted consumer application in history.
Today, roughly 800 million people use the platform every week. That means one tenth of the global population now regularly consults an artificial intelligence for help with their work, their questions, or their daily lives.
Sundar Pichai, the chief executive of Google’s parent company Alphabet, calls artificial intelligence the most profound technology humanity has ever worked on. When he warns that people will need to adapt, he is not exaggerating, but he is also not telling us the whole story. The public conversation about AI focuses on jobs and automation. However, something bigger is happening around AI. That’s what we are going to discuss today.
Here is the simplest way to understand what has changed: for the first time in history, machines can now process information, generate ideas and solve problems faster than any human or human institution possibly could.
With proper inputs, an AI system can read all your documents, analyse every issue you are dealing with and draft responses to your queries, while you are having breakfast and preparing for work in the morning. This is not just a faster tool. It is a completely different kind of capability, and whoever controls it gains an extraordinary advantage.
Genesis
On November 24, the Trump White House issued an executive order launching something called the Genesis Mission. The order itself compares the initiative, in urgency and ambition, to the Manhattan Project. If you recall, that was the secret programme that built the atomic bomb during World War II.
Think about what that comparison means. The American government is now treating artificial intelligence the way it once treated nuclear weapons: as a technology so powerful that national security depends on who masters it first. The Genesis Mission brings together national laboratories, top universities, and national security facilities into a single coordinated effort. Its stated goal is to achieve what the order calls global technology dominance.
This is not just another policy announcement or funding programme. It is an official admission that AI has crossed a line from being a useful personal and business tool to an instrument of national security.
To understand why this matters, consider how power has traditionally worked. For the past two centuries, if you wanted to build something significant, you needed money, scale, connections and the ability to navigate regulations. Big organisations had natural advantages. They could hire more people, invest more capital and lobby more effectively.
The playing field was tilted toward those who already held power.
AI changes this equation in ways we are only beginning to understand. What matters now is not how many employees you have or how much capital you control. What matters is how quickly you can train AI systems, how much quality data you can feed them, and how effectively you can deploy them.
A small team with the right AI capabilities can now outperform organisations a hundred times their size. Conversely, a large institution that fails to integrate AI effectively may find its traditional advantages evaporating.
The real disruption is not about which jobs will be automated. It is about what happens when “thinking” becomes cheap while the ability to act on that thinking remains expensive and scarce. Your competition is no longer another person or even another company. Your competition is a system that learns and improves faster than any human organisation can keep up with.
The problem
Yet within this story of accelerating power lies an unexpected vulnerability. In October, researchers at Texas A&M University, the University of Texas at Austin and Purdue University published a study with troubling findings. They discovered that when AI systems are continuously trained on low-quality internet content, the systems actually become worse.
The researchers called this phenomenon brain rot, borrowing a term usually applied to people who spend too much time consuming shallow online content. The most alarming finding came when the researchers tried to fix the problem. Even after retraining the damaged systems on high-quality, carefully curated data, the AI models could not fully recover their original capabilities.
For businesses, these findings create an immediate problem. Companies are rushing to integrate AI into their operations, using it for customer service, content creation, data analysis and decision support. Yet if the AI systems businesses rely on are themselves degrading, then companies face a kind of risk they have never had to manage before.
The question is no longer just whether your AI will occasionally make mistakes or produce nonsense (what the industry calls hallucinations). The question is whether your AI runs the risk of losing its ability to “think” clearly over time.
The Genesis Mission should be understood in this context. The directive explicitly requires the US government to move away from training AI on the open internet. Instead, it shifts to federal scientific datasets built from decades of government-funded research in areas like biotechnology, nuclear energy, and materials science.
We are watching the emergence of two different kinds of artificial intelligence. On one side are the commercial systems trained on whatever content they can scrape from the internet, systems that may be slowly deteriorating as they continue to train on that content, even as new model releases improve capabilities.
On the other side are government and elite systems trained on carefully protected, high-quality data. The gap between these two tiers may widen considerably in the years ahead.
Implications
For companies, the implications are uncomfortable. Treating AI as a simple subscription service, something you buy from a vendor and plug into your operations, may be a risk. Firms that depend on general-purpose AI systems trained on the open internet may find the underlying quality degrading over time.
Companies that want to remain competitive may need to invest in their own data infrastructure, build their own carefully curated training materials, and continuously test their AI systems for signs of decline.
The deeper question, which neither government programmes nor corporate strategies fully address, concerns ordinary people caught in the middle of this transition. Pichai is right that people will need to adapt. But adapting is difficult when the ground keeps shifting beneath your feet.
The reality is that the systems that help you think are themselves changing faster than you can learn to use them. As a result, adaptation becomes less a skill you can acquire and more a posture you must constantly maintain.
You cannot win this race by simply working harder or learning more. The systems you are competing against can work harder and learn more than any human ever could. I would suggest that the only viable strategy is to learn how to work with these systems.
This means learning to integrate them into how you think and operate. The goal is to multiply your own capabilities rather than trying to compete directly against capabilities you cannot match.
The public conversation frames AI primarily as a threat to jobs. That framing misses the bigger picture. AI is more like a gravitational force, pulling value and opportunity toward those who learn to use it effectively and away from those who do not.
The people who will thrive in any profession are those who figure out how to make AI an extension of their own abilities. Those who resist or ignore it will find their skills worth progressively less, regardless of how hard they work.
This will not show up only in unemployment statistics. It will also show up as a widening gap in outcomes, in earnings, in opportunities, in quality of life, a gap that our existing systems of education, employment, and social support were never designed to handle.
Three years after ChatGPT, we find ourselves in a strange in-between period. The old ways of organising work and power have not collapsed, but they are visibly straining. The new arrangements are not yet clear, but their outlines are starting to emerge.
ChatGPT was released as what OpenAI called a research preview. The invitation was simply to try it out, essentially an experiment to see how people would use it.
Three years later, it has become the interface through which hundreds of millions of people encounter artificial thinking for the first time. What began as an experiment has become infrastructure.
In the future it is likely to become a national asset for countries that properly harness it. This is a conversation we have yet to have here in T&T.
Ian Narine is a financial consultant who believes that intelligence isn’t artificial. Send your comments to ian@iannarine.com
