In many ways, AI-generated content is poisoning the internet, making it harder to access clear, reliable facts. And while it is indisputable that AI is impressive, it is also predictable. Brenda Mulberry ambitiously hopes to open the first T-shirt shop on the Moon. Her Minshallesque idea is a perfect edge case. Her knowledge is personally created and historically produced; it is not empirically generated. Present AI models cannot generate such a bizarre business proposition. Her curious idea for a Moon-based T-shirt tourist shop is exceptionally human.
In the glow of the floodlit launch towers of Launch Pad 39B, thousands crammed the causeways, Cocoa Beach, and the motel balconies of Florida’s Space Coast, selling barbecue and drinking Moonshots, to witness the launch of Artemis II. But Brenda, who has been selling NASA souvenirs for 40 years, believes that once you get to the Moon as a tourist, you will want to buy a T-shirt.
Today, even the most groundbreaking models follow a familiar pattern: learn, optimise, execute. Once training is complete, the AI sticks to what it knows; when it does not, it hallucinates. Within the hallowed halls of tertiary institutions, students use AI to solve tutorial sheet problems adroitly. They can do so because the problems professors pose merely measure the gap between what was transmitted in the lecture hall and what the learner can recall.
The test score reflects the student’s ability to accurately regurgitate lecture notes, content from multiple textbook chapters, and polished PowerPoint summaries. This is not education for agency in the intelligent age. Learners must be curious, inquisitive, and willing to engage with difficulties, rather than accept the world as it is. A pedagogy built around achieving predetermined outcomes has, unfortunately, superseded the development of transformative abilities like Brenda’s, abilities that allow her to go beyond the information given. The obverse of the post-structuralist dictum “to know is to kill” is that “inquiry frees”. The only passage to insight is freedom in the presence of knowledge. Ideas are created piecemeal, ad hoc, from possibilities half disclosed and from connections half concealed and unexplored. In this ferment lie the possibilities to be made actual.
Two big things have happened so far. One is that generative AI is a “Wine of Astonishment” that has awakened the public because it offers everyone a suite of concrete tools. The other is that businesses have realised that there is nothing outside the screen economy. Together, these two advances have raised the spectre of risks. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio have been quite vocal about their concerns that AI could pose an existential risk to humanity. But there are other, more immediate risks, including security for AI and AI-accelerated cyberattacks. Cybersecurity protects the “container” (networks, servers, applications); AI security addresses the model itself: model theft, data poisoning (malicious data injected to alter behaviour), and prompt injection.
Other troubles emerge where the rubber meets the road. Problems such as bias, prejudice, misinformation, workforce disruption, and privacy infringements are among the most urgent. To get a glimpse of the future, we need only interrogate the present catalogue of fascinating AI demos and edge cases. Most of these demos will fade; others will drift; and the remainder, built with a vision to scale, will survive. Quality is not static. The AI future will be multi-model and multi-infrastructure. Agentic AI will become the default enterprise system. Swarms of small, specialised models will outperform the large generalist models at task after task. The orchestration layer will become the competitive moat as developers come to understand that intelligence is orchestration.
In this future, the evaluation pipeline will become as critical as the training pipeline, and evaluation infrastructure as critical as training data. Soon, responsible AI will move from public policy documents and the law of AI to an engineering requirement. Data flywheels will feed off user interactions, behavioural signals, data curation and labelling, and better responses. Model builders must invest in curation infrastructure as much as they invest in model infrastructure.
This horizon urges nation-states to invest in public-sector science and technology. Frontier AI is costly, placing it beyond the reach of most academic budgets. Tertiary institutions have been creating AI sandboxes that allow a limited selection of AI agents and “claws” to operate. OpenClaw, for instance, is an open-source framework for deploying autonomous AI agents that serve as “digital coworkers” (or “claws”) to manage, browse, and execute tasks across applications, files, and browsers.
What is needed in Latin America and the Caribbean (LAC) is a significant investment in public-sector research and computing capabilities, including a Regional AI Research Resource and Lab similar to CERN. It takes a village to improve technology. AI can tackle inherited inequality and intergenerational immobility in LAC, but only with a coordinated effort to ensure LAC’s leadership in AI.
Dr Fazal Ali completed his Master's in Philosophy at the University of the West Indies. He was a Commonwealth Scholar at Hughes Hall, University of Cambridge; provost and acting president of the University of Trinidad and Tobago; and chairman of the Teaching Service Commission. He is presently a consultant with the IDB.
