Laying tracks for the hype train: a look at AI in 2025

While traditional media outlets are full of stories about the next AI breakthrough, social media is saturated with AI wishlists and alerts for 2025. At the same time, policy debates and government agendas are packed with AI conferences, and companies are racing to launch the next product. Whatever problem we face, it can sometimes feel as if AI will be either the cause of it or the remedy for it – and sometimes both.

Talk about hype much? Is AI, after all, really all that? Does it deserve to attract everyone’s interest at this rate? This blog shares ideas on how AI hype is evolving, what can be done to manage its waves, and how it can be leveraged to support meaningful action towards a more responsible and inclusive future.

Hype around “new” tech is, of course, nothing new. There are even those who claim AI will be a bubble like the metaverse was a few years ago. Hopping on the hype train, journalists use it to write articles, companies leverage it to sell flashy products, and governments use it to unite populations and allies around a common friend or foe. But hype can create confusion, blind optimism, or undue pessimism. Importantly – since it may bring about long-term unintended consequences – hype can lead to misguided policy choices, overconfidence in new products and services, or underinvestment in potentially life-saving tools and initiatives. Knowing this, it is unfortunate that hype can ultimately capture the scarce time and energy needed for concrete debate, research, and investment in the pressing issues people and our planet face.

Claims around AI’s promise have become more prevalent in the marketplace, including about the ways it could potentially enhance people’s lives through automation and problem solving. According to the United States Federal Trade Commission (FTC), some firms have seized on the hype surrounding AI and are using it to lure consumers into bogus schemes and deceptive AI-powered tools. In September 2024, the FTC took action against multiple companies that had relied on AI claims to supercharge deceptive or unfair conduct that harms consumers, as part of its new law enforcement sweep, Operation AI Comply.

Academics and researchers are also debunking some of the myths associated with AI. Arvind Narayanan and Sayash Kapoor, in their book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (also published in September 2024), tackle the AI hype head-on by explaining the crucial differences between types of AI, why organizations are falling for “AI snake oil”, and why AI can’t fix all of the problems it’s marketed to solve. In October 2024, MIT Technology Review introduced an AI Hype Index, a simple, at-a-glance summary of the state of the industry.

At the governmental level, a report by France’s government-appointed AI commission, published in the context of this year’s AI Action Summit, largely dismisses some AI risks as “hype”, suggesting directly and indirectly that those risks are being used to legitimize barriers to entry that would concentrate AI development in the hands of already dominant players such as U.S. big tech companies and AI startups. Similarly, Brazil’s Federal Court of Accounts (TCU) published a short guidebook highlighting risks associated with AI regulation, including loss of international competitiveness and the creation of monopolies.

So why do we need to be careful about AI hype in 2025? And can we leverage the hype train to support, rather than undermine, efforts towards more trustworthy and responsible use of AI?

Avoid generalizations 

We should view AI not as a single entity, but as a diverse set of technologies with varied uses and business models. Moving beyond the hype, it’s important to consider the practical implications of the use cases at hand – at the end of the day, different AI systems present distinct opportunities and risks, which may, in turn, call for different methods of regulation and supervision. In fact, the growing disconnect between visible AI performance and behind-the-scenes advancements complicates governance efforts: without clear insights into these developments, crafting effective governance strategies – for the private and public sector alike – becomes increasingly challenging.

Educate in critical thinking

As technology companies wield growing influence over national and global politics through decisions on automated content moderation and AI-driven systems and platforms, it becomes essential to foster critical thinking and stronger research skills in individuals and communities across all regions and age groups. These abilities are crucial to upholding democratic values and safeguarding vulnerable groups – such as immigrants and people with diverse gender identities – from potential harm or exclusion. While fake news and misinformation are not new phenomena, their speed and spread can increase when powered by some emerging technologies and platforms. Educating people to verify information and engage with diverse opinions and sources on the same topic is essential for fostering informed and balanced perspectives on pressing societal challenges and on AI itself. Since the AI governance debate will likely not be settled in 2025, building these skills is an investment that will keep paying off well beyond this year.

Move from “Wow!” to “How?”

Do you know where the data used to build AI comes from? It’s becoming increasingly clear that we should. A study from the Data Provenance Initiative shows that the web has been the dominant source for data sets across all media – audio, images, and video – and that a gap between scraped data and more curated data sets has emerged and widened. AI’s current data practices risk concentrating power overwhelmingly in the hands of a few dominant technology companies, with the majority of data coming from the Global North. This is a familiar risk to those in the internet policy space: relevant local content has long been identified not just as a driver of inclusion and adoption of new technologies and services in underserved communities, but also as a catalyst for disruption and reverse innovation. Further, in the AI space, some are researching whether AI systems may run out of scrapable data before 2026. This represents both a risk and an opportunity for the developing world. As we study and experiment more and more with synthetic data, initiatives like data coalitions could become a new source of power in AI development.

Calculate the cost and impact across the value chain

As reliance on AI for everyday tasks grows, it becomes relevant to consider the energy consumed in using and developing these systems. Recent studies show that new generative models can use up to 30 times more energy than older AI models to perform the same function. This significant increase in energy use – and therefore in cost – is itself reason enough to weigh the cost-effectiveness of developing and investing in smaller, more specialized models, especially in the context of generative AI.
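To make that arithmetic concrete, here is a minimal back-of-envelope sketch of how a 30-times per-query energy multiplier compounds at scale. Only the 30x ratio comes from the studies cited above; the query volume, per-query energy figure, and electricity price are hypothetical placeholders, not measurements.

```python
# Back-of-envelope sketch (illustrative only): how a 30x per-query energy
# multiplier compounds at scale. All absolute figures are hypothetical
# placeholders; only the 30x ratio comes from the studies cited above.

QUERIES_PER_DAY = 10_000_000          # hypothetical daily query volume
BASELINE_WH_PER_QUERY = 0.3           # hypothetical energy for an older, task-specific model (Wh)
GENERATIVE_MULTIPLIER = 30            # "up to 30 times more energy" (from the text above)
ELECTRICITY_PRICE_PER_KWH = 0.15      # hypothetical electricity price in USD

def daily_cost(wh_per_query: float) -> float:
    """Daily electricity cost in USD for serving QUERIES_PER_DAY queries."""
    kwh = QUERIES_PER_DAY * wh_per_query / 1000
    return kwh * ELECTRICITY_PRICE_PER_KWH

older_model_cost = daily_cost(BASELINE_WH_PER_QUERY)
generative_cost = daily_cost(BASELINE_WH_PER_QUERY * GENERATIVE_MULTIPLIER)

print(f"Older, specialized model: ${older_model_cost:,.0f}/day")
print(f"General-purpose generative model: ${generative_cost:,.0f}/day")
```

Whatever the real per-query figures turn out to be, the gap scales linearly with traffic, which is why the choice between general-purpose and specialized models matters more as usage grows.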

As AI systems become more complex and widespread, the demand for energy increases, prompting companies to seek innovative solutions to mitigate environmental impact while securing access to key resources in the AI era. Some big tech companies are even turning to nuclear energy to power their long-term AI ambitions. As data continues to power AI, the cost of developing and using these systems needs to factor in the energy they consume. Unfortunately, we still don’t know how much energy is used during a conversation with generative AI tools. Calculating not only the financial but also the environmental cost of AI is becoming urgent.

Change “regulate and forget” to “test and iterate”

It is true that private sector entities, and especially startups, have been faster to adopt “test and iterate” mindsets – they, after all, are not always so constrained by the firm boundaries of positive public law. 

Big tech companies, in particular, are competing for AI dominance and releasing products at the speed of light. Sometimes AI tools are released before a company has even considered them a product. Research has shown that some experts agree that foundation models need additional regulatory oversight due to their novelty, complexity, and lack of clear safety standards. Oversight needs to enable learning about risks and to ensure iterative updates to safety assessments and standards.

Meanwhile, on the public sector side, and in the context of all the challenges outlined above, agility in policymaking and responsive regulatory frameworks have never been more urgent. This is where sandboxes, as “safe spaces to test new technologies and practices against regulatory frameworks”, come in. They offer an alternative for the public and private sectors to experiment together, rethinking systems based on responsible agility and effective coordination to solve common challenges – such as AI governance and policymaking.

Don’t jump off the wagon just yet

As we navigate through the thickets of AI hype in 2025, good, old-fashioned discernment is a great tool to have. 

Competing narratives may misleadingly paint AI simply as a panacea or a considerable peril. If it could indeed be both, then the real task lies in directing the momentum of this hype to harness AI’s potential responsibly and inclusively. The good and the bad news is that no one person or entity, public or private, knows the right way forward, so we need to build it together. Since the train is moving, we will need to lay tracks as we move along, hopefully steering towards a future where this inexorable wave of technological change serves humanity with greater equity and foresight.

Want to hear more?

Sandboxes and their potential for AI will be the topic of discussion at an event organized by the Datasphere Initiative and the International Chamber of Commerce at France’s AI Action Summit on 11 February. Gathering Heads of State and Government, leaders of international organizations, CEOs of small and large companies, representatives of academia, non-governmental organizations, artists and members of civil society, the Summit will focus on public interest AI, the future of work, innovation and culture, trust in AI and global AI governance. The official side event “Advancing global AI governance: Exploring adaptive frameworks and the role of sandboxes” will explore the building blocks of global governance for trustworthy AI and how sandboxes can be used to ensure responsible AI innovation, design and deployment. The event will discuss the role of sandboxes within the regulatory process, when the best moment to sandbox is, and how to analyze and mitigate potential risks.