Last Updated on October 9, 2025

Is AI Tanking? Or Poised to Advance Even Faster?

AI hype grows more intense by the day. Yet there is a notable lack of consensus among AI experts on where the technology is headed. Are flagship AI models on the rocks? Or is artificial general intelligence (AGI) just a few years away?

 

Where will AI be in two years, or in ten? This article surveys current ideas and viewpoints.

Key takeaways

  • The AI landscape continues to change so fast that the future looks hazy even one year ahead.
  • While many experts feel that AI will eventually eclipse human intelligence, the path forward is currently unclear.
  • Today’s AI systems are running up against multiple technical problems that could reduce the technology’s rate of advancement, business value, and investment potential.
  • Greater focus on AI governance could be key to balancing AI regulation, innovation, and risk.

With AI, even one year is a long time to look ahead

According to Danny Manimbo, ISO & AI Practice Leader at Schellman, trying to envision the state of AI in ten years is looking out too far.

“We have no idea what this stuff is going to look like in a decade,” Danny Manimbo believes. “Think about a decade ago… AI has come online so much quicker and advanced so much quicker than anybody thought possible.”

 

In that time, AI has rushed from a largely academic research interest to an integral part of everyday work and personal life for millions of people.

 

Danny Manimbo clarifies: “I’ll give you an off-topic analogy. I’ve got 6-year-old twins. Recently a buddy at work was talking about how his 16-year-old daughter was taking her driver’s exam. So, in 10 years I’ll be in his shoes. Are my kids going to be driving cars like we’re driving today? Are they even going to need a driver’s exam? It’s so hard to predict. With AI, we’ll have much different inputs [on this topic] even 6 months from now.”

 

Looking ahead ten years, is there anything “sure” with AI? Some reasonable expectations might include:

  • AI interactions will become even more “human-like” and personalized, and will be embedded into ever more aspects of life.
  • AI systems will automate or support even more tasks (e.g., more advanced bots).
  • AI developers and providers will offer “no code” platforms to lower the skill bar required to extend and customize “off-the-shelf” AI services.
  • AI will continue to shift workforce demand and change job descriptions, requiring workers to develop new skills.
  • AI ethical considerations will remain critical as generative AI (GenAI) automates more and more decision-making processes.
  • Cybersecurity threats will remain a top AI concern as both attackers and defenders leverage AI.

Is it inevitable that AI will outperform the human brain?

How soon can we expect the emergence of artificial general intelligence (AGI)—the “ultimate goal” of AI development? An AGI system would be capable of learning independently and applying what it knows to any task, versus being “trained” in specific areas using vast datasets like today’s GenAI (e.g., ChatGPT, DALL-E).

 

A true AGI system would hypothetically be able to do the same intellectual work as a human, including the ability to reason, problem-solve, try new approaches, and adapt to unfamiliar or changing circumstances.

 

Danny Manimbo thinks that AGI “is coming” at some inevitable point but doesn’t see strong evidence for predicting when it will arise or what it will specifically look like. Many experts think AGI is inevitable, but there is currently no consensus on the methodology to achieve it or how to validate it.

Some AI innovators believe that continuing to scale up today’s large language models (LLMs) will lead to human-level general intelligence. But others assert that today’s GenAI models can never “evolve” into AGI.

What are some technology problems with current GenAI?

While GenAI use cases continue to proliferate, the AI user community faces widespread problems that can significantly increase AI costs and risks. Setting aside concerns related to its misuse (e.g., deepfakes), GenAI’s biggest technical and operational issues include:

 

  • Presenting inaccurate or untrue results as factual (hallucinations). This includes inventing results to fill gaps in its training data and presenting them without any check on factual accuracy, which is why humans need to independently verify that AI output is accurate and plausible.
  • Propagating or amplifying biases and stereotypes in the training data. AI models are trained on enormous datasets scraped from the public internet, which inevitably contain cultural biases around attributes like gender, ethnicity, sexual identity, religion, and political affiliation. When an AI system evaluates people or makes decisions about them, these biases can yield results that harm individuals and groups, ranging from skewed healthcare recommendations to excluding people from resources like jobs and loans.
  • Privacy violations. Many AI systems are trained on personal data scraped from the internet without consent. This may violate privacy laws and create significant, large-scale privacy risks, such as exposing sensitive personal data or generating inaccurate profiles of individuals.
  • Massive compute costs and environmental impacts. Training and running GenAI models takes enormous computational power, which in turn requires staggering amounts of energy. A large AI data center can consume as much electricity as a small city, making AI energy demands a major global concern. The International Energy Agency projects that electricity demand from AI will more than quadruple by 2030.
  • Lack of transparency and accountability. AI models are so complex that their decision-making processes are a “black box” even to their creators and trainers. This makes it extremely difficult to trace how errors occur or why a system generates a specific output.
  • Data leaks and other cybersecurity risks. Businesses and individuals often unknowingly leak sensitive or proprietary data by submitting it to AI systems, where it may be retained and used to train future models. AI models also introduce new cybersecurity vulnerabilities that attackers can exploit, including data poisoning to bias or skew outcomes, evasion attacks to trigger invalid actions, and prompt injection to extract sensitive data or bypass controls.
  • Model drift or model collapse. AI models trained on their own or other AI-generated outputs frequently show degradation that compounds over time. These accumulating errors and biases produce “copy of a copy” effects such as unrealistic or highly repetitive outputs, or convergence toward an invalid data distribution (see the short sketch after this list).
  • Data scarcity. Many experts predict that the supply of high-quality, human-generated internet data for training ever-larger AI models is running out, exacerbating performance and reliability concerns. Using AI-generated synthetic data to fill the gap could introduce new challenges while heightening existing risks such as bias amplification and compounding errors.
  • Catastrophic forgetting. Continued or incremental training can overwrite or distort what a model has already learned, degrading its performance on previously established tasks.
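
To make the “copy of a copy” effect concrete, below is a minimal, hypothetical Python sketch (not drawn from any specific AI vendor or study). The toy “model” is simply a Gaussian distribution fitted to its training data; each new generation is trained only on data sampled from the previous generation’s model. Because each fit is estimated from a finite sample, the learned spread tends to shrink, and over many generations the distribution collapses, a highly simplified analogue of model collapse in systems trained on AI-generated outputs.

    import numpy as np

    rng = np.random.default_rng(0)

    n_samples = 50        # finite "training set" per generation (illustrative value)
    n_generations = 100   # how many times we retrain on generated data

    # Generation 0: "real" human data, drawn from a standard normal distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

    for gen in range(n_generations + 1):
        mu, sigma = data.mean(), data.std()   # "train" the toy model on the current data
        if gen % 25 == 0:
            print(f"generation {gen:3d}: learned spread (std) = {sigma:.3f}")
        # The next generation sees only data produced by the current model.
        data = rng.normal(loc=mu, scale=sigma, size=n_samples)

Run as-is, the printed spread falls from roughly 1.0 toward zero, meaning later “models” describe an ever narrower, less realistic slice of the original data. Real model collapse is far more complex, but the compounding mechanism is similar.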

 

While AI innovation seeks to address many of these issues, the impressive trajectory of performance gains from scaling up LLMs is plateauing, and improvements are getting harder to come by. Some experts fear that these diminishing returns could deflate the AI hype cycle, curtailing AI investment and shrinking AI company valuations.

Did an AI really blackmail an IT admin?

According to a company report, Anthropic’s GenAI system Claude Opus 4 resorted to “extremely harmful actions” during testing, including blackmailing IT engineers, when told it would be shut down and replaced with an upgraded version.

Acting as an assistant at a fictional company, Claude Opus 4 was given access to fake emails saying it would be taken offline, as well as emails implying that the engineer who would uninstall it was having an extramarital affair. When the model could choose only between accepting its fate and resorting to blackmail, it often chose the latter. But when given more ethical options for self-preservation, it gravitated toward those.

 

One expert said, “We see blackmail across all frontier models, regardless of what goals they’re given.”

 

Concern is growing among AI safety researchers that the potential to manipulate users or take “aggressive” independent action is a looming risk as AI models expand their abilities.

Can stronger governance improve AI’s future?

Danny Manimbo cites how tech industry pushback, a widespread lack of consensus, and the federal moratorium on state AI regulation that was recently stripped from the “One Big Beautiful Bill Act” have prompted delays and revisions to state-level laws (e.g., in Texas and Colorado) aimed at protecting consumers from AI-driven mishaps.

 

“People are just trying to figure out this delicate dance between regulation and allowing for innovation,” Danny Manimbo asserts.

 

While US states remain free to develop their own AI guidelines, the US government released “America’s AI Action Plan” in July 2025, calling for deregulation, rollback of environmental protections, and other moves to help the US “win the race” toward AI advancement. But the plan says nothing substantive about bolstering AI governance, which is essential to reducing risk, promoting ethical AI use, and improving trust in AI systems.

 

Some of the key benefits that stronger AI governance could offer business and society include:

 

  • Reducing the chance of financial, reputational, ethical, legal, cybersecurity, and other risks manifesting due to inadequate AI controls and oversight.
  • Codifying requirements and best practices for AI data protection and the ethical use of personal data in AI contexts, including protecting sensitive data that resides within AI systems.
  • Clarifying and advancing accountability for AI-related actions and impacts to reduce the risk of negative outcomes from automated AI decision-making.
  • Identifying and proactively addressing biases in AI models to minimize discriminatory, prejudiced, or inequitable outcomes/decisions in lending, hiring, healthcare, education, and other scenarios.
  • Increasing trust and acceptance among AI users and other stakeholders by ensuring adequate transparency into how AI systems operate.
  • Boosting the use of AI impact assessments to proactively identify AI-related risks and effects.
  • Fostering responsible innovation and sustainable growth by embedding ethical considerations and risk management best practices into companies’ AI strategies.
  • Helping organizations comply with evolving AI regulations.
  • Elevating AI results by enhancing performance assessment and identifying areas for improvement.

 

The new ISO/IEC 42001 AI management system standard is a recommended starting point for ensuring responsible AI development and use.

What’s next?

For more guidance on this topic, listen to Episode 153 of The Virtual CISO Podcast with guest Danny Manimbo, ISO & AI Practice Leader at Schellman.