November 22, 2023

Last Updated on January 17, 2024

The original and still the ultimate goal of AI is “human-level AI”—a non-human system that can reason, interpret, learn on its own, and go far beyond its initial programmed state to perform potentially any cognitive task a human can do.

The hope is that “thinking machines” will on the whole benefit humanity and help us solve many of the intractable societal and environmental problems we are creating.

But what are the risks? How do we manage those risks? And when will we begin to experience the risks and rewards of human-level AI? Are we there already?

If you’re new to the idea of human-like AI, this article offers a comprehensive, high-level overview.

What is human-level AI?

AI is often categorized in terms of three waves:

First Wave: Handcrafted knowledge.

At this level, AI can correctly solve narrowly defined problems using the rules-based expertise programmed into it. But it cannot learn and adapts poorly to new situations. Think game-playing programs or delivery logistics software.
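
To make “handcrafted knowledge” concrete, here is a minimal sketch in Python (the routing rules and scenario are hypothetical, purely for illustration): every decision path is hand-coded in advance, so the program handles only what its authors anticipated.

    # First wave AI in miniature: all "knowledge" is hand-written rules.
    # (Hypothetical delivery-routing rules, for illustration only.)

    def route_package(weight_kg: float, destination: str) -> str:
        """Pick a delivery method using fixed, expert-authored rules."""
        if destination == "local" and weight_kg <= 2.0:
            return "bicycle courier"
        if weight_kg <= 20.0:
            return "van"
        return "freight truck"

    print(route_package(1.5, "local"))       # bicycle courier
    print(route_package(150.0, "national"))  # freight truck
    # The system cannot learn: any new destination type or policy change
    # requires a human to rewrite the rules above.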

Second Wave: Statistical learning.

AI’s second wave is cresting now with groundbreaking systems like ChatGPT and other large language models (LLMs) built on deep neural networks. Second wave AI is based on human-created statistical models for specific problem domains, augmented by training on massive data sets. This yields sophisticated classification and prediction capabilities, but no awareness of context and almost no reasoning capacity. Current second wave AI systems perform well most of the time but periodically and unpredictably fail, sometimes spectacularly. Chatbots, sound or object recognition, and autonomous vehicles are well-established solution arenas for second wave AI.
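
By contrast, a second wave system derives its behavior from data. Here is a minimal sketch using scikit-learn (the toy spam-filter data is invented for illustration): the model learns a statistical boundary from labeled examples and classifies new inputs it was never explicitly programmed for, yet it remains confident even on inputs far outside anything it has seen, which is one root of the unpredictable failures noted above.

    # Second wave AI in miniature: behavior is learned from examples.
    # (Invented toy data; requires scikit-learn: pip install scikit-learn)
    from sklearn.linear_model import LogisticRegression

    # Features: [message length, number of links]; label: 1 = spam, 0 = not
    X = [[20, 0], [35, 1], [15, 0], [400, 9], [350, 12], [500, 8]]
    y = [0, 0, 0, 1, 1, 1]

    model = LogisticRegression().fit(X, y)

    print(model.predict([[30, 0]]))     # likely [0] ("not spam")
    print(model.predict([[450, 10]]))   # likely [1] ("spam")

    # No context, no reasoning: an input far outside the training data
    # still gets a confident probability rather than "I don't know".
    print(model.predict_proba([[10000, 0]]))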

Third Wave: Contextual adaptation.

AI’s future is to bring together the successes of its first and second waves to pioneer the third wave: human-level AI that can perceive, learn, reason, and abstract within digital or even physical spaces. To get there, the AI systems themselves will construct and contextualize models to “figure out” and logically explain real-world scenarios. Third wave AI systems will be able to train themselves, discover the rules they should apply in situations, and then program themselves to accomplish their goals.

How long before human-level AI emerges?

Well-funded commercial and research entities are working hard to increase the “IQ” of second wave AI and bring its capabilities closer to the human level.

According to Peter Voss, CEO and Chief Scientist at Aigo.ai, the time it will take to develop third wave AI should be measured “not in years but in dollars.” The research needed to create so-called “cognitive AI” or “artificial general intelligence” (AGI) has largely been done, says Voss. This enables third wave AI developers to focus on implementation. The more expert resources are available, the faster it will happen.

“If the right people work on the right technology, I’m convinced we can have this in less than ten years,” says Voss. “In fact, it could be five years.”

While Voss’s estimates are more optimistic than most, many experts believe that AI could approach human-level capability within the next 20 to 50 years.

Among the key accelerators toward this goal are recent advances in second wave AI training and machine learning. The ability to give third wave AI systems a curated contextual understanding of the world (people, places, things, ideas) will be critical to the initial success of human-level models.

What are the risks of human-level AI?

Predictions about human-level AI outcomes run the gamut from utopian to apocalyptic. In scientific research on the most likely causes of human extinction, human-level AI consistently tops the list.

While some AI proponents discount existential or extinction risk, a growing body of AI thought leaders, from OpenAI CEO Sam Altman to Bill Gates, has collectively voiced grave concern, calling the mitigation of third wave AI risk a “global priority.”

Today’s second wave AI technology, exemplified by LLMs like ChatGPT, introduces risks such as:

  • Explainability. How does the AI model analyze and interpret the data it ingests to come up with responses? Even the models’ creators don’t entirely understand how they work, which calls all their results into question.
  • Privacy violations. Many LLMs are trained on incomprehensibly vast datasets that could include unknown types and quantities of personal and other sensitive data. This makes second wave AI a potential threat to privacy. A related risk is the direct application of second wave AI, such as by governments, to violate privacy and threaten individual and collective autonomy.
  • Biases in the data and hence the results. Second wave AI models inherit and reflect biases and patterns that exist in the human social fabric (whether we like it or not). AI has the potential to amplify these biases and/or reinject them into individual or collective human consciousness as facts.
  • Hallucinations. As mentioned above, second wave AI has a high statistical probability of offering useful results. But occasionally it fails in problematic ways, such as by presenting completely false or fabricated results as fact.

Even today’s second wave AI is “socio-technical,” meaning that its behavior is unpredictably influenced by human social and behavioral dynamics. Our current ability to predict, identify, measure, monitor, and address today’s second wave AI risk is limited.

Our current ability to envision and manage third wave AI risk, let alone agree on any solution to reduce it, is even more limited.

What’s next?

For more discussion on this topic, listen to Episode 126 of The Virtual CISO Podcast with guest Peter Voss, founder/CEO of Aigo.ai.