November 22, 2023

John W Verry (00:01.145)
And like I said, if anything goes on, we can always edit stuff; the video editors and audio editors can handle anything else we need to get done. All right. With your blessing, I’ll kick this off. I’ll just start with a quick intro and we’ll be ready to go. Ready? Hey there, and welcome to yet another episode of the Virtual CISO Podcast. With you as always, your host, John Verry, and with me today, Peter Voss. Hey, Peter. How are you today?

Peter Voss (00:07.359)
Right?

Peter Voss (00:26.808)
Hello.

Very, very good. Thank you. And how are you?

John W Verry (00:32.405)
Good. In fact, since this is Labor Day weekend, I was surprised when I saw someone on my calendar this late. So I guess you and I are the two people who are working and keeping America’s economy rolling right now.

Peter Voss (00:43.982)
Well, actually the rest of our company is still very much working. We’re on a mission, yeah.

John W Verry (00:50.277)
Well, that or you’re a slave driver. I’m going to go with “on a mission,” though; it sounds better. All right, so I always like to start simple: tell us a little bit about who you are and what it is that you do every day.

Peter Voss (01:01.454)
So I am actually, literally, on a mission, and that mission is to bring human-level AI to the eight billion people in the world to improve the human condition. That’s the ultimate plan: I believe that having true artificial intelligence, human-level intelligence, will actually be tremendously good for humanity and help us solve many of the problems that we have. And that’s what I’ve been very actively involved with for more than 20 years.

John W Verry (01:37.317)
Excellent. Well, I’m looking forward to chatting with you about artificial intelligence: both your very optimistic view of it as a great technology and, at the same time, how we’re going to manage the risks that come along with any great technology. Before we get there, I always ask: what’s your drink of choice?

Peter Voss (01:41.985)
Mm-hmm.

Peter Voss (01:57.87)
Twinings Earl Grey tea. I’m addicted to it.

John W Verry (02:05.011)
So, if I recall correctly, Earl Grey tea has bergamot, right? That’s the phytochemical that’s in there. By the way, that’s supposedly excellent for your cardiovascular system as well. So are you running marathons?

Peter Voss (02:09.506)
Correct, it’s Bergamot, yes.

Peter Voss (02:18.495)
No, I just power walk.

John W Verry (02:20.145)
All right. So 2023 has certainly been a really interesting year because of the significant increase in conversation around AI, as it’s become much more mainstream with the emergence and significant widespread use of ChatGPT. So as you mentioned, AI certainly has significant potential to transform society and change people’s lives.

I think, as you alluded to, you want that to be for the better. And I think the NIST AI Risk Management Framework is an early response to that. So can we start simple? What is the NIST AI RMF?

Peter Voss (03:07.801)
Sorry, what is?

John W Verry (03:09.355)
The NIST AI Risk Management Framework.

Peter Voss (03:10.514)
Oh, yes. Yeah, so the framework is an outline of how to assess and manage risks related to AI. And as you say, that’s become very topical with large language models such as ChatGPT, which have exploded. That really has raised the level of discussion on AI risk. And there’s a lot of confusion about AI risk, because it really runs the whole spectrum, from what one could call mundane risks, risks that we already kind of have with other technology or even without technology, people lying and cheating, getting an email from Nigeria or whatever. And I think it’s a bit unfortunate that this has all been lumped together as AI risk.

On the one hand, people talk about extinction risk. There are some people who absolutely believe that if we get to full human-level AI, it will kill everybody, that we’ll be wiped out completely. So you have that sort of discussion going on at the same time as people talking about some group not being represented properly, or some misinformation being distributed. And people are obviously talking past each other.

So I think what’s great about this framework, and it has obviously become a hot topic, so a lot of people have started working on it, is that it is a very detailed outline of the different aspects of risk that you need to assess. It doesn’t go all the way to extinction risk; it doesn’t touch that at all. It’s more the sort of conventional risks that have already been identified in engineering for many years. So they talk about it being safe and not harming anyone; being secure, meaning it can’t be hacked, and resilient against attacks. Now, the interesting thing is, and here we get into the realm of AI, they also talk about it being explainable and interpretable.

Peter Voss (05:35.442)
There’s actually a huge, huge problem there with the current technology, and I’d like to talk more about that in a minute. The current AI technology we have can be described as wave two. DARPA gave a presentation a few years ago where they talked about three waves of AI, and the current crop of AI, the current focus of AI, is all wave two: statistical or generative AI. And that really has a very fundamental issue with interpretability and explainability.

I mean, you have Sam Altman and other people leading multi-billion dollar AI companies saying, we don’t know how these things come up with their answers, we don’t know how they work. Now, that’s pretty wild, for multi-billion dollar companies selling a product to quite openly say, we have no idea how this works. They must have some idea, I guess. I mean, I have some idea of how they work, but they certainly can’t explain how they give the responses that they do, and they’re not interpretable. They are black boxes, basically. So that is a problem. The NIST framework does have that as one of the topics to look at, but there doesn’t really seem to be much of a solution with the current approach to AI.

Peter Voss (07:02.99)
Privacy-enhancing and fairness are other things. I don’t know about privacy-enhancing. Hearing that from a government organization always makes me highly suspicious, when they basically just totally undermine your privacy and have access to everything. I mean, they can just ask Google or Facebook for whatever information they want, and they seem to give it quite freely. Legally, the companies are not even allowed to disclose when they receive a particular request, or whether they refused it. So they just seem to quietly give out whatever information is requested. So I’m not too sure who the big culprits are as far as privacy is concerned. And then accountability and transparency.

Peter Voss (08:03.122)
Yeah, I mean, that’s a common business problem, really. It’s not unique to AI. So I think the only really new thing that’s come up there is the explainability; the rest seem to be more mundane business and technology risks that need to be managed. And I think the framework does give a lot of detail on the kinds of things you should be thinking about.
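
For readers who want the framework’s vocabulary at a glance, here is a minimal Python sketch of a first-pass review against it. The four core functions and the seven trustworthiness characteristics below come from NIST AI RMF 1.0; the helper function, the example system name, and the notes are invented purely for illustration.

```python
# The four core functions and seven trustworthiness characteristics are
# from NIST AI RMF 1.0; everything else here is an illustrative sketch.

AI_RMF_CORE_FUNCTIONS = ["Govern", "Map", "Measure", "Manage"]

TRUSTWORTHINESS_CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

def first_pass_review(system_name: str, notes: dict[str, str]) -> list[str]:
    """Flag characteristics with no recorded assessment for a given system."""
    gaps = [c for c in TRUSTWORTHINESS_CHARACTERISTICS if c not in notes]
    return [f"{system_name}: no assessment recorded for '{c}'" for c in gaps]

if __name__ == "__main__":
    # Hypothetical example: a chatbot reviewed against only two characteristics.
    findings = first_pass_review(
        "support-chatbot",
        {"safe": "red-teamed for harmful outputs",
         "secure and resilient": "pen-tested"},
    )
    for line in findings:
        print(line)
```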

John W Verry (08:32.989)
Yeah, and that ethical risk is an interesting one as well. I was talking with another AI person recently about this idea: if you think about the internet as having some level of inherent bias baked into it, sort of implicit bias, and that is the basis of a learning model, that’s what gets ingested and what creates the learning model, then does the language model inherit that same bias, and does it therefore impact the information that it produces? So, interesting stuff.

Peter Voss (09:12.714)
Yes, and it obviously does. I mean, these models are built from massive amounts of data, basically anything they can find on the internet, and their knowledge will be based on whatever information they happen to have. A lot of it is just plain wrong, or misleading. And certainly there’s more information on certain topics, and for certain demographics and certain languages, than in others. I mean, that’s just the reality of it.

Peter Voss (09:41.338)
And anybody believing that this is something that can be fixed, or even should be fixed, I think is just not at all realistic. And bias is sort of a word that’s thrown around if you want to attack somebody; it’s often used as a weapon, really. Whereas, is it bias if you live in a Christian-dominated country, that that should be your default assumption? Or if you live in a Muslim country? What’s the bias there? Or if the reality of the situation is that the majority of women prefer to raise children, and that’s something they like to do,

Peter Voss (10:39.374)
which obviously makes a lot of evolutionary sense as well, then is taking that into account a bias? So representing the reality of the world, the statistical reality of it, will bring in a whole lot of biases. It’s just that they are patterns in the world that are real.

John W Verry (11:02.875)
Mm-hmm.

John W Verry (11:07.133)
Yep, and in a weird way it’s…

Peter Voss (11:07.95)
Now, you may not like those patterns, but then by calling them a bias, that’s kind of where I say you weaponize it. So there’s a distinction between what is and what you’d like to have.

John W Verry (11:19.209)
Right. And it gets interesting, because if you were trying to remove that from the training set, if you will, then it’s no longer an accurate representation of what we’re trying to model. So inherently, yes, the bias exists, but removing it might actually create as many problems as it solves.

Peter Voss (11:41.198)
Correct, correct, yeah. Now, it’s obviously very useful to know about this sort of distribution of data and to be aware of that problem. But in any case, who would be the judge of what is unbiased?
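
To make that point concrete, here is a toy Python sketch of how a purely statistical learner mirrors whatever skew its training text contains. The tiny “corpus” and its 3:1 ratio are invented for illustration; a web-scale corpus just makes the same effect bigger.

```python
from collections import Counter, defaultdict

# Invented toy corpus; the skew below is deliberate, to show it propagate.
corpus = [
    "the doctor said he", "the doctor said he", "the doctor said he",
    "the doctor said she",
    "the nurse said she", "the nurse said he",
]

# Count which word follows each three-word context.
continuations: dict[str, Counter] = defaultdict(Counter)
for line in corpus:
    *context, nxt = line.split()
    continuations[" ".join(context)][nxt] += 1

# The "model" just answers with the majority continuation: the 3:1 skew
# in the data comes straight back out as a confident-looking prediction,
# whether or not we'd want that pattern repeated.
for context, counter in continuations.items():
    word, n = counter.most_common(1)[0]
    total = sum(counter.values())
    print(f"{context!r} -> {word!r} ({n}/{total} of training examples)")
```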

John W Verry (12:02.589)
Yes, exactly. Well, that’s the whole problem with the whole Twitter debate, right? It was, OK, we’re going to put moderators in, but who’s the moderator of the moderators? Who is the authoritative source of that? And we’ve had that whole problem right now on YouTube, with people being blocked for things which, quote unquote, don’t agree with the prevailing opinion, especially on the medical side right now, which creates its own set of problems. So, to your point, you said this a little bit earlier:

Peter Voss (12:11.314)
Right. Yeah.

Peter Voss (12:24.254)
Right, right. Yeah. So, that’s it.

John W Verry (12:31.253)
The problems that we’re talking about, and some of the problems that the NIST AI Risk Management Framework helps us manage, are problems which are inherent, independent of AI. Yep. All right. So let me ask you a question, because I’ve asked this question before and some people think it’s challenging: how do you define artificial intelligence?

Peter Voss (12:40.91)
Correct, correct, yes.

Peter Voss (12:56.686)
Oh, okay. I love that topic. So the term was coined some 60-odd years ago, in 1955 actually. And originally the meaning was really: how can we build thinking machines, machines that can think, learn, and reason the way humans do? So ultimately, machines that can do the work that we do, but really focused on mental activity, the cognitive ability of humans. And I think learning and reasoning have always been an important part of that: that machines can learn and reason similar to humans.

Now, when they coined the term, they actually thought that they could build these machines within a few months or a few years.

Obviously, that turned out to be much, much harder than they thought. So what has happened in the field of AI over the last decades is that the field has turned into narrow AI, where it hasn’t been about building thinking machines; it has been about what particular problem we can tackle and use computers to solve, and basically calling that AI.

And AI has gone through summers and winters. During the summers, if you called your product AI or your research AI, you’d get funding, and if you didn’t, you didn’t. And then during the winters, people quickly took the AI label away and said, we’re working on machine learning or solving specific problems. And then AI got back into fashion. But really, people haven’t been working on the original idea of AI, building thinking machines, very much at all over the last 50 years.

And there’s actually an important difference. Let me make it concrete with some examples. Narrow AI is, for example, container optimization: you write a program or an algorithm that somehow figures out how you can best stack containers or route them to their destination. Or it might be the breakthrough of IBM’s Deep Blue beating the world chess champion,

Peter Voss (15:14.918)
or, more recently, DeepMind’s AlphaGo beating the world Go champion. But essentially, that’s external intelligence; it’s the intelligence of the programmer who solved the problem. The intelligence doesn’t really reside in the machine, certainly not the kind of intelligence that we talk about humans having. So it’s the human ingenuity, the human intelligence, that figures out how to write an algorithm to solve

Peter Voss (15:44.45)
that particular problem. Now, around 2001, 2002, I had spent over five years doing research on intelligence to prepare for my work in artificial intelligence, and I got together with a few other people who felt the time was right to go back to the original dream of AI, to build thinking machines.

Peter Voss (16:10.658)
And we wrote a book on the topic, and three of us came up with the book title, Artificial General Intelligence, or AGI. That was really to recapture the original dream of AI: to build thinking machines, machines that really, truly have the intelligence, so they could learn to play chess or do container optimization or whatever. They could learn the kinds of things that humans can learn, and roughly in the same way. And that’s what I’ve been busy with for the last 20 years: to develop these systems, and basically both to commercialize them and to continue improving them.

So today, what AI means to almost everyone is ChatGPT, large language models, statistical models, because that’s where all the money is. But these are still

Peter Voss (17:09.69)
narrow AI; they’re not thinking machines. And people like Sam Altman quite clearly say, and Demis Hassabis, the CEO of DeepMind, says, the technology we have now is not AGI. In fact, Demis Hassabis said, not by a long shot, and there is no direct path from it. So they are still statistical systems. And GPT means Generative Pre-trained Transformer.

Peter Voss (17:37.122)
The generative part really means they make up stuff, which we’ve all experienced. Yes, absolutely, and some of the stuff they make up is just phenomenal. I mean, I certainly use it to help me with programming issues or to help me write an article, any number of things.

John W Verry (17:46.677)
for good and bad.

Peter Voss (18:05.61)
So they are generative, they create things. But the P stands for pre-trained. It means they are trained with massive amounts of data at the factory, so to speak. These models now cost about $100 million to assemble, and depending on how much processing power you have, they can take days or weeks to build this one model. Now, once that model is built, they essentially do not learn anything. That’s why they’re called

John W Verry (18:20.489)
Wow.

Peter Voss (18:35.118)
pre-trained, and that’s very different from human-level learning. Now, they do have some minor ways of sort of learning, but it’s short-term memory and it’s not really integrated. I’ve just written an article on this, and we did some tests and benchmarks on it. They really cannot remember and learn new things. So imagine you hire an assistant,

Peter Voss (19:03.502)
and they’re very knowledgeable on lots of different things. They’ve been pre-trained on these things. And let’s even assume that they don’t just make up things, but they just have a lot of knowledge, but they can’t learn anything new. You tell them about a new deal that you have or a new relationship, partnership, new products or whatever, and they can’t integrate it into their existing knowledge. That’s not real intelligence. So the…

The real artificial intelligence is basically what we now call AGI, artificial general intelligence, or what DARPA calls the third wave of AI: systems that can learn incrementally, that can think about their own thinking the way we can, and that can act autonomously.
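
Here is a minimal Python sketch of the contrast being drawn: a pre-trained system whose knowledge is frozen at the factory versus one that integrates new facts as they arrive. Both classes are invented stand-ins; neither is ChatGPT nor Aigo’s engine.

```python
# Invented stand-ins to illustrate "pre-trained" vs. incremental learning.

class PretrainedAgent:
    """Wave-two style: knowledge is fixed at the factory ('pre-trained')."""
    def __init__(self, factory_knowledge: dict[str, str]):
        self._knowledge = dict(factory_knowledge)  # frozen after this point

    def ask(self, question: str) -> str:
        return self._knowledge.get(question, "I don't know.")

    def tell(self, fact_key: str, fact: str) -> None:
        pass  # new information is simply not integrated

class IncrementalAgent(PretrainedAgent):
    """Wave-three aspiration: new facts are integrated as they arrive."""
    def tell(self, fact_key: str, fact: str) -> None:
        self._knowledge[fact_key] = fact

if __name__ == "__main__":
    base = {"capital of France": "Paris"}
    for agent in (PretrainedAgent(base), IncrementalAgent(base)):
        agent.tell("our new partner", "Acme Corp")  # the 'new deal' example
        print(type(agent).__name__, "->", agent.ask("our new partner"))
    # PretrainedAgent -> I don't know.
    # IncrementalAgent -> Acme Corp
```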

John W Verry (19:56.053)
So I love that, because “autonomously” was my next question. The AI RMF defines an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments, and it notes that AI systems are designed to operate with varying levels of autonomy. And I think that autonomy,

or acting autonomously, as you said, I think that’s where people get slightly frightened, right? The Skynet-from-Arnold-Schwarzenegger concern. And I think it’s where many people probably think, wow, it’s great that the RMF is out there, because these autonomous things can have significant impact. So can you talk a little bit about that? I’d be curious if you can apply it to a real-world example, like full self-driving cars, or whatever is most relevant to the research you’ve done: explain that concept of autonomy, and explain how these artificial intelligences may be forced to make decisions that have ethical, legal, moral, life-and-limb implications.

Peter Voss (21:09.258)
Mm-hmm.

Peter Voss (21:14.21)
Right. Yes, autonomy is definitely an important dimension here. I think it always has to be seen together with adaptive intelligence. And what I mean by adaptive intelligence is really that the system can adapt to new circumstances, can learn new things, can think and reason about things. If you just have autonomy without that, you have a pre-trained system that you let loose on the world. If it has a lot of autonomy, if it can do a lot of things, then potentially the more things it can do, the more damage it can do, clearly. However, in real life you tend to find that programs are actually very brittle. I mean, every programmer knows that you put something together and it’s a million times more likely not to work than to work.

So I think that is somewhat of an inherent protection against semi-intelligent systems, like the ones we have now, doing too much damage autonomously: they simply will get stuck very quickly, will just crash, or won’t be able to do what might really be harmful. But once you have autonomy together with

Peter Voss (22:39.67)
real adaptive intelligence, then that becomes beneficial, because you can think of the system as being more responsible for its actions: it can think about its actions, it can take into account a much wider context, and therefore it will be safer. I’ll give you one example. A few years ago, there was the story of Alexa

Peter Voss (23:08.97)
listening in on somebody’s conversations that were private. I don’t know if you remember that. And the problem there was really that Alexa wasn’t smart enough. Because if the system had been smarter, it would have known whether this was something relevant to listen to or not. So quite often you find that the limitations and risks in systems are there because the systems aren’t smart enough.

John W Verry (23:09.25)
I do.

Peter Voss (23:36.182)
I mean, you have the same with autonomous vehicles. I drive a Tesla and I love it, but full self-driving? No. It’s a long way off, because there are so many edge cases. The system, again, is pre-trained. If the programmers didn’t think of something, if it’s not in your training set, it will not be able to respond properly.

Peter Voss (24:02.186)
So there are these many, many edge cases that really require reasoning, that require learning something different, something new that you need to think about. And so you really want more intelligence, especially adaptive intelligence and what’s called metacognition, thinking about thinking: you have this upper-level supervisory system that kind of keeps track of what you’re saying and what you’re doing. I mean, we have that. A lot of what we do is sort of automated; it just happens automatically, subconsciously. But as long as we’re awake, we have consciousness kind of supervising what’s going on.
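
Here is a toy Python sketch of that supervisory idea, using the Alexa story as the example. The wake-word rule and both functions are invented for illustration; a real metacognitive layer would reason about context rather than match a single word.

```python
# Invented toy: a deliberate supervisory layer gating an automatic one.

def base_agent(utterance: str) -> str:
    # Stand-in for the fast, automatic layer ('system one').
    return f"(records and responds to: {utterance!r})"

def supervisor(utterance: str, wake_word: str = "assistant") -> str:
    # The supervisory layer asks: is this even addressed to me?
    if wake_word not in utterance.lower():
        return "(judged not relevant: stays silent, records nothing)"
    return base_agent(utterance)

print(supervisor("Assistant, reorder the flowers"))
print(supervisor("...a private conversation in the next room..."))
```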

John W Verry (24:47.945)
Gotcha. So it sounds as if the vast majority of what lay people like me would refer to as artificial intelligence in our everyday lives, the Tesla FSD, full self-driving, the Alexas, the ChatGPTs, these are all examples of pre-trained, non-adaptive artificial intelligence? Okay. Does adaptive artificial intelligence meaningfully exist yet?

Peter Voss (25:09.93)
Correct.

Peter Voss (25:18.278)
Well, we’ve been working on it for the last 20 years and have commercialized it. For example, the 1-800-Flowers group of companies is using it for customer support. Our current version certainly is not at full human level, but it has much deeper understanding: it can use context, can reason about the situation, and certainly has memory; it remembers what was said earlier in the conversation and in previous conversations. So we are certainly working on it, but I can’t claim the current version is there. In fact, we’ve just kicked off a major initiative to scale up our development team to accelerate that, because there is a lot of excitement now about AI, and it’s an opportunity for us to grow that aspect of our business.

But yes, I briefly mentioned earlier that DARPA talked about the three waves of AI. Very quickly: the first wave is what many people now refer to as good old-fashioned AI. That’s the kind of AI that happened in the 70s, 80s, and 90s: expert systems. IBM’s Deep Blue beating the world champion would be in that. They’re roughly rule-based systems. And then the second wave is this

Peter Voss (26:42.178)
thing that hit us like a tsunami about 12 years ago, and that is all about statistical, big-data approaches. There was a breakthrough basically around 2012, where the big companies, Google obviously, Amazon and some others, Microsoft, figured out how they could use the massive amounts of data and computing power they had accumulated to build these statistical models, these pre-trained models, and do really, really useful things. Speech recognition, for example: automated speech recognition has much improved, and translation, image recognition. I mean, we wouldn’t be as far with self-driving cars without it. It’s amazing what it can do

Peter Voss (27:40.238)
in terms of recognizing street signs and the intent of vehicles and pedestrians and so on. That’s all because of the breakthroughs in statistical learning. So that’s the second wave. But the third wave is going back to the original dream of AI to build thinking machines. And that’s called cognitive AI. So your focus here is on cognition, on intelligence. What does intelligence entail?

Peter Voss (28:08.898)
What does cognition entail? What do thinking and learning entail? We’re at the forefront of doing that development, but because statistical AI has been so incredibly successful, it has sort of sucked all of the oxygen out of the air for anything else.

John W Verry (28:27.029)
So when you get to that third wave, does the third wave replace the second wave, or does it build on it? Does cognitive AI still…

John W Verry (28:40.215)
like humans do, rely on some form of pre-trained information that it’s using?

Peter Voss (28:44.994)
Mm-hmm. Yeah, it’s a good question, and I think it really depends a lot on definition. It doesn’t really build on the second wave, but it borrows from the first and the second waves. Obviously, a lot of what has been learned with rule-based systems and with statistical systems, first and second wave, does apply and can be applied. And I think, as you kind of alluded to, humans also are pre-trained in a certain way. The psychologist Daniel Kahneman talks about system one and system two thinking. Roughly speaking, system one is subconscious, automatic thinking, and that’s sort of more analogous to the second wave, except that in our case it’s not just statistical, it’s also cause and effect

Peter Voss (29:42.622)
and the knowledge hierarchies that we have in our automated thinking. So no, it’s not an extension of the second wave, but as I say, it borrows from all the work that has been done before. For cognitive AI, the technical approach is called a cognitive architecture. So your starting point is: what does intelligence require? What are the features that human-type, human-level, human-like intelligence requires? The flexibility, the adaptability, the ability to learn, to reason, to have system one and system two thinking, theory of mind: can you put yourself in the other person’s shoes? These are all the things that are absolutely required to have human-level intelligence.

So that’s your starting point. So you then build a system that will allow you to have all of these capabilities. So you’re not starting off with, hey, we’ve got a lot of data, we’ve got a lot of computing power, that’s a hammer we’ve got, so everything looks like a nail. The first wave was, we’re very good at logic, we’re very good at mathematics, that’s a hammer we’ve got, how far can we get with that? Now the third wave is really going back to saying,

What does intelligence require? What does cognition require?
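
As a rough illustration of that starting point, here is a highly simplified Python sketch of an agent with fast recall (system one), a deliberate fallback (system two), and incremental learning. Every name and rule in it is an invented toy, not Aigo’s architecture.

```python
# Invented toy cognitive-architecture skeleton: fast recall first,
# deliberate reasoning as a fallback, and learning throughout.

class CognitiveAgent:
    def __init__(self):
        self.memory: dict[str, str] = {}  # grows incrementally over time
        # Toy 'reasoning' rule: "double N" -> compute 2 * N.
        self.rules = [("double ", lambda s: str(2 * int(s)))]

    def learn(self, key: str, value: str) -> None:
        self.memory[key] = value          # integrates new knowledge

    def respond(self, query: str) -> str:
        if query in self.memory:          # system one: fast, automatic recall
            return self.memory[query]
        for prefix, reason in self.rules: # system two: slow deliberation
            if query.startswith(prefix):
                answer = reason(query.removeprefix(prefix))
                self.learn(query, answer) # remember the conclusion
                return answer
        return "I need to learn about that."

agent = CognitiveAgent()
print(agent.respond("double 21"))  # reasoned out: 42
print(agent.respond("double 21"))  # now recalled directly from memory
```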

John W Verry (31:12.313)
So let me ask a crazy question. It sounds as if in wave two, we’re effectively creating the algorithm, the problem-solving thought process, right? And that in wave three, the artificial intelligence itself is developing the algorithm. Maybe we develop the algorithm that’s capable of developing its own algorithms.

Peter Voss (31:37.726)
Yes, exactly, exactly.

John W Verry (31:39.333)
So is that effectively what cognitive AI is?

Peter Voss (31:42.958)
That’s exactly right. Yeah. It’s sort of what I said earlier: wave one and wave two are external intelligence. The problems we are solving are solved by the engineers or by the data scientists. So with ChatGPT, they fiddle around with different architectures, different training sets, different training methods. Then they’ve got human-in-the-loop training. And it’s basically by this human ingenuity that they figure out, kind of by trial and error,

John W Verry (31:44.67)
Oh, that’s crazy.

Ha ha

Peter Voss (32:12.138)
what works. But the system itself doesn’t have anything like human-like intelligence inside it. So wave three, that’s what our task is: to build the machine that can then learn all of the different things itself, similar to the way humans learn.

John W Verry (32:29.545)
Gotcha, and a quick question for you. There’s the famous Turing test, right? Correct me if I’m wrong, because I’m not an AI guy, but I think that was posited as the test for determining whether a system could mimic a human in a way where you couldn’t tell, quote unquote; we’d call that an artificially intelligent machine. Can you pass the Turing test with wave two, or do you need to get to wave three to truly pass a Turing test?

Peter Voss (32:33.76)
Hmm.

Peter Voss (32:41.899)
Yes.

Peter Voss (32:57.358)
You’re right. Well, the Turing test has already been passed. And in fact, Alan Turing didn’t really take it all that seriously. Yet people have picked up on it, and a lot of people do believe that the Turing test is quite meaningful. For him, it was really kind of a little thought

John W Verry (33:06.107)
So it’s funny, it persisted this long.

Peter Voss (33:21.718)
experiment, you know. Now, I’ve actually written quite a lot of articles on the three waves of AI and Turing tests and machine consciousness and ethics and so on. They’re all on medium.com, and they’re also linked on our website, aigo.ai. But to answer your question about the Turing test: what I wrote in my article is that it asks both too little and too much, in different ways.

Where it asks too much is that it really asks the machine to be better than humans at lying about itself. Because if you say to a computer, divide 5,346,000 by 253, it can spit out an answer immediately, which a human couldn’t. So it not only has to be good enough to do

Peter Voss (34:16.318)
everything a human can do, but it also has to be good enough to fool you that it’s not a computer. So in that way, you’re asking too much of it. But the way you’re asking too little of it is that you really only have to fool the judges. And the Turing test was actually passed in one of the standard Turing test setups, it might be 10 years ago now, where basically somebody just gamed the system and said, hey, I’m just a

Peter Voss (34:44.002)
12-year-old kid and I’m a foreigner, and I don’t really know so much about too many things. They kind of just pressed the emotional buttons of the judges, who said, it’s got to be a human, this can’t be a machine. So yeah, it’s not really that useful. Also, we actually go to a lot of trouble to persuade our customers to be upfront, saying that

Peter Voss (35:12.19)
you are using technology, you are not talking to a human. I think that’s the honest and the right thing to do. And I think customers appreciate that: oh, I’m talking to a computer, fine. I won’t talk about the weather or yesterday’s sports scores, or my daughter winning some ballet competition. I’m just going to get my order changed, or get a question answered about how I can keep the flowers alive longer, or whatever.

John W Verry (35:47.473)
So, because I’m a risk guy, I always talk about risk. It would seem to me that the risks relating to wave two and the risks relating to wave three, A, are going to be very different. And B, I’d be curious whether you think the NIST AI Risk Management Framework is framed in wave two, because that’s where we are, or whether it would also account for what I think would be the different risks associated with wave three.

Peter Voss (36:24.374)
Yeah, so I think the framework is very good at just throwing up a lot of different aspects for the CEO and CTO and CISO to consider and think about. But I don’t think it’s particularly useful for anything more detailed than that right now.

I mean, these issues are so specific to the technology and the applications, and it’s changing so fast, and there really aren’t a lot of good answers. So it’s really the people on the ground, either developing the technology or implementing it, who need to consider these things for particular applications and use cases. And so I don’t think this framework can really

deal with the details. As I say, it’s moving too fast. The whole framework is already way too bureaucratic; who can plow through that systematically, in all its detail? You have to just kind of pick your fights, basically.

John W Verry (37:38.289)
Okay, well, I mean, we have to acknowledge, though, that there’s risk, right? So if not that framework, how would you suggest that people do manage risk?

Peter Voss (37:50.006)
Well, as I say, I sort of think of it more as, right now, really mundane risks, in the sense that they are risks that we are really familiar with and that companies should already be equipped to handle. Now, is the spread of misinformation going to accelerate with ChatGPT and automatic video generation?

Absolutely, and the defenses need to be improved for that. Twitter’s trying to do it one way, by getting people to sign up, to be authenticated basically, which is great except when it comes to whistleblowers or people wanting to say something that they’re not willing to say under their own name. I think that’s the only shortcoming;

I’m generally a fan of knowing who I’m talking to, and that could take care of a lot of things, because then you are subject to traditional remedies for false advertising, libel, and hacking, if you basically know who the actors are. And to me, that seems like probably a pretty good solution generally.

John W Verry (39:15.721)
Yeah, but what I would say, and I was chatting with someone about this, is that just because these risks exist outside of AI doesn’t mean people recognize that the same risks exist within AI. So as an example, were the users of ChatGPT made fully aware? Did the programmers, or the developers, whatever we want to call them, OpenAI in this case, really understand, and did the consumers really understand, the concept of hallucination? I’m aware of a case where an individual, a lawyer, presented a legal argument in court citing case law, and ChatGPT had hallucinated the case law. So I do feel like having some structured, logical thought process to ensure that people are thinking about the risks associated with AI, and making sure that they’re effectively managing them, makes some level of sense, no?

Peter Voss (40:24.138)
Yeah, well, let’s take that case and analyze it. First of all, if he’s a lawyer and he was doing that, he should just be responsible for it. He was using a tool, and that tool wasn’t right. That would be the same as him going to some website, picking up some made-up story, and presenting it as a legal case or legal precedent.

So I think the lawyer needs to just be responsible for the tools that he uses. And anybody, any professional at all, will know that these systems hallucinate. It becomes extremely obvious very quickly. I mean, I can ask it, hey, ChatGPT, tell me about the article I wrote about liver transplants in 2005,

and it will come and tell me about the article I wrote about liver transplants, complete with references. And I can assure you, I’ve never written one; there isn’t a Peter Voss who wrote one. Well, look, we could be hallucinating right now. Maybe this is all a dream, you know.

John W Verry (41:29.877)
Are you sure? Are you sure, Peter? All right, I’m just checking. You’ve developed a lot of AI stuff; maybe there’s a Peter Voss AI. According to Elon Musk, there’s a 50% chance that this is just a matrix, right? That this is a sim. We’re in a sim.

Peter Voss (41:49.314)
Right, but then we’ll just have to continue playing that game. So I think any professional can’t hide behind that at all, that they didn’t know. I mean, that’s ridiculous. And any company selling stuff like that and claiming that it can be relied on for medical advice or the like should be sued. As I say, we have remedies; it’s false advertising.

So I don’t think that risk really is that different; people are quite aware of the limitations of the technology. Now, there’s a whole risk industry as well, where people have a livelihood at stake, and one needs to also be aware of it. One of my pet peeves is that the AI risk industry

has been funded to the tune of hundreds of millions of dollars, and there are thousands of people now making a livelihood out of it, while there are zero dollars spent on debunking some of the claims that they make. So you basically have people who are extremely eloquent, very well educated,

Peter Voss (43:12.686)
who can be very persuasive, because that’s their job, that’s what they’re getting paid for; the more books they sell, the better. Bostrom is one of those people. He’s very smart, very well educated, and he basically makes a living out of telling people how dangerous AI is. But there is no counterbalance; there’s nobody who’s getting paid to critique his work. I mean, I’ve written some things, some other people have written some things, but

Peter Voss (43:40.79)
That’s not our job, that’s not our expertise. We don’t have a whole industry that can counterbalance these risk claims. So a lot of the risks are cherry picked or just overblown.

John W Verry (43:54.465)
Well, let me ask you a question there. If we got to a point where we have artificial intelligence at that cognitive, adaptive level, intelligence that mimics a human: we know that many humans are good, but we know there are obviously some very evil humans.

Peter Voss (44:13.467)
Mm-hmm.

John W Verry (44:28.169)
What would prevent cognitive AI from becoming evil?

Peter Voss (44:35.498)
Yes, a very important question, and one I’ve definitely thought a lot about. In fact, a lot of the research that I did was also in philosophy and ethics: really understanding what is right or wrong, how we know right from wrong, how an AI can know, whether an AI will have free will, whether it will be conscious. Those kinds of questions. It’s obviously a very complex topic, but I can try to give a brief answer. A lot of the things that humans do that we regard as bad or immoral, but let’s keep it more neutral and just call it bad, wrong, counterproductive, are because of the inherent way we are built, the emotions that we have. And the example I often give here is 9/11. Let’s take 9/11. Obviously very emotional for humans.

John W Verry (45:19.991)
Mm-hmm.

Peter Voss (45:34.574)
There are basically three things that our emotions drove us to do. The first is an emotional response: we’ve got to hit out, we’ve got to get back at somebody. So we’ve got to hit somebody, and Iraq was there, Afghanistan was there, so let’s hit out. That was just kind of an emotional response. The second thing is we tend to work with limited information;

Peter Voss (46:03.67)
we’re not that good at gathering information. It takes a lot of work to gather information. So, weapons of mass destruction? Yeah, good, check the box, sounds like a good enough reason. We’re not really that good at asking, well, does that actually make sense? And the third thing, related to that, is we are just not very good at thinking, at reasoning things through, at asking: will this action actually give us the results we want, the outcome we want? Does what we’re doing actually make sense? Rational thinking is an evolutionary afterthought, I like to say; we’re not terribly good at it. It’s something we have to learn, to think rationally. So a lot of the things that we do that we regret, that are wrong or bad, are for those evolutionary reasons, just the way we are as humans. And AI

will not be built with those emotions. There’s no reason to build an AI that has survival and reproductive drives and the emotions related to them. So that automatically shifts the bias much more into the positive realm: it’s much more likely to do positive things than to do stupid things, to put it plainly. Now, the AIs that we build, that are going to be commercially viable, that we want to build, are going to be assistants. I mean, why am I so passionate about this topic of building human-level AI? I see a world where we have AIs that can be PhD-level researchers. Imagine training up one AI as a PhD-level cancer researcher, and then

Peter Voss (47:59.018)
making a million copies of that: you now have a million PhD-level researchers. That’s what they were built for; that’s what they will do. What drive would they have to want to take over the world or do anything else? Similarly, you can have these high-level researchers help us solve energy problems, pollution, climate change, poverty, and also governance: they can help us think through how we can manage society better. And personal assistants that can help us think things through. So when something traumatic happens, like 9/11, or something not nearly as traumatic, we can have an AI that helps us think things through: what actually makes sense? What will further our lives? What will further our flourishing?

So we’ll have these AIs. So I don’t see it. Yes, if you look at the movies, it makes for a good movie that when the AI wakes up, it wants to take over the world because, I don’t know, humans treated Windows badly, or they shut down Clippy and now it wants to take revenge and take over the world, or whatever.

Peter Voss (49:24.342)
People have seen too many movies. In fact, I was once asked to give advice on an AI movie script for one of the famous directors. I looked at the script and gave some advice on technical things and so on. Then I got back to them and said, but why does it have to be so negative at the end? Why can’t the AIs actually be helping humans improve their condition?

Well, no, that’s not the kind of ending we want, you know. I guess it’s not gonna sell.

John W Verry (49:58.282)
So I wonder how many people actually got the Clippy reference. Some of us older folks might have gotten that one. Well, we could talk about this forever, and neither one of us has forever. So let me ask you a little bit more about the work that you’re doing. If I recall correctly, your company’s called Aigo or something? Aigo, Aigo. That’s it, Aigo.ai.

Peter Voss (50:04.878)
Hmm. Right.

Peter Voss (50:17.318)
Aigo. Yeah, yeah, yeah. Aigo.ai.

John W Verry (50:22.977)
Tell me, I’m curious now as to what you guys are doing, what you’re working on, and where you see things going in what period of time.

Peter Voss (50:36.814)
So we have a cognitive AI engine framework that we started developing 20 years ago, and we’ve been commercializing it since then, mainly in the call center space, automating calls, both phone and chat. As I said, one of our great clients is the 1-800-Flowers group of companies, with Harry & David and The Popcorn Factory and so on.

So that’s how we deploy our technology commercially. It can also be used for internal support, HR support. It can be used as a Clippy of sorts, an assistant for software: if you have complex software, it can help you navigate, so you can just tell it what you want. For example, with business intelligence software, you could just say, give me this graph, and then, no, change this or change that, and it can operate as a natural-language front end to complex software. It can also be used as a personal assistant, say, to help somebody manage their diabetes. It can learn the kinds of foods you like, whether you love broccoli or hate it, whether you’re vegetarian. It can learn your routine and then help coach you to manage diabetes. Or another personal example would be

Peter Voss (51:58.794)
for college students, especially when you’re just getting to college and everything is overwhelming: you don’t know how to find your way around, or where to get books or food, or your curriculum and classes, and you want to connect with other people who have similar interests and get help studying. So those are the kinds of things we can do with our technology, with our cognitive AI. And our company is really two parts. The one part is the commercialization,

which has its own unique requirements. Obviously, security, reliability, predictability, all of those things are very important. And our system is completely scrutable; it’s not a black box. So you know exactly why the system is saying what it’s saying, and you know what you need to change if you have new requirements. So that’s on the commercial side, where obviously a lot of the focus is on the

Peter Voss (52:56.334)
predictability, security, and scalability aspects. And then the other part of our business is to continue cranking up the IQ of our system, to get closer and closer to human-level intelligence. As I said, we’ve just launched a major project; we have some funding and are looking for additional funding to significantly increase the number of people working on the development side, to crank up the IQ and get to human level.

Now, when people ask me when we are going to get to true human-level intelligence, I usually say we don’t tend to measure it so much in years as in dollars, because the research has largely been done. We’ve spent many, many years doing research, so we know what needs to be done; we simply don’t currently have the resources to do it very quickly. So that’s what we’re trying to scale up. But to put a time on it: certainly, if the right people work on the right technology, basically meaning cognitive AI as opposed to statistical or generative AI, I’m convinced we can have this in less than 10 years. In fact, it could be five.

John W Verry (54:14.773)
So is it one of those things that accelerates? And can you use wave two to help advance wave three? So can you use these pre-trained mechanisms in such a way that they speed up that process of moving to wave three? They contribute to it.

Peter Voss (54:31.474)
Yeah, you actually hit on something really important here. One of the reasons we are actually accelerating our project is because of what ChatGPT and other large language models can offer. One of the huge barriers has always been: how can we teach the system all of the common-sense knowledge that we as humans acquire just by living in the real world? To me, that had always meant, well, we’re going to need to hire probably a thousand people working for a few years to teach the system all of the background knowledge it needs to know about people and things and places and traditions and whatnot.

But these large language models, properly curated, if you basically have a system that can think about what it’s ingesting and not just blindly take in massive amounts, trillions of pieces of information, can speed up the training process. And that’s a lot less intimidating now, because of the success of GPT. So you put your finger right on it. Yes, absolutely, that is something that is much less of a problem now with these large language models.
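
Here is a minimal Python sketch of that “properly curated” ingestion idea: candidate statements from a language model are vetted before being integrated, rather than swallowed blindly. The checks and the stubbed LLM call are invented placeholders, not a description of Aigo’s actual pipeline.

```python
# Invented illustration of curated ingestion; not a real pipeline.

def llm_candidate_facts(topic: str) -> list[str]:
    # Placeholder for querying an LLM; hard-coded so the sketch runs.
    return [
        "Water boils at 100 C at sea level.",
        "As an AI language model, I cannot...",  # boilerplate, not a fact
        "Water boils at 100 C at sea level.",    # duplicate
    ]

def curate(candidates: list[str]) -> list[str]:
    seen, kept = set(), []
    for fact in candidates:
        if fact in seen or fact.lower().startswith("as an ai"):
            continue  # drop duplicates and obvious boilerplate
        # A real system would also check consistency against existing
        # knowledge and flag low-confidence claims for review.
        seen.add(fact)
        kept.append(fact)
    return kept

knowledge_base: list[str] = []
knowledge_base += curate(llm_candidate_facts("physics"))
print(knowledge_base)
```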

John W Verry (55:43.973)
You know what they say, Peter: the blind squirrel occasionally stumbles on an acorn, and I guess I did. So, like I said, I could spend hours chatting with you, because I think this stuff is fascinating. Is there anything we missed, or anything that you would like to close with?

Peter Voss (55:49.538)
Hmm. Ha ha ha.

Peter Voss (55:57.603)
Hmm.

Peter Voss (56:05.118)
No, I think for anybody interested in this field, we are looking for collaborators. As you might have noticed, I love talking about this stuff; it is my passion. On our website, aigo.ai, we have a resources tab, and it has links to my articles. You can also find them on medium.com. It’s easy to find me on Twitter, LinkedIn, Facebook, a number of different channels, or just email me, peter at aigo.ai. I’m always very happy to brainstorm with people and to have collaborators help us speed up this dream of having human-level AI assistants that can help us solve the many problems that face humanity and really optimize our flourishing.

John W Verry (56:59.861)
Yeah, that idea of a thousand highly knowledgeable bots, or AI engines or whatever you refer to them as, concentrating on solving specific problems that we’re having is a really cool image. And I can understand your obviously passionate optimism for where this might go.

Peter Voss (57:26.146)
Great, thank you.

John W Verry (57:28.084)
So thank you, this has been a lot of fun. I appreciate you coming on.

Peter Voss (57:31.459)
Thank you.