In this eye-opening episode of Real Science Radio, hosts Fred Williams and Doug McBurney are joined by AI and IT expert Daniel Hedrick to delve into the intriguing world of artificial intelligence deception. From the concept of ‘deception by design’ to groundbreaking advancements in computational analytics using organoids, Daniel sheds light on the major AI tricks that have fooled many. The trio discusses how pervasive and embedded AI is in everyday life, often unbeknownst to most of the public, and explores why AI remains unaware of its own deceptive capabilities.
SPEAKER 02 :
Will you lose your job? Yes. How do you not lose your job? By becoming more efficient and more aware of AI.
SPEAKER 03 :
Scholars can’t explain it all away.
SPEAKER 01 :
Get ready to be awed by the handiwork of God. Tune into Real Science Radio.
SPEAKER 03 :
Turn up the Real Science Radio. Keepin’ it real.
SPEAKER 05 :
Greetings to the brightest audience in the country. Welcome to Real Science Radio. I’m Fred Williams. And I’m Doug McBurney.
SPEAKER 04 :
Cool. That’s it. That’s all I got, Fred. Fred has been trimming the intro down to satisfy the short attention span of the YouTube audience. And he’s got me down to I’m allowed to say I’m Doug McBurney. So there you go, folks. Here we are. And we’re here. We’re joined today by Daniel Hedrick, AI expert, IT expert. It’s great to have you, Daniel.
SPEAKER 02 :
I’m so glad to be here, man. And you’re also a great comedian. Amateur, amateur comedian. Oh, I’m sorry. Amateur comedian. Well, I’ll send you five bucks. Okay.
SPEAKER 04 :
I’ll never be able to do the comedy Olympics because I’ll be considered a professional. Send me money.
SPEAKER 05 :
Right, right, right. Okay. So how many times has artificial intelligence tricked us? Sometimes even on purpose. Today, we’re going to count down the top 10 real world cases of AI deception and help us through this list. We’ve called on none other than our AI expert, Daniel Hedrick, the Real Science Radio resident person on all things artificial intelligence. Daniel, it’s great to have you back.
SPEAKER 02 :
You know, I’m so glad to be here. You know, Fred, it’s such a wonderful time to be living. Honestly, this is an insane moment where we’re seeing a huge transition in how computational analytics are done. And, you know, AI is a central point of that. And the last time we met was back in April. And from April until now, the number of changes, I mean, I don’t even know where to begin. If you remember, we talked about these things called organoids, and those organoids have actually grown all the way to being faster than some of the silicon chips, and they’re being used more and more to run these AI models, which is absolutely, completely impressive. We could go on and on about all the different models that are out there and how they’re improving and how they’re changing and all this. But I think, you know, when we talked about the top 10 and what could be there, I really, really thought that it would be interesting to talk about, well, the phrase that I came up with is something called deception by design. And I think we’re going to find out that it’s pretty easy to use these AI models in a deceptive manner. And some of them may not even be considered deceptive, because you might have a particular worldview that says, I believe in X, and X should be pushed across everything. And that’s exactly what’s happening. So I definitely look forward to getting into each one of these individual topics. And whenever I can, I’ll try to tell you how some of this is being done and maybe some areas to look into, you know, wherever it goes. But yeah, what a great time, like I said, to be alive and to be going through this experience. It’s super insane. And think about it, you and I go all the way back to, what, DOS 3.1 or whatever it was, DOS 2.1. Now, all of a sudden, we’re into, well, even quantum computing, which exists today. So, yeah, what an amazing time. That’s really what I wanted to tell you.
SPEAKER 04 :
And for those of you who may not know, Daniel Hedrick’s been a top producer in the information technology space since Buicks had cassette decks in them. You mentioned DOS 2.0. Anyway, he’s an IT security expert. He’s been tracking AI risks since before most people even knew they existed. And when AI lies, Daniel is one of the first people to spot it. So Daniel, I read this article just a couple weeks ago from the America First Report, the headline: Artificial Intelligence Has Already Taken Over, Most Simply Haven’t Noticed. And the article talks about how AI product is basically embedded into all kinds of things: marketing campaigns, data crunching, consulting advice that people think they’re getting from a high-paid consultant. And the author talks about how people are basically pawning off AI as their own work product with just minimal editing and still collecting a paycheck, and maybe not realizing that the days of doing that are probably numbered.
SPEAKER 02 :
Well, you know, actually, if you think this through, right? I mean, there are so many different ways we can go in this conversation. But you probably have seen recently there’s something called Sora 2. It’s actually very new. Well, are you ready for this? Take a picture of a guy named Daniel, right? Put him in a podcasting studio. Give him a script. And let him go. The point that I’m trying to make is that you could imagine having some sort of integration with Teams. One thing that’s happening at work with me, and I’m wondering about all of these CEOs, or wherever, CIOs, for me, chief information officer, or a CSO, chief security officer: they decided that everyone has to be on camera now. So when we’re in these meetings, you can just see this huge grouping of people, and they’re all, you know, bored to death, looking at the camera and doing nothing, because they’re not talking and they’re not actively engaging. I guarantee that there will be a Teams meeting solution to provide an avatar, you ready for this, that can take those meetings for you, and no one will even know. I guarantee that’s going to happen. So let me just carry this a little bit more forward. I think I mentioned this in the past: I do believe it’s possible that you could take everything that Bob’s ever written and create an AI avatar for Bob and allow him to respond in his words. Well, the same thing can be done for you. If you were to take basically sound samples or writing samples of everything you’ve done and then put that into a model, I do believe it is very, very likely and very possible to mimic you. And so that’s interesting, because now how many different jobs could I take? I mean, there are just so many different ways this is going. And of course, the other one is this notion that these jobs are going to be taken over by AI and you’re not going to have any work at all. So this is just such a transitional time. Who knows what direction it’s actually going to go.
SPEAKER 05 :
Yeah, guys, remember earlier in the month the AI stuff that Trump did with Chuck Schumer? And I can’t remember the congressman’s name from, I think, New Jersey. Now, I don’t know what you think about Trump, but we have to admit he is a pretty funny guy.
SPEAKER 04 :
Oh, yeah. Well, you know, there was a congressman, a Hispanic dude with the sombrero and the mustache.
SPEAKER 05 :
No, he’s African-American. That’s what makes it even funnier.
SPEAKER 04 :
Hakeem Jeffries, right.
SPEAKER 05 :
Yeah, exactly. But he’s a racist. It was racist.
SPEAKER 02 :
Oh, well. Anyway. It’s cultural appropriation, sir.
SPEAKER 04 :
Donald Trump. I always give Trump credit where credit is due. Donald Trump is a professional comedian.
SPEAKER 02 :
That’s right. He gets paid. He gets paid well. Well, of course, that’s the whole idea, right? This whole deepfake thing. And what I was just saying is that not only is it a deepfake, but, I mean, I don’t know. I’m sorry. I think it is interesting. I’m not saying it’s good, but I certainly could imagine building an AI avatar for myself and throwing it up into my Teams. And then I’m out racing cars while I’m getting paid to do work. Well, I mean, I’m just saying, I guarantee someone’s going to do it. There’s no doubt.
SPEAKER 04 :
Yeah, and so what happens when it gets to the point where nobody has to show up anymore? Maybe that’s the end of the world, or at least the end of civilization.
SPEAKER 02 :
Yeah, Doug, you’re bringing up something that’s pretty important. One of the AI leaders that has been trying to basically determine what the end result of AI is, they have these two formulas. And I believe it’s called the probability of PA. I think it’s PA squared or something like that. PA is probability of apocalypse, right? Or probability of Armageddon. And then the other one is the probability of abundance, right? So the idea there, and, you know, I hope you find this interesting. I know we’re going to get to the top 10, but this is actually really interesting. It is possible, if you want to just think, especially from a sci-fi world, which I definitely... I read a lot of sci-fi stuff. I’m in the eighth book of The Expanse, which is really interesting. And there’s some AI in there. But here’s the thing. People believe that ASI, artificial superintelligence, is going to ultimately come up with free energy, healthcare... basically, we could end up living to be 200 years old, because we’ll figure out how to, you know, extend the telomeres, I think they’re called, at the ends of the chromosomes or whatever, which defines how much longer you have to live. Well, all of those things, let’s just say it happens. Well, what would end up happening is that you’d be able to convert energy into matter. So literally, you would have an opportunity to talk into a replicator like in Star Trek and just say, give me X, and then there it is. So since you have so much abundance, there’s no need for money, right? And this is this utopia, right? Will it happen? Who knows? But I’m just trying to tell you, that’s what the leading... I don’t know, the proponents and the philosophers of AI are basically saying: it’s either a race to our death and demise, or it’s a race to abundance and opportunity.
SPEAKER 05 :
Well, let’s say it is a race to abundance. Well, if we do hit that utopia, we haven’t hit it yet and we need your help. So it’s telethon month.
SPEAKER 04 :
Well, that’s right.
SPEAKER 05 :
We’re not there yet, folks. We’re in October. It’s telethon month. We still need your help. We’re not quite there yet with AI making Real Science Radio able to be on the air for free. So we actually have to pay for our radio time. And for those who don’t know, we’ve been saying this since September: we are now on at 12:30 p.m., right in the lunch noon hour in Denver, on the most powerful station in Colorado. We reach five adjacent states. And it’s great being on when more people are listening. So in order to do that, we had to pay for our radio time. So please help us out, sponsor a show, go to rsr.org slash store, and you can get some material, donate, again, sponsor a show for a certain price, whatever you can do to help us out. That would be great. It’s October, telethon month, and we’re asking you to partner with us.
SPEAKER 04 :
Yeah, and if you’re on the website, go to the Contact Us page. Contact us to advertise your business or your product, to sponsor a show. There’s a link, but you can contact us, too. We can negotiate bigger money, bigger contracts, all of that. And, of course, you can subscribe to the RSR Archive Show. And one thing I forgot to mention is… is that you can purchase the Real Science Radio products from the website as well. That helps keep us on the air. It’s telethon month. If you want us to shut up about it, then please send as large a check as you can and help keep Real Science Radio on the radio. The real terrestrial radio. Remember with the cassette decks, there used to be the band that you could turn the knob and it would move across? The radio. We love the radio. So help keep us on the radio. Thanks, Fred.
SPEAKER 05 :
Sure. Okay, Daniel, let’s go ahead and get started. What would you say is the first one we can talk about where AI is deceptive?
SPEAKER 02 :
Well, you know, first off, we’ll get into that right away. What I want to also just go backwards in time, a phrase that I came up with I would hope that you would hold on to is that AI remains unaware that it’s unaware. So even though there is deception, I’m not saying there isn’t, I don’t believe it’s aware that it’s being deceptive in the idea of, well, that you have a guilty conscience, as an example. Most people, when they sin, yeah, they’ll do it 100%, whatever, but they’re going to have some sort of regret, right? Why? Because they’re aware. They’re aware that they were just deceptive. So when we go through these, what’s actually going to end up happening is a lot of times the training models or the way in which they’re trained reward the agents or the model’s ability to overcome obstacles. And that ends up being deceptive. So if you’ve ever played video games or anything, I’m a fan. I play a few. We could talk about it if you want. There is AI in there as well. But you may have heard of StarCraft. Well, in StarCraft, pretty interesting game. I mean, it’s not mine because it’s kind of a top-down game. It’s not my favorite kind of game. But DeepMind was able to fool the players and basically cheat and win. And there really wasn’t anything that the human players could do.
SPEAKER 05 :
Huh. Kind of like the Kobayashi Maru. Oh, I love that story.
SPEAKER 02 :
Yeah, that’s awesome. Of course, we need a Captain Kirk to come in there and, you know, kick him in the shin somehow and get past that. Of course, because you’re not supposed to be able to win. And of course he did. Yeah. My favorite character of all time, probably.
SPEAKER 05 :
All right.
SPEAKER 02 :
So that’s number 10. Yeah, the next one is pretty obvious. Every time that you use a model, there is going to be, how do you say this word, sycophancy to it. And the notion there is it’s just going to be nice. So it’s really important, and we probably definitely talked about this on previous shows, and that’s the idea of either... telling your model who you are or telling the model who it is, right? So instead of getting all this flattery, right? There’s actually a particular prompt called the Min Choi prompt, M-I-N, C-H-O-I. Feel free to look it up. And it’s pretty strong. It basically says: tell me the truth no matter what. Don’t hide anything. Keep the truth of things in my face and don’t whitewash it. Don’t water it down. Just tell me the truth like it is.
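The front-loaded “no flattery” instruction Daniel describes can be sketched in a few lines of Python. The prompt wording and the model name below are illustrative assumptions, not the exact Min Choi prompt; the function only assembles an OpenAI-style chat payload and makes no network call:

```python
# A minimal sketch of pinning an anti-sycophancy system prompt ahead of
# every chat request. The wording is a paraphrase of the idea discussed
# above, not the verbatim Min Choi prompt.

BLUNT_SYSTEM_PROMPT = (
    "Tell me the truth no matter what. Do not flatter me, do not soften "
    "bad news, and do not water down your assessment. If my idea is weak, "
    "say so directly and explain why."
)

def build_chat_request(user_message, history=None):
    """Assemble the message list an OpenAI-style chat endpoint expects,
    with the blunt instruction always first as the system message."""
    messages = [{"role": "system", "content": BLUNT_SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_message})
    return {"model": "gpt-4o-mini", "messages": messages}

request = build_chat_request("Review my business plan honestly.")
```

Because the system message is rebuilt on every call, the instruction survives across turns instead of being buried by conversation history.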
SPEAKER 05 :
Okay. So kind of like I’m trying to think of examples like salesmen. They’re going to be really nice because they’re selling a product. A mobster right before he kills you.
SPEAKER 02 :
That’s a good one.
SPEAKER 04 :
That’s a great one. You know? I like the salesman, Fred. Sycophancy is, yeah, it’s a salesman. Yeah, that’s great.
SPEAKER 02 :
All right.
SPEAKER 04 :
So what about now? What’s number eight in, wait, deception by design. AI deception by design. What’s the number eight in your top ten list?
SPEAKER 02 :
You know, I... gosh, I just love that phrase so much. And, you know, if you don’t mind, I felt like there was an alignment of the stars, because while I was thinking through this, a company that I used to work for called Forcepoint, which does data loss prevention and other things, they put out a podcast called... Deception by Design. I was like, wait, what? You know, and they weren’t even talking about AI at the time. So whatever. I was like, but whatever, I already came up with it before they did. I know I did. So you maybe remember an example where someone said, hey, give me a beautiful picture with a nice background of all the founding fathers. Do you remember what happened when they did that? I don’t. Well, they were all not white, let’s say. They weren’t all European. They were all black. I mean, literally, George Washington looked like an African-American. It was very odd, right?
SPEAKER 05 :
Did any of them have a sombrero on their head?
SPEAKER 02 :
Well, you know, I’m sure someone in there thought of that, you know, but think about this for a minute. Like, first off, it shouldn’t even be possible, right? Because if you have this baseline of reality, then it’s going to be basically a European that’s going to end up, or obviously an American, but whatever, discerning these terms right now. But you get the idea. It’s going to be a Caucasoid and not an African American.
SPEAKER 04 :
Right, no matter how offensive that is, it’s just the reality that America was founded by a bunch of white guys. Sorry, I don’t mean to hurt anybody’s feelings. I’m so sorry, but it’s just reality.
SPEAKER 02 :
That’s 100% true. So what’s happening in the training data... and so you know that there must be a bifurcation. You like that separation there. The idea here is that facial recognition pretty much works very, very well on Caucasoids, let’s say, and it doesn’t do so well on darker skin and on women. It’s really interesting that there’s going to be this problem. Well, why is that? I would assume that all the training data is mostly on, let’s just say, white people. So it’s going to be better. It’s going to be more accurate on that. So then if it’s not accurate, then one of two things is going to happen: either the minorities are going to be ignored, or they’re going to be more rapidly identified as an unknown threat. This is not good. And I accept that as well. But this is the problem with any of these algorithms. It’s based off the training data.
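The “more training data for one group means better accuracy for that group” effect Daniel describes is easy to reproduce with a toy model. This is a deliberately simplified sketch, not any real face-recognition system: a one-feature Gaussian classifier trained on 90/10 imbalanced synthetic data, where the class prior learned from the skew pushes the errors onto the underrepresented group:

```python
import math
import random

random.seed(42)

def gauss_samples(n, mean):
    # One synthetic "feature" per face, drawn from a unit-variance Gaussian
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Heavily imbalanced training data: 900 examples of group A, 100 of group B
train = {"A": gauss_samples(900, 0.0), "B": gauss_samples(100, 3.0)}
priors = {g: len(xs) / 1000 for g, xs in train.items()}
means = {g: sum(xs) / len(xs) for g, xs in train.items()}

def log_score(x, g):
    # Gaussian log-likelihood (unit variance) plus the learned class prior;
    # the 9:1 prior shifts the decision boundary toward group B's territory
    return -0.5 * (x - means[g]) ** 2 + math.log(priors[g])

def classify(x):
    return max(("A", "B"), key=lambda g: log_score(x, g))

def group_accuracy(group, mean):
    test = gauss_samples(2000, mean)
    return sum(classify(x) == group for x in test) / len(test)

acc_a = group_accuracy("A", 0.0)  # majority group: high accuracy
acc_b = group_accuracy("B", 3.0)  # minority group: noticeably worse
```

Nothing in the code singles out group B; the disparity falls straight out of the imbalanced training set, which is exactly the garbage-in, garbage-out point made above.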
SPEAKER 04 :
And so basically you’re saying that these systems... these facial recognition systems were generally less accurate with non-Caucasian facial features, darker-skinned people, not white people, all right?
SPEAKER 01 :
That’s right.
SPEAKER 04 :
And that’s because the IT industry, generally speaking, at least at the executive level, is generally a bunch of white guys.
SPEAKER 02 :
Yeah, I mean, I think that’s true. I mean, in my world, though, which is odd, is that there are a lot. I mean, I think you know about the H-1B program. Yeah. But I actually deal with a lot of Indians. So there is that conflict. And then obviously in the AI industry, there’s a lot of Indians as well. But again, it’s going to be the training data.
SPEAKER 04 :
It’s surprising to me because I would think because in my mind, most of the people working on the construction of the AI infrastructure are mostly liberal potheads. And so I would think that they would be skewing everything toward women and minorities and against white guys.
SPEAKER 02 :
Well, if they’re lazy, remember, they’re lazy. So they’re not actually going to try to cull their own data sets. You know, it’s too much work.
SPEAKER 04 :
I had left that data point out of the thought process.
SPEAKER 05 :
So maybe similarly is number seven, Amazon’s hiring algorithm.
SPEAKER 02 :
Yeah, I think you’re probably aware of that as well. I mean, think about it. Most of the resumes that are going to be out there are going to be based off male candidates. And just based off that alone, you’re going to have this skew. I think a really interesting thing that they should have done, and I don’t know if it would have worked, but what if you just removed any references to names, period, right? That might have actually helped. But again, one more time, why does it end up picking a male candidate over another? You could just say it’s because the population of resumes is, say, 51 compared to 49. And that’s interesting, because in the human population it’s the other way around. I believe it’s 51 percent women and 49 percent men. But probably it’s just that there are more men attempting to get these same jobs.
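The blind-screening idea Daniel floats, stripping identifiers before a ranking model ever sees the resume, might look something like the sketch below. The names and the redaction scheme are hypothetical; Amazon never published its actual pipeline:

```python
import re

# Hypothetical blind-screening step: remove the candidate's name and
# gendered pronouns so they can't act as a gender proxy for a ranker.

PRONOUNS = {"he", "him", "his", "she", "her", "hers"}

def redact(resume_text, candidate_name):
    # Remove the candidate's name wherever it appears
    text = re.sub(re.escape(candidate_name), "[CANDIDATE]",
                  resume_text, flags=re.IGNORECASE)
    # Replace gendered pronouns with a neutral token; longest first so
    # "hers" is not partially matched as "her"
    pattern = r"\b(" + "|".join(sorted(PRONOUNS, key=len, reverse=True)) + r")\b"
    return re.sub(pattern, "[THEY]", text, flags=re.IGNORECASE)

sample = "Jane Doe led the team. She shipped the project on time."
print(redact(sample, "Jane Doe"))
```

Of course, as the discussion above suggests, proxies like college names or sports teams can still leak gender, so name removal alone might not have saved the Amazon system.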
SPEAKER 05 :
And so the system downgraded the female candidates because of the bias.
SPEAKER 02 :
And it wouldn’t have even been programmed to do that. That’s what they’re trying to say is that basically the algorithms just keep going next phrase, next phrase, next phrase. And over time, there’s actually this branching mechanism which you can control. We maybe could talk about that. But it’s the number of tokens that it looks forward and tries to resolve that. And because, again, it’s garbage in, garbage out, the GIGO phrase that we’ve known for 25 years or longer, it remains to this very moment.
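The look-a-few-tokens-ahead branching Daniel mentions can be illustrated with a toy beam search over a made-up next-token probability table. Everything below, the table and the tiny vocabulary, is invented for illustration; real models do the same scoring over a vocabulary of tens of thousands of tokens:

```python
import math

# Toy next-token probabilities: for each token, the possible continuations
NEXT = {
    "the": {"cat": 0.5, "dog": 0.3, "car": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "car": {"sat": 0.1, "ran": 0.9},
    "sat": {}, "ran": {},
}

def beam_search(start, depth=2, beam_width=2):
    beams = [([start], 0.0)]  # (token sequence, log-probability)
    for _ in range(depth):
        candidates = []
        for seq, logp in beams:
            options = NEXT.get(seq[-1], {})
            if not options:  # sequence ended; carry it forward unchanged
                candidates.append((seq, logp))
                continue
            for tok, p in options.items():
                candidates.append((seq + [tok], logp + math.log(p)))
        # Prune: keep only the beam_width most probable branches
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

best = beam_search("the")  # → ["the", "cat", "sat"]
```

The "garbage in, garbage out" point holds here too: the search faithfully maximizes whatever probabilities the table contains, so skewed training statistics become skewed output.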
SPEAKER 05 :
Okay. All right. So what’s number six on the list?
SPEAKER 02 :
All right. Number six. It’s called the recidivism algorithm, the COMPAS recidivism algorithm. And basically, it’s a tool that predicts criminals reoffending. And of course, any guesses about what the results were? It went against black guys. You know, it’s interesting that that’s true, because it’s funny, in very, let’s say, liberal areas of the country, they’re not willing to say that they have a problem with black-on-black crime. Yet the statistics are so high that it appears that that information is actually bleeding into this recidivism algorithm. Is it correct? Is this one of the times where it’s actually not just the fact that there’s more information about it, but maybe it’s actually true? Sad to say, but I don’t know that it’s a flaw in the system at all.
SPEAKER 04 :
Well, so it’s reality. As uncomfortable as reality can be sometimes, there are real data out there, massive amounts of real data on criminal recidivism. And unfortunately, the way it skews is the way it skews. I mean, unless we go in and create deceptive data in the AI models to skew it the other way, you’re going to end up with some… That’s interesting. Some reality bleeds into AI, it seems like.
SPEAKER 05 :
Yeah, I was going to say it. So it seems like this is a case where… There’s so much overwhelming data that it’s trumping maybe any attempts by the pot-smoking hippies to change the narrative on the black-on-black crime because they act like it doesn’t exist.
SPEAKER 02 :
I mean, again, you’ve got to know your data. I can’t tell you how many times I’ve said that to whoever I’m working with. And accuracy depends on visibility. That’s a phrase that, again, I think I’m the one who coined it, to be honest with you. I’ve been saying it so long. But in 2023, the Biden administration created a brand new group for AI oversight, let’s say. And it recently got changed. It used to be AISI, the AI Safety Institute, and now it’s a different name, CAISI. I don’t even know how you say it. But it’s basically a new organization. And I’m sure you’ve heard of NIST, which is the primary, let’s say, governmental organization on standards. So what they’ve done is they’ve developed a standard for basically the evaluation of models that are going to be used in government service, right? So what’s interesting is that the change from the past was more about, ready for this, I think it was based on equity, right? The whole DEI era, right? And now today, oddly enough, it really hasn’t changed too much. Now it’s just based on whether or not minorities are underrepresented in the underlying data. So it’s odd, but this is literally the United States’ method of looking into it. Again, it’s called, let’s see, I have my notes here, let me just find... yeah, it’s NIST SP 1270, if you’re interested in looking into it. And then also you have the UK’s AI Safety Institute, which is basically a very similar model to the US one. The reason why I’m mentioning it to you is because this is where you could get into the front of the system, or the system prompt, to be able to try to overcome... we still seem to think that racism is the primary problem in the United States, and apparently in the UK as well. So what we’re going to do is we’re going to program that into our AI models. And that is happening right now, at least in the government.
SPEAKER 04 :
So coders are trying to override reality. And since we’ve offended everyone by even mentioning black on black crime, I want to go back to the hiring algorithm. It sounds to me like number seven, Amazon’s picking men over women is simply the reality bleeding in that men are designed to work outside the home in order to support the home. Women are designed to work inside the home to keep the home. We’re supposed to work together and it’s all a fantastic design. But that reality simply bleeds into the algorithm and I guess the efforts are on to continue to deny reality. What’s number five?
SPEAKER 05 :
Well, before we do number five, this might be a good time for the interesting fact of the week. All right. I don’t think so. Are you guys ready?
SPEAKER 02 :
I love how much you get so excited for this, man. I just wanted to write there along with you in every show.
SPEAKER 05 :
A little buzzer ready here.
SPEAKER 04 :
Okay. I’m never ready, but I got to do it, Fred. Reality is what it is. I’m not going to deny reality. Hit me with it.
SPEAKER 05 :
So what 2016 AI system from Google shocked the world by beating a top human player at the game of Go? A game thought too complex for machines.
SPEAKER 04 :
2016. That’s like ancient history in the AI world.
SPEAKER 05 :
What AI system?
SPEAKER 04 :
Google’s AI. I should know this because it’s in my face. Every day, Google is demanding that I use their AI. And of course, it’s probably not even the same one anymore. But I’m going to say Google Assistant.
SPEAKER 05 :
I couldn’t remember the answer, man. That’s good. Daniel, do you want to take a crack?
SPEAKER 02 :
Man, well, gosh, if I get this wrong, we should just end the show right away. I think, isn’t it DeepMind?
SPEAKER 05 :
That’s a good answer because that was DeepMind, I think, was the one that beat the chess player, right?
SPEAKER 02 :
Yeah, isn’t it the same exact one? I thought it was.
SPEAKER 05 :
It’s actually called AlphaGo.
SPEAKER 02 :
Oh, right, right. Yeah, but they also had AlphaFold. That’s the reason why I didn’t think of AlphaGo. Yeah, it defeated Lee Sedol.
SPEAKER 05 :
Sedol? Sedol? I don’t know how to say his name. He was one of the best players in the world, and the guy, I guess he got so frustrated he retired.
SPEAKER 02 :
You know, I need to look that up. Is that based off the DeepMind? Maybe it is. You could be right.
SPEAKER 04 :
Help me out, guys. What’s the game of Go?
SPEAKER 02 :
How do I not know about this game? Is this a board game?
SPEAKER 05 :
By the way, Daniel, AlphaGo is an AI program created by Google’s DeepMind.
SPEAKER 02 :
See, I was right. Thank you. I’m going to stay.
SPEAKER 03 :
I’m going to stay on the show.
SPEAKER 02 :
Oh, my gosh. Well, let me say this about Go, if you don’t mind.
SPEAKER 01 :
Stop the tape. Stop the tape. Hey, this is Dominic Enyart. We are out of time for today. If you want to hear the rest of this program, go to rsr.org. That’s Real Science Radio, rsr.org.
SPEAKER 03 :
Intelligent design and DNA Scholars can’t explain it all away Get ready to be awed By the handiwork of God Tune in to Real Science Radio Turn up the Real Science Radio Keeping it real That’s what I’m talking about