In this episode of Real Science Radio, we continue our exploration of the rapidly evolving field of artificial intelligence with Daniel Hedrick. From Grok 3’s truth-telling capabilities to the nuanced roles of different AI models, we look at how these technologies are shaping the future. Daniel discusses the complexities of prompt engineering and the potential for AI to amplify truth or misinformation, depending on its use. The conversation takes an exciting turn as we investigate the concept of artificial superintelligence and its theoretical implications for society. We also consider the juxtaposition of AI’s efficiency and its inability to independently solve problems it has never experienced.
SPEAKER 04 :
Do coders need to be worried about losing their jobs?
SPEAKER 03 :
I think the answer is 100% no.
SPEAKER 05 :
Scholars can’t explain it all away.
SPEAKER 1 :
Get ready to be awed by the handiwork of God. Tune in to Real Science Radio. Turn up the Real Science Radio.
SPEAKER 05 :
Keeping it real.
SPEAKER 02 :
Today, we’re going to talk with Dan Bongino. Oh, wait a minute. I mean, Daniel Hedrick about the latest with artificial intelligence. And you know, Dan Bongino actually did have something to say about artificial intelligence, but we want to bring in our resident expert on the topic, Mr. Daniel Hedrick.
SPEAKER 04 :
Thank you. We’re going to talk about a number of startling facts, and we have some really interesting questions to ask, Daniel. I look forward to it. I can’t wait. Fred, where should we start? First, I want to start with the fact that I’m relatively uncomfortable with the fact that artificial intelligence is basically being forced upon me whenever I do a Google search. I used to do Google searches. Now they’re telling me it’s going to be better because of the AI, but I kind of don’t like the AI being forced on me. Should I be afraid?
SPEAKER 02 :
I don’t think you should be, but hey, we’ve got Daniel Hedrick to talk about that. And before we get to that, the one startling fact that I want to talk about is Dan Bongino. He said something about artificial intelligence that our guest Daniel Hedrick told us about. And what is that, Daniel?
SPEAKER 03 :
Well, you know, I’ve listened to Dan Bongino for quite some time, and I’m really thankful that he went from the Secret Service into the FBI. I think it’s a great addition, and we’ll see what actually happens long term. But while listening to his show, something that he said repeatedly, over and over again, was that AI would eventually be used to tell the truth. And that’s pretty important, because if it does tell the truth, then all of the wokeology and the banning of, quote, disinformation would disappear. And sure enough, what’s interesting, in my opinion, is that Grok 3 was designed specifically to be a truth teller. And in fact, it told the truth so well, I guess, if you think about it, that it came out and said that Trump and Musk were the greatest, let’s say, purveyors, the ones that generated the largest amount of disinformation, which makes sense with X, I suppose. And of course, if you have AI going through and looking at everything ever written, I’m sure there’s quite a bit of TDS out there, Trump derangement syndrome. But the reality is, when you start using Grok 3, and I’m actually a fan of Grok 3, we can talk about it, but the effort, if you remember us talking in the past about prompt engineering to get the results you’re looking for, with Grok 3 it’s very, very easy, because all you have to do is tell Grok 3 who it is, and it behaves that way fairly well. So, yeah, very, very interesting. Let AI be a truth teller.
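A minimal sketch, in Python, of the persona prompting Daniel describes: you tell the model "who it is" in the system message. It assumes an OpenAI-compatible chat endpoint; the base URL, model name, and persona text below are illustrative, not details from the show.

from openai import OpenAI

# Illustrative endpoint and key; any OpenAI-compatible service works the same way.
client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_KEY")

response = client.chat.completions.create(
    model="grok-3",  # illustrative model name
    messages=[
        # The persona ("who it is") goes in the system message.
        {"role": "system",
         "content": "You are a blunt fact-checker. State what the evidence "
                    "shows, cite it, and never soften a true claim."},
        {"role": "user",
         "content": "Summarize the main claims in this article and flag any "
                    "that are unsupported."},
    ],
)
print(response.choices[0].message.content)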
SPEAKER 02 :
So wait a minute, you’re saying that Grok 3 is still, you can still point it in the direction you want it to go? So that seems to contradict telling the truth.
SPEAKER 03 :
It will go off the rails, though. Like, if you tell it to go off the rails, it will absolutely lose its mind. So you have to, and you know, I’m keeping my words clean, let’s say, but there is this ability to turn Grok 3 into a monster. Okay.
SPEAKER 04 :
Now, so there’s this other host, a guy named Doug McBurney, that I’ve been listening to for quite some time. And he said a while back on Real Science Radio, I’m not sure if it was with you or if it might have been on my show, I can’t remember, but basically that if AI were built and operated in an unbiased fashion, it would eventually conclude that the Bible is true, that Jesus Christ rose from the dead, that He’s the savior of the world, and that people should trust in Him to be saved from… certain catastrophe and certain death that’s coming upon all men in the world. And so Dan Bongino kind of said the same thing: that eventually, just by default, AI has to come to the truth if it really is unbiased.
SPEAKER 03 :
Doug, listen, my hair is standing up on my arms right now. I mean, you and I did have this conversation. And if you remember, I didn’t even have to really over-engineer the prompt. I just simply asked a couple of questions about the evidence. If I remember correctly, specifically the idea that the disciples were willing to die for what they believed to be true, and that they were in a position to also know if it was a lie. And it, you know, it, ChatGPT, came back in that moment and said that Jesus Christ was Lord and that He rose from the dead. So I’m pointing to the fact that that is a reality, but I would also acknowledge that you could probably create a prompt that ends up with evolutionary biology being the king, instead of the Lord, instead of Jesus Christ, being the king.
SPEAKER 02 :
Right, right, right. So wait a minute, Daniel. You said that AI can go off the rails, and I’m curious about an example that you could share with us.
SPEAKER 03 :
Yeah, I mean, maybe you’ve been to some of the restaurants that have the rude waitress, and you walk up there and you ask for some food, and she goes: what do you think I’m here for? Are you kidding me? I’m about to get off. I’m not getting any meals for you. And you’re like, whoa, hold on a second, I thought I came here to eat, right? And the whole point of the experience there is to have someone be rude to you. It is possible, in your prompt, which is called prompt engineering, to have ChatGPT behave in a way that is way beyond rude. I mean, it’s so over the top. In fact, I watched a segment on Joe Rogan, and both Elon Musk and Joe Rogan just took it all the way, as far as it could go, and it’s just way out there. I mean, I don’t really want to say all the words or even mention them, but you can just imagine a very, very rude chatbot. Wow. Okay.
SPEAKER 02 :
Still tell you the truth. Hey, so speaking of Trump and Elon Musk. So it was interesting. One of the new startling facts is this whole DOGE thing that’s going on. Apparently Elon Musk has got some experts that used ChatGPT, or is it ChatGPT, or this Grok 3? What would they use?
SPEAKER 03 :
Yeah, it’s Grok 2.0. It’s a version behind, or a couple of versions behind. Okay, yeah, really interesting, right? They’re able to get that. Look, see, that’s the one thing that I think is so interesting: if you seed your data right, in other words, you cull your data and you just bring in, let’s just say, the relevant facts, without any information, say, from the New York Times or, you know, Reddit or any of that, then your data is going to be slightly more pristine. Right. And since we know how pristine the government data would be, what I’m trying to say to you is the reporting back would probably be definitive. Like, the idea of there being lies in there or disinformation would be really low. Absolutely. Absolutely.
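What Daniel calls seeding or culling the data amounts to controlling what reaches the model’s context. A minimal sketch of that idea; the source names and records below are invented for illustration:

# Only documents from vetted sources are allowed into the prompt,
# so the model's report is grounded in that curated corpus.
ALLOWED_SOURCES = {"treasury_ledger", "agency_payment_records"}

def build_context(documents):
    kept = [d for d in documents if d["source"] in ALLOWED_SOURCES]
    return "\n\n".join(d["text"] for d in kept)

documents = [
    {"source": "treasury_ledger", "text": "Payment 4411: $1.2M, vendor unknown."},
    {"source": "social_media",    "text": "Hot take: the budget is fine, actually."},
]

prompt = ("Using ONLY the records below, flag payments that look anomalous.\n\n"
          + build_context(documents))
print(prompt)  # the social media post never makes it into the context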
SPEAKER 04 :
And so, Daniel, that brings. So you just mentioned that they’re using this at a doge to look into government waste, fraud and abuse. So can you quickly go through the AI products that you use and tell us whose they are, a little bit about them, who’s the most trustworthy, who’s the most untrustworthy, anything you can tell us about these? Because I hear about there’s seven or eight different models out there.
SPEAKER 03 :
Oh, yeah, there’s seven or eight hundred models. So, who knows where to begin? I will start by just telling you exactly what I use and why. So, number one is I use Copilot and Copilot Studio. And Copilot Studio is the ability to create what’s called agentic AI. We can talk about that another time. It’s a very in-depth subject. But I don’t actually like Copilot too much. It’s just that that’s the only AI that I’m allowed to use, quote, at work, right? If I could, I probably would just build my own model. And then if I go all the way to the other extreme, one that I don’t use but that I have used, it’s called, well, you have what’s called LM Studio, which is Large Language Model Studio. And that allows you to bring in any model you like. So, of course, I went and got the DeepSeek model, which is from China. We can talk about that if you like. I used it just for a few queries, and I found it completely boring. So let me tell you why I like Grok 3, or 3.5: it’s because it has two different modes. One is called deep learning, or deep research, and then the other is thinking. And you can actually watch the text go by, and you can capture it to see how the model is thinking before it generates a result. And sometimes it takes a little while, which I’m fine with, because the amount of information it’s going through and culling is significantly more than I ever could. So, you know, I might just go and make a coffee or something, come back, and the next thing I know, I have some really decent results. Also, if you remember from previous shows, we talked a lot about perplexity and temperature, and I love those, because if you use those well, you can get content that comes out of Grok or ChatGPT, and almost any of them, that is more human-like. And, you know, again, that raises a lot of, let’s say, philosophical and social issues about the use of AI.
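Temperature, mentioned here from earlier shows, is a sampling knob: near zero, the model keeps choosing its most probable words; higher values give more varied, human-sounding text. (Perplexity is the companion measurement: how predictable a piece of text is to the model.) A small sketch using the OpenAI Python client; the model name is an illustrative choice:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for temperature in (0.2, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o",           # illustrative model choice
        temperature=temperature,  # low = predictable, high = varied
        messages=[{"role": "user",
                   "content": "Describe a rainy street in two sentences."}],
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)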
SPEAKER 02 :
Let me interject there really quick, just to make sure for our audience. So Grok 3 is like the most modern version of ChatGPT? Because, you know, for me personally, I’m familiar with… I type in ChatGPT and I don’t…
SPEAKER 03 :
And of course, isn’t it lovely?
SPEAKER 02 :
And it uses the Grok model? I’m sorry? And it uses like Grok 3?
SPEAKER 03 :
No, no, no. Separate ChatGPT over here, OpenAI, and Grok 3 over here.
SPEAKER 02 :
They’re totally different. Okay.
SPEAKER 03 :
They’re totally separate. Okay. Okay. But Elon Musk is the one who started OpenAI. And I was just trying to say that it’s kind of funny. And even he thinks it’s terrible. He’s trying to get money back. Because when he co-founded OpenAI… it was meant to be, let me think, open.
SPEAKER 04 :
Open.
SPEAKER 1 :
Yeah.
SPEAKER 03 :
And it’s not, right? And he was pretty upset about that. So, of course, the richest man in the world, what does he do? He just creates another one, right? And that’s Grok.
SPEAKER 04 :
Yes, sir. Real quick, Copilot, that’s a Microsoft product, am I right?
SPEAKER 03 :
That’s the Microsoft product.
SPEAKER 04 :
That’s the one that’s appearing in Microsoft Word now telling me it wants to write for me and I’m like, no, thank you.
SPEAKER 03 :
Yes. Then ChatGPT is OpenAI.
SPEAKER 04 :
And then Grok is Musk’s X, whatever that is, right?
SPEAKER 03 :
That’s Musk’s new open AI. Okay.
SPEAKER 04 :
And DeepSeek is the Chinese Communist Party and the Chinese stock market, obviously.
SPEAKER 03 :
Yes.
SPEAKER 04 :
And then Perplexity. Who is that?
SPEAKER 03 :
So Perplexity is different. Again, that’s another totally separate model. And I actually use it. That’s the one I use on the phone, and the main reason is because it’s swift. I mean, you can use all of them on the phone, but the reason why I like Perplexity a lot is because, and actually Grok 3 has a very similar output, it will communicate with you, talk with you. And while you talk with it and you ask it questions, it will provide the references and the links that it is speaking to you about. So I like it a lot, because I have that source content immediately.
SPEAKER 04 :
I got you. So who is it? Who owns Perplexity? Yeah, I think it’s an important question for our viewers, our listeners, to know who’s behind the models.
SPEAKER 03 :
Oh, I get you.
SPEAKER 04 :
If it’s a bunch of communists and degenerates, that’s one thing. And then if it’s more normal people…
SPEAKER 03 :
Yeah, I think Perplexity is actually a U.S.-based company. And I remember listening to a podcast on it, and their whole frame of reference was the fact that the early models of OpenAI, if you remember, I’m sure I did it with you on the show, would do this: I would say, give me references, and it would generate these links that don’t exist and never did exist. It was sort of a hallucination. And so Perplexity’s emphasis was on the fact that when you make a request, you’re provided a direct reference, so that there is no doubt as to why it answered the way it did. I’ll be honest with you, I’m not sure who the owners are, but I do believe it is a U.S. company.
SPEAKER 04 :
Okay, and then Midjourney for images. Do we know anything about who owns Midjourney, who’s behind that?
SPEAKER 03 :
Yeah, yeah, you know what? If you don’t mind, let me show you one of these. This is just so funny. I hope you guys can see this. You know, I think you know that I like to race cars, right? We’re aware, yes. Yeah, my favorite car by far is the C7.R, right? I don’t own it, that’s a very, very expensive car, but maybe I could get a drawing of it, right? So, as you see here, I typed in: C7.R, shark gray, my C7 is actually shark gray, right, swiftly accelerating over the track curbing. By the way, curbing can be spelled K-E-R-B or C-U-R-B. Leaving a Porsche and Mustang in the dust, which I always do. And then the AR is actually aspect ratio. And look at the results. It’s a joke. I mean, look how bad these results are. Right. And I mean, you talked about Midjourney. I just got to show you another one. Check this one out. Okay.
SPEAKER 04 :
Well, the reason I laughed when you put it up is because it looked kind of comical.
SPEAKER 03 :
Well, it is comical, so I really have to rewrite this 15 ways. And the thing is, this is the latest version. 7 is coming out pretty soon, so we’ll see.
SPEAKER 02 :
Which one is this again?
SPEAKER 03 :
Which AI model? This is Midjourney 6.0. Okay. And then look at this one. Why am I showing you this? Any ideas?
SPEAKER 04 :
Because I’m on this keto diet and I’m not allowed to drink milk anymore and it’s driving me crazy.
SPEAKER 03 :
So this is a famous, known problem with Midjourney. And because it is a family show, I decided to not do wine. But generally speaking, if you look online, you’ll see examples of a wine glass. And what I said was: give me four different glasses of milk, one overflowing, one half full, one quarter full, you know, and just do that, right? Seems simple. And it gives me one glass, and they’re all exactly the same. I mean, it cannot do a half-full glass. It can’t do an overfull glass of milk or wine. And, you know, there are other famous ones, right, the ones that are obviously the most well known. If you look at pictures, one way of being able to tell whether or not an image was generated by AI is to look at the iris in the eye, look at fingers. Fingers are a known problem child. I think they’re getting better. Another really interesting one is spokes on a bicycle, which they have not fixed at all. And then, last but not least, look at the clothing. There’ll just all of a sudden be remnants of clothing going off in a trail somewhere. So it’s pretty easy to identify AI. I mean, I do think it’ll get better, but yeah, anyway, that’s Midjourney for you.
SPEAKER 04 :
So as of right now, the only thing we know about Midjourney is it’s neither optimistic nor pessimistic. The glass is neither half full nor half empty.
SPEAKER 03 :
That’s very cute, Doug. You’re exactly right. Now, listen, I like Midjourney a lot. And if you go to my website, I do use some of the Midjourney items, like, specifically, conceptual ideas of what nanotechnology would look like, or the ATP synthase motor and stuff like that. And of course, it’s such a mess. Try this one: say, give me a picture of your mind, of ChatGPT’s mind, and it’ll look like a really bizarre network. So, I mean, maybe it’s true. Yeah. Do you have a go-to AI, I’m curious, that you like right now? Right now, I really like Grok 3.5. I think it’s the best, but you’ve got to be careful, because it could just go off the rails. I mean, you just have to ask it to go off the rails, and it will gladly do that. The 4.5 is probably the one that people are expecting to be the best. But the only thing that was added in the OpenAI 4.5 model, I think I previously mentioned it, is what’s called an EQ model, an emotional quotient, and improving that. So that way you feel like you’re actually talking to a human instead of a bot. So that’s the idea there. Most people are not real happy with it. I’m not sure why, but I will stay with probably OpenAI, because I paid $20 a month; I may as well use it. Right. And then I like Grok 3 a lot. And I think what’s going to happen in the near future is you’re going to be replacing the LLM models with agents. And then the agents can decide which models to go to. And this whole agent AI thing, agentic AI, is a topic unto itself that maybe we could get into. But yeah, it literally is such a monster aspect of what’s going to happen in the future. If you want to replace jobs, you’re going to do that with the agents.
SPEAKER 02 :
OK, so the agents then would pick which model they want to use based on your question. Yeah, that’s very, very probable.
SPEAKER 03 :
Yes.
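The router idea the hosts are circling can be sketched in a few lines: the agent inspects the request and picks a model before anything is sent. The model names and the keyword rule here are purely illustrative, not how any shipping product actually routes:

def route(question: str) -> str:
    # Toy routing rule: match keywords to a specialist model.
    q = question.lower()
    if any(w in q for w in ("code", "bug", "refactor", "function")):
        return "code-specialist-model"
    if any(w in q for w in ("source", "cite", "reference")):
        return "search-grounded-model"
    return "general-chat-model"

for question in (
    "Refactor this function to remove the duplicate loop.",
    "Give me sources on catastrophic plate tectonics.",
    "Write a limerick about trains.",
):
    print(f"{question!r} -> {route(question)}")

Real agentic systems usually let an LLM make this decision rather than keyword matching, but the shape is the same: decide first, then dispatch.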
SPEAKER 02 :
You know, here’s the thing for me. You know, you mentioned Copilot. So, you know, when I look at some piece of code, AI is amazing at, like, helping you find a problem in it. But if I’m doing a user interface and I say, hey, I’d like to add this button, Copilot stinks. Copilot fails probably at least twice as many times as ChatGPT. So I wonder if an agent would know that and say, well, I’m going to direct you to ChatGPT. I haven’t tried Grok. I’m going to try it based on talking to you now. But I’ve been amazed with what AI can do. Because my daughter and I are refactoring a website, and you say, hey, it’d be cool if we had this particular look to a page, and I’ve been shocked at what it can do. And it’s probably going to save at least 50% of the time that we would have spent refactoring this. Wow.
SPEAKER 04 :
So, hey, that brings us to our next startling fact, or at least startling question. Do coders need to be worried about losing their jobs?
SPEAKER 03 :
Yeah, I think the answer is 100% no. But if you don’t use AI, you will lose your job. So any good coder is going to use AI as a starting point. We’ve talked about this before, where your experience is so valuable to assessing whether or not the output from AI is credible. I’ve been thinking about this a lot. If you don’t mind, let me just tell you a really quick story and see if it has the impact that it might, in my opinion. Basically, what I’m thinking out loud here is: I ran into a problem when I used to live in London, where I was traveling on the East Coast train. I traveled from London to Leeds, all the way up to Edinburgh, and back and forth, for several years. And if you’ve ever been on these trains before, it’s sort of a nightmare. So imagine being at King’s Cross with hundreds and hundreds of people, all standing underneath a big marquee that’s about to tell you which platform you’re going to go on to get to your train. And it’s a massive place, so going left when you should have gone right is not a good idea. So you’re looking up for the platform, and the next thing you know, you see number 10, and this mob of people just starts going through the turnstiles. And I’ve literally been run over before. I used to carry these big computers, multiple laptops, to do demonstrations. And I’m trying to get into the train, and this one time, one of my bags got totally caught up on, you know, the little railing or whatever. And this German guy just crawled over me, like, no help, no “Hello, let me help you with that.” He just literally walked on me, walked on my bags, and just kept going. And I’m just like, okay, this is so bizarre. I mean, this is such a pain, and I get to do this all the time. And then, just a side note, I remember one time I was just trying to get my laptop out of my bag, and then I look back, and my McDonald’s food that I had ordered, someone had taken. You know, I’m like, oh, wow. Easy. You know, like, what is going on? So why am I telling this story? It was such a frustrating thing to me, and it made me not want to travel. So one time when I got off the train, I saw that people that were about to get on the train were just casually standing in the corner, right? And they’re not freaked out. And the reason why, I figured it out, and it was so cool: there was a small panel that the staff would use. And they would just look at this screen, and they could tell which train would go to which platform before it was published publicly. Aha! It was awesome. Unfortunately, I was there for four years: three years of struggling and pain, and in that one year, I enjoyed life so much, because I got to go to the train that I wanted. So why am I telling you all that? The reason is because I don’t believe that artificial intelligence has any experience with anything. So as for the motivation to try to find a solution to the problem that I was experiencing, I just don’t believe that AI would ever identify it on its own, right? I’m thinking that I have to know the problem before AI could solve it, right? In other words, I can’t write into AI and say, hey, can you give me the five top problems associated with traveling through King’s Cross? You know, maybe it would. I mean, we should give it a try. But I hope you understand what I’m saying. The point is that AI is unaware that it’s unaware, and it doesn’t experience anything, right? Exactly. It’s the experience that allows me to evaluate the result. If the AI solution gives me an answer back and I read it, I’m like, yeah, that’s dead on. Because I go to King’s Cross, I go to Leeds, I go to Edinburgh, wow, it’s amazing. Thank you, you helped me. Or: wait, that’s not true. Right? So…
SPEAKER 02 :
Yeah, and it doesn’t have the experience. And to your point that engineers should use AI, I agree, because you can be more productive, but it’s limited in what it can do. I mean, it can help you, like, oh, I need to do this specific query into a database, let’s say, and you’ve written this kind of complex join statement and all that. You can put that into AI, and it’s amazing: it will often come out with a more efficient and nicer-looking version of it. But sometimes it’s wrong, and you still need the engineer there, also to say, hey, this particular code base doesn’t have that particular function that you’re trying to use. But it does really assist in that regard. So it’s not going to end the human employee, but I definitely think we should leverage and use it. I’m one of those, Doug, I don’t mind that AI pops up when I do a Google search, because I’m getting used to it, and there are certain things it can find quicker than me Googling for some particular thing I’m looking for. Yeah. You know, we had that debate recently with some Catastrophic Plate Tectonics guys, and it was fun using AI, because AI, when used right, is an unbiased source. It’s still going to make mistakes, but generally, if you use it right, there are certain areas where you don’t expect there to be some kind of built-in bias.
SPEAKER 04 :
Right, right. It’s kind of like Wikipedia. Wikipedia can be very reliable for general information that’s not politically, culturally, or morally biased. Biased against God, let me just put it that way.
SPEAKER 02 :
Exactly.
SPEAKER 04 :
So, you know, if you want to know something about lettuce or something about roller skates, Wikipedia is reliable for that, and AIs too. So that brings me to the next generally startling question or fact. What about artificial superintelligence? Adding the super makes me a little bit nervous. I was okay when it was artificial intelligence. Now it’s superintelligence. That’s a little Superman. What’s going on with that? Should I be afraid?
SPEAKER 03 :
Oh, this is such a great topic. So, by listening to all the people that are at the forefront, right? I mean, I don’t know. Imagine reading I, Robot back in the day. Would you have assumed that ChatGPT would be the first, you know, I guess, implementation of something that, you know, may or may not do what you want? Remember HAL 9000, of course, from 2001. But let’s talk about ASI. Let’s talk about the good first. The concept is this notion that artificial superintelligence will end up leading to energy independence for every single human being. Why? Because ASI will come out with some sort of method where all resources are available all the time. So there is no, quote, need to work, right? And what an interesting idea, you know, conceptually. If you remember watching Star Trek or, you know, Star Wars, or, well, maybe more like Star Trek, you know, there was never this issue that the captain made more money, you know, than anyone else on the ship. And you never even had that debate. Why? Because all you’d have to do is walk up to a replicator and get whatever you wanted, right? So that’s kind of interesting, and I think it’s amazing. But have you ever heard of what’s called the paperclip maximizer problem? Only recently. Okay. Well, I think the audience might be interested in knowing this, because you can imagine, let’s say you created some AI, a code or whatever, and you said: all right, your only goal is to make paperclips. That’s it, and make them well, and just do your best job, right? And that’s the only thing you say. Well, an ASI could potentially say: all right, I’m going to do the best I can. In fact, I’m going to maximize the number of paperclips I can make. Well, that kind of sounds scary. Think about it. What does that mean? Well, right, you don’t need automobiles, you don’t need refrigerators, you don’t need anything humans need. We’re just going to make paperclips, right? So the next thing you know, AI could ultimately determine that humans are unnecessary in relationship to the use of paperclips. Now, where’s the paper that’s going to be clipped? And who’s writing on the paper? Well, not necessary, but at least we made a lot of paperclips. And that’s called the paperclip maximizer problem. And it’s directly associated with ASI.
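The thought experiment (usually credited to philosopher Nick Bostrom) can be made concrete with a toy optimizer: the objective mentions only paperclips, so nothing in it says to preserve anything else. The resources and numbers below are invented:

resources = {"steel": 100, "food_farms": 50, "hospitals": 20}

def make_paperclips(resources):
    clips = 0
    # Maximizing clips means converting every resource it can reach;
    # the objective never says which resources are off-limits.
    for name in list(resources):
        clips += resources.pop(name) * 1000
    return clips

print("paperclips made:", make_paperclips(resources))  # 170000
print("everything else:", resources)                   # {} -- nothing left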
SPEAKER 02 :
And what happens? What’s the ultimate? Isn’t there some kind of disaster at the end of this paperclip paradox? Oh, a hundred percent.
SPEAKER 03 :
And that’s the whole notion: how do we constrain it? And, you know, again, I know we talked about this in the past: who determines our morals, right?
SPEAKER 01 :
Stop the tape, stop the tape. Hey, this is Dominic Enyart. We are out of time for today. If you want to hear the rest of this program, go to rsr.org. That’s Real Science Radio, rsr.org.
SPEAKER 05 :
Scholars can’t explain it all away. Get ready to be awed by the handiwork of God.
SPEAKER 1 :
Tune into Real Science Radio. Turn up the Real Science Radio. Keeping it real. That’s what I’m talking about.