At The Boundary

Cybersecurity in the Age of AI: Threats & Opportunities

• Global and National Security Institute • Season 3 • Episode 70

Text the ATB Team! We'd love to hear from you!

🎙️ AI, Cybersecurity & the Future of Computing—A Must-Listen Episode! 🤖💡

Professor John Licato joins guest host Glenn Beckmann on the "At the Boundary" podcast to discuss groundbreaking AI advancements and the launch of USF’s new College of AI, Cybersecurity, and Computing. Dr. Licato breaks down the impact of AI models like DeepSea and Lucy, the growing potential of quantum computing, and the critical work of Actualization AI in ensuring AI privacy and accuracy.

Don’t miss this in-depth conversation on the future of AI and its ethical challenges. Listen now! 

At the Boundary from the Global and National Security Institute at the University of South Florida, features global and national security issues we’ve found to be insightful, intriguing, fascinating, maybe controversial, but overall just worth talking about.

A "boundary" is a place, either literal or figurative, where two forces exist in close proximity to each other. Sometimes that boundary is in a state of harmony. More often than not, that boundary has a bit of chaos baked in. The Global and National Security Institute will live on the boundary of security policy and technology and that's where this podcast will focus.

The mission of GNSI is to provide actionable solutions to 21st-century security challenges for decision-makers at the local, state, national and global levels. We hope you enjoy At the Boundary.

Look for our other publications and products on our website publications page.

Glenn Beckmann:

Hi everybody. Welcome to another episode of at the boundary, the podcast from the global and national security Institute at the University of South Florida. I'm Glenn Beckman, communications manager at GNSI, and your host for at the boundary today on the show, one of our favorite guest returns as Professor John licatto joins us to talk about all things artificial intelligence. I know it's difficult to believe, but a lot has changed in the AI world since the last time we had John on the podcast before we start talking with him. However, I want to say thank you from the entire GNSI team to everyone involved in last week's GNSI Tampa summit five, the Russia, Ukraine war lessons for future conflicts. The conference was a smashing success again, and our keynote speakers, retired Marine Corps General Frank McKenzie, as well as John Kirby USF, alumni class of 85 go Bulls and former White House National Security communications advisor, were exemplary, as were all of the speakers and experts who spent time With us, sharing their insights analysis and experiences. We're grateful to all of the people who attended TS five, as well as to the groups involved with the conference, the USF Institute for Russia, European and Eurasian Studies, Iris, the College of Arts and Sciences, us, CENTCOM, and also the research one program, part of USF research and innovation and a very special thank you to Iris director, Dr golfel Alex Apollos, who played an integral role in developing Tampa summit five. If you weren't able to attend last week, the videos from the conference will be available this week on our YouTube channel. When you subscribe to our channel, you'll be notified when those summit videos are online, as well as any other new content we create. Speaking of content, the best way to keep up with everything going on at GNSI is to follow us on our socials, at USF, underscore, GNSI on YouTube and x at USF, GNSI on LinkedIn. Okay, it's time to bring in one of our favorite guests of the podcast, and he's certainly been the most frequent guest. What is this? John,

Dr. John Licato:

five No, second or third. No, no, no. We

Glenn Beckmann:

had you on, and then you were on with Norma, and then also two Times with Craig. Yeah.

Dr. John Licato:

Craig might tell Yeah, right, yeah, the two parter, yeah. So

Glenn Beckmann:

I think, but anyway, but this is the first time we've had you on where you're appearing as a big, big star. Oh yeah. So to better explain, John is one of the stars of a new University of South Florida marketing campaign the next phase of the Be bold campaign for USF this time around, be bold is focusing on the heroes all around us here on the USF campus, students, faculty, staff everywhere. So John was chosen to be part of this campaign. So give us the story. Did you answer a casting call? How did they find you? Give us

Dr. John Licato:

I got, I got nominated, and, you know, I guess they figured out that, you know, I do a little bit of AI work, and have talked about in the past. So, you know, for lack of better choice, I suppose they film me. And, yeah, I was, I was really honored to be chosen for it. And, you know, it's, it's a really exciting initiative that that I'm able to talk about. So, you know, it's Be bold campaign. I guess. The idea is that this new college that we're establishing, College of AI, cyber security and computing, right is is really a courageous move to take right, because putting AI as the first word in the college's name in the middle of you know, what some might describe as a hype wave. But I think those of us in the field know that it actually is a real, lasting technological advance that, you know we can't, we can't ignore that's affecting every single field of study, every single job on Earth, right? We're still talking about it right in this today's podcast, we're going to talk about it. So I think it's a bold move to try to take leadership in this field and to do so for the state and be one of the first in the country to set up a college like this. So I think the marketing campaign is really set up to make that clear that you know what we're doing here at USF is not an easy move, and it's a

Glenn Beckmann:

little bold, yeah, no, it's fantastic. And we were talking before we started recording, and I'm really happy to hear that the acronym for the new college cake is getting some traction, although we're not sure how long maybe.

Dr. John Licato:

Yeah, I do like the acronym cake. Caicc, the name might change. We're hopefully going to find out news about a potential sponsor next week. So we're all really excited to find out what's going to happen, and things are going to change really quickly over the next few months. You know, we're expecting to officially launch the college in the fall, and that's that's not too far away. So,

Glenn Beckmann:

right? And I think if I, if I remember correctly, you're, you're going to begin offering a an undergraduate degree in artificial intelligence, right? That's

Dr. John Licato:

right, yeah, we're in the stages of planning everything out, you know, getting all the curriculum approved, creating all the electives for it. Initially, it's going to spin out from the computer science degree that's already existing, you know, which already has a lot of AI electives that the faculty have been teaching. But, you know, we're really hoping that it's going to turn into its own thing. It's going to be one of the few AI centered degrees in the country. And just another example of how we're trying to take leadership in this

Glenn Beckmann:

field. Yeah, well, and obviously, you know, GNSI is all about national security, but what we've emphasized over the couple of years we've been around is that national security takes on many different forms. It isn't just military, it isn't just the three letter agencies in Washington, DC. It's all kinds of different things. You know, we've heard talk about the stories of, I believe China is trying to develop AI hospitals. And then I know that couple of months ago, you were, you guys, part of the the agreement with USF Health for the AI, the voice recognition research that they're going to try to use to diagnose patients with, using voice and AI,

Dr. John Licato:

oh, yeah, no, I'm not directly involved. But, you know, I know the people that are working on, yeah, good researchers,

Glenn Beckmann:

yeah. So it's, it's really fascinating to see where it all goes. And just a little bit that I personally have been involved in, just, you know, just using little chat gpts and things like that. It's, it's, it isn't, it's not going anywhere there. There is no doubt about that. So since the last time you were on the podcast, there have been some big developments in the AI space, and that's not a surprise. I guess the two that come to mind immediately are deep seek and Lucy, and we can talk about each of those a little bit more deeply. But for any, for any of our listeners who are unaware of those two things, can you kind of give us a little brief description of those two programs? Sure,

Dr. John Licato:

yeah, I'll start with deep seek. So this, this made the news a couple weeks ago the the company that created deep sea, because the Chinese company, and, you know, just kind of set the context the GPUs, graphics processing units, which is essentially the hardware that does a lot of the heavy lifting For training deep learning systems, right? Is our export restricted? So we're, you know, we're limited on how many of those we can export to China, for example, right now, the company behind deep sea is a Chinese company, and they came out with this version of their LLM, which called Deep Sea r1 December, late December, or something, something like that, and it was able to show an improvement on a bunch of benchmarks of reasoning, even beyond the performance reported by some of the best you know, American models that we know of, right? Open a eyes, models and some of the open source stuff, right? And that caught everybody by attention, because, you know, they wondered how, how did they, how did they know how to do this, right? How did they, you know, because open AI, for example, is is fresh off the heels of these many billions of dollars of investment. And part of the way that they got that investment was they made the case that we need to make language models really big. And, you know, we at open AI, have this, this lead over everybody else. We have technology that's so far ahead of other AI companies. And all of a sudden this Chinese company comes out, and they got things that can beat them on a bunch of reasoning benchmarks, right? So immediately Nvidia stop stock dropped, and, you know, I think it was reported as the largest single day loss in value of a company's stock in history, like 600 billion or something. That's saying

Glenn Beckmann:

something, because there have been some really big crashes, especially the last 20 years.

Dr. John Licato:

Oh, yeah, yeah, this, you know, and part of it might be just market correction, right? Nvidia's stock, because they're the ones that are providing the most commonly used GPU. Their stock has been just increasing tremendously over the past. Months, and maybe it's just a correction for that, right? So, whatever it is, you know, people said, I don't know how they're doing it. Did they find some secret technique that doesn't require GPUs anymore? And, you know? And the one thing that makes what they did remarkable is, not only did they beat a lot of bench, uh, not only get, did they get the best performance on a lot of benchmarks, but they made all of the details available. So they completely published how they did it, the training technique they used. They made it all free, right? We can download the the full weights for their largest, uh, 600 plus billion parameter model, right? You can use it locally. They made a version where you can it's a big model, so you have to have a lot of compute power to run it locally. If you don't want to do that, you can interact with their version on their website, just like you can do with open AI. And they made that much, much cheaper than, for example, open AI's API.

Glenn Beckmann:

Were you surprised by that? Because the Chinese have a well earned reputation for being very secretive about their IP. Were you surprised that they just made it available for anybody to kind of comb through, look through?

Dr. John Licato:

It was very surprising. Yeah, in some sense, you can describe that as maybe, maybe a offensive move, right? You know, it caused a massive drop in confidence of the future of Nvidia and open AI's lead right, and kind of sent a message that, Look, you guys aren't as far ahead as as you claim you are, right? They may, by making the details available about how they did the training, they make it so that a whole bunch of other companies now can pick up where they left off and build off of it, right? So, you know, in that sense, maybe, maybe it was a strategic move. Oh yeah, okay, it certainly was a strategic Yeah, right, yeah. But I mean, speaking as a scientist, I'm very happy they did, because it's really nice to actually know how the technology you're talking about works.

Glenn Beckmann:

So deep sea came out, was introduced in a spectacular fashion. Um, Lucy was just as spectacular, but in a bad way. You know, it crashed and burned. And, in fact, like, they turned it off two or three days after it, it was introduced. So, yeah, tell us a little bit about about that, what your reaction was, and, you know, to the whole thing. Yeah, that's so what is? What is Lucy, first of

Dr. John Licato:

all. So one thing that they all have in common, Lucy, deep seek, chat, GPT, they're all based on llms, large language models, right? And they're all based on very similar underlying technology, what we call the transformer neural neural network architecture. And the thing is, it's not when you're trying to figure out how to make these models better and more capable. It's not just a matter of throwing more compute power at it and then magically it's going to turn into something smarter, right? To be fair, there is some of that right there. The past few years, we've seen that simply taking the models that we have and then making them twice as large, three times as large, does seem to increase reasoning capability, but we're reaching kind of a plateau with that right. We're realizing that you have to experiment with different things. You have to find small tweaks in the architecture. You have to change how it incorporates its training data. You have to try a whole bunch of different things, right? No one company can do that, and so, you know, that's one reason why what deep seek did, I think, is actually good for the science, because now it makes it so that a lot of different companies can try different possibilities, and a lot of them are going to fail. Some of them are going to discover some new tricks, and then if they reveal their details, it advances the whole field, right? That's, that's the ideal. That's, that's how we love for it to work doesn't always work that way, because money is involved, right? So anyway, Lucy was one of those cases where, unfortunately, what they tried did not work out as well, right? They their basic idea is the French company, and what they wanted to do was create a language model that was trained on, number one, a lot of French language data. So a lot of the models that we're using open AI, they're trained on massive corpora of primarily English text, right? Just because they're widely available, you know? And then they wanted to say, Okay, we're, we're not going to remove the English text. So there's a lot of data there, but we're going to make it think, like, I think it's like a 30, 35% of. French text, something like that same amount of English text, and then you got programming code, multilingual stuff. So they wanted to make this sort of natively French trained language model, and they use the smaller architecture size as well. So 7 billion parameters, I think, compare that to the largest version of deep seek, which is like six, 50 billion parameters, right? So when you do that, you got to expect that its capabilities are going to be reduced. They also did not use a lot of the so when you take a language model, you train on a lot of data, we call that pre training, and then you have to do some subsequent stages of training, like rlhf, which is where you train it on human conversation data, so it learns how to talk more human like and, you know, engage in interactions that seem more natural. There's a whole bunch of the subsequent training stages you gotta do to refine it. They didn't do a lot of those subsequent training stages not to say they couldn't have just, just didn't. They didn't do it yet, right? And they got excited about it. They put it online, and then I think people used it and expected that it was just as powerful and just as well trained, tested as chat GPT, and it wasn't. And then they they had to take it down, but

Glenn Beckmann:

so it got a little bit of an unfair I think. So there's a little bit there, yeah, I think so the expectations, it wasn't, it wasn't that it was, it was just unfinished. And, yeah, the expectations were so high,

Dr. John Licato:

I think, yeah, you know, to compare this to five years ago, right, when a researcher would try something unique. They say, Oh, I'm going to try this little thing different with training my language models, right? They put it out. Only researchers pay attention. Researchers know the limitations of these things. They know what they're trying to do, right? But now that AI is in the mainstream, right? Someone puts out a language model. They make it so anybody can use it. You're gonna have people that don't understand what it's for. They use it expecting that it's fully open AI, and then it goes and says silly things, hallucinates, right? And that's I

Glenn Beckmann:

remember your conversations with Craig Martell when he was a guest on the podcast. The both of you, the conversations you had. And Craig was the first chief artificial intelligence and Digital Officer for the Department of Defense. He stood up that entire department for the DOD. I remember something you two talked about, and it was the trust bond that humans have to learn to trust. Ai, so the trust bond between Lucy and humans is broken, yeah, is, is there any hope that for for the people who develop Lucy, that Lucy 2.0 might be able to bridge, bridge that broken bond? Oh,

Dr. John Licato:

yeah. Oh, absolutely, you know this is they're, they're probably going to have to hire a marketing person you know and and figure out how to communicate the next release. I Yeah, it's always possible to create another version that can fix it. I think with Lucy itself, you can, if they've used some of the tricks that we already know work, like rlhf and training on human conversational data, that sort of thing, right? Maybe even using the reinforcement learning techniques that were pioneered by the deep sea team, right? I'm sure that they can get some improvements. And then, you know, you know, they're using a 7 billion parameter model, which is relatively small for even though, five years ago, that would have been massive, right? But you know, they're going to need a lot of funding to do training at the scale that they need, and if the government's supports them, then they can probably acquire that but they can easily show improvement on benchmarks, and just as easily companies that are in the lead. Can can drop. And, you know, see that with open AI, right? They just released GPT 4.5 and it's been long awaited, but I'm not hearing really a lot of buzz about it, right? I'm not hearing a lot of excitement about it.

Glenn Beckmann:

So outside of the university, you're also the founder of a company called actualization. Ai, yes, on the surface to someone like me who just, I can spell artificial intelligence, and that's about the breadth of my knowledge of it. It seems like you, you're developing, or you have a product that that would kind of prevent what happened to Lucy, proactively. Yeah, is that a fair assessment? It seems to me that what you're working on, and you tell us a little bit about it, is to prevent any future AI systems from having those false answers and nonsense answers and just ultimately breaking the trust bond. Between the people using the system and the system,

Dr. John Licato:

yeah, yeah, that's a great transition, because I think, you know, researchers understand that large language models are subject to hallucinations. Even if you tell them to do something, you tell them don't give away this piece of information. They might just slip up and give away that piece of information, right? Which, you know, researchers are familiar with that, but because AI is now in the mainstream, and every company is trying to throw AI into their products, right, they are not necessarily aware of these possibilities for failure, right? What I'm thinking, what I'm seeing for 2025 is that companies that are embracing AI are going to look for more confidence. They want to know if I'm going to put AI into my chat bot and put that out on my website. How do I know it's not going to give away private information? How do I know it's not going to make up stuff, right? Just say violent things, for example, and that's what we're trying to do with actualization AI is to give you that confidence, and we're doing that by providing testing tools. So let's say that you have an AI product, a chat bot, and you want to be able to test to make sure that it's not going to violate privacy, right? Well, our tool helps generate test cases that are customized to your use case so that you can find out when your language model or when your AI fails and fix those failure points before you actually put it out into the market, right? And what we're trying to provide is some confidence that you have actually tested. You've been a actualization AI approved before you put that product out, and it does something embarrassing that might you know or might might leave your company liable. So we are NSF, SBIR funded National Science Foundation, and that funding started in late 2024 so we're very early stage, but you know, this is a University of South Florida spin off and an application for AI technology that think could benefit a lot of people.

Glenn Beckmann:

Yeah, that's fantastic. So taking a small pivot, but maybe not too large of a pivot. I was really interested to see Microsoft introduced a couple of weeks ago, their newest quantum chip, Majorana one. Now Microsoft, being Microsoft, they've gone so far as to claim Majorana one is a fourth state of matter. It's not a solid, it's not a liquid, it's not a gas, it's a fourth state of existence. From a marketing perspective, that's legit. I mean, that's that is hyperbole at its purest farm. Ignoring that for a second, what are your thoughts on something like that and its potential effect, not necessarily on quantum computing, but on AI, will it make AI building better, faster?

Dr. John Licato:

Yeah, it is really exciting. You know, when we think about what is going to come next, so AI changed the world. Got everybody excited, and we think about what's going to come next, what's going to be the next big change, right? Quantum computing? Quantum Computing is definitely one of those that's on the horizon, along with major advances in robotics and so on. I'm not a physicist, so I don't know if the fourth state of matter claim is pulls any water. I think there already are more than three states of matter. I think plasma is counseling. Imagine

Glenn Beckmann:

that the Microsoft marketing guy didn't tell the truth. Yeah,

Dr. John Licato:

I don't know if he has a physics degree, but we'll have to check it anyway. But so anyway, yeah, the possibility of quantum computing is something that AI researchers are paying attention to. It's been known for quite a while that if quantum computing can scale because the way that it does computations allows for many computations to go on in parallel that could break a lot of you know cryptography algorithms that We use a lot of ways that we use mathematical limitations to protect our data, right? Um, if quantum computing can actually perform the computations that we hope it does, then it might be that we can break, uh, credit card encryption, uh, reasonable time, right, which, which is a huge, huge problem, right? Um, lot of systems are are built on on the assumption that it's not breakable, right? The good news is that there are a lot of people working on what we call post quantum cryptography. So they're trying to figure out, if we do have quantum computers, how do we make even tougher encryption so that even Quantum. Computers can't break it. And quantum computers do have some limitations. So they can't just straightforwardly, run any algorithm, any AI algorithm that you throw on them. There's limitations in how you can actually read the results of a quantum computation, because, you know, with qubits, if you read the result, then it collapses it down to so that the superposition no longer holds. And, you know, and, and, yeah, the it seems that one class of algorithms where quantum computing might bring the most benefit is certain types of optimization. And optimization is, I mean, that's, that's all AI. That's AI is always doing optimization solving, right? So AI could be something that benefits quite a bit from quantum computing. So we're paying attention to all these advances.

Glenn Beckmann:

So I've only briefly read a little bit about it, but you probably can touch on it a little bit more deeply. The power requirements, or AI, are, in a word, massive, and to the point where I see companies, you know, some of the tech giants actually considering building their own power plants,

Dr. John Licato:

nuclear power plants, to power the AI,

Glenn Beckmann:

what are your thoughts on that? And, you know, and there's obviously the corollary, the effect on the planet, that kind of power consumption and the need to create that power to run those

Dr. John Licato:

things, yeah, yeah. The So, just to kind of put those into into numbers, it's been estimated at GPT three to train it once cost about$100,000 just in electricity costs, right? But remember, I said that it's not just a matter when you when you create an state of the art AI system, it's not just a matter of training at once. And then all of a sudden, got, you know, new, new benchmarks being broken, right? You have to train it. And then you figure out something doesn't work, and then you go back and tweak one of the parameters, and then train it again. And you got to keep doing that, a little bit of trial and error, right? There's, there's a little bit of art to it, too. You might train it 1000s of times, and then you realize that none of them work, and it's just there's no way to guarantee that, right? So if you multiply that $100,000 cost by however many times it took to train it, that's already a lot of electricity used, and that's just one company, and that was GPT three. GPT three is estimated to be, you know, like anywhere from 10 to 100 times smaller than GPT four. And you know, we're on GPT 4.5 now, right? And then that's one company. So now you've got all the other companies, Lucy and deep seek. So the power costs are massive, and there is a lot of research in how to do the computation more efficiently, right? However, the trend in machine learning and deep learning in AI is that whenever they give us more powerful hardware and they allow us to do more computation with with less money and less energy, we find a way to fill it up again, right? And there's no reason to believe that's not going to change it. They give us more efficient power consumption. So I don't think that's going away. That their AI is such a powerful asset for any country or organization to have that it's it's it's going to be an arms race, right? If, if one company says, if one country says, Well, we're going to make it so AI can only use this much power, and that's it, right? Then they're going to fall behind quickly in the in the arms race. And that's, yeah, that's, that's not something that I think countries are willing to do right now. We

Glenn Beckmann:

just have to figure out how to generate massive more amounts of power and do it cleanly, right? Yeah,

Dr. John Licato:

that's, that's the goal, but not only because of AI, right? And, you know, oh yeah, electric cars and just general power consumption is going up. So, yeah,

Glenn Beckmann:

for sure. So as someone who's both researching and building AI, what's one development in the field that you're really excited about? Oh, yeah,

Dr. John Licato:

it's so hard to not so hard to limit that down, right? Oh, and we'll give you two, okay, that might make it a little easier. So, yeah, I'll focus on things that we're anticipating for 2025 so one of them is, as I already mentioned, the increased attention to security for models. You know, a lot of companies are realizing that AI is not magic, that it's incredibly powerful, but it's not magic. You have to, you have to test it. You. Have to find out what its limitations are. You have to account for those. And I think people are going to be looking for a lot more confidence building solutions. So that's what we're trying to provide with actualization AI. Another thing that is anticipated with this year is the rise of agentic AI, so that's an agent is essentially, you can think of it like a classic, large language model, except now it has access to tools. So if you it can search Google on its own, it can fill out websites, right? There are agent frameworks emerging so that the AI can decide, can, you know, move the mouse around and click on things and fill out forms on the what on your computer? Right?

Glenn Beckmann:

Job seekers everywhere are jumping for joy right now,

Dr. John Licato:

it's, you're gonna see that. I'd say a month, I'd say a month from now, right? Yeah, you're and what makes them different is they have a higher level of agency. So it's not just a chat bot that you give more tools to, but you give it the ability to decide when to use those tools more agency. That's where the agentic hype phrase comes from, right? So we're going to see more of that, and we're going to see frameworks to make it easy to use. So you know, if I, if I tell Siri to search the web, it can already do that, right? But if I tell Siri go to this website and copy and paste this text and then put it into this PDF and then print it out, right? Those are the kinds of things that are in the agentic space. So, I mean, the the increase in capability of the tool, the AI tools that we already interact with, is gonna, I mean, if it hasn't exploded already, it has, just imagine what it's gonna look like. Yeah,

Glenn Beckmann:

I think it's just gonna be an ongoing series of explosions from now until however many years from now? Yeah, we

Dr. John Licato:

haven't hit the peak yet. That's, that's, that's for sure. Well,

Glenn Beckmann:

I want to wrap up today with a question that I've kind of wrestled with myself for a number of years, and now, with the acceleration of AI across literally everything, I've walked right up to the line of despair. In some cases, I have two daughters, 25 and 22 years old. I've tried to coach them through their formative years, the social media phase, you know, and tried to tell them about the artificialness of those platforms and the deliberate manipulation and things like that. Now, with the ubiquity of AI, I've been telling them, the greatest challenge they their friends and the younger generations are going to face in the future is to be able to tell what's real and what's not, and also to care that there's a difference, because the lines between those two worlds are not Only blurred, they're almost non existent anymore. What do you think? Am I being fatalistic? What do you think about it? Yeah,

Dr. John Licato:

it's, I mean, are we? Are we just being, are we just being old men about this and saying more than I know that this, this insistence on reality is, you know, important, and the new generation doesn't, doesn't care about, you know, I saw, I saw a comment on the social medias, right? That That said, Oh, man, it's so funny when people just lie for no reason, right? And you see this on, you know, Instagram posts, where someone posts a video clip, and then the comments will will say, What movie is this from? And people will just put random, random movies, just, you know, just to be funny, right? Whatever. But, but there is a, there is a sense in which it doesn't even matter whether what you say is true anymore, right? It's a, it's, it's, it's funnier if it's not, the confusion that it causes is fun and okay, maybe that's maybe that's just, maybe that's just a thing to do when you're young, right? But there is a deeper problem that we need to be aware of, which is that sometimes being accurate matters, right? And because it's going to be so easy to create false versions of everything that we typically use to verify truth, like videos, right? Videos aren't quite there yet, right? I don't, I don't know if the ability to generate realistic videos is going to accelerate that much this year, maybe maybe five to three years, right? But it's definitely coming right now. There's definitely going to reach the point where you can't tell the difference. There is reason to believe that we are never going to be able to tell the difference between. Mean AI and human content. There's deep theoretical reasons why that may be the case. I can, I can explain it briefly so we have this paradigm in AI called adversarial learning, right, where you have a generator, something that can create an image of cats, right, and then a detector that can tell whether an image of a cat is a real or a fake one, right? Okay, so the problem is that once you have these two systems and they have you have the ability to train them. You can have them operate in an adversarial way. The generator generates images, the detector tells the differences, and then they learn based on that. And then the generator learns how to do more realistic images of cats, and the detector learns how to tell the difference between those and they just keep iterating, right? That's an arms race. So they keep going. They keep getting better and better until a human can't tell the difference between the what the generator created, and an actual human, an actual image of a cat. But as they get better and better, they start to reach this point where the signal disappears. There's no longer any variance that the generator and the detector can use to actually tell the difference between you, because the variance approaches the variance that you would expect in reality, and there's no longer any signal, so it could actually tell the difference. So you know that same pattern can exist with videos text, right? It could just be that we're already at that point with text generation where I show you a random piece of text you can never say with 100% confidence that this is chat GPT generate. So what do you do? What do you say to the to kids who are growing up and uh, because

Glenn Beckmann:

Don't, don't. We need them to be able to tell what's real and what's not real.

Dr. John Licato:

Yeah, we need some ability to tell, but I think that we have to accept that That ability is just not going to exist, and we have to think about what that world's going to look like I think it's even more important to teach them to value the difference, right? That is something that I don't know how to do. I don't know how to get people to actually care that what they're looking at is fake. Maybe that's just one of those old man things, values, I hope

Glenn Beckmann:

not. I'm gonna go crawl off into the corner corner and curl up in a ball now. Yeah, nothing's real. John, thanks so much. It's always great having you on and look, I can't wait to see your story on the on the new Be bold campaign and tremendous success to you. And cake, yeah, and also to actualization AI and doing lots of great things. And USF is lucky to have you, and we're lucky. We're really grateful you're willing to share some time with us. Always happy to be here. Many thanks today to Professor John licatto from the USF College of artificial intelligence, cybersecurity and computing, or as the close personal friends of the college call it cake. It's a new college created here at USF less than a year ago. John is playing a key role in its creation, and he's also one of the stars of the new USF Be bold marketing campaign. Keep an eye out for him on all your screens. It's been a great update on what's going on with USF, new college, as well as a deeper look behind the current headlines in the AI space, John, we look forward to the next time we have you on the podcast, assuming, of course, we'll be able to afford you anymore. Next week, on at the boundary, we'll gather together some of our thoughts on the recently completed Tampa summit five. We started this after our previous conference, and it was a great success, though we thought we'd try it again. Next week, we'll have a round table conversation about key takeaways from Tampa summit five. If you don't want to miss it or any of our future episodes, be sure to subscribe to the podcast on your favorite podcast player. That's going to wrap up this episode of at the boundary. Each new episode will feature global and national security issues we found to be worthy of attention and discussion. I'm Glenn Beckman, glad to be with you today. Thanks for listening, and we'll see you next week at the boundary. You

People on this episode

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Fault Lines Artwork

Fault Lines

National Security Institute
Horns of a Dilemma Artwork

Horns of a Dilemma

Texas National Security Review
War on the Rocks Artwork

War on the Rocks

War on the Rocks
The Iran Podcast Artwork

The Iran Podcast

Negar Mortazavi