At The Boundary

The Hidden Security Risks Inside Artificial Intelligence

Global and National Security Institute Season 4 Episode 117


Text the ATB Team! We'd love to hear from you!

In this episode of At The Boundary, Dr. Andrew Whiskeyman, a Senior Non-Resident Fellow at GNSI, speaks with Ryan Gutwein, a data security and compliance expert, to explore how rapid advances in artificial intelligence are reshaping national security. The conversation breaks down the four technological layers AI actually depends on (data, model, compute, and integration) and why each introduces unique security risks. Gutwein highlights threats such as data manipulation, data poisoning, parameter theft, and malicious code injection in open-source environments, and explains why they matter for both military operations and everyday technology.

Whiskeyman and Gutwein also discuss the importance of secure-by-design incentives and the case for accelerating deployment of large language models, expanding AI infrastructure and cooling capacity, and scaling drone development. Altogether, the episode offers a clear, approachable look at how AI is changing modern security, and what the U.S. must do to stay ahead.

GNSI on X
GNSI on LinkedIn
GNSI on YouTube


At the Boundary, from the Global and National Security Institute at the University of South Florida, features global and national security issues we've found to be insightful, intriguing, fascinating, maybe controversial, but overall just worth talking about.

A "boundary" is a place, either literal or figurative, where two forces exist in close proximity to each other. Sometimes that boundary is in a state of harmony. More often than not, that boundary has a bit of chaos baked in. The Global and National Security Institute will live on the boundary of security policy and technology and that's where this podcast will focus.

The mission of GNSI is to provide actionable solutions to 21st-century security challenges for decision-makers at the local, state, national and global levels. We hope you enjoy At the Boundary.

Look for our other publications and products on our website publications page.


SUMMARY KEYWORDS

Artificial intelligence, supply chain, data manipulation, model security, compute layer, integration layer, cybersecurity, US Air Force, generative AI, national security, open source, hardware security, drone technology, AI acceleration.

SPEAKERS

Ryan Gutwein, Dr. Andrew Whiskeyman, Jim Cardoso

 

Jim Cardoso  00:12

Hello, everyone. Welcome to this week's episode of At the Boundary, the podcast from the Global and National Security Institute at the University of South Florida. I'm Jim Cardoso, Senior Director for GNSI and your host for At the Boundary. On today's episode, we'll be discussing artificial intelligence with the author of our latest GNSI decision brief, Ryan Gutwein. AI is admittedly an oft-discussed topic, but today's conversation has a bit of a twist. More on that in a moment. First, a couple of notes. We published our latest newsletter last week. This issue contains a story about our Journal of Strategic Security, one of the world's most highly ranked military studies journals, surpassing 2 million downloads, and a bunch more. You can find it online, but I recommend you simplify your life and sign up on our website to receive it in your inbox. We're closing in on the deadlines to apply for a couple of really exciting student opportunities this summer. GNSI will fund airfare, lodging, and tuition expenses for both programs to remove barriers for potential applicants. You'll find application requirements for both on our website, and we'll drop links in the show notes. First, we'll be sending a group of USF students to Washington, DC, in mid-May for a week of mentoring and network building in the nation's capital. The DC experience is shaping up to be a signature GNSI beyond-the-classroom event that aligns with our focus on preparing future practitioners. For USF students: that's you. The deadline to apply is February 15. Later this summer, GNSI is again partnering with the Cambridge International Security and Intelligence Program for an incredible study abroad opportunity. The deadline to apply for that one is February 20. Last year, two students from the GNSI Future Strategists Program made the trip, and they wrote a blog for us about their experience. Finally, a quick reminder that the St. Petersburg Conference on World Affairs kicks off tomorrow night, February 10, as NASA astronaut Nicole Stott delivers her keynote speech. Nicole was one of our guests on the podcast last week, and if you haven't listened to that episode, it's a great conversation. The rest of the conference, with the theme of outer space, international collaboration and competition, begins on February 11 at the USF St. Pete campus. Check the show notes for more information and to get registered. Okay, on to today's featured interview. Ryan Gutwein, a US Air Force veteran, serial entrepreneur, and cybersecurity executive, is the author of our latest GNSI decision brief, in which he tackles an overlooked aspect of artificial intelligence: the supply chain, and specifically how vulnerable it is. We're also bringing back Dr. Andrew Whiskeyman to conduct the interview. Andy is a GNSI non-resident senior fellow, an adjunct professor at Syracuse University, an adjunct instructor at the Air Force's Air Command and Staff College, and a former C-suite-level strategist. The two of them have a terrific discussion as they dive deeper into Ryan's decision brief, Securing the AI Supply Chain: Safeguarding US Advantage in the Age of Generative AI.

 

Dr. Andrew Whiskeyman  03:38

Ryan, welcome. It's a pleasure having the opportunity to have this conversation with you today on a topic, or topics, that I think are really, really important when it comes to national security especially, but also local and personal security, and I'm sure some of this is going to touch all of those levels. I know at the macro level we're really focused on the national piece of it, though. I'd like to start, because we've had the opportunity to have a conversation already, but for our listeners: can you tell us a little bit about yourself, how you got into this field, and then a little bit about what you think are the most important points that we need to be thinking about and covering? And then we'll take the conversation from there. So over to you, Ryan.

 

Ryan Gutwein  04:23

Sure. Thanks, Andy, for the intro. It's an honor to do this podcast with you and with GNSI hosting, discussing really critical points that I think are going to define the next warfare. So I'm a prior Air Force veteran. I was a security forces member, in from 2005 to 2014. The ops tempo was extremely fast at the time: gone nine months, come back home three months, gone nine months. And we did different, disparate things within security forces when deployed. I did detainee ops at Camp Bucca, Iraq, and some area support operations transporting detainees from Camp Bucca to Abu Ghraib, and obviously a lot of physical security from that standpoint. And then all throughout Afghanistan, Shindand and Herat province specifically. So that's kind of what I did for nine years, right? A lot of physical security, deploying downrange, doing police transition team work in Iraq, and other things. Then once I got out, you know, I had gotten certified in cyber while I was in, and I was really interested in that, because when we were deployed we worked in DOIM shops, Directorate of Information Management, where you're basically testing and evaluating the technology that's in the field, making sure it's meeting security requirements, like jamming technology. We've got to make sure we can't get jammed by our adversaries, so we would do all that testing and evaluation in a DOIM shop, they called it. That's really where I first got experience with the technology, and with the really bureaucratic problem of the authorization to operate within the Department of War. When I got out, I went to Central Command here at MacDill and led the foreign military sales program, where we would help allied countries develop weapon systems, like Qatar, UAE, Kuwait, Jordan. We would go to these countries that were developing weapon systems that might have US crypto on them, and we would train them on the risk management framework, on how to securely develop their tech to meet the requirements. Then I started getting into industry, doing things like FedRAMP and helping companies within the defense industrial base go to market a lot faster. That's what I've been doing recently: helping companies that are developing weapon systems or specific software capabilities get to market faster and get through that bottleneck. And I think some of the key things now, as we see AI moving extremely fast: you see a lot of the executive orders getting pushed out into the public around AI safety and AI security. Most recently, Secretary Hegseth sent out a memo on January 9 around AI acceleration, meaning there was some verbiage in there around getting foundation models to the warfighter within 30 days. I think that's great. I think you need to build specific security guardrails into that 30-day window; I think that's totally possible. But, you know, in my paper I discussed the different layers, the data, the model, the compute, and the integration layer, and how it all plays as a whole ecosystem in this problem, right?
And then obviously there are the strategic geopolitical issues out there with China and Russia. Obviously China is building their own AI stack, with Huawei chips, Cambricon for neural processing, and SMIC, the Semiconductor Manufacturing International Corporation, which is CCP-owned; the corporation was actually formed out of the Cayman Islands, which I thought was pretty interesting. But I think it's also pertinent that you don't need to be a major power anymore. You're seeing that in the Ukraine-Russia conflict: you could take out a whole fleet of tanks with a $500 drone. You don't need to be a power to do some damage anymore. So now it's not just Russia and China; yeah, it is, but there are other players, Iran, North Korea, as we know, and smaller players in there as well. And then I know you've talked about information sharing and analysis centers within food and agriculture. I think that's also a big thing for AI. Currently there's not a singular one. There's a Center for Calibrated Trust Measurement and Evaluation that Carnegie Mellon's Software Engineering Institute stood up. It's managed by the CDAO now, but again, they just stood it up, so there's nothing, you know, fully formed. And I also think secure-by-design incentives: we need to incentivize companies within the defense industrial base. FedRAMP and CMMC should not be expensive, right? There should be, you know, other transaction authorities or Small Business Innovation Research contracts out there, so that these companies are incentivized to build secure-by-design capabilities for the Department of War. And then from the cognitive side, this all cascades down, and we'll get into this: you manipulate the data, you manipulate the model, and then on the cognitive side, the intelligence analysts or key decision-makers make wrong decisions because the model is now trained on corrupted data. So I think those are the key kinds of things we're going to hit today; we won't cover everything.

 

Dr. Andrew Whiskeyman  10:56

Yeah, that's quite a bit. First of all, I think it's an absolutely fascinating story. I just find these fascinating with many people: the story of how you start off in security forces and then end up in tech. But security is a mindset. I think that too often people tend to compartmentalize physical security, cyber security (particularly when it comes to software and hardware), and cognitive security, when in reality they're interrelated. Yes, there are different needs within each zone, but there's a mentality of thinking about security that ought to underpin all of it, and I get the sense of that when it comes to artificial intelligence as well. I don't want to fall into the buzzword-ness of it, although it is topical, right? It's a subset of machine learning that's been around for quite some time, at least conceptually, since the 1950s at Dartmouth, with the concept of artificial intelligence and the pursuit of really getting machines to be able to sense patterns and predict patterns. But when it comes to artificial intelligence, Gartner, if you're familiar with them, does a wonderful thing with hype cycles, and you see different hype cycles as technology comes along. When it comes to artificial intelligence and the model that you laid out (hopefully we'll get your paper linked in the show notes), having had the opportunity to read it, I think it lays out a really solid argument. What are the risks that you see being introduced at the moment when it comes to rapid adoption of artificial intelligence? You know, the old adage is better, faster, cheaper: you can get two of the three, but not all three together without exorbitant cost. That desire to adopt AI quickly and to innovate, I think, is good, but it introduces some risks. I think you laid out a couple that you can pull out for us that we can work through.

 

Ryan Gutwein  13:15

Sure. So I think, breaking this down by the layers that I presented: first, the data layer, right? This is your images, your sensor data, you know.

 

Dr. Andrew Whiskeyman  13:25

If I can, I'm going to cut you off for just a second, because we don't have your paper up unless somebody's pre-read it. So if you could just outline your framework quickly and then walk us through it, just to give our listeners kind of a whole-picture concept first.

 

Ryan Gutwein  13:41

Sure. I lay out kind of a framework for how we secure this supply chain, and it's a software and a hardware issue, but I'll break it down by the four layers, right? The first layer is the data layer, and I think some of the risks, or the threats, within this area are obviously around data manipulation and data poisoning, where adversaries could inject manipulated data, and the person that's using the AI may not know that the data is actually corrupted. So data poisoning is a big threat within this layer, and it's the hardest to detect, as I mentioned. And how do you secure the data layer? You need cryptographic provenance, to understand where the data came from, who trained on the data, who had access to the data; data lineage; trusted repositories (again, where did this data come from?); and then just continuous validation. The next layer is the model layer. This is where you have trained weights; weights are like parameters, and there are millions to billions of parameters within these models. And you have weight theft, or parameter theft, where malicious actors could copy or defeat your AI by stealing those parameters. Again, you can't manually inspect billions of parameters. It's impossible. So there's no verification system for that currently. And I think we've seen some examples of this out there. Hugging Face, right? They had open-source models out there that had malicious code injected into them, so people were pulling models off Hugging Face and then, you know, running malicious code. There were a hundred thousand-plus models out there. And then other libraries, like XZ Utils, and other things around models were getting malicious code injected into them. The next layer is the compute. This is the hardware layer, right? This is your GPUs, your processing units. And we'll get into this at the strategic level more, but obviously from the hardware level, you know, it's making sure we're using Nvidia chips here domestically and not, you know, Huawei or any others. Obviously Taiwan does 90 percent of the production of advanced chips right now, in a geographically vulnerable area, so that's a concern, and big companies like Anduril get their chips from them. But obviously we're making strides developing, you know, TSMC Arizona, and Intel has, you know, their DoD secure enclave. So that's the compute layer. Obviously you can still manipulate the hardware at the fab shop, right, wherever those chips are being fabricated; you can manipulate the algorithms there. And then the last layer is the integration layer. This is where you're developing the models in what we call a continuous integration/continuous deployment (CI/CD) pipeline, where you're testing your code, testing your application in a sandbox, before you push it into a production-level environment. So you're going through these checks through an integration pipeline. And I think we saw that with GitHub Copilot in 2024, where a prompt injection vulnerability was exploited, and GitHub was, you know, exploited on that from the integration layer. So that's kind of the framework, the four layers, and some examples and threats within those layers. But I think, again, it's kind of a full-stack problem: not just a software problem but a hardware problem right now, right?
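To make the data-layer point concrete, here is a minimal sketch, in Python, of the cryptographic provenance idea: hash every training file into a manifest at ingest, record where it came from, and refuse to train if anything has drifted. The directory and source names are hypothetical, and a real pipeline would also sign the manifest and track lineage end to end, but the shape is the same.

import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: str, source: str) -> dict:
    """Record a hash and an origin for every training file: the provenance record."""
    files = {str(p): sha256_file(p) for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}
    return {"source": source, "files": files}

def verify_manifest(manifest: dict) -> list[str]:
    """Return every file whose current hash no longer matches the manifest."""
    tampered = []
    for name, expected in manifest["files"].items():
        p = Path(name)
        if not p.is_file() or sha256_file(p) != expected:
            tampered.append(name)
    return tampered

if __name__ == "__main__":
    # "training_data" and "sensor-feed-7" are placeholder names for illustration.
    manifest = build_manifest("training_data", source="sensor-feed-7")
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
    # Later, before each training run:
    bad = verify_manifest(json.loads(Path("manifest.json").read_text()))
    if bad:
        raise SystemExit(f"Refusing to train; provenance check failed for: {bad}")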

 

Dr. Andrew Whiskeyman  18:02

The thing I like about the way you laid it out and made the argument is that it tends to be a more holistic look at one aspect of security, in that you framed each of those layers with the supply chain piece. And I think that's a very useful way of approaching any new tool, right, and particularly artificial intelligence in this case: breaking down where there are vulnerabilities, where there are potentials. It could be a malicious actor, or it could be self-inflicted. With data in particular, you could make an immaculate decision on something, but if your data is bad, you will be perfectly wrong in the decision, or the conclusion, that you come to. And that happens whether you've simply got bad data that you're working from, or whether an enemy or an adversary is sowing bad seeds within your data to poison it. So protecting that, or doing your best to verify it, is an absolutely critical element. Of course, when you're drawing upon multiple open sources, how do you even go about putting that into place?

 

Ryan Gutwein  19:24

Well, I think we don't want to kill openness. I think openness drives innovation, and we've seen it play out over the last few years within warfare, and in software engineering as well. I think you need open source, but there are secure ways that you can develop with open source. You know, the Department of War has developed platforms that companies, DIB companies, can deploy their applications to. I'll give Palantir as an example. They have their Foundry and their ontology. You can bring in an open-source model, put it into their ontology, and they'll process it and verify: hey, the model came from this area, the data is bad. The ontology will go through all that processing for the large language model, or the RAG solution, or whatever agentic AI you put in there. But Palantir is just one company doing this, right? Not a lot of companies are doing that, obviously. So I think there's a secure way to bring in open source and develop it internally, training it on specific enterprise data, or in this case, you know, Department data. And I think the US has an asymmetric data advantage here. As you know, from two decades of military and intelligence operations we have all this data that no military can really replicate, and we need to bring in models that can organize that data so that we can be more lethal with it. Because right now this data is fragmented; it's behind, you know, legal stovepipes. It's invisible to the operators and engineers and industry partners who could help us exploit it and test it. And we need to win with speed, because our adversaries are developing at speed. So we need to accelerate, and the way to do that is through openness with guardrails baked into it. And we can do that through platformization.
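As a rough illustration of openness with guardrails at the model layer (a sketch, not any vendor's actual pipeline): accept only weights-only formats such as safetensors, since pickle-based checkpoints can execute arbitrary code when loaded, which is essentially how the malicious Hugging Face models worked, and pin every artifact to a digest obtained out of band from a registry you trust. The file names and digests below are hypothetical.

import hashlib
from pathlib import Path

# Digests obtained out of band from a trusted registry (placeholder values).
TRUSTED_MODELS = {
    "summarizer-7b.safetensors": "placeholder-digest-from-trusted-registry",
}

def ingest_model(path: str) -> Path:
    """Admit a downloaded checkpoint only if its format and digest check out."""
    p = Path(path)
    # 1. Refuse formats that can execute code on load (pickle-based .bin/.pt files).
    if p.suffix != ".safetensors":
        raise ValueError(f"{p.name}: only weights-only formats are allowed here")
    # 2. Verify the artifact against its pinned digest before it enters the pipeline.
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    if TRUSTED_MODELS.get(p.name) != digest:
        raise ValueError(f"{p.name}: digest mismatch; artifact not in the trusted registry")
    return p

The same pinning discipline applies to libraries pulled into the build, which is the lesson of the XZ Utils backdoor mentioned above.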

 

Dr. Andrew Whiskeyman  21:36

To me, it's going in with eyes wide open, right? There are risks either way, and going more quickly can be good. I happen to agree with you: the openness of the US in particular has been a strength throughout history. There's been a tension between the command economy driving things in a certain way versus more open economies and innovation, but innovation brings different risks and ways to mitigate them, and focusing on guardrails is an important aspect of that. When it comes to the data and the models, though, and the weighting, I know it can be manipulated. I mean, that's just a risk that's out there. But I see a challenge with the human operator in that aspect, when it comes to trust. I see a real polarization there, between almost a blind faith in the answers given by LLMs in particular, versus complete skepticism and distrust. Where, to me, it seems like the golden mean, as it has been in many things, is the healthy way to approach it. Do you have any thoughts on that? Or feel free to disagree with my assessment of where we're at; I think it can be fruitful to have a good dialog on something you disagree on. But if you do agree, what are some ways you think you can build that sort of healthy approach for leaders to really be thinking about those two in particular? Then we can move into the other aspects of your model too.

 

Ryan Gutwein  23:21

Sure. Yeah, I think from the cognitive side you need to change how the leaders within the Department of War are going to accelerate this. Obviously Secretary Hegseth and everyone's on board, but as you know, there are other players, authorizing officials, that need to authorize this tech to even operate. So there's red tape still, even with the memos, right? Memos and executive orders are really just theater, in my opinion. But you need to educate the people that are authorizing this technology so they understand it, and it's a good thing that the CDAO is kind of leading this charge of authorizing AI models to operate as they should, and their AO is very, you know, forward-thinking. So I think that's a good thing. But the entire Department of War in general needs a lot of education on the cognitive side: you know, this data doesn't look right, or, based on the data we've been training it on, it's not responding correctly. And this gets into, again, testing and evaluation, right? The ecosystem is not quite there yet on how we test these things. Obviously the foundation models are already deployed within the Department of War, with OpenAI and Anthropic currently on, you know, government contracts. But I think, to your point on the cognitive side, you need to train the leaders, right? Because it's important that we do get tools to the warfighter, and we've seen innovation stagnate because of the ATO problem so many times, right? And I think that's the concern for the warfighters: can they get the capabilities to be lethal and to save lives and to preserve the ideals of the West? But at the same time, you have risk-averse people that don't want to sign, in case there is, you know, potentially confidential data or intelligence data that gets released to adversaries. But you know, there are guardrails that you can put in place to prevent that from happening.

 

Dr. Andrew Whiskeyman  25:46

Yeah. To me, there have been historical patterns when it comes to the adoption of new technologies, and I see some of those patterns playing out again. One of my biases is that the human person really hasn't changed; we just keep introducing new technologies, and similar problems manifest over them. I'm going to lay out a couple, just for the sake of argument. Again, feel free to disagree if you don't think my interpretation of history is correct on this, and then tease out, from your perspective, potentially some successes you've seen. You know, as we start to close out our brief half hour on this topic, which goes way too quickly, I want to leave our audience with some concrete recommendations that you might have for tangible steps that can be taken, right, to really get after the challenge, not from a band-aid perspective, but from more of a getting-after-root-things-to-improve-on perspective. So if we dial back to World War One, which I would argue is probably the war that saw the most innovation across domains in a short period of time that leaders had to deal with: think about radio, which hadn't existed, telephone, telegraph, submarines, chemical warfare, airplanes, tanks. I mean, each domain saw massive change in terms of the technology that was available for leaders to implement. One of the patterns of failed leadership in that era was taking a new technology and just sort of bolting it onto what was already going on. Thinking of the tank as just another horse, for instance, or the airplane being assigned to the Signal Corps in the Army as just a signaling device, as opposed to thinking about air power as really a domain of operations. When it comes to artificial intelligence, and I think broadly cyberspace, in some ways we're falling into this problem as well. It enhances other things that we do, usually. I use AI all the time for different things, with work, with school, with life, and for some things it just makes things easier. But in some respects, it's something separate, something that needs a different approach, that can't just be slapped on. And when it comes to something as big as the Department of War, when you're looking at organizational change, that's way different, I think, than saying, hey, we've got to train leaders. Yes, we do. But how do we do that within the system? How do we incorporate artificial intelligence into professional military education, into Department of War issues or training exercises, in ways that go beyond just, oh, we've adopted a model and we're kind of running with it? Something that really gets after thinking more deeply about it, to make a change. In that long-winded soliloquy I just laid out, there is a question: in your experience working in this field, where have you seen, in any company or any section you've worked with, sort of a success, where the spark hits, where a tool doesn't just get brought in, but people are free to really think more deeply about it and really innovate and assess risks and mitigations to those risks?

 

Ryan Gutwein  29:17

And I agree with your historical context. I think you could say the same thing after the Cold War, right? They call it the Last Supper: after the Cold War, all the big leaders within the Department of Defense at the time said condense, condense, condense, consolidate, consolidate, consolidate. And then you had, effectively, one company that was building weapon systems, and, as you know very well, you've got Boeing, Northrop Grumman, these kinds of guys, right? So the consolidation really hindered our ability to innovate, I would say, since the Cold War. You had Pontiac building weapon systems back then, and we just don't have that nowadays. And so I see now, with AI, drones and targeting systems and other capabilities being deployed within the defense industrial base, and you're seeing it in real life, in real conflicts, right now. I mean, you saw it in Venezuela, right? No casualties, and we got the mission accomplished. So you're seeing these capabilities play out in real time, and those capabilities have to get operationalized and approved somehow. So I think that's a positive sign, that we've seen good outcomes from AI. But we've also seen the bad: disinformation in the Russia-Ukraine conflict, deepfakes, large language model ISR analysis, and the same thing going on in the Gaza-Israel conflict. But I think that's what we're seeing now within, I call it the American industrial base, not the defense industrial base, because we're building things now in America. I mentioned a couple of companies that are doing additive manufacturing, where they're deploying containers in country with 3D printers inside, and they're developing drones at scale in country. So additive manufacturing is something that I've seen play out in real time within, you know, today's geopolitical landscape, and there are a lot of really good companies doing that. I would say that's one good example that I've seen, but there's a plethora of others, specifically within the drone space and AI, that I've seen.

 

Dr. Andrew Whiskeyman  31:56

And I think your model fits both, from an analysis perspective, as you start thinking about those, with additive manufacturing, right? It's an amazing technology, especially with the advances across the different things that can get built. But what if your data is bad? What if somebody intentionally poisons the data, and you build a perfect part at the edge for logistics, or a perfect building, that's perfectly flawed to collapse at the right time? Or what if the modeling weights used for the calculations are purposely wrong, right? Each of these advances in tech, I see as having very similar points of potential failure, or points where we need to reinforce to mitigate the risk along the way, right? Because it's only as good as the previous step.

 

Ryan Gutwein  32:53

Right. And it is a big problem. You're right, especially with, you know, small mom-and-pop shops that are developing weapon components for the DoD; they're very small. Or it's a big company, too. I've seen that as well, where you have thousands of employees. But you know, the Department of War has this new Cybersecurity Maturity Model Certification, CMMC they call it, where if you don't go through this, you can't even bid on government contracts. So we assess these companies, making sure that the controlled unclassified information is not touching anything on the outside and making sure it's not leaving the boundary. If they are using large language models to help develop their CAD designs, we have to make sure it's in a secure enclave, and that it's just training on the data that they give it. So essentially sandboxing their model for that. But these companies have to go through this rigor of CMMC now, or if they're cloud-based, then they go through FedRAMP. That's kind of the go-to-market for these companies. When you're building for the Department of War, your capability has to have security baked into it. I know compliance doesn't equal security, but at the same time, these requirements are there for a reason. You have to build your capability to meet them. That's why some companies have commercial versions and government versions of their product.
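A toy sketch of the boundary check being described here: before any document marked as controlled unclassified information (CUI) goes to a model endpoint, confirm the endpoint sits inside the authorization boundary. The hostnames and policy set are invented for illustration; real CMMC enforcement involves far more than this, but the sandboxing intent is the same.

from urllib.parse import urlparse

# Endpoints approved for CUI under a hypothetical enclave policy.
ALLOWED_CUI_HOSTS = {"llm.enclave.example.mil", "localhost"}

def check_egress(endpoint: str, markings: set[str]) -> None:
    """Block CUI-marked content from leaving the authorization boundary."""
    host = urlparse(endpoint).hostname or ""
    if "CUI" in markings and host not in ALLOWED_CUI_HOSTS:
        raise PermissionError(f"CUI may not be sent to {host}; use an in-boundary endpoint")

check_egress("https://llm.enclave.example.mil/v1/chat", {"CUI"})  # passes: in-boundary host
try:
    check_egress("https://api.public-llm.example.com/v1/chat", {"CUI"})
except PermissionError as err:
    print(err)  # blocked: endpoint is outside the boundary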

 

Dr. Andrew Whiskeyman  34:31

You know, a while ago I used to listen to Car Talk on NPR. I don't know if you've ever heard them, Click and Clack; it's absolutely hysterical. I love them. But they would always end the show by saying, you've wasted another perfectly good hour listening to us. And I want to say almost the same to our audience: you've wasted another perfectly good half hour listening to us, because we're already almost at time, or we are at time. It's crazy how fast that goes. But before we close out, in the last few minutes, if I can squeeze out just a little bit of time. Of course, if you only had a half hour, you could probably listen to us at one-and-a-half or two-times speed. I do that with some podcasts. Of course, some people talk so fast that you just can't listen to that; it sounds like chipmunks or gibberish. I'm going to ask you for three, if you've got three; even if you only have a couple, that's fine. If you were the Secretary of War, what would be three things that you would say we're going to do immediately to move us ahead in this space? And while you're thinking about that, for the audience: Ryan and I were talking before, based on our last names, he's a Gutwein, "good wine," and I'm Whiskeyman. A lot of people claim to be a whiskey man, but I really am. You truly are. And we were talking about, I know War on the Rocks has already been taken, but some sort of podcast to talk through AI repeatedly, when you have a glass half full of good wine or whiskey. We're missing beer, but, you know, we do have the Yuengling Center right here at USF. So I think we've got something in the works, not to compete with this fantastic podcast that GNSI has, more of an addendum, or maybe we add something to it. But anyway, I've stalled long enough while you had a minute to think about some concrete things you'd like to do, and then we'll close out our time. So, right over to you.

 

Ryan Gutwein  36:25

First off, I like the idea of that podcast. It seems like a match made in heaven, Andy, with our last names. But to answer your question, I think the main three things are probably pointed out in the memo he gave out on January 9 around AI acceleration. I agree with the Secretary that we need to accelerate AI, whether it's large language models or retrieval-augmented generation or agentic AI systems. I think that is important. But also the hardware side: we need to keep accelerating the hardware side as well, so continuing to develop the data centers. TSMC Arizona, I mentioned, $169 billion; Intel has Hillsboro, Oregon, and New Albany, Ohio. So there's a lot of progress going on from the infrastructure side of developing AI infrastructure. But cooling as well; I think cooling is very critical. I mean, one megawatt, which can power 800 to 1,000 US houses, a neighborhood, is in one rack, one server rack in a data center, and 37 to 40 percent of consumption is cooling. And there are some good companies doing that right now. And then, I would say, the drone space, right? I mentioned one company that was doing it at scale. We need more companies like them, because our adversaries are developing cheap drones very fast, and we need to replicate that. So I would say: the AI space, from the software side, we need to keep accelerating that; then the hardware side and the infrastructure, building out the secure enclave infrastructure for the chips and the fab shops; and then the third thing would be developing drones at scale that have these secure AI model capabilities on them, targeting and things like that. So I think those three things, along with kind of the framework I gave on how to secure that. And again, you can't get the capabilities unless an AO can sign off on them. So I think those are my three things, Andy. And thanks; I think we'll build on this and maybe have a glass of whiskey and good wine on our next episode.
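For context on the one-megawatt figure, a quick back-of-the-envelope check, assuming an average US household uses roughly 10,500 kWh per year, lands in the same range Gutwein cites:

# Back-of-the-envelope check on the rack-power figure (assumed average consumption).
RACK_POWER_W = 1_000_000        # one high-density rack at 1 MW
HOUSE_KWH_PER_YEAR = 10_500     # rough US average household consumption (assumption)
HOURS_PER_YEAR = 8_760

avg_house_draw_w = HOUSE_KWH_PER_YEAR / HOURS_PER_YEAR * 1_000   # about 1.2 kW average draw
print(f"Houses per 1 MW rack: {RACK_POWER_W / avg_house_draw_w:.0f}")  # about 830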

 

Dr. Andrew Whiskeyman  38:53

Amen, brother. Thanks, Ryan, for joining me and for laying out your model. I really look forward to hearing other feedback on it as well. Hopefully this is the start of a lovely conversation, and not a one-and-done, as they say. So thank you, and have a great rest of your day.

 

Ryan Gutwein  39:10

Thanks for having me, Andy.

 

Jim Cardoso  39:16

Special thanks to our guests today, Ryan Gutwein and Andrew Whiskeyman. We highly recommend reading Ryan's decision brief; it's available on our website under Publications. Next week on the podcast, we're going to be talking with GNSI Senior Research Fellow Jeff Rogg. We spoke to him last year about his book, The Spy and the State: The History of American Intelligence. We'll again be talking to him about intelligence next week, but this time focusing on the future. Jeff has written a soon-to-be-published decision brief focused on intelligence and technology. The lead sentence says it all: technology has always shaped intelligence, but never has the relationship between the two been as consequential as it is today. Jeff is also one of the key architects of GNSI's April international security experience, which is also built around intelligence and technology and includes a student-led conference, an undergraduate strategy competition, and a career fair. You don't want to miss that episode, or any other episode, so be sure to subscribe to the podcast on your favorite platform. We know you have virtually unlimited choices when it comes to choosing what you're going to listen to, and we're grateful you shared a few minutes with us today. You can find GNSI on YouTube, LinkedIn, and X. Be sure to follow, like, and subscribe, and tell your friends and colleagues as well. Remember, too, to sign up for the newsletter. All of this is on our website, usf.edu/gnsi.

 

Jim Cardoso  40:52

That's going to wrap up this episode of At the Boundary. Each new episode will feature global and national security issues we've found to be insightful, intriguing, maybe controversial, but overall just worth talking about. I'm Jim Cardoso, and we'll see you at the boundary.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Fault Lines (National Security Institute)
Horns of a Dilemma (Texas National Security Review)
War on the Rocks (War on the Rocks)
The Iran Podcast (Negar Mortazavi)