
At The Boundary
“At the Boundary” is going to feature global and national strategy insights that we think our fans will want to know about. That could mean live interviews, engagements with distinguished thought leaders, conference highlights, and more. It will pull in a broad array of government, industry, and academic partners, ensuring we don’t produce a dull uniformity of ideas. It will also be a platform to showcase all the great things going on with GNSI, our partners, and USF.
AI-Powered Cyber Attacks: How Hackers Are Exploiting Human Error
AI is revolutionizing cybersecurity, but human error remains the biggest vulnerability. In this episode of At the Boundary, Joe Blankenship, Chief Data Officer at Certus Core, unpacks the latest AI-driven threats, from phishing and malware to supply chain security risks.
🔎 What you’ll learn:
- How AI is weaponizing cyber threats like phishing and malware
- Why human error is still the biggest cybersecurity risk
- The role of blockchain in securing supply chains & transactions
- How organizations can regulate AI risks without slowing innovation
With AI and blockchain reshaping cybersecurity, understanding the risks is critical.
🎧 Listen now on Apple Podcasts & Spotify.
Links from the episode:
• Register for the FSP Cyber Frontier Summit Here
• Dr. Mohsen Milani's Iran's Rise and Rivalry with the US in the Middle East Book Interview
• GNSI Tampa Summit 5: The Russia-Ukraine War: Lessons For Future Conflicts Conference Video Playlist
At the Boundary, from the Global and National Security Institute at the University of South Florida, features global and national security issues we’ve found to be insightful, intriguing, fascinating, maybe controversial, but overall just worth talking about.
A "boundary" is a place, either literal or figurative, where two forces exist in close proximity to each other. Sometimes that boundary is in a state of harmony. More often than not, that boundary has a bit of chaos baked in. The Global and National Security Institute will live on the boundary of security policy and technology and that's where this podcast will focus.
The mission of GNSI is to provide actionable solutions to 21st-century security challenges for decision-makers at the local, state, national and global levels. We hope you enjoy At the Boundary.
Look for our other publications and products on our website publications page.
Glenn Beckman: Hi everybody. Welcome to another episode of At the Boundary, the podcast from the Global and National Security Institute at the University of South Florida. I'm Glenn Beckman, communications manager at GNSI, here today again to be your host for At the Boundary. Today on the podcast, it's all about our Future Strategist Program and the student-led conference they're hosting in a couple of weeks here at USF. It's called the Cyber Frontier Summit. Lily Shores will be here, and she'll be talking with one of the featured speakers at the conference, Joe Blankenship, co-founder and chief data officer at Certus Core. Before we bring them into the studio, however, just a couple of quick notes. Our latest GNSI newsletter drops this week. Featured in the newsletter will be our latest Decision Brief, a deeper dive into a topic we discussed on the podcast a few weeks ago: the military recruitment crisis. GNSI research fellow Dr. Guido Rossi digs a little more into that crisis and tries to formulate potential answers and solutions for military leaders. If you haven't had the chance to check out the latest episode of our GNSI video series on YouTube, Dr. Mohsen Milani's book talk has become quite the attraction, 32,000 views and counting, as he discusses his latest book, Iran's Rise and Rivalry with the US in the Middle East, with GNSI faculty senior fellow Dr. Randy Borum. If you haven't had a chance to look, I highly recommend you go over there; we'll drop a link in the show notes. Tomorrow, we're publishing another video episode as GNSI strategy and research manager Dr. Tad Schnaufer sits down with Dr. Maria Snegovaya, a senior fellow with the Europe, Russia, and Eurasia Program at CSIS, the Center for Strategic and International Studies. Their conversation is a continuation of her appearance at Tampa Summit 5 earlier this month, where we examined the lessons for future conflicts arising from the Russia-Ukraine war.
If you don't want to miss any of these episodes, we recommend you subscribe to our channel while you're there. Okay, as we told you earlier, today's podcast is all about our Future Strategist Program and their upcoming conference, the Cyber Frontier Summit. That conference is scheduled for April 15 at the Marshall Student Center here at USF in Tampa. Our team at GNSI has been helping support the conference, but the students of FSP have been doing all of the heavy lifting, and man, they've built quite an impressive event. On the agenda: panel discussions about AI in cybersecurity operations, zero trust architecture for critical infrastructure, quantum readiness for protecting data, securing the digital economy, and the intersection of cyber policy, strategy, and modern warfare. One of the most compelling and rewarding aspects of this conference is that the students will lead and moderate all of the panel discussions. A student research poster event will also be featured during the conference. In addition, GNSI and Cyber Florida executive director, retired Marine Corps General Frank McKenzie, will be the keynote speaker. You can see the complete agenda and list of speakers on our website; we'll drop a link to that in the show notes. One of the featured speakers on the "Securing the Digital Economy: The Future of Trust and Transactions" panel will be Joe Blankenship. He's the co-founder and chief data officer at Tampa Bay startup Certus Core, a software company started by veterans and dedicated to changing the way people and machines interact with data. Let's bring him into the studio now, along with Lily Shores, an officer with the GNSI Future Strategist Program and one of the primary planners for this upcoming conference. She's working toward her undergraduate degree in international studies at USF and is an aspiring national security professional. I'll hand it over now to Lily.
Lily Shores: Thank you so much for the introduction. We are welcoming Joe Blankenship, the chief data officer of Certus Core, and we are going to talk about cybersecurity. So my first question: as the digital economy continues to grow, what are the emerging cybersecurity threats that businesses need to be most vigilant about?
Joe Blankenship: Oh, man, this is really a question and a really good topic for discussion, especially in 2025 going into 2026. It's two years into the gen AI trend, and it's only going to move forward with more speed in the coming years. So I would say that when it comes to digital economies, specifically businesses, the cybersecurity threats we have to worry about are the same old, same old human vectors. They're going to be the biggest things: malware, phishing, ransomware. Systems have become much better in terms of their resiliency against cyber threats, but it's still people clicking on the wrong things, getting into the wrong spaces, looking at the wrong sites. Those are going to persistently be the main vectors through which threats occur. And once again, there are no easy solutions to that, especially since AI is getting better at generating automated responses, and these AI agents in the loop are presenting new paradigms, essentially, in how people engage with AI and how they can be tricked by it. There are plenty of examples of cybersecurity firms taking big leaps with AI to figure out what the best placement is for them. ReliaQuest, a local company, is doing that right now. But once again, it's early days. We're roughly two to three years into gen AI and LLMs and their utilization, and we're still trying to find the best paradigms through which we can actually use these systems to counter bad actors, black hats, and malicious software. AI has presented new paradigms in how malware is generated and ransomware is coded and deployed, and it doesn't help that AI is producing more human-readable emails and other types of social media content that humans are engaging with now.
So I don't know, from your perspective, what's been on the more nontraditional routes, social media, stuff like that, if you've seen trends there. But I know that within at least traditional business, enterprise-type environments, spam emails, phishing, smishing, ransomware, and malware have proven to be persistently more tricky as time has gone on.
Lily Shores: So could you expand on what you mean by the "human vector" part?
Joe Blankenship: Yeah, absolutely. Once again, when it comes to enterprise security, people are using information and communication technology infrastructure: email, chat, content management systems, knowledge bases, and their data infrastructure, databases, data APIs. We're assuming that the humans in the loop, IT professionals, data analysts, nontechnical leadership, are using these technologies responsibly, within the training guidelines for cybersecurity practices inside their organizations. What I mean by human vectors is essentially those people and their interactions with the technologies they're using inside the organization: their laptops, their devices, even their cell phones. In many cases, we allow people to use their personal devices to interact with and gain access to business and organizational infrastructure. So the challenge is: can you trust their personal devices? Do they have proper authorization and trusted infrastructure to know that their identity is being protected and essentially delineated, personal stuff versus business stuff? It's easy for people to copy-paste from one thing to another. It's just simple stuff humans do to get their jobs done quickly and efficiently, to get the effect for leadership and organizational goals, but as a byproduct it essentially derails the security practices that were intended to keep people from making missteps while trying to gain expediency elsewhere. I know that in defense, I see people copy-pasting things into the wrong systems and accidentally transferring things between classification levels. It happens all the time. But even within more traditional enterprises, firms like Deloitte and Accenture, it's still very tricky, because they have R&D systems.
They also have more business enterprise systems, and they still need to maintain segregation for data privacy, personal information, and their techniques, tactics, and procedures. There are a lot of things you wouldn't think would be indicators or vectors for hackers to gain access to systems, ways to essentially leverage what people do and how they do it to get in, but it happens all the time. Bad password practices: using passwords that are just too simple, while the password systems and checkers within the enterprise don't catch the simplicity, or don't catch that they can be brute-forced. So there are a lot of things in terms of just human activity that are still the main threats to business infrastructure. And I think within the context of the digital economy, and businesses looking to be fully digitized, especially now that, post-COVID, many people are fully remote and use a mix of personal and company devices, these are just complicating factors in terms of the cybersecurity threats a company can incur as a result of even emergent digital economy paradigms.
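(A hypothetical aside, not from the episode: the "too simple for the checker to catch" password problem Joe describes can be sketched in a few lines of Python. The guesses-per-second rate and the small dictionary below are illustrative assumptions, not figures from the interview.)

```python
import string

# Toy password checker: estimate brute-force cost instead of just length,
# flagging "simple" passwords that a length-only policy would accept.

GUESSES_PER_SECOND = 1e10  # assumed offline-cracking rate, for illustration
COMMON = {"password", "password1", "letmein", "qwerty123"}  # tiny stand-in dictionary

def charset_size(pw: str) -> int:
    # Size of the alphabet an attacker must search, based on characters used.
    size = 0
    if any(c in string.ascii_lowercase for c in pw): size += 26
    if any(c in string.ascii_uppercase for c in pw): size += 26
    if any(c in string.digits for c in pw): size += 10
    if any(c in string.punctuation for c in pw): size += len(string.punctuation)
    return size

def crack_seconds(pw: str) -> float:
    if pw.lower() in COMMON:
        return 0.0  # dictionary hit: effectively instant
    return charset_size(pw) ** len(pw) / GUESSES_PER_SECOND

def is_weak(pw: str) -> bool:
    # Weak if a brute-force search would finish in under a year.
    return crack_seconds(pw) < 86400 * 365

print(is_weak("Password1"))        # True: passes naive policies, still weak
print(is_weak("9!xK#q2&Lm@4fZ"))   # False: long, mixed-charset password
```

This is only a sketch of the idea; real guidance (e.g., checking against breached-password lists) goes well beyond a charset estimate.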
Lily Shores: So how would you balance the need to keep efficiency while also keeping the infrastructure secure?
Joe Blankenship: Oh, it comes down to training. It comes down to personal responsibility, to making sure people can be trusted with what they do and how they do it. But I'd say education is definitely the first step. Something like KnowBe4 has good educational and practical use cases for how enterprises train human beings to use their systems, while also enabling nontechnical people to engage with the technical people inside the organization, the people in charge of maintaining the infrastructure and all the technologies that keep the company running, so they can better communicate what they're seeing, how they're seeing it, and how to best counteract malicious actors going after the overall enterprise infrastructure. Secondarily, it would be making sure you have good software in place on company systems and people's devices, so that certain actions cannot be performed within certain contexts of using applications for company use. That way you're helping keep personal and professional activities separate, since people might be using a personal device. That would be the second part. The other part is leveraging AI. You can't escape AI nowadays, so you're going to have to find creative ways to enable people to understand what is at high risk, using AI to help them understand the risk and counteract risky activities, within the context of both the digital economy and how businesses need to observe how people are working within these emergent paradigms.
Lily Shores: So there are a lot of concerns around AI, and a lot of people are actually scared to use it. How would you introduce the use of AI to someone who is more cautious about implementing it in their personal life or their work?
Joe Blankenship: Oh, start with the basics. AI is not threatening. It's not Skynet; we're not battling Terminators here. These are NLP algorithms that are very nuanced and trained on a lot of data to produce a very good effect: human-readable responses, good prompt engineering. But yeah, it's starting with the basics, starting with a focus on what they do and how they do it. What are their business goals? What's the vision and mission? How does AI fill the gaps in those mission and vision goals that humans can't, or how can human activities be better enabled through AI? So I would start small, start with practical effects. You don't need to conquer the world in one day with AI. It's going to be an incremental and gradual process for organizations to learn both what it is and how it can best be leveraged to effect within their organizations.
Lily Shores: And then couldn't AI also be used to attack cybersecurity systems?
Joe Blankenship: Oh, absolutely, it already has, to my knowledge: the ability for people to prompt out malware and other bad scripts from AI access points, things like ChatGPT. Even Anthropic's Claude has been really fine-tuned to be a very safe LLM, but even with that, you can use prompt engineering to essentially trick these things into telling you exactly how to produce malicious things that could be used to break people's security practices. Like I said, malware and ransomware are just two examples of scripts that could be generated from these LLMs and deployed within a matter of minutes if someone wanted to. I think it's twofold. It's partly on human beings to interact with these things in an ethical manner, but it's also a heavy lift for the ChatGPTs and the Facebooks and the Anthropics of the world to really constrain what those foundation models do, and how people can leverage those foundation models within ethical boundaries of utilization, without keeping people from being innovative and creative with those LLMs at the same time. And like I said, it's early days, so there are no good answers to that yet.
Lily Shores: True. So when it comes to protection, as you said, how could blockchain technology be integrated into cybersecurity solutions to protect the digital economy, and what are some specific sectors where it has already shown promising results?
Joe Blankenship: Oh, man, yeah, once again, good questions. I would say, just initially, think about blockchain and decentralized technology writ large. Keep in mind, blockchain technologies are an ensemble of a bunch of different technologies pushed together for an effect. But the two big things right off the top of my head would be data protection and transaction verification. Those two have been shown to be viable and very useful in the context of decentralized technologies like blockchain, in more conventional and traditional business use cases. The specific sectors that I think have benefited, or are going to benefit, probably the most are supply chain management and vendor due diligence. In terms of organizations knowing who to do business with, and how those organizations have done business in the past with other organizations, it's going to be increasingly important for people to do safe things with other safe organizations, to avoid hacks like Target experienced, like Sony experienced, and to move toward broader conversations on how data is becoming more and more critical as it's used to train and fine-tune AI agents, but also just basic digital economy practices: I bought this, it was spent here.
Tracking those transactions, and better solidifying a chain of responsibility and a chain of provenance and lineage for what is happening inside the digital economy, is where blockchain helps you best preserve that canonical knowledge and protect against malfeasance within those transactions over time, people going back and changing records and so forth. With supply chain management as well, and I guess this leads into the regulatory stuff too, we can see that nation-state actors, governments around the world, are leveraging things like sanctions and tariffs, and it's becoming more and more important to track the effects of these things, both in terms of the broader macroeconomics, looking at how the digital economy is being affected and how it's affecting larger markets, but also how that's affecting smaller parts of the economy: just-in-time manufacturing, logistics, multimodal transportation, container shipment. And when it comes not just to vendor due diligence, looking at what organizations and people have done with other organizations, but also to how that affects supply chain management, and how broader macroeconomic effects flow through smaller economic shifts where the physical world meets the digital, I think blockchain technologies, and decentralized technologies writ large, are going to be critical in gaining insights into how those things are done both practically and more safely in the long term.
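(An illustrative aside, not from the episode: the tamper-evident "chain of provenance" Joe describes can be sketched as a minimal hash-linked ledger in Python. This is a toy model, not any production blockchain, and the vendor names are made up.)

```python
import hashlib
import json

# Each record stores the hash of the previous record, so "going back and
# changing records" breaks every hash from that point forward.

def block(prev_hash: str, payload: dict) -> dict:
    body = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return {"prev": prev_hash, "data": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain: list) -> bool:
    # Recompute every hash and check each link to the previous record.
    for i, b in enumerate(chain):
        body = json.dumps({"prev": b["prev"], "data": b["data"]}, sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != b["hash"]:
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [block("genesis", {"from": "vendorA", "to": "vendorB", "amt": 100})]
chain.append(block(chain[-1]["hash"], {"from": "vendorB", "to": "vendorC", "amt": 40}))
print(verify(chain))            # True: intact ledger

chain[0]["data"]["amt"] = 9999  # someone edits an old transaction
print(verify(chain))            # False: the tampering is detectable
```

Real blockchains add distributed consensus on top of this linking, so no single party can quietly rewrite and re-hash the whole chain.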
Lily Shores: So when you say "decentralized economy," what do you mean specifically? Because I feel like a lot of people will hear those words, hear "blockchain," but they don't necessarily know what it actually means, or the goal of this technology and why it's so important as we move forward.
Joe Blankenship: Yeah, absolutely. So once again, this is my own terminology; I'm sure there will be feedback from folks in the audience. If you think about centralized economies, these are things that are regulated by nation-states, or by state governments at different scales, mainly because there's an implication of taxation, and of regulatory purview over what happens within certain scales and political boundaries of an economy. A decentralized economy or a decentralized technology would be something that kind of supersedes that: Bitcoin can be interacted with across national borders in a way that does not require states or regulatory actors to be inside that loop. I think as a baseline, that was the promise of Bitcoin and other cryptocurrency technologies, and that's still the promise of things like Ripple and other projects promoted as decentralized technologies, ones that don't focus so much on the cryptocurrency aspect, but on the decentralized, canonical preservation of knowledge of how transactions occurred over time. If you look at it from that perspective, then you have a better footprint for tracking and keeping safe transactional knowledge and information, while also keeping people's information transparent at a scale that protects personal data privacy but also encourages oversight in a way that's not invasive, that would be a good word for it. So I think that's the distinction, and the implication of going decentralized versus something that's controlled in a more centralized regulatory manner.
Lily Shores: And I'm just wondering how blockchain technology interacts with AI, since they're both emerging technologies: how do they interact with each other, or almost counteract one another?
Joe Blankenship: Ooh, okay, so working with and working against. Once again, my recent knowledge here is not very deep, but I do know that, like any other technology sector, they're looking for best practices and for the most practical ways to apply these things over time. In application spaces that are more about cryptocurrency, more about leveraging the digital economy aspects, I think the interaction has been fairly minimal, in that they use AI for the same things most people do, at least from the technology standpoint: copiloting code, producing things like decentralized contracts, but also summarization, understanding broader effects within those ecosystems, and helping people engage, from a nontechnical perspective, with the more technical aspects of those economies and communities of practice. As for working against: I'd say it's people running too fast toward a finish line on the assumption that AI is going to close a technological gap that still has no clear answer from a human perspective, and then rushing to do something with an LLM or an AI agent, which only further complicates the initial challenge.
And as a segue from that, at least from my personal experience building technologies that enable better artificial intelligence, better generative AI, better LLM utilization: I think there's a presupposition among most customers in that space that AI is going to magically solve some kind of problem that they have no clear way to contextualize themselves. Within an organization, whether it's an NGO, a nonprofit, a for-profit business, or a digital economic segment writ large, if an organization cannot clearly describe or contextualize an issue within its vision, mission, and goals, or has no way to clearly connect an objective to a practical methodology through which that objective can be realized, an AI agent won't really help with that. From an initial prompting point, like "hey, what do I do?", it can give you a general lay of the land, topically, based on semantics and syntax, of what you're kind of looking for. But the due diligence is still on the human being to bring the technology and the practical aspects of what they want to do, and how they want to do it, to the table. You can't really depend on AI to do that for you. AI is a great force multiplier; it can give you a lot of insights from a lot of different corpora of information. But I would say there's going to be increasing feedback from markets that AI isn't the panacea that solves everything.
It's going to be something that has either made their lives a lot easier because they kept it small, or made their lives more complicated because they went too big too fast, and now they have to bring in more human beings with technical backgrounds in AI engineering, data engineering, and data science to readdress things that were complicated by leveraging AI in a way that was unintended, or that does not produce consistent or rigorous results. So once again, still early days. I think we'll see a lot of feedback, especially moving from 2025 into 2026, because it seems like this year people are realizing, okay, LLMs do have a lot of limitations. They require a lot of fine-tuning, a lot of RAG and GraphRAG processes, to better ground these foundation models in your information and make the responses more fruitful. We're getting to a point where having the ocean isn't enough; you need a little tiny pond of your specific knowledge to help direct the model toward your solutions. So I hope that answered the question.
Lily Shores: I think it did. There were some technical words I didn't necessarily understand, though. So when you say "LLMs," what are you referring to?
Joe Blankenship: LLM, or large language model. So something like ChatGPT from OpenAI: that API gives you access to a large language model. Essentially, it's a next-word prediction model built on natural language processing algorithms. You give it a prompt, and it gives you the most human-sounding response based on its predictions. Now, there's a lot of variety in these things, and there are also open-source models, like Llama from Facebook, an open-source model you can download and build on yourself. There are ones that are privately owned, like ChatGPT, and I believe Claude from Anthropic is still closed-source, though I forget. But yeah, when people talk about AI nowadays, they're mostly talking about large language models.
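(An illustrative aside, not from the episode: the "next-word prediction" objective Joe describes can be sketched with a toy bigram model in Python. Real LLMs use neural networks over tokens rather than word counts, but the objective, predict the most likely continuation, is the same idea.)

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the
# most frequent follower. This is next-word prediction at its simplest.

corpus = ("the model predicts the next word . "
          "the model predicts the most likely word .").split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word: str) -> str:
    # Return the most common word observed after `word`.
    return follows[word].most_common(1)[0][0]

print(predict("the"))       # "model" (seen twice, vs. "next"/"most" once each)
print(predict("predicts"))  # "the"
```

An LLM replaces these counts with billions of learned parameters and operates on sub-word tokens, which is what lets it generalize far beyond its training sentences.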
Lily Shores: Okay. And then there was a point you brought up about having to bring in more people with technically trained backgrounds. But wouldn't that also open up more room for human error, and couldn't those mistakes feed back into how the AI is trained? How does that all connect, how bringing in more humans to train AI can also open it up to more human error within the AI?
Joe Blankenship: Absolutely, yeah, it's a vicious cycle. Think about old software paradigms from DevSecOps: people build software, the software presents a new security issue, they have to solve that security issue, they refactor and redeploy the software, and then hope they're not repeating the same mistake twice. Like I said, AI is just going to exacerbate that. If you don't know how to use it, or you don't constrain its use, then yes, you're going to open it up to more errors. Humans are going to come in, and that human injection will provide more solutions, but it will also create new problems. But that's the way technology has worked at large for many, many decades. So it's not really AI; it's really just the human in the loop. Going back to the initial question of what the major cybersecurity issues are: it's still going to be the human in the loop. And once again, decentralized technologies solve a little bit of that, but they also drop us into new paradigms of risk that we just did not account for when the previous technology didn't exist. Overall, though, it's a good process, and a good process to be engaged with, because if you solve old problems and get new problems, that's still progress. It's constructive progress, so long as you're leveraging technologies to constructive effect within organizations. Within the digital economy, leveraging AI for cybersecurity effect, it's good that you're finding new issues, new zero-days, new malware paradigms. It's only going to help people get better at security practices, and it's going to help improve cybersecurity as a community of practice.
Moving forward, once again, it's early days, and people are learning that these types of LLM and AI applications solve this but exacerbate these other things. So like I said, overall it's a good process. But yes, to your point, it is very, very tricky to bring in a human in the loop who may have no preexisting knowledge of a previous issue and then reproduces that issue in a way that was not intended the first time around. That will happen for sure.
Lily Shores: Yeah, it was just an interesting point I noticed as you were talking: even as they train the AI to be better, is the training itself free of errors? So I was very interested in that. And then a point you brought up earlier, about regulatory frameworks: what regulatory frameworks do you believe are necessary to secure the digital economy, especially as technologies like AI and blockchain continue to evolve?
Joe Blankenship:Oh, man, so I am one to say that probably we don't need more regulatory more fast. I guess better way to explain that would be, we tend to at least the history of computer technologies within like the past four or five decades. And regulation is that often governments rush to regulate stuff too quickly before they understand the implication of how the technology is actually working in the first place, whether it was CFA or SOPA and PIPA or, you know, GDPR. It's like governments rush to and rightfully so. They rush to protect individual data privacy, and overall, you know, effects of how they can actually gain access to these things make sure they are protected before they actually compromise, you know, the citizen's injuries. You know, overall day to day life, you know, a a hack in your your personal information could lead to a credit score issue that completely destroys your credit history for getting home loan, you know, stuff like that. And governments are concerned about that, because that affects revenue, affects taxation, affects a lot of stuff in terms of centralized regulatory systems like a state, you know, state government, national government. So when it comes to digital economy and kind of understanding how to regulate and what to regulate, I think it's, once again, it's not very clear how AI is going to affect that, how blockchain should be related. Once again, Blockchain is a decentralized technology, so there's a question of, even, should that be regulated in the first place? Especially since the goal of like blockchain technologies, like Bitcoin in the first place were to be completely removed, you know, from regulatory purview. You know, the chain was supposed to be the regulatory mechanism using consensus, consensus algorithms like proof of work, to engage and control how the canonical record was produced and how it was maintained. However, beyond a very basic scale of operation, that becomes very tricky, very quickly. 
You go from something like Bitcoin to Ethereum. Ethereum has a much broader application space with decentralized contracts, and those contracts have a broad number of applications. But to your point, once you bring human programmers into the loop producing those contracts, you open up a lot more risk for people who hold commodities or cryptocurrencies on those systems. It only becomes more complicated when you have nation-states like China that want to produce digital currency systems and leverage them within a global economy. And the global economy, in my opinion, when it comes to the definition of the digital economy, is almost 100% digital. Almost every currency you can think of has a digital footprint. Every economy, at least from a macroeconomics perspective, has a digital representation somewhere within a national or regional regulatory system. So when it comes to how we use AI and blockchain to control these things while they're also evolving themselves, those are very, very murky waters. I think it's just way too early to determine, outside of basic data privacy and interoperability, how we can constrain these systems in a way that is safe for the people using them but also constructive for the people trying to apply them to accelerate good things in the digital economy. Everybody wants faster access to commodities at a cheaper price, and every time we leverage technology, that's the goal. That's been the goal since the Industrial Revolution: we build a technology to bridge gaps, and it accelerates things, reduces costs, specializes labor. But at the same time, you also shift labor. You shift labor skill sets, which takes time to retrain.
That means there's a lowering of labor access in different types of jobs, which affects salaries, which affects overall market penetration and market capability for people who have those kinds of jobs. So there are weird cascades and ebbs and flows in the economy when you're talking about AI and blockchain, and how those things can either be regulated or can help regulation. Because I do think there's a lot in decentralized ledgers that can help regulatory systems. When it comes to keeping track of who voted on what, or what was voted on and what digital-economy impact it had, it would be nice to have a canonical record of that. I think it would help people understand what pitfalls not to repeat the second time around when implementing regulation at a grand scale. Add to that AI giving people more generalized, layman's access to the highly technical legalese inside regulatory systems; I think that's another benefit of AI in terms of regulation and how we can better engage with regulatory processes. The average citizen in most countries has real problems getting good enough access to the people who represent them in government, and understanding what those representatives are doing and how it affects them individually. That's something AI, in terms of access to regulation, can really help with, because AI can give you the one-paragraph, simple answer to things and help you become a more engaged citizen. But it can do it in a way that helps with data privacy and helps integrate your concerns into a broader conversation as well.
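The "canonical record of who voted on what" that Joe imagines can be sketched as a minimal append-only hash chain. Everything here is hypothetical (the `VoteLedger` class, field names, and entry format are invented for illustration), and a real system would need digital signatures and distributed consensus on top, but it shows why tampering with any past entry is detectable.

```python
import hashlib
import json

class VoteLedger:
    """Append-only, hash-chained record of votes: a toy 'canonical record'."""

    def __init__(self):
        self.entries = []

    def append(self, voter: str, measure: str, choice: str) -> str:
        """Add a vote; each entry's hash covers the previous entry's hash."""
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(
            {"voter": voter, "measure": measure, "choice": choice}, sort_keys=True
        )
        h = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"payload": payload, "prev": prev, "hash": h})
        return h

    def verify(self) -> bool:
        """Recompute every link; any edited payload or broken link fails."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            if hashlib.sha256((prev + e["payload"]).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = VoteLedger()
ledger.append("rep-1", "measure-42", "yea")
ledger.append("rep-2", "measure-42", "nay")
```

Because each entry commits to its predecessor, quietly rewriting an old vote breaks every hash after it; that is the property that makes such a ledger useful as an audit trail for regulatory processes.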
So there are those interstitial spaces where blockchain and AI also help, which aren't immediately clear when people talk about regulating these technologies and keeping them safe so people don't run too fast, too far with them and break a lot of things. But if you go beyond the digital economy, governments loom very large in regulation, because they're the main regulatory bodies for anything. They have a lot of legacy systems to deal with, and a lot of them are based heavily in physical reality: old infrastructure for electricity and water, all these systems with very antiquated technology connections. AI is going to expose those a bit more, because we've done a really good job of writing up what's wrong in regulated systems without fixing it. So when it comes to cybersecurity around blockchain and AI, to bring it back to the original cybersecurity focus: it will exacerbate hacks you don't expect, and there will be hacks based on older technologies. If we have dam systems and other important systems based on old 19th- and 20th-century technologies, and we've written about all the issues with these port systems, supply chain management systems, and vendor due-diligence systems, but we haven't actually put the technology fixes in place, then AI is going to expose the gaps we definitely know we have but have no solutions for. Which means we're in a race, essentially, with bad actors in the cybersecurity space to fix these things, because AI is really helping them more than it's helping us at this point. But there's also opportunity for us to find those things first, really address them, and actually produce good regulatory artifacts, laws and the like, that help us bridge those gaps and fix those issues.
So once again, it's one of those things where AI and decentralized technologies are really ramping up people's awareness of things we have neglected over time. And to me, that's hopeful, because it gives us a chance to really fix things.
Lily Shores: Well, that was very interesting. Do you have any final thoughts or points you wanted to touch on that we haven't gotten a chance to yet?
Joe Blankenship: I mean, at this point I'm beating a dead horse, but with emergent technologies there's a lot of potential there. They were created for a reason, and those reasons weren't malicious in any way, shape, or form; technologists build technologies for cool stuff. It's always the second stage, especially in cybersecurity, where cybersecurity professionals and hackers all see a technology as a system: how can I break it? If you look at it as just another system on top of systems, then you can start to see both sides of the argument, all sides really. I wouldn't call them false dichotomies, but that view is critical, especially in terms of the digital economy, because when things are digitized, they're moving at the speed of an electron through wires, cables, fiber optics. You really have to think about the implications of deploying these technologies before they're mature. But then again, the question is, what is maturity at this point? Is it the point where you can ask it questions, or where you can get the same answer back twice based on different prompts? It's still early days. I think people should be conservative in their estimations of how to leverage these things while not cutting off their potential to be used in a broad array of solution sets. And that means constraining what we do in regulation to allow cybersecurity professionals to really engage with these things at a practical, fast level, where they can address zero-days and the emergent issues with LLMs and generative AI, to help better secure the digital economy and businesses writ large: both how humans in the loop may be affected by these things, and how infrastructure itself can be better secured against these kinds of activities.
And like I said, decentralized technologies are a good part of that. Data privacy and interoperability are going to be continually addressed as we go along, year to year, especially into the 2030s. If these models continue to gain speed and applicability, they'll only become more and more embedded in your day-to-day life. The Siri of yesterday is going to be replaced by a GenAI agent that is much more tailored to your experience on your smartphone than anything from one year ago or five years ago. And it's going to be great, because it will give you access to all your stuff, it will know exactly what you do and how you do it, and it can give you better advice based on what you've done. But it will also be a repository of all your individual knowledge, which could be copied to someone else's system to replicate a digital version of you. So moving forward, we need to be very cognizant of how AI is embedded in our day-to-day lives: how it can be used to enable the best parts of them, while also constraining how malicious actors could leverage it for something negative, and how we can address the regulatory aspects of that in a way that doesn't cut you off from using the technology in the first place.
Lily Shores: Okay, well, that is a great point to end on, and thank you again so much for joining us today.
Joe Blankenship: Yeah, absolute pleasure. Thank you.
Glenn Beckmann: There you have it, a conversation between Lily Shores of our GNSI Future Strategist program and Joe Blankenship, co-founder and Chief Data Officer at Certus Core, a Tampa-based startup. A special thanks to both of our guests today, and we're really looking forward to hearing more from Joe at the upcoming Cyber Frontier Summit, a student-led conference on the Tampa campus of USF on April 15. There's no cost to attend, but registration is required; you can find more info in the show notes. Next week on At the Boundary, our special guest will be Dr. Zachary Selden. He's currently an associate professor at the University of Florida. Previously, he was the director of the Defense and Security Committee of the NATO Parliamentary Assembly and the author of a book called Economic Sanctions as Instruments of American Foreign Policy. That book will be the primary focus of our conversation with him next week on the podcast. Thanks for listening today. If you like the podcast, please share it with your colleagues and your network. You can follow GNSI on our LinkedIn and X accounts, @USF_GNSI, and check out our website as well, usf.edu/gnsi, or subscribe to our monthly newsletter. That's going to wrap up this episode of At the Boundary. Each new episode will feature global and national security issues we've found to be worthy of attention and discussion. I'm Glenn Beckmann. Thanks for listening today. We'll see you next week at the boundary.