So What? It’s 5:05! Edition: Beyond the Headlines of AI, Election Disinformation and SpyGPT
Episode 76 • 13th December 2023 • Tech Transforms, sponsored by Dynatrace • Carolyn Ford
Duration: 00:35:25


Shownotes

On this special So What? episode we go deeper into some of the top stories being covered on the It’s 5:05! podcast with It’s 5:05! contributing journalist, Tracy Bannon. How are cybersecurity stress tests battling misinformation and aiding in election security? Is AI contributing to election disinformation? How is the CIA using SpyGPT? Come along as Carolyn and Tracy go beyond the headlines to address all these questions and more.

Key Topics

  • 04:20 Proactive approach needed for software voting security.
  • 09:12 Deepfake technology can replicate voices and videos.
  • 12:38 Politics focuses on presidential level, ignores others.
  • 15:53 Generative AI creates new content from data.
  • 17:19 New tool aids intelligence agencies in processing data.
  • 20:13 Bill Gates discusses future AI agents on LinkedIn.
  • 25:24 Navigating biases in AI towards democratic values.
  • 29:13 CISA promotes continuous learning and holistic approach.
  • 30:51 Demystifying and making security approachable for all.
  • 33:33 Open source, cybersecurity, diverse professional perspectives discussed.

Importance of Cybersecurity and Responsible AI Use

Embracing Cybersecurity Measures and Privacy Protections

In their conversation, Carolyn and Tracy discuss how imperative it is for both individuals and organizations to embrace robust cybersecurity measures. In an era where data breaches and cyberattacks are on the rise, implementing effective security protocols is not just a matter of regulatory compliance, but also of safeguarding the privacy and personal information of users. Tracy emphasizes the continuous need for cybersecurity vigilance and education, highlighting that it is a shared responsibility. Carolyn suggests that by making use of resources like the CISA cybersecurity workbook, individuals and businesses can receive guidance on developing a more secure online presence, which is crucial in a digital ecosystem where even the smallest vulnerability can be exploited.

Addressing Biases in AI to Align With Public Interest and Democratic Values

Tracy expresses concerns over the biases that can be present in AI systems, which can stem from those who design them or the data they are trained on. Such biases have the potential to affect a vast array of the decisions and analyses AI makes, leading to outcomes that may not align with the broad spectrum of public interest and democratic values. An important aspect of responsible AI use is ensuring that these technological systems are created and used in a way that is fair and equitable. This means actively working to identify and correct biases, ensuring transparency in AI operations, and constantly checking that AI applications serve the public good without infringing upon civil liberties or creating divisions within society.

Demystifying Cybersecurity: "We need that public understanding, building this culture of security for everybody, by everybody. It becomes a shared thing, which should be something that we're teaching our children as soon as they are old enough to touch a device." — Tracy Bannon

The Proliferation of Personal AI Use in Everyday Tasks

The conversation shifts toward the notion of AI agents handling tasks on behalf of humans, a concept both cutting-edge and rife with potential pitfalls. Carolyn and Tracy discuss both the ease and the risks of entrusting personal tasks to AI. On one hand, these AI agents can simplify life by managing mundane tasks, optimizing time and resources, and even curating experiences based on an in-depth understanding of personal preferences. Yet Tracy questions the trade-off, considering the amount of personal data that must be shared for AI to become truly "helpful." This gives rise to larger questions about the surrender of personal agency in decision-making, the erosion of privacy, and the ever-present threat of such tools being exploited for nefarious purposes.

CISA's Cybersecurity Workbook

Enhancing Accessibility with AI Use: Summarizing Complex Documents through Generative Tools

Tracy introduces the concept of leveraging generative AI tools such as ChatGPT to summarize lengthy documents. This approach provides a way to digest complex material quickly and efficiently. For instance, users can feed a PDF or a website link into ChatGPT and request a summary, which the tool produces by analyzing the text and presenting the key points. Tracy presents this method as a step toward making dense content, like government reports or lengthy executive orders, more accessible. She then transitions to discussing CISA's cybersecurity workbook, illustrating a movement toward disseminating important information in a format that a broader audience, not just tech experts, can understand and apply. Tracy appreciates the effort by CISA to create resources that meet readers at every level of technical knowledge.

Comprehensive Guidance for Security Measures

The comprehensive guide provided by CISA, Tracy notes, is robust in offering detailed strategies for planning and implementing cybersecurity measures. The workbook does not shy away from diving deep into the assessment of potential cyber risks, and it details leading practices that organizations can adopt. Planning for incident response is a highlighted area, acknowledging that security breaches are a matter not of if but when. The workbook thus serves as an invaluable reference for initiating proactive steps to fortify against cyber threats. This level of comprehensive guidance serves not only as a tool for implementing robust security measures but also as a learning resource that promotes a widespread understanding of cybersecurity best practices.

Government's AI Use

Potential Introduction of Generative AI by the CIA

Tracy and Carolyn discuss the CIA's plans to potentially introduce generative AI through a program dubbed "SpyGPT." The idea behind this integration is to enable the parsing and understanding of extensive open-source data more efficiently.

Generative AI, similar in concept to models like ChatGPT, could revolutionize how intelligence agencies handle the vast amounts of data they collect. If implemented, this AI would be able to generate new content based on massive datasets, providing insights that could be invaluable for intelligence processing. Carolyn draws comparisons to traditional methods of intelligence gathering, noting that such technological advancements could have helped in past events had they been available. In response, Tracy points to the historic struggle of intelligence agencies to rapidly sort through surveillance information, a challenge that tools like SpyGPT could mitigate.
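As a toy illustration of what "generate new content based on massive datasets" means, the sketch below trains a bigram table on a tiny corpus and samples from it. This is not how SpyGPT or ChatGPT is implemented (those use large neural networks over far richer context); the corpus and function names are purely illustrative, and the point is only the generate-from-learned-statistics loop.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    # Record which words follow which -- a crude stand-in for the
    # word co-occurrence probabilities large language models learn.
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    # Walk the table, sampling a plausible next word at each step,
    # producing text the training corpus never contained verbatim.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the agency collects data and the agency analyzes data quickly"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Real generative models replace this lookup table with transformers conditioned on long context windows, but the conceptual move is the same: learn statistics from a corpus, then emit net-new sequences consistent with them.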

The Double-Edged Sword of AI Use in Predictive Analysis

A tool like SpyGPT has the potential to rapidly identify patterns and connections within data. This could lead to quicker and more accurate intelligence assessments. Carolyn points to the use of crowdsourcing information during the Boston Marathon bombing as an example of how rapid data correlation and analysis can be critical in national security efforts. The ability to predict and possibly prevent future threats could be significantly enhanced.

The Dangers of Internet Era Propaganda: "I can take any idea, and I can generate vast amounts of text in all kinds of tones, from all different kinds of perspectives, and I can make them pretty ideal for Internet era propaganda." — Tracy Bannon

However, as Tracy notes, the power of such technology is a double-edged sword, raising concerns about privacy, potential misuse and ethical implications. The conversation raises the specter of a "Minority Report"-esque future, where predictive technology verges on the invasive. Both Tracy and Carolyn agree on the tremendous responsibilities that come with implementing generative AI where it intersects with privacy, civil liberties and security.

Election Security

The Critical Role of AI Use in Election Security Stress Testing

Stress testing in the context of election security revolves around rigorously probing the voting system to uncover flaws or weaknesses. The process requires collaboration among various stakeholders, including the manufacturers of voting machines, software developers and cybersecurity experts. Tracy emphasizes the crucial role of these simulated attacks and real-world scenarios in revealing potential points of exploitation within the system. Identifying these vulnerabilities well before an election gives officials the necessary time to address and reinforce weak spots, ensuring the reliability and resilience of the electoral process against cyber threats.

AI Use in Unveiling Election System Vulnerabilities

Tracy discusses the necessity of not just identifying but also openly revealing discovered vulnerabilities within election systems as a means to foster trust among the populace. Transparency about the security measures taken, along with clear communication of the vulnerabilities found, when managed properly, instills a higher sense of confidence in the electoral system's integrity. This approach also plays a pivotal role in countering misinformation: by proactively conveying the true state of system security and the efforts being taken to remedy issues, it can help dismantle unfounded claims and skepticism about the election infrastructure from various sectors of society.

Exploring the Impact of AI Use in Deepfake Technology and Artificial Persona Creation

Capabilities of Deepfake Technology and AI-Language Models

Recent advancements in AI and deepfake technology have brought breathtaking capabilities, primarily the power to manipulate audio and video content with astounding realism. Tracy emphasizes the profound implications of this tech, specifically pointing to language models such as "Vall-E," which can simulate a person's voice from just a few seconds of audio input.

The Rise of Deepfakes: "Imagine what's gonna happen with the deepfake. Take a right? I can take your video. I can take your voice." — Tracy Bannon

This technology uses sophisticated algorithms to detect nuances in speech patterns, allowing it to generate new audio that sounds like the targeted individual, effectively putting words into their mouths that they never actually said. This ability extends beyond simple mimicry: it enables audio deepfakes that can be nearly indistinguishable from genuine recordings. Such capabilities raise significant concerns about the reliability of auditory evidence and the ease with which public opinion could be manipulated.

Creation of Artificial Personas Using AI Tools

Tracy brings to light the increasingly effortless creation of false personas through AI tools such as ChatGPT, an AI language model capable of generating human-like text. These tools can fabricate compelling narratives, mimic specific writing styles, and create non-existent but believable social media profiles or entire personas. Tracy points out how these synthetic entities can be programmed to deliver credible-sounding propaganda, influence political campaigns, or sow discord by spamming internet platforms with targeted misinformation. The creation of these artificial personas signifies a dramatic shift in how information can be disseminated, posing risks of eroding trust in digital communication and complicating the battle against fake news.

About Our Guest

Tracy Bannon is a Senior Principal with MITRE Labs' Advanced Software Innovation Center and a contributor to the It’s 5:05! podcast. She is an accomplished software architect, engineer, and DevSecOps advisor, having worked across commercial and government clients. She thrives on understanding complex problems and working to deliver mission/business value at speed. She’s passionate about mentoring and training and enjoys community and knowledge-building with teams, clients, and the next generation. Tracy is a long-time advocate for diversity in technology, helping to narrow the gaps as a mentor, sponsor, volunteer, and friend.

Episode Links

Transcripts

Carolyn Ford [:

Thanks for joining us on Tech Transforms. I'm Carolyn Ford here with Tracy Bannon. Hey, Tracy. How are you?

Tracy Bannon [:

Hola. I am just right.

Carolyn Ford [:

I love that. Well, today, we're doing something a little bit new. So on So What, we like to look at, you know, what's going on in cyber, what's going on in tech and the government, and we are going to leverage our friends at the 5:05 podcast, which you are involved in. So the 5:05 podcast is a daily podcast. It's usually, like, 10 minutes. It's a quick hit of what's going on around the world in tech and specifically cybersecurity. So we're gonna look at those highlights and dig into them because it's just quick, it's kinda just headlines.

Carolyn Ford [:

formation. As we approach the:

Carolyn Ford [:

She's based in Washington DC. She recently explored how cybersecurity stress tests may help battle misinformation and obviously make the elections more secure. So I want you to talk a little bit more about how that is making the elections more secure. I mean, I think that's a little bit obvious, the stress testing, but how it's helping with disinformation too.

Tracy Bannon [:

So you've gotta take a step back and just make sure everybody understands what a stress test is in this case. And so it's getting together the voting machine companies, the folks writing the software, and cybersecurity experts, and they're running these tests, applying stress to the system, seeing different attack vectors, different scenarios where they could try and figure out where their vulnerabilities are, because they wanna address those vulnerabilities right now. Mhmm. Why it matters right now is we have to get to a point where we have more transparency and more trust. You've got gaggles of people. I think everybody understands now that computers are fallible. They're created by humans.

Tracy Bannon [:

They're reading lines of directions that we've given them. They're only so smart. Even generative AI is only so smart. Right? So having open discussion about the security measures and the vulnerabilities, saying what you found a year ahead of an election, managing the information to prevent any other exploitations, like, that is going to start to build some sense of trust. Right? Trust takes a lot. You can't just say, hello, I'm here, I've got some cool test results.

Tracy Bannon [:

Will you trust me? But we can get that stuff out into the open. And there's a lot of collaboration that needs to happen across multiple external cybersecurity experts. It can't just be the voting machine company that is running all of their tests and saying, we promise you that we have some really, really good test results here. No. No. Bring in the experts that are known to industry, those that have all different kinds of responsibilities and different kinds of talents, and then start to, in a coordinated way, disclose the vulnerabilities. Mhmm.

Tracy Bannon [:

So we have to be proactive about it. I couldn't tell you exactly what makes up the different software that's being leveraged for voting. I have some ideas on how I could nefariously try to get into it, and I have some ideas on how I think it might work. There are people that focus on that. There are other FFRDCs, even MITRE, that are advising the government on how to address vulnerabilities, how to proactively involve the public in disclosing when there are software vulnerabilities. It's more of a common practice in industry. Right? We see this vulnerability come up. Well, what's the consortium across the voting machine gang? Right? So it's important. We wanna find the issues, but think about all the things that finding the issues in a somewhat transparent way brings.

Tracy Bannon [:

We're not gonna hand it all to the Chinese for them to understand all of our vulnerabilities, but we do have to get to a bit more open disclosure. We do need to get it out there. So it helps us get away from misinformation because, without sharing those results, I can today say, you're not cybersecure, prove to me you're cybersecure. Well, this is the proving out. Right? So that is part of tearing down misinformation, simply by providing correct and honest and

Carolyn Ford [:

open information. And importantly, you mentioned it's multiple groups doing this testing. It's not one group. It's not bipartisan. It's multiple groups, so we can have, to your point, more trust in what's happening and get ahead of anybody saying anything's rigged. Mhmm. So Hillary also talked about AI election disinformation and explored whether or not AI can rig the system. I just watched the new Mission Impossible.

Tracy Bannon [:

We need to take away your, we need to take away your streaming, you know, your streaming sources. Just saying. Oh, but, you know, the direction that you're going, I love it, because we are now at the point where sci-fi is reality. We need to think about that. Sci-fi is reality. Think about I, Robot from many, many, many years ago. Think about Minority Report. What was that? Two decades ago so far.

Tracy Bannon [:

Where you're predicting. I can predict if Carolyn Ford is going to misbehave, and therefore, I'm gonna arrest her before she misbehaves. I say that tongue in cheek, but imagine all the things that I can do with generative AI to help bring an election. And that's just the simplest thing. You've heard the term script kiddie. Right? It's that 16-year-old who's at home. They've got the computer.

Tracy Bannon [:

They've got all the time in the world. They might be latchkey. They're just out there, and they figure things out, but it takes them a lot of effort.

Carolyn Ford [:

Yeah. It's War Games.

Tracy Bannon [:

Yes. It is. It is. Thank you, Matthew Broderick. But we just dated ourselves. Wanna put that out there, by the way. That's a classic. Okay.

Tracy Bannon [:

Okay. Sure.

Carolyn Ford [:

Doesn't date us.

Tracy Bannon [:

Okay. I'll go with that. I will go with that. So one of the things that we have been alerting people to is that generative AI in specific sounds like it's correct. Right? It speaks with credibility because it is so logical. The sentence structure is beautiful. And now with the ability to write these absolutely incredible prompts, you can engineer the prompt.

Tracy Bannon [:

I can feed it samples of what Carolyn Ford writes like and sounds like, and then I can say, answer this question as if you're Carolyn Ford. So think about how easy it is to generate negative propaganda. I can take any idea, and I can generate vast amounts of text in all kinds of tones, from all different kinds of perspectives, and I can make them pretty ideal for Internet era propaganda. I mean, that's, I think, the first biggest one, and then let's disseminate that. Let's throw that into not just a blog post. Let's put that on Twitter slash X. Let's put it on Discord.

Tracy Bannon [:

Let's put it on Reddit. Let's put it on LinkedIn. Let's put it on Facebook. Let's put it in every outlet. Let's start to then feed it into the mainline media. So now I've got social media disinformation. I've got it being propagated out because we've seen that sometimes, in an effort to get information as quickly as possible, people are not checking the sources as adequately as they should. Right? And that's just the start of it. You and I talked a year ago about this ability to use deepfakes.

Tracy Bannon [:

Mhmm. Imagine what's gonna happen with the deepfake. Right? I can take your video. I can take your voice. I think I only need 2 minutes. I'll look it up. Vall-E is a language model that can take snippets of your voice, just a couple of seconds of your voice. Because it's been so well trained with different emotions and phonemes, I can have you reading a storybook to my kids if I wanted to, read by Carolyn Ford. What can we do in the media with that? Remember, we used to think that anything that we read was correct, and then we realized that it wasn't, just because we read it.

Tracy Bannon [:

We had to hear it. And then we realized, right, with Watergate and other things, that what we hear may not always be accurate. Now I can even fake whether I'm looking at the camera. NVIDIA has an eye-contact algorithm so that I can look away and it'll still look like I'm looking right at you. I could be doing that to you right now.

Carolyn Ford [:

Because that's not cool, Tracy.

Tracy Bannon [:

No. I don't. I don't do that. You see my eyes going all over the place, looking down at my papers. But I point that out because all of these amazing capabilities exist to get after those deepfakes. So imagine being able to create personas. One of the most followed people on TikTok and Instagram is actually a bot. It's not even a human.

Tracy Bannon [:

I'll have to look up the name for you. I went and checked it out recently. Somebody sent it to me, and I was like, no.

Carolyn Ford [:

And what? Why? Because of how sensational the information from it is?

Tracy Bannon [:

No. It's just interesting. It's just trendy, informational. It's right between Gen Z and the young millennials, right in that age category. Yeah? Yeah. Wow. Just so much that we can do with AI, and that's just using it to generate.

Tracy Bannon [:

Right. Just to push stuff out. I'm not even talking about the fact that I can ask it. I can ask ChatGPT or worse. I could go to WormGPT, which is ChatGPT with all of the guardrails removed. It's a real thing. It's out there. It's not illegal to use. The bad guys are using it, and the good guys are using it.

Tracy Bannon [:

I can say to any of these tools, create a social media campaign, and give it all the parameters of all that I want to do. So I don't even have to figure out how to do a social media campaign, because I just need to ask, and it'll do it for me. So, yeah, our brains should be exploding. We now need to stop. We used to listen. Right? You could read it. You could listen to it.

Tracy Bannon [:

You could see it. I believe we're gonna see a time in the not too distant future where we're gonna actually need to be, you know, the old school stumping. Right? You know that term? We're gonna have to actually go to the local green, to the local park. We're gonna have to go to the coliseum. We're gonna have to go places where we hear the human live for us to believe that it's real. I for spooky dooky.

Carolyn Ford [:

I was talking to my friends about this, and they're like, well, what's the solution? Like, how do we know if it's real or not? And I'm like, you go meet them in a room in real life. That's how you know.

Tracy Bannon [:

Exactly.

Carolyn Ford [:

The problem is it doesn't scale.

Tracy Bannon [:

It does, or doesn't it? I am giving some of my own true personal opinion. Our politics have gotten so focused at the presidential level only. Mhmm. It's red versus blue. We've ignored all other parties. We just say if you're not one of those two, you're somehow in this big club of independent, which is everything from communist to libertarian. Like, announced party? Yeah. Imagine if we brought it back home. I can go to my local borough council. I can go listen to my local politicians.

Tracy Bannon [:

My local politicians have more impact on my life day in and day out than anybody at the federal level. Imagine if we started to pay attention there, and then maybe we do have the state governor or the state electors, you know, actually canvassing the state. How crazy is that? That we get them out of Washington, have them do their jobs back home? Yeah. It's true that maybe asking the president to be available to everybody is not viable. But do you really think it's so unrealistic that we shouldn't try?

Carolyn Ford [:

No. And when I say it doesn't scale, I still believe there's some truth to that. But more truth for me is, just to say it out loud: I'm not gonna go.

Tracy Bannon [:

That's all of us.

Carolyn Ford [:

That's what I think too. I think I'm a really good micro study of all of us.

Tracy Bannon [:

Well, I am fat and lazy. We had the conversation. What's the point where I would go out of my house? Now when I go to vote, I walk 4 blocks. I walk 4 blocks to vote. I live in a tiny little borough where my city hall is a borough hall, and it's a block over. So I have access that other people probably don't have, and still, it's easy to get lazy. At what point will people be less willing to just consume what's coming at them? I think one of the things that scares me is, we're talking about sources of disinformation.

Tracy Bannon [:

How do we help the whole population somehow wake up? Mhmm. Like, don't just Instagram your evening away. No offense to Instagram. Don't TikTok your days away. How do we get people to have more engaging real-time conversation? Right? How do we do that? That's what I think is gonna be important.

Carolyn Ford [:

I agree. And I'm gonna move us to the next topic. So staying with the theme of AI here.

Tracy Bannon [:

Mhmm.

Carolyn Ford [:

Where are we going with generative AI in our agencies? So I've actually heard that some agencies aren't even going to allow it, but Katie Craig reported, and I mentioned it in the intro, SpyGPT. Yeah. SpyGPT. So the CIA is gonna introduce this tool. Is it really generative AI like ChatGPT? I mean, what is it? Oh, so we've gotta look at

Tracy Bannon [:

I refer to the sexy trifecta. There's AI that we've known about for a while. There's ML. And there's generative AI, which is a little bit different because it takes from a large corpus and generates net new. So if you're not familiar, the way that these large language models like ChatGPT work is that they have calculated mathematical probabilities that words appear with each other. And we're not talking about a simple star chart with, like, one connection. We're talking about hundreds, whenever they talk about all these factors and all of these embeddings and these other things. It truly does take all of that information and reorganize it and spit it back out, which is a little bit different than some of the predictive things that other analytics can do.

Tracy Bannon [:

So when we talk about the government using AI, the government using ML, and the government using generative AI, we kinda have to delineate which one we're talking about right now, because they've been using AI for a long time. They've been using machine learning for a long time. They use it in their help centers. Right? You get online and you're talking to a chatbot. That little assistant that pops up and asks you for your information is generally not a human. It's generally automated. They have AI embedded in them.

Carolyn Ford [:

So that's why I'm curious. Is this thing that the CIA is planning to introduce, is it really generative AI?

Tracy Bannon [:

As they have spoken about it, as they've written about it, the tool is being described with a lot of parallels to ChatGPT, and it's kind of a leap forward in how the intelligence agencies process the vast amount of open source data. I'm gonna use the term open source data: all the data that's out there. And if you think back over events that have happened in time, if you think way back to 9/11, there were reports that we were not able to comb through all of the surveillance information that we had rapidly enough to get far enough ahead, right, on these trajectories. This kind of technology allows us to start to do that. And then the major function of a tool like the one the CIA put together is going to be to interpret those massive corpuses, and then they can start

Tracy Bannon [:

to ask questions about it. But, you know, think about personal information, your location information. It would help them when they are gathering intelligence. Right? It'll help national security efforts, but it also makes us nervous. Right?

Carolyn Ford [:

Well, it makes me think and it makes me think about the Boston Marathon bombing.

Tracy Bannon [:

Mhmm.

Carolyn Ford [:

Like, they found those guys really through crowdsourcing. Right?

Tracy Bannon [:

Mhmm.

Carolyn Ford [:

And, like, I wonder. I don't know.

Tracy Bannon [:

Well, imagine how much more quickly it could have happened, or better. Or

Carolyn Ford [:

or better.

Tracy Bannon [:

Predictive. Right.

Carolyn Ford [:

So that's so scary now. We just went Minority Report on us.

Tracy Bannon [:

But we did.

Carolyn Ford [:

If all that data had been correlated, like, they knew all these things. Where the backpack was bought, where it was left, all of that. And if they were watching, it just, oh my gosh. It's so scary.

Tracy Bannon [:

I have talked about this before, and it's important for us to realize that this is an amazing and terrifying technology in the same breath. We can't go all in without knowing that going all in could go bad. And so this is unprecedented. Unprecedented. When you think about the number of very important scientists, data scientists, all of those who are out there in tech, they're all saying, Sam Altman is saying, right, this could be really dangerous. And it's not just the government saying, well, we have to regulate cell phones. No.

Tracy Bannon [:

No. When these folks say that, unbridled, it could have the potential to destroy humanity. I don't know exactly what that means, but even the little things that we're thinking about right now should scare us all.

Tracy Bannon [:

Bill Gates just wrote a really great piece. He posted it on LinkedIn, and what he talks about is the future. So in the past, we've had these little bots, and bots can only kinda do what you tell them to do. They're not overly smart, but they were smarter. Like, he brings up, remember Clippy back in the day, that drove us crazy with Microsoft Word? Well, it was a bot. What Gates talks about there is the future when we have agents. And an agent is essentially your AI helper.

Tracy Bannon [:

Your AI smarts, all of those characteristics tied together. And if you want your life to be consistently and continually easier on the mundane things, you will delegate more and more to your agent, meaning that you'll allow your agent to have more and more and more information about you.

Carolyn Ford [:

We start outsourcing our brains, though, Tracy.

Tracy Bannon [:

We do. We do. But at first, it'd be nice if I could just, instead of asking Siri or asking Alexa, if I could just ask my agent. I'll just call my agent Race, Race Bannon. That's a reference, by the way. You did not. Yes. I did.

Tracy Bannon [:

I just made a reference.

Carolyn Ford [:

I love that you just referenced that. Oh, I hope some of our listeners got that one.

Tracy Bannon [:

Race Bannon. I love it. So when I ask Race to come up with a vacation agenda, and I'm thinking about this one particular area, by allowing it to be plugged into all of my purchase data. All

Carolyn Ford [:

of your own personal preferences.

Tracy Bannon [:

All my personal preferences. It's everything that I buy, everything that I eat. It monitors my Fitbit or whatever on my Apple Watch. It knows all of those things. It knows I wanna go for hikes. It knows what I want. I don't have to answer any questions. Right? It's like I'm asking my husband.

Carolyn Ford [:

Do you know what? I want that just to curate my music list. Yes. Like, give me the perfect music for right now. Know what my mood is. What should my music be?

Tracy Bannon [:

Right. So, you know, how much are we willing to give away, though?

Carolyn Ford [:

Right.

Tracy Bannon [:

And my son is on the opposite side of this. My son is an artist. He's a furniture designer and works with his hands. He's tech adjacent, knows how to write software. And he's like, no. I apologize. I'm gonna be part of the resistance. I will not.

Tracy Bannon [:

It'll be tough for me. I will have to read books made of paper at some point.

Carolyn Ford [:

I feel like your son and my son must be talking, because my son is the same way. He's like, mom, you gotta stop. Don't use any of this. I mean, he's very anti, actually.

Tracy Bannon [:

Oh, I think there's some beauty to it, but think about some of the responsibilities, and this is where the executive order came out recently. Mhmm. I think it came out on the 30th of October. The government is encouraging us. They know that we need to get after AI, and the government sector has to have this.

Tracy Bannon [:

We have to have defense, the intelligence agencies. They all need to have technological superiority. So we have no choice. You need to hear this. We have no choice but to compete from the government and the defense and the intelligence perspective. We don't have a choice because everybody else is doing the same thing.

Carolyn Ford [:

That's right.

Tracy Bannon [:

So we have to embrace it on some level, even if it's to shut it out, if that's the choice. I mean, shut it out is the wrong term, because it's not possible. I guess we can fishbowl ourselves in Montana.

Tracy Bannon [:

I'm not sure, but we're gonna have to look after, you know, cybersecurity as part of that executive order too, and making sure that we are paying attention to data privacy. I think that the EU is probably the strongest, you know, advocate for data privacy for individuals. A little bit stronger than the US, a little bit. A Bannon opinion, not an opinion of anybody else. That executive order also gets after ethical and responsible use.

Tracy Bannon [:

We need to compete against the enemies. We have to have technological superiority. We also have to make sure we're looking out for cybersecurity threats. We have to make sure they're not coming after Carolyn's data, my data, government data. We have to think about biases. I hate talking about bias. I know. You know the bias, though, in software? Bias in these models doesn't just mean, like, a gender bias.

Tracy Bannon [:

Like, those are the easy biases. Right? Gender, race, age, those are easy biases. It's more difficult when you get after other types of biases that are brought in because of who is creating the corpus of data. So, you know, there are language models that are biased toward the US, because they're being trained on US data. Right? So there are all kinds of ideological biases. Right?

Carolyn Ford [:

Right. Biases I don't even know about. Ideologically biased.

Tracy Bannon [:

Well, right. Right. And so the term bias is getting all mucky too, because there are always going to be biases. But how do we make sure that those biases are in check with the public interest and with democratic values? Not the Democratic Party, but the overall democratic values of our republic. Right? So there's a lot in the executive order that came out, and I think it does a pretty good job. All kinds of standards are put in there. Privacy protections and civil rights around how AI is used. I mean, it's pretty impressive.

Tracy Bannon [:

I have not been super excited about many of the memorandums that have come out from the government around tech. I'm a big fan of CISA, and I actually kinda like the executive order that just came out in October. It's a step in the right direction.

Carolyn Ford [:

You know, I've heard the same things and the same points. I have not read the 111 pages, but my friends like you, who I trust. Oh, here we go. See, I'm outsourcing my brain. But I'm a fan of it too, based on what I know from my friends like you.

Tracy Bannon [:

But that's an important piece. Like, you could take it. ChatGPT has now been updated, so you can provide it with a PDF or a URL, and you can say, read this document to me. Right. That's new, right, just in the last few weeks. Oh, that's a good point. So you don't have to read 113 pages. You can have it summarized for you.

Tracy Bannon [:

But then the important part is, you need to have these conversations. We need to have somebody we trust, like you and me having the conversation, even though we're hundreds of miles apart.

Carolyn Ford [:

Yeah. And if I take that PDF and I feed it into ChatGPT, and I say, summarize it for me, I'm getting biases. It's choosing what points to call out for me. Let's shift gears and close out today's conversation with something that you talked about on 5:05. You've mentioned before that you're a big fan of CISA. And, again, admittedly, I am too, because of friends like you.

Tracy Bannon [:

And Alan.

Carolyn Ford [:

And Alan, and I mean a lot of leaders that I've talked to and really respect continually bring up CISA and the good work CISA is doing. So you just talked about, on 5:05, CISA's cybersecurity workbook, which is a comprehensive guide for planning and implementing effective security measures. Talk to us about the workbook and what's so great about it. Summarize it for me.

Tracy Bannon [:

Well, I need to make those little typing sounds that ChatGPT makes when it's on your phone. It uses haptics. So it goes into a number of different areas, key components, providing this really crucial information, and it's consumable by individuals. Individuals can read it and understand it, and so can organizations. That's one of the things: they're democratizing it.

Tracy Bannon [:

It's such a broad approach to this. It's not a super deep dive that only the scientist weenies locked in the top-secret labs can understand. It, you know, talks you through assessing potential cyber risk. Right? How do you do that? It's a foundational step. It's gonna walk you through leading practices. How do you plan for incident response? Like, bad things are gonna happen. How do you get after that? So it provides all of this really good information. It emphasizes training.

Tracy Bannon [:

It's one of the first government organizations that really pounds home that you've gotta have this continuous learning going on. So it really is holistic, and CISA does this so well. They provide you with the what and the why, and they jump-start you with some of the process pieces, but you have to figure out the nitty-gritty how for yourself. So that's why I'm one of the fans of CISA: they're not trying to mash a solution onto you that isn't going to fit with your context, with what you're working on.

Carolyn Ford [:

It gives you some guidelines, some framework, and then you gotta figure out what works for you.

Tracy Bannon [:

Exactly. Exactly.

Carolyn Ford [:

Or you can just ask ChatGPT.

Tracy Bannon [:

You could. Or you could ask Bard, or you could ask Perplexity.ai, or any of them. I forget, but there were, like, three that were announced this weekend. So, yeah. Pick your favorite, whatever your generative muse is. Yes. But, you know, they went with a wide audience, and I think that was really important. Instead of purely technical, there's an inclusivity that has to happen with cybersecurity.

Tracy Bannon [:

It's not just an IT professional's concern in this interconnected world. It's everybody.

Carolyn Ford [:

It's everybody. Everybody needs to be into cybersecurity.

Tracy Bannon [:

Even if you are a company of one, you need to be concerned. If you

Carolyn Ford [:

are a mother, if you I mean, everybody, every citizen

Tracy Bannon [:

If you're a human. Yes.

Carolyn Ford [:

We're in this together. There you go. If you're a human. Yes. There we go. See, that's my bias, Tracy, because I'm a mother. So that's immediately where, you know,

Tracy Bannon [:

Yeah.

Carolyn Ford [:

My priority goes.

Tracy Bannon [:

Well, I think you're right. And it gets after, you know, another thing: the technique that they've been using with this is they're kinda demystifying it. They're making all of the content accessible. So you don't have to be the uber tech weenie to understand it. It is very approachable, and we need that right now. We need that public understanding, building this culture of security by everybody, for everybody. It becomes a shared thing, which should be something that we're teaching our children from early on, as soon as they are old enough to touch a device. Those little ones who are so adept with their teeny, tiny little fingers on the iPhones or the iPad minis, we should be teaching them security at that point. And all of the guidance that they provide is adaptable.

Tracy Bannon [:

So, you know, I look at it from an architectural perspective. Instead of being one-size-fits-all, they did a lot of smart things. I like those trade-off discussions. It's not overly specialized, but it's specialized enough. It allows you to take it further with what you need it to be. There's a lot of emphasis on compliance, but not an overemphasis, because what's compliance gonna be these days? Tell me what compliance is today, and I'll argue with you tomorrow that it'll be something slightly different.

Tracy Bannon [:

So I think it was really valuable, and I wrote about it from that perspective because I was really excited. I'm really excited when something comes out that we can all get after, and I can simply share it in two minutes or three minutes, and that's what 5:05 is all about. There's a global pool of journalists. There are over 20 of us, and we're all volunteers. Every one of the journalists contributes at least once a week, and some are daily writers. It comes out Monday through Friday at 5:05 Eastern.

Tracy Bannon [:

And it is meant to be kind of whack-a-mole. What do you need to know? It's a potluck.

Carolyn Ford [:

Yeah. I love it. It's 10 minutes. We don't say 10 minutes and then go an hour. It's really 10 minutes.

Carolyn Ford [:

It's just that headline hit. And for this episode, you know, I went back and I said, alright, Tracy, I want you to unpack these things from the last couple of weeks, even the last month. I wanna talk more about this. And so, yeah, I'm encouraging our listeners to subscribe to that 10 minutes. You get your news, what's important, what matters, and it's really focused on cybersecurity.

Tracy Bannon [:

Open source and cybersecurity. I'm an architect, so I'm always gonna come at it from a software architecture and engineering perspective. There are folks like, you know, Chris Hughes and Katy Craig, who are cyber pros. Shannon Lietz, the woman who came up with the term DevSecOps and wrote the DevSecOps manifesto, is one of those folks. Derek Weeks comes at it from a very different perspective. So you've got a lot of really different voices. Olimpiu Pop is from Romania, and he has a very different perspective on things, and we often go back and forth about the differences between the EU and the US.

Tracy Bannon [:

So it gives a broad perspective, but each person is reporting out on a news item from their own perspective. So it's not regurgitating what somebody else has said, like you can get from a news feed. I'm trying to give the additional information: why does this matter to you? That's what all the journalists do. They provide you a little bit more on why it matters, and then we provide other resources for you to go and get after.

Carolyn Ford [:

Right. And that's what we're gonna keep doing on So What?. We're going to keep following 5:05 and unpack some of those stories that either I don't understand or I just find the most interesting. So thanks, Tracy. Great conversation. Thank you, listeners. Smash that like button. Share this episode.

Carolyn Ford [:

We'll talk to you next time on So What. Thanks for joining Tech Transforms sponsored by Dynatrace. For more Tech Transforms, follow us on LinkedIn, Twitter, and Instagram.

Links