Artwork for podcast Tech Transforms, sponsored by Dynatrace
So What? Generative AI with Tracy Bannon
Episode 63 • 12th July 2023 • Tech Transforms, sponsored by Dynatrace • Carolyn Ford
00:00:00 – 00:35:18


Shownotes

Tracy Bannon, Senior Principal/Software Architect & DevOps Advisor at MITRE, returns to Tech Transforms for our So What segment to discuss all things generative AI. Following Tracy's presentation at the RSA Conference 2023, she and Carolyn discuss everything from the software development lifecycle to the potential of various AI models.

Key Topics

  • [01:29] - Software Development Lifecycle: RSA Conference Recap
  • [04:48] - Generative AI as a Service
  • [07:36] - Potential for Disinformation
  • [12:04] - Potential of AI for Developers
  • [17:15] - Low Code / No Code Capabilities
  • [26:14] - Discussion Roundup
  • [31:14] - Tech Talk Questions

Quotable Quotes

Definition of generative AI: "Generative AI is under the umbrella of large language models. And a large language model is just that. It is a model where vast amounts of text data have been fed in and it uses statistical analysis to figure out the likelihood that words or phrases go together." - Tracy Bannon

On generative AI models: "It's only as good as the information that's going in, garbage in, garbage out." - Tracy Bannon

Generative AI advice: ''Know that we have to really get focused on the ethics of using these tools. Know that there are big security risks, but get familiar. Get familiar. It isn't going to take your job today. It is going to augment many jobs, but it's not going to take them completely away." - Tracy Bannon

About Our Guest

Tracy Bannon is a Senior Principal with MITRE Lab's Advanced Software Innovation Center. She is an accomplished software architect, engineer and DevSecOps advisor, having worked across commercial and government clients. She thrives on understanding complex problems and working to deliver mission/business value at speed. She’s passionate about mentoring and training, and enjoys community and knowledge building with teams, clients and the next generation. Tracy is a long-time advocate for diversity in technology, helping to narrow the gaps as a mentor, sponsor, volunteer and friend.


Transcripts

Carolyn Ford:

Welcome to Tech Transforms, sponsored by Dynatrace. I'm Carolyn Ford. Each week, Mark Senell and I talk with top influencers to explore how the US government is harnessing the power of technology to solve complex challenges and improve our lives.

Hi. Thanks for joining us on Tech Transforms' So What series with Tracy Bannon, Senior Principal at MITRE. On this series, Tracy and I unpack some of the biggest trending news topics in federal technology. Today we get to talk about her presentation at RSA in May on one of my favorite topics of the day, which is generative AI, such as ChatGPT. So glad we're doing this, Tracy.

Tracy Bannon:

It has been a while. I am so glad that we have just reinvigorated things and we're getting connected again.

Carolyn Ford:

Yes, well, and ChatGPT, man, it really is one of my favorite topics to talk about. So I had the pleasure of watching a recap of your RSA presentation, which, for our audience, they will be able to watch on demand, I think June 1st.

Tracy Bannon:

Yeah. I believe that's correct. Yeah.

Carolyn Ford:

All right. So let's start with, if you can give us a quick summary of what your presentation at RSA was.

Tracy Bannon:

Well, so RSA, if folks are not aware, is the RSA Conference. RSA, if you have a token that you use to log in, it's a changing token, they're the big leaders in that, and they created this conference decades ago. It is the preeminent conference about cybersecurity. I was there specifically on DevSecOps Day and really focused on talking about the SDLC, the software development lifecycle, because that's what I do. I'm a software architect. So looking at how generative AI can be applied. So the first place that we started was, well, why don't we define generative AI? And I'm going to do that right now. I'm going to do that just for the sake of anybody who's not aware. Generative AI is under the umbrella of large language models. And a large language model is just that. It is a model where vast amounts of text data have been fed in and it uses statistical analysis to figure out the likelihood that words or phrases go together. That's it.
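[Editor's note: the statistical "likelihood that words go together" idea Tracy describes can be sketched with a toy example that is not from the episode. Here we count word pairs (bigrams) in a tiny made-up corpus and predict the most frequent follower; a real large language model does vastly more sophisticated scoring, but the core intuition is the same.]

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models are trained on vast amounts of text.
corpus = "the model predicts the next word and the model learns word pairs"

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent follower of `word`, or None."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))  # "model" follows "the" most often here
```

The model "generates" only what its counts support, which is also why, as Tracy notes, it is only as good as the data fed in.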

Now, there are lots of complexities to how it does that mathematical scoring and figuring those things out. You hear about the number of parameters that a model has, but that's what a large language model is. And when we say generative, it's because it is generating something. So it is not inventing things from scratch. It's generating it from all of the data that's been fed in. So when I start to think about generative AI and why it matters to the SDLC, I need to think about low-code and no-code environments. I need to think about custom development, different parts of the spectrum and how it can help. But really, we have to be really careful about what can happen with it. And I could take this entire time and go down the rabbit hole and walk you through every piece of it. Some of the big moments are asking people to walk away and figure out what their own organizations are doing. So I'm going to start with that imperative. I'm going to start with the very last thing that I would say to people.

First thing to do in your organization: turn around and ask folks, what are you doing? Are you using any language service, any large language model? Are you using ChatGPT? Are you using Perplexity? Ask the question, and ask it in an open way, and ask them, "How are you using it? Why are you using it and what are you finding?" Putting those bits into place first and understanding that will allow you to then rationalize: are you at risk? Is it actually helping people, or is it fundamentally either a security gap or a productivity loss? Because everybody talks about how much it's going to help. It doesn't always help as much as we think it does.

Carolyn Ford:

Well, and for the general population, ChatGPT just came on the market hot. Really, for the general population, it's been within the last 10 months.

Tracy Bannon:

In November.

Carolyn Ford:

November, okay. However, this generative AI has been around, especially in the developer's world for a long time. You guys have been using this for a while?

Tracy Bannon:

But in different ways, and the models are improving that much more quickly. If you think about when ChatGPT from OpenAI went live, it was 3.5, then very quickly became 4.0. So an increase in the number of parameters and the complexity and the sophistication of the model and how things are knitted together. But yes, we've been using it for a while, but not in the ways that people think. We haven't been using it to generate a lot of code. Don't listen to what people are telling you. There is an offering called Copilot, which is from GitHub, owned by Microsoft. Anyways, I always get the two mixed up. Sorry guys. But they trained it by looking at all the repositories of code that people had out there. It doesn't mean it's really great.

As of the last time I looked it up about three weeks ago, they still only had a 26% acceptance rate of the code that's being generated. What's that mean? Well, it means they're not that good yet. There are other offerings. A group called Tabnine does more of a code completion. But even if I'm trying to use it to write code, you need to think about what it's doing. Remember, you hear in the media right now that people are seeing ChatGPT hallucinations mean it's making up junk. It makes up junk code too. Sometimes it's okay, but I got to say to everybody, you're not going to lose your job. Not tomorrow. It's going to be incredible. It's going to be a massive difference in change, but it's so embryonic right now. If somebody tells me that their development staff is getting 80% productivity gains, I will say BS, show me. Show me.

Carolyn Ford:

So coming at it from a non-developer point of view, from a marketing point of view, I've found it incredibly useful as a brainstorming tool, as helping me make my words sound smarter. And I'll even tell it that, that's even the prompt. I'll type something in and I'm like, "Make this more formal or make this more casual and conversational." And I have never, as long as I've been using it for the last several months, taken something that it's written verbatim. I take it. I read it, it gets stuff wrong, I fix it. But if I were not a subject matter expert and what I was asking it to help me with I mean-

Tracy Bannon:

It's wrong.

Carolyn Ford:

It's wrong. It's disinformation. And you and I were talking about this, the potential for unintentional disinformation of getting pushed out from ChatGPT I think is enormous.

Tracy Bannon:

So there are so many things to unpack with just this. So we're going to put the technical part aside for a little bit. We'll come back to that later, because I always gravitate towards that. There was a lawyer, I believe it was Friday of last week, who has charges of some sort against him. Some kind of judicial hearing is being held against him because he used ChatGPT. Well, it seems like it should be an okay thing. You've taken all the legal books, you've fed them all in. Why? Well, it turned out that a number of the cases that he was citing as precedents were hallucinations. They were bunk. But he was pressed for time. He put it in, brought it out, read it, didn't double-check, bang, that happened. I have been able to feed in my own transcript that I know by heart, say, "Summarize this," and have it insert junk in there. A good friend of mine, and actually you're familiar with her as well, Katy.

Carolyn Ford:

Katy Craig, former guest.

Tracy Bannon:

Katy Craig.

Carolyn Ford:

Has her own podcast, 505?

Tracy Bannon:

Yes. And it told me that she was a theater and communications major, and I can tell you that is nowhere near true. Fantastic communicator, but there ain't no drama around this woman. Not at all. So there are two different things at play here. There is just the fact that the models are not yet as sophisticated as they will be. You don't have easy access to determine the provenance. Where did it get that piece from? There are some other models. One in particular, and I'm not sure which language model's underneath it, but as a service it's called Perplexity. And when you type something in and it gives you a response, it actually gives you footnotes. So you can look at the footnotes and it takes you to where it got the originating information. Not as verbose, not as convincing as ChatGPT.

But it brings up another bigger point. And this is where my head goes with both the talks that I gave as well as the clients that I'm talking with. Who is building the model? Who's controlling the model? Who's putting data into the model? Let's say that Carolyn Ford is a lawyer and she's looking up precedents. If I really wanted to tank her boat, wouldn't I figure out ways to poison that? It's data poisoning. You can poison a model. The quality assurance of the models themselves is a massive issue. Nobody's talking about that because it's still so fun. First thing that I did, we sat down and made chicken jokes.

Carolyn Ford:

Well, and the compute behind those models is so massive and expensive. I mean, like you said, who's controlling them? I'm reading Four Battlegrounds by Paul Scharre, excellent book on AI. And it goes into the history of ChatGPT. And to your point, who's controlling those models, and what's the agenda?

Tracy Bannon:

And what are they doing to notify you? So consider different ways you might be using it. It might not be ChatGPT, it might be another model and you may be subscribing to their service. So let's say I'm subscribing to it and it's helping me... I'm going to go to the technical part. Maybe it's helping me to put in some kind of algorithm and I can use it for my end users and maybe it helps them filter things or it helps them find things or search things. So I'm subscribing to a model when they have a problem, either a security issue or they're getting errant data coming into it. How would I ever even be aware and I'm directly passing that along to the next guy who's using my app or the thing that I've created.

So there's a lot that we need to do in determining what our role is when using that capability. Are you using it to generate and build something? Be careful what you're building. Are you building marketing? Are you building tweets? Are you building a LinkedIn post or are you building something that could put others at a different type of risk, a physical risk or a cyber risk? All of those things count. They're all a part of this.

Carolyn Ford:

So let's go more technical here and talk about it from your developer hat point of view. Bernd Greifeneder who, full disclosure, he is the founder and CTO of Dynatrace, which is where I work, recently wrote a blog on generative AI and he addresses a lot of its potentials in the developer world specifically. His point is there's potential. He also in the same blog says, "And there's got to be some guardrails." So talk about first of all, as a developer, how you are using it, if you are, and where you... Like the potential, how soon, how real?

Tracy Bannon:

So the work that I've done with it, the tests that I've done, the different services that I've tried have done a marginal job so far. So if you think about writing a software ecosystem, I'm not writing one tiny little thing. So if you think about asking ChatGPT or asking one of these services to create something, how much information do I have to put into it to give it all the context that it would need to understand what the rest of the system does or what the rest of the system needs to do?

Right now, we're not able to give it all of the context that it needs. So what we end up doing is routines. Or the most effective thing that I've found for it... I don't like it to generate code for me, because I'm finding it has flaws. It doesn't compile. If I have to spend more time debugging what ChatGPT gave me than it would take me to write it, or to phone a friend and say, "Hey, I'm trying to figure this out. What would be the best way to code it?", well, that's an issue.

What comes out of it, though, is generated. But if I'm not going to use it to compile, I can use it in another cool way: modernization. We have lots and lots and lots of COBOL, we've got lots and lots of Ada code, we've got things written in languages that people are not being educated on anymore. And we have an aging group that we have not been able to backfill. So we're either going to have to maintain that code that doesn't have a lot of documentation, shame on other generations, shame on all of our generations that didn't create the necessary documentation. But think about being able to take that piece of code in a language that's getting archaic and saying, explain this to me. Explain the intention of this. So there's great power there now to have it explain it.

The problem is I don't want any new-in-career developers, new-in-career technical folks, new-in-career anybody leveraging this from a technical perspective, because they aren't able yet to grasp where the errors are. I was talking to my friend Bryan Finster, who works with Defense Unicorns. He originally was with Walmart for many years. And we hopped on the phone with Dave Farley, who's written a couple of software engineering books, quite well known. We got on a call this morning just to discuss these pieces. What are the security issues? What are the quality issues? Are we all going to lose our jobs tomorrow? No, we're not.

The way that we can use them is that we should be limiting junior people. If we want them to understand language and put some code in and be able to dissect it. It can be good for a little bit of that interactiveness to it. I can take code and I can have it explained, but there's a risk there. Am I taking corporate code? Is it proprietary? Am I feeding it in and saying, now explain it? So we're getting into some muddy waters with that. There are a lot of cool stuff that's on the horizon. Even though that article that you cited, even though the blog post did mention there's potential for it to do some boilerplate stuff for us, not a whole lot of that yet because it's too new. We're in the middle of building things. Yes, there's a lot of playing with how it will help us in the future, but it's still embryonic and everything that I'm telling you now, in six months it'll be different. It'll be massively different because of the rate. We are working through this.

It's interesting that with generative AI, there are things that it can jumpstart to help you with, because generative AI is not only text-based with languages. There are... what's the word I'm looking for? There are visuals. You can do graphics. So there are some interesting things. There are some user interface designers. I have an entire list of different types of generative AI and how they help different environments, for example. Interesting. Low-code, no-code is often looked down on by the developers, the system engineers of the world, the software engineers. It really does have a place; people need to be able to answer their own problems. Sure.

Carolyn Ford:

Define low-code, no-code.

Tracy Bannon:

Sure. The difference between low-code and no-code. These are digital platforms that jumpstart things. A lot of times they will use a what-you-see-is-what-you-get visual canvas to allow you to pull things together, put arrows between them, click on what you want them to do. So it gives a way for somebody who is not an engineer to develop a capability that can be an application.

Carolyn Ford:

Like back in the day when I developed my own website, what I did was I used a template.

Tracy Bannon:

Kind of, but imagine if the template was broken down into smaller bits and pieces. The difference between low-code and no-code is that low-code is intended for someone with an IT background who can do some amount of configuration and programming on top of it, so they can add onto it. No-code is intended for somebody who's completely business, and we need to sit them in front of this and let them answer their own business problem.

So these platforms have been working with AI for a bunch of years. It's actually interesting that they have been more open to leveraging this than traditional custom software. And I think it's because they, for a long time, have been looking for the fastest way to get to value. Whereas if you are a software engineer, you want to get that value, but you also are highly trained and concerned about getting it right: the quality, the reliability, those things that are underneath. Whereas with low-code and no-code, the platform does that. So generative AI in these no-code platforms has been around for a bit, and they have just a whole bunch of cool stuff that they're doing well.

Carolyn Ford:

So maybe that's where we address like, there were jobs that went away with this low-code. Because now you don't... But did the jobs go away or did-

Tracy Bannon:

No, no, no, no, no. Jobs haven't gone away. We don't have enough technical people. We don't have enough software engineers. We don't have enough developers to fill the void. So instead of that, let's democratize it. Let's make it so that Carolyn, you have something that you need to do. Let's make it so that Carolyn can do what she needs to do. And that's really the core of it is getting after they sometimes call them citizen developers. I wouldn't call them developers. I would say we've democratized it so that people who are non-technical in nature can create technical capabilities. That's what it is.

So I'm going to bring us forward, though. Are there ways that it can help me as a software developer? It can do code completion for me, that's pretty helpful. So I start to enter something and it fills it out, a couple of extra lines of code, like word completion when you're typing in your Gmail and it pumps out the whole sentence.
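[Editor's note: the distinction Tracy draws between code completion and code generation can be sketched with a toy illustration that is not from the episode. The snippet table and candidate functions below are entirely invented: completion deterministically expands a recognized prefix, while generation hands back multiple candidates that the developer must be experienced enough to judge.]

```python
# Invented snippet table standing in for an editor's completion engine.
SNIPPETS = {
    "def main": "def main():\n    pass",
    "for i in": "for i in range(n):",
}

def complete(prefix):
    """Completion: expand a known prefix deterministically; easy to trust."""
    return SNIPPETS.get(prefix, prefix)

def generate(description):
    """Generation: return several plausible candidates for a natural-language
    request; the developer must pick, and a junior may lack the tools to."""
    return [
        "def add(a, b): return a + b",
        "def add(*xs): return sum(xs)",
    ]

print(complete("def main"))                       # one predictable expansion
print(len(generate("a function that adds")))      # multiple options to judge
```

The asymmetry is the point: completion has one low-stakes answer, while generation shifts a review burden onto whoever accepts the output.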

Carolyn Ford:

But don't you still have to spend some time to verify that it's-

Tracy Bannon:

Generally not. You tab past it. It was amazing when it was half a word, and now it puts a whole phrase out there. So we're getting used to it and starting to trust it. Code completion is more trustable than code generation. With code generation, I can use natural language and say I want a function that does this, but it'll oftentimes give me four or five options. So I can click through those options. If I'm new in career, how do I pick? I don't have the tools. I don't know how to pick those. So it's really good for somebody who is more advanced in their career, who's more experienced. There are some interesting things that are coming up with code review, but it does mean that I need to feed in my code. So maybe not with ChatGPT, but maybe with my own private model that I feed in-

Carolyn Ford:

Because I was just going to say, as soon as we put our own stuff into ChatGPT, it's no longer our own stuff. It's now out for the whole world to have. Right?

Tracy Bannon:

Yeah.

Carolyn Ford:

So you say put it into your own model. You have your own model?

Tracy Bannon:

I am working with a group and we're standing up our own model, but we're doing this as a community. We're literally coming together, paying for it ourselves so that we can get the experience of building the models. Big firms have been able to do this. I was talking with my friend Dejay Shlyne, who works with Yahoo. He is part of the Paranoids, the security group there. And they were looking at beginning their journey to create their own model. Until very recently, as in weeks, it was essentially cost prohibitive for anybody.

Carolyn Ford:

That's what I was just going to say. Again, not only the amount of data needed to create a good model, but also just the money.

Tracy Bannon:

The compute.

Carolyn Ford:

The compute, yeah. So really you're talking about the Yahoos and Amazons of the world that can do this.

Tracy Bannon:

Generally, though, ChatGPT is rolling out, or has rolled out, a business class that is supposed to be able to isolate what you put in from others. Now show me your security architecture and then I'll believe you.

Carolyn Ford:

So Nick Chaillan... You sent me this post by Nick. I think what he did with Ask Sage is super cool. Talk to me about it a little bit, because I think this is a good example.

Tracy Bannon:

It is one of the best examples that I have seen. He got on it immediately. He saw it as soon as he realized, I mean split second. So he's not generating code in general. There's probably some of that. But what he was looking at is the full software development lifecycle for government and where do they struggle? The amount of contracts that they have to create, the amount of language that they have to interpret. So what if I was able to feed in RFIs, a request for information. I publish it out. I have 10 questions that I'm asking industry and I want industry to help me by providing me information. Well, let's say I get 20 responses, I can push those in and it can analyze those. It can help us to grade and interpret all of those different responses. That's a really good use because it's language for language. It also can help with contracts. Yeah, go ahead.

Carolyn Ford:

Let me tell you what I think you just said. So Ask Sage, you create your RFI, you feed it into Ask Sage, and then contractors can respond. How do they-

Tracy Bannon:

It'd be a little bit different. You'd post it out, get the responses, and then it would be up to me, as the person who got them, to post in what all of those responses are. So if I got 20 responses, I can put them in. And then for question number one, analyze the question number one responses from all of those different contractors, vendors, industry, academia, all kinds of people who respond to these.

Carolyn Ford:

So you can tell it to group them, compare them, analyze them, and help you pick the top three?

Tracy Bannon:

Right. Or at a minimum, provide you with what the similarities and the differences are, if you just did that to get through 20 of them. It does the language analysis to understand... When we're reading something like this, you have to get into the groove of the person who wrote it or the team that wrote it. The language differs from one company to another company, except for the buzzwords. Pretty dramatic style differences. This levels the playing field, because the style of the writing is less important than, statistically, the words that are going together. Well, there are myriad ways to use Ask Sage. One of the things I like the most is generating contract language. Contract language, acquisition strategy aspects, has to be very, very specific. But oftentimes it's very repeated.

So if I write something, I'm buying toilet paper over here and I want to buy toilet paper over there, and yes, I'm being facetious, I would want that same language, but I would want all of the additional clauses, all of the additional things that go with that. So it is good for that. It's used a lot in the legal space right now, because it is pure language analysis. But I'm going to say the same thing I said earlier. It's only as good as the model itself, as the information that's going in, garbage in, garbage out. So if you're not loading in good quality data, if you don't have quality assurance of the model, it doesn't matter if Nick has created it and is hosting it on Azure and has it isolated so that it can be classified, or if you've just logged in from the public internet into ChatGPT. Doesn't matter if the data is not solid, if the data's not being watched.
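[Editor's note: the kind of response comparison Tracy describes, grouping RFI responses by how similar their language is, can be sketched with simple word-overlap (Jaccard) similarity. This is purely illustrative and is not Ask Sage's actual technique; the vendor names and text are made up.]

```python
def jaccard(a, b):
    """Word-overlap similarity between two texts: |A ∩ B| / |A ∪ B|."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Invented stand-ins for free-text RFI responses.
responses = {
    "Vendor A": "we provide cloud native zero trust security",
    "Vendor B": "our platform delivers zero trust cloud security",
    "Vendor C": "legacy on premises appliance with manual updates",
}

# Score every pair once and list the most similar responses first.
pairs = [(x, y, jaccard(responses[x], responses[y]))
         for x in responses for y in responses if x < y]
for x, y, score in sorted(pairs, key=lambda p: -p[2]):
    print(f"{x} vs {y}: {score:.2f}")
```

Even this crude measure surfaces that Vendors A and B answered alike while Vendor C is the outlier, which is the "similarities and differences at a minimum" use Tracy describes; an LLM does far richer analysis of the same kind.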

Carolyn Ford:

Okay. So before we run out of time, let's do a little roundup of what we've talked about. So our listeners, what do you want them to know about generative AI? I guess let's just leave it at that. What do you want them to know about generative AI?

Tracy Bannon:

It's not ready for prime time. And it's really cool.

Carolyn Ford:

It is fun. I can rat hole on it for a really long time. I have to sometimes set a timer.

Tracy Bannon:

As we all can. The second thing is that it's not always correct. Remember that it speaks to you with conviction. The language constructs are so good that when we read it, we're immediately gobsmacked by how professional it sounds. Therefore, it seems authentic, it seems credible to us. Now, ChatGPT has actually added, on its page, when it is generating stuff for you, a note that says it may be inaccurate. I mean, it's right there beside the results that it's giving you.

Carolyn Ford:

I haven't gotten that one yet. But I don't get into really technical stuff. It will tell me things like, "Well, I'm just a..." Not just. "I'm a language model, blah, blah, blah. So, here's something. But this is not my specialty, essentially."

Tracy Bannon:

This is literally a label at the bottom of the page now. I mean, it's permanently at the bottom of the page. I think it was May 24th, the last time I saw it, so just a couple of days ago, that it was posted. So the current tools are growing. I want everybody to understand and to play with it a little bit, but not put in personal information. Just don't do it. Just don't do it. Don't ask it to do your taxes, nothing like that. Don't give it all of your symptoms of why your knee is hurting, because I actually did that to try and figure it out. Don't do that.

Carolyn Ford:

Wait, did it tell you why your knee was hurting though?

Tracy Bannon:

Yeah, it actually was interesting. But I had to play with the prompts. I had to go back and forth and give it more information and give it more information because it builds during a session, you build on the question that you ask or the information that you gave it. So it builds on that.
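[Editor's note: the session behavior Tracy describes, where each follow-up builds on the information already given, reflects how chat interfaces typically resend the whole conversation history with every request. A minimal sketch under that assumption, with invented names (`ChatSession`, `ask`) rather than any real client library:]

```python
class ChatSession:
    """Toy chat session that accumulates turns, as real chat APIs do."""

    def __init__(self):
        self.history = []  # accumulated (role, text) turns

    def ask(self, question, fake_reply="..."):
        self.history.append(("user", question))
        # A real client would send the full self.history to the model here,
        # which is why later answers can build on earlier turns; we just
        # append a placeholder reply instead.
        self.history.append(("assistant", fake_reply))
        return fake_reply

session = ChatSession()
session.ask("My knee hurts when I run.")
session.ask("It's worse going downstairs.")  # the model sees both turns
print(len(session.history))  # 4 turns accumulated in one session
```

This is also why starting a fresh session loses the context: the accumulated history is the only "memory" the model is given.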

Carolyn Ford:

I've found it's made me a better question asker.

Tracy Bannon:

Yeah, no it is.

Carolyn Ford:

Right? Because you have to figure out the right question or the right prompt to get what you're looking for.

Tracy Bannon:

Exactly. So know that it is going to be incredible. Know that we have to really get focused on the ethics of using these tools. Know that there are big security risks, but get familiar. Get familiar. It isn't going to take your job today. It is going to augment many jobs, but it's not going to take them completely away. If you're a copy editor, you're not suddenly going to be thrown out on your ear. If you're a developer, you're definitely not going to be thrown out on your ear. So those would be the big things that I would take away. If anybody wants to get into details, though, if we really want to ferret out some of the nuances of applying it to the software development lifecycle, people can reach out to me. I will be glad to have a conversation on this.

Carolyn Ford:

Fantastic. Well, and like I said, your RSA presentation is going to be live soon. And you've talked about this with Katy on her podcast. But it's fantastic that they can reach out to you directly.

Tracy Bannon:

And there are a couple other links that I'll provide where I've done some recordings on these so people can consume it on their own time and just be thinking about it. Yeah.

Carolyn Ford:

Great. Yeah. I mean, I'd just like to echo what you said about not taking it verbatim. You need to still be an expert about what you're talking about. And if you're not an expert, okay, take what it's got, but you've got to verify it. Go find sources that verify the information that's coming out of it. You can't just take it as is. And it has been a time saver for me, and it's helped me get unstuck when I'm writing things.

Tracy Bannon:

There's a quote by a fellow by the name of Ajay Agrawal. He is a professor at the University of Toronto. And I just love this quote, and maybe we can end with this. "ChatGPT is very good for coming up with new things that don't follow a predefined script. It's great for being creative but you can never count on the answer."

Carolyn Ford:

Yeah, exactly. It's good brainstorming.

Tracy Bannon:

It's wonderful for that. But imagine brainstorming with literary words, versus brainstorming code that could have flaws and has to compile. It's not quite there yet, but it will be, I believe. I truly believe. I believe in my heart of hearts that in years to come, we will say, "Where were you when ChatGPT rolled out?"

Carolyn Ford:

I agree. All right. So I get to ask you a couple of Tech Talk questions, since you're our main speaker today, not just a host. What was your favorite part of the RSA Conference? What was the standout for you?

Tracy Bannon:

It's actually a really personal standout. Not what people would expect, but I'll give you two, the personal and the professional. The personal is that it's the first time that my husband and I have gone to a conference together in two decades. He's operations, I'm development, and we came at things with very different perspectives, but it was so much fun because we were there together. I wasn't taking a picture or texting him about it. So experiencing it that way was fantastic. The other thing, which was both a favorite, though I can't say it's a favorite, a big takeaway for me from RSA, was that there's a real lack of focus on secure by design. And we're still highly reactive. Everything is about how we detect the problem instead of preventing the problem in the first place. But there are always so many good people. There are so many well-intended people that are trying to make the world safer.

Carolyn Ford:

And back to your passion for DevSecOps. You do it right the first time or it's not worth doing at all.

Tracy Bannon:

Exactly. Or at least do it small enough that I can fix it quickly.

Carolyn Ford:

Yeah. All right. Give me something to read or listen to.

Tracy Bannon:

Gosh. Something to read or listen to. Gosh, from a military perspective, I recommend a book called The Kill Chain by Christian Brose. So that's a big one on my list. I wasn't expecting that, so I'll turn around and look at all my dozens of books that I pick up. Project to Product by Dr. Mik Kersten is another fave of mine. And all of these have nothing to do with generative AI. Things to listen to: the Real Technologists podcast. That is new and it's focused on thought diversity and getting after technologists. Yeah, you can see the t-shirt. There's the plug for that. I would listen to that. And if people want a little bit of cybersecurity every day, the 505 podcast is 20 journalists from around the world. And that's what we do every day at 505. You get between four and six little episodes, it takes less than 10 minutes, and it gives you the latest in open source and cybersecurity news. So those are the things I would listen to.

Carolyn Ford:

I love the less than 10 minutes too. So do you do anything for brain candy, read or watch, Trace?

Tracy Bannon:

Well, actually, I just finished Matthew McConaughey's Greenlights and it floored me. It was wonderful, and I listened to it as well. And he has a technique where, when he sees a sign and realizes that it is a greenlight in his life, he'll just go, "Greenlight." So that has made its way into the Bannon house lexicon. So that's some candy. It's definitely some brain candy on the side.

Carolyn Ford:

All right. I will take it into... Honestly, no, not going to go there. Opinion on Matthew. I like him a lot.

Tracy Bannon:

Good actor.

Carolyn Ford:

All right.

Tracy Bannon:

He gets the job then.

Carolyn Ford:

Yeah, exactly. Okay. This is so fun. I love talking to you.

Tracy Bannon:

Well, let's hang together again and soon.

Carolyn Ford:

Absolutely. And thank you to our listeners. There will be a lot of good links that Tracy mentioned in the show notes that you can go to. Share this episode, smash that like button, and we'll talk to you soon on Tech Transforms.

Thanks for joining Tech Transforms sponsored by Dynatrace. For more Tech Transforms, follow us on LinkedIn, Twitter and Instagram.
