The Future of Government Technology: FedRAMP, AI and Compliance in Focus with Ross Nodurft
Episode 75 • 6th December 2023 • Tech Transforms, sponsored by Dynatrace • Carolyn Ford


Shownotes

As technology rapidly innovates, it is essential we talk about technology policy. What better way to get in the know than to have an expert break it down for us? Meet Ross Nodurft, the Executive Director of the Alliance for Digital Innovation. Ross dives in, explaining the evolution of FedRAMP controls and the recent, sweeping AI Executive Order (EO) from the White House. Listen in to find out what this EO means for the government, the industry and the workforce as the U.S. attempts to implement policy ahead of AI innovation.

Key Topics

  • 04:25 Increasing security controls for cloud migration.
  • 07:51 Discussion about customer feedback and cloud migration.
  • 12:17 Encouraging commercial solutions into federal government securely.
  • 15:39 Artificial intelligence shaping policy for future technology.
  • 16:54 AI EO covers critical infrastructure, AI, data, immigration.
  • 22:34 Guidance on AI impact assessment and testing.
  • 27:02 AI tools adoption must not be delayed.
  • 30:03 Ensure AI technologies have fail-safe mechanisms.
  • 32:08 Concern over rapid pace of technological advances.
  • 34:29 AI and technology advancing, policy aims control.
  • 39:37 Fascinating book on technology and chip history.

The Future of Government Technology: Shifting to FedRAMP High and Accelerating Cloud Adoption

Shift from FedRAMP Moderate to High for Sensitive Workloads

When FedRAMP was established over a decade ago, the focus was on managing the accreditation of emerging cloud infrastructure providers to support the initial migration of workloads. The baseline standard was FedRAMP Moderate, which addressed a "good amount" of security controls for less risky systems. However, Ross explains that increasing volumes of more sensitive workloads have moved to the cloud over time, including mission-critical systems and personal data. Consequently, agencies want to step up from Moderate to the more stringent requirements of FedRAMP High to protect higher-risk systems. This includes only allowing FedRAMP High cloud services to interact with other FedRAMP High applications.

The Evolution of Cloud Computing: "So right now, we're at the point where people are existing in thin clients that have access to targeted applications, but the back end compute power is kept somewhere else. It's just a completely different world that we're in architecturally." — Ross Nodurft

The Future of Government Technology: Streamlining FedRAMP for the SaaS-Powered Enterprise

According to Ross, the COVID-19 pandemic massively accelerated enterprise cloud adoption and consumption of SaaS applications. With the abrupt shift to remote work, organizations rapidly deployed commercial solutions to meet new demands. In the federal government, this hastened the transition from earlier focus on cloud platforms to widespread use of SaaS. Ross argues that FedRAMP has not evolved at pace to address the volume and type of SaaS solutions now prevalent across agencies. There is a need to streamline authorization pathways attuned to this expanding ecosystem of applications relying on standardized baseline security controls.

High-level Security Controls for Sensitive Data in the Cloud

Addressing Data Related to Students and Constituents

Ross states that as agencies move more sensitive workloads to the cloud, they are stepping up security controls from FedRAMP Moderate to FedRAMP High. Sensitive data includes things like personal HR data or data that could impact markets, as with some of the work USDA does. Willie gives the example of the Department of Education or Federal Student Aid, which may have sensitive data on students that could warrant higher security controls when moved to the cloud.

Ross confirms that is absolutely the case: the trend is for agencies to increase security as they shift more sensitive systems and data to the cloud, especially with remote work enabled by the pandemic. Agencies with data related to students, constituents, healthcare, financial transactions, etc. are deciding to utilize FedRAMP High, or tailor Moderate with additional controls, when migrating such workloads to ensure proper security and rights protections.

The Future of Government Technology: Navigating the Tradeoffs Between Cloud Innovation and Data Security

As Ross explains, FedRAMP High means you can only interact with other cloud applications that are also FedRAMP High. So there is segmentation occurring with more sensitive data and workloads being isolated via stricter security controls. However, he notes it is not a "bull rush" to FedRAMP High. Rather agencies are steadily moving in cases where the sensitivity of the data warrants it.

Willie then asks about the costs associated with these stricter cloud security authorizations, given even Moderate is expensive. Ross explains there are currently policy discussions underway about making FedRAMP more streamlined and cost-effective so that innovative commercial solutions can still sell to the government without having to completely re-architect their offerings just for these processes. The goal is balancing the accessibility of cloud solutions with appropriate security based on data sensitivity.

Modernizing Federal Government IT: "We need to stop requiring companies to have their own completely separate over architected environment. We want commercial entities to sell commercially built and designed solutions into the federal government." — Ross Nodurft

Laying the Groundwork: The AI Executive Order and the Future of Government Technology

Robust Framework for Future Policy and Legal Development

Ross states that the AI Executive Order is the biggest and most robust executive order he has seen. He explains that it attempts to get ahead of AI technology development by establishing a framework for future policy and legal development related to AI. Ross elaborates that there will need to be additional regulatory and legal work done, and the order aims to "wrap its arms around" AI enough to build further policy on the initial framework provided.

According to Ross, the order covers a wide range of topics including AI in critical infrastructure, generative AI, immigration reform to support the AI workforce, and government use of AI. He mentions the order addresses critical infrastructure like pipelines, hospitals, transportation systems and more. It also covers immigration policy changes needed to ensure the U.S. has the talent to advance AI. Additionally, it focuses heavily on government consumption and deployment of AI.

Mapping the Future of Government Technology

Navigating the Future of Government Technology

The AI executive order tasks the Office of Management and Budget (OMB) with developing guidance for federal agencies on the safe and secure adoption of AI. Specifically, Ross states that the order directs the Federal CIO and other administration officials to establish rules that allow government consumption of AI in a way that protects safety and rights. Before writing this guidance, the order specifies that OMB must consider the impacts of AI on safety-critical infrastructure as well as rights like privacy and fairness.

Ross explains that OMB recently released draft guidance for public comment. He says this draft guidance contains several key components. First, it establishes AI governance requirements, directing every major federal agency to appoint a Chief AI Officer and create an AI council with agency leadership that will oversee adoption. Second, it mandates that agencies take inventory of existing AI use and develop plans detailing how they intend to utilize AI going forward.
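To make the inventory-and-plan requirement concrete, here is a minimal sketch of what one record in such an AI use-case inventory might look like in code. This is purely illustrative: the class name, fields and the `needs_assessment` rule are assumptions for the example, not the official OMB schema.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in a hypothetical agency AI use-case inventory.

    Field names are illustrative assumptions, not the official OMB schema.
    """
    name: str
    office: str
    purpose: str
    safety_impacting: bool  # e.g. could affect critical infrastructure
    rights_impacting: bool  # e.g. could affect privacy, fairness, voting

    def needs_assessment(self) -> bool:
        # The draft guidance's core distinction: safety- or rights-impacting
        # uses trigger the heavier review and documentation process.
        return self.safety_impacting or self.rights_impacting

# A toy inventory with one benign and one rights-impacting use case.
inventory = [
    AIUseCase("Network anomaly detection", "OCIO",
              "Flag unusual traffic in agency systems",
              safety_impacting=False, rights_impacting=False),
    AIUseCase("Benefits eligibility triage", "Program Office",
              "Prioritize benefit applications for human review",
              safety_impacting=False, rights_impacting=True),
]

flagged = [u.name for u in inventory if u.needs_assessment()]
print(flagged)
```

Only the rights-impacting use case is flagged for assessment; the benign monitoring tool passes through, which mirrors the two-tier logic Ross describes.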

Requirements for Agencies to Appoint a Chief AI Officer

According to Ross, a primary governance requirement in the OMB draft guidance is that all major agencies assign a Chief AI Officer to spearhead their efforts. Additionally, he notes that the guidance orders agencies to construct AI councils with membership spanning functions like IT, finance, HR and acquisition. Ross specifies that these councils will be led by the Deputy Secretary and Chief AI Officer of each department.

The Uncertain Future of Government Technology

Collaboration, Prioritization of Assessments, Compliance, Monitoring and Validation

Ross highlights the need for collaboration between industry and agencies to address issues like prioritization, timing, specifics of compliance, attestation, and who pays for and validates assessments. The order pushes the use of AI but lacks specifics, which could slow adoption of widely-used technologies that include AI. Ross notes this could introduce friction, slowing productive technologies at a time when faster digital services are in demand. Compliance pathways need to be better defined to avoid hesitancy in using AI.

AI Ethics and Regulation: "You've got to run as close to live testing as possible, you've got to have human people factored into the decision-making engines." — Ross Nodurft

While embracing AI, the order does not detail how to facilitate adoption. Ross says this could cause confusion across agencies. His trade association ADI sees the need to add specifics around governance mechanisms to avoid inconsistencies. The lack of clarity risks friction and slowing AI incorporation, which Ross believes is imperative.

Balancing Innovation and Responsibility in Emerging Technologies

Demand for a Digital Environment and the Importance of Observability

Ross states that there is a quick move towards a digital environment across all services, driven by demand from millennials, Gen X and Gen Z. He emphasizes that everything needs to have an app or digital access now to engage users. Ross then highlights how Dynatrace provides important observability of these new cloud-based architectures, allowing agencies to understand usage, interactions and performance. He argues this is essential to properly managing digital services.

Ross worries that the new AI executive order guidance lacks specifics around compliance, which risks creating friction in adopting widely-used technologies like Dynatrace that have AI components. He states there is uncertainty whether tools like Dynatrace must be inventoried and assessed under the new policy. If so, there are many open questions around prioritization, timing, specific compliance activities, and who pays associated costs. Ross emphasizes that this uncertainty could hinder cloud adoption without more clarity.

Responsibility and Control Over the Use of AI Technology

Ross stresses that while AI technology enables incredible things, we have full control and responsibility over its uses. He states we must consider processes and safeguards that provide oversight and allow intervention over AI systems. Ross argues we cannot afford to deploy AI blindly, but highlights it is in our power to leverage these technologies to benefit humanity with appropriate guardrails.

Shaping the Future of Government Technology

The Future of Government Technology and Managing Change for Emerging Fields

Ross asserts today there is greater intention around anticipating risks from emerging technology compared to past eras. He advocates for building off switches and review processes that allow understanding and course correction around new innovations like AI. Ross states this considered approach is essential for nanotechnology, quantum computing and other exponentially advancing fields.

The Influence of Artificial Intelligence in Policy and Legal Development: "But artificial intelligence is now more than ever being built into everything that we do technologically." — Ross Nodurft

Ross disputes the concern that AI will replace jobs, arguing instead it will shift skills required by humans. He provides examples of comparable historical technology shifts requiring new expertise, like transitioning from horses to locomotives. Ross states AI moves job responsibilities in different directions rather than eliminating careers, necessitating learning new tools and approaches.

Establishing Processes and Organizational Structures for the Future of Government Technology

Ross highlights how the AI executive order establishes agency governance bodies to oversee adoption. He details required personnel like Chief AI Officers that must review and approve AI use. Ross states these processes aim to identify risks in using innovations like AI while still encouraging adoption. He argues this organizational oversight is a new paradigm essential for emerging technologies.

About Our Guest

Ross Nodurft is the Executive Director of the Alliance for Digital Innovation (ADI), a coalition of technology companies focused on bringing commercial, cloud-based solutions to the public sector. ADI focuses on promoting policies that enable IT modernization, cybersecurity, smarter acquisition and workforce development. Prior to joining ADI, Ross spent several years working with industry partners on technology and cybersecurity policy and several years in government, both in the executive and legislative branches, including Chief of the Office of Management and Budget's cyber team in the White House.

Episode Links

Transcripts

Carolyn Ford [:

Welcome to Tech Transforms. I'm Carolyn Ford, and my co-host today is Willie Hicks. Hi, Willie.

Willie Hicks [:

Hey. How's it going today?

Carolyn Ford [:

It's going great. So today, we're gonna talk policy, and policy is tough to navigate. Like, we're gonna dive into the new executive order on artificial intelligence, and it is a big document. Can you even call it a document? It's like 112 pages. So there's a lot to navigate there. But fortunately today, Willie, we have Ross Nodurft, who is an expert on policy. Hi, Ross.

Ross Nodurft [:

Hi. Thank you for having me.

Carolyn Ford [:

Yeah. Well, thanks for being here. So I'm just gonna give a little bit of your background, Ross. You are the executive director of the Alliance For Digital Innovation, ADI. ADI is a coalition of technology companies focused on bringing commercial cloud based solutions to the public sector, and ADI focuses on promoting policies that enable IT modernization, cybersecurity, smarter acquisition, and workforce development. So prior to joining ADI, you spent several years working with industry partners on technology and cybersecurity policy and several years in government, both in the executive and legislative branches, including Chief of the Office of Management and Budget's, OMB's, cyber team at the White House. So you really are an authority on policy, and I'm glad there are people in the world like you that find this interesting and that can break it down for us. So I wanna start off with FedRAMP authorization.

Carolyn Ford [:

I mean, FedRAMP is like, it's a must have for cloud technologies. We've been sitting comfortably. I think, like, the standard has been FedRAMP moderate. Is that still the case? Are we trending up? What's going on there?

Ross Nodurft [:

So FedRAMP was set up about December of 2011.

Ross Nodurft [:

Since that time, cloud has exploded, and what we have seen is, you know, kind of a set number of major infrastructure providers. Not to say everybody's an infrastructure provider, but, you know, then we've moved to platforms, so areas that people can build different applications on, and then we've moved on top of that to SaaS applications. So right now, we're at the point where people are existing in thin clients that have access to targeted applications, but the back end compute power is kept somewhere else. It's just a completely different world that we're in architecturally. And FedRAMP has not moved. It has not evolved. Right? So a program that was meant to take a deep look at cloud infrastructure providers as a new service has not evolved to meet the need and the pipeline of SaaS applications. There are baselines of security controls. So you've got Low, Moderate, and High.

Ross Nodurft [:

You know, a few security controls, I'd say a good amount of security controls, more than a good amount of security controls, and a whole heck of a lot of security controls: so Low, Moderate, and High. And Moderate was where people traditionally went from an authorization standpoint, meaning, like, the person who is selling into the government, an application provider, an infrastructure provider, met a certain number of controls at that Moderate level, and the agency reviewed the documentation saying, yes, you met all those controls. We check the box. You can come into our environment. Well, as more and more people have moved more and more systems into those cloud environments, they've gone from systems that probably were less risky, because that's the stuff you put into something that's unknown first, to systems that are more risky. And as we've migrated more stuff into the cloud environments and are using more applications to interact with those datasets and the systems that are governed by that, people looking at the security controls are saying, maybe I wanna migrate a little bit from Moderate to High.

Ross Nodurft [:

They wanna add a few more security controls. Maybe it's Moderate tailored with a few more. Maybe it's FedRAMP High. And what you do when you step from Moderate to High is all of a sudden you're now in an ecosystem where you only interact with other cloud applications, especially the managed services, that are at that High level. So what I'm seeing right now is it's not like a bull rush to High, but people are going from Moderate to High, and they're doing it as they adopt more and more cloud services. But that's what I'm seeing.

Carolyn Ford [:

So the Moderate addresses what we looked like, what our environments looked like maybe 5 years ago? 3 years ago?

Ross Nodurft [:

I would say there was a lot of Moderate. I would say 3 to 5 years ago is fair. I think since the pandemic, that accelerated a lot of cloud adoption, and it accelerated a lot of application consumption. So moving really from platform to SaaS and everybody using commercial SaaS solutions to meet the needs that were thrust upon them in the wake of the pandemic and people working from home, at least in the enterprise environment.

Carolyn Ford [:

Alright. So the FedRAMP High addresses those SaaS security needs and that move, like you said, that the pandemic caused for us to all get into the cloud. Moderate just didn't deal with a lot of the security needs that come with moving to the cloud. Did I get that right?

Ross Nodurft [:

Well, I think that happens when you move more sensitive workloads into the cloud. So think your personal HR data. Think interacting on a mission level where you have things that could move markets. Right? So USDA, for instance, would move some of the work that they're doing into the cloud that wasn't traditionally cloud based, where, you know, maybe they move the commodities market with the publication of a spreadsheet. Well, they wanna protect that a little. So as they move some of the more sensitive data sets and systems into cloud environments because people are working remotely. Mhmm. People are deciding to step up and use FedRAMP High to protect the systems that govern those mission sets.

Willie Hicks [:

And so I was gonna jump in real quick, just ask a question if I could, Carolyn. Because this is kind of what I'm hearing from our customers too, and I think Carolyn knows. I don't often get in and talk a lot about just kind of our platform and product, but I have seen a lot from our prospects and our customers where Moderate, to your point, has been okay. It's been more than acceptable. But I'm starting to see, and it's really fascinating, where, you know, you're getting, I guess, more and more confident. Maybe it was COVID. I didn't think about it until you said it, but maybe it was COVID where people started to figure out, oh, we can start moving these workloads into the cloud, where you get agencies. And I'm just gonna call out, you know, and not saying that they're, you know, what they're moving to the cloud or not.

Willie Hicks [:

But even if you think about a department like Education or Federal Student Aid, when you're starting to move workloads maybe to the cloud that might even have sensitive information about people, students, constituents, would those kinds of data be, I'm not even thinking about PHI type data, but it could be PII data, that could get into the sensitive, high security realm too, I would imagine.

Ross Nodurft [:

Absolutely. Yeah. Absolutely. And, look, the trend's not going away either. Cloud is functional. It's useful. It supports new technology. It is managed in a lot of ways for you.

Ross Nodurft [:

So, look. The trade association that I run, the Alliance for Digital Innovation, we are doing everything we can to clear the pathway for agencies to access more of that technology. Right? We think it's a net good. People gain efficiencies. People gain security. That's what we're doing. So that's a trend we're seeing, and that's a trend we want to continue to see, to be honest.

Willie Hicks [:

And if I could just follow up with one more question, and I don't wanna stay too long here, Carolyn, but I just thought about this. You know? And from your association standpoint, I'm sure you're helping towards this. But when you start thinking about more and more agencies moving more secure workloads to the cloud and needing that higher baseline, I remember, you know, just getting our Moderate authorization was an expensive proposition. And now looking at High and some of these other ones, these are extremely expensive propositions. You know, what are agencies doing? What is the PMO doing? It's important work. I mean, it should be a burden to do, because bringing this kind of security is not gonna come, you know, free. But, also, we wanna make it so more and more providers and industry can actually participate in this without having to shell out millions of dollars to become FedRAMP'd, to maintain it. I mean, are there any movements there to kinda make it easier to meet this baseline?

Ross Nodurft [:

its inception in December of 2011.

Ross Nodurft [:

The final thing that I think gets to your point is what they said in one of their kind of opening preamble baselines, right? This is kind of how they're thinking about it: we need to stop requiring companies, or building processes that require companies, to have their own completely separate, over-architected environment. We want commercial entities to sell commercially built and designed solutions into the federal government. We need to figure out how to get there, and we are going to enable the FedRAMP PMO. The White House is gonna enable the FedRAMP PMO, the people who own and run the program, to figure that out. We need to, you know, update the way that we're doing security controls. We need to take a hard look at what accreditations we're accepting and offering reciprocity to. Right? We need to think about how we're approaching this growing, vibrant environment of possibility and say, how do we open the doors to access more of those solutions in a way that's cost effective enough for folks to wanna do business with the federal government while not deprecating security? That's the policy discussion that's happening right now today, and ADI is actively involved in it.

Carolyn Ford [:

I'm really glad you brought it up, Willie, and I'm glad to hear what you're saying, Ross, because we hear all the time from government leaders that there has to be more, better, faster partnership between government and industry, that it has to be we have to be working together, and authorizations like this can really become major hurdles. Right?

Ross Nodurft [:

Mhmm.

Carolyn Ford [:

And so it's refreshing to hear that it's being addressed. So speaking of another major hurdle, let's talk about AI. Specifically, the first thing I wanna talk about is the recent executive order around AI. I wanna get your thoughts on it. It's massive. As I mentioned at the very beginning of this episode, it's 112 pages, I feel like I've heard you say, Ross.

Ross Nodurft [:

111. Yep.

Carolyn Ford [:

111. Yeah. Yeah. So massive. I'd like to get your standouts from the order. What makes it different? Well, it's really the first one of its kind. Right?

Ross Nodurft [:

Mhmm.

Carolyn Ford [:

Okay. So talk to us about it.

Ross Nodurft [:

Sure. Sure. So, a couple of things. One, the AI executive order, I mean, I am not familiar with an executive order that's as big and robust as this. I've never seen anything like it. What I think we're seeing is the White House is attempting to get ahead of a type of technology development, which is a wild thing to think about. Right? Imagine if the White House decided to get ahead of cyber writ large or something along those lines. AI is not one particular technology, and, frankly, AI's been going on in different ways for years.

Ross Nodurft [:

But artificial intelligence is now more than ever being built into everything that we do technologically. There are plenty of advantages to it. Right? It's a type of way of considering a problem and producing responses, and there's some amazing leaps that are happening with it. And the way that that toolset is enhancing technology is making rapid changes across a number of environments. So the White House has seen that, and they decided to try and set up a framework for future policy and legal development. They know that there's going to be additional work to do from a regulatory standpoint, from a legal authorization standpoint, and they needed a way to wrap their arms around it enough that they could build on their framework. So the AI EO is that. It is an attempt to kick start a bunch of processes that will enable them to develop specific and bespoke policies that are hopefully attuned to various environments.

Ross Nodurft [:

Now the AI EO itself covered everything from critical infrastructure and AI, generative AI, data that feeds and teaches AI, immigration reform that would be necessary to make sure that the United States has the best possible workforce to continue to work on AI, to the government's use of AI, which is what, you know, the Alliance for Digital Innovation cares about deeply, and a bunch of other things. Some of the main standout pieces of the AI EO: I mean, for me, look, I've been concentrating on what it kick starts from a federal government standpoint. I'm happy to talk about it. I'm looking at you guys for guidance. Which direction do you want me to go? I can talk more about the EO as a whole. I can talk about the OMB guidance that is gonna follow on from that. I can talk about some of the interesting, kinda quirky aspects of the EO.

Ross Nodurft [:

I mean, you tell me which way you want to go.

Carolyn Ford [:

Willie?

Willie Hicks [:

So, personally, I would be curious about kind of the direction from an OMB standpoint, kind of what we're seeing from a guidance standpoint, and, you know, how is this gonna really impact our daily lives, if that makes sense.

Carolyn Ford [:

Yeah. I'm curious to know if it has any teeth too. I mean, we get these EOs all the time, and they don't really go anywhere.

Ross Nodurft [:

Mhmm. Right. So it's a good question. So I wanna make sure that our listeners are tracking. So the executive order, which is what Joe Biden signed, had a section in it devoted to the public sector consumption of artificial intelligence. It basically said, look. You, federal CIO, and a couple other folks in the administration, we need you to go and write some guidance and some rules to enable the federal government to consume it safely and securely. And before you go do that, make sure that you take 2 things into consideration: the AI that could be safety impacting and the AI that could be rights impacting.

Ross Nodurft [:

Then it starts to talk about what that means and then tells OMB to go define it. So the AI executive order tasks OMB, the Office of Management and Budget, to put out guidance, and the Office of Management and Budget released, probably 2 weeks ago now, a draft of that guidance for everybody in the world to comment on. And that guidance had a few sections to it. One, it talked about broad governance. What it does is it says, look, we're gonna task every department and agency to appoint somebody who is well trained, has the background, to be the Chief AI Officer of that agency. Everybody's gotta find one.

Ross Nodurft [:

If you don't have one, go get one. Two, it says every CFO Act agency, and what that means is every big department that you can think of, there's 24 of them, the big ones, they all have to set up an AI council, and those councils will be chaired by the deputy secretary of those departments and agencies and co-chaired or vice chaired by the chief AI officer. So those are the top 2 people in those councils, and then they'll appoint some other folks from across the agency. So everybody from the head of HR, to the head of budget, to the head of finance, you know, you name it. Right? To the CIO. They're all gonna be there.

Ross Nodurft [:

Then it turns around and says, okay. You've got your governance structures in place. Now you need the right plan. So go look inside. Tell me, what are you working on? Because AI has been around for a while. So what do you have currently that is considered AI? Right? Is it a tool that uses AI to give you a result? Is it actual AI technology that you guys are leveraging to deliver your mission? Go do an inventory of that stuff and come back to us. Then from that inventory, we need you to develop a plan on how you're gonna use AI going forward. And they caveat that with a proactive: we want you to use AI.

Ross Nodurft [:

We think it's a good thing that you use AI. AI, including generative AI, will be a net positive for us, but just don't do it in a way that introduces safety risk or,

Carolyn Ford [:

Privacy.

Ross Nodurft [:

It's broader than privacy. It's safety and rights impacting risk. And what I mean by that, I'll tell you about it in a sec. So then it says, look, here are some examples of safety and rights impacting AI. Safety impacting are things that, you know, could mess with water, could mess with hospitals, could mess with TSA-related pipeline stuff. You know, you name it. That's critical infrastructure type things: traffic, transportation, airplanes. You know? And then it turns around and says, rights impacting are, you know, if you're going to educational institutions and you've gotta take an SAT test, and there's AI and correlating statistics around that.

Ross Nodurft [:

That's rights impacting. Voting, rights impacting. Things that are less about harm to physical self and more about potentially leading to misinterpretations that could be biased in some way, shape, or form.

Willie Hicks [:

Mhmm.

Ross Nodurft [:

And I'm not doing it justice. I mean, the guidance has a list of things to consider. Then underneath it, and, again, I'm sorry if I'm taking too much time, but underneath it, it's a big, big series of requirements. It says, alright, agencies. Now that you've got your plan, now that you've got your people, now that you've got your council, we need you to run a process for every piece of AI that you either currently have in there or are gonna bring in there to make sure that it doesn't impact the safety and rights that we're listing out. And you gotta do this by basically doing a large assessment of it, documenting it, having somebody review that documentation. You've gotta run as close to live testing as possible, you've gotta have human people factored into the decision-making engines. You've gotta train people to make sure that they're ahead of the AI as you're factoring it in.

Ross Nodurft [:

Right? Like, they go through a laundry list of stuff that agencies have to do if they wanna employ, deploy, or purchase AI. There's a lot of stuff that remains to be spelled out. Right? So I think that the guidance is a start, but it's not very explicit, and agencies are gonna be looking around trying to figure out where to start. And, frankly, it leaves a lot of autonomy for individual agencies to make individual decisions about processes. So for companies, especially the companies in the Alliance for Digital Innovation, everybody is looking around and saying, "Oh, guys, we need to put a little bit more of a finer point on some of this stuff. We need to use current governance mechanisms. Otherwise, we're gonna be running around answering to, you know, a hundred-plus agencies' different interpretations of this matter." So that's where we are right now.

Willie Hicks [:

Could I ask a question, thinking back to something you were saying earlier? So they're gonna have to do this inventory, and I'm using that term loosely, an inventory of what they've got. You know? So is it expected, and I'm thinking of this more, you know, from a Dynatrace perspective. So at Dynatrace, we've had AI built into our platform for many years now, and a lot of our agencies use that. I don't know how many of our agencies know, at that level, that Dynatrace is, at its core, kind of built on AI. From an industry standpoint, are we supposed to be working with the agencies to try to make sure they know where their AI

Carolyn Ford [:

That's a really good question. I mean, honestly, if I were using Dynatrace, I don't think I would think, oh, this is one of my AI technologies.

Ross Nodurft [:

Yep. It's a great question. That is actually one of the questions we're posing back to OMB right now. Because look, there's a core difference between building in ChatGPT-like functionality and using a tool that looks across your enterprise at the different applications and potentially manages the risk associated with them, and that uses AI algorithms inside of its engine. So we don't know. And let's pretend for a second that the answer is yes, that everybody's gotta work together to figure out where there's AI. Is there a prioritization about who gets assessed first? What does the timing look like? What does the compliance actually look like to do that? How do you attest against it? Is this something else that Dynatrace is gonna have to pay for? Who's gonna monitor it and validate it in the department and the agency? Your end users? So there's a lot of questions that still need to be answered, and we plan on raising as many of them as we can think of.

Willie Hicks [:

I'm glad that you're on our side there. I hadn't even considered that until we started talking about this.

Ross Nodurft [:

Yep. That's what we do. So, look, I think in general, we gotta take a step back and think about what they're saying. They're saying two things at the same time. One, they're saying use AI, first and foremost, of which I am personally appreciative. I think that it is good that we have a government that is embracing AI and is hopefully gonna put some resources around it. Right? Because all the stuff that's gonna follow from this needs to be paid for somehow. This can't be just a, you know, requirement without the support from the government to meet the requirement.

Ross Nodurft [:

On the other hand, they're not real specific about how to get there, and that's gonna cause a lot of confusion. So, hopefully, it's not gonna slow down the adoption of tools and services that have AI because, frankly, we don't have the time for that. You can take it from a bunch of different angles. Like I said at the beginning, people are bringing on SaaS applications at a rapid clip. It is enabling people to do their jobs faster, better, in a way that they couldn't before. The citizenry of the United States is demanding a migration to a digital environment for everything from service delivery to protection of their homeland and everything in between. Right? If it doesn't have an app, millennials, Gen Xers, Gen Zers aren't touching it. Right? So we are moving quickly to a digital environment, and, you know, we need to make sure that we are managing everything in the enterprise that is interacting digitally. Right? So I think Dynatrace is a perfect example.

Ross Nodurft [:

We need observability of our newly architected, cloud-based environments. We need to understand who's on there, what's on there, how it's interacting, how much compute it's using, like, everything you need to know. And if all of a sudden you're introducing some risk and some nervous energy around using products that have been used for years, plus a new compliance pathway that's not real well defined, I'm worried that it's gonna create some friction where there wasn't friction before. So there's the good, and then there's the potentially bad. We gotta mitigate the bad so that we can keep on using companies like Dynatrace in our environments and really kind of delivering the services that need to be delivered to the American people.

Carolyn Ford [:

So you made me think about rumors around AI, like some of the scarier things that people think about with AI. Let's talk about some of those rumors that we should stop spreading.

Ross Nodurft [:

So before I get into specifics, I wanna delineate, because I think this underpins it all, between technology and the uses of technology. Technology can do incredible things. We are currently, as a species, making technology that does incredible things. Our uses of technology, and what we use those incredible new tools for, is completely within our control. And even if that control is layered, right, even if we empower a machine to make a decision to utilize technology, which is what the big scariness is. Right? Are we gonna have somebody push a button to shoot a missile? We have the ability to say yes or no.

Ross Nodurft [:

We have the ability to say yes or no. I forget who it was. There was somebody at one of the big companies, I think it was Brad Smith, who recently said, let's just build an off switch, and it's not that simple. But in a way, we need to think about the processes and how we're building our tools and technology around AI to make sure that we are considering that off switch. So as we layer more and more machine-based decision making on top of each other, we need to think about the layering of that in ways where we have off switches all along the way that we can flip, and where those chains happen in a way that demands our attention. So to get back to your point, we can't afford to build systems where we are completely ignorant of them, and that's the big fear, I think. But it's completely within our power to have those same dynamic, wonderful, incredibly fascinating technologies interacting with one another in a way that benefits the human race without blowing ourselves up. I think it totally is, and I look forward to that environment.

Ross Nodurft [:

And look, this is something I say all the time. We've been using AI for a very long time. Our data has changed and continues to change, and we continue to put more and more data into algorithms. Dynatrace is a great example. You guys are an AI company. I mean, correct me if I'm wrong, Willie, but, like, you guys have been using it for a while now. No?

Willie Hicks [:

Oh, yeah. For many years, we've had it built in.

Ross Nodurft [:

So I think that we will continue to evolve the technology, and not all AI is generative AI that mimics human thought. You know, AI does a lot of things behind the scenes that just makes it easier for us as humans to do what we wanna do.

Willie Hicks [:

And, you know, you said something earlier that just kinda stuck with me, Ross, about delineating the actual physical technology versus the use of it. I'm loosely remembering what you said here, but one thing I'm curious about your thoughts on. One of the problems I think I see from a technology standpoint is that we are now at a point where we're getting these advances at such a rate. What it seems like to me is that we are outpacing ourselves. It's not that building that off switch is wrong, but people don't seem to have the time and energy to actually just sit down and think through: what is the meaning of this new technology? How is it going to impact my life? I mean, because these things take energy. It takes time. When technologies were happening, like, every 10, 20, 30, or 100 years, you had time to think about, you know, how's the steam engine gonna impact industry. You had generations to kinda figure out the impact of these things. Now you've got hours, sometimes, it seems like, before the next thing hits you on the news or on Twitter, X, or whatever it's called by the time we get there.

Carolyn Ford [:

To your point though, even when we had years to consider how the steam engine was going to affect us, there were a lot of unforeseen impacts because of the steam engine, because of industrialization, that we're just now realizing. And so to your point, Ross, build those off switches in. How do we know where to put the off switches?

Ross Nodurft [:

No. Look. I hear you guys, and I don't know that there's a clean answer to that. But I do know that the intentionality is, I think, different than it was at the turn of the last century, when we were seeing industrialization occur. Right? So the conversation we're having now may or may not have happened then.

Ross Nodurft [:

It probably did, but I don't know that it happened as widely and as aggressively as it's happening right now. I mean, look, think about AI and the pace and churn of AI, quantum, the evolution of all that. We didn't have an Office of Science and Technology Policy like we do now. Right? And I think that there's a reason for that. And I think that what we're seeing is the United States, and other folks who are thinking about these things, trying to figure out ways to get their arms around it so that they build in a process to allow them to try and find where to put that off switch. What I mean by that goes back to the question you asked me about the policy initially. What's in this policy? What's in this huge executive order? What it is is a series of systems.

Ross Nodurft [:

They're building in processes. Put in an organizational structure that's gonna review every single piece of this and think about it on the way through. Put in a series of general guardrails, like, oh, wait, am I gonna put this piece of technology over the boiler that sits under the office of my, you know, Secretary of the Treasury? Probably not a great idea. Right? Put in a series of processes, a framework to think it through, so that you can hopefully identify where you're building that off switch. And I think that that has to be the way that we approach things from now on, into nanotechnology, into quantum computing, and everything in between. And that's different, too, than where we used to be.

Carolyn Ford [:

Do we have the experts in the

Willie Hicks [:

Nope. Nope.

Carolyn Ford [:

So AI is creating jobs for humans.

Ross Nodurft [:

Yeah. No. We're retraining current humans. Right? We've talked about this for a long time, but there's a lot of concern about AI putting people out of work. I think AI is going to put people into places where they have to learn. They'll have to learn the AI, learn how to use new tools to do their job faster, better, quicker. They'll have to learn different skill sets to enable AI. Right? Technology moves things in a different direction.

Ross Nodurft [:

Where people used to be horse-and-buggy drivers, they're now locomotive drivers. Same rules apply. Where they used to be mechanics, or people who were shoeing horses on a regular basis, they're now repairing locomotive engines and understanding how steam makes engines turn. Right? So, like, we as a species are not gonna put ourselves out of work. People are going to find things to do, and then you gotta think about how we exist culturally. This is getting a little meta, I know. But, like, we will not all of a sudden build enough technology that Ross gets to go take a nap whenever he wants to.

Ross Nodurft [:

That's just not gonna happen. I wish it was the case.

Carolyn Ford [:

That's what I want.

Ross Nodurft [:

That's not gonna happen. It's not gonna happen. People are still gonna come to you, Carolyn, and say, Carolyn, this new, completely computerized, VR-enabled environment is a great place for us to have a Dynatrace discussion. Can you put it together for us? Right? And you're gonna have to figure out what that looks like and how it sounds and what it feels like in an environment that's completely untraditional. But they're still gonna want you to do that because you have the wherewithal to curate that more than anybody else.

Carolyn Ford [:

Alright, Willie. Time's beaten us. Do you wanna ask Ross any last questions?

Willie Hicks [:

Actually, I'm curious, Mister Ross, what is your holiday Thanksgiving tradition? Do you have any good traditions?

Ross Nodurft [:

So I'm from New Orleans, Louisiana, and my favorite thing to do every Thanksgiving actually happens the day after Thanksgiving, where I take the turkey bones and I make a great stock, and then I make a great gumbo. Turkey gumbo all the way. Smells up the house, takes me hours, and it's one of the highlights of my year.

Willie Hicks [:

So I might have to invite myself over. And you're from Louisiana. I mean, we might watch some football. Alabama?

Ross Nodurft [:

Maybe. Maybe.

Carolyn Ford [:

Alright. Well, you know, Willie, I'm always looking for a good read. So, Ross, other than 111-page executive orders, what do you read or watch or listen to?

Ross Nodurft [:

I'm an avid podcast guy, but I'll tell you the book that I've been listening to. I listen to everything. I'm a terrible visual guy. I try to listen to as much as I can. So I'm reading, or listening to, a book called Chip War right now.

Willie Hicks [:

Mhmm.

Ross Nodurft [:

And it is phenomenal.

Carolyn Ford [:

Like C-H-I-P?

Ross Nodurft [:

C-H-I-P. And if you give me a second, I'll tell you the author. K. One second. Chris Miller.

Carolyn Ford [:

Okay.

Ross Nodurft [:

K? And for anybody who thinks about technology, it's important for us to remember how it's built and how it's being driven. The first half of Chip War takes you through the history of Silicon Valley, its inception, from transistors to microchips to the things that we're doing now around microchips. Then it takes you through the geopolitical progression to get to the global supply chain that we have right now, and even some of the current dialogue that's happening around chips today. It is fascinating and, frankly, taught me a lot about various types of chipsets and what underpins certain types of technologies: what goes into the servers that power the cloud spaces versus what's going into some of the systems that power AI, what the differences are, where they're made and how they're made. And it's fascinating. It helps because you start to realize how it's all connected and how, at the end of the day, you know, again, we're in control of our destinies, but we're still doing some crazy stuff. It's a good listen. So that's my recommendation for you both.

Willie Hicks [:

Fantastic. I was gonna say, thinking about technology... I already bought it. So I just went ahead.

Carolyn Ford [:

Right. I was searching for it while you were talking. Okay. Added to my Kindle. So there you go. Alright. Well, thank you so much for spending time with us, Ross.

Carolyn Ford [:

We really appreciate it.

Ross Nodurft [:

Yep. I'll come back whenever you want. This is great.

Carolyn Ford [:

Yeah. Okay. Be careful with what you offer.

Ross Nodurft [:

Thanks to Dynatrace. We're glad you guys are members of the Alliance for Digital Innovation. You're really valued members, and we appreciate your input.

Carolyn Ford [:

Well, thank you. And thank you listeners for listening. Please smash that like button and share this episode, and we will talk to you next time on Tech Transforms. Thanks for joining Tech Transforms sponsored by Dynatrace. For more Tech Transforms, follow us on LinkedIn, Twitter, and Instagram.

Links