Stephen Magill, Vice President of Product Innovation at Sonatype, dives into the complexities of open source and software security. Find out how government agencies are utilizing open source, and what Sonatype is doing to help secure our most trusted software.
Carolyn:
Welcome to Tech Transforms, sponsored by Dynatrace. I'm Carolyn Ford. Each week, Mark Senell and I talk with top influencers to explore how the US government is harnessing the power of technology to solve complex challenges and improve our lives.
Carolyn:
Hi, I'm Carolyn Ford. Today I get to welcome Dr. Stephen Magill, Vice President of Product Innovation at Sonatype. And he's going to share his insights on the evolving landscape of open source security threats, the growing regulatory response around software management, and how to be SBOM ready. And I'm going to ask him about Log4j. I mean, we're a year in, and it's still a huge problem. So welcome to Tech Transforms, Stephen. How are you?
Stephen:
I'm good, thank you.
Carolyn:
Well, it's really good to have you. And I'd like to start off with, tell us about what you do and what Sonatype is all about.
Stephen:
Yeah, sure. So I'll start with Sonatype. We are a company that focuses on open source governance and software security. We have really been one of the original companies focused on helping companies and other organizations, government organizations and so forth, get control of their software supply chain and monitor it continuously, so that they can be aware of vulnerabilities that are discovered in open source. This is a major source of vulnerabilities and exploits, leading to data exfiltration and so forth. And so that's sort of our bread and butter, our core focus area.
Stephen:
We also do a number of things having to do with general software security and software quality. And so that's sort of my domain. I founded a company called MuseDev that created a code scanning product to help developers write better, higher quality, higher reliability, more maintainable software. Sonatype acquired that company a couple of years ago, and we've merged that into our product suite. It's called Sonatype Developer now.
Stephen:
And what I've done since then is I've shifted over to a research role, leading a team of researchers and engineers that are developing the next generation of technology. Again, focused on code analysis, code quality, what can we tell you about your software, what can we tell you about the open source that's going into your software, to help you manage risk and be more secure. So I've been in that role now for a little over a year.
Stephen:
And it's really exciting. I get to interact with folks in industry and government, try and stay on top of what the current needs are to predict what's coming in the future, what's going to be important, and make sure that we're developing technology to address those future risks.
Carolyn:
You just answered or relieved some anxiety that I have had for a long time. Because every time I hear open source and it being used really anywhere, but especially in government, I think that seems like a bad idea to crowdsource code, and how do you make sure that it's safe to use? So thank you. You just literally relieved some anxiety for me.
Stephen:
I'm glad to help.
Carolyn:
So let's talk about open source, and how it's being used in the federal software supply chains. How prolific is it? How are you seeing it being used and secured?
Stephen:
Yeah. So open source is very common in the federal space. Really, I think just as common as it is in the rest of the commercial space. Depending on the survey you look at, something between 80 and 90% of modern software applications consist of open source. The way a developer builds a project now is you go find the open source libraries that do what you need to do. They interact with the APIs that you need to interact with. They help you store data in various formats like JSON and XML, go through the file system, do machine learning. There's a library for every core piece of functionality that you might need. And then the developer is sort of writing the code that glues all that together, layers business logic on top of that, and addresses your organization's particular needs. And so that open source is a big part of your software, and it's an important risk vector to maintain awareness of.
Stephen:
And it's sort of a double-edged sword. So you're bringing in a lot of functionality, and it really helps modern development happen much faster. You benefit from all the community's work in that project. But then you are, in a sense, inviting all these developers onto your team. Your team isn't just the developers that you're paying.
Carolyn:
Anywhere in the world, can anybody contribute to open source software?
Stephen:
That is the ideal. It's basically achieved by most of these projects. They're very open. They'll accept contributions from anywhere. And that's not to say that everything is just accepted without review. The open source projects generally do try to have a very stringent code review process and have various controls in place. And there are organizations like the Open Source Security Foundation that are working with the community to up-level further, and make sure that especially critical open source projects really are following best practices when it comes to code review, looking into who's making this change and what the change is. Making sure there's more than one person signing off on that, scanning the software to look for vulnerabilities, making sure that their dependencies are up to date and they aren't bringing in open source risk from the open source projects that they use, because there are several layers here. An open source project will itself use other open source projects.
Stephen:
And so that all helps. But we still have seen some issues with malicious actors contributing to open source projects. And in fact, that sort of attack vector has been growing a lot recently. So around 730% growth year over year for each of the last three years, in these sorts of malicious supply chain attacks.
Stephen:
And so this is malicious actors really creating their own opportunities by either becoming trusted contributors and sneaking a code change into an open source repository, or exploiting the trust models or lack thereof for package management systems. So things like npm for JavaScript or PyPI for Python, these are pretty open ecosystems. And so you can just upload a package, say, "Hey, I'm creating this new package," and put it up there.
Stephen:
And if it has a name that's very close to a legitimate package, like say it's byte_array instead of byte-array, it's a very easy mistake for a programmer to make when they're adding that dependency. You can take advantage of those typos and get people to use your malicious package, and get into the supply chain that way.
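To make the typosquatting pattern Stephen describes concrete, here is a minimal Python sketch. The package names, the allowlist, and the similarity cutoff are all hypothetical illustrations; real repository-side defenses use much richer signals than name similarity.

```python
import difflib

# Hypothetical allowlist of package names the organization already trusts.
KNOWN_GOOD = ["byte-array", "requests", "numpy", "pyyaml"]

def looks_like_typosquat(candidate: str, known_good=KNOWN_GOOD) -> bool:
    """Flag a name that is suspiciously close to a trusted package
    without being an exact match (e.g. 'byte_array' vs 'byte-array')."""
    if candidate in known_good:
        return False
    close = difflib.get_close_matches(candidate, known_good, n=1, cutoff=0.85)
    return bool(close)

if __name__ == "__main__":
    for name in ["byte-array", "byte_array", "reqeusts", "pandas"]:
        print(name, "->", "suspicious" if looks_like_typosquat(name) else "ok")
```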
Carolyn:
If I were a bad guy, that's exactly what I would do. But you're telling me before it makes it into the library, there are a lot of security gates that it has to go through. So there's a review board that gets it in. And then once the agency decides to use it, they also do their own security checks?
Stephen:
So it depends on the project, but usually there is an approval process. A code review process and an approval process that in at least some cases, involves scanning. But it's very project by project.
Carolyn:
Wait, wait, wait. What do you mean project by project? So you're telling me that some projects don't do a review before they use the code? Don't say that.
Stephen:
There are certainly projects out there that end up playing a critical role in software stacks, that are maintained by a single individual who's doing it as a side project or maybe hasn't even touched it in a couple of years. And it's still included as a dependency in code that's being used widely.
Carolyn:
But not in the federal government. Right?
Stephen:
Surely not, surely not.
Carolyn:
Now you're just placating me. Okay. So that's horrifying. Talk about some of the security threats that we're facing, particularly in open source.
Stephen:
Yeah, yeah. So what I just mentioned is certainly one. These projects, I think the level of trust in the community and these larger foundation supported projects is probably pretty high. Projects that are part of the Linux Foundation or the Apache Foundation. They put pretty robust governance structures in place, and they help projects include code scanning and things like that, follow best practices.
Carolyn:
How big are these foundations though? How many people? Because if you're getting coders all over the world, how can they possibly review everything?
Stephen:
Yeah. Well, it's peer-based review. So when a code change gets submitted to a project, other developers of that project that are contributing regularly to that code base will review it and say, "Okay, this looks good." Or, "You should fix this. Have you considered this edge case that could lead to a problem?"
Stephen:
So it's a really important process for ensuring quality and security. You catch a lot of issues there if you have a robust code review process. But like I said, not every project does. That doesn't catch everything. So you really want layers of security. You want to be doing that. You want to be using code scanning tools.
Stephen:
And then you want to be doing other things like signing your releases, right? Because you do all this code review, you package it up into this official binary, or JAR, or source distribution, or whatever. You put that up in the package repository. If a developer then pulls that dependency down, you want to be able to verify, "Yes, this is the version of this package that was built by the maintainers, that is official, that has gone through all of those checks."
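As a sketch of that verification step: projects typically publish a checksum (and often a signature) alongside each release, and a consumer can check the copy they actually pulled against it. This is a minimal illustration assuming hypothetical file paths and digests; hashing is only the first layer, with signature verification on top of it.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a downloaded artifact (e.g. a JAR)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_official_copy(path: Path, published_digest: str) -> bool:
    """Compare the local file against the digest published by the maintainers.
    A mismatch means this is not the release that went through their checks."""
    return sha256_of(path) == published_digest.strip().lower()

# Hypothetical usage:
# is_official_copy(Path("libs/some-library-1.2.3.jar"), "<digest from the project site>")
```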
Stephen:
Because otherwise, you could end up with a copy that's altered, again by a malicious actor. There are various ways to inject that into your build chain. And so you always want defense in depth, layers of security. And that applies to the software supply chain, just like it applies to security architectures and things like that.
Carolyn:
Okay.
Stephen:
Yeah.
Carolyn:
So how has the landscape of software supply chain security changed since you got into this business?
Stephen:
Yeah. So I think the biggest shift is this recent one that I mentioned earlier, of attackers creating their own opportunities. The traditional software supply chain attacks, going back to the beginning of time, the beginning of software time, involve... hopefully security researchers, but sometimes hackers, identifying vulnerabilities that are just latent in software.
Stephen:
All software has bugs. Some of those bugs end up being security relevant. So they can be leveraged to gain access to a system, execute commands you shouldn't be able to execute, gain privilege, exfiltrate data, things like that.
Stephen:
So if a security researcher or a hacker spends enough time with a code base and they're sort of banging on the software in various ways, there's some chance they'll discover something that they can exploit. Something that gives them a toe-hold from an attack perspective. And so most vulnerabilities traditionally came from that sort of work.
Stephen:
But then more and more, we've seen attackers creating their own opportunities and introducing malicious code into people's build processes. And like you said, if you were an attacker, if I were an attacker, that's what I'd do. That's the low-hanging fruit right now. There aren't a lot of protections in place to guard against this. And you can tell that there's not a lot of protection, because the sophistication of these attacks is very low. It's things like uploading packages that have a name very close to an existing package. We call that typosquatting. You're taking advantage of the fact that some percentage of developers will fat-finger the name of this package. And you can just blast a bunch of those up onto these repositories.
Carolyn:
How many of those are slipping through? Is that the 700% that you told me about? They're slipping through and actually making it into the libraries?
Stephen:
They make it into the libraries for a period of time. So we have developed technology to identify this and block this. And so there's products out there, technologies you can deploy to recognize those sorts of attacks and block suspicious looking packages.
Carolyn:
On the user side. So once the agency chooses to use something from the library, then they need to do their own checking to look for the malicious code?
Stephen:
Yeah. It lets you put in place a protection at the border of your network. So a common architecture for build pipelines at large organizations is you have what's called a caching repository that you run locally. And what this does is, when a developer is building some application and it needs these five open source packages, say they're Python packages, so they live at the PyPI repository, it goes out to PyPI, it pulls those five packages, but then it stores them locally. So then the next developer who's using those packages just gets them from the local cache. So you don't have to always be going back to the source. And so it's an efficiency thing. It makes builds faster. But it also gives you a great point to enforce policy.
Stephen:
So you can centrally deploy this policy that says, "Okay, we're not going to let you use packages that have critical vulnerabilities. We're not going to let you use packages that look suspicious, things that were just uploaded to the repository that have a name very close to an existing package." Things like that. You can put technology in place at that border that will protect you against this.
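A toy version of that border policy might look like the following. The metadata record, thresholds, and quarantine window are all hypothetical; commercial repository firewalls make this decision from far more data, but the shape of the check is the same.

```python
from datetime import datetime, timedelta, timezone

MAX_CVSS = 9.0                   # block packages carrying critical vulnerabilities
QUARANTINE = timedelta(days=14)  # hold very new releases until they have been vetted

def allow_download(pkg: dict, now: datetime | None = None) -> bool:
    """Decide whether the caching repository should serve a requested package.

    `pkg` is a hypothetical metadata record with UTC timestamps, e.g.:
    {"name": "byte_array", "published": datetime(...), "max_cvss": 9.8,
     "suspected_typosquat": True}
    """
    now = now or datetime.now(timezone.utc)
    if pkg.get("suspected_typosquat"):
        return False                      # looks like it is impersonating another package
    if pkg.get("max_cvss", 0.0) >= MAX_CVSS:
        return False                      # known critical vulnerability
    if now - pkg["published"] < QUARANTINE:
        return False                      # too new; let the community vet it first
    return True
```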
Carolyn:
So coders in agencies... Not everybody, not every developer, would go access the open source software library as a whole. What they would access is the local cache, after whoever has the authority pulls things down from the library into that cache. Then they can access them from the local cache. Is that true?
Stephen:
That's right. And you can automate all of this so that approved packages can just automatically-
Carolyn:
Just automatically come to the cache, and you're sandboxing, and doing the vulnerability testing in this sandbox. So it's not hitting your network and proliferating that way.
Stephen:
That's right. That's right. And this gives you a great point to protect against those malicious supply chain attacks that I mentioned earlier. So we call it firewall, like a repository firewall. This idea of blocking those, at that border.
Carolyn:
Well first, let me ask a dumb question. I should know the answer to this. Log4j wasn't an open source attack, was it?
Stephen:
It was not this new style malicious attack. Log4j was the traditional-
Carolyn:
Somebody found a vulnerability in open source. See, this is what scared me.
Stephen:
Yeah. And had been there for a long time.
Carolyn:
Yeah. So let's lean into Log4j for a minute, and talk to me about what agencies can do, what kind of tools they can use to protect their software supply chain. And we don't have to stick with Log4j, but it will help my brain process if we kind of use that use case. What kind of tools have you seen be successful for these agencies in discovering Log4j?
Carolyn:
And I've been reading some horrifying reports that there's still a lot of vulnerability out there, even a year on. And maybe we'll never eradicate it completely. Is that true? Ever?
Stephen:
I think that's possible. There's plenty of legacy software still running. I hear reports of Windows 3.1 boxes still kicking around in a few places. So you never completely get rid of old technology. And that, I imagine, will apply to Log4j as well.
Stephen:
But you're right. Even taking that into account, we are farther behind from a remediation perspective than would be ideal right now. So Sonatype maintains Maven Central, which is the primary repository for Java open source. And Log4j is hosted on Maven Central. And so we're able to see what versions of Log4j people are downloading as they do builds, as they request these packages.
Carolyn:
What do you mean versions of Log4j? They're downloading malicious code on purpose?
Stephen:
So Log4j is used all over the Java ecosystem. It's one of the most popular logging libraries, and logging is a very important functionality. So a whole bunch of Java projects, when you build them, they pull in some version of Log4j.
Carolyn:
Do you know what's sad, Stephen? I associate the name Log4j with the vulnerability. To me, that is the vulnerability. But it's not. It's the logging library?
Stephen:
Yeah. So Log4Shell was the name of the exploit. And then it affected a certain version range of Log4j, so versions between these version numbers. And so what we see is most of the downloads are the patched secure version. But, there's still about 30% of the downloads that are the vulnerable version. And so there's still a lot of people-
Carolyn:
Why can they even download that still? Why is it even available? This is crazy, Stephen.
Stephen:
So yeah. So people have said, "Why don't you pull the vulnerable versions?" And the problem with that is it would break a lot of people's builds. That would go against the commitment that these package repositories have, which is: we will make this software available, and it will continue to be available in the future. If you pull this software into your build process and you're depending on the software's availability, we won't break your build. We won't get in the way of that.
Stephen:
What you can do is, I mentioned having your own caching repository and having that layer where you can enforce policy. There, you very much could say, "Okay, I'm not going to allow my developers to pull these versions of Log4j, period."
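A minimal sketch of that kind of policy rule, assuming a deliberately simplified numeric version scheme (Log4Shell, CVE-2021-44228, affected log4j-core from the 2.0 line up through 2.14.1, with fixes arriving in 2.15.0 and later point releases):

```python
def parse(version: str) -> tuple:
    """Very simplified: numeric segments only. Real Maven versions
    (betas, release candidates) need a proper version parser."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def is_vulnerable_log4j_core(version: str) -> bool:
    """True for versions in the range hit by Log4Shell (CVE-2021-44228)."""
    return (2, 0) <= parse(version) <= (2, 14, 1)

def policy_allows(coordinates: str, version: str) -> bool:
    """Hypothetical rule enforced at the internal caching repository:
    refuse to serve vulnerable log4j-core versions to developers, even
    though they remain available upstream on Maven Central."""
    if coordinates == "org.apache.logging.log4j:log4j-core" and is_vulnerable_log4j_core(version):
        return False
    return True

# policy_allows("org.apache.logging.log4j:log4j-core", "2.14.1")  -> False
# policy_allows("org.apache.logging.log4j:log4j-core", "2.17.1")  -> True
```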
Carolyn:
Sorry, I still don't understand why it's even a possibility. I know you just explained it. My head cannot process what you just said to me.
Stephen:
Yeah. What's incredible is that there are still so many people downloading vulnerable versions of Log4j. That vulnerability got more press than anything else in recent memory. So the idea that there's people that still... I mean, it just shows that there's a lot of people that still don't know what's in their software, right? Because if you knew Log4j was in your software, and you had heard about Log4j, and everyone's heard about Log4j at this point, you would've fixed it. So I think there's just a lot of people who are unaware of what's going in there.
Carolyn:
Do you at least send it with a warning and say, "Hey dummy, do you realize what you're downloading?"
Stephen:
Yeah. If you go to Maven Central and you look at the component list, you look up various versions, see what you're using, there are notifications about known vulnerabilities. And there are open source tools to scan for vulnerabilities. You don't have to use a product like the products that Sonatype produces to get a handle on this. There are open source components that will let you scan your software.
Stephen:
You were asking what the federal agencies can do to protect against this. That's a big part of it. There are tools out there. It's taking that step to make sure that all of your build pipelines, all your software development teams, they are using these tools and scanning their software, to discover these vulnerabilities. And when they find something, those notifications go out in an appropriate way. You have a process for responding to those. So if it just gives you a warning in some log file somewhere, no one's going to pay attention to that. You need that message to go to someone who can make sure that it gets followed up on.
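For teams that want a free starting point, public vulnerability databases expose simple APIs for exactly this kind of check. The sketch below queries the OSV.dev database for a single dependency; the request shape reflects that API as publicly documented at the time of writing, so treat the details as something to verify, and wire the results into a real notification path rather than a log file.

```python
import requests

def known_vulnerabilities(name: str, version: str, ecosystem: str) -> list[str]:
    """Ask the OSV.dev database which advisories affect one dependency."""
    response = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=10,
    )
    response.raise_for_status()
    return [vuln["id"] for vuln in response.json().get("vulns", [])]

# Example (Maven coordinates are group:artifact):
# known_vulnerabilities("org.apache.logging.log4j:log4j-core", "2.14.1", "Maven")
```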
Stephen:
The other thing, so talking a bit more about Log4j. Why was Log4j such a scramble? It was this case where the patch for the vulnerability was disclosed basically at the same time that the world became aware of the issue. So it was not quite a zero-day. There was a patch for it when it came out. But there was a big scramble. The community didn't have a lot of time to adopt that patch before exploitation began.
Stephen:
So it became this scramble to identify, where am I using Log4j? And basically, the way to be prepared for that is to already have this list of what's in your software. So we hear a lot about SBOMs lately. Software bills of materials. These have gotten a lot of notice because there was a cyber executive order out of the White House a couple of years ago that mandated that agencies start producing guidance and regulations around requiring software that's sold to the federal government to ship with an SBOM.
Stephen:
And really, the goal of that was to force organizations to start paying attention to what's in their software. I mentioned there's clearly a bunch of people who don't know what's in their software, as evidenced by all these people downloading vulnerable versions of Log4j.
Stephen:
So step one, I mentioned what you want to be doing is finding out what's in your software, scanning that for vulnerabilities, and having a process for remediating those vulnerabilities and everything. But step one is, "Know what's in your software." So in terms of step one, it's producing these SBOMs, and having a mechanism to inventory them, and keep track of them, and say, "Okay, we have these 1,000 repositories supporting our application stack. Here's the list of open source libraries that each of them uses. We have some system where that's recorded." Then if you have that and something like Log4j comes out, you can just go refer to that source and say, "Okay, show me all of the applications that include Log4j." And it's a very simple query. And then the remediation might take time. You've got to go in and change the code, update the version number, build new versions and deploy those. But at least you know; you can immediately answer that question. Is this a problem for me? How big of a problem is it? How much resourcing do I need to put against this to really address the risk?
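Here is roughly what that "very simple query" can look like if the inventory is a directory of CycloneDX JSON SBOMs, one per application. The directory layout and component name are assumptions; the point is that once the SBOMs exist, "who uses Log4j?" becomes a few lines of code.

```python
import json
from pathlib import Path

def applications_using(component: str, sbom_dir: Path) -> dict[str, list[str]]:
    """Map application name -> versions of the given component it includes,
    based on one CycloneDX JSON SBOM per application in sbom_dir."""
    hits: dict[str, list[str]] = {}
    for sbom_path in sorted(sbom_dir.glob("*.json")):
        sbom = json.loads(sbom_path.read_text())
        for comp in sbom.get("components", []):
            if comp.get("name") == component:
                hits.setdefault(sbom_path.stem, []).append(comp.get("version", "unknown"))
    return hits

# "Show me all of the applications that include log4j-core":
# applications_using("log4j-core", Path("sbom-inventory/"))
```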
Carolyn:
So I can't even manage my own files on my desktop. I can't find stuff most of the time. Are SBOMs automated? And is there a way that categories are created so you even know where to start looking? Does that make sense?
Stephen:
Yeah. So SBOMs can be produced by basically all of these software composition analysis tools. So we produce one of those called Lifecycle. You can export an SBOM from there, you can import SBOMs. So if you get SBOMs from third party software that you want to monitor, you can import those-
Carolyn:
What do you mean import an SBOM? Because isn't an SBOM unique to the organization?
Stephen:
It's unique to an application. So if your developers have some software they're developing, there'll be an SBOM associated with that. If you are getting some software like say, I don't know, Adobe Acrobat Reader. That's a software application that a lot of organizations use to view PDFs, and print them and so forth. There is some SBOM associated with that that says, "Here's the open source components that this application uses."
Stephen:
And with the executive order around shipping SBOMs with software, we're starting to see more and more vendors make those SBOMs available, or have a process for requesting them.
Stephen:
So when something like Log4j happens, you really want to do two things. You want to identify where it occurs in your software stack. But then you also want to know, which of these third party applications I'm using are vulnerable? Do I need to go update my version of product X, Y, or Z?
Stephen:
So both of those things happened at the same time when the world learned about Log4j. There was a whole bunch of internal conversations at every company and every government organization saying, "Are we vulnerable to this? How do we fix it?" And then there were a whole bunch of phone calls being placed between businesses, and the government, and various suppliers, and so forth saying, "Hey the products I'm getting from you, are they vulnerable? What's your ETA on a fix? How do I deploy that fix?" So SBOMs can help answer both of those questions.
Carolyn:
So if you have good SBOM hygiene, you could theoretically just say... Okay, this is me being an end user, a simple desktop end user. But just do a quick search on "show me everywhere Log4j is being used, every application." And it can search your SBOM library and identify them?
Stephen:
Yeah, that's right. That's right. Yeah. And having that tooling in place makes a big difference. So we saw customers who had on the order of 2,000 applications remediate Log4j in less than 30 days at their organization. Because they were able to immediately identify everywhere it occurred, prioritize remediation of those things, and then just work through that list. At places that weren't prepared to answer these questions, it took in some cases weeks just to identify everywhere Log4j-
Carolyn:
Yeah. If you don't have a comprehensive SBOM library... Is that the right term?
Stephen:
Yeah, sure.
Carolyn:
Then can you do "search my environment, and show me everywhere, and map everywhere Log4j is happening"? What do you do?
Stephen:
Yeah. You can do a scan. Well, if you have, say, GitHub, you use GitHub for all your repositories, or GitLab, or whatever, you can go to your repository source and start scanning through that. And so that's what companies that didn't have a solution in place already started doing. It takes more time.
Stephen:
And especially in a large organization and a complex organization, you see some companies that have grown via acquisition, and there are actually 10 different subunits, and they all have their own technology stacks and everything. In that sort of environment, this "let's just go scan everything" idea becomes much more complex to implement.
Carolyn:
I'm thinking about federal agencies that have contractors. So you're getting stuff from a lot of different places, and you might not even have the ability or the visibility into their network. So when an agency works with a contractor or a big integrator, I guess they have rules in place that say, "We need to have an SBOM for everything that you bring into our environment."
Stephen:
That's certainly the direction it's going. So the guidance from the executive order was, yes, we want to start writing that into purchasing guidelines and so forth.
Carolyn:
This wasn't in place before? People could bring stuff into your environment just willy-nilly?
Stephen:
Yeah, there wasn't a requirement that you ship an SBOM. Yeah.
Carolyn:
Oh my gosh. Which is why you have a job, right?
Stephen:
Right, right. Yeah, yeah. Like I said, this is step one. This isn't even where you want to get ultimately. We sometimes make the analogy that requiring SBOMs for software, yes, it's a good first step. But think about it in terms of other products that we're used to interacting with, like cars. Cars have very complex supply chains, with parts originating from various companies, getting assembled in various locations, and ultimately coming together into this final product.
Stephen:
Shipping an SBOM with software is a bit like printing out that parts list and putting it in the glove box, and saying, "Okay, now you're good." You've told everyone what's in your car. But that's not really what we expect. What we expect is for automakers to do more than that: to have that inventory, that library of parts, and then be able to monitor those and say, "We've noticed that this airbag has been failing more than we would expect. We need to go back and investigate that." We find out something's wrong with that part. Now we want to recall all the cars that use that part so that we can fix it and get out updates.
Stephen:
Ideally, we want to be able to support that sort of recall process for software. You want to be able to say, "When Log4j comes out, notify your customers." Say, "We're pulling this from the market. We're going to ship an update," and then deliver that. And so that requires not just generating the SBOM, but tracking it, and continuously monitoring it.
Carolyn:
So the application owners, they identify a vulnerability? They're proactively reaching out to anybody that they know is using that application that has Log4j in it. They reach out and say, "This has happened." Did I get that right?
Stephen:
That's right. That's right. And some proposed regulations do go in that direction. So the European Union has been discussing the Cyber Resilience Act, which is their current version of SBOM legislation. And it does go in that recall direction, which is good. There's some other issues that they have to work out in terms of getting the legislation to a place where it'll work for the market, and for open source suppliers, and everything. But seeing that next step is nice.
Carolyn:
So if you could sum up three best practices that you would give to government agencies specifically, or any organization really. I mean, that's it. What are your three tips that you wish organizations would do for security measures?
Stephen:
Yeah. So one of them is putting in place code scanning technology that will let you scan for open source vulnerabilities, and having a process in place to act on those results. I think the other two best practices, they get less attention because they're less directly tied to vulnerabilities and security risk.
Stephen:
First of all, being careful about your choice of components. So when you go to add a library to your application, say you discover you need to parse some JSON. You have to access some JSON files and sort of act on them. You need a JSON parsing library. There are five available. Which one should you choose? Think about that choice. Which one has better support? Maybe it's part of an open source foundation. Which one has a larger, more active development team?
Stephen:
We actually have a community thing that we've put out there recently called the Sonatype Safety Rating, that rolls up some of these. So it takes information from the Open Source Security Foundation, something they call the Scorecard, which is a list of best practices that really all software projects should be implementing. And it tells you which ones they're implementing. Where are they on this journey towards-
Carolyn:
So our listeners can Google this. What did you call it? Sonatype-
Stephen:
Safety rating.
Carolyn:
Safety rating. Safety rating. They can Google that and go find your safety rating. You've pulled all this information into one place for them.
Stephen:
Yeah, that's right. We just have it for Java right now. We're working on expanding that. And actually, if you go to Maven Central and search for a Java component, if we have that information available, it'll appear there. So you can see what that rating is.
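For listeners who want to pull the underlying OpenSSF Scorecard data programmatically rather than through Maven Central's pages, the Scorecard project publishes results through a public API. The sketch below shows the general shape of such a lookup; the endpoint, repository path, and response fields are assumptions worth double-checking against the Scorecard documentation.

```python
import requests

def scorecard(repo: str) -> dict:
    """Fetch published OpenSSF Scorecard results for a repository,
    e.g. repo="github.com/apache/logging-log4j2" (placeholder path)."""
    url = f"https://api.securityscorecards.dev/projects/{repo}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    data = response.json()
    # Results typically include an aggregate score and per-check details.
    return {"score": data.get("score"),
            "checks": [check.get("name") for check in data.get("checks", [])]}

# scorecard("github.com/apache/logging-log4j2")
```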
Carolyn:
So Maven Central and Sonatype. Say it again? Safety Rating?
Stephen:
Yep. Safety rating.
Carolyn:
Sonatype Safety Rating. Love that. So as you're talking through your tips, the buzzword of the day is coming to mind of zero trust. Is that a thing with open source, applying a zero trust mentality? I mean that's what it sounds like. You're saying, "Don't trust it. Check it yourself."
Stephen:
That's right. Yeah. Don't just assume that it's secure, that it's not going to cause problems. Be scanning it so you can detect issues. Also, I mentioned release signing. Making sure you're using signed releases. That's another way to take a step back from just trusting that this JAR file you got is the correct one, right? You want to be verifying each step in the process.
Stephen:
And the third best practice that I'd mentioned is just staying up to date generally. So most of the vulnerabilities that are disclosed are still this style, where there's been some vulnerability discovered that's been sitting in some project for a while. Someone finally looked at the right part of the code and identified this. And then there's a patch that's released, and they give the community some time to adopt that patch, and then they disclose the vulnerability. It's called responsible disclosure. You put out the patch, wait some time, and then disclose that there was a vulnerability.
Stephen:
And if you're just staying up to date generally, you'll naturally be adopting those fixes. So when those advisories come out, you don't have any work to do because you've already acted.
Stephen:
And the great thing about staying up to date is you can make that a planned proactive process, and you can schedule it as part of your development time, instead of having everything be this reactive scramble.
Carolyn:
So can it be an automated thing? Every day, you just go pull down the latest stuff?
Stephen:
Yeah, it can be certainly. And there's dependency management tools that will automatically suggest pull requests to keep you up to date. And so that's a great route to go.
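If you want a feel for what those dependency-update tools are doing under the hood, a bare-bones staleness check can be written against PyPI's JSON metadata endpoint (for Python dependencies; other ecosystems have their own equivalents). This only reports what is behind; deciding when and how to update is the process part Stephen describes.

```python
import requests
from importlib.metadata import version, PackageNotFoundError

def outdated(packages: list[str]) -> list[tuple[str, str, str]]:
    """Return (name, installed, latest) for each installed package that is
    behind the latest release published on PyPI."""
    behind = []
    for name in packages:
        try:
            installed = version(name)
        except PackageNotFoundError:
            continue  # not installed in this environment
        info = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10).json()
        latest = info["info"]["version"]
        if installed != latest:
            behind.append((name, installed, latest))
    return behind

# outdated(["requests", "urllib3"])
```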
Carolyn:
So automatically request updates, automatically check those updates. Is that a thing too?
Stephen:
Yeah. It used to be that I could just stop my advice there: stay up to date, period. But then with this new style of attack, where the latest version actually might have some malicious commit in it, you have to be a little bit more careful. So now, I guess I would layer onto that another bit of advice, which is to have something sitting at the border, monitoring for those malicious commits. So like I mentioned, a repository firewall is one product there, one solution there. But have something. If you're doing it manually, keep up to date, but maybe don't use the latest version until it's been out there for a couple of weeks. Let that vetting process take place. Or if you're doing it in an automated way, make sure you have automation around the update, but also around the monitoring for malicious code.
Carolyn:
Okay. So do a recap. We got four best practices here. Do your recap for me.
Stephen:
Sure. So be scanning your dependencies, your open source dependencies for vulnerabilities. And have a process to act on those results. Be careful about what projects you pick. Have a process for deciding which is going to be a better, easier to maintain, higher security component for the future. And then be staying generally up to date and have some process for doing that in a proactive way. But also, have some technology or process in place to guard against malicious commits.
Carolyn:
Got it. Okay. Good advice. So for our listeners, I'm going to mention those two resources that you brought up a couple of times. Maven Central, and Sonatype Safety Rating. Really good resources for our listeners.
Carolyn:
And then before I let you go, I'm going to do probably my favorite part of the show, where I ask you Tech Talk questions. Quick, fun questions, answered pretty quickly. So I'm going to start with... I'm looking at my list of questions. I need something new to read. So help me build my reading list, Stephen. What do you read for fun? Not a coding manual, because I'm not going to read that. Unless that's all you read, that's fair enough.
Stephen:
I know. Yeah, I do read for fun. I like sci-fi and fantasy. So I'm a big Stormlight Archive fan. A Brandon Sanderson fan in general.
Carolyn:
Oh my gosh, I love him. Isn't he from Utah?
Stephen:
Yeah.
Carolyn:
Yeah. So that's my home state.
Stephen:
Oh, cool.
Carolyn:
Yeah. My son, when he was a kid, we followed him. He's been here. Yes, big Brandon Sanderson fan.
Stephen:
Yeah. I've also been going back and rereading some sci-fi classics. So Stranger in a Strange Land, The Man Who Fell to Earth. The Heinlein, some Asimov, Foundation. That sort of thing.
Carolyn:
These are all series that I don't know about, and sci-fi is my jam. So I will dig into those. Have you heard of The Broken Earth Series?
Stephen:
No, I don't think I've read that one.
Carolyn:
I'll send you the link. It's so good. It's three books, and it's sci-fi, it's fantasy. She won the Hugo Award three years in a row, for each one of these books.
Stephen:
Oh, cool
Carolyn:
All right, so next question. If you could wave your technology magic wand and have anything you wanted... And you're a sci-fi fan, so I know we can go big here. What would you magic into existence?
Stephen:
I guess I would say the Star Trek replicator thing. Actually, I got a 3D printer recently, and I've been having a lot of fun with that. Which is I guess a step in that direction, but pretty far from the ultimate realization of that thing. It would be cool to actually have the thing.
Carolyn:
I agree. And I need to not know how things work, theoretically, sometimes. Because my understanding is the transporter rips you apart at the molecular level, and then reconstructs you on the other side. That sounds really not pleasant. But they all didn't seem to mind. Right?
Stephen:
It's definitely best not to think about it. I think there were some times where it didn't work properly, so that wasn't great.
Carolyn:
Yeah. All right. Well, thank you so much, Stephen, for being part of Tech Transforms, and for being patient and giving me the 101 on open source software.
Stephen:
Yeah, thank you.
Carolyn:
Yeah, it's been really fun. And thanks to our listeners. Please like and share this episode, and we will talk to you on Tech Transforms next week.
Carolyn:
Thanks for joining Tech Transforms, sponsored by Dynatrace. For more Tech Transforms, follow us on LinkedIn, Twitter, and Instagram.