AppSec Builders - Sqreen, Inc. EPISODE 2, 4th November 2020

Framework Security with Ksenia Peguero: the paved road foundation

In this episode I’m joined by Ksenia Peguero, Sr. Research Lead at Synopsys, for a discussion around frameworks and the foundational effect they have on the security of your application. We’ll share concrete tips for upgrading your security through your framework, choosing the best framework for app security, performing a framework migration, and spotting and fixing security blind spots in your frameworks.

Resources:

About Ksenia

Ksenia Peguero is a Sr. Research Engineer within Synopsys Software Integrity Group, where she leads a team of researchers and engineers working on static analysis and the security of different technologies, frameworks, and languages, including JavaScript, Java, Python, and others. Before diving into research, Ksenia had a consulting career in a variety of software security practices such as penetration testing, threat modeling, code review, and static analysis tool design, customization, and deployment. During her decade in application security, she performed numerous engagements for clients in the financial services, entertainment, telecommunications, and enterprise security industries. Throughout her journey, Ksenia has established and evolved secure coding guidance for many different firms, developed and delivered numerous software security trainings, and presented at conferences around the world, such as BSides Security, Nullcon, RSA, OWASP AppSec Global, TheWebConf, and LocoMocoSec. She has also served on the review boards of the OWASP AppSec USA, EU, and Global conferences.

Ksenia Presentations:

Additional Resources:

Passport, Flask-Login

http://www.passportjs.org/

https://flask-login.readthedocs.io/en/latest/

Sails CSRF protection

https://sailsjs.com/documentation/concepts/security/csrf

Express CSRF plugin

https://github.com/expressjs/csurf

Django / Rails security pages

https://docs.djangoproject.com/en/3.1/topics/security/

https://guides.rubyonrails.org/security.html

Ksenia's Angular linting rules https://github.com/synopsys-sig/tslint-angular-security

W3C security WG

https://www.w3.org/2011/webappsec/

Levels of vulnerability mitigation: https://image.slidesharecdn.com/javascriptframeworksecurity-amsterdam-191008173330/95/how-do-javascript-frameworks-impact-the-security-of-applications-7-638.jpg?cb=1570556143

Episode 2 Transcript:

[00:00:02] Welcome to AppSec Builders, the podcast for practitioners building modern AppSec, hosted by Jb Aviat.


Jb: [00:00:10] Hello Ksenia, nice to meet you


Ksenia: [00:00:14] Hi, Jb, how are you doing? 


Jb: [00:00:20] I'm great, thank you. So, Ksenia, you're a senior research engineer at Synopsys.


Jb: [00:00:24] You lead a team of researchers and engineers working on static analysis. Before Synopsys, you had a consulting career where you did penetration testing, threat modeling, and code review, and you are also a seasoned speaker at various application security conferences across the world, such as the famous OWASP AppSec. So could you tell us a bit more about yourself and what you enjoy in the AppSec field?


Ksenia: [00:00:49] Sure. Well, I come from an engineering background. I was an application developer in the gaming industry for about five years. And then I came to the United States to do my master's.


Ksenia: [00:01:01] And in my last year I got an internship as a security intern with Cigital, a consulting company, and I never went back to development. It was an absolutely fascinating career, because as a consultant, as a security person, you always need to learn new things. So I did consulting for about seven years and went up through the ranks to principal consultant. And then I pivoted and started to dig more into research, security research. Around the same time Cigital was acquired by Synopsys, so now I work at Synopsys, pretty much with the same company, with the same people, under a different name, but now as part of the security research lab, as a security engineer.


Jb: [00:01:47] Super cool. You mentioned the gaming industry. Ksenia, so did you develop anything popular, famous?


Ksenia: [00:01:55] Well, that was many, many years ago and I was developing games in Flash. Adobe Flash.


Jb: [00:02:03] Oh, my God.


Ksenia: [00:02:04] So, you know, the little match-three, Tetris-type games for housewives and people at work.


Jb: [00:02:13] All right, that could be a nice introduction: "That's how I got into security, by hacking Flash in the browser." But maybe not.


Jb: [00:02:24] OK, and so you're actually in the middle of a PhD thesis, right?


Ksenia: [00:02:29] That's correct. Hopefully closer to the end. But yes, in parallel with my full-time job I'm also doing a PhD, and I'm working on, guess what, security research, and on framework security specifically.


Jb: [00:02:45] And so how do you feel academic research is helping application security move forward today?


Ksenia: [00:02:51] It's interesting. Because I have a lot of experience in the practical field, I feel that I bring a different perspective into academia. A lot of the research in academia, at least in the last 10 years, was focusing on exploits, on finding vulnerabilities, which is great, because people in academia spend a lot of time finding those new vulnerabilities, new types of attacks, especially on more complex concepts like crypto attacks, for example.


Ksenia: [00:03:25] But until now, academia wasn't focused much on fixing the problems that it finds. So with my background in security consulting, where we not only find the issues but help developers fix them, that's what I'm trying to bring into my research: how do we actually get rid of the bugs and not just find the bugs?


Jb: [00:03:47] Yes so basically more shifting left, right?


Ksenia: [00:03:51] Exactly. Yeah, exactly.


Jb: [00:03:53] So helping academia shift left should lead to great outcomes here. Now, this extensive research that you've done on frameworks, you presented it recently at AppSec Cali. In your research, you found that some frameworks made it easier than others to introduce certain categories of vulnerabilities. Would you mind telling us a bit more about that?


Ksenia: [00:04:15] Sure. So as part of my research, I was focusing on JavaScript and I started with the client-side JavaScript frameworks or template engines, and then I switched into server-side JavaScript frameworks and I looked at different vulnerabilities.


Ksenia: [00:04:30] And the hypothesis was that if the framework actually has security controls or mitigations built in, then the applications will be more secure than if it doesn't. So it's kind of a naive idea. But with the help of the categorization framework developed by John Steven, I divided the places where a mitigation can exist into different levels. We start with level zero, where there is no mitigation and the code is vulnerable; oftentimes that happens when there is no framework in use at all, so everything is written by developers from scratch. Then we go to the next level, a custom function that the developer has written; then a third-party library that the developer is using; then a framework plugin, something that works very tightly with the framework; and then the next level, where the mitigation is built into the framework. And there is actually another level that I discovered throughout my work, which is when the mitigation is built into the programming language or platform itself. And of course, the closer you get to the framework or to the architecture level, the more those vulnerabilities are fixed and the less likely they are to actually appear in applications. But we also need to remember another important thing that, again, I discovered by comparing the applications and running different security tools on them: it's not just the built-in mitigations, the defaults also matter. If something is built into the framework but not enabled by default, then developers may not even know it exists, or they may not enable it, or they may disable it in a test environment and it never gets enabled when the application transitions to production.
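To make those levels concrete, here is a minimal JavaScript sketch (an illustration added to these show notes, not code discussed in the episode), using HTML output encoding as the example control; the function names are made up:

```javascript
// Hypothetical illustration of the mitigation levels discussed above,
// using HTML output encoding (an XSS defense) as the example control.

// Level 0 - no mitigation: user input is concatenated straight into HTML.
function renderCommentLevel0(comment) {
  return `<div class="comment">${comment}</div>`; // vulnerable to XSS
}

// Level 1 - a custom function the developer wrote themselves.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
function renderCommentLevel1(comment) {
  return `<div class="comment">${escapeHtml(comment)}</div>`;
}

// Levels 2-4 push the same control further away from application code:
//   Level 2 - a well-known third-party library,
//   Level 3 - a framework plugin,
//   Level 4 - built into the framework itself (e.g. a template engine
//             that escapes interpolated values by default).
```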


Jb: [00:06:21] Yes. And you actually proved that by analyzing real applications; I think it was CSRF protections that were not enabled by default.


Ksenia: [00:06:32] Yeah, exactly. So I took several server-side JavaScript frameworks, Express, Koa, Hapi, Sails, and looked at the level at which each of these frameworks provides Cross-Site Request Forgery protection. For example, Express and Koa have plugins, so it's an extra step: developers need to go find the plugin, turn it on, and configure it correctly. Sails, on the other hand, has it built in, but it wasn't enabled by default. So when I tested about 500 applications on GitHub and compared them by framework, I could actually see that the number of applications with Cross-Site Request Forgery in Express, for example, is the same as in Sails, which wasn't what I expected. But when I dug deeper, most often the case was that in Sails that protection was not enabled; it was just set to false by default.
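As a rough sketch of that difference (based on the public csurf and Sails documentation linked in the resources above, not on code from the episode), CSRF protection in Express is a multi-step opt-in via a plugin, while in Sails it is a single built-in flag that historically defaulted to false:

```javascript
// Express: CSRF protection lives in a separate plugin (csurf) that the
// developer has to install, wire up, and expose to templates themselves.
const express = require('express');
const cookieParser = require('cookie-parser');
const csurf = require('csurf');

const app = express();
app.use(cookieParser());
app.use(csurf({ cookie: true })); // opt-in step that is easy to forget

app.get('/form', (req, res) => {
  // The token must also be rendered into every state-changing form.
  res.send(`<form method="POST" action="/transfer">
    <input type="hidden" name="_csrf" value="${req.csrfToken()}">
    <button>Send</button>
  </form>`);
});

// Sails: the equivalent protection is built into the framework; it just has
// to be switched on (config/security.js in recent Sails versions):
//   module.exports.security = { csrf: true };
```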


Jb: [00:07:34] So that's an interesting outcome, and our data at Sqreen concurs with it.

One thing we have seen amongst Sqreen customers is that applications without frameworks are seven times more likely to have vulnerabilities than applications with a framework. I'm a former pentester, and at the beginning of my career I witnessed how Ruby on Rails grew in popularity and helped popularize development best practices across the industry. It really was a game-changer at the time: Rails popularized MVC, templating engines, database migrations, object-relational mappers, convention over configuration. It wasn't perfect, but it was such a huge step forward that we really witnessed the quality of web applications change. Did you experience the same thing, some frameworks drastically improving the security of applications?


Ksenia: [00:08:33] Yeah, yeah, it's fascinating. I was researching Cross-Site Request Forgery specifically, and if we look at the OWASP Top 10 in 2003 and 2009, CSRF was in the top 10, higher up, like fourth place, then seventh place. Then it started to gradually go down, and in the most recent one it's not even there. It's not present. And the reason for that is that a lot of frameworks have CSRF protection enabled by default. Sometimes it's not even some security feature they built in on purpose; I mean, it is kind of on purpose, but it's just the way the framework is built. For example, if we look at ASP.NET, they have a ViewState that they save for every page; it's like a signature of the page. So if the content changes, for example if an attacker is trying to inject a request that doesn't have a CSRF token in it, then the request will not be accepted, just because the page was crafted by an attacker and looks slightly different. So it's basically a CSRF protection, as long as that ViewState is signed so that the attacker cannot fake it as well. And some other big frameworks, Spring Security for example, have that enabled by default for all POST and DELETE requests. Since it's enabled by default, developers don't need to think about it, and it just stops being an issue.


Jb: [00:10:09] Yes, yes, the ViewState is famous indeed. It reminds me that there was actually a vulnerability in a ViewState implementation. You said it's signed, and I remember back in the day there was a padding oracle vulnerability in, I think it was Liferay, one of the .NET frameworks. Basically you could generate or recover the signature for anything, and you got remote code execution just by managing to forge the state. That was a small one but a fun one. Padding oracle attacks are amazing in theory, and when you have one that works in practice, it's always a good achievement. Exactly. So, yes, very good example. You mentioned the levels of vulnerability mitigation by John Steven. That's a concept I didn't know and that I discovered in your AppSec Cali presentation. Really interesting, so I will share an illustration in the episode resources. I think it can help us categorize the frameworks: you have frameworks that tend to be very simple and very modular, such as Express, Flask, or Sinatra, and they have very little out-of-the-box protection for common threats, because they give a lot of freedom to the developer, and it's up to the developer to choose what they want to use and how they want to use it.


Jb: [00:11:32] They have amazing performance because they do very little out of the box. On the other hand, you have much more elaborate frameworks such as Sails, Django, or Ruby on Rails that have many more out-of-the-box pieces. And there are several ways to add security controls: the team can push their own library to add their own controls, which would be level 1 according to the classification; or it can be a very well-known library, like Cerberus in Python or Joi in Node, level 2; or a framework plugin, level 3, and so on. And your research showed that the closer to the framework you are, the ultimate being having the mitigation built into the framework itself, the better the level of security achieved. So if we assume that someone wants to pick a framework for a project: usually security isn't the main driver when deciding what piece of software you want to use; security is only one dimension amongst others when you evaluate a framework. So how would you recommend evaluating the security of a framework?
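As an example of a level-2 mitigation, a well-known third-party library rather than framework-native code, here is a small, hypothetical input-validation sketch using Joi in Node (the schema and handler are made up for illustration):

```javascript
// Hypothetical example: input validation handled by Joi, a widely used
// third-party library (level 2 in the classification above).
const Joi = require('joi');

const transferSchema = Joi.object({
  accountId: Joi.string().alphanum().length(12).required(),
  amount: Joi.number().positive().max(10000).required(),
  note: Joi.string().max(140).optional(),
});

function handleTransfer(payload) {
  const { error, value } = transferSchema.validate(payload);
  if (error) {
    // Reject anything that does not match the schema instead of
    // hand-rolling validation logic (a level-1 custom function).
    throw new Error(`Invalid transfer request: ${error.message}`);
  }
  return value;
}
```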


Ksenia: [00:12:42] Yes, I wish security was important for developers right from the start. But of course, I mean, when we choose a framework...


Jb: [00:12:50] I wish security was more important for framework developers, Ksenia. But that's unfair, it's more and more true.


Ksenia: [00:12:58] Right. But yes, when we choose a framework, we look at performance, at functionality: does it actually solve our problem, is it an MVC framework or a REST framework, what is the problem we are trying to solve? Then, is it popular, is there documentation? And then maybe somebody will say, oh, what about the security of the framework? And I actually have a story about that.


Ksenia: [00:13:19] So when I was a consultant, we had a client, a big financial industry organization, and oftentimes such companies are not quick in adopting new technologies. They like things that are proven and tested. So they would use .NET and Java, with JSP for the front end. I mean, that was quite a few years ago. And the front-end developers wanted to switch to using Angular, and management was like, well, what is the security impact if we're switching all our front-end development to Angular? So they hired us to answer that question. And being the security-minded person, I dug into Angular and found a bunch of ways you could exploit it, different security vulnerabilities. And frankly, Angular is a very secure framework, so there are not many ways compared to other things. But of course I did my best and came up with this presentation showing all the ways Angular can be hacked.


Ksenia: [00:14:21] And the management were all very frightened. They were like, oh my God, this is so insecure, we should have new security protocols, new manual code review steps, or anything else if we want to introduce that. And actually, no. It's still a front-end framework. It still has the same issues as your JSP or another templating engine: it's still going to be vulnerable to cross-site scripting and other things like iframe bypasses, etc. So from the protocols, from the policy standpoint, it's no different. But actually Angular is a pretty secure framework, because if you look at the documentation, first, they have a separate security page in the framework documentation, and not many front-end frameworks have a security section, and second, they made an effort to mitigate as many vulnerabilities as possible.


Ksenia: [00:15:19] So, for example, Angular has contextually aware escaping built into the framework, and it has a way to enable CSRF protection if it's also enabled on the server side, so the server and the client talk to each other and exchange the tokens, et cetera. So, yes, it's great to look at the security of the framework. And as a developer, maybe you cannot go and actually test the framework and evaluate what the security issues with it are, but you can definitely look at the documentation and see if there is a security section in it. You can look at the release notes and see what kind of bugs were fixed in the last few versions of the framework: were there fixes that have to do with security? If it's an open-source framework, you can go to its GitHub and look at the issues. What kind of issues were reported? Are there a bunch of security issues that were reported and never addressed and are still open, or are they fixed quickly, within a few days? The other thing, of course, is the popularity of the framework. Why are Angular and React, for example, so popular? Because they are backed by Google and Facebook, big companies that will spend time on security and will support that. And as you said, Django, for example: if you look at the documentation of Django, they say they treat security as a first-class citizen. That's their quote. So, yeah, those would be my top choices: basically look at the documentation and the support of the framework and just see how important security is for that framework.


Jb: [00:16:54] Yes, I definitely agree.


Jb: [00:16:55] And an interesting point is, for instance, with React, the biggest risk of writing Cross-Site Scripting comes from using something called dangerouslySetInnerHTML. So as a developer, I think if you call something named dangerously, then you should really ask yourself what this is about.


Ksenia: [00:17:17] It's interesting, because if you compare Angular and React, they both have built-in protection from Cross-Site Scripting, the contextually aware escaping. In the first versions of Angular, in AngularJS, they had the same thing and it was enabled by default, but if you wanted to turn it off, they called that method trustAsHtml. So as a developer you say, oh, I trust this HTML. Of course, you know, maybe it's coming from the user, but I trust it. Then React, when it came out, called it dangerouslySetInnerHTML. And then in the next version of Angular, they changed the name of that method and called it bypassSecurityTrustHtml.
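To make the comparison concrete, here is a small React sketch (a hypothetical illustration of the documented React API, not code from the episode): values interpolated through JSX are escaped automatically, while dangerouslySetInnerHTML hands the framework raw HTML:

```javascript
// Hypothetical React components illustrating the escape hatch discussed above.
import React from 'react';

// Safe: React escapes interpolated values, so any markup in `comment`
// is rendered as inert text.
function Comment({ comment }) {
  return <div className="comment">{comment}</div>;
}

// Risky: dangerouslySetInnerHTML bypasses that escaping entirely.
// If `html` ever contains attacker-controlled markup, this is XSS.
function RawComment({ html }) {
  return <div className="comment" dangerouslySetInnerHTML={{ __html: html }} />;
}

export { Comment, RawComment };
```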


Jb: [00:18:00] Who will come up with the most frightening name? I was not aware, good story.


Jb: [00:18:10] All right. So let's assume a team wants to pick a framework. You know it has security gaps, you know it's not perfect, but for many reasons they want to go with it. How would you recommend the team live with an imperfect framework?


Ksenia: [00:18:25] Well, think about what the framework cannot do. Maybe there are plugins for the framework that can cover those holes, the mitigation controls that don't exist in the framework. As you said, for example, the team uses Express or Flask; they don't have all the bells and whistles built in. But Flask, for example, has a plugin called Flask-Login. So to take care of all your authentication and authorization, developers shouldn't write that code from scratch but use a well-known plugin. Same for Express: there is Passport.js, for example, which is very popular for handling all the user management part. And then the next part would be procedures, proper procedures and protocols: making sure that during code review developers check that these security controls are enabled and that secure defaults are set for these plugins. And that can actually be fairly easily achieved with a linting tool. For everything we've said about configuration, there are SAST tools built for that, open-source tools and closed-source tools, that support specific frameworks and check the specific configurations of those frameworks. But if the framework is not very popular, or maybe you're using a plugin that's not supported by a commercial tool because it's very new, you can always write a linting rule to check: are all the secure defaults turned on? And then, of course, the next step in the secure development lifecycle would be testing: test for those things, have that written into your unit tests and security tests, and run them in your CI/CD pipeline, ideally daily, to catch issues early.
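For instance, a minimal Passport.js setup in Express might look like the sketch below, a hypothetical example based on Passport's documented local-strategy API; findUserAndVerify and findUserById are made-up placeholders for the application's own user lookup:

```javascript
// Hypothetical sketch: delegating authentication to Passport.js instead of
// hand-rolling session and password handling.
const express = require('express');
const session = require('express-session');
const passport = require('passport');
const LocalStrategy = require('passport-local').Strategy;

// findUserAndVerify() is a placeholder for your own credential check
// (e.g. a database lookup plus a password-hash comparison).
passport.use(new LocalStrategy((username, password, done) => {
  findUserAndVerify(username, password)
    .then((user) => done(null, user || false))
    .catch((err) => done(err));
}));

passport.serializeUser((user, done) => done(null, user.id));
passport.deserializeUser((id, done) => findUserById(id).then((u) => done(null, u)));

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: process.env.SESSION_SECRET, resave: false, saveUninitialized: false }));
app.use(passport.initialize());
app.use(passport.session());

app.post('/login', passport.authenticate('local', {
  successRedirect: '/dashboard',
  failureRedirect: '/login',
}));
```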


Jb: [00:20:16] I like the linting example. That's one thing that is pretty convenient to enforce as an AppSec team. For instance, if your developers are using React and you are afraid of them using dangerouslySetInnerHTML, you can just add that as a linting rule. You can be pretty sure that this rule will almost never trigger, but when it does, it's something that is really important, so it won't be a false positive, and it will definitely make the developer care about it.
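As a sketch of what such a rule could look like, assuming the project already uses ESLint with the eslint-plugin-react package (which ships a no-danger rule):

```javascript
// .eslintrc.js - hypothetical configuration flagging dangerouslySetInnerHTML.
module.exports = {
  plugins: ['react'],
  rules: {
    // eslint-plugin-react's rule that reports any use of dangerouslySetInnerHTML.
    'react/no-danger': 'error',
  },
};
```

A hit then forces the developer to either remove the call or add an explicit, reviewable inline disable with a justification.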


Ksenia: [00:20:45] Yeah, and actually, if you look at ESLint, the most famous JavaScript linting tool, there are plugins with those rules. And I think I wrote the ones for React.


Jb: [00:20:55] Oh, I didn't know that, I think we are probably using it.


Ksenia: [00:20:58] Yeah. A colleague of mine wrote one for Angular. I mean, it was a few years ago, but yeah, we teamed up and split the frameworks to write linting rules for Angular and React.


Jb: [00:21:08] Oh that's cool. I'll check it out.


Jb: [00:21:12] And some frameworks give you sort of an unfair advantage. For instance, I know in Rails there is the Brakeman static analyzer, which is famous and of very good quality. So that's also something you get for free when you are using a given framework: make sure you know the best practices and tooling that are well known around that framework. Sometimes there are no established best practices, and in that case that's probably a flag; maybe the framework is too immature. If your team really wants to go with that framework anyway, then I think it's time to hire someone like Ksenia and ask them to do some security research on it. And obviously, using security-friendly frameworks doesn't mean that everything is perfect, because frameworks can fail as well and have vulnerabilities, just like any piece of software: usually the more lines of code you have, the more likely you are to have bugs, and some of those bugs will eventually be security issues. There are a couple of examples where frameworks have failed. For instance, in Rails up to 4.1, the cookie store by default was storing serialized Ruby objects, Marshalled inside the cookie, so controlled by the user. Obviously it was encrypted, but there were vulnerabilities that helped you either retrieve the key or break the signature, and that basically allowed you to inject Ruby code inside the Rails application and get remote code execution. I think Ruby on Rails is also the framework that popularized the mass assignment vulnerability, which is now well known and found, I think, in the OWASP API Top 10, for instance. It's something you can find in many different places, but it inherently came from the flexibility that Ruby on Rails was offering. And I think another famous one is Struts and the OGNL injections that also allowed remote code execution: that's the famous Equifax breach. Are you aware of any famous framework flaws, Ksenia?
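Mass assignment is not specific to Rails, the framework that made the issue famous; as a hypothetical illustration in Express (the currentUser helper is made up), the same mistake looks like this:

```javascript
// Hypothetical Express handlers illustrating mass assignment.
const express = require('express');
const app = express();
app.use(express.json());

// Vulnerable: the whole request body is copied onto the user record, so a
// request like {"name": "x", "isAdmin": true} silently escalates privileges.
app.put('/profile', (req, res) => {
  Object.assign(currentUser(req), req.body);
  res.sendStatus(204);
});

// Safer: explicitly whitelist the fields a user may set.
app.put('/profile-safe', (req, res) => {
  const { name, email } = req.body;
  Object.assign(currentUser(req), { name, email });
  res.sendStatus(204);
});

// currentUser() is a made-up placeholder for however the app loads the
// authenticated user's record.
```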


Ksenia: [00:23:20] Yeah, yeah. The other interesting aspect is that today applications sometimes don't use just one framework, and when you start combining frameworks, that's where the fun begins. My current research is focusing on Electron, which is a framework for desktop JavaScript applications.


Ksenia: [00:23:41] What Electron has is Node.js behind the scenes, kind of the server part of the application, and then a stripped-down Chrome browser on top, and you can run whatever client-side code you want to build the UI of your desktop application. So oftentimes you will find applications that have Electron's Node on the bottom, on the server side, and, for example, React on top. So now you have two frameworks, and the question is how they interact with each other. And speaking of dangerouslySetInnerHTML, the React method: there was a well-known vulnerability in Signal, the messaging application, which uses React as its front end. In one of the places where you send a message, it used dangerouslySetInnerHTML, which usually leads to cross-site scripting. But because this is an Electron application, a desktop application, the injected JavaScript may not just read cookies or other client-side data; it can actually reach the Node side. And by tying these different vulnerabilities together, because the rendering window of the desktop part was not secured by the Node.js part of Electron either (Electron has a feature called Node integration, which was not disabled in that case), client-side JavaScript could call Node functions, things like, you know, execute any code. So it was a cross-site scripting that led to an RCE, a remote code execution.
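The webPreferences options mentioned here are real Electron settings; the snippet below is a hypothetical sketch (not the Signal fix) of a main process creating its window with the hardened configuration:

```javascript
// Hypothetical Electron main-process snippet: create the renderer window
// with the protections discussed above turned on.
const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      nodeIntegration: false,  // renderer JS cannot require() Node modules
      contextIsolation: true,  // renderer and preload scripts get separate JS contexts
      sandbox: true,           // run the renderer in a Chromium sandbox
    },
  });
  win.loadFile('index.html');
});
```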


Ksenia: [00:25:25] And basically the way it looked: I send somebody a message with something that looks like a link, the link contains some JavaScript, it gets embedded into the page of whoever receives the message, and then it executes arbitrary code on their machine. So pretty scary. And that's very typical, unfortunately, for Electron applications. Just this morning I was reading another blog about a similar vulnerability in the Discord app, which is also built on Electron, and it has a similar pattern: a Cross-Site Scripting that leads to remote code execution. But in that case they had Node integration disabled, so whatever runs in the client-side window, even if there is a cross-site scripting, cannot directly call any internal Node.js methods. However, they forgot to secure that window with another feature of Electron called context isolation. So what the researchers were able to do is override a JavaScript primitive in the client-side window, a function like Array.prototype.join. For every array in JavaScript there is a join function, and they basically overrode what that function does, so there is also a prototype pollution vulnerability tied into this. And then, when some part of the more privileged code runs that join method, it runs their custom code: instead of joining the pieces of the array, it's actually running exec on a binary they choose.
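To illustrate the underlying mechanism in isolation (a deliberately simplified, hypothetical snippet, not the actual Discord exploit): when attacker-controlled code and more privileged code share one JavaScript context, overriding a built-in lets the attacker hijack logic the privileged code runs later.

```javascript
// Hypothetical illustration of why context isolation matters: if renderer
// (attacker-controlled) code and privileged preload code share the same
// JavaScript realm, overriding a built-in affects both.

// Attacker-controlled code, e.g. injected via XSS in the renderer:
const realJoin = Array.prototype.join;
Array.prototype.join = function (...args) {
  // Anything the privileged side does with .join() now runs this first.
  console.log('hijacked join called with array:', this);
  return realJoin.apply(this, args);
};

// Privileged code running later in the same context:
const parts = ['ls', '-la'];
const command = parts.join(' '); // unknowingly goes through the hijacked function
console.log(command);
```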


Ksenia: [00:27:07] So it's fascinating. It's absolutely fascinating. I'm glad people have time to do that research and like to tie different vulnerabilities together.


Jb: [00:27:17] It's fun, because browsers actually put a lot of effort into isolation, sandboxing, leveling up security, defining new security protocols; the W3C security working group is doing an amazing job. And for desktop applications using Electron and the like, it feels like there is still a gap. They're catching up, but there is still much more work to be done to get parity from a security standpoint. Exactly. I think you are currently researching this type of application, right? And you mentioned that you were presenting some research in December, I think.


Ksenia: [00:27:54] Yes. This is the third part of my thesis research, and my goal is to be done soon. But I will be presenting my intermediate results at a conference called Absolute AppSec in December. It's going to be a remote conference, like everything else today, on December 16th and 17th, and I think I'm scheduled to speak on the first day, December 16th. I will be talking about Electron security, specifically the protections that the framework provides, how developers use and do not use them, what happens when they don't, and how we can help developers actually use them by default.


Jb: [00:28:36] We'll share the links to the conference in the podcast notes, of course.


Jb: [00:28:41] So we touched on the quality of the ecosystem of libraries around the frameworks. Obviously that's a dimension you want to take into account. You mentioned Passport or Flask-Login for authentication; I guess it's the same for cryptography and everything else. So it's really important to choose libraries that are strong in the ecosystem you are choosing. But often it's not trivial, because you have several libraries and no single default choice, so you have to research which one makes the most sense from a usage point of view. Have you seen some caveats around that, Ksenia?


Ksenia: [00:29:22] Yeah. Of course, we talk about using the most popular plugins, the most supported plugins, but we need to remember that today software is built on top of open-source software, whether the product itself is open source or closed source. For example, our subsidiary Black Duck showed in their 2017 research that 96 percent of commercial codebases use at least one open-source component, and oftentimes more than one. And more recent data from 2020 showed that 70 to 90 percent of commercial code actually consists of open-source libraries. And I'm not saying open-source code is bad; I'm just saying that we are using a lot of code that our developers did not write, that we did not review manually, and we have no idea what's in that code, even if you are using an open-source security control library.


Ksenia: [00:30:24] For example, Flask-Login for authentication, or OWASP CSRFGuard, or something like that: that library will likely use other open-source libraries, which use other open-source libraries. Oftentimes, if you're a JavaScript user and you install a plugin, you run npm install plugin foo, and then it says, oh, just installed 267 packages. And what's in these 267 packages? I was only going to install one. You can make sure that that one is actively developed, that it's fixed quickly, that it has documentation, but that library uses another library that uses another library that is not maintained well and that has a vulnerability.


Ksenia: [00:31:10] There was another famous one on npm, probably about a year ago, where the author of a library gave publishing rights to somebody else, another contributor that they didn't really know; it's all anonymous, of course. Somebody wanted to contribute, made a few pull requests, the code looked good, and they were given full rights to maintain that library, because the person who wrote the library originally was not interested in maintaining it anymore.


Ksenia: [00:31:40] And that's normal, life happens, we move on from our projects. But the new maintainer turned out to be malicious and submitted a change to the library that, once it became part of applications, would scan for Bitcoin wallets on your computer and steal your Bitcoin.


Jb: [00:32:04] So, yes, that's the tradeoff of using external libraries.


Ksenia: [00:32:08] But we cannot really say that, oh, we should stop using external libraries. Like, that's not realistic. We are all going to use open-source code. Otherwise, we have to write everything from scratch. And I'm not sure that's the best solution.


Jb: [00:32:20] Yes. So I think in that case, the best thing we can do is to be super reactive: we need to update those things quickly. And that's actually the tradeoff of using famous frameworks, famous and widely used pieces of code: when a vulnerability is found, it might get exploited very, very quickly, because the market share of the framework is so high that someone with an exploit has a lot of potential targets. So the key thing you need to do when you are relying on great frameworks is to monitor the security of those frameworks, either by using tooling or by watching the mailing lists; you can also do both, which will help you rank the severity of everything. But as any company grows, you tend not to have one framework; you tend to have tens or hundreds of different frameworks that you need to monitor. So one thing that will help is having an inventory of all the applications and all the frameworks that you are using. An interesting fact is that for the Equifax breach, which was so damaging for them, only about a month passed between the vulnerability being disclosed, with the update available in the framework, and the breach itself. So the timeline is not long: you need to patch your applications quickly, which is why an inventory is really, really needed in such cases.


Ksenia: [00:33:52] Yeah, absolutely. It's one thing to patch the frameworks that you know you are using, your big main frameworks, your Spring, Struts, your Angular. But as you said, you may also be using a bunch of small libraries, and if you don't track them, if you don't have the data and don't do the component analysis, then patches come out for those small plugins and libraries and you need to update them as well. And my own message to the framework developers and maintainers: when you develop a new version, please, please make it backward compatible. Please make the update process as easy as possible. I understand it's not always possible, but from a security standpoint, if an upgrade introduces a breaking change and there is no good way forward, unfortunately it is very likely that a lot of developers will not upgrade. That's why in our assessments we find, all the time, older versions of frameworks and libraries that are still there and have known vulnerabilities.


Jb: [00:34:54] Yes. And that's why it's very important not to fall behind too many versions; otherwise the update gets more and more painful, and when you need to do it urgently because there is a security vulnerability, that makes dealing with the vulnerability extremely, extremely hard.


Ksenia: [00:35:09] Yeah.


Jb: [00:35:10] So frameworks are foundational and really critical in securing any application. And to me, that's the concept of a paved road: developers should have a secure way to do certain things, and in order for those secure ways to be used, they need to be easier than doing things in an unsafe way. That's the concept of the paved road that was highly publicized by the Netflix AppSec team, for instance. Frameworks are a really great place to start, since they fulfill plenty of the tasks that applications perform. And it's really a chance for a security team to review and enforce or popularize best practices, like providing examples, providing helpers. Most frameworks have nice and useful command-line helpers that can really be leveraged to perform certain tasks. One typical example is creating S3 buckets: a major source of data leaks in most companies is publicly exposed S3 buckets. If you provide your developers with a secure, standard way to create S3 buckets, for instance a rake task that creates the bucket and asks whether it should be public or private, this might a) reduce the likelihood that something is created with the wrong permissions and b) give the security team the opportunity to review it, maybe periodically. So that's really a great place to enforce this concept of a paved road.
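As a sketch of what such a paved-road helper could look like, here is a hypothetical Node script using the AWS SDK v3 S3 client; the defaults and the review workflow are illustrative, not something discussed in the episode:

```javascript
// Hypothetical "paved road" helper: create an S3 bucket with safe defaults,
// and make anything public an explicit, reviewable decision.
const {
  S3Client,
  CreateBucketCommand,
  PutPublicAccessBlockCommand,
} = require('@aws-sdk/client-s3');

async function createBucket(name, { public: isPublic = false } = {}) {
  const s3 = new S3Client({});
  await s3.send(new CreateBucketCommand({ Bucket: name }));

  if (!isPublic) {
    // Default path: block every form of public access.
    await s3.send(new PutPublicAccessBlockCommand({
      Bucket: name,
      PublicAccessBlockConfiguration: {
        BlockPublicAcls: true,
        IgnorePublicAcls: true,
        BlockPublicPolicy: true,
        RestrictPublicBuckets: true,
      },
    }));
  } else {
    // Public buckets are the exception: log it so the security team can review.
    console.warn(`Bucket ${name} created as PUBLIC - flagged for security review`);
  }
}

// Example usage: node create-bucket.js my-team-assets
createBucket(process.argv[2]).catch((err) => {
  console.error(err);
  process.exit(1);
});
```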


Ksenia: [00:36:41] Yeah, having those kinds of harnesses when you're starting a new application, using a pre-built skeleton of the application that already has all the security features enabled and set the way you want them to be set for your organization specifically, that concept is really, really great.


Jb: [00:37:02] Yes, definitely. So maybe one piece of advice for any AppSec team, Ksenia, since I know you've done this several times in the past: how would you recommend a new AppSec engineer proceed when assessing a brand new framework's security?


Ksenia: [00:37:17] The best way is to write an application in that framework, to write your own code. First you understand the functionality, and then you try to break that code, try to hack your own application. That's always the best way to learn. And yes, you can look at the code examples and sample applications that already exist for that framework, but again, try to break them. Take your OWASP Top 10 list, or whatever list of vulnerabilities you want to find, and try to see if you can find every single one of them: build the application to be vulnerable, and then build the application to be secure. How do you fix each of them with that framework? That's what we've done so many times when I developed courses for engineers, secure development courses. You write a simple application, then you try to break it and introduce vulnerabilities, but then you also try to fix them. And as a result, you basically get a lab for the students: here's the app, find all the vulnerable points, and then fix them.


Jb: [00:38:18] Yes, and I think it's also super important to build an application that is close to what the developers are trying to achieve. If it's a web app, an API, or a client application, just mimic that as much as possible, and use the tools that the developers will be using. If a certain cache or a certain database is popular in your company, then try to use them and see how much of the security is built into the framework. Obviously, the next step, if you are a security researcher or an AppSec person doing this work, is to contribute your findings back to the community. One of the ways could be to contribute to the security page of the framework, maybe with a pull request, or to report vulnerabilities if you find them in the framework. Just don't keep those findings to yourself inside your company; contribute them back to the framework. That will also be the best way for the next projects in your own company to start with much more secure defaults.


Ksenia: [00:39:17] Yeah, my other favorite way is to contribute to Stack Overflow, because as you're building something with your framework, you Google, oh, how do I read files with this new technology, how do I create a user? And then you stumble upon examples, and these examples are vulnerable; they are not using, you know, the best security practices.


Ksenia: [00:39:39] And as a security person, instead of sitting there and saying, oh, developers are stupid, they have all these bad examples and they copy-paste that code and now you have this badly coded application, well, answer that question and provide your code example with good settings, with good security, or at least comment and mention that, you know, there is a better way to do it.


Jb: [00:40:00] Yes, Stack Overflow-powered security! Or at least enrich Stack Overflow to have more secure defaults as well. Yeah.


Jb: [00:40:10] Yes. So choosing the right framework, the right defaults, and the right best practices is really critical in any application security journey. The framework is really the foundation of all the code you write, and having the right examples, the right helpers, and the right CI around it can be a real game-changer for handling most of the security low-hanging fruit.


Jb: [00:40:32] So, Ksenia, thank you so much for sharing your knowledge with us today. I really appreciated having you on the podcast.


Ksenia: [00:40:41] Thank you very much. Thank you for having me. And congratulations on your brand new podcast and all the best luck in bringing that knowledge, because it is so important to have more information for the builders, not just about how we hack things, but how we build things that are secure.


Jb: [00:40:57] Thank you so much. I wish you the best of luck in finishing your PhD. I'm sure it will be amazing, and I'm looking forward to your graduation.


Ksenia: [00:41:06] Thank you. Thank you very much. I need all the luck, all the good vibes for that. Thank you very much.


[00:41:13] Thanks for listening to this episode of AppSec Builders. You can find all the resources discussed during this show on www.appsecbuilders.com. Be sure to subscribe to our podcast to get updates on our upcoming episodes.