DevSecOps is a culture shift in the software industry that aims to bake security into the rapid-release cycles typical of modern application development and deployment, an approach known as DevOps. Embracing this shift-left mentality requires organizations to bridge the gap that usually exists between development and security teams, to the point where many security processes are automated and handled by the development team itself.
How does DevSecOps differ from traditional software development?
Traditionally, major software developers released new versions of their applications every few months or even years. This provided enough time for the code to go through quality assurance and security testing, processes performed by separate specialized teams, whether internal or externally contracted.
However, the past ten years have seen the rise of public clouds, containers and the microservices model, in which monolithic applications are broken down into smaller parts that run independently. This breakdown has had a direct impact on the way software is developed, leading to rolling releases and agile development practices where new features and code are continuously pushed into production at a rapid pace. Many of these processes have been automated with the use of new technologies and tools, allowing companies to innovate faster and stay ahead of the competition.
The advance of cloud, containers and microservices also led to the emergence of what the industry calls the DevOps culture, where developers can now provision and scale the infrastructure they need without waiting for a separate infrastructure team to do it for them. All major cloud providers now offer APIs and configuration tools that allow treating infrastructure configuration as code using deployment templates.
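To make the infrastructure-as-code idea concrete, here is a minimal, provider-agnostic sketch: a deployment template is just structured data that can be versioned, reviewed and diffed like any other source file, then expanded into provisioning requests. All names here (`ServerSpec`, `render_template`) are hypothetical and not tied to any real cloud API.

```python
# Hypothetical illustration of infrastructure as code: the "template"
# is plain data, and rendering it is a dry run that shows what would
# be provisioned. No real cloud provider API is used.
from dataclasses import dataclass


@dataclass(frozen=True)
class ServerSpec:
    """One desired server, declared rather than manually created."""
    name: str
    cpu_cores: int
    memory_gb: int


def render_template(specs):
    """Expand a list of specs into provisioning requests (dry run)."""
    return [
        {"action": "create_instance", "name": s.name,
         "cpu": s.cpu_cores, "memory_gb": s.memory_gb}
        for s in specs
    ]


# The template lives in version control alongside application code.
template = [ServerSpec("web-1", 2, 4), ServerSpec("web-2", 2, 4)]
requests = render_template(template)
```

Because the template is ordinary code, changes to infrastructure go through the same review and history mechanisms as changes to the application itself.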
While the DevOps culture brought a lot of innovation to software development, security was often not able to keep up with the new speed at which code was being produced and released. DevSecOps is the attempt to correct that: to fully integrate security testing into the continuous integration (CI) and continuous delivery (CD) pipelines, but also to build up the knowledge and skills on the development team so that interpreting test results and fixing the issues can be done internally.
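In practice, integrating security testing into a CI/CD pipeline often means adding a gate that parses a scanner's machine-readable output and fails the build when findings cross a severity threshold. The following is a hedged sketch of such a gate; the JSON field names and severity levels are assumptions, not the format of any particular scanner.

```python
# Hypothetical CI gate: read security-scanner findings (JSON) and
# decide whether the build should fail. Field names ("severity",
# "id") are illustrative, not tied to a real tool's output format.
import json

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}


def should_fail(findings_json, threshold="high"):
    """Return True if any finding meets or exceeds the threshold."""
    findings = json.loads(findings_json)
    limit = SEVERITY_ORDER[threshold]
    return any(SEVERITY_ORDER[f["severity"]] >= limit for f in findings)


# Example scanner output as it might arrive in a pipeline step.
report = json.dumps([
    {"id": "CVE-2021-0001", "severity": "medium"},
    {"id": "CVE-2021-0002", "severity": "critical"},
])
```

In a real pipeline this check would run as one stage among many, and the threshold would be set by policy rather than hard-coded.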
Three key things make a real DevSecOps environment:
- Security testing is done by the development team.
- Issues found during that testing are managed by the development team.
- Fixing those issues stays within the development team.
“The last one is going to take some time, but I think that’s when application security truly becomes DevSecOps and there’s no need for a separate team,” Chris Wysopal, co-founder and chief technology officer of application security testing firm Veracode, tells CSO.
Achieving true security/development integration
According to Wysopal, the last step is hard because developers must build up the skill set required to fix security-related bugs without outside guidance, and that takes time. Many teams get there by embedding a so-called security champion within their development teams. This is someone who has expertise in application security and has taken more advanced training in the field than most of the team, even though training the entire team on secure programming practices should also be part of the process. This person can review security fixes to make sure they are correct.
That doesn’t mean the security champion can’t go outside the team for an expert opinion, for example to the company’s application security testing provider, which might offer consulting services to customers. This would be the exception, though, not the norm. It is also different from having separate development and security teams and embedding one or more members of the security team into development teams.
According to Brian Fox, CTO of DevOps automation and open-source governance firm Sonatype, the integration between development and security needs to happen at the management level, too. “When the mission of security is not fully aligned with being completely integrated into development, top to bottom, I don’t think you end up with the right thing,” Fox tells CSO. “You end up with these management-level clashes sometimes where the goals are different even though the people are working in the same team. It’s similar to parallel play with little kids: You have two toddlers who are playing next to each other, and they’re not fighting, but it doesn’t mean they’re really playing together. I think it’s an element of that happening in a lot of organizations.”
This has happened before with QA, where there used to be a QA manager and an engineering manager and they were working together, but there was always a bit of tribalism going on, Fox says. “As soon as that went away and QA became part of the things the people on the development team were doing, you stopped seeing that us versus them mentality, and we’re not quite there yet with security. I think that’s where a lot of companies struggle.”
DevSecOps testing and tools
Silicon Valley tech companies led the way in DevSecOps adoption early on, but the security testing tools available at the time were not developer-friendly. Developers want command-line tools that can be automated, that allow them to easily tweak configurations and whose output can easily be imported into bug trackers. Traditional security scanners, by contrast, are designed with security teams and CISOs in mind, whose goals are governance, security policy compliance and risk management.
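The kind of automation developers expect can be as simple as glue code that turns a scanner's machine-readable findings into bug-tracker tickets. The sketch below assumes a generic JSON findings format and a generic ticket shape; the field names (`rule`, `severity`, `file`) and the project key are illustrative, not any real tool's schema.

```python
# Hypothetical glue code: convert security-scanner findings (JSON)
# into bug-tracker ticket payloads. All field names are assumptions
# for illustration, not a real scanner's or tracker's schema.
import json


def findings_to_tickets(findings_json, project="APPSEC"):
    """Build one ticket dict per finding, ready to post to a tracker."""
    tickets = []
    for f in json.loads(findings_json):
        tickets.append({
            "project": project,
            "title": f"[{f['severity'].upper()}] {f['rule']} in {f['file']}",
            "body": f.get("description", ""),
        })
    return tickets


# Example finding as a scanner might emit it on the command line.
sample = json.dumps([{
    "rule": "sql-injection", "severity": "high",
    "file": "app/db.py", "description": "Unsanitized query input",
}])
tickets = findings_to_tickets(sample)
```

Because the output is structured data, the same findings can feed a bug tracker for developers and an aggregate report for the security team.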
Slowly, new tools created by developers for developers started to spring up, integrated into development environments and CI/CD workflows. Some were open source, others had start-up business models built around them, but while they solved the needs of developers, they no longer addressed the needs of the CISO.
If many different open-source tools are in use, the development team might feel they have the coverage they need, but from a governance perspective it’s difficult for the security team to map all these fragmented tools to the company’s policies, Wysopal says.
Over the past couple of years, traditional application security vendors have changed their products to address both use cases: to provide the analytics and reports needed by CISOs while also offering the flexibility and ease of use expected by developers. Some providers of cloud-based services aimed at developers, such as GitHub, have started adding security testing directly to their services. When it’s not available as a native feature, it’s usually available in the service’s marketplace as an integration from a third-party vendor.
“Over my whole career I’ve observed a pattern that repeats itself,” Fox says. “There’s a pendulum that seems to swing back and forth between people wanting one vendor and an all-encompassing tool suite and people who are assembling best-of-breed toolchains. I would say in the last two years we’ve seen things swing pretty dramatically towards the all-encompassing single suite.”
Fox warns that this consolidation will reverse at some point when the next disruptive technology comes along, and organizations need to be ready for that. The problem with suites is that they can excel at one or more things the organization needs but also include other features that get used because they come free with the package, not because they’re the best available solution for those problems. In time, this can lead to splinter groups of developers inside the organization who start testing and using other tools that address their needs better than what the company-approved suite provides.
From a governance perspective, having unmanaged teams is bad, but companies need to be aware that one or two years from now it will inevitably happen; despite attempts to restrict tool usage, there will always be some developers who do their own thing, Fox says. “The early adopters of cloud sometimes were individual teams within much larger organizations who were rebelling against how long it took to get machines.”
“If you embrace and understand that that’s going to happen and think about it, then you can be a little bit more flexible to recognize that, hey, this new team might actually be on the edge of some really disruptive innovation that might be the thing that we want to replace this [suite functionality] with,” Fox says.
According to Wysopal, more companies are integrating automated security scans into their CI/CD pipelines, but the results might not be immediately apparent because of what he calls “security debt”: the backlog of vulnerabilities that make it into production because developers have chosen not to fix them.
This can happen for a variety of reasons: the team can’t fix them immediately, never plans to fix them because other mitigations are in place, or deprioritizes them because of their lower severity. In its 2019 State of Software Security report, which is based on data collected from scans of 85,000 applications over the course of a year, Veracode reveals that the average fix time for vulnerabilities found in applications is 171 days, compared to 59 days a decade ago when the first report came out. However, this figure is skewed by the accrued security debt; the median time to fix has remained about the same.
When correlating the scan results with the frequency of scans for a given application — increased frequency suggests the integration of automated scanning in CI/CD workflows — the data shows that applications scanned daily have a median time to fix of 19 days, compared to 68 days for applications scanned monthly. This suggests that scanning more frequently makes it more likely that vulnerabilities will be patched quickly.
“As with financial debt, escaping out from under security debt necessarily requires changing habits to pay down balances,” the company concluded in the report. “The integration of software development and IT operations (DevOps) and integration of security into those processes (often called DevSecOps) over the last several years has certainly changed habits.”
Another benefit of a true culture change toward DevSecOps should be a decrease in the number of serious vulnerabilities in code. Veracode’s data shows that the percentage of applications with no vulnerabilities at all has dropped compared to 10 years ago, suggesting the overall situation has worsened, but the percentage of applications with no high-severity flaws has increased from 66% to 80%.
“I see so many organizations still struggling with this model,” Fox says. “They’re moving toward this continuous development environment, and they’ve got the infrastructure and CI and they’re using containers. Then they have an application security team who’s coming in later running scans, producing reports — sometimes literally physical paper printed reports — and bringing them to development, instead of leveraging tools that would empower the development to avoid those mistakes upfront. The bulk of organizations that I see still fall into this us versus them, dev versus security mentality.”
That said, even with DevSecOps, some tasks will still need to be performed by security professionals and manual testing still has its role to play. For example, it’s hard to find logic flaws or design flaws using completely automated scans.
“What we’re starting to see is that manual testing is not a once-a-year kind of thing, or twice-a-year,” Wysopal says. “It’s being more iterative. It’s being done more as part of that DevOps process where maybe there’s a two-week sprint, where they’re doing a new feature that has security impact as a small amount of manual testing that is happening just for that feature. That can sometimes be done by the security champion if they understand enough about manual testing and that would meet the goal of the development team doing it themselves.”