A year or so ago I was on an application security program assessment project at one of those very large enterprises. We were working with the security team, and they had all the scanners, from SAST/SCA to DAST to vulnerability assessment, but their process was really struggling. Bugs took a long time to get fixed, changes were slow to get approved and deployed, and remediating in-production vulnerabilities was slow and inefficient.

At one point I asked how vulnerabilities (anything discovered after deployment) were being communicated back to the developers/admins.

“Oh, that data is classified as security sensitive so they aren’t allowed access.”

Uhh… okay. So you are not letting the people responsible for creating and fixing the problem know about the problem? How’s that going for you?

This came up in a conversation today about providing cloud deployment administrators access to the CSPM/CNAPP. In my book this is often an even worse gap, since a large percentage of the organizations I work with do not give the security team change access to cloud deployments, yet issues there are often immediately exploitable over the Internet (or you have a public data exposure… just read the Universal Cloud Threat Model, okay?).

Here are my recommendations:

  • Give preference to security tools that have an RBAC model that allows you to provide devs/admins access to only their stuff. If you can’t get that granularity, this approach doesn’t work well.
  • Communicate discovered security issues in a manner compatible with the team’s tools and workflows. This is often ChatOps (Slack/Teams) or ticketing/tracking systems (JIRA/ServiceNow) they already use.
  • Groom findings before communicating!!! Don’t overload teams with low-severity findings or false positives. Focus on critical/high if you are inserting yourself into their workflow, and spend the time to weed out both true false positives (the tool made a mistake) and irrelevant positives (you have a compensating control or it doesn’t matter). On the cloud side this is easier than with code, but it still matters and takes a little time (one way to approach the grooming and routing is sketched just after this list).
  • Allow the admins/leads (at least) direct access to the scanner/assessment tool, but only for things they own. The tools will have more context than an autogenerated alert or ticket. This also allows the team to see the wider range of lower severity issues they still might want to know about.
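
To make the grooming-and-routing ideas above concrete, here is a minimal Python sketch. It assumes your scanner can export findings with a severity, an owning team, and a suppression flag (the field names and webhook URL here are hypothetical, not any particular product’s schema); it keeps only unsuppressed critical/high findings for one team and posts a short summary to that team’s existing Slack channel via an incoming webhook:

```python
import json
import urllib.request

# Hypothetical finding shape -- adjust to whatever your scanner actually exports.
FINDINGS = [
    {"id": "F-101", "title": "S3 bucket allows public read", "severity": "critical",
     "owner_team": "payments", "suppressed": False},
    {"id": "F-102", "title": "Outdated TLS policy on internal LB", "severity": "low",
     "owner_team": "payments", "suppressed": False},
    {"id": "F-103", "title": "IAM user without MFA", "severity": "high",
     "owner_team": "payments", "suppressed": True},  # compensating control documented
]


def groom(findings, team):
    """Keep only unsuppressed critical/high findings owned by one team."""
    return [
        f for f in findings
        if f["owner_team"] == team
        and f["severity"] in ("critical", "high")
        and not f["suppressed"]
    ]


def post_to_slack(webhook_url, findings):
    """Post a short summary to the team's existing Slack channel via an incoming webhook."""
    if not findings:
        return  # nothing worth interrupting the team for
    lines = [f"*{f['severity'].upper()}* {f['id']}: {f['title']}" for f in findings]
    payload = {"text": "New cloud security findings for your services:\n" + "\n".join(lines)}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    groomed = groom(FINDINGS, team="payments")
    # Replace with the real incoming webhook URL for the team's channel.
    post_to_slack("https://hooks.slack.com/services/EXAMPLE", groomed)
```

Swap the Slack webhook for a ticket created through your JIRA or ServiceNow API if that fits the team’s workflow better; the important part is the filtering step before anything reaches them.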

And one final point:

  • Email and spreadsheets are not your friends! When the machines finally come for the humans, the first wave of their attack will probably be email and spreadsheets.

One nuance arises when you are dealing with less-trusted devs/admins, which often means “outsourcing”. Look, the ideal is that you trust the people building the things your business runs on, but I know that isn’t how the world always works. In those cases you will want to invest more in grooming and communications, and probably not give them any direct access to your tooling.

I’ve written a lot on appsec and cloudsec over the years, and worked on a ton of projects. This issue has always seemed obvious to me, but I still encounter it a fair bit. I think it’s a holdover from the days when security was focused on controlling all the defenses. But time has proven we can only do so much from the outside, and security really does require everyone to do their part. That’s impossible if you can’t see the bigger security picture.

Most of you know this, but if this post helps just one org break through this barrier, then it is worth the 15 minutes it took to write.
