Last updated at Wed, 03 Mar 2021 14:59:53 GMT

This blog is part of an ongoing series sharing key takeaways from Rapid7’s 2020 Cloud Security Executive Summit. Interested in participating in the next summit on Tuesday, March 9? Register here!

Identity and access management (IAM) credentials have solved myriad security issues, but the recent cloud-based IAM movement has left many scratching their heads as to why it can be so complex.

IAM on-premises vs. IAM off-premises

On-premises IAM has become a whole lot simpler. In many organizations it is LDAP-based, with most things tied back into it, such as database credentials and system accounts, and there are well-established processes for dealing with those. In the cloud, however, organizations have to deal with inheritance and other constructs that may not map back to the on-premises world. These new concepts and different ways of interacting can be overwhelming.

Complexity can really become an issue with something like assume-role in AWS. Moving to a least-privilege model can frustrate people, so they may just ask for access to everything on a given surface and promise to scale the permissions back later. The worry is that you end up with over-permissioned identities that never get fixed. With assume-role in particular, credentials are no longer stored inside a physical operating system, but rather in a metadata layer associated with a piece of infrastructure. This applies to a number of services within the cloud provider: everything from compute instances, to database instances, to storage assets, and more. These aspects can all be complex to secure, but there's no question the model makes operations safer. Speaking of safe…
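As a concrete sketch, an assume-role setup typically starts with a trust policy on the role. The example below is minimal and hypothetical, but it follows the standard AWS pattern: it allows EC2 instances with this role's instance profile attached to obtain temporary credentials via sts:AssumeRole, so no long-lived keys ever touch the operating system.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The instance then retrieves short-lived credentials from the instance metadata service, which is the "metadata layer" described above, and AWS rotates them automatically.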

Going fast vs. going safe

This may be stating the obvious, but feature developers generally aren't all that excited about security. Often that's because their jobs depend on speed: they're told to go fast. A main value proposition of the cloud is "I can be as agile as I need to be, as quick as I need to be." There's relief in that, but those same teams are very likely also told, "Make sure we don't have incidents, data breaches, or data exposures." Tension can grow from increased use of the cloud and the operational offloading of IAM. Here, keeping things as simple as possible is a way to maintain efficient processes and keep them moving. So what tools can organizations use to lessen friction between developer teams and security?

Service control policies and session policies

If there's an umbrella structure sitting over your cloud accounts (or over resources like Google Cloud projects), policies can be pushed down from the top level with service control policies. A session policy is generated on the user side, from an application, or from an assume-role call. Going further, there's also an identity policy associated with each individual in an active session. With all of this potential complexity, how would organizations go about simplifying, especially as they shift authentication to the cloud?
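To illustrate the session-policy layer, the snippet below is a minimal, hypothetical session policy (the bucket name is a placeholder) of the kind that can be passed inline when calling sts:AssumeRole. The effective permissions of the resulting session are the intersection of this document and the role's identity-based policies, so a session policy can only scope permissions down, never grant more.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-deploy-artifacts/*"
    }
  ]
}
```

In practice this lets an application assume a broadly scoped role but hand each session only the narrow slice it needs for the task at hand.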

The above process does provide an abundance of granularity as well as freedom to explicitly allow or deny at many, many levels. The flipside of that, of course, is that there are many, many levels.  

  • Leverage all aspects: Some companies may restrict access to specific services or regions. Depending on the organization's level of security, they may not want teams using just any cloud service, given the IAM implications. It's about tailoring a set of policies to specific goals.
  • Pre-canned policies: With automation, it can be faster to deploy these types of policies while still leaving room for a certain amount of autonomy, so teams can tailor some resource-level access standards.
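As a sketch of the region restriction mentioned above, the following service control policy is adapted from a common AWS pattern (the two approved regions are placeholders): it denies requests made outside those regions while exempting global services such as IAM, STS, and Organizations, which would otherwise break.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "NotAction": [
        "iam:*",
        "sts:*",
        "organizations:*",
        "support:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
        }
      }
    }
  ]
}
```

Attached at the organization root or an organizational unit, a policy like this applies to every account beneath it, which is the "top level down" push described above.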

Again, based on a company’s specific goals, these types of processes can help ease friction between teams looking to ship a project fast and those trying their best to keep them secure.

Seeing it and protecting it

At the end of the day, the feeling from much of the security world is that visibility has to be there for SecOps to be able to tell you that something is secure. They may not need to go inside and read the data, but if there is no visibility, they can’t comment on any aspect of configuration. An old idea around securing infrastructure is that everything in a private network is implicitly authorized to access everything else. So, once a security checkpoint is crossed, anything could be accessed.

Of course, this whole process aims to improve incident-response efficiency by enforcing security controls at every step of the kill chain. That means not assuming identities and not trusting anybody by default, also known as zero trust. So if the information on your website is super-secret, it might make sense to put it behind a VPN for that extra layer of security, even if you already have HTTPS, authentication, and authorization, to keep out someone who isn't part of the team.

A zero-trust initiative opens the discussion of workload identity in the cloud. Teams could use cloud-native services (in Google Cloud or AWS) to ensure apps are talking to each other, authenticating properly, and that connections originate from the right machines.

Identity is everything

At the end of the day, it’s never about invasion of any sort of privacy. It’s all about securing as much as you can and authenticating connections to protect against threats. A combination of technical processes and open communication is key in mitigating the challenges of protecting against those threats in a cloud-based IAM solution.

Want to learn more? Register for the upcoming Cloud Security Executive Summit on March 9 to hear industry-leading experts discuss the critical issues affecting cloud security today.