Security and Access Management in Serverless Architecture: Best Practices

By Ganna Shargorodska and Oleksandra Martyntseva

As we announced, EPAM Orchestrator is spinning off its open-source version, Maestro 3. The beauty of the new Orchestrator lies not only in the ability to customize your application by adding only the features you need, but also in the possibility of contributing code. Yes, if you think your Orchestrator should have a feature that the out-of-the-box solution lacks, you simply write the code and add it to the application.


The Deployment Framework is fully based on AWS services, with AWS Lambda being the elementary unit of code that represents a certain feature of your application.

In designing the framework for code contributions, we tried to follow the same flows and procedures as are used in standard deployment processes. However, there was one big difference - the new Orchestrator is a serverless architecture, and this posed certain challenges, one of them being the organization of security and access management.

A serverless application is based on an AWS account. You can decide whether to share the AWS account credentials with the team or to create a super-admin user to manage the account; either way, you get a workable development process. With super-admin access, the super-admin deploys the code to the AWS account serving the application. Alternatively, a Jenkins server can be set up with the same access to the AWS account to deploy builds automatically after verifying that they contain no errors.
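In either flow, the deployment step itself can be as small as one AWS CLI call. A minimal sketch, assuming the function already exists and the build is packaged as build.zip (the function name is illustrative, not part of Maestro 3):

    # Deploy a new build of an existing Lambda function (illustrative names).
    aws lambda update-function-code \
        --function-name my-feature-function \
        --zip-file fileb://build.zip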

This is where the issue of code debugging arises: with serverless architecture, a developer cannot debug a Lambda function before committing it. There is no way to run Lambda functions locally to see if everything works as intended. To see whether it works, you need to deploy it and then watch whether it performs properly or crashes. We recommend running as many tests as possible before committing and logging everything - this way you have a better chance of identifying an error.
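What "logging everything" can look like in practice is sketched below in Java; the handler and names are illustrative, not part of Maestro 3. The logger comes from the Lambda Context object, and everything it writes ends up in CloudWatch:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.LambdaLogger;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    import java.util.Map;

    // Illustrative handler: log every step so that failures can be
    // diagnosed from the logs after deployment.
    public class FeatureHandler implements RequestHandler<Map<String, Object>, String> {

        @Override
        public String handleRequest(Map<String, Object> event, Context context) {
            LambdaLogger logger = context.getLogger();
            logger.log("Received event: " + event);
            try {
                // ... the actual feature logic goes here ...
                logger.log("Processing finished successfully");
                return "OK";
            } catch (RuntimeException e) {
                // Log the failure before rethrowing - this may be the only
                // trace a developer without AWS account access will see.
                logger.log("Processing failed: " + e);
                throw e;
            }
        }
    }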

OK, but is there no way to analyze what went wrong if your Lambda function is already deployed and not performing as it should? You can always view the logs... or can you? By default, Lambda functions write their logs to the native Amazon log aggregator - AWS CloudWatch. However, to access CloudWatch logs you need access to the AWS account, which, as we have just established, you do not have. So where can you get the logs?

To make application logs available to the developers, we suggest setting up a separate external log aggregator that gathers logs from CloudWatch and stores them on its side. The developers can then view the logs through the aggregator's UI without any AWS account access.
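One common way to wire this up - a sketch with illustrative ARNs and names, assuming the aggregator already consumes an Amazon Kinesis stream - is a CloudWatch Logs subscription filter that forwards every log event from a function's log group:

    # Forward a Lambda function's CloudWatch logs to a Kinesis stream
    # consumed by the external log aggregator (illustrative ARNs/names).
    aws logs put-subscription-filter \
        --log-group-name /aws/lambda/my-feature-function \
        --filter-name ship-to-aggregator \
        --filter-pattern "" \
        --destination-arn arn:aws:kinesis:eu-west-1:123456789012:stream/log-shipping \
        --role-arn arn:aws:iam::123456789012:role/cloudwatch-to-kinesis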

Separating the Environments

In the usual development flow, the code passes through a certain sequence of environments before final deployment - from development to testing to staging and then to production. But remember, this is a serverless architecture - there are no servers - so how can we set up separate environments and make sure they do not conflict?

Like other clouds, AWS uses regions to provide cloud computing services. The easiest way to separate your environments is to place them in different regions - for example, you can host your QA environment in eu-west-1 and your staging environment in eu-central-1.
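In practice, this only means pointing each deployment at a different region - a sketch with illustrative names, assuming the same package is deployed to both environments:

    # Deploy the same build to the QA and staging regions (illustrative names).
    aws lambda update-function-code --function-name my-feature-function \
        --zip-file fileb://build.zip --region eu-west-1      # QA
    aws lambda update-function-code --function-name my-feature-function \
        --zip-file fileb://build.zip --region eu-central-1   # staging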

However, there is another matter to take care of - IAM roles, which are not associated with any region and apply to the entire AWS account.

Whenever a Lambda function is created, it is assigned an IAM role. The role contains a set of policies defining the actions permitted to that Lambda function. For example, if a Lambda function is to write to DynamoDB and to read from an S3 bucket, it must have explicit permissions for both actions.
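For illustration, here is a sketch of two minimal policy documents - the table and bucket names are hypothetical. The first allows writing to a DynamoDB table, the second allows reading from an S3 bucket:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
        "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders"
      }]
    }

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::orders-archive/*"
      }]
    }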


The two example policies above allow different sets of actions. When they are combined in one IAM role, that role will allow all actions contained in both policies.

To grant specific permissions to a Lambda function, assign the appropriate role to it with an annotation.
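Below is a minimal sketch in Java of what such an annotation-driven binding can look like; the annotation name and its attributes (@LambdaHandler, lambdaName, roleName) are assumptions modeled on annotation-based deployment frameworks, not a confirmed Maestro 3 API:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    import java.util.Map;

    // Hypothetical annotation binding the function to the IAM role that
    // holds the DynamoDB and S3 policies from the examples above.
    @LambdaHandler(
        lambdaName = "order-processor",      // illustrative function name
        roleName = "order-processor-role"    // IAM role with the required policies
    )
    public class OrderProcessor implements RequestHandler<Map<String, Object>, String> {

        @Override
        public String handleRequest(Map<String, Object> event, Context context) {
            // ... write to DynamoDB, read from S3 ...
            return "OK";
        }
    }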


IAM roles and policies provide a flexible and convenient platform for building your access models. However, they can cause certain difficulties when your environments are separated by AWS regions. You will still be working under the same AWS account, and IAM roles apply to the whole account rather than to a particular region. This means that when an IAM role for a Lambda function is modified in the course of development, the modification will override that role's settings in all environments and may cause application failure. There are two ways to avoid this:

  • Create a new IAM role containing a new set of policies instead of modifying the existing roles (see the sketch after this list)
  • Use separate AWS accounts for different environments. This way, you will be able to configure separate sets of roles and policies that will never interfere with each other
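For the first option, creating a fresh role and pointing the function at it takes three AWS CLI calls - a sketch with illustrative names, assuming the trust and permission policies are stored in local JSON files:

    # Create a new role instead of editing the existing one (illustrative names).
    aws iam create-role \
        --role-name order-processor-role-v2 \
        --assume-role-policy-document file://lambda-trust-policy.json
    aws iam put-role-policy \
        --role-name order-processor-role-v2 \
        --policy-name order-processor-permissions \
        --policy-document file://permissions.json
    aws lambda update-function-configuration \
        --function-name order-processor \
        --role arn:aws:iam::123456789012:role/order-processor-role-v2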

If you have any questions about this open-source initiative, please contact the Cloud Consulting Team.