It's been a while since we announced that EPAM Orchestrator was going open source. The enterprise solution that has become an integral component of software development at our company is moving to a new level. Maestro 3 (as the open-source solution is called) is a real construction set that lets you build your own version of EPAM Cloud Orchestrator, containing only the elements you need.
Moreover, you can customize the existing components and create your own features. Maestro 3 is a serverless application fully based on AWS services, with AWS Lambda as the main "engineering material" here.
In our previous article, we highlighted the basic features of Maestro 3: an AngularJS-based front end, consolidated billing, auto-configuration, and more. But the real heart of the solution is the Deployment Framework, which enables automatic, smart deployment of the application to AWS, ensuring that all steps are performed on time and everything is set up properly.
The natural question to ask is: why don't we use the AWS Serverless Application Model, the native tool designed for AWS-based serverless solutions? The answer is simple: with our own framework, we can add the functionality we need exactly when we need it, without depending on third-party release cycles. Any new feature can be added at any time by the project team, so everything is in our own hands.
Currently, we already have a number of our own features:
- We enabled the deployment of AWS Step Functions shortly after the service was introduced in AWS, and we can do the same for any service that is useful for Maestro 3
- Support for dependencies lets you ensure that all resources are created in the correct order
- In your Java code, you can use annotations to reference the most frequently used configurations (described in .json files) right from your Java IDE
Thus, Deployment Framework users do not depend on the framework provider and can immediately adapt the tool to changes in their application structure and user needs.
You can use our framework to deploy any AWS Lambda-based application; all you have to do is follow a few simple rules that allow the tool to process your code correctly.
In this article, we walk through a simple Lambda application that you can use to check the whole deployment within minutes. At the end, we give a link to the internal EPAM Git repository with the application and the deployment instructions.
But before we go on, let us look at an example of a Lambda application that could be deployed by our framework. It is a simple application using a set of typical AWS services:
1. Amazon SNS triggers a Lambda with a corresponding event (the Lambdas live inside a Maven-built Java project)
2. The Lambda inserts the event subject as a new item into a DynamoDB table that was created earlier, during application deployment
3. The Lambda saves the event body as a new object in an Amazon S3 bucket
4. The Lambda writes logs to Amazon CloudWatch
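The steps above can be sketched in Java. This is a minimal illustration only: the `Event`, `TableClient`, and `BucketClient` types below are simplified stand-ins for the real AWS SDK classes (`SNSEvent`, `AmazonDynamoDB`, `AmazonS3`), so the flow is readable without AWS dependencies.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the demo Lambda's logic; the stand-in interfaces
// below replace the AWS SDK clients so the example is self-contained.
public class DemoLambdaSketch {

    record Event(String subject, String body) {}

    interface TableClient { void putItem(String table, Map<String, String> item); }
    interface BucketClient { void putObject(String bucket, String key, String content); }

    private final TableClient dynamo;
    private final BucketClient s3;

    DemoLambdaSketch(TableClient dynamo, BucketClient s3) {
        this.dynamo = dynamo;
        this.s3 = s3;
    }

    // Mirrors steps 2-4: subject goes to DynamoDB, body goes to S3,
    // and a log line goes to stdout (which Lambda routes to CloudWatch).
    void handle(Event event) {
        Map<String, String> item = new HashMap<>();
        item.put("id", java.util.UUID.randomUUID().toString());
        item.put("subject", event.subject());
        dynamo.putItem("m3_DemoTable", item);                                          // step 2
        s3.putObject("m3_sample_s3_bucket", "events/" + item.get("id"), event.body()); // step 3
        System.out.println("Stored event: " + event.subject());                        // step 4
    }
}
```

The table and bucket names match the resources referenced later in the article; everything else is illustrative.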
All these actions and dependencies are described with annotations in the application's .java file:
The @DependsOn annotation specifies the AWS resources that must exist for the application to work correctly:
- the application needs the m3_DemoTable table in DynamoDB
- the application needs the m3_sample_s3_bucket bucket in S3
The @SnsEventSource annotation specifies the SNS topic for events that will trigger the Lambda.
The @LambdaHandler annotation marks the class as a Lambda, which lets the Deployment Framework recognize it properly.
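To make the annotation mechanics concrete, here is a sketch of what such annotation types could look like in plain Java, applied to the demo Lambda. The element names (`value`, `resources`, `topic`) are our own illustrative assumptions, not the framework's actual signatures.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical declarations: the real Maestro 3 annotations may differ in
// element names and defaults; this only illustrates the mechanism.
public class AnnotationSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface LambdaHandler {
        String value(); // logical Lambda name
    }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface DependsOn {
        String[] resources(); // resource names described in deployment_resources.json
    }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface SnsEventSource {
        String topic(); // SNS topic whose events trigger the Lambda
    }

    // The demo Lambda from the article, annotated with the sketched types.
    @LambdaHandler("m3-demo-lambda")
    @DependsOn(resources = {"m3_DemoTable", "m3_sample_s3_bucket"})
    @SnsEventSource(topic = "sns_topic")
    static class DemoLambda { }
}
```

Because the annotations are retained at runtime, a deployment tool can discover them via reflection while scanning the compiled project.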
The detailed descriptions of the table, the bucket, and the sns_topic are given in the deployment_resources.json file, so the annotations only reference them by name:
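A plausible shape for deployment_resources.json might look like the fragment below. The field names here are illustrative assumptions; the framework's actual schema may differ.

```json
{
  "m3_DemoTable": {
    "resource_type": "dynamodb_table",
    "hash_key_name": "id",
    "hash_key_type": "S",
    "read_capacity": 1,
    "write_capacity": 1
  },
  "m3_sample_s3_bucket": {
    "resource_type": "s3_bucket"
  },
  "sns_topic": {
    "resource_type": "sns_topic"
  }
}
```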
Please note that, in order to perform its functions, each Lambda must have IAM policies set up. Access and permission control is an interesting and important topic, which we describe in our next article.
In the demo application, all the necessary policies and roles are already set up in the \m3-deployment-framework\config\env\default\policies and \m3-deployment-framework\config\env\default\roles folders.
And now: how does the Deployment Framework handle all this? When run, the Deployment Framework acts according to the following flow:
1. Sets up the necessary IAM roles on AWS
2. Scans through your Java project and automatically detects the Lambdas (thanks to the @LambdaHandler annotation)
3. Generates the meta.json file from the existing .json files and the annotations in all Lambdas; as a result, it contains meta descriptions for all the resources to be created
4. Validates meta.json to make sure that all resources are specified properly; if validation fails, the deployment is canceled
5. Creates the Lambdas:
   a. Uploads the .jar files to S3
   b. Creates the necessary resources in the order specified by the dependencies
For your convenience, the Deployment Framework logs all actions performed during deployment. In addition, you can configure the framework to fill the newly created environment with a bit of mock data so that you can check it immediately.
When the creation is complete, you will see the corresponding output in the console.
To sum up, the Maestro 3 Deployment Framework is a powerful tool that enables effective deployment of AWS Lambda-based serverless applications. It is oriented toward Maestro 3 and includes the features that, in our opinion, perfectly fit the needs of a cloud orchestration application.
However, being open source, it can serve any needs of its users and can easily be customized to the specifics of target applications.
Currently, Maestro 3 is in internal beta and can be accessed only by EPAMers who have at least read access to the company's resources. We will announce the public beta as soon as all the necessary preparations are made. But if you are already in, feel free to find the demo application and the detailed instructions here.