Chef in 21 days. Part III. A Chef and AWS tutorial

by Ievgen Kabanets

Greetings, readers! Here is the third and final part of the series of articles for Chef beginners (Part I, Part II). This part is dedicated to a concrete example of using Chef in the Amazon cloud. As I have already mentioned, it is a rather popular scenario. To understand the subject better, we will look at a case with two EC2 instances (Amazon virtual servers): one acting as the Chef server and the other as a node.

AWS and Chef

First, I would like to clarify that we are going to launch the instances using AWS CloudFormation. Of course, we could run and manage them manually, but that's not the point of automation, is it?

CloudFormation includes two main concepts:

  • a template, a JSON file that describes all the resources we need to launch the instance.
  • a stack, containing the AWS resources described in the template.

For those who are just getting acquainted with AWS, Amazon offers ready-to-use sample templates that cover most aspects of working with AWS. A link to these sample templates is given at the end of the article.

Let's take a close look at what a template is. In the basic case, it consists of four blocks: Parameters, Mappings, Resources, and Outputs.
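
As a rough sketch (the field contents here are placeholders, not taken from the attached template), the overall shape of such a template is:

    {
      "AWSTemplateFormatVersion" : "2010-09-09",
      "Description" : "Skeleton of the four blocks discussed below",
      "Parameters" : { },
      "Mappings"   : { },
      "Resources"  : { },
      "Outputs"    : { }
    }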

The Parameters block describes the variables and their values that will be passed to the stack during its creation. You can supply parameter values when creating the stack, or rely on the Default field in the parameter description. Parameters can contain any kind of information, from a password to a network port or a directory path. To get a parameter's value elsewhere in the template, use the Ref function.
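
For illustration only (the parameter name is a common convention, not necessarily the one used in the attached template), a parameter declaration looks roughly like this:

    "Parameters" : {
      "InstanceType" : {
        "Type"        : "String",
        "Default"     : "m1.small",
        "Description" : "EC2 instance type to launch"
      }
    }

Elsewhere in the template, { "Ref" : "InstanceType" } then returns whatever value was supplied for that parameter when the stack was created (or the default, if none was given).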

The Mappings block contains a set of keys with corresponding parameters and values. Mappings are typically used to pair AWS regions with the machine images (AMIs) to launch in them. To get the value of a given mapping, use the Fn::FindInMap function, passing the map name and the keys used to look up the value.
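
A sketch of such a mapping might look like this (the map name matches the one discussed below, but the AMI IDs are placeholders, not real image IDs):

    "Mappings" : {
      "AWSRegionArch2AMI" : {
        "us-east-1" : { "64" : "ami-xxxxxxxx" },
        "eu-west-1" : { "64" : "ami-yyyyyyyy" }
      }
    }

The lookup { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, "64" ] } then takes the map name, the top-level key (the region the stack is created in) and the second-level key (the architecture), and returns the matching AMI ID.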

The Resources block describes our EC2 instances or any other AWS resources. This is the section where you define the instances for the Chef server and the client node. The description must include the resource type (for example, AWS::EC2::Instance); you can also specify metadata describing the node or defining pre-install directives (for example, that a certain package must be installed when the image is launched). Properties are a fundamental part of this block; they hold the detailed information about the launched image. Here you define which image and instance type to run (for example, Amazon Linux) and which Security Group the new instance belongs to (essentially a firewall with defined traffic-handling rules, where the default action is to deny). But the central part of the Properties block is User Data. This is where we will describe the script that turns our blank instance into a Chef server or a Chef client.
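
Put together, an instance resource follows roughly this shape (the logical names, such as ChefServer and ChefSecurityGroup, are illustrative; the attached template is the authoritative version):

    "Resources" : {
      "ChefServer" : {
        "Type" : "AWS::EC2::Instance",
        "Metadata" : {
          "Comment" : "Pre-install steps can be described here"
        },
        "Properties" : {
          "ImageId"        : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, "64" ] },
          "InstanceType"   : { "Ref" : "InstanceType" },
          "KeyName"        : { "Ref" : "KeyName" },
          "SecurityGroups" : [ { "Ref" : "ChefSecurityGroup" } ],
          "UserData"       : { "Fn::Base64" : { "Fn::Join" : [ "", [
            "#!/bin/bash -v\n",
            "# the configuration script goes here\n"
          ] ] } }
        }
      }
    }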

See the template, followed by my comments, attached to this article.

As we can see from the template, five parameters are defined. Two of them have default values: the type of the created instance (in this case m1.small) and the subnet of IP addresses that will have SSH access to the node. When creating the stack, we must supply the remaining three: the key pair for SSH access to the nodes (created separately in the AWS Console), the access key, and the secret key (both are created when you register for AWS access).

The mappings describe two instance types, both with a 64-bit architecture. The AWSRegionArch2AMI mapping also lists the IDs of the Amazon Linux images corresponding to those instances (the IDs can be looked up in the AWS Console).

Then we describe the resources for the Chef server and the Chef client. In both cases, before the commands from the User Data section are run, we install wget via the Metadata block (just in case; in practice, Amazon Linux images should already contain this package). The resources to be created are defined by the ImageId and InstanceType properties (in this case, predefined as Amazon Linux, m1.small, and a 64-bit architecture). Then comes the main body of the resource, User Data: a bash configuration script that is executed step by step once the instance is initialized.
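
The wget installation mentioned above is usually expressed through the AWS::CloudFormation::Init metadata, applied by running the cfn-init helper at the top of the User Data script (a sketch; the attached template may phrase it slightly differently):

    "Metadata" : {
      "AWS::CloudFormation::Init" : {
        "config" : {
          "packages" : {
            "yum" : {
              "wget" : []
            }
          }
        }
      }
    }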

In brief, we should perform the following steps for the node that will serve as the Chef server (a rough sketch of how some of them map onto User Data follows the list):

  • Disable iptables (to avoid possible issues with dropped packets).
  • Install Open Source Chef Server.
  • Replace default.rb (yep, the issue with Amazon Linux images is that they identify themselves not as RedHat, which they essentially are, but as Amazon, so the Chef server services cannot run at full capacity).
  • Auto-configure and restart the server.
  • Create a configuration file for the AWS command-line tools.
  • Upload the admin.pem and chef-validator.pem files to the repository (we will need them on the Client).
  • Install the Chef client on the node (yep, the Chef server does not have its own knife).
  • Get the client.pem and knife.rb files for the knife running on the server (something like the starter-kit from Part I).
  • Recreate the structure of the chef-repo directory where we store our cookbooks, role files, etc.
  • Upload and install cookbooks on the server (EPAM cookbooks are stored in the repository, and others are provided by the community).
  • Upload and install roles on the server.
  • Add a scheduled task that will trigger node checks every five minutes and install the basic role on each new node.
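
To give a rough idea of how some of these steps might look inside the User Data of the Chef server resource (the package URL and the cron script name are placeholders, not values from the attached template):

    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash -v\n",
      "service iptables stop\n",
      "chkconfig iptables off\n",
      "# install the Open Source Chef Server package (placeholder URL)\n",
      "rpm -Uvh https://example.com/chef-server.rpm\n",
      "chef-server-ctl reconfigure\n",
      "# upload cookbooks and roles from the chef-repo prepared earlier\n",
      "knife cookbook upload --all\n",
      "knife role from file roles/*.rb\n",
      "# re-run the node checks every five minutes\n",
      "echo '*/5 * * * * root /usr/local/bin/bootstrap-new-nodes.sh' >> /etc/crontab\n"
    ] ] } }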

Although this explanation may seem dense and confusing, an in-depth analysis of each part of the script would take much longer. So if you have any questions, ask away in the comments or message me directly. Back to the template. For the node that will serve as the Chef client, we should take the following steps (again, a rough User Data sketch follows the list):

  • Disable iptables (to avoid possible issues with dropped packets).
  • Create a configuration file for the AWS command-line tools.
  • Load the client configuration files from the repository.
  • Add the Chef server address to the client's configuration (it can change when the instances are relaunched or the stack is recreated).
  • Add a scheduled task for launching the Chef client.
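
A correspondingly rough sketch of the client's User Data (the logical name ChefServer refers to the server resource from the earlier sketch; the file-fetching details are omitted):

    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash -v\n",
      "service iptables stop\n",
      "mkdir -p /etc/chef\n",
      "# ...fetch chef-validator.pem and client.rb from the repository here...\n",
      "# point the client at the Chef server created by this stack\n",
      "echo \"chef_server_url 'https://",
      { "Fn::GetAtt" : [ "ChefServer", "PublicDnsName" ] },
      "'\" >> /etc/chef/client.rb\n",
      "# run chef-client periodically\n",
      "echo '*/5 * * * * root /usr/bin/chef-client' >> /etc/crontab\n"
    ] ] } }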

It is noteworthy that, thanks to options such as json_attribs, we can assign the node a tag that defines its role in the infrastructure. This comes in handy when the Chef clients include nodes with different infrastructural roles.
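
As a hypothetical illustration (the role and tag names are made up), the JSON file passed to the client via json_attribs might look like this:

    {
      "run_list" : [ "role[base]" ],
      "tags"     : [ "webserver" ]
    }

The chef-client is then started with the -j (--json-attributes) option pointing at this file, so the node picks up its role and tag on the first run.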

The following resources - WaitHandle and WaitCondition - describe the conditions under which the creation of the stack is put on hold. If the WaitHandle receives a success signal from the monitored process within the timeout period defined in the WaitCondition, stack creation resumes and finishes.
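
These two resources are declared roughly like this (the logical names and the timeout value are illustrative):

    "ChefServerWaitHandle" : {
      "Type" : "AWS::CloudFormation::WaitConditionHandle"
    },
    "ChefServerWaitCondition" : {
      "Type" : "AWS::CloudFormation::WaitCondition",
      "DependsOn" : "ChefServer",
      "Properties" : {
        "Handle"  : { "Ref" : "ChefServerWaitHandle" },
        "Timeout" : "1200"
      }
    }

The User Data script sends the success signal to the handle (for example, with the cfn-signal helper) once the installation has finished.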

The next resource declared - Security Group - is a firewall for the node. Here we describe which inbound ports are opened and the source addresses from which packets are accepted.
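
A minimal sketch of such a group, assuming SSH access restricted by an SSHLocation parameter and HTTPS for the Chef API (the port choices are illustrative):

    "ChefSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "SSH and Chef API access",
        "SecurityGroupIngress" : [
          { "IpProtocol" : "tcp", "FromPort" : "22",  "ToPort" : "22",  "CidrIp" : { "Ref" : "SSHLocation" } },
          { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0" }
        ]
      }
    }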

The last block - Outputs - lets us retrieve some useful values after the stack and its instances have launched successfully, for example a domain name for gaining access to an instance.
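
For example, an output returning the public DNS name of the Chef server from the earlier sketches could look like this:

    "Outputs" : {
      "ChefServerURL" : {
        "Description" : "Public DNS name of the Chef server",
        "Value" : { "Fn::GetAtt" : [ "ChefServer", "PublicDnsName" ] }
      }
    }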

In the end, we get a universal template and the ability to deploy our neat little infrastructure by launching the stack with a single action from the AWS Management Console (if you are interested in a setup with a large number of instances, use an Auto Scaling group). You can see the result of the launch in the CloudFormation section of the console.

What comes next? You will be able to manage your nodes by means of knife, cookbooks, and roles. You can use community cookbooks, create your own custom cookbooks, or write wrappers around existing ones. The possibilities are numerous, and the choice depends on your final objective.

In this series of articles I have tried to scratch the surface of the exciting subject of automating the management of a group of computers, as well as of working with AWS cloud resources. I hope those of you who are new to DevOps will find these articles interesting and useful.

If you have any questions or suggestions - feel free to comment on any article or send me direct messages. I sincerely thank everyone who found the time to read it all through.

Until we meet again!

Links:
AWS documentation: aws.amazon.com/documentation/
AWS CloudFormation: aws.amazon.com/documentation/cloudformation/
AWS EC2: aws.amazon.com/documentation/ec2/
AWS Sample Templates: aws.amazon.com/cloudformation/aws-cloudformation-templates/
AWS Console: docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/getting-started.html

Documentation

Below is a list of documents related to this section. You can find the full list of our documents in the Documentation Storage.

Chef In 21 Days (template)

The template used as a reference in the "Chef in 21 days. Part III" blog post.