JoeHack3r

Eat like a bird, and poop like an elephant --Japanese philosopher

Validating EC2, RDS, ElastiCache (Memcached and Redis) CloudFormation Parameters With Allowed Values



For anybody who is curious, my favorite Amazon Web Service is AWS CloudFormation. It allows me to manage infrastructure as code. For a “DevOps” engineer, this is awesome! I use CloudFormation wherever and whenever possible and combine it with Ansible for a great one-two punch: Ansible to create images and CloudFormation to deploy and manage them.

One area where my CloudFormation templates need regular updating is the allowed values for EC2 instance types. Like a good engineer, I prefer to validate my inputs before submitting the stack to AWS. However, AWS keeps releasing new EC2 instance types, leaving my templates out of date. Recently I went to change an instance type to c3.medium and failed my own validation. Even though I immediately knew the issue, I did not want the quick fix of only adding c3.medium to the list; I wanted to include all the new instance types.

I have not been able to find a programmatically accessible resource (like an API) that provides a current list of EC2 instance types, so I am in the process of creating one. While I’m still fine-tuning the code for possible public release, the output is available for your immediate use:
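
As an illustration, here is a minimal sketch of the kind of snippet the tool generates; the handful of instance types shown is only a sample, not the full generated list:

Example InstanceType parameter snippet
  "InstanceType" : {
    "Description" : "EC2 instance type",
    "Type" : "String",
    "Default" : "m1.small",
    "AllowedValues" : [ "t1.micro", "m1.small", "m3.medium", "c3.large", "c3.xlarge" ],
    "ConstraintDescription" : "must be a valid EC2 instance type."
  }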

If you notice the leading whitespace in the document, it is there because of an upcoming series of blog posts on using CloudFormation. In that series, I will piece together CloudFormation templates from these snippets of JSON, and the whitespace makes the JSON more readable in the final template.

Interested in the upcoming series? Follow me on Twitter or sign up for the email list to learn when it comes out.

Creating a Links Page



My future posts (and old ones that I edit) will start to have their links go through http://blog.joehack3r.com/links. Before I make this change, you deserve to understand the reasoning behind it. It comes down to two things:

  • Help better serve you
  • Help pay for this site

By using http://blog.joehack3r.com/links, I get better data on which companies’ products and services you navigate to. This helps me understand which products you are most interested in, find most valuable, etc. As a result, I can focus on providing more content about those companies and products. Furthermore, I can contact a company, let them know how many people are interested in their product, and ask for an enhanced trial in order to provide better content for you. This is the primary reason for using http://blog.joehack3r.com/links.

Now, eventually some links will be affiliate links. Affiliate links do not cost you anything and will help offset the costs of this site. Rest assured, there will only be affiliate links for products I use and feel others should consider using. I share a similar feeling about affiliate links as Pat Flynn of Smart Passive Income:

…I recommend them [companies] because they are helpful and useful, not because of the small commissions I make if you decide to buy something. Please do not spend any money on these products unless you feel you need them or that they will help you achieve your goals.

From my personal experience, I have gotten lots of very valuable help from others without paying them a single penny. These people deserve some monetary reward for their hard work and the value they provide. One way I repay them is by using their affiliate links.

If you have opinions about the links change, please let me know.


Analyzing CloudTrail Data Using SumoLogic



Recently, Amazon Web Services notified users whose accounts were compromised because an access key ID and secret access key were publicly available on sites like GitHub. If this happened to you, or it made you wonder about the security of your own access keys, there is a service you need to be using: CloudTrail.

AWS offers CloudTrail to provide a history of AWS API calls for your account.

  • Not sure if an account is still being used? CloudTrail can help.
  • Not sure what permissions an account needs? CloudTrail can help.
  • Want to know where access is coming from? CloudTrail can help.

CloudTrail saves API log data in an S3 bucket, which can be analyzed using products like SumoLogic, Splunk, etc. I am most familiar with SumoLogic and created this video to help you get started using CloudTrail and SumoLogic. A couple of notes before watching the video. First, watch out for the Source Category name: it must be AWS_EAGLE for the logs to be parsed properly.

Second, as I mention in the video, if you are using an existing bucket, its bucket policy will need to be edited. More specifically, follow the steps outlined in the AWS CloudTrail User Guide.
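
For reference, the statements that guide has you add look roughly like the following sketch; the bucket name and account ID are placeholders, and the User Guide remains the authoritative source:

CloudTrail bucket policy (sketch)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::myBucketName"
    },
    {
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::myBucketName/AWSLogs/123456789012/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } }
    }
  ]
}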

As for the IAM user created for SumoLogic to access the S3 bucket, here is the policy for your reference:

IAM User Policy for Access to CloudTrail bucket
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListBucket",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::myBucketName",
        "arn:aws:s3:::myBucketName/*"
      ]
    }
  ]
}
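
If you prefer the command line to the console, a policy like this can be attached with the AWS CLI; the user name, policy name, and file name below are only examples:

aws iam put-user-policy --user-name SumoLogic --policy-name CloudTrailBucketAccess --policy-document file://cloudtrail-read-policy.json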

Finally, here are some CloudTrail-SumoLogic searches to help you get started:

SumoLogic CloudTrail search - good first search
//What are the most frequent userName, eventName, userAgent, IPAddress, accessKey combinations
_sourceCategory=AWS_EAGLE |
parse "\"accessKeyId\":\"*\"" as accessKey |
parse "\"userName\":\"*\"" as userName |
parse "\"sourceIPAddress\":\"*\"" as IPAddress |
parse "\"userAgent\":\"*\"" as userAgent |
parse "\"eventSource\":\"*\"" as eventSource |
parse "\"eventName\":\"*\"" as eventName |
count as count by userName, eventName, userAgent, IPAddress, accessKey |
order by count, userAgent, IPAddress
SumoLogic CloudTrail search - any suspect IPs
//What IP addresses are the requests coming from?
_sourceCategory=AWS_EAGLE |
parse "\"accessKeyId\":\"*\"" as accessKey |
parse "\"userName\":\"*\"" as userName |
parse "\"sourceIPAddress\":\"*\"" as IPAddress |
parse "\"userAgent\":\"*\"" as userAgent |
parse "\"eventSource\":\"*\"" as eventSource |
parse "\"eventName\":\"*\"" as eventName |
count as count by IPAddress |
order by count
SumoLogic CloudTrail search - look for errors
//Looking for errors
_sourceCategory=AWS_EAGLE errorCode |
parse "\"accessKeyId\":\"*\"" as accessKey |
parse "\"errorCode\":\"*\"" as errorCode |
parse "\"userName\":\"*\"" as userName |
parse "\"sourceIPAddress\":\"*\"" as IPAddress |
parse "\"userAgent\":\"*\"" as userAgent |
parse "\"eventSource\":\"*\"" as eventSource |
parse "\"eventName\":\"*\"" as eventName |
count as count by userName, eventName, userAgent, errorCode, IPAddress, accessKey |
order by count, userAgent, IPAddress
SumoLogic CloudTrail search - search for specific key
//Looking for specific key: AKIAABCDEFGHIJKLMNOP
_sourceCategory=AWS_EAGLE
AND "\"accessKeyId\":\"AKIAABCDEFGHIJKLMNOP\"" |
parse "\"accessKeyId\":\"*\"" as accessKey |
parse "\"userName\":\"*\"" as userName |
parse "\"sourceIPAddress\":\"*\"" as IPAddress |
parse "\"userAgent\":\"*\"" as userAgent |
parse "\"eventSource\":\"*\"" as eventSource |
parse "\"eventName\":\"*\"" as eventName |
count as count by userName, accessKey, IPAddress |
order by userName
SumoLogic CloudTrail search - ignore specific userName
//Not my IAM DataDog user
_sourceCategory=AWS_EAGLE
AND !"\"userName\":\"DataDog\"" |
parse "\"accessKeyId\":\"*\"" as accessKey |
parse "\"userName\":\"*\"" as userName |
parse "\"sourceIPAddress\":\"*\"" as IPAddress |
parse "\"userAgent\":\"*\"" as userAgent |
parse "\"eventSource\":\"*\"" as eventSource |
parse "\"eventName\":\"*\"" as eventName |
count as count by userName, accessKey, IPAddress |
order by userName


References: https://support.sumologic.com/entries/30216746-Sumo-Logic-App-for-AWS-CloudTrail

First Ansible Contribution - Delete EBS Volume on Instance Termination

After a few months of using Ansible, I became a contributor on March 18, 2014. The term contributor should be used lightly, as it was only a documentation change, although a worthwhile one. Recently, the EC2 module was updated to support volumes. This change is noteworthy because it allows users to pass a Block Device Mapping (BDM) when running (creating) instances. A common use of BDM is to attach volumes at instance launch and set them to delete when the instance terminates. This is useful to prevent EBS volume sprawl (one challenge I’ve had with using Ansible).

However, the examples didn’t show how to do this.

So I forked the repository, updated an example, and submitted a pull request. It may take a few days before the documentation is updated, so here is a sneak peek:

Update example to include delete on termination
# Single instance with additional IOPS volume from snapshot and volume delete on termination
local_action:
    module: ec2
    key_name: mykey
    group: webserver
    instance_type: m1.large
    image: ami-6e649707
    wait: yes
    wait_timeout: 500
    volumes:
    - device_name: /dev/sdb
      snapshot: snap-abcdef12
      device_type: io1
      iops: 1000
      volume_size: 100
      delete_on_termination: true
    monitoring: yes
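
The key line is delete_on_termination: true. With it set, the EBS volume created from the snapshot is deleted automatically when the instance terminates, so orphaned volumes no longer accumulate.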

I hope you find this useful. If you want more, connect with me via Twitter.


The Blog’s Reboot

For some background: this blog originally started in August 2012. My goal was to provide “getting started” type assistance. Unfortunately, I focused too much on providing simple, step-by-step instructions that anybody could use. Misquoting Joe Miller in Philadelphia: explaining it like you are a four-year-old. However, you are not a four-year-old. You are older and technologically proficient. The blog posts had minimal value to you and took too long for me to write.

A reboot was needed.

Now my goals have shifted to sharing all sorts of learnings and experiences with the world. The focus will be on systems engineering and architecture for the web, cloud, etc. The content level will vary from beginner to advanced. I am a beginner at many things…Octopress and Ruby are two examples. I am advanced at a few things…system architecture and AWS CloudFormation come to mind.

In the end, this blog is intended to help you. For that to happen, please provide input and feedback via Twitter, email, or comments on the post.

  • E-mail: blog at joehack3r
  • Twitter: joehack3r

Thank you.

Command Cheat Sheet Introduction

Every developer, systems engineer, network admin, etc. has some sort of document or “cheat sheet” with reference commands. Sometimes these are simple references while learning a new application. Other times you discover a command that is extremely valuable even though it is infrequently used. If you have ever forgotten one of those commands (especially the valuable ones), you understand the necessity of writing it down for future reference. Some commands make the cheat sheet because they are used in an often overlooked way or in a manner not previously considered. And sometimes you just want a quick reference instead of reading the man page.

Here is a personal example for Windows administration:

sc \\<hostname> config <WindowsService> obj= <domain>\<username> password= <password>

This command let me connect to a remote system and update a Windows service’s logon account name and password. As you can see, it is a straightforward, easy-to-use command. Still, I would often go months before needing to update the account credentials and would forget the syntax that made it such an easy process. It was an easy decision to add it to my cheat sheet.
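
To double-check the change afterwards, the service configuration can be queried with the same placeholder values to show the account the service will run under:

sc \\<hostname> qc <WindowsService>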

I shared the command with many other system administrators, particularly Unix admins who had to cross into the Windows world (NSClient++, anybody?). As I continued to share more cheat sheet commands, everybody else shared more too. I want to continue that sharing with a larger community: you!

To help with that, there is a page called “Command Cheat Sheets” that I will update on a regular basis to include more and more cheat sheet commands. If you have a suggestion for a cheat sheet command, send it to me.

Octopress CloudFormation Template

Usually one of the first things I do when experimenting with or installing new software is determine the repeatable steps to get started. Once that is done, I translate the steps for use in any number of tools so I can repeatably create a clean environment when needed. Two of my favorite tools for doing this are AWS CloudFormation and Ansible. Given Octopress’ usage pattern, I translated these steps into a CloudFormation template.

Some of my thoughts when creating this template: use a “source” S3 bucket to store the source (markdown) files for the blog and a “destination” S3 bucket to host the blog itself. I also wanted to make it easy to start and stop the instance so it is available only when needed, which helps save costs. As such, the CloudFormation template includes permissions to read from one specific S3 bucket and write to another, and it places the Octopress instance in an Auto Scaling group. When creating the CloudFormation stack, I set the minimum and desired size to 0 and the maximum size to 1. When I need an instance, I change the desired size to 1 and wait a couple of minutes. When I’m done, I set the desired size back to 0, the instance terminates, and I’m billed only for the hours it was running.
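
In practice, flipping the desired capacity is a one-liner with the AWS CLI; the Auto Scaling group name here is a placeholder for whatever name the stack generated:

# Spin up the blogging instance
aws autoscaling set-desired-capacity --auto-scaling-group-name octopress-asg --desired-capacity 1

# Done blogging: scale back to zero and the instance terminates
aws autoscaling set-desired-capacity --auto-scaling-group-name octopress-asg --desired-capacity 0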

People may ask why not just use your laptop or desktop for Octopress blogging. The answer is you certainly can. Personally, I do so much experimenting with and changing of my local operating system that it is nice to have a clean environment available if and when I need it. AWS provides this for pennies an hour.

Many users may want to use GitHub to store their files. This can be accommodated by modifying the template accordingly. If you are interested in this or have other ideas, start a discussion about it and we’ll work on it together.

Happy blogging.

New Year, New Start

With the new year, I decided to move this blog from WordPress to Octopress. I’ll get into details later.

In the meantime, please bear with me as I get started with Octopress.