Automated gathering of RDS/Aurora Metrics


Written by Phoebe Farber

This document will guide you in automating the collection of RDS/Aurora CloudWatch metrics for assessing your database workload(s). The collected metrics can be used to provide placement guidance for right-sizing purposes. You can run this automation from either an EC2 instance or an AWS CloudShell environment.


Preparing the Environment

Using AWS CloudShell

  • AWS CloudShell is a browser-based shell that gives you command-line access to your AWS resources in the selected AWS Region. AWS CloudShell comes pre-installed with popular tools for resource management and creation.

  • Tessell recommends using AWS CloudShell because it has all of the prerequisites and authentication preloaded, which lets you skip many of the steps in this document. To use CloudShell, you will need access to the AWS console. You can learn more here: https://aws.amazon.com/cloudshell/

  • Launch CloudShell from the AWS console.

  • Copy the automation script into the CloudShell terminal using the Upload file option:

CloudShell > Actions > Upload file

  • Make the file executable:

shell> chmod a+x tessell_fetch_rds_aurora_metrics.sh

  • Verify the automation script using the step below:

shell> bash tessell_fetch_rds_aurora_metrics.sh -h

  • You can now proceed to the steps for executing the automation.

Using an AWS EC2 Instance

Prerequisites

The following are required to run this automation:

  • getopt

  • AWS CLI

  • zip

  • AWS access keys with read-only CloudWatch permission (see the example after this list)
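
If you are creating a dedicated IAM user for this automation, one way to grant the required read-only CloudWatch access is to attach the AWS managed policy CloudWatchReadOnlyAccess. A minimal sketch, assuming a hypothetical IAM user named tessell-metrics-reader:

shell> aws iam attach-user-policy --user-name tessell-metrics-reader --policy-arn arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess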

Installation

To install the dependent packages and automation, follow these steps:

getopt

  • Debian/Ubuntu: apt-get install util-linux

  • CentOS/RedHat: yum install util-linux

  • MacOS: brew install gnu-getopt
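
To confirm that the enhanced (util-linux/GNU) getopt is the one on your PATH rather than a BSD getopt, you can use its self-test option; the enhanced version exits with status 4. On macOS, the Homebrew gnu-getopt formula is keg-only, so you may need to add it to your PATH first.

shell> getopt --test; echo $?
4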

AWS CLI

Installation instructions can be found in the AWS CLI documentation.

zip

  • Debian/Ubuntu: apt-get install zip

  • CentOS/RedHat: yum install zip

Deploy Code

Copy the attached file “tessell_fetch_rds_aurora_metrics.sh” to a location where you plan to execute the automation and make the file executable.

shell> chmod a+x tessell_fetch_rds_aurora_metrics.sh

Verify Installation

To validate the installation of dependent packages and the automation code, follow these steps:

shell> which getopt

shell> which aws

shell> which zip

shell> bash tessell_fetch_rds_aurora_metrics.sh -h
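
Optionally, here is a quick sketch that checks all three prerequisites in one pass (purely a convenience; the individual checks above are sufficient):

shell> for cmd in getopt aws zip; do command -v "$cmd" >/dev/null || echo "missing: $cmd"; done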

Setup AWS Credentials

Configure AWS credentials in the host environment where you run this automation.

You can create programmatic access keys (an access key ID and secret access key) from the AWS Console under IAM; refer to the AWS IAM documentation for details.

shell> aws configure

AWS Access Key ID : [****************]

AWS Secret Access Key : [****************]

Default region name : us-east-1

Default output format : None
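
To confirm that the configured credentials work before running the automation, you can query the caller identity:

shell> aws sts get-caller-identity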


Steps for Executing the Automation

Prepare Source Database List

Create a text file with the list of RDS/Aurora Database identifier name(s) separated by a new line.

You can get the DB identifier name from the AWS RDS console.

You can specify any number of DB instances in the text file, but ensure that they all run in the same region. If the instances are in different regions, gather the information separately for each region (a CLI sketch for building this list follows the example below).

Example: If you have one RDS instance and one Aurora cluster with 1 writer and 4 reader nodes, the file should look like the following:

cat /tmp/db_list.txt

my-rds-service

my-cluster-writer

my-cluster-reader1

my-cluster-reader2

my-cluster-reader3

my-cluster-reader4
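
If you have many databases, the list can also be built with the AWS CLI instead of copying names from the console. A sketch, assuming the AWS CLI is configured for the region you want to analyze (trim the resulting file down to the databases you actually care about):

shell> aws rds describe-db-instances --query 'DBInstances[].DBInstanceIdentifier' --output text | tr '\t' '\n' > /tmp/db_list.txt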

Execute Automation

Run the automation with the number of days to be analyzed and the CloudWatch metric sampling interval. This will take a few minutes, depending on the number of DB instances to be analyzed.

For example, to gather metrics for an Aurora cluster for the last 7 days with a sampling interval of 86400 seconds, run:

shell> time bash tessell_fetch_rds_aurora_metrics.sh --type aurora --days 7 --period 86400 --file /tmp/db_list.txt
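
Similarly, here is a sketch for RDS instances over the last 30 days with hourly (3600-second) granularity; the values are illustrative:

shell> time bash tessell_fetch_rds_aurora_metrics.sh --type rds --days 30 --period 3600 --file /tmp/db_list.txt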

View Logs and Metrics

The generated logs and metrics can be found in the /tmp/get_metrics* folder. You can import the generated CSV file into a spreadsheet (Google Sheets or Excel) for better visibility and easier database workload analysis.

Open Google Sheets > File > Import > Upload > Drop csv file
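
Before importing, you can quickly inspect what was generated. The directory pattern below comes from the note above; the CSV file names inside are produced by the script and may differ from this sketch:

shell> ls -ld /tmp/get_metrics*
shell> head -n 5 /tmp/get_metrics*/*.csv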

Sample Output

Usage Tips

tessell_fetch_rds_aurora_metrics.sh --type <type> --file <input-file> --period <period> --days <number of days>

where,

--help | -h : Display this help.

--type | -t : Please provide db type either RDS or AURORA to collect metrics [Mandatory].

--file | -f : Please provide a file containing a db identifier list separated by new lines [Mandatory].

--days | -d : Please provide a number of days to determine the first data point and last data point for metrics.

--start_date | -s : Please provide a start date to determine the first data point in the format 'YYYY-MM-DDTHH:MM:SS'.

--end_date | -e : Please provide an end date to determine the last data point in the format 'YYYY-MM-DDTHH:MM:SS'.

--period | -p : The granularity, in seconds, of the returned data points. Default value is 86400 seconds.

Note: The options -d and -s/-e are mutually exclusive; provide only one of them.

Example:

tessell_fetch_rds_aurora_metrics.sh --type <type> --file /path/to/db_list.txt --period <period> --days <number of days>

tessell_fetch_rds_aurora_metrics.sh --type <type> --file /path/to/db_list.txt --period <period> --start_date '2020-12-10T00:00:00' --end_date '2020-12-10T23:59:59'
