
Understanding and Leveraging AWS CloudWatch Logs

Learning Objectives

• Understand the different ways you can leverage CloudWatch Logs

Amazon Web Services (AWS) CloudWatch collects metrics and logs from across the services you use within AWS, from memory usage on EC2 instances to queue sizes in SQS. In combination with CloudTrail, IAM, and a few other services, CloudWatch can collect data related to the infrastructure that underpins your applications on AWS.

CloudWatch processes data as it arrives, which might be in real time or delayed by a few minutes, depending on the service (5 minutes is the default delay on EC2 instances). You can use the collected data to build custom dashboards for visualization, create alarms that trigger on specific conditions, and even search through the raw log data group by group.
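As a minimal sketch of those last two capabilities, the snippet below creates an alarm and searches a log group with boto3. The alarm name, log group, and thresholds are hypothetical examples, not values from this article.

```python
# Sketch: creating an alarm and searching a log group with boto3.
import boto3

cloudwatch = boto3.client("cloudwatch")
logs = boto3.client("logs")

# Alarm that fires when average EC2 CPU utilization exceeds 80% over 5 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",              # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)

# Search the raw events in a single log group for the word "ERROR".
response = logs.filter_log_events(
    logGroupName="/aws/lambda/my-function",    # hypothetical log group
    filterPattern="ERROR",
    limit=50,
)
for event in response["events"]:
    print(event["timestamp"], event["message"])
```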

Exporting CloudWatch Logs to S3

CloudWatch retains logs indefinitely by default, so you don’t technically need to export them to another service for archiving. However, it can be challenging to search through an endless volume of data. S3 is cheaper (per GB) and still allows the logs to be accessible if you need them. Once you decide to export the data to S3, it makes sense to configure your logs to expire in CloudWatch: enable log expiration on the log group, then select the length of time to keep the log data (anywhere from 1 day to 10 years). After the set period expires, CloudWatch purges the log data.
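Here is a brief sketch of setting that retention period with boto3; the log group name is a hypothetical example, and retentionInDays only accepts fixed values (1, 30, 365, 3653, and so on), with 3653 days being the 10-year maximum.

```python
# Sketch: setting a retention period on a CloudWatch log group with boto3.
import boto3

logs = boto3.client("logs")
logs.put_retention_policy(
    logGroupName="/aws/lambda/my-function",  # hypothetical log group
    retentionInDays=30,                      # purge events older than 30 days
)
```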

Exporting data to S3 is a manual process that you can initiate quickly via the command line or with a couple of clicks in the UI. However, it can take up to 12 hours for the data to become available for export, and you’ll need to do some prep work beforehand, such as creating an S3 bucket with the proper IAM and bucket permissions. You can find the instructions for exporting logs from CloudWatch to S3 in the AWS documentation.
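Assuming that prep work is done, a manual export can be started with a single API call. In this sketch, the task name, log group, bucket, and prefix are hypothetical, and the bucket policy must already allow CloudWatch Logs to write to the bucket.

```python
# Sketch: starting a manual export of a log group to S3 with boto3.
import time
import boto3

logs = boto3.client("logs")
now_ms = int(time.time() * 1000)

task = logs.create_export_task(
    taskName="export-last-24h",               # hypothetical task name
    logGroupName="/aws/lambda/my-function",    # hypothetical log group
    fromTime=now_ms - 24 * 60 * 60 * 1000,     # start of the export window (ms)
    to=now_ms,                                 # end of the export window (ms)
    destination="my-log-archive-bucket",       # hypothetical S3 bucket
    destinationPrefix="cloudwatch-exports",
)
print("Export task started:", task["taskId"])
```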

Exporting CloudWatch Logs to Other Destinations

So what else can you do with the logs that pass through CloudWatch besides storing them in S3? One of the most valuable things you can do is take those exported logs and import them into a third-party platform with advanced log management capabilities. Mezmo, formerly known as LogDNA, for example, can give you a holistic view into all of your applications, wherever they run and however they write log files, and that’s just the start of what Mezmo can do. That holistic view enables better, more comprehensive insights.

There are two ways to send the logs to an external system. The first is to have the external system pick up the log files archived to S3. This method is more labor-intensive, and it does not operate in real time.
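A rough sketch of that pickup approach is below: list and read the exported objects from the archive bucket so an external system can ingest them. The bucket name and prefix are hypothetical, and exported objects are gzip-compressed.

```python
# Sketch: reading CloudWatch log files exported to S3 so they can be
# forwarded to an external log management system.
import gzip
import boto3

s3 = boto3.client("s3")
bucket = "my-log-archive-bucket"              # hypothetical archive bucket
prefix = "cloudwatch-exports/"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
        for line in gzip.decompress(body).splitlines():
            print(line.decode("utf-8"))       # forward each log line downstream
```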

Alternatively, you can create a function in AWS Lambda that subscribes to a log group in CloudWatch. This is the recommended approach, since the subscription sends the data to the external log system as soon as CloudWatch receives it. You can find more details and instructions for integrating Mezmo with CloudWatch and Lambda in Mezmo’s documentation. For a deeper dive into how AWS Lambda accesses CloudWatch’s logs, you can check out the AWS CloudWatch documentation.
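To illustrate the general shape of such a function (not Mezmo’s specific integration, which is covered in their documentation), here is a minimal Lambda handler for a CloudWatch Logs subscription filter. The payload arrives base64-encoded and gzip-compressed; the forwarding step is left as a placeholder because the destination API varies by platform.

```python
# Sketch: a Lambda handler that receives events from a CloudWatch Logs
# subscription filter and prints them (stand-in for forwarding downstream).
import base64
import gzip
import json

def handler(event, context):
    # Decode the compressed payload CloudWatch Logs delivers to subscribers.
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    data = json.loads(payload)

    for log_event in data["logEvents"]:
        # Replace this print with a call to your log platform's ingestion API.
        print(data["logGroup"], log_event["timestamp"], log_event["message"])

    return {"statusCode": 200}
```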

Conclusion

Mezmo can import logs collected by CloudWatch and include them in your enterprise dataset. Everyone from your production SRE team to individual DevOps engineers can build a holistic view of all aspects of the applications they deliver to support your business.
