DEVELOPER BLOG

【Introduction to Containers】Tips for handling ECS container logs in AWS - PrismScaler

1. Introduction

Hello! We are the writer team from Definer Inc. In this article, we share some tips for handling ECS container logs in AWS.

In Amazon Elastic Container Service (ECS), container logs play a crucial role in monitoring, troubleshooting, and understanding the behavior of containerized applications. ECS provides a streamlined way to handle and manage container logs, making it easier to gain insight into your application's performance and diagnose any issues that arise. When you run containers in ECS, each container generates its own logs, capturing valuable information about its execution, errors, and output. These details help you understand how your application is behaving, track down bugs, and ensure optimal performance. Let's walk through the actual screens and resources in detail.

2. Purpose/Use Cases

This article explains how to properly manage container logs. It collects information and practices that can help IT professionals implement log management when operating ECS containers on AWS. Container logs serve several important purposes:
  1. Troubleshooting and Debugging: Logs are invaluable for troubleshooting and debugging issues within your ECS containers. By managing and analyzing logs, you can identify errors, exceptions, or unexpected behavior in your application.
  2. Performance Monitoring: ECS logs enable you to monitor the performance of your containers and identify any bottlenecks or performance issues. By analyzing logs, you can gain visibility into resource utilization, response times, and other performance metrics.
  3. Security and Compliance: Log management is crucial for security and compliance purposes. By collecting and analyzing logs, you can detect and respond to security incidents, such as unauthorized access attempts or suspicious activity within your containers.
  4. Capacity Planning and Scaling: Logs provide insights into the resource usage of your containers, allowing you to make informed decisions about capacity planning and scaling.
  5. Application Monitoring and Analytics: Logs serve as a valuable source of information for monitoring and analyzing the behavior of your application.
  6. Centralized Log Management: Managing logs centrally simplifies log aggregation, storage, and analysis. With a centralized log management solution, you can consolidate logs from multiple ECS clusters or services into a single location.
  7. Operational Insights and Anomaly Detection: By leveraging logs, you can gain operational insights and detect anomalies within your ECS containers. Through log analysis, you can identify patterns, spot outliers, and detect unusual behavior. This allows you to proactively respond to issues, automate alerting and notification systems, and ensure the smooth operation of your application.

3. How to handle ECS container logs

We will consider how to handle logs when operating containers with ECS.

Because ECS Docker containers are disposable and replaced on every deployment, application logs cannot be stored inside the container. Application logs must therefore be managed in external storage or an external service. The following is a summary of some options for ECS container log management.

(1) CloudWatch Logs

The most popular method is to use CloudWatch Logs. CloudWatch Logs provides real-time streaming and centralized management of logs from containers running in ECS tasks. Its key features include:
・Real-Time Log Streaming
・Centralized Log Management
・Custom Log Filtering
・Integration with AWS Services
・Simplified Log Retention
・Support for Encryption
・Seamless Integration with ECS

The advantages are as follows:
・It is an AWS managed service and easy to integrate with ECS.
・Costs can be reduced by exporting older logs to S3.
・Visualization is easy with Elasticsearch/OpenSearch.

On the other hand, there are some disadvantages:
・Container logs must be written to standard output to be forwarded to CloudWatch Logs.
・Log stream units can be hard to tell apart (it is important to be able to identify which container the logs originated from, for example by including the startup date and time in the host name).

(2) Log collection service

Another option is to use a log collection and analysis service such as Fluentd. A log collection service offers benefits including:
・Centralized Log Management
・Efficient Log Retrieval
・Scalability and Elasticity
・Real-Time Log Streaming
・Advanced Log Search and Analysis
・Alerting and Notifications
・Integration with Other Services
・Compliance and Auditing

The main advantage is that flexible customization is possible. The disadvantages are the service fee, the cost of integrating with AWS, configuration and maintenance effort, scalability challenges, infrastructure dependency, complexity and learning curve, data security and compliance considerations, and vendor lock-in risk.
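As a sketch of option (2), ECS can route container logs to Fluentd or Fluent Bit through FireLens: a log-router sidecar container runs alongside the application container, whose logConfiguration uses the awsfirelens driver. The fragment below is illustrative, not a complete task definition; the container names, group name "ecs/test-firelens", and stream prefix "app-" are assumptions for this example.

```json
{
    "containerDefinitions": [
        {
            "name": "log_router",
            "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
            "essential": true,
            "firelensConfiguration": {
                "type": "fluentbit"
            }
        },
        {
            "name": "app",
            "image": "${account number}.dkr.ecr.ap-northeast-1.amazonaws.com/test:610",
            "essential": true,
            "logConfiguration": {
                "logDriver": "awsfirelens",
                "options": {
                    "Name": "cloudwatch_logs",
                    "region": "ap-northeast-1",
                    "log_group_name": "ecs/test-firelens",
                    "log_stream_prefix": "app-",
                    "auto_create_group": "true"
                }
            }
        }
    ]
}
```

Here the options block is passed through to the Fluent Bit cloudwatch_logs output plugin, so the same pattern can point at Elasticsearch, Kinesis, or a third-party service simply by changing the output plugin options.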

4. How to configure CloudWatch Logs for an ECS task definition

Let's start with the CloudWatch Logs configuration in the ECS task definition. As a precondition, we assume that the application in the container writes its logs to standard output (STDOUT).

(1) Create a CloudWatch Logs group

Go to the CloudWatch console → Log groups screen → choose Create log group.

(2) Prepare a JSON file as input

This time, the following input file is used. The key point is the logConfiguration block, where "awslogs" is specified as the log driver.
・"awslogs-create-group": "true": Specifies that a new log group should be created if it does not already exist. Setting it to "true" ensures the log group "ecs/test" is created (the task execution role needs the logs:CreateLogGroup permission for this to work).
・"awslogs-group": "ecs/test": Specifies the name of the log group where the container logs will be stored. In this case, the log group is named "ecs/test".
・"awslogs-region": "ap-northeast-1": Indicates the AWS region where the log group and logs will be stored. In this example, the logs are stored in the "ap-northeast-1" region.
・"awslogs-stream-prefix": "test": Defines a prefix for the log stream name. A log stream represents an individual source of logs within the log group. In this case, the stream name will have the prefix "test".
{
    "family": "test",
    "containerDefinitions": [
        {
            "name": "test",
            "image": "${account number}.dkr.ecr.ap-northeast-1.amazonaws.com/test:610",
            "cpu": 512,
            "memory": 1024,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "ecs/test",
                    "awslogs-region": "ap-northeast-1",
                    "awslogs-stream-prefix": "test"
                }
            }
        }
    ],
    "taskRoleArn": "arn:aws:iam::${account number}:role/${IAM Task Role Name}",
    "executionRoleArn": "arn:aws:iam::${account number}:role/${IAM task execution role name}",
    "networkMode": "awsvpc",
    "requiresCompatibilities": [
        "EC2",
        "FARGATE"
    ],
    "cpu": "512",
    "memory": "1024",
    "runtimePlatform": {
        "cpuArchitecture": "X86_64",
        "operatingSystemFamily": "LINUX"
    }
}                
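A note on how the resulting streams are named: with the awslogs driver, each container's log stream follows the pattern awslogs-stream-prefix/container-name/ECS-task-ID. A quick sketch with this task definition's values (the task ID shown is illustrative):

```shell
# awslogs log stream naming: <awslogs-stream-prefix>/<container name>/<ECS task ID>
prefix="test"          # awslogs-stream-prefix from the task definition above
container="test"       # container name from the task definition above
task_id="1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d"   # illustrative ECS task ID
printf '%s/%s/%s\n' "$prefix" "$container" "$task_id"
# → test/test/1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d
```

Because the task ID is part of the stream name, a meaningful prefix makes it much easier to tell which service or container a stream belongs to, which addresses the "log stream units are difficult to see" drawback mentioned earlier.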
 

(3) Update ECS task definition

Go to the ECS console, "Task Definition" → Task Details screen → "Create New Revision" → "Create New Revision Using JSON".

Copy and paste the JSON file that will be the input and save it.

 

(4) Register the task definition:

Use the register-task-definition command to register the task definition with the container log configuration. Note that task definitions are immutable: there is no update command, and each registration creates a new revision.

aws ecs register-task-definition --cli-input-json file://task-definition.json

(5) Run or update your ECS task:

Use the run-task or update-service command to run your ECS task, or to update your service to the newly registered revision.

aws ecs run-task --cluster my-cluster --task-definition my-task
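Once the task is running, you can confirm that logs are arriving. The commands below are a sketch that requires an AWS environment with the appropriate permissions; the group name and region come from the task definition above.

```shell
# List the log streams the awslogs driver has created in the group
aws logs describe-log-streams --log-group-name ecs/test --region ap-northeast-1

# Tail the log group live (requires AWS CLI v2)
aws logs tail ecs/test --follow --region ap-northeast-1
```

If no streams appear, the usual suspects are a missing logs:CreateLogGroup permission on the task execution role, or an application that writes to a file instead of STDOUT.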

5. Goals for ECS container logs in AWS

The goals for managing ECS (Elastic Container Service) container logs in AWS are to ensure efficient log collection, centralization, and analysis, which are essential for monitoring and troubleshooting containerized applications running on ECS. Achieving these goals helps to enhance operational visibility, detect issues early, and optimize the overall performance of containerized workloads. Some specific goals for ECS container logs in AWS include:
  1. Centralized Log Collection: The primary goal is to aggregate logs from multiple ECS container instances and tasks into a centralized location. Centralized log collection simplifies log management, allows for unified log analysis, and ensures that logs are easily accessible for troubleshooting and monitoring purposes.
  2. Real-time Log Streaming: The ability to stream logs in real-time enables rapid detection and response to critical events or issues. It allows operators to monitor application behavior as it happens and respond quickly to anomalies or errors.
  3. Search and Query Capabilities: The logging solution should offer efficient log search and querying capabilities to quickly identify specific events or patterns within vast log datasets. Being able to run complex queries on logs helps in troubleshooting and root cause analysis.
  4. Security and Access Control: Logs can contain sensitive information, and access to them should be appropriately controlled and secured. Implementing fine-grained access control ensures that only authorized personnel can access and analyze the logs.
  5. Log Visualization and Dashboards: Presenting log data in a visual format through dashboards and graphs aids in understanding the system's health and performance at a glance. Customizable dashboards can help teams monitor key metrics and trends effectively.
  6. Alerting and Monitoring Integration: Integrate logging with alerting and monitoring systems like Amazon CloudWatch Alarms to receive notifications on specific log events or patterns that require immediate attention.
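As one concrete sketch of goal 6, a CloudWatch metric filter can turn ERROR lines in the "ecs/test" log group into a metric that an alarm watches. The filter name, metric name, and namespace below are illustrative assumptions, not fixed AWS names.

```shell
# Count log lines containing "ERROR" in the container log group
aws logs put-metric-filter \
    --log-group-name ecs/test \
    --filter-name ecs-test-errors \
    --filter-pattern '"ERROR"' \
    --metric-transformations metricName=EcsTestErrorCount,metricNamespace=ECS/Test,metricValue=1

# Alarm when any error is logged within a 5-minute window
aws cloudwatch put-metric-alarm \
    --alarm-name ecs-test-errors \
    --metric-name EcsTestErrorCount \
    --namespace ECS/Test \
    --statistic Sum --period 300 --evaluation-periods 1 \
    --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold
```

Attaching an SNS topic to the alarm via --alarm-actions would then complete the notification path described above.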

6. Cited/Referenced Articles

7. About the proprietary solution "PrismScaler"

・PrismScaler is a web service that enables the construction of multi-cloud infrastructures such as AWS, Azure, and GCP in just three steps, without requiring development and operation work.
・The solution is designed for a wide range of usage scenarios, such as cloud infrastructure construction/cloud migration, cloud maintenance and operation, and cost optimization, and can easily realize several hundred high-quality, general-purpose cloud infrastructures by appropriately combining IaaS and PaaS.

8. Contact us

This article provides useful introductory information free of charge. For consultation and inquiries, please contact "Definer Inc".

9. Regarding Definer

・Definer Inc. provides one-stop solutions from upstream to downstream of IT.
・We are committed to providing integrated support for advanced IT technologies such as AI and cloud IT infrastructure, from consulting to requirement definition, design, development, implementation, maintenance, and operation.
・PrismScaler provides high-quality, rapid auto-configuration, auto-monitoring, problem detection, and configuration visualization for multi-cloud IT infrastructure such as AWS, Azure, and GCP.