
ecs-cloudwatch-logs's Introduction

ecs-cloudwatch-logs

This repository provides the assets referred to in the [blog post on using Amazon ECS and Amazon CloudWatch Logs](http://blogs.aws.amazon.com/application-management/post/TxFRDMTMILAA8X/Send-ECS-Container-Logs-to-CloudWatch-Logs-for-Centralized-Monitoring).

You can use Amazon CloudWatch to monitor and troubleshoot your systems and applications using your existing system, application, and custom log files. You can send your existing log files to CloudWatch Logs and monitor these logs in near real-time.


ecs-cloudwatch-logs's Issues

Auto discovery for logs by directory

As far as I understand, for the agent to recognize a log file you must specify the file name in the conf file. This approach requires me to change the conf every time I add a new custom log file to my log directory.

Would it be possible to specify a log directory and have the agent auto-discover the .log files within it? The configuration settings would be propagated to each discovered log file, and the log_stream_name would take the configured standard and append the discovered file's name.
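For reference, the CloudWatch Logs agent's file setting does accept wildcards, which gets partway there; a hedged sketch (paths, section name, and datetime format are illustrative):

```ini
; /etc/awslogs/awslogs.conf -- illustrative section. Note that with a
; wildcard, all matching files feed a single stream, so truly separate
; per-file streams still require one section per file.
[custom-logs]
file = /var/log/myapp/*.log
log_group_name = myapp
log_stream_name = {instance_id}-custom
datetime_format = %Y-%m-%d %H:%M:%S
```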

Unable to use TCP in --gelf-address

Hello,

We run our application on AWS ECS and are trying to collect our container logs over a TCP port using the gelf log driver, but when I set --log-opt gelf-address=tcp://ipaddress:port, my container does not start.

docker: Error response from daemon: gelf: endpoint needs to be UDP.

Kindly suggest how to use TCP in the gelf-address option.

My Docker version is 1.12.

Regards,
Raja

Auto-discovering log files for AWS CloudWatch

I have the following directory structure:

  • /opt/workspace/module1/logs/module1.log
  • /opt/workspace/module2/logs/module2.log
  • /opt/workspace/module3/logs/module3.log

Is there a good way for AWS CloudWatch to auto-discover all the log files under the parent workspace folder?

What we are trying to achieve is a single log group with multiple log streams, one per module, so for the above there would be 3 log streams under 1 log group:

  • module1 log stream
  • module2 log stream
  • module3 log stream

We do not want to register a new log stream by hand every time there is a new module.
If this is not supported out of the box, could you please let me know the best way to automate this?
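One way to automate this outside the agent is to generate the agent config from a glob before starting it. A minimal sketch, assuming the awslogs agent's INI section layout; `render_agent_sections` is a hypothetical helper, not part of any AWS tooling:

```python
import os
from glob import glob

def render_agent_sections(log_paths, log_group="workspace"):
    """Render one awslogs-agent config section per discovered log file,
    so each module becomes its own stream under a single log group."""
    sections = []
    for path in sorted(log_paths):
        # module1.log -> stream name "module1"
        module = os.path.splitext(os.path.basename(path))[0]
        sections.append(
            "[{m}]\n"
            "file = {p}\n"
            "log_group_name = {g}\n"
            "log_stream_name = {m}\n".format(m=module, p=path, g=log_group)
        )
    return "\n".join(sections)

# Discover module logs and write the config before launching the agent:
# config_text = render_agent_sections(glob("/opt/workspace/*/logs/*.log"))
```

Run from cron or a wrapper script, this would pick up new modules without manual stream registration.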

Please note I have already checked JIRA, this is not my use case: [https://github.com//issues/1]

awslogs driver not capturing Docker image name, ECS service name, and task definition

Hello,

I am using the awslogs driver to collect my ECS container logs, and I enabled a Lambda function (LogsToElasticsearch_mytestloges) to stream the logs to AWS Elasticsearch, but it does not include the ECS service name, container instance name, task definition, or Docker image name.

Below is a sample log document from AWS Elasticsearch. I can't find the Docker image name, ECS service name, or task definition in it.

{
  "_index": "cwl-2017.01.03",
  "_type": "mytestloges",
  "_id": "33081504929870395169036308436466460047129742806680469507",
  "_score": 1,
  "_source": {
    "Message": 1482993472485,
    "Request": "request",
    "Rsponse": "request received",
    "Method": "GET",
    "Info": "INFO",
    "Information": "services-shelters",
    "URL": "/favicon.ico",
    "Plugin": "0|app |",
    "@id": "33081504929870395169036308436466460047129742806680469507",
    "@timestamp": "2017-01-03T06:46:25.785Z",
    "@message": "\"0|app |\" 1482993472485 INFO \"services-shelters\" request \"GET\" \"/favicon.ico\" \"request received\"",
    "@owner": "855158544446",
    "@log_group": "mytestloges",
    "@log_stream": "nodejs-app/blue-green-task/9f158062-346b-4a07-b2d3-e6bdd9321dee"
  },
  "fields": {
    "@timestamp": [
      1483425985785
    ]
  }
}

Could someone kindly suggest how to proceed further?

Regards,
Raja
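For what it's worth, the awslogs driver forwards only the raw message; the task metadata it does carry lives in the stream name, which (when awslogs-stream-prefix is set) follows prefix-name/container-name/ecs-task-id. A hedged sketch of recovering those parts inside the Lambda, where `parse_log_stream` is a hypothetical helper:

```python
def parse_log_stream(stream_name):
    """Split an awslogs stream name of the form
    prefix-name/container-name/ecs-task-id into its parts."""
    prefix, container, task_id = stream_name.split("/", 2)
    return {
        "stream_prefix": prefix,
        "container_name": container,
        "ecs_task_id": task_id,
    }

# e.g. for the sample document above:
# parse_log_stream("nodejs-app/blue-green-task/9f158062-346b-4a07-b2d3-e6bdd9321dee")
```

The Docker image name and ECS service name are not in the stream name at all; those would have to be looked up via the ECS API from the task ID and added to the document before indexing.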

I want to integrate this with my app, which uses the Gunicorn server

In Gunicorn you configure syslog by giving it the syslog address. How do you find the syslog URL for the CloudWatch container?

This is how you specify the syslog address in Gunicorn:

syslog_addr
--log-syslog-to SYSLOG_ADDR
udp://localhost:514
The address to send syslog messages to. The address is a string of the form:

  • unix://PATH#TYPE : for a Unix domain socket. TYPE can be stream for the stream driver or dgram for the dgram driver; stream is the default.
  • udp://HOST:PORT : for UDP sockets
  • tcp://HOST:PORT : for TCP sockets
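Assuming the CloudWatch container from the blog post runs rsyslog listening on UDP 514 and that port is published on the host, one hedged sketch of the Gunicorn side (the host and port are assumptions about your particular port mapping, not values from this repository):

```python
# gunicorn.conf.py -- point Gunicorn's syslog output at the rsyslog
# listener exposed by the CloudWatch logging container. The address
# below assumes a -p 514:514/udp mapping on the same host.
syslog = True
syslog_addr = "udp://localhost:514"
```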

How can I apply a colon-delimited filter in a CloudWatch Logs filter pattern?

Hi team,

I am not sure whether this is the correct place for my question. I am trying to apply a filter pattern while passing my CloudWatch logs to AWS ES. Assume my logs look like the lines below.

Running:on:http://localhost:8081
Running:on:http://localhost:8081
Running:on:http://localhost:8081
Running:on:http://localhost:8081
Running:on:http://localhost:8081
Running:on:http://localhost:8081

How can I split on colons in an AWS CloudWatch filter pattern? Could someone kindly suggest how to do this?
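As far as I know, CloudWatch Logs filter patterns only support the bracketed [w1, w2, ...] field syntax on space-delimited logs, so a colon cannot be used as the delimiter there. One workaround is to split the message inside the subscription's Lambda before indexing into Elasticsearch; a minimal sketch (the field names are made up):

```python
def split_colon_message(message):
    """Split "Running:on:http://localhost:8081" into named fields,
    limiting the split so the URL's own colons stay intact."""
    verb, prep, target = message.split(":", 2)
    return {"verb": verb, "prep": prep, "target": target}
```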

Regards,
Raja

logrotate?

Since you are not running cron, you won't have logrotate, and the logs will eventually take up a lot of disk space in the container, right? It would be really nice if we could just forward the logs off in the rsyslog config rather than writing them to a file and then loading them back up.
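Forwarding in rsyslog instead of writing to disk is possible in principle; a hedged sketch of the relevant rsyslog.conf lines (the destination host and port are placeholders, not anything configured by this repository):

```
# /etc/rsyslog.d/forward.conf -- send everything to a remote collector
# instead of local files. A single @ means UDP; @@ means TCP.
*.* @@logs.example.internal:514
```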

Failed to create a second prefixed stream in the same log group

Hi,

I ran into an issue but am not sure whether it is a bug.

I have TaskDefinition #1 configured to send container logs to the CloudWatch log group named LG1. The stream prefix was created successfully for TaskDefinition #1. A second task definition then failed to create its stream prefix in the same log group LG1, with the error message "LG1 already exists in stack" (referencing the ARN of TaskDefinition #1's stack).

I wonder whether TaskDefinition #2 cannot be added to the existing LG1.

Thanks In Advance,
DT
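The "already exists in stack" wording suggests both task definitions' templates each declare an AWS::Logs::LogGroup named LG1, and CloudFormation can only own a named resource once. A hedged sketch of one fix, assuming CloudFormation is in play: declare the log group in a single place and have both task definitions merely reference the name:

```yaml
# Declared once (e.g. in a shared stack); each task definition's
# logConfiguration then reuses the group name with its own
# awslogs-stream-prefix instead of creating the group itself.
Resources:
  SharedLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: LG1
```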

cloudwatch logs vs logstash

Hi Team,

It is a quick question; I am just curious why Logstash is considered the best tool for sending logs to Elasticsearch. We run all our applications on AWS, so we are planning to use AWS CloudWatch Logs to collect application logs and send them to Elasticsearch. Please give some suggestions on best practices.

Regards,
Raja
