logstash-plugins / logstash-output-s3
License: Apache License 2.0
http://build-eu-1.elasticsearch.org/view/LS%20Master/job/Logstash_Master_Default_Plugins/612/console
1) LogStash::Outputs::S3#generate_temporary_filename should not add the tags to the filename
Failure/Error: s3 = LogStash::Outputs::S3.new(config)
LogStash::ConfigurationError:
The setting `tags` in plugin `s3` is obsolete and is no longer available. You can achieve similar behavior with the new conditionals, like: `if "sometag" in [tags] { s3 { ... } }` If you have any questions about this, you are invited to visit https://discuss.elastic.co/c/logstash and ask.
# ./lib/logstash/config/mixin.rb:82:in `config_init'
# ./lib/logstash/config/mixin.rb:66:in `config_init'
# ./lib/logstash/outputs/base.rb:44:in `initialize'
# ./vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.1/spec/outputs/s3_spec.rb:84:in `(root)'
# ./vendor/bundle/jruby/1.9/gems/rspec-wait-0.0.7/lib/rspec/wait.rb:46:in `(root)'
# ./rakelib/test.rake:47:in `(root)'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:240:in `execute'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:235:in `execute'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:179:in `invoke_with_call_chain'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:172:in `invoke_with_call_chain'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:165:in `invoke'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:150:in `invoke_task'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:115:in `run_with_threads'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:100:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:78:in `run'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:176:in `standard_exception_handling'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:75:in `run'
This plugin needs to be updated to use `single` or `shared` concurrency. I believe it should be easy to use `shared`: the only critical sections in this plugin are file rotation and writing to the file, and both are already protected by a mutex.
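A minimal sketch (not the plugin's actual code) of why `shared` concurrency should be safe here: with several worker threads writing at once, the two critical sections — appending to the temp file and rotating it — only need to share one mutex, which the plugin already has.

```ruby
require "thread"

class SharedConcurrencyWriter
  def initialize(path)
    @mutex = Mutex.new
    @file = File.open(path, "a")
  end

  # Called concurrently by many workers; serialized by the mutex.
  def write(line)
    @mutex.synchronize { @file.write(line + "\n") }
  end

  # Rotation swaps the file handle under the same mutex, so no worker
  # can ever write to a closed handle.
  def rotate(new_path)
    @mutex.synchronize do
      @file.close
      @file = File.open(new_path, "a")
    end
  end

  def close
    @mutex.synchronize { @file.close }
  end
end
```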
The docs report that time_file and size_file are optional, but you need to set at least one of them to make this work:
output {
  s3 {
    access_key_id => "crazy_key"             # (required)
    secret_access_key => "monkey_access_key" # (required)
    endpoint_region => "eu-west-1"           # (required)
    bucket => "boss_please_open_your_bucket" # (required)
    size_file => 2048                        # (optional)
    time_file => 5                           # (optional)
    format => "plain"                        # (optional)
    canned_acl => "private"                  # (optional; one of "private", "public_read", "public_read_write", "authenticated_read"; defaults to "private")
  }
}
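A hedged sketch of a register-time guard for this (the helper name is an assumption, not the plugin's shipped code): fail fast when neither rotation trigger is configured, since the output cannot rotate files without at least one of them.

```ruby
# Raise early if neither size_file nor time_file is set to a positive value.
def validate_rotation_settings!(size_file, time_file)
  size_ok = size_file.to_i > 0
  time_ok = time_file.to_i > 0
  unless size_ok || time_ok
    raise ArgumentError, "set at least one of size_file or time_file"
  end
end
```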
Currently this test can take up to a minute to complete because, in the worst case, it will write 15k events with a 200-byte size_file.
Should be revisited and either:
a) moved to a stress test file
b) tagged as :slow or other excluded tag by default
c) reduced to a simpler scenario
Any plans to use lambda events to ingest?
Seems like a simpler model, vs scanning at an interval.
When Logstash is terminated the currently logged output isn't sent to S3.
According to the documentation, the default value for tags is "[]" and it is not required.
However, this error occurs if I don't define tags in the config file:
NoMethodError: undefined method `join' for nil:NilClass
format_message at /opt/logstash/lib/logstash/outputs/s3.rb:351
receive at /opt/logstash/lib/logstash/outputs/s3.rb:302
handle at /opt/logstash/lib/logstash/outputs/base.rb:86
initialize at (eval):41
call at org/jruby/RubyProc.java:271
output at /opt/logstash/lib/logstash/pipeline.rb:266
outputworker at /opt/logstash/lib/logstash/pipeline.rb:225
start_outputs at /opt/logstash/lib/logstash/pipeline.rb:152
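The trace ends in `format_message` calling `join` on a nil tags array. A hedged sketch of the likely guard (illustration only; `tag_suffix` is a made-up helper name, not the plugin's method):

```ruby
# Treat a missing tags array as empty instead of calling join on nil.
def tag_suffix(tags)
  tags = Array(tags) # nil becomes [], so join is always safe
  tags.empty? ? "" : ".tag_#{tags.join('.')}"
end
```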
When I did the refactor for the new shutdown semantics I replaced the Thread.new with Stud::Task, and this caused the test to hang.
#62 added support for a boolean server_side_encryption flag. If it is true, the S3 write uses :aes256 encryption: https://github.com/logstash-plugins/logstash-output-s3/blob/master/lib/logstash/outputs/s3.rb#L181
My application requires S3 writes to be encrypted via 'aws:kms' instead of :aes256. Could the server_side_encryption option please be updated to accept both of the arguments supported by the AWS SDK Object API? (http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#server_side_encryption-instance_method)
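A sketch of the requested change (the names here are assumptions, not the shipped API): accept the algorithm as a validated string and build the option hash that would be passed to the AWS SDK write call.

```ruby
VALID_SSE_ALGORITHMS = ["AES256", "aws:kms"].freeze

# Validate the configured algorithm and return the write option for it.
def sse_option(algorithm)
  unless VALID_SSE_ALGORITHMS.include?(algorithm)
    raise ArgumentError, "unsupported server_side_encryption: #{algorithm.inspect}"
  end
  { :server_side_encryption => algorithm }
end
```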
Hi,
We have a recurring bug that has been happening for a while now.
Every once in a while the plugin just stops uploading files to S3.
The files are still saved to disk, but the disk just keeps filling up while the uploader does nothing.
Any ideas what it could be or how I can debug it? There are no warnings or errors in the log file.
We need the file output from Logstash to S3 to be fully controlled by the bucket owner.
Currently only the options below are available:
canned_acl => "private" (optional. Options are "private", "public_read", "public_read_write", "authenticated_read". Defaults to "private" )
We should add support for "bucket_owner_full_control".
I am running Oracle Java 1.8
root@foobar:/home/ubuntu# java -version
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
The error I am seeing is:
NotImplementedError: fstat unimplemented unsupported or native support failed to load
size at org/jruby/RubyFile.java:1161
rotate_events_log? at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.3/lib/logstash/outputs/s3.rb:298
synchronize at org/jruby/ext/thread/Mutex.java:149
rotate_events_log? at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.3/lib/logstash/outputs/s3.rb:297
handle_event at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.3/lib/logstash/outputs/s3.rb:340
register at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.3/lib/logstash/outputs/s3.rb:218
call at org/jruby/RubyProc.java:271
encode at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-json_lines-2.0.2/lib/logstash/codecs/json_lines.rb:45
receive at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.3/lib/logstash/outputs/s3.rb:292
handle at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/outputs/base.rb:80
output_func at (eval):60
outputworker at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/pipeline.rb:252
start_outputs at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/pipeline.rb:169
This happens with size_file configured. With only time_file in the configuration, things are fine.
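A workaround sketch for the size check (an assumption, not a shipped fix): when the JRuby build cannot load native fstat support, fall back to a seek-based size check.

```ruby
# Return the file's size via fstat when available, otherwise by seeking
# to the end of the stream and reading the position.
def safe_file_size(file)
  file.size
rescue NotImplementedError
  original = file.pos
  file.seek(0, IO::SEEK_END)
  size = file.pos
  file.seek(original, IO::SEEK_SET)
  size
end
```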
An error occurred in an after(:all) hook.
Errno::EACCES: Permission denied - C:\Users\jls\AppData\Local\Temp/studtmp-f5ec8bb748d6ff0d724e60c45935ef913ccf03a1188dfdc964cbfe07bfa6
occurred at org/jruby/RubyFile.java:1089:in `unlink'
Probably trying to unlink a file that's still open.
5) LogStash::Outputs::S3#generate_temporary_filename should add tags to the filename if present
Failure/Error: expect(s3.get_temporary_filename).to eq("ls.s3.logstash.local.2015-01-01T00.00.tag_elasticsearch.logstash.kibana.part0.txt")
expected: "ls.s3.logstash.local.2015-01-01T00.00.tag_elasticsearch.logstash.kibana.part0.txt"
got: "ls.s3.logstash.local.2015-12-22T11.45.tag_elasticsearch.logstash.kibana.part0.txt"
(compared using ==)
# ./vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.3/spec/outputs/s3_spec.rb:79:in `(root)'
# ./vendor/bundle/jruby/1.9/gems/rspec-wait-0.0.8/lib/rspec/wait.rb:46:in `(root)'
# ./rakelib/test.rake:72:in `(root)'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:240:in `execute'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:235:in `execute'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:179:in `invoke_with_call_chain'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:172:in `invoke_with_call_chain'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:165:in `invoke'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:150:in `invoke_task'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:115:in `run_with_threads'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:100:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:78:in `run'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:176:in `standard_exception_handling'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:75:in `run'
6) LogStash::Outputs::S3#generate_temporary_filename normalized the temp directory to include the trailing slash if missing
Failure/Error: expect(s3.get_temporary_filename).to eq("ls.s3.logstash.local.2015-01-01T00.00.part0.txt")
expected: "ls.s3.logstash.local.2015-01-01T00.00.part0.txt"
got: "ls.s3.logstash.local.2015-12-22T11.45.part0.txt"
(compared using ==)
# ./vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.3/spec/outputs/s3_spec.rb:90:in `(root)'
# ./vendor/bundle/jruby/1.9/gems/rspec-wait-0.0.8/lib/rspec/wait.rb:46:in `(root)'
# ./rakelib/test.rake:72:in `(root)'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:240:in `execute'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:235:in `execute'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:179:in `invoke_with_call_chain'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:172:in `invoke_with_call_chain'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:165:in `invoke'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:150:in `invoke_task'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:115:in `run_with_threads'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:100:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:78:in `run'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:176:in `standard_exception_handling'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:75:in `run'
7) LogStash::Outputs::S3#generate_temporary_filename should not add the tags to the filename
Failure/Error: expect(s3.get_temporary_filename(3)).to eq("ls.s3.logstash.local.2015-01-01T00.00.part3.txt")
expected: "ls.s3.logstash.local.2015-01-01T00.00.part3.txt"
got: "ls.s3.logstash.local.2015-12-22T11.45.part3.txt"
(compared using ==)
# ./vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.3/spec/outputs/s3_spec.rb:85:in `(root)'
# ./vendor/bundle/jruby/1.9/gems/rspec-wait-0.0.8/lib/rspec/wait.rb:46:in `(root)'
# ./rakelib/test.rake:72:in `(root)'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:240:in `execute'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:235:in `execute'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:179:in `invoke_with_call_chain'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:172:in `invoke_with_call_chain'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/task.rb:165:in `invoke'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:150:in `invoke_task'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:106:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:115:in `run_with_threads'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:100:in `top_level'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:78:in `run'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:176:in `standard_exception_handling'
# ./vendor/bundle/jruby/1.9/gems/rake-10.4.2/lib/rake/application.rb:75:in `run'
As a user, I would expect Event Dependent Configurations to work with the Logstash Output S3 plugin's "prefix" config. This is not the case. I discovered this in a community discussion instead of in the documentation.
Please update the documentation to state that Event Dependent Configurations do not work with the "prefix" config.
Thank you.
Version 4.0.0 of the plugin complains about S3 permission issues, whereas 3.2.0 does not (and works). To see if it was a missing permission I tried granting s3* on the bucket (via the AWS role of the box), but that didn't fix it. Reverting to 3.2.0 made S3 uploads work again.
Log entry:
{:timestamp=>"2017-01-03T00:27:44.601000+0000", :message=>"Pipeline aborted due to error", :exception=>"LogStash::ConfigurationError", :error=>"Logstash must have the privileges to write to root bucket `[snip]`, check you credentials or your permissions.", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-4.0.0/lib/logstash/outputs/s3.rb:187:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.1-java/lib/logstash/output_delegator.rb:75:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.1-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.1-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.1-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.1-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
Config:
output {
  s3 {
    codec => "json_lines"
    bucket => "[snip]"
    size_file => 512000
    time_file => 5
    encoding => "gzip"
  }
}
This is in regards to logstash-output-s3-2.0.7.
When dynamically creating dated subfolders in the S3 bucket (yyyy/mm/dd), the size_file part# increments when a time_file rotation is triggered.
Example S3 file names with a 10-minute time_file rotation:
...
ls.s3.vm.test.local.2015-08-25T15.33.part34.txt
ls.s3.vm.test.local.2015-08-25T15.43.part35.txt
ls.s3.vm.test.local.2015-08-25T15.53.part36.txt
...
It appears the fix for this (on my end) is to change the configure_periodic_rotation function, replacing next_page with reset_page_counter:
private
def configure_periodic_rotation
  @periodic_rotation_thread = Stud::Task.new do
    LogStash::Util::set_thread_name("<S3 periodic uploader")
    Stud.interval(periodic_interval, :sleep_then_run => true) do
      @logger.debug("S3: time_file triggered, bucketing the file", :filename => @tempfile.path)
      move_file_to_bucket_async(@tempfile.path)
      reset_page_counter
      create_temporary_file
    end
  end
end
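For context, a minimal stand-in for the two counter helpers (the real methods live in s3.rb; this class is only an illustration): `next_page` bumps the part number within a size-based rotation, while `reset_page_counter` zeroes it so each time_file rotation starts again at part0.

```ruby
class PageCounter
  attr_reader :page

  def initialize
    @page = 0
  end

  # Used when size_file triggers a rotation: continue with the next part number.
  def next_page
    @page += 1
  end

  # Used when time_file triggers a rotation: start over at part0.
  def reset_page_counter
    @page = 0
  end
end
```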
Hey, any chance of adding gzip support before uploading?
Hi all,
I wrote a Logstash config where, if a log matches logtype == "request" and env == "prd", it should be saved to Bucket 1, and if logtype == "request" and env == "stg", it should be saved to Bucket 2.
However, all logs are saved to Bucket 1.
At first I thought there was something wrong with my configuration, or that the Logstash conditionals weren't working properly.
So I removed the part of the configuration that saves logs to Bucket 2 to check the conditional; that worked correctly.
That's why I think logstash-output-s3 doesn't allow saving logs to multiple buckets.
output {
  if [logtype] == "request" and [env] == "prd" {
    s3 {
      access_key_id => "XXX"
      secret_access_key => "XXX"
      bucket => "XXX1"
      endpoint_region => "us-east-1"
      time_file => 1
    }
  }
  if [logtype] == "request" and [env] == "stg" {
    s3 {
      access_key_id => "XXX"
      secret_access_key => "XXX"
      bucket => "XXX2"
      endpoint_region => "us-east-1"
      time_file => 1
    }
  }
}
It would be valuable to be able to configure the plugin to compress files before uploading them to S3. At the moment we manually download, compress, upload, and delete the uncompressed files. Perhaps something like:
s3 {
  ...
  compress => 'gzip'
}
This is a feature I'm considering implementing, but I wanted to get a sense of whether the maintainers would find it valuable, and/or whether there are any other concerns I should be aware of before taking it on.
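A minimal sketch of the compression step (Zlib is in the Ruby stdlib; the helper name and the idea of compressing the finished temp file just before upload are assumptions, not the plugin's design):

```ruby
require "zlib"

# Compress a finished temp file; returns the path of the gzipped copy.
# The caller would upload `dest` and then delete both files.
def gzip_file(src, dest = "#{src}.gz")
  Zlib::GzipWriter.open(dest) do |gz|
    File.open(src, "rb") { |io| IO.copy_stream(io, gz) }
  end
  dest
end
```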
I see that #1 is a fairly extensive refactor. Is there a timeline for when that is expected to be merged? I'd like to avoid having to rebase against a large change.
Bundle complete! 5 Gemfile dependencies, 47 gems now installed.
Use `bundle show [gemname]` to see where a bundled gem is installed.
Using Accessor#strict_set for specs
Run options: exclude {:redis=>true, :socket=>true, :performance=>true, :couchdb=>true, :elasticsearch=>true, :elasticsearch_secure=>true, :export_cypher=>true, :integration=>true, :windows=>true}
..FF.FF......F.FFFF
An error occurred in an after hook
NoMethodError: undefined method `teardown' for #<LogStash::Outputs::S3:0x69058404>
occurred at /home/jenkins/workspace/logstash-plugin-output-s3-unit/jdk/JDK7/nodes/metal-pool/spec/outputs/s3_spec.rb:197:in `(root)'
F..
Failures:
1) LogStash::Outputs::S3#receive should send the event through the codecs
Failure/Error: s3.register
NameError:
uninitialized constant Stud::Task
# ./lib/logstash/outputs/s3.rb:363:in `configure_upload_workers'
# ./lib/logstash/outputs/s3.rb:362:in `configure_upload_workers'
# ./lib/logstash/outputs/s3.rb:204:in `register'
# ./spec/outputs/s3_spec.rb:261:in `(root)'
2) LogStash::Outputs::S3#register should create the tmp directory if it doesn't exist
Failure/Error: s3.register
NameError:
uninitialized constant Stud::Task
# ./lib/logstash/outputs/s3.rb:363:in `configure_upload_workers'
# ./lib/logstash/outputs/s3.rb:362:in `configure_upload_workers'
# ./lib/logstash/outputs/s3.rb:204:in `register'
# ./spec/outputs/s3_spec.rb:50:in `(root)'
3) LogStash::Outputs::S3#write_on_bucket should prefix the file on the bucket if a prefix is specified
Failure/Error: s3.register
NameError:
uninitialized constant Stud::Task
Hello.
On my configuration (Ubuntu 14.04, Logstash 1.4), logstash-output-s3 randomly crashes for no obvious reason.
This bug is really annoying, as it crashes Logstash too, and thus logs sent by hosts are lost.
Here is the stack trace I have after the plugin crashes. Please let me know if you need more information.
IOError: Connection reset by peer
syswrite at org/jruby/ext/openssl/SSLSocket.java:667
do_write at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/jopenssl19/openssl/buffering.rb:318
write at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/jopenssl19/openssl/buffering.rb:336
write0 at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/net/protocol.rb:199
write at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/net/protocol.rb:173
writing at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/net/protocol.rb:190
write at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/net/protocol.rb:172
send_request_with_body_stream at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:1963
exec at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:1929
transport_request at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:1325
catch at org/jruby/RubyKernel.java:1284
transport_request at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:1324
request at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:1301
request at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/http/connection_pool.rb:330
handle at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/http/net_http_handler.rb:61
session_for at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/http/connection_pool.rb:129
handle at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/http/net_http_handler.rb:55
make_sync_request at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/client.rb:252
retry_server_errors at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/client.rb:288
make_sync_request at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/client.rb:248
client_request at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/client.rb:508
log_client_request at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/client.rb:390
client_request at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/client.rb:476
return_or_raise at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/client.rb:372
client_request at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/core/client.rb:475
put_object at (eval):3
write_with_put_object at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/s3/s3_object.rb:1751
write at /opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-1.35.0/lib/aws/s3/s3_object.rb:607
write_on_bucket at /opt/logstash/lib/logstash/outputs/s3.rb:197
upFile at /opt/logstash/lib/logstash/outputs/s3.rb:223
each at org/jruby/RubyArray.java:1613
upFile at /opt/logstash/lib/logstash/outputs/s3.rb:215
register at /opt/logstash/lib/logstash/outputs/s3.rb:285
time_alert at /opt/logstash/lib/logstash/outputs/s3.rb:174
loop at org/jruby/RubyKernel.java:1521
time_alert at /opt/logstash/lib/logstash/outputs/s3.rb:172
Failure in the default plugins test: http://build-eu-00.elastic.co/view/LS%20Master/job/Logstash_Master_Default_Plugins/612/console
1) LogStash::Outputs::S3#generate_temporary_filename should not add the tags to the filename
Failure/Error: s3 = LogStash::Outputs::S3.new(config)
LogStash::ConfigurationError:
The setting `tags` in plugin `s3` is obsolete and is no longer available. You can achieve similar behavior with the new conditionals, like: `if "sometag" in [tags] { s3 { ... } }` If you have any questions about this, you are invited to visit https://discuss.elastic.co/c/logstash and ask.
# ./lib/logstash/config/mixin.rb:82:in `config_init'
# ./lib/logstash/config/mixin.rb:66:in `config_init'
# ./lib/logstash/outputs/base.rb:44:in `initialize'
# ./vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.1/spec/outputs/s3_spec.rb:84:in `(root)'
# ./vendor/bundle/jruby/1.9/gems/rspec-wait-0.0.7/lib/rspec/wait.rb:46:in `(root)'
# ./rakelib/test.rake:47:in `(root)'
Code is: https://github.com/logstash-plugins/logstash-output-s3/blob/master/spec/outputs/s3_spec.rb#L76-L86
I have ELK on CentOS 7 with the logstash-output-s3 plugin configured; however, the files in S3 contain only the following:
2016-03-28T11:33:23.000Z 10.x.x.10 %{message}
2016-03-28T11:33:23.000Z 10.x.x.10 %{message}
2016-03-28T11:33:23.000Z 10.x.x.10 %{message}
2016-03-28T11:33:23.000Z 10.x.x.10 %{message}
2016-03-28T11:33:23.000Z 10.x.x.10 %{message}
2016-03-28T11:33:31.000Z 10.x.x.100 %{message}
2016-03-28T11:33:32.000Z 10.x.x.100 %{message}
2016-03-28T11:33:33.000Z 10.x.x.100 %{message}
2016-03-28T11:33:43.000Z 10.x.x.201 %{message}
2016-03-28T11:33:51.000Z 10.x.x.100 %{message}
2016-03-28T11:33:51.000Z 10.x.x.100 %{message}
I am new to Logstash and would like to know how to troubleshoot this. Logs are parsed properly by Logstash and appear in Kibana normally.
Cross posted on: https://discuss.elastic.co/t/s3-output-cant-find-files-it-generates/68546
I'm using Logstash 5.0.2 (and 5.0.1 before that) and I need to add an s3 output, but I can't, because randomly one of the files Logstash generates cannot be found:
[2016-12-07T14:37:39,809][ERROR][logstash.outputs.s3 ] failed to upload, will re-enqueue /tmp/logstash/ls.s3.ip-172-31-45-248.2016-12-07T14.31.part0.txt for upload {:ex=>#<Errno::ENOENT: No such file or directory - /tmp/logstash/ls.s3.ip-172-31-45-248.2016-12-07T14.31.part0.txt>, :backtrace=>["org/jruby/RubyFile.java:370:in `initialize'", "org/jruby/RubyIO.java:1197:in `open'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-3.2.0/lib/logstash/outputs/s3.rb:175:in `write_on_bucket'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-3.2.0/lib/logstash/outputs/s3.rb:289:in `move_file_to_bucket'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-3.2.0/lib/logstash/outputs/s3.rb:454:in `upload_worker'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-3.2.0/lib/logstash/outputs/s3.rb:436:in `configure_upload_workers'", "org/jruby/RubyProc.java:281:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/task.rb:24:in `initialize'"]}
As it says, Logstash "will re-enqueue" the file over and over again, filling up the disk with log messages. This happens randomly: Logstash works for some time (minutes or hours) and then suddenly this happens.
This is my output configuration:
if [type] == "my-log" {
  s3 {
    bucket => "mybucket"
    prefix => "logstash/bla/"
    size_file => 10485760 # rotate every 10M
    codec => line {
      format => "%{message}"
    }
  }
}
I'm on ubuntu 14.04. Oracle JVM:
# java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
In an AWS environment there is a way to do zero-downtime upgrades using the auto scaling feature; details here:
http://serverfault.com/questions/389301/amazon-ec2-elastic-load-balancing-strategy-for-zero-downtime-server-restart/679502#679502
So it is usual, during an upgrade, to terminate one EC2 instance and wait for a new one to come into service.
In this situation it would be useful to upload the temp files to S3 when the s3 plugin is shutting down, to avoid losing log data after the EC2 instance is stopped and terminated.
So it would be nice to have a flag enabling this kind of "graceful shutdown" for this plugin.
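A hedged sketch of the requested shutdown behavior (the helper name is an assumption, not the plugin's API): walk the temporary directory and hand every non-empty file to the uploader before the process exits.

```ruby
# Yield each pending, non-empty temp file to the caller's uploader block.
def upload_pending_files(temp_dir)
  Dir.glob(File.join(temp_dir, "*")).sort.each do |path|
    next unless File.file?(path) && File.size(path) > 0
    yield path
  end
end
```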
I'm using the latest logstash (2.3.1): https://download.elastic.co/logstash/logstash/logstash-2.3.1.zip
on Windows (tried on 7 and 2008 R2)
Here is the configuration:
input {
  file {
    path => "some_log_file.logs"
    start_position => "beginning"
    ignore_older => 0
  }
}
output {
  s3 {
    access_key_id => "some_id"
    secret_access_key => "some_key"
    region => "us-east-1"
    bucket => "some_bucket"
    prefix => "/Logs/"
    time_file => 1
    size_file => 2048
    canned_acl => "public_read_write"
    temporary_directory => "some_folder"
  }
  stdout {}
}
The plugin registers correctly and the test file is successfully uploaded, but the next attempt fails. Here is what I have in the logs:
For test file:
{:timestamp=>"2016-04-11T16:34:43.997000+0000", :message=>"S3: Creating a test file on S3", :level=>:debug, :file=>"/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.7/lib/logstash/outputs/s3.rb", :line=>"242", :method=>"test_s3_write"}
{:timestamp=>"2016-04-11T16:34:44.000000+0000", :message=>"S3: ready to write file in bucket", :remote_filename=>"/Logs/logstash-programmatic-access-test-object-1460392483", :bucket=>"some_bucket", :level=>:debug, :file=>"/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.7/lib/logstash/outputs/s3.rb", :line=>"170", :method=>"write_on_bucket"}
{:timestamp=>"2016-04-11T16:34:45.966000+0000", :message=>"S3: has written remote file in bucket with canned ACL", :remote_filename=>"/Logs/logstash-programmatic-access-test-object-1460392483", :bucket=>"some_bucket", :canned_acl=>"public_read_write", :level=>:debug, :file=>"/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.7/lib/logstash/outputs/s3.rb", :line=>"183", :method=>"write_on_bucket"}
For the next attempt:
{:timestamp=>"2016-04-11T16:35:11.843000+0000", :message=>"S3: ready to write file in bucket", :remote_filename=>"/Logs/ls.s3.WIN-0QTMO6ENPPD.2016-04-11T16.34.part0.txt", :bucket=>"some_bucket", :level=>:debug, :file=>"/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.7/lib/logstash/outputs/s3.rb", :line=>"170", :method=>"write_on_bucket"}
...
{:timestamp=>"2016-04-11T16:37:56.526000+0000", :message=>"S3: AWS error", :error=>#<AWS::S3::Errors::RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.>, :level=>:error, :file=>"/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.7/lib/logstash/outputs/s3.rb", :line=>"178", :method=>"write_on_bucket"}
Dear logstash devs,
I set up a Ceph storage system (http://docs.ceph.com/) and now I want to output logs to Ceph via its S3 API (http://docs.ceph.com/docs/v0.80.5/radosgw/s3/ruby/), but the default s3 output of Logstash only supports AWS S3. Please add an option to set a custom S3 host, e.g. s3.mystorage.net, so it can be used with Ceph storage.
The output settings might look like this:
output {
  s3ceph {
    url_endpoint => "s3.mystorage.net"
    access_key_id => "my-access-key"
    secret_access_key => "my-access-secret-key"
    bucket => "logstash-output"
    use_ssl => false
  }
}
Thank you,
Creating different file paths in S3 for different kinds of events, or per tracked log file, is not supported.
For example, folder prefixes based on the fields "app" and "type", as in the config below, are not supported.
input {
  file {
    path => "//data//executors//logs/"
  }
}
filter {
  grok {
    match => ["path", "/%{GREEDYDATA}/executors/%{GREEDYDATA:app}[.]%{GREEDYDATA}/logs/%{GREEDYDATA:type}"]
  }
}
output {
  s3 {
    .....................
    .....................
    prefix => "applogs/%{app}/%{type}"
  }
}
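Logstash interpolates `%{field}` references with the event's field values; a minimal stand-in for that interpolation (illustration only, not Logstash's implementation) shows what the requested `prefix` support would do:

```ruby
# Replace %{field} references with values from the event's fields;
# unknown fields stay literal.
def interpolate(template, fields)
  template.gsub(/%\{([^}]+)\}/) do
    key = Regexp.last_match(1)
    fields.fetch(key) { "%{#{key}}" }
  end
end
```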
After #85 was merged, my bucket is littered with several test objects:
$ aws s3 ls s3://my-bucket
2016-09-06 12:11:49 4 logstash-programmatic-access-test-object-1473189107
2016-09-06 12:20:18 4 logstash-programmatic-access-test-object-1473189616
This functionality was removed due to issues with access policies. Instead of failing like before, or littering like now, the plugin could attempt to remove the objects and ignore failures.
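A sketch of that best-effort cleanup (the helper shape is an assumption): try to delete the test object, but never let a missing delete permission break plugin registration.

```ruby
# Delete the programmatic-access test object; swallow any failure so a
# restrictive access policy cannot abort startup.
def delete_ignoring_failures(object)
  object.delete
  true
rescue StandardError
  false
end
```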
The format setting, which used to exist, is still in the example config:
# USAGE:
#
# This is an example of logstash config:
# [source,ruby]
# output {
# s3{
# access_key_id => "crazy_key" (required)
# secret_access_key => "monkey_access_key" (required)
# endpoint_region => "eu-west-1" (required)
# bucket => "boss_please_open_your_bucket" (required)
# size_file => 2048 (optional)
# time_file => 5 (optional)
# format => "plain" (optional)
# canned_acl => "private" (optional. Options are "private", "public_read", "public_read_write", "authenticated_read". Defaults to "private" )
# }
# }
But a config test returns: Unknown setting 'format' for s3 {:level=>:error}
LogStash::Outputs::S3
#write_to_tempfile
should append the event to a file
#write_events_to_multiple_files?
returns false if size_file is zero or not set
returns true if the size_file is != 0
#register
should raise a ConfigurationError if the prefix contains one or more '^`><' characters
should create the tmp directory if it doesn't exist
#write_on_bucket
should use the same local filename if no prefix is specified
should prefix the file on the bucket if a prefix is specified
#restore_from_crashes
read the temp directory and upload the matching file to s3
#generate_temporary_filename
should not add the tags to the filename
normalized the temp directory to include the trailing slash if missing
should add tags to the filename if present
#rotate_events_log
having periodic rotations
raises no error when periodic rotation happen
having a single worker
returns false if the tempfile is under the file_size limit
returns true if the tempfile is over the file_size limit
#receive
should send the event through the codecs
#move_file_to_bucket
should not upload the file if the size of the file is zero
should upload the file if the size > 0
should always delete the source file (FAILED - 1)
configuration
should fallback to region if endpoint_region isnt defined
should support the deprecated endpoint_region as a configuration option
when rotating the temporary file
doesn't skip events if using the time_file option (FAILED - 2)
doesn't skip events if using the size_file option (FAILED - 3)
Failures:
1) LogStash::Outputs::S3#move_file_to_bucket should always delete the source file
Failure/Error: Unable to find matching line from backtrace
<File (class)> received :delete with unexpected arguments
expected: (#<File:/var/folders/yl/9hpq_qs528g60cv9trnt5z180000gn/T/studtmp-72f48e9f805c10c91f497cc766c4dbdb30bf6ab092f92017fb1bbf552908>)
got: ("/var/folders/yl/9hpq_qs528g60cv9trnt5z180000gn/T/logstash/ls.s3.sashimi.2015-08-19T11.23.part371.txt")
# ./lib/logstash/outputs/s3.rb:253:in `move_file_to_bucket'
# ./lib/logstash/outputs/s3.rb:385:in `upload_worker'
# ./lib/logstash/outputs/s3.rb:369:in `configure_upload_workers'
2) LogStash::Outputs::S3 when rotating the temporary file doesn't skip events if using the time_file option
Failure/Error: expect(events_written_count).to eq(event_count)
expected: 590865
got: 137744
(compared using ==)
# ./spec/outputs/s3_spec.rb:347:in `(root)'
# ./spec/outputs/s3_spec.rb:313:in `(root)'
3) LogStash::Outputs::S3 when rotating the temporary file doesn't skip events if using the size_file option
Failure/Error: expect(events_written_count).to eq(event_count)
expected: 5155
got: 191
(compared using ==)
# ./spec/outputs/s3_spec.rb:308:in `(root)'
# ./spec/outputs/s3_spec.rb:279:in `(root)'
Finished in 47 seconds (files took 4.49 seconds to load)
22 examples, 3 failures
Failed examples:
rspec ./spec/outputs/s3_spec.rb:221 # LogStash::Outputs::S3#move_file_to_bucket should always delete the source file
rspec ./spec/outputs/s3_spec.rb:312 # LogStash::Outputs::S3 when rotating the temporary file doesn't skip events if using the time_file option
rspec ./spec/outputs/s3_spec.rb:278 # LogStash::Outputs::S3 when rotating the temporary file doesn't skip events if using the size_file option
Randomized with seed 59288
Our Logstash is crashing when it starts to push files every x MB, e.g. 1 MB or 5 MB. I think this might be because of the rate: we notice that with size_file set to 5 MB, we can be pushing files to S3 every 20-30 seconds.
Is it possible to assume a role to get credentials to deliver to an S3 bucket?
Use case - I want to be able to deliver the output to an S3 bucket in a different AWS account, without having to provide credentials in a config file.
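Newer releases of the shared AWS mixin used by this plugin expose role-assumption options; assuming a plugin version that includes them, a cross-account setup could look roughly like this (the ARN, bucket, and session name are placeholders, and the destination account must trust the role via its bucket policy):

```
output {
  s3 {
    region            => "eu-west-1"
    bucket            => "bucket-in-the-other-account"
    role_arn          => "arn:aws:iam::123456789012:role/logstash-s3-writer"
    role_session_name => "logstash"
  }
}
```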
Docs still report old deprecated parameter in the config example:
output {
s3{
access_key_id => "crazy_key" (required)
secret_access_key => "monkey_access_key" (required)
endpoint_region => "eu-west-1" (required)
bucket => "boss_please_open_your_bucket" (required)
size_file => 2048 (optional)
time_file => 5 (optional)
format => "plain" (optional)
canned_acl => "private" (optional. Options are "private", "public_read", "public_read_write", "authenticated_read". Defaults to "private" )
}
}
Using that, you get:
You are using a deprecated config setting "endpoint_region" set in s3. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Deprecated, use region instead. If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"endpoint_region", :plugin=><LogStash::Outputs::S3 access_key_id=>"xxxx", secret_access_key=>"xxxx", bucket=>"users.elasticsearch.org/testlogstashs3", prefix=>"dev/logstash-ohi-ingest", endpoint_region=>"us-east-1", codec=>"plain">, :level=>:warn}
Please update endpoint_region to region.
I was running Logstash v1.5.6 with the output plugin (with arguably minor modifications vs. core v1, https://github.com/ihorkhavkin/logstash-output-s3/tree/v1-patched ) and hitting an error like this:
{:timestamp=>"2016-05-10T17:58:43.262000+0000", :message=>"upload_worker exception", :ex=>["org/jruby/ext/openssl/SSLSocket.java:768:in `syswrite'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jruby-openssl-0.9.12-java/lib/jopenssl19/openssl/buffering.rb:318:in `do_write'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jruby-openssl-0.9.12-java/lib/jopenssl19/openssl/buffering.rb:336:in `write'", "/opt/logstash/vendor/jruby/lib/ruby/1.9/net/protocol.rb:199:in `write0'", "/opt/logstash/vendor/jruby/lib/ruby/1.9/net/protocol.rb:173:in `write'", "/opt/logstash/vendor/jruby/lib/ruby/1.9/net/protocol.rb:190:in `writing'", "/opt/logstash/vendor/jruby/lib/ruby/1.9/net/protocol.rb:172:in `write'", "/opt/logstash/vendor/jruby/lib/ruby/1.9/net/http.rb:1964:in `send_request_with_body_stream'", "/opt/logstash/vendor/jruby/lib/ruby/1.9/net/http.rb:1930:in `exec'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/http/patch.rb:78:in `new_transport_request'", "org/jruby/RubyKernel.java:1242:in `catch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/http/patch.rb:77:in `new_transport_request'", "/opt/logstash/vendor/jruby/lib/ruby/1.9/net/http.rb:1302:in `request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/http/connection_pool.rb:356:in `request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/http/net_http_handler.rb:64:in `handle'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/http/connection_pool.rb:131:in `session_for'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/http/net_http_handler.rb:56:in `handle'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:253:in `make_sync_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:289:in `retry_server_errors'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/s3/region_detection.rb:11:in `retry_server_errors'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:249:in `make_sync_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:509:in `client_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:391:in `log_client_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:477:in `client_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:373:in `return_or_raise'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/core/client.rb:476:in `client_request'", "(eval):3:in `put_object'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/s3/s3_object.rb:1765:in `write_with_put_object'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-v1-1.66.0/lib/aws/s3/s3_object.rb:611:in `write'", "/opt/logstash/vendor/local_gems/dd42f719/logstash-output-s3-1.0.2.3-20160126-1615-643afc43/lib/logstash/outputs/s3.rb:155:in `write_on_bucket'", "org/jruby/RubyIO.java:1183:in `open'", "/opt/logstash/vendor/local_gems/dd42f719/logstash-output-s3-1.0.2.3-20160126-1615-643afc43/lib/logstash/outputs/s3.rb:151:in `write_on_bucket'", "/opt/logstash/vendor/local_gems/dd42f719/logstash-output-s3-1.0.2.3-20160126-1615-643afc43/lib/logstash/outputs/s3.rb:255:in `move_file_to_bucket'", "/opt/logstash/vendor/local_gems/dd42f719/logstash-output-s3-1.0.2.3-20160126-1615-643afc43/lib/logstash/outputs/s3.rb:406:in `upload_worker'", "/opt/logstash/vendor/local_gems/dd42f719/logstash-output-s3-1.0.2.3-20160126-1615-643afc43/lib/logstash/outputs/s3.rb:384:in `configure_upload_workers'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/task.rb:12:in `initialize'"], :level=>:error}
It looks like something that might affect even the current master, as my changes shouldn't affect exception handling (apart from logging this specific case that blew up). I suspect that when an upload to S3 fails (e.g. somewhere in the middle of the upload due to an intermittent networking or S3 issue), the uploader just exits, and eventually no uploader is running at all.
Apologies in advance for the nasty backtrace from a modified logstash-output-s3 without the exception name. Any thoughts are welcome. At this point I don't really understand how the current logstash-output-s3 master is supposed to handle dying workers in case of an S3 failure. If this is a common issue, it might be worth looking into.
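One way a worker could survive a failed upload is to rescue per file rather than letting the exception kill the thread. A minimal pure-Ruby sketch of that idea; the `drain_upload_queue` helper is hypothetical and not the plugin's actual code:

```ruby
# Hypothetical resilient upload worker: an exception while uploading one
# file is logged and the worker keeps draining the queue, instead of the
# thread dying silently and leaving no uploader running.
def drain_upload_queue(queue)
  processed = []
  loop do
    file = queue.pop
    break if file == :shutdown
    begin
      yield file              # perform the actual upload
      processed << file
    rescue StandardError => e
      warn("upload_worker exception for #{file}: #{e.class}: #{e.message}")
    end
  end
  processed
end
```

A failed file could additionally be re-enqueued with a backoff instead of being skipped.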
Failures:
expected: 12373
got: 0
(compared using ==)
# C:\Users\Administrator\logstash\lib\logstash\runner.rb:57:in `run'
# C:\Users\Administrator\logstash\lib\logstash\runner.rb:112:in `run'
# C:\Users\Administrator\logstash\lib\logstash\runner.rb:170:in `run'
Finished in 2 minutes 37.8 seconds
21 examples, 1 failure
Failed examples:
Hi,
When I run logstash in Amazon ECS, it gets gracefully shutdown:
My containers seem to use the entire 30s. According to the Logstash docs on fault tolerance:
When the pipeline cannot be flushed due to a stuck output or filter, Logstash waits indefinitely. For example, when a pipeline sends output to a database that is unreachable by the Logstash instance, the instance waits indefinitely after receiving a SIGTERM.
So I have a few questions:
Does logstash-output-s3 attempt to flush to S3 in event of a SIGTERM?
If it does and fails to flush, will it log the error?
I don't see any errors logged in stdout, so I can't tell if logstash-output-s3 is stuck :(
Thanks!
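On the flush question: a shutdown handler could at least bound the attempt by the orchestrator's grace period (ECS sends SIGKILL after ~30s), leaving anything unflushed on disk for a restore-from-crash pass on the next start. A hypothetical sketch, not the plugin's actual behavior:

```ruby
require 'timeout'

# Hypothetical bounded shutdown flush: upload as many leftover files as
# possible within the grace period; on timeout, return what was uploaded
# and leave the rest on disk for recovery at next startup.
def flush_with_deadline(files, deadline_seconds)
  uploaded = []
  Timeout.timeout(deadline_seconds) do
    files.each do |file|
      yield file            # perform the actual upload
      uploaded << file
    end
  end
  uploaded
rescue Timeout::Error
  uploaded
end
```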
When using:
encoding => "gzip"
in the Logstash output, it posts empty 20-byte files to S3 at each 10-minute time_file interval instead of skipping the interval when the file is empty.
Originally from here: elastic/logstash#2487
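The 20 bytes are gzip framing alone (10-byte header, 2-byte empty deflate block, 8-byte trailer), so a plain `size > 0` check never sees the file as empty. A hedged sketch of a guard that inspects the decompressed payload instead; `empty_gzip_file?` is a hypothetical helper, not part of the plugin:

```ruby
require 'zlib'

# Hypothetical guard for gzip-encoded temp files: an "empty" gzip file
# still has ~20 bytes of framing on disk, so check the decompressed
# payload rather than the file size before deciding whether to upload.
def empty_gzip_file?(path)
  Zlib::GzipReader.open(path) { |gz| gz.read.to_s.empty? }
end
```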
21) LogStash::Outputs::S3#register should create the tmp directory if it doesn't exist
Failure/Error: FileUtils.rm_r(temporary_directory)
Errno::EACCES:
Permission denied - C:\Users\jls\AppData\Local\Temp/temporary_directory-a6f078be29a35da4c0affa10b36ee52394c8ea02121e58c52455ed46031f/ls.s3.sadness.2015-01-30T14.12.part0.txt
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:57:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:112:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:170:in `run'
22) LogStash::Outputs::S3#move_file_to_bucket should not upload the file if the size of the file is zero
Failure/Error: s3.move_file_to_bucket(temp_file)
ArgumentError:
wrong number of arguments (0 for 1)
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:57:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:112:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:170:in `run'
23) LogStash::Outputs::S3#move_file_to_bucket should upload the file if the size > 0
Failure/Error: s3.move_file_to_bucket(tmp)
ArgumentError:
wrong number of arguments (0 for 1)
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:57:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:112:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:170:in `run'
24) LogStash::Outputs::S3 when rotating the temporary file doesn't skip events if using the size_file option
Failure/Error: pipeline = LogStash::Pipeline.new(config)
LogStash::PluginLoadingError:
Couldn't find any input plugin named 'generator'. Are you sure this is correct? Trying to load the generator input plugin resulted in this error: no such file to load -- logstash/inputs/generator
# ./lib/logstash/plugin.rb:142:in `lookup'
# ./lib/logstash/plugin.rb:140:in `lookup'
# ./lib/logstash/pipeline.rb:270:in `plugin'
# (eval):7:in `initialize'
# ./lib/logstash/pipeline.rb:32:in `initialize'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:57:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:112:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:170:in `run'
25) LogStash::Outputs::S3 when rotating the temporary file doesn't skip events if using the time_file option
Failure/Error: Stud::Temporary.directory do |temporary_directory|
Errno::EACCES:
Permission denied - C:\Users\jls\AppData\Local\Temp/studtmp-7340c6a684273fb26a56eb96e962fdf1bdfbede5b978fe96dfb8f8fd4ceb/logstash-programmatic-access-test-object-1422655940
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:57:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:112:in `run'
# C:\Users\jls\Documents\GitHub\logstash\lib\logstash\runner.rb:170:in `run'
Sometimes the s3 output suddenly stops due to an AWS S3 error:
{:timestamp=>"2016-09-27T07:23:29.417000+0000", :message=>"Pipeline main started"}
{:timestamp=>"2016-09-27T11:58:28.012000+0000", :message=>"S3: AWS error", :error=>#<AWS::S3::Errors::XAmzContentSHA256Mismatch: The provided 'x-amz-content-sha256' header does not match what was computed.>, :level=>:error}
Without a restart of Logstash, the s3 output does not restart. If I restart Logstash manually, it does not process old files from the temporary directory.
It would be cool if logstash would just retry in case of any error and retransmit all files from the temporary directory.
I have had cases where the filesystem filled up because of this issue. We are using Logstash 2.4.
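A retransmit pass could start from a startup scan of the temporary directory, re-enqueueing any non-empty part files left behind. A minimal sketch; the helper name is hypothetical:

```ruby
# Hypothetical crash-recovery scan: collect non-empty files left in the
# temporary directory (e.g. after a crash or an aborted upload) so they
# can be re-enqueued for upload at startup.
def leftover_part_files(temporary_directory)
  Dir.glob(File.join(temporary_directory, '**', '*'))
     .select { |path| File.file?(path) && File.size(path) > 0 }
     .sort
end
```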
Hi!
Thanks for the great work with this plugin.
I need to output some logs to s3, but in CSV format.
Is there a way to combine output plugins?
The concept would be something like this:
{
"csv" {
fields => ["name","surname","age"]
s3 {
...
}
}
}
If not, would the solution be to add a CSV format parameter to the s3 plugin?
I'm also working on an output plugin for Dropbox, based entirely on logstash-output-s3, because I need to upload some logs to Dropbox in CSV format too, and I don't know what the correct approach is for the Logstash architecture.
Regards!
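There is no supported way to nest one output inside another, but the s3 output accepts a codec, so a `line` codec with a sprintf format can emit CSV-shaped rows. Field names below are examples; note this does no quoting or escaping of commas inside values:

```
output {
  s3 {
    bucket => "my-bucket"
    codec  => line {
      format => "%{name},%{surname},%{age}"
    }
  }
}
```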
It would be nice to have an option to turn on server side encryption for S3 data
We are trying to use one bucket for multiple applications, separating them into subfolders. In AWS it is possible to grant permissions on subfolders only, but the plugin also requires access rights at the root level of the bucket. I guess that's because some calls don't include the prefix. It would make sense to always add the prefix, if one is specified, to avoid needing access at the root level.
Figuring this out was a bit of a trial-and-error exercise, and it's not really specified anywhere in the documentation or made obvious in the code.
It would be quite helpful to specify the access rights the plugin needs, and that it needs them at the root level.
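A sketch of a prefix-scoped IAM policy, under the assumption that the plugin needs object writes under the prefix plus bucket-level list/location calls (bucket name and prefix are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket/dev/logstash/*"
    }
  ]
}
```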
Ceph, for example (http://ceph.com/docs/master/radosgw/s3/). The only change I expect is the ability to override/set your own endpoint hostname.
I am pretty sure that this line (logstash-output-s3/lib/logstash/outputs/s3.rb, line 293 in dbc474b, the canned_acl handling) is a bug.
When I start Logstash, the error log shows:
"The error reported is: \n initialize: name or service not known"
my logstash config file:
...
region => "cn-north-1"
...
I checked the request URL:
https://xxx.s3-cn-north-1.amazonaws.com
But when the region is "cn-north-1", the URL should be:
https://xxx.s3.cn-north-1.amazonaws.com.cn
Then I checked the code and found, in vendor/bundle/jruby/1.9/gems/logstash-output-s3-2.0.4/lib/logstash/outputs/s3.rb, line 143:
:s3_endpoint => region_to_use == 'us-east-1' ? 's3.amazonaws.com' : "s3-#{region_to_use}.amazonaws.com"
After I modified this code, it started successfully.
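The fix amounts to special-casing the China partition, which lives under amazonaws.com.cn and uses a dotted rather than dashed region in the hostname. A sketch of a resolver extending the line above (a hypothetical helper, not the shipped code):

```ruby
# Hypothetical endpoint resolver mirroring the ternary at s3.rb:143,
# extended for the China regions, whose endpoints use the form
# s3.<region>.amazonaws.com.cn instead of s3-<region>.amazonaws.com.
def s3_endpoint_for(region)
  case region
  when 'us-east-1' then 's3.amazonaws.com'
  when /\Acn-/     then "s3.#{region}.amazonaws.com.cn"
  else                  "s3-#{region}.amazonaws.com"
  end
end
```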
The current version of the plugin stores all of the uploaded files in the root of the configured bucket. It would be valuable to be able to configure the path and filename within the bucket, to allow for easy long-term archiving without having to scroll through thousands of files when viewing the contents of the bucket. Maybe a static configuration, or a proc that can be evaluated before uploading each new file?
s3 {
...
path => "{year}/{month}/{day}/{hour}/{host}{uuid}"
path_proc => '{|time| "#{time.year}/#{time.month}/#{time.day}/#{time.hour}/#{Socket.gethostname}#{uuid}" }'
}
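A sketch of what evaluating such a path template could produce; the helper is hypothetical, since the proposed option does not exist in the plugin:

```ruby
require 'socket'
require 'securerandom'

# Hypothetical key builder for the proposed path option:
# {year}/{month}/{day}/{hour}/{host}-{uuid}
def object_key(time = Time.now.utc)
  format('%04d/%02d/%02d/%02d/%s-%s',
         time.year, time.month, time.day, time.hour,
         Socket.gethostname, SecureRandom.uuid)
end
```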
We need to resolve errors like "Error: Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4". We can fix this by adding an option to instantiate the library with AWS::S3::Client.new(:s3_signature_version => :v4).
As a user I would like to be able to use sprintf dynamic configurations in the prefix configuration of the logstash-output-s3 plugin.
As stated by Magnus Bäck in a community discussion, this would be complicated to implement in a general way because of the complexities of storing the files before uploading them to S3. However, a config like prefix => "events/%{+yyyy.MM.dd}" would only change once a day.
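Since such a template references no event fields, its value only changes at day rollover and could be computed outside the per-event path. A minimal sketch, using a hypothetical helper that handles only this one literal token:

```ruby
# Hypothetical expansion of a date-only prefix template. Because the
# template contains no event fields, the result only changes when the
# day rolls over, so it could be cached and re-evaluated once per day.
def expand_prefix(template, time = Time.now.utc)
  template.sub('%{+yyyy.MM.dd}', time.strftime('%Y.%m.%d'))
end
```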