
duo_log_sync's People

Contributors

alanisaac, ceckim, cisco-kyluce, coltwill, duo-jubin, duo-kfleischman, duokristina, ggacusan-at-duo, jaekitch, saxonww, sizehnde

duo_log_sync's Issues

Add Facility Option

Add Facility Option for Syslog destination

Detailed Description

Add an option to choose which facility will be used when sending to the destination.

Use Case

We currently send logs to SolarWinds SEM, and they end up in /var/log/user.log. Since I cannot change the facility, I cannot force them into an unused log file.

Workarounds

None
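A sketch of what a configurable facility could look like, using the standard library's syslog handler; the `facility_name` config value and the `make_syslog_handler` helper are hypothetical, not existing DLS options:

```python
import logging
import logging.handlers

# SysLogHandler ships a name-to-code map, e.g. "user", "local0".."local7".
FACILITY_MAP = logging.handlers.SysLogHandler.facility_names

def make_syslog_handler(address, facility_name="local6"):
    """Build a syslog handler for the named facility.

    `facility_name` stands in for an assumed config key; with "local6"
    the receiver can route these logs away from /var/log/user.log.
    """
    facility = FACILITY_MAP[facility_name]
    return logging.handlers.SysLogHandler(address=address, facility=facility)
```

On the receiving side, a facility such as `local6` can then be matched by the SIEM or by an rsyslog rule to a dedicated file.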

Graceful shutdown

Currently, there doesn't appear to be a way to gracefully shut down the app. Ideally, it would trap SIGINT and SIGTERM with a handler that finishes sending any already-retrieved log entries to the transport, updates the checkpoint files, and then terminates.
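The proposed behavior could be sketched as follows for an asyncio-based app. The `drain_and_checkpoint` coroutine is a hypothetical stand-in for "flush queued logs and save offsets"; note that `loop.add_signal_handler` is POSIX-only, so Windows would need a different mechanism:

```python
import asyncio
import signal

def install_shutdown_handlers(loop, shutdown_event):
    """Trap SIGINT/SIGTERM and flip an event instead of dying immediately."""
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, shutdown_event.set)

async def run_until_shutdown(shutdown_event, drain_and_checkpoint):
    """Wait for a shutdown request, then flush before exiting."""
    await shutdown_event.wait()      # producers/consumers run concurrently
    await drain_and_checkpoint()     # send buffered logs, write checkpoints
```

The key design point is that the signal handler only sets an event; the actual flush runs inside the event loop, so in-flight writes and checkpoint updates complete before the process exits.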

SSL error with self-signed certificate

I'm setting up DLS on Windows (Python 3.8) and I can't figure out how to set up TCPSSL.

I connected to my SIEM (Alienvault) using a browser & saved the certificate. I placed that cert in the DLS home directory & added the .cer file to the config under cert_filepath. When I run DuoLogSync, I get an error.

Shutting down due to SSL: CERTIFICATE_VERIFY_FAILED certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)

I did multiple searches but was unable to find anything regarding certificates. Some older articles reference a "cert_dir" directive which doesn't exist in the template_config.yml file so this only adds to my confusion. Is there an opportunity to update the documentation to include a section on SSL?

1 - Was I supposed to grab the certificate from the server?
2 - If using the server certificate, should I also have the private key saved in the same directory?
3 - Should I instead be using a new self-signed client certificate?

I can't move forward with this as I don't even want to test without SSL.
Any guidance would be greatly appreciated.

Thanks in advance.
Geoff
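In general TLS terms (not specific to DLS): a client verifying a self-signed server certificate only needs the server's certificate, or the CA certificate that signed it, in PEM format; the server's private key must never leave the server, and a separate client certificate is only needed if the SIEM requires mutual TLS. A browser-exported `.cer` file is often DER-encoded and would need converting to PEM first (e.g. `openssl x509 -inform der -in siem.cer -out siem.pem`). A minimal sketch of a client-side context trusting such a cert, with `make_client_context` being a hypothetical helper:

```python
import ssl

def make_client_context(ca_pem_path):
    """Build a client TLS context that trusts the given PEM certificate
    (the self-signed server cert or its CA) instead of only the system store."""
    return ssl.create_default_context(cafile=ca_pem_path)
```

If verification still fails with "self signed certificate in certificate chain", the usual cause is that the file passed as the trust anchor is not the certificate that actually signed what the server presents.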

Use command line arguments to set configuration options.

I'm wondering about using the command line instead of defining options in the config file. We would prefer not to store sensitive information (e.g., API keys) in configuration files. Is there a way to set certain (or all) configuration options as part of the duologsync command instead?
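One common pattern for keeping secrets out of config files is environment-variable overrides layered on top of the parsed YAML. Nothing like this exists in DLS today; the variable names `DLS_IKEY`/`DLS_SKEY` and the helper below are purely illustrative:

```python
import os

def apply_env_overrides(config):
    """Overlay hypothetical DLS_IKEY / DLS_SKEY environment variables
    onto a parsed config dict, so the YAML file can omit the secrets."""
    account = config.setdefault("account", {})
    if os.getenv("DLS_IKEY"):
        account["ikey"] = os.environ["DLS_IKEY"]
    if os.getenv("DLS_SKEY"):
        account["skey"] = os.environ["DLS_SKEY"]
    return config
```

With this shape, the config file keeps only non-sensitive settings and the keys are injected by the service manager (systemd `Environment=`, a secrets manager, etc.).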

queue issue

The command completes successfully, but I'm wondering if anyone is seeing the same issue with queued messages:
DuoLogSync: shutdown successfully. Check /tmp/duologsync.log for program logs

2022-08-24 11:11:06 INFO auth producer: adding 1000 logs to the queue
2022-08-24 11:11:06 INFO auth producer: added 1000 logs to the queue
2022-08-24 11:11:06 INFO auth producer: shutting down
2022-08-24 11:11:06 INFO auth consumer: shutting down
2022-08-24 11:11:38 INFO telephony producer: adding 1000 logs to the queue
2022-08-24 11:11:38 INFO telephony producer: added 1000 logs to the queue
2022-08-24 11:11:38 INFO telephony producer: shutting down
2022-08-24 11:11:38 INFO telephony consumer: shutting down

Add program in log

Hello team,

Today, the log is sent like this:
2023-04-18T17:21:32+02:00 {"epkey": null, "hostname": null, "ip": "xx.xx.xx.xx", "location": END OF THE JSON.........}

This is not RFC compliant: the machine name and program name must appear after the timestamp and before the message, like this:

2023-04-18T17:21:32+02:00 myserver duo {"epkey": null, "hostname": null, "ip": "xx.xx.xx.xx", END OF THE JSON.........}

Could you please make the change to be compliant? :)
Thanks
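The requested framing (roughly the RFC 3164 shape with an ISO timestamp, as in the example above) could be sketched like this; the tag "duo" is just the example program name from the issue:

```python
import json
import socket
from datetime import datetime, timezone

def frame_syslog_line(payload: dict, tag: str = "duo") -> str:
    """Prepend hostname and a program tag between the timestamp
    and the JSON payload: '<timestamp> <host> <tag> <json>'."""
    timestamp = datetime.now(timezone.utc).astimezone().isoformat(timespec="seconds")
    hostname = socket.gethostname()
    return f"{timestamp} {hostname} {tag} {json.dumps(payload)}"
```

For strict compliance, RFC 5424 additionally defines a `<PRI>` prefix, version, PROCID and MSGID fields; the sketch only covers the hostname/tag placement the issue asks for.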

Connect call failed ('127.0.0.1', 8888)

Team,

We just did a fresh install on Ubuntu Linux and set up Duo Log Sync.

We ran into the errors below.
ubuntu@ip-172-31-39-49:/duo_log_sync/duologsync$ duologsync ./config.yml
Exception ignored in: <bound method Task.__del__ of <Task finished coro=<BaseConsumer.get_connection() done, defined at /usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/consumer/base_consumer.py:15> exception=SystemExit(1,)>>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 93, in __del__
File "/usr/lib/python3.5/asyncio/futures.py", line 215, in __del__
File "/usr/lib/python3.5/asyncio/base_events.py", line 1177, in call_exception_handler
File "/usr/lib/python3.5/logging/__init__.py", line 1308, in error
File "/usr/lib/python3.5/logging/__init__.py", line 1415, in _log
File "/usr/lib/python3.5/logging/__init__.py", line 1425, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1487, in callHandlers
File "/usr/lib/python3.5/logging/__init__.py", line 855, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1047, in emit
File "/usr/lib/python3.5/logging/__init__.py", line 1037, in _open
NameError: name 'open' is not defined
Exception ignored in: <bound method Task.__del__ of <Task pending coro=<TelephonyProducer.telephony_producer() running at /usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/producer/telephony_producer.py:24> wait_for= cb=[gather.<locals>._done_callback(2)() at /usr/lib/python3.5/asyncio/tasks.py:637]>>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 92, in __del__
File "/usr/lib/python3.5/asyncio/base_events.py", line 1177, in call_exception_handler
File "/usr/lib/python3.5/logging/__init__.py", line 1308, in error
File "/usr/lib/python3.5/logging/__init__.py", line 1415, in _log
File "/usr/lib/python3.5/logging/__init__.py", line 1425, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1487, in callHandlers
File "/usr/lib/python3.5/logging/__init__.py", line 855, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1047, in emit
File "/usr/lib/python3.5/logging/__init__.py", line 1037, in _open
NameError: name 'open' is not defined
Exception ignored in: <bound method Task.__del__ of <Task pending coro=<AdminactionProducer.adminaction_producer() running at /usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/producer/adminaction_producer.py:26> wait_for= cb=[gather.<locals>._done_callback(4)() at /usr/lib/python3.5/asyncio/tasks.py:637]>>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 92, in __del__
File "/usr/lib/python3.5/asyncio/base_events.py", line 1177, in call_exception_handler
File "/usr/lib/python3.5/logging/__init__.py", line 1308, in error
File "/usr/lib/python3.5/logging/__init__.py", line 1415, in _log
File "/usr/lib/python3.5/logging/__init__.py", line 1425, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1487, in callHandlers
File "/usr/lib/python3.5/logging/__init__.py", line 855, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1047, in emit
File "/usr/lib/python3.5/logging/__init__.py", line 1037, in _open
NameError: name 'open' is not defined
Exception ignored in: <bound method Task.__del__ of <Task pending coro=<AuthlogProducer.auth_producer() running at /usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/producer/authlog_producer.py:27> wait_for= cb=[gather.<locals>._done_callback(0)() at /usr/lib/python3.5/asyncio/tasks.py:637]>>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 92, in __del__
File "/usr/lib/python3.5/asyncio/base_events.py", line 1177, in call_exception_handler
File "/usr/lib/python3.5/logging/__init__.py", line 1308, in error
File "/usr/lib/python3.5/logging/__init__.py", line 1415, in _log
File "/usr/lib/python3.5/logging/__init__.py", line 1425, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1487, in callHandlers
File "/usr/lib/python3.5/logging/__init__.py", line 855, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1047, in emit
File "/usr/lib/python3.5/logging/__init__.py", line 1037, in _open
NameError: name 'open' is not defined
Exception ignored in: <bound method Task.__del__ of <Task pending coro=<AdminactionConsumer.consumer() running at /usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/consumer/adminaction_consumer.py:20> wait_for= cb=[gather.<locals>._done_callback(5)() at /usr/lib/python3.5/asyncio/tasks.py:637]>>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 92, in __del__
File "/usr/lib/python3.5/asyncio/base_events.py", line 1177, in call_exception_handler
File "/usr/lib/python3.5/logging/__init__.py", line 1308, in error
File "/usr/lib/python3.5/logging/__init__.py", line 1415, in _log
File "/usr/lib/python3.5/logging/__init__.py", line 1425, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1487, in callHandlers
File "/usr/lib/python3.5/logging/__init__.py", line 855, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1047, in emit
File "/usr/lib/python3.5/logging/__init__.py", line 1037, in _open
NameError: name 'open' is not defined
Exception ignored in: <bound method Task.__del__ of <Task pending coro=<TelephonyConsumer.consumer() running at /usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/consumer/telephony_consumer.py:20> wait_for= cb=[gather.<locals>._done_callback(3)() at /usr/lib/python3.5/asyncio/tasks.py:637]>>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 92, in __del__
File "/usr/lib/python3.5/asyncio/base_events.py", line 1177, in call_exception_handler
File "/usr/lib/python3.5/logging/__init__.py", line 1308, in error
File "/usr/lib/python3.5/logging/__init__.py", line 1415, in _log
File "/usr/lib/python3.5/logging/__init__.py", line 1425, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1487, in callHandlers
File "/usr/lib/python3.5/logging/__init__.py", line 855, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1047, in emit
File "/usr/lib/python3.5/logging/__init__.py", line 1037, in _open
NameError: name 'open' is not defined
Exception ignored in: <bound method Task.__del__ of <Task pending coro=<AuthlogConsumer.consumer() running at /usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/consumer/authlog_consumer.py:20> wait_for= cb=[gather.<locals>._done_callback(1)() at /usr/lib/python3.5/asyncio/tasks.py:637]>>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 92, in __del__
File "/usr/lib/python3.5/asyncio/base_events.py", line 1177, in call_exception_handler
File "/usr/lib/python3.5/logging/__init__.py", line 1308, in error
File "/usr/lib/python3.5/logging/__init__.py", line 1415, in _log
File "/usr/lib/python3.5/logging/__init__.py", line 1425, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1487, in callHandlers
File "/usr/lib/python3.5/logging/__init__.py", line 855, in handle
File "/usr/lib/python3.5/logging/__init__.py", line 1047, in emit
File "/usr/lib/python3.5/logging/__init__.py", line 1037, in _open
NameError: name 'open' is not defined
Exception ignored in: <coroutine object AdminactionConsumer.consumer at 0x7f3e974ae468>
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/consumer/adminaction_consumer.py", line 20, in consumer
File "/usr/lib/python3.5/asyncio/queues.py", line 170, in get
File "/usr/lib/python3.5/asyncio/futures.py", line 227, in cancel
File "/usr/lib/python3.5/asyncio/futures.py", line 242, in _schedule_callbacks
File "/usr/lib/python3.5/asyncio/base_events.py", line 497, in call_soon
File "/usr/lib/python3.5/asyncio/base_events.py", line 506, in _call_soon
File "/usr/lib/python3.5/asyncio/base_events.py", line 334, in _check_closed
RuntimeError: Event loop is closed
Exception ignored in: <coroutine object TelephonyConsumer.consumer at 0x7f3e974ae3b8>
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/consumer/telephony_consumer.py", line 20, in consumer
File "/usr/lib/python3.5/asyncio/queues.py", line 170, in get
File "/usr/lib/python3.5/asyncio/futures.py", line 227, in cancel
File "/usr/lib/python3.5/asyncio/futures.py", line 242, in _schedule_callbacks
File "/usr/lib/python3.5/asyncio/base_events.py", line 497, in call_soon
File "/usr/lib/python3.5/asyncio/base_events.py", line 506, in _call_soon
File "/usr/lib/python3.5/asyncio/base_events.py", line 334, in _check_closed
RuntimeError: Event loop is closed
Exception ignored in: <coroutine object AuthlogConsumer.consumer at 0x7f3e974ae308>
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/duologsync-1.0.0-py3.5.egg/duologsync/consumer/authlog_consumer.py", line 20, in consumer
File "/usr/lib/python3.5/asyncio/queues.py", line 170, in get
File "/usr/lib/python3.5/asyncio/futures.py", line 227, in cancel
File "/usr/lib/python3.5/asyncio/futures.py", line 242, in _schedule_callbacks
File "/usr/lib/python3.5/asyncio/base_events.py", line 497, in call_soon
File "/usr/lib/python3.5/asyncio/base_events.py", line 506, in _call_soon
File "/usr/lib/python3.5/asyncio/base_events.py", line 334, in _check_closed
RuntimeError: Event loop is closed
ubuntu@ip-172-31-39-49:~/duo_log_sync/duologsync$ pwd
/home/ubuntu/duo_log_sync/duologsync
ubuntu@ip-172-31-39-49:~/duo_log_sync/duologsync$ cd /tmp
ubuntu@ip-172-31-39-49:/tmp$ ls-lrt
ls-lrt: command not found
ubuntu@ip-172-31-39-49:/tmp$ ls -lrt
total 12
drwx------ 3 root root 4096 Jul 2 09:34 systemd-private-c29b79ac55534ed9a4b6a3030186db9a-systemd-timesyncd.service-NkF2QK
-rw-rw-r-- 1 ubuntu ubuntu 5880 Jul 2 11:28 duologsync.log
ubuntu@ip-172-31-39-49:/tmp$ tail -f duologsync.log
2020-07-02 11:28:52 INFO Starting duologsync...
2020-07-02 11:28:52 INFO Configuration loaded successfully...
2020-07-02 11:28:52 INFO Adminapi initialized for ikey and host ...
2020-07-02 11:28:52 INFO Polling duration set is too low. Defaulting to 2mins...
2020-07-02 11:28:52 INFO Consuming authlogs
2020-07-02 11:28:52 INFO Consuming telephony log...
2020-07-02 11:28:52 INFO Consuming adminaction logs...
2020-07-02 11:28:52 INFO Opening connection to server over tcp...
2020-07-02 11:28:52 ERROR Connection to server failed with exception [Errno 111] Connect call failed ('127.0.0.1', 8888)
2020-07-02 11:28:52 ERROR Terminating the application...
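The final error above means nothing was listening on 127.0.0.1:8888 when DLS tried to connect. A quick sanity check is to run a throwaway TCP sink on that port before starting duologsync; the helper below is a hypothetical test utility, not part of DLS:

```python
import asyncio

async def start_tcp_sink(received, host="127.0.0.1", port=8888):
    """Accept TCP connections and collect each payload into `received`,
    so you can confirm that a client really reaches this host:port."""
    async def handle(reader, writer):
        received.append(await reader.read(65536))
        writer.close()
        await writer.wait_closed()
    return await asyncio.start_server(handle, host, port)
```

If duologsync still reports "Connect call failed" with the sink running, the problem is host/port in the `transport` section of the config rather than the receiver.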

Receiving a 401 error after a long period, no logs sent to TCP receiver.

I am testing this utility. I set up fluent-bit listening on tcp://localhost:8888 and entered the appropriate information into the config, but after a while I receive this error (full stack trace below):

RuntimeError: Received 401 Invalid signature in request credentials

I believe this to be a bug in the duo_client, or in this utility, based on these articles: https://help.duo.com/s/article/5444?language=en_US, https://help.duo.com/s/article/1338?language=en_US

Versions:

[3:00:15] ~/duo_log_sync [master] λ pip3 list | grep duo
duo-client               4.2.3
duologsync               1.0.0

Config:

[2:13:45] ~/duo_log_sync [master] λ cat config.yml
duoclient:
  skey: "<skey>"
  ikey: "<ikey>"
  host: "<host>"

logs:
  logDir: "/Users/hgoscenski/duo_log_sync/log"
  endpoints:
    enabled: ["auth", "telephony", "adminaction"]
  polling:
    duration: 5
    daysinpast: 60
  checkpointDir: "/Users/hgoscenski/duo_log_sync/log"

transport:
  protocol: "TCP"
  host: "localhost"
  port: 8888
  certFileDir: ""
  certFileName: ""

recoverFromCheckpoint:
  enabled: False

Error:

[1:55:04] ~/duo_log_sync [master] λ duologsync /Users/hgoscenski/duo_log_sync/config.yml
Traceback (most recent call last):
  File "/usr/local/bin/duologsync", line 11, in <module>
    load_entry_point('duologsync==1.0.0', 'console_scripts', 'duologsync')()
  File "/usr/local/lib/python3.7/site-packages/duologsync-1.0.0-py3.7.egg/duologsync/app.py", line 12, in main
  File "/usr/local/lib/python3.7/site-packages/duologsync-1.0.0-py3.7.egg/duologsync/duo_log_sync_base.py", line 79, in start
  File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.7/site-packages/duologsync-1.0.0-py3.7.egg/duologsync/producer/authlog_producer.py", line 32, in auth_producer
  File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.7/site-packages/duo_client-4.2.3-py3.7.egg/duo_client/admin.py", line 455, in get_authentication_log
    params,
  File "/usr/local/lib/python3.7/site-packages/duo_client-4.2.3-py3.7.egg/duo_client/client.py", line 383, in json_api_call
    return self.parse_json_response(response, data)
  File "/usr/local/lib/python3.7/site-packages/duo_client-4.2.3-py3.7.egg/duo_client/client.py", line 409, in parse_json_response
    (response, metadata) = self.parse_json_response_and_metadata(response, data)
  File "/usr/local/lib/python3.7/site-packages/duo_client-4.2.3-py3.7.egg/duo_client/client.py", line 438, in parse_json_response_and_metadata
    data['message'],
  File "/usr/local/lib/python3.7/site-packages/duo_client-4.2.3-py3.7.egg/duo_client/client.py", line 422, in raise_error
    raise error
RuntimeError: Received 401 Invalid signature in request credentials
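For context, the linked Duo help articles reportedly attribute "Invalid signature" to an incorrect skey or to local clock drift, since the signature covers the request's Date header. A minimal sketch of the documented Admin API signing scheme (HMAC-SHA1 over the date, method, host, path, and canonicalized params), which shows why a skewed clock produces a signature the server rejects:

```python
import hashlib
import hmac

def sign_request(skey, date, method, host, path, canon_params=""):
    """Compute the Duo API request signature: HMAC-SHA1 (hex) of the
    canonical string. A stale Date header changes the canon string,
    so the server-side verification fails with a 401."""
    canon = "\n".join([date, method.upper(), host.lower(), path, canon_params])
    return hmac.new(skey.encode(), canon.encode(), hashlib.sha1).hexdigest()
```

Checking that the machine's clock is NTP-synced (and that the skey was copied without trailing whitespace) is usually the first diagnostic step for this error.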

Send logs to AWS ELB

Not sure if this is already supported, but I am not able to send the data to my AWS ELB. The duosync.log says it's writing the logs, but I don't see them getting written/sent.

Config:

version: '1.0.0'
servers:
  - id: 'data-pipeline-dev'
    hostname: '***.elb.us-east-1.amazonaws.com'
    port: 8081
    protocol: 'TCP'

account:
  ikey: '***'
  skey: '***'
  hostname: 'api-***.duosecurity.com'
  endpoint_server_mappings:
    - endpoints: ['adminaction', 'auth']
      server: 'data-pipeline-dev'
  is_msp: False

duosync.log

2020-09-24 16:14:56 INFO     Starting DuoLogSync
2020-09-24 16:14:56 INFO     DuoLogSync: Opening connection to ***elb.us-east-1.amazonaws.com:8081
2020-09-24 16:14:56 INFO     duo_client Admin initialized for ikey: *****, host: api-**.duosecurity.com
2020-09-24 16:14:56 INFO     adminaction producer: fetching next logs after 120 seconds
2020-09-24 16:14:56 INFO     adminaction consumer: waiting for logs
2020-09-24 16:14:56 INFO     auth producer: fetching next logs after 120 seconds
2020-09-24 16:14:56 INFO     auth consumer: waiting for logs
2020-09-24 16:16:56 INFO     adminaction producer: fetching logs
2020-09-24 16:16:56 INFO     auth producer: fetching logs
2020-09-24 16:16:57 INFO     adminaction producer: adding 35 logs to the queue
2020-09-24 16:16:57 INFO     adminaction producer: added 35 logs to the queue
2020-09-24 16:16:57 INFO     adminaction producer: fetching next logs after 120 seconds
2020-09-24 16:16:57 INFO     adminaction consumer: received 35 logs from producer
2020-09-24 16:16:57 INFO     adminaction consumer: writing logs
2020-09-24 16:16:57 INFO     adminaction consumer: successfully wrote all logs
2020-09-24 16:16:57 INFO     adminaction consumer: saving latest log offset to a checkpointing file
2020-09-24 16:16:57 INFO     adminaction consumer: waiting for logs
2020-09-24 16:17:00 INFO     auth producer: adding 6 logs to the queue
2020-09-24 16:17:00 INFO     auth producer: added 6 logs to the queue
2020-09-24 16:17:00 INFO     auth producer: fetching next logs after 120 seconds
2020-09-24 16:17:00 INFO     auth consumer: received 6 logs from producer
2020-09-24 16:17:00 INFO     auth consumer: writing logs
2020-09-24 16:17:00 INFO     auth consumer: successfully wrote all logs
2020-09-24 16:17:00 INFO     auth consumer: saving latest log offset to a checkpointing file
2020-09-24 16:17:00 INFO     auth consumer: waiting for logs
2020-09-24 16:18:57 INFO     adminaction producer: fetching logs
2020-09-24 16:18:57 INFO     adminaction producer: no new logs available
2020-09-24 16:18:57 INFO     adminaction producer: fetching next logs after 120 seconds
2020-09-24 16:19:00 INFO     auth producer: fetching logs
2020-09-24 16:19:04 INFO     auth producer: adding 1 logs to the queue
2020-09-24 16:19:04 INFO     auth producer: added 1 logs to the queue
2020-09-24 16:19:04 INFO     auth producer: fetching next logs after 120 seconds
2020-09-24 16:19:04 INFO     auth consumer: received 1 logs from producer
2020-09-24 16:19:04 INFO     auth consumer: writing logs
2020-09-24 16:19:04 INFO     auth consumer: successfully wrote all logs
2020-09-24 16:19:04 INFO     auth consumer: saving latest log offset to a checkpointing file
2020-09-24 16:19:04 INFO     auth consumer: waiting for logs
2020-09-24 16:20:58 INFO     adminaction producer: fetching logs
2020-09-24 16:20:59 INFO     adminaction producer: no new logs available
2020-09-24 16:20:59 INFO     adminaction producer: fetching next logs after 120 seconds
2020-09-24 16:21:04 INFO     auth producer: fetching logs
2020-09-24 16:21:09 INFO     auth producer: adding 0 logs to the queue
2020-09-24 16:21:09 INFO     auth producer: added 0 logs to the queue
2020-09-24 16:21:09 INFO     auth producer: fetching next logs after 120 seconds
2020-09-24 16:21:09 INFO     auth consumer: received 0 logs from producer
2020-09-24 16:21:09 INFO     auth consumer: No logs to write
2020-09-24 16:21:09 INFO     auth consumer: waiting for logs

Option to exclude fields

Is there a way to exclude specific fields from being sent? I wasn't able to find anything in the config file that would allow for that. Currently we're seeing an issue where the logs do not parse correctly because we're hitting the limit of 2,048 characters per field.

This seems to be because the groups portion causes the data to exceed the maximum characters. Our users may be part of up to 10 different groups, so the string is being cut off mid-value.

If there were a way to exclude the users' groups from sending, that would resolve the issue.

The current workaround is to send the data as CEF instead of JSON, but then the data that is sent is limited. Alternatively, we can remove some groups from the users; then the data fits within 2,048 characters and is parsed by our SIEM.
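No such option exists today, as the issue notes; a sketch of what field exclusion could look like, applied to each log dict before serialization. The dotted-path syntax and the `exclude` parameter are hypothetical:

```python
import json

def drop_fields(log: dict, exclude=("user.groups",)):
    """Return a copy of a log record with the given dotted-path fields
    removed (e.g. 'user.groups' drops log['user']['groups'])."""
    out = json.loads(json.dumps(log))  # cheap deep copy of JSON-safe data
    for path in exclude:
        node = out
        *parents, leaf = path.split(".")
        for key in parents:
            node = node.get(key, {})
        if isinstance(node, dict):
            node.pop(leaf, None)
    return out
```

Applied just before the consumer writes to the transport, this would keep oversized list fields like group memberships from blowing past the SIEM's per-field limit.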

Feature Requests: Fetch volume information and Offset in DLS Logs, Adjustable fetch amount

Hi there!
I'd like to submit a request for future updates to DLS, specifically a bit more information in the logging and more options in the config.
In the DLS logs it would be really nice to see the offset timestamp (it can be a pain to read the string in the offset file).
Also, it would be great to get an estimate of the total log volume still to fetch. I don't know if the API supports this, but seeing "auth producer: adding 1000 logs to the queue" gives no indication of how many batches are pending.

On a similar note, I understand the API limitation of 120 seconds between requests, but is there also a limit of 1,000 records per request?
When catching up after an interruption in DLS forwarding, it can take a very long time to retrieve all of the auth logs at 1,000 logs per request, even with our offset at only 3 days.

Thank you!

Defaults in example_config.yml do not work well with Trust Monitor endpoint

The default 180 day maximum in the example_config.yml is not handled properly by the API and results in a 400 error. Also, the default timeout value of 120 can result in a 429 error due to how the duo_client automatically handles silent retries for API calls. Inclusion of the "activity" endpoint will also cause the duo_log_sync app to fail when that endpoint is not available.

version: "1.0.0"

dls_settings:
  log_filepath: "/tmp/duologsync.log"
  log_format: "JSON"

  api:
    offset: 179
    timeout: 150

  checkpointing:
    enabled: True
    directory: "/tmp"

servers:
  - id: "test"
    hostname: "127.0.0.1"
    port: 8888
    protocol: "TCP"

account:
  ikey: "admin-api-ikey"
  skey: "admin-api-skey"
  hostname: "host.name.com"

  endpoint_server_mappings:
    - endpoints:
        ["adminaction", "auth", "telephony", "trustmonitor"]
      server: "test"
  is_msp: False

Mintime errors

Hi, I'm receiving errors with the trustmonitor, auth, and activity logs.

2023-04-22 20:59:55 WARNING DuoLogSync: Shutting down due to [trustmonitor producer: [Received 400 Invalid request parameters (mintime must be within the past 180 days)]]
2023-04-22 20:59:55 WARNING DuoLogSync: Shutting down due to [activity producer: [Received 400 Invalid request parameters ('mintime' must be a timestamp in milliseconds)]]
2023-04-22 20:59:54 WARNING DuoLogSync: Shutting down due to [auth producer: [Received 400 Invalid request parameters ('mintime' must be a timestamp in milliseconds)]]

I am receiving queued admin action and telephony logs. I was receiving logs successfully until I updated the code to the recently updated version.
2023-04-22 20:59:55 ERROR DuoLogSync: check that the duoclient ikey and skey in the config file are correct

Have tried converting past 180 days to milliseconds to set mintime, with no success.
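Per the API errors quoted above, these endpoints want `mintime` as epoch milliseconds and no older than 180 days. A sketch of computing a value that satisfies both constraints (the helper name and the clamping policy are illustrative, not DLS's actual logic):

```python
from datetime import datetime, timedelta, timezone

def mintime_ms(days_back: int, max_days: int = 180) -> int:
    """Epoch milliseconds for `days_back` days ago, clamped to stay
    strictly inside the API's max_days window."""
    days_back = min(days_back, max_days - 1)
    start = datetime.now(timezone.utc) - timedelta(days=days_back)
    return int(start.timestamp() * 1000)
```

Passing seconds instead of milliseconds, or a value just outside the 180-day window, reproduces exactly the two 400 errors shown in the logs above.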

Documentation Poorly Written

Happy to see Duo moving forward with this, but in its current state the documentation is barely there. The installation section details what is needed to install the app, which is great. What it doesn't mention is:

  • You need the Admin API enabled with the permissions specified under the Splunk connector: Grant Read Information, Grant Read Log, Grant Read Resource. This is briefly mentioned on the Duo page linking here, but even then it doesn't state the permissions required.

  • The destination/transport is where you are passing the data to; in my case ELK (specifically Logstash). So you need to make sure the relevant ports/paths are open on the destination and that the destination is listening. This was not clear at all in the installation instructions. I originally thought the tool was creating a JSON file, based on my reading, but as I tinkered more I figured out it formats the logs as JSON and passes them to the transport. I had to google Python errors to get there.

  • The errors in the log it writes aren't complete. I still don't have this working and am stuck at "failed to write data to transport with". With no documentation support, that makes it hard to get this going.

Hoping we receive better installation instructions/guide as this goes forward.

Receiving error after first polling

Traceback (most recent call last):
File "/usr/local/bin/duologsync", line 33, in <module>
sys.exit(load_entry_point('duologsync==1.0.0', 'console_scripts', 'duologsync')())
File "/usr/local/lib/python3.6/site-packages/duologsync-1.0.0-py3.6.egg/duologsync/app.py", line 62, in main
File "/usr/lib64/python3.6/asyncio/base_events.py", line 484, in run_until_complete
return future.result()
File "/usr/local/lib/python3.6/site-packages/duologsync-1.0.0-py3.6.egg/duologsync/consumer/consumer.py", line 84, in consume
File "/usr/local/lib/python3.6/site-packages/duologsync-1.0.0-py3.6.egg/duologsync/producer/producer.py", line 157, in get_log_offset
AttributeError: module 'six' has no attribute 'ensure_str'

Quits randomly - ssl.c error

2021-01-06 03:34:44 ERROR    DuoLogSync: check that the duoclient host and/or proxy_server provided in the config file is correct
2021-01-06 03:34:44 WARNING  DuoLogSync: Shutting down due to [adminaction producer: [EOF occurred in violation of protocol (_ssl.c:877)]]

My config is correct; it will run fine for a few days, a week, etc., and then it will just stop.

SSL Error.

Hi

Python 3.9.1
duo_log_sync duo_log_sync-authlog_export_script.zip

config.yml
duoclient:
skey: "xxxx"
ikey: "xxx"
host: "xxxx"

logs:
logDir: "/tmp"
endpoints:
enabled: ["auth", "telephony", "adminaction"]
polling:
duration: 5
daysinpast: 180
checkpointDir: "/tmp"

transport:
protocol: "TCP"
host: "localhost"
port: 515
certFileDir: "/etc/ssl/certs"
certFileName: "ca-bundle.crt"

recoverFromCheckpoint:
enabled: False

But I get an error. Communication with the Duo API host seems sparse.
What is the error, and how can I deal with it?

Traceback (most recent call last):
File "/usr/local/python-3.9.1/bin/duologsync", line 33, in <module>
sys.exit(load_entry_point('duologsync==1.0.0', 'console_scripts', 'duologsync')())
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duologsync-1.0.0-py3.9.egg/duologsync/app.py", line 12, in main
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duologsync-1.0.0-py3.9.egg/duologsync/duo_log_sync_base.py", line 79, in start
File "/usr/local/python-3.9.1/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duologsync-1.0.0-py3.9.egg/duologsync/producer/telephony_producer.py", line 26, in telephony_producer
File "/usr/local/python-3.9.1/lib/python3.9/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duo_client/admin.py", line 494, in get_telephony_log
response = self.json_api_call(
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duo_client/client.py", line 382, in json_api_call
(response, data) = self.api_call(method, path, params)
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duo_client/admin.py", line 195, in api_call
return super(Admin, self).api_call(method, path, params)
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duo_client/client.py", line 269, in api_call
return self._make_request(method, uri, body, encoded_headers)
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duo_client/client.py", line 337, in _make_request
response, data = self._attempt_single_request(
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duo_client/client.py", line 350, in _attempt_single_request
conn.request(method, uri, body, headers)
File "/usr/local/python-3.9.1/lib/python3.9/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/python-3.9.1/lib/python3.9/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/python-3.9.1/lib/python3.9/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/python-3.9.1/lib/python3.9/http/client.py", line 1010, in _send_output
self.send(msg)
File "/usr/local/python-3.9.1/lib/python3.9/http/client.py", line 950, in send
self.connect()
File "/usr/local/python-3.9.1/lib/python3.9/site-packages/duo_client/https_wrapper.py", line 121, in connect
self.sock = self.default_ssl_context.wrap_socket(self.sock)
File "/usr/local/python-3.9.1/lib/python3.9/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/usr/local/python-3.9.1/lib/python3.9/ssl.py", line 1040, in _create
self.do_handshake()
File "/usr/local/python-3.9.1/lib/python3.9/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ConnectionResetError: [Errno 104] Connection reset by peer

Support Python 3.9+

Requesting support to run DLS with versions of Python newer than 3.8.

Detailed Description

Provide support for Python 3.9+ (3.9-3.12 as of 05/31/2024)

Use Case

We run DLS on Windows and cannot install Python 3.8.19 without having to compile it internally. Python 3.8 goes EOL in October 2024 (this year).

demo data emitter

It would be nice for testing if this tool could emit demo data, without pulling real log data, for use in dev and staging.
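A demo emitter could be as simple as generating synthetic, auth-log-shaped records and feeding them to the existing consumers. Nothing like this exists in DLS today; the field names below are merely shaped like Duo auth log entries, not an exact schema:

```python
import json
import random
from datetime import datetime, timezone

def fake_auth_log() -> dict:
    """Produce one synthetic auth-log-style record for dev/staging tests.
    Uses the TEST-NET-3 address range so no real IPs appear in test data."""
    return {
        "timestamp": int(datetime.now(timezone.utc).timestamp()),
        "eventtype": "authentication",
        "result": random.choice(["success", "denied"]),
        "ip": "203.0.113.{}".format(random.randint(1, 254)),
        "factor": random.choice(["duo_push", "phone_call", "sms"]),
    }
```

Pointing the consumer at a stream of these records would exercise the transport, serialization, and checkpointing paths without touching the Admin API.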

Authentication to Consumer

It seems that the expectation is that the consumer endpoint will accept all incoming requests. For a security tool, that seems a little unusual. For instance, when sending to something like Azure Log Analytics, we need to be able to set up a header with the workspace ID and key. Is there any plan to improve this tool soon, or to add a specific connector for Azure Log Analytics?

duologsync crashes on timestamp formatting

When running, the logs do get sent, but the app will crash with:

ValueError: time data '2020-09-29T15:06:14+00:00' does not match format '%Y-%m-%dT%H:%M:%S.%f+00:00'

Looking at the logs I'm receiving and I didn't see that isotimestamp contains the microsecond part of the formatting:

"isotimestamp": "2020-09-29T15:06:14+00:00"

I checked through a few hundred entries and didn't see it anywhere. Removing the .%f from duologsync/producer/producer.py and reinstalling fixed this for me.

Note that I do not know if the time formatting is configurable from the Duo side or if this is the default way the isotimestamp is configured. It would probably be better for any time stamp formatting to be put into the config and not hardcoded.
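For reference, a parser that tolerates the timestamp both with and without the microsecond part might look like the sketch below. The helper name is hypothetical, not actual duologsync code; it just tries the fractional-second format first and falls back.

```python
from datetime import datetime

def parse_isotimestamp(timestamp):
    """Parse a Duo isotimestamp that may or may not carry microseconds.

    Hypothetical helper mirroring the values quoted above, not the
    real producer.py implementation.
    """
    # %z accepts the "+00:00" offset form on Python 3.7+
    for fmt in ("%Y-%m-%dT%H:%M:%S.%f%z", "%Y-%m-%dT%H:%M:%S%z"):
        try:
            return datetime.strptime(timestamp, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp format: {timestamp!r}")
```

With this approach both `2020-09-29T15:06:14+00:00` and a microsecond-bearing variant parse cleanly, so neither hardcoded format crashes the app.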

Error starting application in windows

A simple bug when running this on Windows.
Because I didn't have a /tmp folder, trying to start the application produced the following result, even though my config file was there.

C:>duologsync c:/config.yml
ERROR:root:Config file not found at location c:/config.yml...
ERROR:root:Please check path again...

As a quick-and-dirty diagnostic, I changed the exception handling in config_generator to spit out what was actually happening, and that's when I discovered it wanted the /tmp folder to exist.

except Exception as e:
    logging.error("Config file not found at location {}...".format(config_file_path))
    logging.error("Please check path again...")
    logging.error(str(e))
    sys.exit(1)

C:\duo_log_sync>duologsync c:/config.yml
ERROR:root:Config file not found at location c:/config.yml...
ERROR:root:Please check path again...
ERROR:root:[Errno 2] No such file or directory: 'C:\tmp\duologsync.log'

Not sure if it should just be a documentation improvement, or if the logging should be improved in config_generator.py.
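If the fix ends up in code rather than docs, the standard library can already pick a temp directory that exists on both platforms. A sketch (not the actual duologsync code) of a cross-platform default log path:

```python
import os
import tempfile

def default_log_path(filename="duologsync.log"):
    """Return a log path in the platform's temp directory.

    Illustrative only: duologsync currently hardcodes /tmp, which does
    not exist on a stock Windows install.
    """
    log_dir = tempfile.gettempdir()  # e.g. C:\Users\...\Temp on Windows, /tmp on Unix
    os.makedirs(log_dir, exist_ok=True)  # no-op if it already exists
    return os.path.join(log_dir, filename)
```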

DuoLogSync: Shutting down due to [telephony producer: [Received 401 Invalid integration key in request credentials]]

Hi there,
In DUO admin I created an application called DuoSyncLog with type "Auth API" and used the integration key and secret key shown in the config.yml.
However when I started duo sync log I received the following error:

2020-10-30 00:12:37 INFO     Starting DuoLogSync
2020-10-30 00:12:37 INFO     DuoLogSync: Opening connection to mysyslogserver:514
2020-10-30 00:12:37 INFO     duo_client Admin initialized for ikey: DIIR45GSRB91OIDAA58C, host: api-12345.duosecurity.com
2020-10-30 00:12:37 INFO     telephony producer: fetching next logs after 120 seconds
2020-10-30 00:12:37 INFO     telephony consumer: waiting for logs

2020-10-30 00:14:37 INFO     telephony producer: fetching logs
2020-10-30 00:14:38 ERROR    DuoLogSync: check that the duoclient ikey and skey in the config file are correct
2020-10-30 00:14:38 WARNING  DuoLogSync: Shutting down due to [telephony producer: [Received 401 Invalid integration key in request credentials]]
2020-10-30 00:14:38 INFO     telephony producer: shutting down
2020-10-30 00:14:38 INFO     telephony consumer: shutting down

What could I be doing wrong? Am I using the wrong application type here in DUO?

appreciate any feedback!

Thanks J

Log streaming

This is a fairly large deployment with about 26,000 active users, so I expect there to be a lot of data to catch up on given I have just started this process.
I am running Python 3.8.8 on Windows, and I started with an offset of 10 days, so that if something were to bug out on the Windows box I would have 10 days to find it and restart the DLS script (at least that was my thinking).
I started the DLS script last night, and it started pulling in logs from ten days ago, so I figured I would let it run overnight and catch up.
I checked this morning, and according to the DLS logs it was pulling in 1-7 logs at each interval. Sweet, so it caught up, or so I thought.
I made a small change in the log file placement and restarted the DLS script, and I was surprised to see it pulling in 1000 logs at a time again. But then this is a larger implementation, so perhaps some mass user edits were happening. I checked the logs that were being brought in, and come to find out it was bringing in new logs from ten days ago and working its way back to current time.

I have checked a couple of the Auth messages and the timestamps in them are not repeats, and I have not removed or edited the checkpoint files. So it leaves me wondering whether I need to continually restart the process, say every 8 hours, to ensure I am capturing all the logs from the Duo cloud, or whether I need to downgrade Python to 3.7 or 3.6, or move it over to a Linux box, to ensure it processes the files effectively and gathers logs reliably. I just happened to have a spare Windows box lying around in my VM space, so I figured I would use it.

As a side note, I was hoping that commenting out the duologsync portion of the config would cause the log file to not be written, but instead it just started a new file in the c:/tmp folder. Some guidance on log rotation for non-Windows specialists would probably be helpful, or the script could handle rotation itself and let the user specify how big the files can grow or how many to keep.
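One possible shape for built-in rotation, sketched with the standard library (duologsync does not expose any of these knobs today; the size and backup count are arbitrary examples):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Hypothetical rotation setup: cap the DLS log at ~5 MB and keep three
# rotated backups. The script would need to wire this into its own
# logging configuration for users to benefit.
log_path = os.path.join(tempfile.gettempdir(), "duologsync.log")
handler = RotatingFileHandler(log_path, maxBytes=5 * 1024 * 1024,
                              backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger = logging.getLogger("duologsync.demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("rotation-enabled logging initialized")
handler.flush()
```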

adminaction consumer: failed to write some logs

I am getting the following error about the adminaction consumer being unable to write logs. Any ideas?

2020-12-22 23:19:50 INFO Starting DuoLogSync
2020-12-22 23:19:50 INFO DuoLogSync: Opening connection to ls01-dev-qa.aofk.net:2514
2020-12-22 23:19:50 INFO duo_client Admin initialized for ikey: *******, host: api-**.duosecurity.com
2020-12-22 23:19:50 ERROR Could not read checkpoint file for adminaction logs, consuming logs from {log_offset} timestamp
2020-12-22 23:19:50 ERROR Could not read checkpoint file for auth logs, consuming logs from {log_offset} timestamp
2020-12-22 23:19:50 INFO adminaction producer: fetching next logs after 120 seconds
2020-12-22 23:19:50 INFO adminaction consumer: waiting for logs
2020-12-22 23:19:50 INFO auth producer: fetching next logs after 120 seconds
2020-12-22 23:19:50 INFO auth consumer: waiting for logs
2020-12-22 23:21:50 INFO adminaction producer: fetching logs
2020-12-22 23:21:50 INFO auth producer: fetching logs
Traceback (most recent call last):
2020-12-22 23:21:50 INFO adminaction producer: adding 57 logs to the queue
2020-12-22 23:21:50 INFO adminaction producer: added 57 logs to the queue
2020-12-22 23:21:50 INFO adminaction producer: fetching next logs after 120 seconds
2020-12-22 23:21:50 INFO adminaction consumer: received 57 logs from producer
2020-12-22 23:21:50 INFO adminaction consumer: writing logs
2020-12-22 23:21:50 WARNING adminaction consumer: failed to write some logs
File "/usr/local/lib/python3.6/dist-packages/duologsync-2.0.0-py3.6.egg/duologsync/consumer/consumer.py", line 66, in consume
File "/usr/local/lib/python3.6/dist-packages/duologsync-2.0.0-py3.6.egg/duologsync/writer.py", line 97, in write
File "/usr/lib/python3.6/asyncio/streams.py", line 329, in drain
raise exc
File "/usr/lib/python3.6/asyncio/selector_events.py", line 714, in _read_ready
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/bin/duologsync", line 11, in
load_entry_point('duologsync==2.0.0', 'console_scripts', 'duologsync')()
File "/usr/local/lib/python3.6/dist-packages/duologsync-2.0.0-py3.6.egg/duologsync/app.py", line 78, in main
File "/usr/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
return future.result()
File "/usr/local/lib/python3.6/dist-packages/duologsync-2.0.0-py3.6.egg/duologsync/consumer/consumer.py", line 88, in consume
File "/usr/local/lib/python3.6/dist-packages/duologsync-2.0.0-py3.6.egg/duologsync/producer/producer.py", line 205, in get_log_offset
TypeError: 'NoneType' object is not subscriptable
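For anyone debugging the same crash: the root cause above is a `ConnectionResetError` during the write, which then cascades. A hedged sketch of a retry wrapper that backs off on connection resets instead of crashing (the `send` coroutine and its signature are made up for illustration, not duologsync's real writer API):

```python
import asyncio

async def write_with_retry(send, logs, retries=3, delay=5):
    """Retry a batch write when the peer resets the connection.

    Hypothetical wrapper: 'send' is any coroutine that writes one batch
    of logs; the real consumer would also need to reconnect the socket
    before retrying.
    """
    for attempt in range(1, retries + 1):
        try:
            await send(logs)
            return True
        except ConnectionResetError:
            if attempt == retries:
                raise  # exhausted retries; surface the original error
            await asyncio.sleep(delay)  # back off before trying again
    return False
```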

Customize Log Storage Location

Hi-

I want to use this to synchronize logs with our SIEM across multiple customers, some of whom are on our MSP tenant and some of whom have their own tenant that we don't manage. Therefore, I need to run multiple instances of duologsync and have the log files and duologsync.log go to different subdirectories under /tmp.

Thanks!

File or STDOUT transport

It would be nice if a transport were available that, rather than sending the logs to some server, output them into files, or alternatively to STDOUT.
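A minimal sketch of what such a transport could look like, writing each record as a JSON line to STDOUT (the class and its `write()` signature are illustrative, not a real duologsync interface):

```python
import json
import sys

class StdoutWriter:
    """Sketch of a stdout 'transport': emit each log record as one JSON
    line, which plays well with shell pipelines and file redirection."""

    def write(self, logs):
        for log in logs:
            sys.stdout.write(json.dumps(log) + "\n")
        sys.stdout.flush()  # don't buffer when piped into another tool
```

Usage would then be as simple as `duologsync config.yml > duo.jsonl`, with each line independently parseable.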
