
ticktock's Introduction

  • πŸ‘‹ Hi, my name is Yongtao You. I'm an independent software professional.
  • 👀 I'm currently working on an open-source DevOps tool called TickTockDB, a time series database.
  • πŸ“« I can be reached at [email protected]

ticktock's People

Contributors

dependabot[bot], jens-ylja, ylin30, ytyou


ticktock's Issues

API returns 404 or 500

I'm trying to send data from Home Assistant to ticktock via the RESTful command, but the API keeps returning 404 (when method=put) or 500 (method=post). This is my payload:

b'put cpu _usage 1674096240.256369 0'

with content type application/x-www-form-urlencoded.

Do we need to create the table/metric in advance?
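For what it's worth, the payload above contains a space between cpu and _usage, so a plain-format parser would read cpu as the metric and _usage as the (non-numeric) timestamp. A minimal sketch of a line builder that guards against this (the function name and validation are illustrative, not TickTockDB code):

```python
def format_put_line(metric, timestamp, value, tags=None):
    """Build an OpenTSDB-style plain 'put' line.

    Rejects whitespace inside the metric name, since a line like
    'put cpu _usage 1674096240 0' would be parsed as metric 'cpu'
    with a non-numeric timestamp.
    """
    if any(c.isspace() for c in metric):
        raise ValueError("metric name must not contain whitespace: %r" % metric)
    parts = ["put", metric, str(timestamp), str(value)]
    for k, v in (tags or {}).items():
        parts.append(f"{k}={v}")
    return " ".join(parts)
```

With OpenTSDB-style plain puts, metrics are typically created on first write, so no table/metric should need to exist in advance.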

TT doesn't honor downsampling interval of milliseconds

Hello,
unfortunately I have to file another issue related to querying data from TickTock.
I saw this right in my first experiments with the provided TT & Grafana Docker bundle, but interpreted it as a stupid user error on my part.

But now, with my "production" setup the same issue happens and thus I dug a little deeper.

The effect: When drilling down into a metric with Grafana - shortening the inspected interval more and more - everything works well down to a certain interval duration. When shortening the interval further, the response data collapses to a single point (or a few points).
How long this "magic" interval is depends on screen width and resolution - on my 2560x1440 monitor this happens between 10- and 5-minute intervals; on my mobile phone, 5 minutes still works.

The trigger for this effect is the maximum number of displayable data points and - derived from this - the downsampling interval used by Grafana. Everything works as long as the downsampling interval is one second or more. Data collapses once Grafana switches to a millisecond interval.

I verified this at the command line. The original data in the database has one point every 10 seconds.

For reference - number of metrics and total number of data points:

$ wget -O - --quiet 'http://localhost:6182/api/query?start=1681662000000&end=1681662600000&m=none:1s-avg:s10e.power{_field=Batterie}' | jq '.[].metric' | wc -l
1
$ wget -O - --quiet 'http://localhost:6182/api/query?start=1681662000000&end=1681662600000&m=none:10s-avg:s10e.power{_field=Batterie}' | jq '.[].dps' | wc -l
62
$ wget -O - --quiet 'http://localhost:6182/api/query?start=1681662000000&end=1681662600000&m=none:1s-avg:s10e.power{_field=Batterie}' | jq '.[].dps' | wc -l
62

This collapses to two points when switching to a 500 milliseconds downsampling interval:

$ wget -O - --quiet 'http://localhost:6182/api/query?start=1681662000000&end=1681662600000&m=none:500ms-avg:s10e.power{_field=Batterie}' | jq '.[].dps'
{
  "1681662000": -334.68,
  "1681662500": -570
}

It collapses to a single point when writing 1s as 1000ms:

$ wget -O - --quiet 'http://localhost:6182/api/query?start=1681662000000&end=1681662600000&m=none:1000ms-avg:s10e.power{_field=Batterie}' | jq '.[].dps'
{
  "1681662000": -373.9
}

I've dug even deeper and the result is: NNNms is simply interpreted as NNNs.

Note:
My TT instance is configured without tsdb.timestamp.resolution and thus works with the default tsdb.timestamp.resolution = second. I haven't tested whether the behaviour is the same with tsdb.timestamp.resolution = millisecond.

Proposed solution:

  • TT should accept and interpret milliseconds in the downsampling interval.
  • If the downsampling interval is < 1s for databases with tsdb.timestamp.resolution = second, it should be interpreted as if 1s was given as the interval.
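The proposed behaviour could be sketched like this (a hypothetical parser, not TickTockDB code; the unit table and the clamping rule follow the two bullets above):

```python
import re

# Hypothetical parser illustrating the proposed behaviour: accept 'ms'
# units, and clamp sub-second intervals to 1s when the database stores
# second-resolution timestamps.
def parse_downsample(spec, resolution="second"):
    m = re.fullmatch(r"(\d+)(ms|s|m|h)-(\w+)", spec)
    if not m:
        raise ValueError("bad downsample spec: %r" % spec)
    n, unit, func = int(m.group(1)), m.group(2), m.group(3)
    interval_ms = n * {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000}[unit]
    if resolution == "second" and interval_ms < 1000:
        interval_ms = 1000  # interpret as if 1s had been given
    return interval_ms, func
```

Under this sketch, 500ms-avg and 1000ms-avg both yield a one-second interval on a second-resolution database, instead of being misread as 500s/1000s.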

tt process hangs after ping request

Hello there!

First of all, the project doesn't build due to: include/kv.h:60:46: error: 'std::string' has not been declared, which can be simply fixed by adding #include <string> to include/kv.h

After building, I simply run ./bin/tt -c conf/tt.conf and, in another console, run curl -XPOST http://127.0.0.1:6182/api/admin?cmd=ping; after that the tt process starts eating 100% of one of four CPU cores.

Checked versions: v0.11.8 - v0.12.1
Build command # make -f Makefile.ubuntu all
Build log: https://pastebin.com/ZscYapa5
System info: https://pastebin.com/qDnux7hs
$ ./bin/tt -c conf/tt.conf output: https://pastebin.com/34BP8YDw
ticktock.log (log.level = DEBUG): https://pastebin.com/mD5rWitT

json write should return !ok if --http.request.format=plain

0.11.7

By default TT runs with --http.request.format=plain, and /api/put only accepts plain put. However, if /api/put receives json, it still returns 200.

The other way around works correctly, i.e., with --http.request.format=json, /api/put fails if it receives a plain put.
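The missing check could look roughly like this (a sketch; the function and the format-detection heuristic are illustrative, not TickTockDB internals):

```python
import json

# Sketch of the check this issue asks for: in plain mode, a JSON body
# should be rejected (non-2xx) instead of silently returning 200.
def body_matches_format(body: str, request_format: str) -> bool:
    if request_format == "plain":
        return body.lstrip().startswith("put ")
    if request_format == "json":
        try:
            json.loads(body)
            return True
        except ValueError:
            return False
    raise ValueError("unknown http.request.format: %r" % request_format)
```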

TT returns inconsistent data on query

Hello,

I experimented a lot with the line format (OpenTSDB vs. Influx) and the general layout of my time series.
As a result, the data stored in TT became a bit of a muddle.
So I decided to clean up. But I don't want to lose my data, so I decided to fetch it all from TT, reorganize, clean up, etc., and finally re-insert it into a new, clean TT instance.

Unfortunately I failed at the first task - fetching it all.
I tried the following (as a bash script):

metric=some.metric
for month in 3 4
do
   for range in `ls -1rt /ticktock/ticktock/data/2023/${month} | grep -v '.back'`
   do
       wget -O - 'http://localhost:6182/api/query?start='`echo $range | sed -e 's/[.]/\&end=/'`'&m=none:'${metric} > ${range}.${metric}.json
       sleep 1
   done
done
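The sed trick above turns a rollup directory name of the form <start>.<end> (epoch seconds) into the query's start/end parameters; the same step in Python could look like this (the helper name is mine):

```python
from urllib.parse import urlencode

# Hypothetical helper mirroring the sed trick: a data directory named
# '<start>.<end>' (epoch seconds) becomes start/end query parameters.
# Taking only the first two components also handles '.back' suffixes.
def range_query_url(dirname, metric, host="localhost", port=6182):
    start, end = dirname.split(".")[:2]
    params = urlencode({"start": start, "end": end, "m": "none:" + metric})
    return f"http://{host}:{port}/api/query?{params}"
```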

This wrote some files with data and some with only [].
Running it a second time sometimes returned different results.

To verify whether all the data was gone, I checked with Grafana and sometimes got data, other times not. After waiting a couple of minutes and just re-running the Grafana query, all was fine.

Finally I checked the logs and found a lot of error messages like:

2023-04-22 14:09:48.658 [ERROR] [qexe_task_0] Failed to mmap file /ticktock/ticktock/data/2023/4/1681084800.1681171200/data.0, errno = 12
2023-04-22 14:09:48.659 [ERROR] [qexe_task_0] Caught exception while performing query.

Maybe these are the reasons for the [] answers.

I'm running TT (still version 0.11.4) on an ODROID HC1, which has a total of only 2 GB RAM. Is it a valid assumption that this low memory availability causes the misbehaviour?

The effect seems to self-heal after a while. If memory pressure is the reason, I would propose answering with HTTP 503 (Service Unavailable) instead of 200 and []. The 503 status code - in my opinion - would convey exactly the right information: "For now I cannot answer, please try later."

As a workaround, I will slow down my queries for the data export, or even move everything (temporarily) to a larger-scale machine.
But I assume the same effect will happen if I run a query over the full time range (start == beginning of March, end == now).

Thanks
Jens

how to add ticktock to systemd

I am trying to add ticktock to systemd so that it could auto-start on boot. The server seems to be running, but it does not respond to any connection. Here is the log from systemctl:

khadas@khadas-vim3:~$ sudo systemctl status ticktock
● ticktock.service - TickTock service
     Loaded: loaded (/etc/systemd/system/ticktock.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2023-01-22 20:41:58 CST; 3min 30s ago
   Main PID: 2174 (tt)
      Tasks: 26 (limit: 4203)
     Memory: 9.3M
        CPU: 2min 52.082s
     CGroup: /system.slice/ticktock.service
             └─2174 /home/khadas/ticktock/bin/tt -c /home/khadas/ticktock/conf/tt.conf

Jan 22 20:42:37 khadas-vim3 tt[2174]: /home/khadas/ticktock/bin/tt(+0x3ee08)[0xaaaaaf69ee08]
Jan 22 20:42:37 khadas-vim3 tt[2174]: /home/khadas/ticktock/bin/tt(+0x32238)[0xaaaaaf692238]
Jan 22 20:42:37 khadas-vim3 tt[2174]: /home/khadas/ticktock/bin/tt(+0x32608)[0xaaaaaf692608]
Jan 22 20:42:37 khadas-vim3 tt[2174]: /home/khadas/ticktock/bin/tt(+0x32edc)[0xaaaaaf692edc]
Jan 22 20:42:37 khadas-vim3 tt[2174]: /home/khadas/ticktock/bin/tt(+0x1d67c)[0xaaaaaf67d67c]
Jan 22 20:42:37 khadas-vim3 tt[2174]: /home/khadas/ticktock/bin/tt(+0x1de90)[0xaaaaaf67de90]
Jan 22 20:42:37 khadas-vim3 tt[2174]: /home/khadas/ticktock/bin/tt(+0x34130)[0xaaaaaf694130]
Jan 22 20:42:37 khadas-vim3 tt[2174]: /lib/aarch64-linux-gnu/libstdc++.so.6(+0xd31fc)[0xffffa56431fc]
Jan 22 20:42:37 khadas-vim3 tt[2174]: /lib/aarch64-linux-gnu/libc.so.6(+0x7d5c8)[0xffffa536d5c8]
Jan 22 20:42:37 khadas-vim3 tt[2174]: /lib/aarch64-linux-gnu/libc.so.6(+0xe5d1c)[0xffffa53d5d1c]

And this is the systemd config:

[Unit]
Description=TickTock service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=khadas
ExecStart=/home/khadas/ticktock/bin/tt -c /home/khadas/ticktock/conf/tt.conf

[Install]
WantedBy=multi-user.target

Any idea or example on this?

crash on Raspberry PI zero

After running for a few hours, ticktock crashes on a Raspberry PI Zero:

Nov 5 01:50:02 zero kernel: [663210.374157] Alignment trap: not handling instruction ed810b01 at [<00020034>]
Nov 5 01:50:02 zero kernel: [663210.374183] 8<--- cut here ---
Nov 5 01:50:02 zero kernel: [663210.375328] Unhandled fault: alignment exception (0x811) at 0x7512fcf2
Nov 5 01:50:02 zero kernel: [663210.376498] pgd = 4a7079d8
Nov 5 01:50:02 zero kernel: [663210.377753] [7512fcf2] *pgd=070ec831, *pte=0645834f, *ppte=0645883f

This looks like an unaligned memory access, which seems to be a problem on some ARM chips.

I will compile with DEBUG_FLAGS set and watch.

Installing the TickTockDB Demo in Docker fails on a RaspberryPi model 2 with the 32 bit version of Bookworm

I followed the directions in the README and got all the way to the end; however, ticktock:latest-grafana points to a 64-bit image:

pmw@dockerpi:~ $ sudo docker run -d --name ticktock -p 3000:3000 -p 6181-6182:6181-6182 -p 6181:6181/udp ytyou/ticktock:latest-grafana
Unable to find image 'ytyou/ticktock:latest-grafana' locally
latest-grafana: Pulling from ytyou/ticktock
76769433fd8a: Pull complete
b658d5695fae: Pull complete
c467919ea0e7: Pull complete
a75884a1ca9f: Pull complete
86006b9b337f: Pull complete
137b558371e5: Pull complete
09305ca2cd2b: Pull complete
69fc7cd00537: Pull complete
a44d8a58e254: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:0be5c3face65575aa4590665cc954292d36d1f049f753091cecd26ab65e014da
Status: Downloaded newer image for ytyou/ticktock:latest-grafana
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested
9f5f6f7b26bec270cff41113ae716d7e6adc15ab248c950af918bc39baea0390

Is there a way to get a 32-bit version?

Issues installing on a RPI 2 Model B running Buster

I'm trying to install TickTock (I don't use Docker), so I'm following the directions from "1.3 Install TickTock from a binary package". I've been able to successfully (after a couple of hiccups) download the ARMv6 32-bit binary and extract the files.

**Issue 1**: Step 3 says "The structure of the ticktock is as:"

The structure I have is:

pi@ryanpi:~/tmp/ticktock $ ls -l
total 76
drwxr-xr-x 2 pi pi  4096 Mar  2  2023 admin
drwxr-xr-x 4 pi pi  4096 Mar  2  2023 api-examples
drwxr-xr-x 2 pi pi  4096 Mar  2  2023 bin
drwxr-xr-x 2 pi pi  4096 Mar  2  2023 conf
drwxr-xr-x 2 pi pi  4096 Mar  2  2023 docker
drwxr-xr-x 4 pi pi  4096 Sep 23 18:36 docs
-rw-r--r-- 1 pi pi 35149 Mar  2  2023 LICENSE
drwxr-xr-x 7 pi pi  4096 Mar  2  2023 objs
-rw-r--r-- 1 pi pi  3972 Mar  2  2023 README.md
drwxr-xr-x 2 pi pi  4096 Mar  2  2023 scripts
drwxr-xr-x 2 pi pi  4096 Mar  2  2023 tools

The documentation shows directories **append**, **data** and **log**, while my listing shows **docker**, **docs**, **objs** and **README**, which are not in the documentation.

**Issue 2**: Run ticktock.
When I run the command I get an error:

pi@ryanpi:~/tmp/ticktock $ bin/tt -c conf/tt.conf
bin/tt: /lib/arm-linux-gnueabihf/libm.so.6: version `GLIBC_2.29' not found (required by bin/tt)

The Runtime Requirements say:

To run TickTock, you will need,

A Linux system (e.g. Ubuntu, CentOS, etc.). We tested both Ubuntu and CentOS. Other Linux systems are not tested yet.
glibc 2.17 or up

But it looks like I have v2.28

pi@ryanpi:~/tmp/ticktock $ ldd --version
ldd (Debian GLIBC 2.28-10+rpi1) 2.28
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.

Any thoughts?

Trying to understand the format of PUTs and GETs

So I have TickTock running: the ping/pong works, and the example curl statements to put and get readings work fine, but I can't seem to get others to work.

Let's say I want to collect temperature, humidity and pressure readings from more than one location (say 'location1' and 'location2').

1) How would I code a put (or multiple puts) for the three values and the two locations?
2) How would I code a GET to fetch the data from one of the locations?

Once I can do that, I should be able to work more things out myself.
P.S. I have been looking at the documentation at opentsdb.net/docs but so far it hasn't helped.

Thanks!
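One common layout (a sketch following OpenTSDB conventions, which TickTockDB's plain put format mirrors; the helper names are illustrative) is one metric per quantity, with the location carried as a tag:

```python
import time
from urllib.parse import quote

# Build one plain 'put' line per reading, tagging each with its location.
def put_lines(location, readings, ts=None):
    ts = int(ts if ts is not None else time.time())
    return [f"put {metric} {ts} {value} location={location}"
            for metric, value in readings.items()]

# Build a GET /api/query URL filtered to one location; the tag-filter
# braces must be percent-encoded outside of a shell.
def query_url(metric, location, start, host="localhost", port=6182):
    m = quote(f"avg:{metric}{{location={location}}}", safe=":=")
    return f"http://{host}:{port}/api/query?start={start}&m={m}"
```

Each generated line would be POSTed to /api/put (one line per reading, or several lines in one body), and the GET selects a single location via the tag filter, e.g. m=avg:temperature{location=location1}.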

Connecting TicktockDB to Grafana not working

I have ticktock running on a Raspberry Pi 2 Model B v1.1 and it has been running great for several weeks. NOTE: this is not a docker image.
I'm using Node-RED to collect data from some sensors (bme280's) in three different locations. In NR I can create graphs of the data with no problem.

Now I want to use Grafana to connect to the DB.
  1. I installed Grafana and logged in.
  2. I selected the Connections option from the hamburger menu and chose OpenTSDB as the data source.
  3. I opened the OpenTSDB data source and set the HTTP URL to http://192.168.1.192:6182 (192.168.1.192 is the IP of the RPI).
  4. I clicked 'Save and Test' and it showed 'Data source is working'.
  5. I clicked the 'Explore view' option (at the bottom of the screen).
When I click on the 'Metric name' option it shows 'No options found'

Any suggestions?

Implementing variables?

I'm trying to add a variable in Grafana from a TickTockDB data source, and it doesn't produce any results. I suspect the API for this isn't implemented in TickTockDB. Do you know more about this? I don't know enough about what's missing, or about the design, to implement it myself.

I use collectd to feed the data, but I put telegraf in the middle so that it can add the necessary tags (collectd's opentsdb output doesn't have tags). Mainly I want a variable for the "host" tag; that would be adequate. Then I could select the host for a chart from a drop-down menu.

String value

I read in the docs that only int and float value types are allowed. I also have some strings. Coming from Influx, this was never a problem. Any ideas how I could inject strings into metrics?

change healthcheck.sh to return a message

Currently healthcheck.sh exits with a 0 or 1. Might I suggest adding an 'echo' so the result is visible? For example:

#!/bin/bash
#
# ping ticktock server

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
. $DIR/common.sh

RESPONSE=`$CURL -XPOST "http://$HOST:$PORT/api/admin?cmd=ping"`

# exit 0 means healthy; exit 1 means unhealthy;
if [[ $RESPONSE == "pong" ]]; then
    echo "ticktock running"
    exit 0
else
    echo "ticktock down"
    exit 1
fi

very weird value reading

I have spent the last two hours trying to figure out why the data inside ticktock does not match the values inside Home Assistant.
Two differences:

  1. Somehow some readings are automatically rounded to integers instead of floats, even though the payload is something like put cpu_temperature 1674275610.127971 120.9. At the beginning I thought it was a bug in Home Assistant, but after checking the log I am sure the payload contains a float.
  2. Some float metrics seem to stay constant for a very long time, even though I was using a random generator to produce a different value in each payload. I also verified in the log that the random values really are "random", not constant.

RFE: allow 'ping' and/or 'healthcheck' to run with curl

Ping and healthcheck are great for checking whether ticktock is running, but you can only run them on the device that ticktock is running on.

While you can use curl from a different device to retrieve data, that device will only know that ticktock is down by getting an error from a curl command.

My use case is Node-RED accessing ticktock. If I have Node-RED running on the device hosting ticktock (device A), everything is fine because NR has an 'exec' node, so it can run 'ping.sh' on that machine and get the status.

However, if I run the Node-RED flow on another device (device B), the ping will fail because the ping.sh file and the database are not on device B.

If ping and/or healthcheck could be run via curl it would be a useful addition.

HTTP pipelining does not work

At some point it fails to insert a data point and returns an error.

I'm using the following snippet to write 1000 POSTs to /api/put:

import asyncio
import atexit
import sys
from dugong import HTTPConnection, AioFuture
...  # OPENTSDB_HOST, PORT, loop, get_requests(), TARGET_WRITES_PER_MIN defined here

with HTTPConnection(OPENTSDB_HOST, PORT) as conn:
    # This generator function returns a coroutine that sends
    # all the requests.
    def send_requests():
        for metric in get_requests():
            yield from conn.co_send_request(method='POST', path="/api/put",
                                            body=bytes(metric, "utf-8"))

    # This generator function returns a coroutine that reads
    # all the responses.
    def read_responses():
        bodies = []
        for i in range(TARGET_WRITES_PER_MIN):
            resp = yield from conn.co_read_response()
            sys.stdout.write("status={}\n".format(resp.status))
            assert resp.status == 200 or resp.status == 204
            buf = yield from conn.co_readall()
            bodies.append(buf)
            print("recv so far {} responses".format(len(bodies)))
        return bodies

    # Create the coroutines
    send_crt = send_requests()
    recv_crt = read_responses()

    # Register the coroutines with the event loop
    send_future = AioFuture(send_crt, loop=loop)
    recv_future = AioFuture(recv_crt, loop=loop)
    print("running loop")
    # Run the event loop until the receive coroutine is done (which
    # implies that all the requests must have been sent as well):
    loop.run_until_complete(recv_future)

    # Get the result returned by the coroutine
    bodies = recv_future.result()

tsdb.timestamp.resolution

Hi!

Great Job here! I like it.
One question:

  • I am running latest code
  • I put data in with timestamp in milliseconds
  • metrics have been created - but empty
  • I set tsdb.timestamp.resolution to millisecond (in the conf file)
  • I deleted all data and restarted with new config.
  • When I put data in - and provide timestamp in milliseconds - this works now!
  • But - they are still saved in ticktock as seconds.
  • When I query them - I get

[{"metric":"boiler_data_ww.wwcurtemp","tags":{"host":"BUDERUS"},"aggregateTags":[],"dps":{"1666436125":52.7999999999999972,"1666436135":52.7000000000000028,"1666436145":52.7000000000000028,"1666436155":52.7999999999999972,"1666436165":52.7999999999999972,"1666436175":52.7999999999999972,"1666436185":52.7999999999999972,"1666436195":52.7999999999999972,"1666436205":52.7999999999999972,"1666436215":52.7999999999999972,"1666436225":52.7999999999999972}}]

What am I doing wrong?
My expectation was that ticktock would also save them in millis, not just accept them and round.
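The behaviour described (millisecond inputs accepted, but stored and returned as seconds) amounts to a truncation like the following sketch (the heuristic and the function name are mine, not TickTockDB code):

```python
# Sketch of the observed behaviour: with second resolution, an input
# timestamp that looks like milliseconds (13 digits) is reduced to
# seconds before storage, so query output shows 10-digit keys.
def normalize_timestamp(ts, resolution="second"):
    ts = int(ts)
    if resolution == "second" and ts >= 10**12:  # heuristic: 13-digit values are ms
        ts //= 1000
    return ts
```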

Docs for Docker environment variables/cmnd lines options for volume storage etc ?

Hey Guys,

I'm not seeing anything in the docs about which docker environment variables you support, or about options for redirecting data storage (volumes) outside of the container.

Can you point me to the docs on this, please?

I have played with the demo container, but it appears to store all data within the container, and the data is not persistent.

Craig

TT crashes with SIGSEGV when executing a query

I have some curious crashes with SIGSEGV (11) when TT executes a query issued via Grafana.

After several hours of struggling, I drilled it down to the following simple procedure (reproducible with just telnet and curl):

  1. start a new clean TT instance
  2. put a measurement using the OpenTSDB line protocol: put sensor.status 1681147860 0 type=s10e device=s10e
  3. execute a query to verify: curl 'http://localhost:6182/api/query?start=0&m=first:sensor.status' -> you should see one metric
  4. put another measurement using the OpenTSDB line protocol - note, this has one tag more: put sensor.status 1681147870 0 type=s10e device=s10e _field=val
  5. re-execute the query -> you should see two metrics now - they differ in timestamp and the additional tag
  6. put a third measurement using the Influx line protocol: sensor.status,device=s10e,type=s10e val=0 1681147680
  7. re-execute the query -> crash

You can execute steps 2, 4 and 6 in any order. You can even leave step 4 out.
If - on the other hand - you run only 4 and 6, all works well -> you will see two metrics that are completely identical except for the timestamps.

One might think - what a crazy idea to mix the OpenTSDB and Influx line protocols. But I was unsure which one was the better choice.
Using OpenTSDB would give consistent APIs for read and write. But when a measurement has a series of fields, the Influx line protocol seems much more compact.

Note: I ran this with TT version 0.11.4 on ARMv7. I'll try the behaviour with 0.11.6 within the next couple of days.

older data not accessible, and data corruption after a few days

Hi there,
I have used version 0.10.2 since 21 Jan, writing ongoing minimal data for 2 electricity meters. I also use Grafana to display the data. This worked fine for 6 days. Today TT had trouble returning data, randomly returning data for only some of my 4 charts. I decided to restart TT (in a clean way), but it hung on shutdown for a few minutes. The first start afterwards did not work, so I shut it down again, and the next start succeeded. The issue is that all the old data is missing now.
Another thing I tried yesterday was to access older data from the first day; those requests ran forever, with no data returned at all - I mean the requests never got a response. Maybe those attempts also caused the issue I saw today.
I'm a bit down about having lost my data again (the last time was with the incompatible version upgrade).
Anyway, I attached the data folder (maybe you could extract it and send it to me?), the log file and a screenshot of how it looked yesterday.
And I must say that I appreciate all your work; I'm aware that it is still very much beta.
Have a lovely day
Dashboard-showing-good-data
data-backup_28-Jan.zip
ticktock.log.zip

TT produces huge `index` files

Hi,

I'm just back to installing TickTockDB.
After months of successful operation, I'm setting up another system - using the latest version 0.12.1-beta, built from the main branch just a few days ago. The TT instance was set up on the afternoon of Dec 20.

This worked well until yesterday, but today the data from yesterday can no longer be queried.
Within the logs I find error messages in series:

2023-12-27 08:56:11.227 [ERROR] [http_20_task_0] Failed to mmap file /var/lib/ticktock/data/2023/12/1703548800.1703635200/index, errno = 12

This reminds me of #48, which in the end was a problem on 32-bit machines only.
Thus I ran the same steps to drill down:

  1. top -> gives me a virtual size of 2121952 and a resident size of 3792
  2. ps -eLf -> gives me 62 threads
  3. pmap -X -> gives me a size (sum) of 2121956 and resident of 5536, with one file (index) having a size of 1593392

To cross-check, I stopped TT, created a backup (tar cvfz data.tgz data/) and switched back to TT version 0.11.8-beta (as used in my working first setup). But the behaviour didn't change. Thus I inspected the file system and found the following:

ticktock/data$ ls -lR | grep index
-rw-r--r-- 1 jba jba  24576 Dec 22 02:00 index
-rw-r--r-- 1 jba jba  24576 Dec 21 19:31 index
-rw-r--r-- 1 jba jba  24576 Dec 23 02:00 index
-rw-r--r-- 1 jba jba  24576 Dec 21 01:51 index
-rw-r--r-- 1 jba jba  24576 Dec 24 02:00 index
-rw-r--r-- 1 jba jba  24576 Dec 22 01:51 index
-rw-r--r-- 1 jba jba  24576 Dec 25 02:00 index
-rw-r--r-- 1 jba jba  24576 Dec 23 01:51 index
-rw-r--r-- 1 jba jba 1631632048 Dec 26 15:15 index
-rw-r--r-- 1 jba jba 1631632078 Dec 27 01:51 index

I verified this with the backup tar:

ticktock$ tar tvfz data.tgz | grep index
-rw-r--r-- jba/jba       24576 2023-12-21 19:31 data/2023/12/1703030400.1703116800.back/index
-rw-r--r-- jba/jba       24576 2023-12-21 01:51 data/2023/12/1703116800.1703203200.back/index
-rw-r--r-- jba/jba       24576 2023-12-24 02:00 data/2023/12/1703203200.1703289600/index
-rw-r--r-- jba/jba       24576 2023-12-22 02:00 data/2023/12/1703030400.1703116800/index
-rw-r--r-- jba/jba       24576 2023-12-25 02:00 data/2023/12/1703289600.1703376000/index
-rw-r--r-- jba/jba       24576 2023-12-22 01:51 data/2023/12/1703203200.1703289600.back/index
-rw-r--r-- jba/jba  1631632078 2023-12-27 01:51 data/2023/12/1703635200.1703721600/index
-rw-r--r-- jba/jba       24576 2023-12-23 01:51 data/2023/12/1703289600.1703376000.back/index
-rw-r--r-- jba/jba       24576 2023-12-23 02:00 data/2023/12/1703116800.1703203200/index
-rw-r--r-- jba/jba  1631632048 2023-12-26 15:15 data/2023/12/1703548800.1703635200/index

The data.tgz file itself has a size of 5MB only:

ticktock$ ls -lh data.tgz 
-rw-rw-r-- 1 jba jba 5.0M Dec 27 10:02 data.tgz

This means both index files have a serious amount of compressible content (maybe holes or zeros).
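A quick way to confirm that such an index file is mostly holes is to compare its apparent size (st_size) with its allocated blocks (st_blocks, in 512-byte units) from stat(2); a sketch, with an arbitrary 10x threshold of my own choosing:

```python
# Hypothetical check for the situation above: an 'index' file whose
# apparent size vastly exceeds its allocated blocks is mostly holes,
# which would explain why 1.6 GB files compress into a 5 MB tarball.
def is_probably_sparse(st_size, st_blocks):
    allocated = st_blocks * 512  # POSIX st_blocks unit is 512 bytes
    return st_size > 10 * max(allocated, 1)
```

The two numbers come straight from os.stat(path): os.stat(path).st_size and os.stat(path).st_blocks.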

I double-checked the index files against my other installation (running since April, with a total data size of about 350 MB) - all index files there have exactly the same size of 24576 bytes.

The failure happened this night, seven days after the DB was set up. Do we have an operation which runs with a delay of one week?
I'll remove the database and re-run for the next (seven) days with 0.12.1-beta - we'll see if it happens again.

unrecognized option '--http.request.format'

Hi there

Running the following command from the docs:

docker run -d --name ticktock -h ticktock -p 6182:6182 -p 6181:6181 --cpus 1 -m 8GB --memory-reservation 6GB ytyou/ticktock:latest --tsdb.timestamp.resolution millisecond --http.request.format json

I get the following error

/opt/ticktock/bin/ticktock: unrecognized option '--http.request.format'

Any idea?

Thanks

crash on low memory

I'm running ticktock on a Raspberry PI Zero, where memory is constrained. After some days of collecting data, ticktock crashes. But it's not just a simple crash or core dump: it loops on "[tcp_1_task_0] Interrupted, shutting down...", so the process never ends and fills the log file quite fast.

So, could you please fix the broken crash handling? And memory handling has room for improvement, too.

Other than that: thank you for ticktock!

basic query test cases - single data point in a time series

This is a list of UTs to cover very basic query scenarios. It is not complete; more to be added. Two issues are reported at the end of the ticket.

TT-dev: 512624f (future release: 0.11.6)

ticktock@546c164b2d87:~/ticktock$ ./bin/tt -c conf/tt.conf --http.server.port 6182,6183 --http.listener.count 2,2 &

Scenarios:
Just add a data point with 2 tags. Then issue queries with a combination of 3 parameters, e.g., m=avg:1m-avg:test.cpu.idle

  1. Aggregator (avg, sum, count)
  2. Downsample (1m-avg, 1m-sum, 1m-count)
  3. No Downsample

Steps:

  1. Add a data point with 2 tags.
    [Yi-MBP /]$ curl -XPOST 'http://192.168.1.41:6182/api/put' -d 'put test.cpu.idle 1679757000 100 host=aaa cpu=0'

  2. m=avg:1m-avg:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=avg:1m-avg:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":100.0}}]

  3. m=avg:1m-sum:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=avg:1m-sum:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":100.0}}]

  4. m=avg:1m-count:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=avg:1m-count:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":1.0}}]

  5. m=sum:1m-avg:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=sum:1m-avg:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":100.0}}]

  6. m=sum:1m-sum:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=sum:1m-sum:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":100.0}}]

  7. m=sum:1m-count:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=sum:1m-count:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":1.0}}]

  8. m=count:1m-avg:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=count:1m-avg:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":1.0}}]

  9. m=count:1m-sum:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=count:1m-sum:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":1.0}}]

  10. m=count:1m-count:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=count:1m-count:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":1.0}}]

  11. m=avg:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=avg:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":100.0}}]

  12. m=sum:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=sum:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":100.0}}]

  13. m=count:test.cpu.idle{host=aaa,cpu=0}
    [Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=count:test.cpu.idle\{host=aaa,cpu=0\}' [{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":1.0}}]

Issue 1: Interestingly, if using an invalid aggregator such as coun or count2, the query still works, i.e., returns 1.0.

[Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=coun:test.cpu.idle\{host=aaa,cpu=0\}'
[{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":1.0}}]
[Yi-MBP ~]$ curl 'http://192.168.1.41:6182/api/query?start=1679757000&end=1679757700&m=count2:test.cpu.idle\{host=aaa,cpu=0\}'
[{"metric":"test.cpu.idle","tags":{"cpu":"0","host":"aaa"},"aggregateTags":[],"dps":{"1679757000":1.0}}]
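Until the server rejects unknown aggregators, one can guard against typos like coun on the client side. A minimal sketch — the aggregator list below is an assumption based on common OpenTSDB aggregators, not TickTock's actual supported set:

```python
# Validate the aggregator prefix of an OpenTSDB-style m= expression before
# sending it, since TickTock silently accepts unknown names like "coun".
# VALID_AGGREGATORS is an assumed subset; adjust to your server's set.
VALID_AGGREGATORS = {"avg", "sum", "count", "min", "max"}

def check_metric_expr(m: str) -> bool:
    """Return True if the aggregator prefix of an m= expression is known."""
    agg = m.split(":", 1)[0]
    return agg in VALID_AGGREGATORS

print(check_metric_expr("count:test.cpu.idle{host=aaa,cpu=0}"))  # valid
print(check_metric_expr("coun:test.cpu.idle{host=aaa,cpu=0}"))   # typo, rejected
```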

TickTock just shutting down

Hi there,
I'm in the middle of developing a little electricity dashboard, and while just manually running a few requests from Python to TickTock, the DB keeps shutting down. There is no load on the system, and there is enough CPU, RAM, disk, etc. The stored data is minimal (just over 1000 entries).
The log only shows entries like this:
2023-01-20 23:50:46.964 [INFO] [tcp_2_task_2] Interrupted (11), shutting down...
2023-01-20 23:50:46.965 [INFO] [tcp_listener_0] listener 0 stopped.
2023-01-20 23:50:46.965 [INFO] [tcp_listener_2] TCP listener 2 stopped.
2023-01-20 23:50:46.965 [INFO] [tcp_listener_1] TCP listener 1 stopped.
2023-01-20 23:50:47.242 [INFO] [tcp_listener_0] listener 0 stopped.
2023-01-20 23:50:47.242 [INFO] [tcp_listener_1] TCP listener 1 stopped.
2023-01-20 23:50:47.243 [INFO] [tcp_listener_2] TCP listener 2 stopped.
2023-01-20 23:50:47.979 [INFO] [main] Start shutdown process...
2023-01-20 23:50:48.643 [INFO] [timer] Timer stopped
2023-01-20 23:50:49.310 [INFO] [main] QueryExecutor::shutdown complete
2023-01-20 23:50:49.314 [INFO] [main] Tsdb::shutdown complete
2023-01-20 23:50:49.314 [INFO] [main] Shutdown process complete

I'm running it on 32-bit Debian on an Orange Pi PC.
Can't the TickTock server restart itself, or at least the affected component, when an issue happens, instead of just going down? Is this kind of instability known? What would be a good way to increase availability?

Thanks in advance
Soren

ticktock-log-extract.log

reading values via http only works in curl and Grafana, but not in other clients like browsers

version: 0.10.2 on Debian
Hi there,
I'm trying to do a simple query from Python to get some data:
http://localhost:6182/api/query?start=1674172800&m=avg:1m-avg:energy.kitchen\{direction=consumed,type=kwh-last-10-min\}
It works in curl and Grafana, but not in any browser, Postman, or any Python client I tried. I analyzed the request in Wireshark to find any header parameters etc. that might differ, but I could not find anything to explain the behavior. Please see the screenshots for the difference: the requests are the same, but in curl the response has values, while in Postman (and every other client I tried) the response is simply empty: [ ]

request-Postman
response-Postman
response-curl
request-curl

I tried adding all kinds of headers like 'accept' and 'host' and different 'User-Agent' values, with no difference. The only differences are in the TCP handshake that establishes the HTTP connection. But the DB should either behave the same or throw an error...
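One difference worth checking is not in the headers but in the URL itself. This sketch assumes the discrepancy comes from how clients encode the {} braces: urlencode() turns them into %7B/%7D, while curl (with shell-escaped \{ \}) sends them literally; whether TickTock 0.10.2 decodes %7B/%7D is an open question, so comparing the two forms against your server is a quick test:

```python
from urllib.parse import urlencode

# Host and metric taken from the report above; adjust to your setup.
base = "http://localhost:6182/api/query"
params = {
    "start": "1674172800",
    "m": "avg:1m-avg:energy.kitchen{direction=consumed,type=kwh-last-10-min}",
}

# urlencode() percent-encodes the braces (and colons/commas) ...
encoded = f"{base}?{urlencode(params)}"
# ... while joining by hand keeps them literal, matching what curl sends.
literal = base + "?" + "&".join(f"{k}={v}" for k, v in params.items())

print(encoded)
print(literal)
```

Fetching both URLs (e.g. with urllib.request) and comparing the responses would show whether the empty [ ] result correlates with percent-encoded braces.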

disk space consumption of data folder is pretty high

Hi there,
I'm using TickTockDB version 0.12.0 for collecting IoT data for the whole year now. I re-imported old data into the new version of TT to have a fresh start. The issue I'm facing is that my actual data is very small, while the header files in each day's data folder consume a lot more space.
For this year it accumulates to more than 500 MB of disk space, which compresses down to less than 2 MB in the zip files, indicating lots of unused allocated space in the header files.
The header.0 files are consistently 852 kB, while the actual data is only a few kB.
Also, there are always two folders for each day; one has the ending .back. Both contain the same header file.
I attached all the data for this year, and the latest log file for the last two months. It would be great if you could optimize this, since space on little 32-bit computers is limited.
Otherwise I'm very happy with the stability and performance of your DB, it's really great. Thank you so much for all your amazing work.
Kind regards
Soren

ticktock.log.tar.gz
data.tar.gz

[ERROR] [main] Failed to bind to any network interfaces, errno=98

When stopping a ticktock instance and (re-)starting it a short time later - e.g. stop, run backup, re-start - the instance cannot bind to the socket ports. Within the logs one can see e.g.

[INFO] [main] Starting TCP Server on ports 5000... ... [ERROR] [main] Failed to bind to any network interfaces, errno=98

Errno 98 means address in use.

This happens in ./core/tcp.cpp when calling bind(). I assume it's because the socket ports are in TIME_WAIT state and SO_REUSEADDR isn't set.

Current workaround is to wait until the TIME_WAIT state is gone (~60s on Linux) before re-starting the instance.
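The fix the report suggests is a setsockopt() call before bind(); the actual change would go in core/tcp.cpp, but the idea can be sketched in Python (binding to port 0 here, i.e. any free port, purely for illustration):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# SO_REUSEADDR allows re-binding to an address whose previous socket is
# still in TIME_WAIT -- the flag the report says is missing before bind().
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 0))  # port 0 = any free port; a real server uses its configured port
port = s.getsockname()[1]
print("bound to port", port)
s.close()
```

With the flag set, an immediate restart no longer fails with EADDRINUSE (errno 98) just because the old sockets are lingering in TIME_WAIT.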

Unable to launch

The build process seems to be fine.

 Yongtao You ([email protected]) and Yi Lin ([email protected]).
 This program comes with ABSOLUTELY NO WARRANTY. It is free software,
 and you are welcome to redistribute it under certain conditions.
 For details, see <https://www.gnu.org/licenses/>.
Failed to open file /var/log/ticktock.log for writing: 13
2023-01-18 07:26:08.650 [INFO] [main] TickTock version: 0.10.1, on khadas-vim3, pid: 30690
2023-01-18 07:26:08.650 [INFO] [main] mm::page-size = 4096
2023-01-18 07:26:08.650 [INFO] [main] mm::m_network_buffer_len = 524288
2023-01-18 07:26:08.650 [INFO] [main] mm::m_network_buffer_small_len = 326
2023-01-18 07:26:08.650 [INFO] [main] GC Freq: 300 secs
2023-01-18 07:26:08.650 [INFO] [main] Loading data from /ticktock/data
2023-01-18 07:26:08.650 [INFO] [main] number of ts locks: 1800
2023-01-18 07:26:08.650 [ERROR] [main] Not enough disk space at /ticktock/data (0 <= 32768)
2023-01-18 07:26:08.650 [ERROR] [main] Failed to open file /ticktock/data/ticktock.meta for append: 2
2023-01-18 07:26:08.650 [FATAL] [main] Failed to open meta file /ticktock/data/ticktock.meta for writing
Initialization failed. Abort!
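The log above shows two separate failures: the data directory reports zero free space (the check "0 <= 32768"), and opening /var/log/ticktock.log fails with errno 13 (permission denied). A small pre-flight sketch for both checks — the directories below use the current directory and /var/log for illustration; substitute your actual data and log paths:

```python
import os
import shutil

# Mirror TickTock's startup check from the log: it aborts when free space
# at the data directory is <= 32768 bytes. Point data_dir at your real
# data directory (e.g. /ticktock/data); "." is used here for illustration.
data_dir = "."
free_bytes = shutil.disk_usage(data_dir).free
print("enough disk:", free_bytes > 32768)

# errno 13 on /var/log/ticktock.log is a permission problem; check whether
# the log directory is writable by the user that runs TickTock.
log_dir = "/var/log"
print("log dir writable:", os.access(log_dir, os.W_OK))
```

In this report, the "0" free bytes suggests /ticktock/data was not actually mounted (or not writable) inside the container, which would explain both the disk-space error and the meta-file failure.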

Is it possible to delete data points?

I don't seem to be able to delete data points. The OpenTSDB docs say tsd.http.query.allow_delete needs to be set to true, but I don't see such an option in the TickTock config. Any suggestion?
