
py-questdb-client's Introduction

QuestDB Client Library for Python

This is the official Python client library for QuestDB.

This client library implements QuestDB's variant of the InfluxDB Line Protocol (ILP) over HTTP and TCP.

ILP provides the fastest way to insert data into QuestDB.

This implementation supports authentication and full-connection encryption with TLS.

Quickstart

The latest version of the library is 2.0.2 (changelog).

python3 -m pip install -U questdb[dataframe]

Please start by setting up QuestDB. Once set up, you can use this library to insert data.

The most common way to insert data is from a Pandas dataframe.

import pandas as pd
from questdb.ingress import Sender

df = pd.DataFrame({
    'id': pd.Categorical(['toronto1', 'paris3']),
    'temperature': [20.0, 21.0],
    'humidity': [0.5, 0.6],
    'timestamp': pd.to_datetime(['2021-01-01', '2021-01-02'])})

conf = f'http::addr=localhost:9000;'
with Sender.from_conf(conf) as sender:
    sender.dataframe(df, table_name='sensors', at='timestamp')

You can also send individual rows. This requires only a minimal installation:

python3 -m pip install -U questdb

from questdb.ingress import Sender, TimestampNanos

conf = f'http::addr=localhost:9000;'
with Sender.from_conf(conf) as sender:
    sender.row(
        'sensors',
        symbols={'id': 'toronto1'},
        columns={'temperature': 20.0, 'humidity': 0.5},
        at=TimestampNanos.now())
    sender.flush()

To connect via the older TCP protocol, set the configuration string to:

conf = f'tcp::addr=localhost:9009;'
with Sender.from_conf(conf) as sender:
    ...
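
Authentication parameters can also be embedded in the configuration string. A hedged sketch, assuming the key names documented for the client's TCP transport (the values below are placeholders, not real keys):

import pandas as pd
from questdb.ingress import Sender

conf = (
    'tcp::addr=localhost:9009;'
    'username=admin;'           # key id ('kid')
    'token=<private_key_d>;'    # placeholder values
    'token_x=<public_key_x>;'
    'token_y=<public_key_y>;')
with Sender.from_conf(conf) as sender:
    ...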

You can continue by reading the Sending Data Over ILP guide.

Community

If you need help, you can ask on Stack Overflow: We monitor the #questdb and #py-questdb-client tags.

Alternatively, you may find us on Slack.

You can also sign up to our mailing list to get notified of new releases.

License

The code is released under the Apache License 2.0.

py-questdb-client's People

Contributors

amunra, amyshwang, goodroot, jerrinot, marregui


py-questdb-client's Issues

Feature request: Annotation to auto generate the ILP types from a Python/Java class

ILP types are easy to get wrong, and with Python not always being explicit about types, this can happen very easily, with serious consequences for accuracy (doubles into floats, etc.).

Java is better about types, but it's still easy to make mistakes.

My request is to add an annotation that can read a class's members and generate a method that creates the ILP representation of the object while respecting the types.
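
For illustration only, a minimal sketch of what such an annotation could look like in Python, assuming a dataclass-based design; the ilp_row helper and the "str fields become symbols" convention are hypothetical, not part of the client:

from dataclasses import dataclass, fields
from questdb.ingress import Sender, TimestampNanos

@dataclass
class Reading:
    id: str             # str fields become ILP symbols (hypothetical convention)
    temperature: float  # float fields become DOUBLE columns
    humidity: float

def ilp_row(sender: Sender, table: str, obj) -> None:
    # Hypothetical helper: derive symbols/columns from the declared field types,
    # so a float can never be silently sent as an integer, etc.
    # (Assumes plain, non-stringified annotations.)
    symbols = {f.name: getattr(obj, f.name) for f in fields(obj) if f.type is str}
    columns = {f.name: getattr(obj, f.name) for f in fields(obj) if f.type is not str}
    sender.row(table, symbols=symbols, columns=columns, at=TimestampNanos.now())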

DNS resolution failure not auto resolving

With the 2.0.2 client, when HTTP is used, it seems that every flush instantiates a new connection. This makes sense and therefore does not require any "reconnection" methodology. However, if a DNS resolution failure occurs once, then for the life of the Sender it will no longer attempt to resolve.

Specifically, there is a scenario where, when a PC boots, a networking service we use may not be connected yet. In this case, DNS resolution may fail for 30 seconds or so. However, if the Sender has been created and a flush has already been called, the DNS failure seems to be cached in some way, and the client never attempts to resolve the name again. This is the error I received:

questdb.ingress.IngressError: Could not flush buffer: http://kronus-nexus:4900/write?precision=n: Dns Failed: resolve dns name 'kronus-nexus:4900': failed to lookup address information: Temporary failure in name resolution

Then, for the life of the process, it keeps hitting the same error and is never able to reconnect. If I change nothing but restart the code, DNS resolves just fine and everything works as intended. Also, if I change the address to a hard-coded IP, it auto-connects after the network service is up and runs just fine. That is a working solution for me for now, but it does not seem intended that a single DNS failure leaves a Sender in a failure state forever.
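
Until the root cause is confirmed, one possible client-side workaround (a sketch, not a fix) is to create a fresh Sender per attempt, so that each retry opens a new connection and performs a new DNS lookup:

import time
from questdb.ingress import Sender, IngressError

conf = 'http::addr=kronus-nexus:4900;'  # hostname from the report above

def flush_with_fresh_sender(rows, retries=5):
    # A new Sender per attempt forces the name to be re-resolved.
    for attempt in range(retries):
        try:
            with Sender.from_conf(conf) as sender:
                for table, symbols, columns, at in rows:
                    sender.row(table, symbols=symbols, columns=columns, at=at)
            return
        except IngressError:
            time.sleep(2 ** attempt)  # back off while the network comes up
    raise RuntimeError('could not flush after retries')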

Client does not disconnect on failed authentication

When I create a cloud QuestDB instance and use the code example to send data (slightly modified):

from questdb.ingress import Sender, IngressError, TimestampNanos
import datetime


def example(host: str = 'blabla.questdb.net', port: int = 30906):
    # See: https://questdb.io/docs/reference/api/ilp/authenticate
    auth = (
        "ahmin",  # kid
        "real-key",  # d
        "real-key",  # x
        "real-key")  # y
    with Sender(host, port, auth=auth, tls=True) as sender:
        # Record with provided designated timestamp (using the 'at' param)
        # Notice the designated timestamp is expected in Nanoseconds,
        # but timestamps in other columns are expected in Microseconds.
        # The API provides convenient functions
        sender.row(
            'trades',
            symbols={
                'pair': 'USDGBP',
                'type': 'buy'},
            columns={
                'traded_price': 0.83,
                'limit_price': 0.84,
                'qty': 100,
                'traded_ts': datetime.datetime(
                    2022, 8, 6, 7, 35, 23, 189062,
                    tzinfo=datetime.timezone.utc)},
            at=TimestampNanos.now())

        sender.flush()

        for i in range(0, 1000):
            # If no 'at' param is passed, the server will use its own timestamp.
            sender.row(
                'trades',
                symbols={'pair': 'EURJPY'},
                columns={
                    'traded_price': 135.97,
                    'qty': 400,
                    'limit_price': None})  # NULL columns can be passed as None,
            # or simply be left out.

            sender.flush()
        print("sent")


if __name__ == '__main__':
    while (True):
        example()

Even though I deliberately changed the kid from admin to ahmin, the output prints sent, sent, ... with no exception.

I see in the server logs

2023-10-05T10:37:05.850164Z I i.q.c.l.t.a.EllipticCurveAuthenticator [101] authentication failed, signature was not verified |  
2023-10-05T10:37:05.822096Z I i.q.c.l.t.a.EllipticCurveAuthenticator [101] authentication read key id [keyId=ahmin]

When I run the Java code example with a malformed key, a Java exception is thrown:

	at io.questdb/io.questdb.cutlass.line.tcp.PlainTcpLineChannel.send(PlainTcpLineChannel.java:109)
	at io.questdb/io.questdb.cutlass.line.tcp.DelegatingTlsChannel.writeToUpstreamAndClear(DelegatingTlsChannel.java:409)
	at io.questdb/io.questdb.cutlass.line.tcp.DelegatingTlsChannel.wrapLoop(DelegatingTlsChannel.java:390)
	at io.questdb/io.questdb.cutlass.line.tcp.DelegatingTlsChannel.send(DelegatingTlsChannel.java:183)
	at io.questdb/io.questdb.cutlass.line.AbstractLineSender.sendAll(AbstractLineSender.java:409)
	at io.questdb/io.questdb.cutlass.line.LineTcpSender.send00(LineTcpSender.java:85)
	at io.questdb/io.questdb.cutlass.line.AbstractLineSender.put(AbstractLineSender.java:215)
	at io.questdb/io.questdb.cutlass.line.AbstractLineSender.putUtf8Special(AbstractLineSender.java:255)
	at io.questdb/io.questdb.std.str.CharSink.encodeUtf8(CharSink.java:43)
	at io.questdb/io.questdb.std.str.CharSink.encodeUtf8(CharSink.java:35)
	at io.questdb/io.questdb.cutlass.line.AbstractLineSender.tag(AbstractLineSender.java:296)
	at io.questdb/io.questdb.cutlass.line.AbstractLineSender.symbol(AbstractLineSender.java:280)
	at io.questdb/io.questdb.cutlass.line.AbstractLineSender.symbol(AbstractLineSender.java:38)
	at io.questdb.benchmarks/org.questdb.LineTCPSender03MultiTableMain.doSend(LineTCPSender03MultiTableMain.java:66)
	at io.questdb.benchmarks/org.questdb.LineTCPSender03MultiTableMain.lambda$main$0(LineTCPSender03MultiTableMain.java:44)
	at java.base/java.lang.Thread.run(Thread.java:829)
	Suppressed: io.questdb.cutlass.line.LineSenderException: [32] send error 
		at io.questdb/io.questdb.cutlass.line.tcp.PlainTcpLineChannel.send(PlainTcpLineChannel.java:109)
		at io.questdb/io.questdb.cutlass.line.tcp.DelegatingTlsChannel.writeToUpstreamAndClear(DelegatingTlsChannel.java:409)
		at io.questdb/io.questdb.cutlass.line.tcp.DelegatingTlsChannel.wrapLoop(DelegatingTlsChannel.java:390)
		at io.questdb/io.questdb.cutlass.line.tcp.DelegatingTlsChannel.send(DelegatingTlsChannel.java:183)
		at io.questdb/io.questdb.cutlass.line.AbstractLineSender.sendAll(AbstractLineSender.java:409)
		at io.questdb/io.questdb.cutlass.line.LineTcpSender.flush(LineTcpSender.java:80)
		at io.questdb/io.questdb.cutlass.line.AbstractLineSender.close(AbstractLineSender.java:121)
		at io.questdb.benchmarks/org.questdb.LineTCPSender03MultiTableMain.doSend(LineTCPSender03MultiTableMain.java:57)
		... 2 more

Dataframe timestamp column forcibly renamed to 'timestamp'

When importing a dataframe and specifying a timestamp column with `at=...`, the column name is discarded and replaced with 'timestamp'.

df = pd.DataFrame({
    'temperature': [20.0, 21.0],
    'humidity': [0.5, 0.6],
    'time': pd.to_datetime(['2021-01-01', '2021-01-02'])})

with Sender('server', 9009) as sender:
    sender.dataframe(df, at='time')


I would expect that the column name is persisted in the final table.

Add more info about the questdb package

I'm adding the questdb package via PyCharm and it shows this window (screenshot not included):

I'm all for conciseness, but this is probably too much :)
I'm not sure where this info is coming from, but we should add some details.

Possible stickiness in auto_flush_interval behaviour

Related to: https://questdb.slack.com/archives/C1NFJEER0/p1711587900594789

Code:

connection = Sender(Protocol.Http, self.config['IP'], self.config['HTTP_Port'],
                    auto_flush=True,
                    auto_flush_interval=timedelta(seconds=self.config['Flush_Time']))
with connection as sender:
    logger.info(f"Connect to QuestDB at {self.config['IP']}")
    while True:
        table_name, symbols, columns, timestamp = self.write_queue.get()
        start = perf_counter()
        sender.row(
            table_name,
            symbols=symbols,
            columns=columns,
            at=timestamp)

        logger.info(f"Sent row | {perf_counter() - start:.2f} | {len(sender)}")

Output (truncated):

2024-03-27 21:01:21.108 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 76560
2024-03-27 21:01:21.109 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 77949
2024-03-27 21:01:22.087 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 78916
2024-03-27 21:01:22.107 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 79736
2024-03-27 21:01:22.136 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 80557
2024-03-27 21:01:22.153 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 81378
2024-03-27 21:01:22.161 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 82199
2024-03-27 21:01:22.163 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 83020
2024-03-27 21:01:22.183 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 83841
2024-03-27 21:01:22.185 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 84662
2024-03-27 21:01:22.186 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 85483
2024-03-27 21:01:22.187 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 86304
2024-03-27 21:01:22.188 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.00 | 87693
2024-03-27 21:01:23.816 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.74 | 0
2024-03-27 21:01:23.943 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.13 | 0
2024-03-27 21:01:24.102 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.16 | 0
2024-03-27 21:01:24.194 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.08 | 0
2024-03-27 21:01:24.395 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.20 | 0
2024-03-27 21:01:24.481 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.09 | 0
2024-03-27 21:01:24.579 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.09 | 0
2024-03-27 21:01:24.704 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.12 | 0
2024-03-27 21:01:24.820 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.10 | 0
2024-03-27 21:01:24.919 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.09 | 0
2024-03-27 21:01:25.027 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.11 | 0
2024-03-27 21:01:25.116 EDT [INFO] | QuestDB_Logger:write_loop:91 | Sent row | 0.09 | 0

After the first flush at the 10-second interval, a flush happens on every subsequent row.
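
For anyone narrowing this down, the same settings can be expressed via a configuration string; a sketch assuming the documented auto_flush_rows and auto_flush_interval keys (interval in milliseconds), with the row-count trigger explicitly disabled:

from questdb.ingress import Sender, TimestampNanos

conf = ('http::addr=localhost:9000;'
        'auto_flush_rows=off;'         # disable the row-count trigger
        'auto_flush_interval=10000;')  # flush at most every 10 s
with Sender.from_conf(conf) as sender:
    sender.row('sensors', symbols={'id': 'toronto1'},
               columns={'temperature': 20.0}, at=TimestampNanos.now())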

Wrong data entry(date issue - shows year as 1970)

Describe the bug

While pushing data to QuestDB using ILP (the QuestDB client in Python), I'm passing the correct datetime data for my designated timestamp, but it shows up wrongly (it shows the date as 1970-01-20T13:49:27.971713Z even though I'm passing the current time).
I've tried many different ways of passing the date data to QuestDB, and the issue persists.
Please help me out.
Thanks!

To reproduce

  1. Create any table in questdb
  2. Insert data using Python:

sender.row(
    'table_name',
    columns={
        'date': TimestampMicros.now()},
    at=TimestampNanos.now())

  3. See the data on the web console.

Expected Behavior

Correct timestamp should show up.

Environment

- **QuestDB version**: 7.2
- **OS**: Amazon Linux 2
- **Browser**: Chrome

Possible build issue with Python 3.12.2

See https://questdb.slack.com/archives/C1NFJEER0/p1711567492659309

Issue not present in 3.10.12.

Python 3.12.2
Pip 24

Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting questdb
  Using cached questdb-2.0.0.tar.gz (839 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [50 lines of output]
      Traceback (most recent call last):
        File "/usr/local/src/<...>/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/usr/local/src/<...>/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/usr/local/src/<...>/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 487, in run_setup
          super().run_setup(setup_script=setup_script)
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 172, in <module>
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 1010, in cythonize
          module_list, module_metadata = create_extension_list(
                                         ^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 859, in create_extension_list
          kwds = deps.distutils_info(file, aliases, base).values
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 707, in distutils_info
          return (self.transitive_merge(filename, self.distutils_info0, DistutilsInfo.merge)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 716, in transitive_merge
          return self.transitive_merge_helper(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 722, in transitive_merge_helper
          deps = extract(node)
                 ^^^^^^^^^^^^^
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 687, in distutils_info0
          cimports, externs, incdirs = self.cimports_externs_incdirs(filename)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "Cython/Utils.py", line 129, in Cython.Utils.cached_method.wrapper
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 574, in cimports_externs_incdirs
          for include in self.included_files(filename):
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "Cython/Utils.py", line 129, in Cython.Utils.cached_method.wrapper
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 556, in included_files
          include_path = self.context.find_include_file(include, source_file_path=filename)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Compiler/Main.py", line 299, in find_include_file
          error(pos, "'%s' not found" % filename)
        File "/tmp/pip-build-env-w0men68p/overlay/lib/python3.12/site-packages/Cython/Compiler/Errors.py", line 178, in error
          raise InternalError(message)
      Cython.Compiler.Errors.InternalError: Internal compiler error: 'dataframe.pxi' not found
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

Sender does not throw an exception when connected to a non-ILP socket

We have come across an issue where the Sender does not throw an exception when connected to an open non-ILP TCP socket (e.g. connecting to port 8812, which is meant for SQL connections).

This results in the Sender "writing" to the database seemingly without a problem, when in reality nothing is being written. There is no exception, and no hint that something may be misconfigured.

Steps to reproduce:

import time
from questdb.ingress import Sender

with Sender("valid_host", 8812, auto_flush=True) as sender: # Or any other open TCP socket
    while True:
        sender.row("exampleTable", symbols={"symbol": "test"}, columns={"column": "test"})
        time.sleep(1)

Additional Information:

OS: macOS 12.6 and Windows 11 21H2
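
TCP ILP is fire-and-forget by design, so misdirected writes go unnoticed. One mitigation, sketched here for client versions with HTTP support, is the HTTP transport, where server-side errors surface on flush as IngressError:

from questdb.ingress import Sender, IngressError, TimestampNanos

try:
    with Sender.from_conf('http::addr=valid_host:9000;') as sender:
        sender.row('exampleTable', symbols={'symbol': 'test'},
                   columns={'column': 'test'}, at=TimestampNanos.now())
        sender.flush()  # over HTTP, a misconfigured endpoint raises here
except IngressError as e:
    print(f'ingestion failed: {e}')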

Inserting pandas dataframe silently failed if Symbol columns are not categorical

Describe the bug

The new Python client v1.1.0 supports inserting a Pandas dataframe, with all Symbol columns converted to the Pandas Categorical data type via pd.Categorical, as shown in the docs. However, if the dataframe columns corresponding to Symbol columns have string types, the client silently fails to insert the data and returns None. The API is expected to raise an exception or return an error.

To reproduce

  1. Create table
CREATE TABLE IF NOT EXISTS test_table
  (
    ts TIMESTAMP,
    code SYMBOL CAPACITY 256 INDEX CAPACITY 256,
    loc SYMBOL CAPACITY 256 INDEX CAPACITY 256,
    temperature double
  ) TIMESTAMP(ts);
  2. Insert data:
df = pd.DataFrame({
    'ts': [dt.datetime(2023,1,18,1), dt.datetime(2023,1,18,2)],
    'code': ['sensorA', 'sensorB'], 
    'loc': ['east', 'west'], 
    'temperature': [45.5, 46.2]
})

with Sender('localhost', 9009) as sender:
    sender.dataframe(df, table_name='test_table')

Selecting from test_table returns an empty result.

Then convert symbol columns to pandas categorical:

for c in ['code', 'loc']:
    df[c] = pd.Categorical(df[c])

and repeat the insertion. It will successfully insert the data.

Expected Behavior

It's expected to raise IngressError when data insertion fails.

Environment

- **QuestDB version**: latest docker image. Python client v1.1.0
- **OS**: linux
- **Browser**: Firefox

Additional context

No response

How to create a table if it does not exist with py-questdb-client

Hi,

Can someone help me with how to create a table if it does not exist using the library?

I know there is a REST API method for it, as described on the page:

import sys
import requests
import json

host = 'http://localhost:9000'

def run_query(sql_query):
    query_params = {'query': sql_query, 'fmt' : 'json'}
    try:
        response = requests.get(host + '/exec', params=query_params)
        json_response = json.loads(response.text)
        print(json_response)
    except requests.exceptions.RequestException as e:
        print(f'Error: {e}', file=sys.stderr)

# create table
run_query("CREATE TABLE IF NOT EXISTS trades (name STRING, value INT)")

However, I don't see how to do this with the package.

Thank you for your time and for answering my request.

Please publish the questdb Python package on conda-forge

Dear Maintainers,

Thanks for the amazing project! I wanted to evaluate this database for some quant work, and I see that the QuestDB Python client is not available on conda-forge. Could you please publish this package there as well, for increased security and ease of use?

Thank you very much in advance.
Kind Regards,
Ilya

Designated timestamp column name is always "timestamp" instead of the given name

  1. I run the below code, where I want the column my_ts to be the designated timestamp:
df = pd.DataFrame({
    'my_ts': [pd.Timestamp(2024, 3, 21)],
    'my_col': [42],
})

conf = f'http::addr=localhost:9000;'
with Sender.from_conf(conf) as sender:
    sender.dataframe(df, table_name='foo', at='my_ts')
  2. The resulting table does not contain a my_ts column but timestamp instead. It should be called my_ts.


HTTP timeouts not respected in flush

Using the QuestDB Python client version 2.0.0, it seems that the default HTTP timeouts are not respected when the network connection degrades. The minimal reproducible code is as follows:

self.sender = Sender(Protocol.Http, self.config['IP'], self.config['HTTP_Port'], auto_flush=False)
self.sender.establish()
self.buffer = Buffer()

# ... later, in the flush loop:
success = False
try:
    self.flush_start = perf_counter()
    with self.buffer_lock:
        after_lock = perf_counter()
        self.sender.flush(self.buffer)
    success = True
except:
    logger.opt(exception=True).warning(f"QuestDB write error | {self.config['IP']}")
logger.debug(f"QuestDB flush | {self.config['IP']} | Success: {success} | Lock obtained: {after_lock-self.flush_start}s | Flush: {perf_counter()-after_lock}s")

For reference, there is another loop that feeds this buffer using the lock, but that is irrelevant to the problem I'm having, and I've printed the lock-acquisition time to rule it out as an issue. The results show a growing flush time; for example, this is an exact output from my logs:

2024-03-30 12:26:52.741 EDT [DEBUG] | QuestDB_Logger:flush_loop:133 | QuestDB flush | kronus-nexus | Success: False | Lock obtained: 9.148032404482365e-06s | Flush: 822.764772413997s

You can see the lock is taking almost no time, so that's not the problem, but the flush time keeps growing, here seen at 822 seconds. Here are the ping stats for reference during this time:

21 packets transmitted, 17 received, 19.0476% packet loss, time 20072ms
rtt min/avg/max/mdev = 59.412/318.782/639.467/145.486 ms

The expectation would be that, with the default HTTP configuration, a flush never takes longer than 10 seconds, no matter what. Currently this time increases over time, possibly in conjunction with the growing buffer, but that's difficult to know without knowledge of how the internals of flush work. It also seems that once the connection degrades and the buffer starts growing, all consecutive flush attempts just fail. If the buffer gets small enough and the connection gets slightly better, then the flush finally succeeds as normal.
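
One detail worth noting (hedged, based on the client's configuration docs): the effective per-request timeout is not a flat value but request_timeout plus an allowance that scales with the payload size via request_min_throughput, which would explain the flush time growing along with the buffer. A sketch of tightening these knobs via the configuration string:

from questdb.ingress import Sender

# Timeouts in milliseconds, throughput in bytes/second; key names per the docs.
conf = ('http::addr=kronus-nexus:9000;'
        'request_timeout=5000;'            # base per-request timeout
        'request_min_throughput=1048576;'  # higher floor -> smaller size-based allowance
        'retry_timeout=10000;')            # cap on the retry window
with Sender.from_conf(conf) as sender:
    ...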

BUG: When saving a pd.DataFrame with column name "timestamp" it fails

I am saving a pd.DataFrame where one of the columns is named timestamp.


I save it in the following way:
from questdb.ingress import Sender

with Sender('localhost', 9009) as sender:
    sender.dataframe(df, table_name='my_table')

The result in QDB is the following (screenshot not included). Note how the timestamp column is wrongly inserted for some reason.

Can't install from source package distribution.

For reasons yet to be determined, on some platforms Python 3.12 will attempt to build from source rather than use the provided binaries.

See: #81

  • That issue is being left open to investigate why the binary release isn't being used.
  • This issue is to fix the source package, as it seems to not include all the files necessary to build from source.

Pip install fails on python 3.12

When trying to install the questdb client using pip, it fails on Python 3.12, with both pip install questdb and pip install questdb[dataframe]. It seems to be something related to dataframe.pxi.

Python version: 3.12.2
Pip version: 24.0

Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting questdb
  Using cached questdb-2.0.2.tar.gz (864 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [50 lines of output]
      Traceback (most recent call last):
        File "/home/nebula/test-venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/nebula/test-venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/nebula/test-venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 487, in run_setup
          super().run_setup(setup_script=setup_script)
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 172, in <module>
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 1010, in cythonize
          module_list, module_metadata = create_extension_list(
                                         ^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 859, in create_extension_list
          kwds = deps.distutils_info(file, aliases, base).values
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 707, in distutils_info
          return (self.transitive_merge(filename, self.distutils_info0, DistutilsInfo.merge)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 716, in transitive_merge
          return self.transitive_merge_helper(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 722, in transitive_merge_helper
          deps = extract(node)
                 ^^^^^^^^^^^^^
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 687, in distutils_info0
          cimports, externs, incdirs = self.cimports_externs_incdirs(filename)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "Cython/Utils.py", line 129, in Cython.Utils.cached_method.wrapper
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 574, in cimports_externs_incdirs
          for include in self.included_files(filename):
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "Cython/Utils.py", line 129, in Cython.Utils.cached_method.wrapper
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Build/Dependencies.py", line 556, in included_files
          include_path = self.context.find_include_file(include, source_file_path=filename)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Compiler/Main.py", line 299, in find_include_file
          error(pos, "'%s' not found" % filename)
        File "/tmp/pip-build-env-758g3lhi/overlay/lib/python3.12/site-packages/Cython/Compiler/Errors.py", line 178, in error
          raise InternalError(message)
      Cython.Compiler.Errors.InternalError: Internal compiler error: 'dataframe.pxi' not found
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

No module named 'install_rust' with 1.0.1

Hi,

When trying to install the client via poetry add questdb, I end up with the error:

 ModuleNotFoundError: No module named 'install_rust'

I am using

# poetry --version
Poetry (version 1.2.2)
# python --version
Python 3.11.0
# pip --version
pip 22.3 from /usr/local/lib/python3.11/site-packages/pip (python 3.11)

When looking inside the package, it seems like the module is simply missing from the release:

(after tar xzfv questdb-1.0.1.tar.gz )

root@3e922048426b:/tmp/questdb-1.0.1# ls -al
total 60
drwxr-xr-x  5  501 staff  4096 Aug 16 20:53 .
drwxrwxrwt  1 root root   4096 Oct 30 23:13 ..
-rw-r--r--  1  501 staff 11357 Jul 14 10:54 LICENSE.txt
-rw-r--r--  1  501 staff   267 Jul 14 10:54 MANIFEST.in
-rw-r--r--  1  501 staff   177 Aug 16 20:53 PKG-INFO
-rw-r--r--  1  501 staff  1513 Aug 16 20:44 README.rst
drwxr-xr-x 13  501 staff  4096 Aug 16 20:53 c-questdb-client
-rw-r--r--  1  501 staff  2201 Aug 16 20:44 pyproject.toml
-rw-r--r--  1  501 staff    38 Aug 16 20:53 setup.cfg
-rwxr-xr-x  1  501 staff  4284 Aug 16 20:44 setup.py
drwxr-xr-x  4  501 staff  4096 Aug 16 20:53 src
drwxr-xr-x  2  501 staff  4096 Aug 16 20:53 test

Verbose log:

p# poetry add questdb
Skipping virtualenv creation, as specified in config file.
Using version ^1.0.1 for questdb

Updating dependencies
Resolving dependencies... (0.3s)

Writing lock file

Package operations: 1 install, 0 updates, 0 removals

  • Installing questdb (1.0.1): Failed

  CalledProcessError

  Command '['/usr/local/bin/python3.11', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/usr/local', '--no-deps', '/root/.cache/pypoetry/artifacts/4d/de/02/2f89912d9c708ff1810665dfac7ce69c885d4f1b64558ebbb5c91a1723/questdb-1.0.1.tar.gz']' returned non-zero exit status 1.

  at /usr/local/lib/python3.11/subprocess.py:569 in run
       565│             # We don't call process.wait() as .__exit__ does that for us.
       566│             raise
       567│         retcode = process.poll()
       568│         if check and retcode:
    →  569│             raise CalledProcessError(retcode, process.args,
       570│                                      output=stdout, stderr=stderr)
       571│     return CompletedProcess(process.args, retcode, stdout, stderr)
       572│ 
       573│ 

The following error occurred when trying to handle this error:


  EnvCommandError

  Command ['/usr/local/bin/python3.11', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/usr/local', '--no-deps', '/root/.cache/pypoetry/artifacts/4d/de/02/2f89912d9c708ff1810665dfac7ce69c885d4f1b64558ebbb5c91a1723/questdb-1.0.1.tar.gz'] errored with the following return code 1, and output: 
  Processing /root/.cache/pypoetry/artifacts/4d/de/02/2f89912d9c708ff1810665dfac7ce69c885d4f1b64558ebbb5c91a1723/questdb-1.0.1.tar.gz
    Installing build dependencies: started
    Installing build dependencies: finished with status 'done'
    Getting requirements to build wheel: started
    Getting requirements to build wheel: finished with status 'error'
    error: subprocess-exited-with-error
    
    × Getting requirements to build wheel did not run successfully.
    │ exit code: 1
    ╰─> [21 lines of output]
        Traceback (most recent call last):
          File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
            main()
          File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
            json_out['return_val'] = hook(**hook_input['kwargs'])
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
          File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 118, in get_requires_for_build_wheel
            return hook(config_settings)
                   ^^^^^^^^^^^^^^^^^^^^^
          File "/tmp/pip-build-env-8rmhyd0c/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel
            return self._get_build_requires(config_settings, requirements=['wheel'])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
          File "/tmp/pip-build-env-8rmhyd0c/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires
            self.run_setup()
          File "/tmp/pip-build-env-8rmhyd0c/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 484, in run_setup
            self).run_setup(setup_script=setup_script)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
          File "/tmp/pip-build-env-8rmhyd0c/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 335, in run_setup
            exec(code, locals())
          File "<string>", line 15, in <module>
        ModuleNotFoundError: No module named 'install_rust'
        [end of output]
    
    note: This error originates from a subprocess, and is likely not a problem with pip.
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.

Could not connect to "localhost:9009"

After writing approximately 16,000 rows across multiple tables, I can no longer write data. The error:

Could not connect to "localhost:9009": Only one usage of each socket address (protocol/network address/port) is normally permitted. (os error 10048)

A few days back I didn't have any such issue; I wrote millions of rows in a similar fashion. My program writes data into QuestDB from two WebSockets running in parallel processes.
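
For what it's worth, os error 10048 is Windows' WSAEADDRINUSE, which commonly indicates ephemeral-port exhaustion from opening many short-lived connections. A hedged sketch of the usual mitigation, one long-lived Sender reused across writes rather than one per batch (row_source is a hypothetical stand-in for the WebSocket feeds):

from questdb.ingress import Sender, TimestampNanos

def row_source():
    # Hypothetical stand-in for the WebSocket feeds in the report.
    yield 'trades', {'pair': 'EURJPY'}, {'qty': 400}, TimestampNanos.now()

with Sender('localhost', 9009) as sender:  # one long-lived connection per process
    for table, symbols, columns, at in row_source():
        sender.row(table, symbols=symbols, columns=columns, at=at)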

`rewind to the marker` failure

Hi,

I am sending a moderate volume of rows (about 20 inserts per second) to a QuestDB instance, and I am getting the following error. Any suggestions?

Exception in thread xxx-processing-thread-140173359330640:
Traceback (most recent call last):
  File "src/questdb/ingress.pyx", line 650, in questdb.ingress.Buffer._row
  File "src/questdb/ingress.pyx", line 618, in questdb.ingress.Buffer._at
  File "src/questdb/ingress.pyx", line 613, in questdb.ingress.Buffer._at_now
  File "src/questdb/ingress.pyx", line 592, in questdb.ingress.Buffer._may_trigger_row_complete
  File "src/questdb/ingress.pyx", line 329, in questdb.ingress.may_flush_on_row_complete
  File "src/questdb/ingress.pyx", line 1288, in questdb.ingress.Sender.flush
  File "src/questdb/ingress.pyx", line 1278, in questdb.ingress.Sender.flush
questdb.ingress.IngressError: Could not flush buffer: Broken pipe (os error 32)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/venv/lib/python3.9/site-packages/library/interface/handlers/abstract_solace.py", line 365, in _queue_lift_and_run
    self.process_inbound_message(inbound_message)
  File "/venv/lib/python3.9/site-packages/library/interface/exception_handling.py", line 53, in wrapper
    raise e
  File "/venv/lib/python3.9/site-packages/library/interface/exception_handling.py", line 44, in wrapper
    return func(*args, **kwargs)
  File "/venv/lib/python3.9/site-packages/library/interface/handlers/abstract_solace.py", line 141, in process_inbound_message
    self.process_data_update(incoming_data=incoming_data)
  File "/usr/local/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/venv/lib/python3.9/site-packages/library/interface/handlers/abstract.py", line 115, in process_data_update
    self.data_catalog_listener.on_update(self.data_catalog, update_identification)
  File "/venv/lib/python3.9/site-packages/library/services/calculation/single_thread.py", line 14, in on_update
    return self._execution_wrapper.execute(data_subset, content_id)
  File "/usr/local/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/venv/lib/python3.9/site-packages/library/interface/exception_handling.py", line 53, in wrapper
    raise e
  File "/venv/lib/python3.9/site-packages/library/interface/exception_handling.py", line 44, in wrapper
    return func(*args, **kwargs)
  File "/venv/lib/python3.9/site-packages/library/services/calculation/base.py", line 99, in execute
    process_response(response, data_catalog, self)
  File "/usr/local/lib/python3.9/functools.py", line 888, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "/venv/lib/python3.9/site-packages/library/services/post_calculation/solace_archive.py", line 30, in _
    execution_wrapper.questdb_archiver.insert(table=obj.series, symbols=symbols, columns=columns)
  File "/venv/lib/python3.9/site-packages/library/interface/connectors/questdb.py", line 96, in insert
    self._sender.row(
  File "src/questdb/ingress.pyx", line 1237, in questdb.ingress.Sender.row
  File "src/questdb/ingress.pyx", line 719, in questdb.ingress.Buffer.row
  File "src/questdb/ingress.pyx", line 653, in questdb.ingress.Buffer._row
  File "src/questdb/ingress.pyx", line 490, in questdb.ingress.Buffer._rewind_to_marker
questdb.ingress.IngressError: Can't rewind to the marker: No marker set.

Should flush on `SystemExit` exception

See the following lines:

def __exit__(self, exc_type, _exc_val, _exc_tb):
    """
    Flush pending and disconnect at the end of a ``with`` block.

    If the ``with`` block raises an exception, any pending data will
    *NOT* be flushed.

    This is implemented by calling :func:`Sender.close`.
    """
    self.close(not exc_type)
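
Since sys.exit() raises SystemExit, exc_type is set even on a deliberate, clean exit, so close(False) skips the flush and pending rows are silently dropped. A possible workaround until this changes, sketched below; it manages the Sender by hand instead of using a ``with`` block:

import sys
from questdb.ingress import Sender

sender = Sender.from_conf('http::addr=localhost:9000;')
sender.establish()
try:
    ...  # ingestion loop that may call sys.exit()
    sender.flush()  # normal completion
except SystemExit:
    sender.flush()  # a clean exit arguably should flush too
    raise
finally:
    sender.close()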

Columns of type long256 are not supported

There doesn't seem to be a way to pass values for long256 columns at the moment:

buffer.row('mytable', columns={'col256': very_large_int})
src/questdb/ingress.pyx in questdb.ingress.Buffer.row()
src/questdb/ingress.pyx in questdb.ingress.Buffer._row()
src/questdb/ingress.pyx in questdb.ingress.Buffer._row()
src/questdb/ingress.pyx in questdb.ingress.Buffer._column()
OverflowError: Python int too large to convert to C long

Maybe this could be implemented by checking the value of integer arguments and using the 0x1234i syntax when appropriate, or via some Long256 wrapper?

Partition Planning

Is there any way to modify the partition scheme when generating new tables via a pandas dataframe? It seems the default is by day.
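
Not through the sender itself, as far as the docs on this page show. One approach, sketched here by reusing the REST /exec pattern from the "create table" question above (table and column names are illustrative), is to create the table with the desired PARTITION BY before sending the dataframe:

import requests

# Create the table up front with a monthly partition scheme; subsequent
# sender.dataframe(df, table_name='sensors', ...) calls then append to it.
requests.get('http://localhost:9000/exec', params={'query': (
    'CREATE TABLE IF NOT EXISTS sensors ('
    ' id SYMBOL, temperature DOUBLE, humidity DOUBLE, timestamp TIMESTAMP'
    ') TIMESTAMP(timestamp) PARTITION BY MONTH')})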

Pandas: Support `datetime64[ns]` dataframe index as designated timestamp.

It's common to use a datetime64[ns] df.index in Pandas when dealing with timeseries.
In such a case, our API should just be:

buffer.dataframe(df, table_name="some_name")

This means changing the default logic of the at argument to also accept two new singleton types:

buffer.dataframe(df, ..., at=Server)  # timestamps are set by the server -- the current default.
buffer.dataframe(df, ..., at=Index)  # Use the index.

The new behaviour for the at=None default would be to:

  • Use at=Index logic if the index column is a datetime64,
  • or use at=Server logic if the index is any other type.

Whilst technically a breaking change, the feature change is minor and very unlikely to affect any of our users, so it will not require a new major release number.

Fix examples, `utcnow()` is a bug

There are still some examples with utcnow(); these are buggy, as they will create data in the wrong timezone.

They should all use TimestampNanos.now() instead.
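
For illustration, the buggy pattern next to its replacements:

import datetime
from questdb.ingress import TimestampNanos

# Buggy: utcnow() returns a *naive* datetime, so the client has to guess its
# timezone, and the data can land shifted by the local UTC offset.
bad_at = datetime.datetime.utcnow()

# Correct: an explicit epoch-based timestamp...
good_at = TimestampNanos.now()

# ...or, where a datetime is needed, a timezone-aware one.
aware_at = datetime.datetime.now(tz=datetime.timezone.utc)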

Remove stray single quote from Python example doc.

import pandas as pd
from questdb.ingress import Sender

df = pd.DataFrame({
    'id': pd.Categorical(['toronto1', 'paris3']),
    'temperature': [20.0, 21.0],
    'humidity': [0.5, 0.6],
    'timestamp': pd.to_datetime(['2021-01-01', '2021-01-02'])'})  # <-- note the stray single quote

with Sender('localhost', 9009) as sender:
    sender.dataframe(df, table_name='sensors')

Line Protocol's UInteger doesn't seem to be supported

Can't find a way to insert unsigned integers as described here into my database.

When I try to provide the column value as an np.int64 or a ctypes.c_uint64, I get an unsupported type error, e.g.:

buffer.row(
    "interval",
    symbols={
        "foo": "bar",
    },
    columns={
        "volume": ctypes.c_uint64(1)
    },
    at=TimestampNanos(12345)
)

throws

Unsupported type: <class 'ctypes.c_ulonglong'>. Must be one of: bool, int, float, str, TimestampMicros, datetime.datetime

When I provide the value as a Python integer, it's always committed as a line protocol integer, which leads to problems when dealing with huge numbers.

Is there a workaround for this behaviour, or is this the intended behaviour?

Timestamps before 1970 should be allowed for columns which are not the designated timestamp when inserting dataframes over ILP

When inserting a dataframe with questdb.ingress.Sender.dataframe, I get the following error if a timestamp column holds values older than 1970-01-01:

questdb.ingress.IngressError: Failed to serialize value of column 'xxxxx' at row index 346 (Timestamp('1962-01-05 00:00:00')): Timestamp -252115200000000 is negative. It must be >= 0. [dc=501007]

I am aware that this is supposed to happen if it were the designated timestamp column (due to a known limitation of QuestDB), but QuestDB will accept older timestamps for timestamp columns other than the designated one. The ingress library throws an error in both cases.

Workarounds:

  • create the table manually and define the column which holds the pre-1970 timestamp as type TIMESTAMP
  • convert the timestamp column in pandas to Unix time in milliseconds (see the sketch below)

Unfortunately, this limitation makes it necessary to create the table manually; the table cannot be auto-created by the library.
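
A sketch of the second workaround, as the reporter describes it (the column name is illustrative):

import pandas as pd

df = pd.DataFrame({'old_ts': pd.to_datetime(['1962-01-05', '1965-07-01'])})

# Convert datetime64[ns] to int64 Unix epoch milliseconds, so the serializer
# sees plain integers; the manually created TIMESTAMP column receives the values.
df['old_ts'] = df['old_ts'].astype('int64') // 1_000_000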

Importing pandas dataframes does not assign timestamp correctly

Even this sample code provided in the docs is broken:

df = pd.DataFrame({
    'id': pd.Categorical(['toronto1', 'paris3']),
    'temperature': [20.0, 21.0],
    'humidity': [0.5, 0.6],
    'timestamp': pd.to_datetime(['2021-01-01', '2021-01-02'])
})

with Sender('localhost', 9009) as sender:
    sender.dataframe(df, table_name='sensors')


I've tried various things, including sending a UNIX timestamp, UNIX timestamp converted to microseconds, nanoseconds, etc.

Non Timeseries Data

Is there a way to send a pandas dataframe with no time column?

My use case would be to pull existing time series, run calcs on them, and then re-upload KPIs back to QuestDB. This would save me from maintaining a SQL database.

I see QuestDB handles this natively when importing a test .csv with the web console's CSV import function.

Sender.row() should allow insertion of NULL values using python None value

Describe the bug

When specifying None for a column value in Sender.row(), I see this error message:

Traceback (most recent call last):
File "reproduce.py", line 7, in <module>
    sender.row('test',
  File "src/questdb/ingress.pyx", line 1237, in questdb.ingress.Sender.row
  File "src/questdb/ingress.pyx", line 719, in questdb.ingress.Buffer.row
  File "src/questdb/ingress.pyx", line 654, in questdb.ingress.Buffer._row
  File "src/questdb/ingress.pyx", line 649, in questdb.ingress.Buffer._row
  File "src/questdb/ingress.pyx", line 583, in questdb.ingress.Buffer._column
TypeError: Unsupported type: <class 'NoneType'>. Must be one of: bool, int, float, str, TimestampMicros, datetime.datetime

To reproduce

Web console:

CREATE TABLE test(data STRING)

reproduce.py:

from questdb.ingress import Sender

with Sender('localhost', 9009) as sender:
    # This works as expected:
    sender.row('test',
        columns={'data': 'somedata'}
    )
    # This fails:
    sender.row('test',
        columns={'data': None}
    )
    sender.flush()

Expected behaviour:

I would expect a NULL value to be inserted into the table when I specify None.

I am aware that I can work around this issue by leaving out the column I want to NULL from the columns parameter; however, it would be nice if the client library took care of that. Also, such a workaround fails if the column to be nulled is the only one, or if I want to null all columns ("questdb.ingress.IngressError: Must specify at least one symbol or column").
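
In the meantime, a sketch of that workaround, filtering the dict before the call (it still cannot help in the all-None case):

from questdb.ingress import Sender

raw = {'data': None, 'other': 'somedata'}
# Drop None-valued entries; omitted columns read back as NULL.
columns = {k: v for k, v in raw.items() if v is not None}

with Sender('localhost', 9009) as sender:  # client version from this report
    if columns:  # guard the all-None case, which would raise IngressError
        sender.row('test', columns=columns)
    sender.flush()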

Environment

Linux desktop 5.15.0-41-generic #44-Ubuntu SMP Wed Jun 22 14:20:53 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
questdb-6.4.3-rt-linux-amd64
Python 3.10.4
