
python-mysql-replication

Pure Python implementation of the MySQL replication protocol, built on top of PyMySQL. This allows you to receive events like inserts, updates and deletes with their data, as well as raw SQL queries.

Use cases

  • MySQL to NoSQL database replication
  • MySQL to search engine replication
  • Invalidate a cache when something changes in the database
  • Auditing
  • Real-time analytics

Documentation

Work-in-progress documentation is available here: https://python-mysql-replication.readthedocs.org/en/latest/

Instructions for building the documentation are available here: https://python-mysql-replication.readthedocs.org/en/latest/developement.html

Installation

pip install mysql-replication

Getting support

You can get support and discuss new features on: https://github.com/julien-duponchelle/python-mysql-replication/discussions

Project status

The project is tested with:

  • MySQL 5.5, 5.6 and 5.7 (v0.1 ~ v0.45)
  • MySQL 8.0.14 (v1.0 ~)
  • MariaDB 10.6
  • Python 3.7, 3.11
  • PyPy 3.7, 3.9 (significantly faster than the standard Python interpreter)

For MySQL 8.0.14 and later, set the global variables binlog_row_metadata='FULL' and binlog_row_image='FULL'.

The project is used in production for critical tasks at some medium-sized internet companies, but not every use case has been thoroughly tested in the real world.

Limitations

https://python-mysql-replication.readthedocs.org/en/latest/limitations.html

Featured

Data Pipelines Pocket Reference (by James Densmore, O'Reilly): Introduced and exemplified in Chapter 4: Data Ingestion: Extracting Data.

Streaming Changes in a Database with Amazon Kinesis (by Emmanuel Espina, Amazon Web Services)

Near Zero Downtime Migration from MySQL to DynamoDB (by YongSeong Lee, Amazon Web Services)

Enable change data capture on Amazon RDS for MySQL applications that are using XA transactions (by Baruch Assif, Amazon Web Services)

Projects using this library

MySQL server settings

In your MySQL server configuration file you need to enable replication:

[mysqld]
server-id		           = 1
log_bin			           = /var/log/mysql/mysql-bin.log
binlog_expire_logs_seconds = 864000
max_binlog_size            = 100M
binlog-format              = ROW #Very important if you want to receive write, update and delete row events
binlog_row_metadata        = FULL
binlog_row_image           = FULL

reference: https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html
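The required values can be sanity-checked programmatically before starting a reader. A minimal sketch in plain Python (no server connection; fetching the variables is left to a `SHOW VARIABLES` query you would run yourself, and the `check_binlog_settings` helper is mine, not part of the library):

```python
# The REQUIRED dict mirrors the my.cnf example above: these are the
# variable values this library expects on MySQL 8.0.14+.
REQUIRED = {
    "binlog_format": "ROW",
    "binlog_row_metadata": "FULL",
    "binlog_row_image": "FULL",
}

def check_binlog_settings(variables):
    """Return a list of (name, expected, actual) mismatches.

    `variables` is a dict of server variable names to values, e.g. the
    result of a SHOW VARIABLES query.
    """
    return [
        (name, expected, variables.get(name))
        for name, expected in REQUIRED.items()
        if variables.get(name) != expected
    ]

# Example: a server still using STATEMENT-based logging is flagged.
problems = check_binlog_settings({"binlog_format": "STATEMENT",
                                  "binlog_row_metadata": "FULL",
                                  "binlog_row_image": "FULL"})
```

Running this check at startup fails fast with a clear message instead of silently missing row events later.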

Examples

All examples are available in the examples directory

This example will dump all replication events to the console:

from pymysqlreplication import BinLogStreamReader

mysql_settings = {'host': '127.0.0.1', 'port': 3306, 'user': 'root', 'passwd': ''}

stream = BinLogStreamReader(connection_settings=mysql_settings, server_id=100)

for binlogevent in stream:
    binlogevent.dump()

stream.close()

For this SQL session:

CREATE DATABASE test;
use test;
CREATE TABLE test4 (id int NOT NULL AUTO_INCREMENT, data VARCHAR(255), data2 VARCHAR(255), PRIMARY KEY(id));
INSERT INTO test4 (data,data2) VALUES ("Hello", "World");
UPDATE test4 SET data = "World", data2="Hello" WHERE id = 1;
DELETE FROM test4 WHERE id = 1;

Output will be:

=== RotateEvent ===
Date: 1970-01-01T01:00:00
Event size: 24
Read bytes: 0

=== FormatDescriptionEvent ===
Date: 2012-10-07T15:03:06
Event size: 84
Read bytes: 0

=== QueryEvent ===
Date: 2012-10-07T15:03:16
Event size: 64
Read bytes: 64
Schema: test
Execution time: 0
Query: CREATE DATABASE test

=== QueryEvent ===
Date: 2012-10-07T15:03:16
Event size: 151
Read bytes: 151
Schema: test
Execution time: 0
Query: CREATE TABLE test4 (id int NOT NULL AUTO_INCREMENT, data VARCHAR(255), data2 VARCHAR(255), PRIMARY KEY(id))

=== QueryEvent ===
Date: 2012-10-07T15:03:16
Event size: 49
Read bytes: 49
Schema: test
Execution time: 0
Query: BEGIN

=== TableMapEvent ===
Date: 2012-10-07T15:03:16
Event size: 31
Read bytes: 30
Table id: 781
Schema: test
Table: test4
Columns: 3

=== WriteRowsEvent ===
Date: 2012-10-07T15:03:16
Event size: 27
Read bytes: 10
Table: test.test4
Affected columns: 3
Changed rows: 1
Values:
--
* data : Hello
* id : 1
* data2 : World

=== XidEvent ===
Date: 2012-10-07T15:03:16
Event size: 8
Read bytes: 8
Transaction ID: 14097

=== QueryEvent ===
Date: 2012-10-07T15:03:17
Event size: 49
Read bytes: 49
Schema: test
Execution time: 0
Query: BEGIN

=== TableMapEvent ===
Date: 2012-10-07T15:03:17
Event size: 31
Read bytes: 30
Table id: 781
Schema: test
Table: test4
Columns: 3

=== UpdateRowsEvent ===
Date: 2012-10-07T15:03:17
Event size: 45
Read bytes: 11
Table: test.test4
Affected columns: 3
Changed rows: 1
Values:
--
* data : Hello => World
* id : 1 => 1
* data2 : World => Hello

=== XidEvent ===
Date: 2012-10-07T15:03:17
Event size: 8
Read bytes: 8
Transaction ID: 14098

=== QueryEvent ===
Date: 2012-10-07T15:03:17
Event size: 49
Read bytes: 49
Schema: test
Execution time: 1
Query: BEGIN

=== TableMapEvent ===
Date: 2012-10-07T15:03:17
Event size: 31
Read bytes: 30
Table id: 781
Schema: test
Table: test4
Columns: 3

=== DeleteRowsEvent ===
Date: 2012-10-07T15:03:17
Event size: 27
Read bytes: 10
Table: test.test4
Affected columns: 3
Changed rows: 1
Values:
--
* data : World
* id : 1
* data2 : Hello

=== XidEvent ===
Date: 2012-10-07T15:03:17
Event size: 8
Read bytes: 8
Transaction ID: 14099

Tests

Whenever possible, features are covered by a unit test.

More information is available here: https://python-mysql-replication.readthedocs.org/en/latest/developement.html

Changelog

https://github.com/julien-duponchelle/python-mysql-replication/blob/main/CHANGELOG

Similar projects

Special thanks

Contributors

Major contributor:

Maintainer:

Other contributors:

Thanks to GetResponse for their support

Licence

Copyright 2012-2023 Julien Duponchelle

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


python-mysql-replication's Issues

working with log_file and log_pos

I have the following config:

stream = BinLogStreamReader(
    connection_settings=MYSQL_SETTINGS, 
    server_id=3,
    only_events=[DeleteRowsEvent, WriteRowsEvent, UpdateRowsEvent], 
    blocking=True,
    log_file='mysql.004356',
    log_pos=85760046)

The log_file and log_pos come from the last backup. I am also printing binlogevent.packet.log_pos; if I stop the script and try to start again from the last binlogevent.packet.log_pos, that number is less than 85760046.
What is the position in binlogevent.packet.log_pos?

Can I get the real last position and log file read from the binlog?

Thanks in advance
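One common pattern for this situation is to checkpoint the coordinates yourself as events are processed, rather than restarting from the backup position every time. A minimal sketch, assuming you record `binlogevent.packet.log_pos` together with the current log file (the checkpoint file name is illustrative):

```python
# Persist the last processed binlog position so a restarted reader can
# resume where it left off instead of re-reading from the backup position.
import json
import os

CHECKPOINT = "binlog_checkpoint.json"

def save_position(log_file, log_pos, path=CHECKPOINT):
    """Write the current binlog coordinates to disk."""
    with open(path, "w") as f:
        json.dump({"log_file": log_file, "log_pos": log_pos}, f)

def load_position(path=CHECKPOINT):
    """Return (log_file, log_pos), or (None, None) if no checkpoint exists."""
    if not os.path.exists(path):
        return None, None  # fall back to the backup coordinates
    with open(path) as f:
        state = json.load(f)
    return state["log_file"], state["log_pos"]
```

The loaded values can then be passed as `log_file`/`log_pos` (with `resume_stream=True`) when recreating the reader.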

disordered columns

Hello, how can I get the columns in the same order as the table structure?

Structure:

CREATE TABLE test4 (
id int(11) NOT NULL AUTO_INCREMENT,
data varchar(255) DEFAULT NULL,
data2 varchar(255) DEFAULT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1

Insert:

insert into test4 set data = 'hello', data2 = 'world';

Dump:

('*', u'data', ':', u'hello')
('*', u'id', ':', 8)
('*', u'data2', ':', u'world')
()

Improve the documentation of row event

We have the beginnings of documentation, but row event fields are not documented. This is important because we keep adding new things, like the primary_key list, to the library.

Inconsistent row events after ALTER

The table_map dict is not cleared after an ALTER TABLE, so subsequent update/insert/delete events will have incorrect schemas. I might be able to throw a patch your way later, but may not have time.

Breaking main loop - same user

Hi,
when I run the same script (such as dump_events.py) twice with the same user/password, the main loop breaks.
The user's rights are: GRANT SELECT, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.*
I do not understand why. Does anyone have the same trouble?
I tried to create another user with the same rights. Same result.
Yves

Error with python3

Traceback (most recent call last):
  File "/sources/dc/vg_mysql_adapter/lab.py", line 53, in <module>
    main()
  File "/sources/dc/vg_mysql_adapter/lab.py", line 32, in main
    for binlogevent in stream:
  File "/home/tarzan/.virtualenvs/vma/lib/python3.4/site-packages/pymysqlreplication/binlogstream.py", line 127, in fetchone
    self.__connect_to_stream()
  File "/home/tarzan/.virtualenvs/vma/lib/python3.4/site-packages/pymysqlreplication/binlogstream.py", line 117, in __connect_to_stream
    if pymysql.VERSION < (0, 6, None):
TypeError: unorderable types: int() < NoneType()

Process finished with exit code 1

I figured out that this is caused by pymysql.VERSION = (0, 6, 2, None). The error is raised because Python 3 does not allow comparison between an int and None.
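A hedged workaround, should you hit this on an affected version, is to compare only the numeric prefix of the version tuple (the helper name is mine, not the library's):

```python
def numeric_prefix(version):
    """Keep only the leading int components of a version tuple.

    This sidesteps Python 3's refusal to order int against None, which is
    exactly the failure in the traceback above.
    """
    out = []
    for part in version:
        if not isinstance(part, int):
            break
        out.append(part)
    return tuple(out)

# The shape reported in this issue:
VERSION = (0, 6, 2, None)
safe = numeric_prefix(VERSION)  # (0, 6, 2) — now orderable
```

`numeric_prefix(VERSION) < (0, 7)` then works on both Python 2 and 3.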

Decoding binary as utf-8

I get this trace:

Traceback (most recent call last):
  File "./test.py", line 33, in <module>
    main()
  File "./test.py", line 26, in main
    for binlogevent in stream:
  File "/usr/local/lib/python2.7/dist-packages/pymysqlreplication/binlogstream.py", line 262, in fetchone
    self.__freeze_schema)
  File "/usr/local/lib/python2.7/dist-packages/pymysqlreplication/packet.py", line 98, in __init__
    freeze_schema = freeze_schema)
  File "/usr/local/lib/python2.7/dist-packages/pymysqlreplication/event.py", line 141, in __init__
    self.query = tmp.decode("utf-8")
  File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xae in position 460: invalid start byte

I printed a repr of the packet and it is essentially:

INSERT INTO x(......, ip) VALUES (....,  'b\xae\xe1\xbd');

This is row based replication where the master was originally sent INET6_ATON('::1') for example.

What's the recommended solution here? I'm surprised no one else has hit this as many column types leverage binary.

Thanks!
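One possible workaround on the consumer side (a sketch, not the library's official behavior) is to attempt the decode and fall back to raw bytes for binary payloads such as `INET6_ATON` output:

```python
def decode_or_keep_bytes(raw):
    """Decode bytes as UTF-8 when valid, otherwise keep the raw bytes.

    Binary column payloads (VARBINARY, packed addresses, etc.) are not
    valid UTF-8, so they are returned untouched instead of raising.
    """
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw
```

The caller can then handle `bytes` values explicitly (e.g. convert packed addresses back with the `socket` module) instead of crashing mid-stream.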

UTF-8 error on rethinkdb_sync.py

Hi ,

I tried to run the rethinkdb_sync example file but it shows me an error:

Traceback (most recent call last):
  File "./rethinkdb_sync.py", line 67, in <module>
    main()
  File "./rethinkdb_sync.py", line 60, in main
    vals = dict((str(k), str(v)) for k, v in row["values"].iteritems())
  File "./rethinkdb_sync.py", line 60, in <genexpr>
    vals = dict((str(k), str(v)) for k, v in row["values"].iteritems())

Where do I encode or decode UTF-8?

Replaying binlog with dropped columns causes unhandled exception.

Steps to reproduce:

  1. create a table with few columns
  2. generate some row events
  3. drop a column from that table
  4. generate some more row events
  5. start pymysqlreplication replaying the binlog from the position prior to dropping the column

ER:
Working fine.

AR:
Unhandled error:

  File "/usr/local/lib/python2.7/dist-packages/pymysqlreplication/binlogstream.py", line 262, in fetchone
    self.__freeze_schema)
  File "/usr/local/lib/python2.7/dist-packages/pymysqlreplication/packet.py", line 98, in __init__
    freeze_schema = freeze_schema)
  File "/usr/local/lib/python2.7/dist-packages/pymysqlreplication/row_event.py", line 550, in __init__
    column_schema = self.column_schemas[i]
IndexError: list index out of range

This is partially solved by 4c48538, but that doesn't solve the deeper issue at hand which is how the schema for tables is obtained. Schema is always obtained from the current version of information_schema no matter how far in the past the RowEvent processed is.

fail when 'database' is used as connection setting

Connecting with 'database' as a connection setting (instead of 'db'), will raise an error down the streaming line

    settings = {
        'host': 'localhost',
        'port': 3306,
        'user': 'root',
        'passwd': '',
        'database': 'stuff',
    }
    stream = BinLogStreamReader(connection_settings=settings, server_id=42)

    for event in stream:
         event.dump()

Will raise the following error in TableMapEvent

pymysql.err.ProgrammingError: (1146, u"Table 'stuff.columns' doesn't exist")

It seems that 'db' is overwritten internally, but PyMySQL will favor 'database' when present.
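Until this is fixed upstream, a defensive workaround is to normalize the settings dict before handing it to BinLogStreamReader, so neither key can pin the internal information_schema queries to the wrong schema. A sketch (the helper name is mine):

```python
def normalize_settings(settings):
    """Return a copy of the connection settings without schema pins.

    The reader runs its own information_schema queries through these
    settings, so both 'db' and the PyMySQL-preferred 'database' key are
    dropped; the original dict is left untouched.
    """
    settings = dict(settings)
    settings.pop("database", None)
    settings.pop("db", None)
    return settings
```

Usage: `BinLogStreamReader(connection_settings=normalize_settings(settings), server_id=42)`, using `only_schemas`-style filtering on the consumer side if you only care about one database.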

Attempting to run tests locally fails

I'm not sure whether this is a result of a bug, or some missing documentation about how to get this running, but when I attempt to run the tests, almost all of them fail with:

======================================================================
ERROR: test_update_multiple_row_event (pymysqlreplication.tests.test_basic.TestMultipleRowBinLogStreamReader)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/dreid/code/python-mysql-replication/pymysqlreplication/tests/test_basic.py", line 320, in test_update_multiple_row_event
    self.stream.fetchone()
  File "/Users/dreid/code/python-mysql-replication/pymysqlreplication/binlogstream.py", line 128, in fetchone
    pkt = self._stream_connection.read_packet()
  File "/Users/dreid/.virtualenvs/cache_buster/lib/python2.7/site-packages/pymysql/connections.py", line 686, in read_packet
    packet.check_error()
  File "/Users/dreid/.virtualenvs/cache_buster/lib/python2.7/site-packages/pymysql/connections.py", line 328, in check_error
    raise_mysql_exception(self.__data)
  File "/Users/dreid/.virtualenvs/cache_buster/lib/python2.7/site-packages/pymysql/err.py", line 142, in raise_mysql_exception
    _check_mysql_exception(errinfo)
  File "/Users/dreid/.virtualenvs/cache_buster/lib/python2.7/site-packages/pymysql/err.py", line 138, in _check_mysql_exception
    raise InternalError, (errno, errorvalue)
InternalError: (1236, u"Slave can not handle replication events with the checksum that master is configured to log; the first event 'mysql-bin.000001' at 4, the last event read from './mysql-bin.000001' at 120, the last byte read from './mysql-bin.000001' at 120.")

This is on OS X 10.8 with MySQL 5.6.13

Wrong handling of RotateEvents

Hey,

it's me again :)
I just found out that the module does not handle RotateEvents correctly. After receiving a RotateEvent, the log_pos of the stream gets reset to zero. This does not seem to be a valid value.

What is happening:
The RotateEvent.log_pos is 0. In binlogstream.py the log_pos of the stream will be set to the value of the Event: self.__log_pos = binlog_event.log_pos (binlogstream.py#91).
When the stream needs reconnecting, the position is invalid, and the following error occurs:
InternalError: (1236, u"Client requested master to start replication from impossible position; the first event 'mysql-bin.000001' at 0, the last event read from './mysql-bin.000001' at 4, the last byte read from './mysql-bin.000001' at 4.")

You can test this easily with this code:

    def test_read_query_event(self):
        query = "CREATE TABLE test (id INT NOT NULL AUTO_INCREMENT, data VARCHAR (50) NOT NULL, PRIMARY KEY (id))"
        self.execute(query)

        #RotateEvent
        self.stream.fetchone()
        self.stream._BinLogStreamReader__connected = False
        #FormatDescription
        self.stream.fetchone()

        event = self.stream.fetchone()
        self.assertIsInstance(event, QueryEvent)
        self.assertEqual(event.query, query)

I may have time to create a proper test case for this tonight, but I just wanted to let you know and ask whether you have some useful insight into this problem.

1000 thanks in advance
Björn
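A minimal guard for this class of bug, when tracking your own resume position, is to ignore non-positive values so a RotateEvent's log_pos of 0 can never clobber a valid saved position (a sketch on the consumer side, not the library's internal fix):

```python
def advance_position(saved_pos, event_log_pos):
    """Return the position to resume from after seeing an event.

    Positions of 0 (or None) are never valid resume points in a binlog
    file, so they leave the previously saved position untouched.
    """
    if event_log_pos and event_log_pos > 0:
        return event_log_pos
    return saved_pos
```

This keeps the last good position across RotateEvent/FormatDescriptionEvent boundaries instead of asking the master for the "impossible position" 0.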

Test freeze

With the latest version of master on my Mac (MySQL 5.6)

When i run test:

=> python pymysqlreplication/tests/test_basic.py
...

It freezes and never finishes. Does anyone have the same trouble?

examples/rethinkdb_sync.py does not work

./rethinkdb_sync.py
Traceback (most recent call last):
  File "./rethinkdb_sync.py", line 65, in <module>
    main()
  File "./rethinkdb_sync.py", line 36, in main
    rethinkdb.db_create("mysql").run()
  File "/usr/lib/python2.6/site-packages/rethinkdb/ast.py", line 116, in run
    " a connection to run on.")
rethinkdb.errors.ReqlDriverError: RqlQuery.run must be given a connection to run on.

Can anybody help me?

The new BinLogStreamReader will kill the old one?

When I kicked off my second BinLogStreamReader, the first (running) one suddenly stopped. Did the new BinLogStreamReader kill the currently running one? The two stream readers were monitoring the same binlog file.
My code is as follows:

self.stream = BinLogStreamReader(connection_settings=mysql_settings,
    only_events=[DeleteRowsEvent, WriteRowsEvent, UpdateRowsEvent],
    blocking=True, resume_stream=True, server_id=2,
    log_file='bin-log.1098', log_pos=88569)

Thanks in advance!

Unknown MySQL bin log event type: 0x3 (3)

"/opt/tiger/ss_lib/python_package/lib/python2.7/site-packages/pymysqlreplication/binlogstream.py", line 98, in fetchone
binlog_event = BinLogPacketWrapper(pkt, self.table_map, self._ctl_connection)
File "/opt/tiger/ss_lib/python_package/lib/python2.7/site-packages/pymysqlreplication/packet.py", line 67, in init
raise NotImplementedError("Unknown MySQL bin log event type: " + hex(self.event_type) + " (" + str(self.event_type) + ")")
NotImplementedError: Unknown MySQL bin log event type: 0x3 (3)

utf8mb4 support?

ran into this

 File "/nail/home/cheng/replication_handler/virtualenv_run/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 372, in __getattr__
    self._fetch_rows()
  File "/nail/home/cheng/replication_handler/virtualenv_run/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 367, in _fetch_rows
    self.__rows.append(self._fetch_one_row())
  File "/nail/home/cheng/replication_handler/virtualenv_run/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 462, in _fetch_one_row
    row["after_values"] = self._read_column_data(null_bitmap)
  File "/nail/home/cheng/replication_handler/virtualenv_run/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 98, in _read_column_data
    values[name] = self.__read_string(2, column)
  File "/nail/home/cheng/replication_handler/virtualenv_run/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 176, in __read_string
    string = string.decode(column.character_set_name)
LookupError: unknown encoding: utf8mb4
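Since MySQL's utf8mb4 is standard UTF-8, a small name-mapping shim on the consumer side is one way around the missing codec (a sketch; the mapping keys are MySQL charset names, and the dict is my assumption, not library code):

```python
# Python's codec registry has no 'utf8mb4' entry; MySQL's utf8mb4 (and
# its 3-byte predecessor utf8mb3) both decode correctly as UTF-8.
MYSQL_TO_PYTHON_CHARSET = {
    "utf8mb4": "utf-8",
    "utf8mb3": "utf-8",
}

def python_charset(mysql_name):
    """Translate a MySQL charset name into one Python's codecs know."""
    return MYSQL_TO_PYTHON_CHARSET.get(mysql_name, mysql_name)
```

So `raw.decode(python_charset(column.character_set_name))` works even for 4-byte characters like emoji that utf8mb4 exists to store.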


Connection Lost, I use the newest version 0.1.0

First, thanks for this project. It has been helpful to me.

I remember bug #2 being fixed; I use the fixed version, but my project still throws an exception.

The fixed code is:

try:
   pkt = self._stream_connection.read_packet()
except pymysql.OperationalError as error:
   code, message = error.args
   #2013: Connection Lost
   if code == 2013:
       self.__connected_stream = False
       continue

My exception is thrown from this line:

binlog_event = BinLogPacketWrapper(pkt, self.table_map, self._ctl_connection)

I found the PyMySQL connection will be lost when it has existed for a long time. In my project the connection is lost once a day.

This is my exception log:

  File "/u/app/poself-mysql-to-redis/redis_cache.py", line 61, in mysql_to_redis
    for binlogevent in stream:
  File "/usr/local/lib/python2.7/site-packages/pymysqlreplication/binlogstream.py", line 118, in fetchone
    self._ctl_connection)
  File "/usr/local/lib/python2.7/site-packages/pymysqlreplication/packet.py", line 88, in __init__
    ctl_connection)
  File "/usr/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 481, in __init__
    self.table)
  File "/usr/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 508, in __get_table_information
    """, (schema, table))
  File "/usr/local/lib/python2.7/site-packages/PyMySQL-0.5-py2.7.egg/pymysql/cursors.py", line 262, in execute
    result = super(DictCursor, self).execute(query, args)
  File "/usr/local/lib/python2.7/site-packages/PyMySQL-0.5-py2.7.egg/pymysql/cursors.py", line 117, in execute
    self.errorhandler(self, exc, value)
  File "/usr/local/lib/python2.7/site-packages/PyMySQL-0.5-py2.7.egg/pymysql/connections.py", line 189, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (2013, 'Lost connection to MySQL server during query')

Mysql to Hive

Can we replicate MySQL to Hive tables? I can't find any documentation on this. I saw a comment on the Hadoop Applier blog mentioning python-mysql-replication as serving the same purpose.

If it is possible to replicate MySQL to Hive tables, can you please point me to any documentation?

Duplicate data received

Dear all,

Using python-mysql-replication, I have created a script with blocking=True for Write, Update and Delete events that sends the events to Vertica. It works fine, but when the script is stopped and started again, it re-reads the binlog and re-sends all previously sent events, which creates duplicates in the Vertica DB. Kindly help with this issue.
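One way to make the sink idempotent is to store the binlog coordinates of the last applied event alongside the data and skip anything at or before them on restart. A sketch of the comparison (it relies on binlog file names sorting lexicographically, which MySQL's zero-padded numeric suffixes do; storing the tuple in Vertica is left to you):

```python
def should_apply(last_applied, log_file, log_pos):
    """Decide whether an event is new relative to the stored position.

    `last_applied` is a (log_file, log_pos) tuple recorded with the last
    write to the sink, or None when nothing has been applied yet.
    """
    if last_applied is None:
        return True
    # Tuple comparison: later file wins; within a file, later position wins.
    return (log_file, log_pos) > last_applied
```

Each event that passes the check is applied and its coordinates saved in the same transaction, so a crash-and-restart replays at most the in-flight event.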

TypeError: unsupported operand type(s) for ^: 'NoneType' and 'int'

Hey there,

I am currently having a problem with this awesome module, I was not able to fix it by myself, but here is what I was able to figure out.

This is the stacktrace:

  File "/data/auditgeneration/src/pymysqlreplication/pymysqlreplication/row_event.py", line 233, in __getattr__
    self._fetch_rows()
  File "/data/auditgeneration/src/pymysqlreplication/pymysqlreplication/row_event.py", line 228, in _fetch_rows
    self.__rows.append(self._fetch_one_row())
  File "/data/auditgeneration/src/pymysqlreplication/pymysqlreplication/row_event.py", line 290, in _fetch_one_row
    row["before_values"] = self._read_column_data(null_bitmap)
  File "/data/auditgeneration/src/pymysqlreplication/pymysqlreplication/row_event.py", line 74, in _read_column_data
    values[name] = self.__read_new_decimal(column)
  File "/data/auditgeneration/src/pymysqlreplication/pymysqlreplication/row_event.py", line 199, in __read_new_decimal
    value = self.packet.read_int_be_by_size(size) ^ mask 
TypeError: unsupported operand type(s) for ^: 'NoneType' and 'int'

What I was able to figure out is that size is 3, and this case is not handled by read_int_be_by_size in the BinLogPacketWrapper.

What's the right way to fix this? Don't return for a size of 3, handle the size of 3, or handle the empty return value of read_int_be_by_size?

Thanks in advance
Björn
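For illustration, `int.from_bytes` in Python 3 reads a big-endian integer of any width, which is one way to cover the missing 3-byte case uniformly (a sketch, not the library's actual fix):

```python
def read_int_be(data):
    """Read an unsigned big-endian integer from bytes of any length.

    Replaces a per-size branch table (1, 2, 4, 8 bytes, ...) with a
    single call, so odd widths like the 3-byte chunks used in packed
    DECIMAL values are handled too.
    """
    return int.from_bytes(data, byteorder="big")
```

The result can then be XOR-ed against the sign mask as in `__read_new_decimal`, without ever seeing a `None` operand.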

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 125: ordinal not in range(128)

Hi!

I'm getting the following error while trying to run the script below (gist - https://gist.github.com/sureshsaggar/5805394). The script is derived from the examples, but throws a UnicodeDecodeError while reading a binary log event from the stream.

ubuntu@mysql-ab1:~$ python mysqltail.py
Traceback (most recent call last):
  File "mysqltail.py", line 42, in 
    main()
  File "mysqltail.py", line 27, in main
    for binlogevent in stream:
  File "/usr/local/lib/python2.7/dist-packages/pymysqlreplication/binlogstream.py", line 98, in fetchone
    binlog_event = BinLogPacketWrapper(pkt, self.table_map, self._ctl_connection)
  File "/usr/local/lib/python2.7/dist-packages/pymysqlreplication/packet.py", line 68, in __init__
    self.event = event_class(self, event_size_without_header, table_map, ctl_connection)
  File "/usr/local/lib/python2.7/dist-packages/pymysqlreplication/event.py", line 93, in __init__
    self.query = self.packet.read(event_size - 13 - self.status_vars_length - self.schema_length - 1).decode()
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 125: ordinal not in range(128)

Any idea? TIA
Ss

Cannot resume from events in the middle of a transaction

Row operation events that belong to the same transaction may be grouped into sequences, in which case each such sequence of events begins with a sequence of TABLE_MAP_EVENT events: one per table used by events in the sequence.

When resuming from events in the middle of a transaction, a KeyError exception is raised because no TABLE_MAP_EVENT event has been received.
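A common mitigation is to record a resume position only at transaction boundaries (the XidEvent commit marker), never between a TABLE_MAP_EVENT and the row events that depend on it. A sketch using this library's event class names as plain strings (the function itself is mine):

```python
def next_checkpoint(event_name, log_pos, current_checkpoint):
    """Return the position it is safe to resume from after this event.

    Only an XidEvent (transaction commit) advances the checkpoint; row
    and table-map events inside a transaction leave it untouched, so a
    restart always lands on a clean transaction boundary.
    """
    if event_name == "XidEvent":
        return log_pos
    return current_checkpoint
```

Feeding each event's `log_pos` through this function and persisting only the returned value guarantees the reader never resumes mid-transaction.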

disconnection and reconnection when using fetchone() after EOF (with block=False)

Hello,
I have a python program that uses python-mysql-replication with block=False (and resume_stream=True) and gets events with fetchone() within a loop.

With block=False, fetchone() returns None when it gets an EOF from PyMySQL (no new event available in binary log).

However the next attempt with fetchone(), if there are still no new events, does not return None (EOF) again, but it causes PyMySQL to trigger an error 2013 ("Lost connection to MySQL server during query") because pymysql _read_packet() will try to read a new 4 bytes header but 0 bytes are available (_read_packet() and _read_bytes() definition from https://github.com/PyMySQL/PyMySQL/blob/1f9222feb2a668c4e918000e5358b98afbde38d7/pymysql/connections.py)

This exception causes python-mysql-replication to assume that the connection is gone, it sets self.__connected_stream = False and re-creates the stream connection upon the next loop iteration in fetchone()
(fetchone() definition in https://github.com/noplay/python-mysql-replication/blob/afe27999ccc9df6d47f9a69064e481cff4798a62/pymysqlreplication/binlogstream.py)

This keeps going with disconnections and reconnections until a new event is available.

Is this the intended behavior?
I would have expected fetchone() to keep returning None until something new is available without re-creating the connection every time, but perhaps this is due to how PyMySQL handles further attempts to read beyond EOF?

On a side note, with PyPy 2.6 (based on Python 2.7) this behavior is especially troublesome because the faulted connections are not automatically closed (as they are with standard Python 2.6.6), and each disconnection/reconnection waiting for a new event causes one more connection to stay in CLOSE_WAIT state forever, until resource starvation occurs.
I fixed this issue with PyPy by setting an explicit close in python-mysql-replication when self.__connected_stream is set to False:

            except pymysql.OperationalError as error:
                code, message = error.args
                if code in MYSQL_EXPECTED_ERROR_CODES:
                    self._stream_connection.close() # <--- prevents CLOSE_WAIT socket spam with PyPy
                    self.__connected_stream = False
                    continue

Any thoughts about the behavior described above is appreciated.

Cannot get all the bin-log info

when I ran redis_cache.py, I found an issue:
I had executed some SQL statements (more than 30), but only the first 11 could be read from the binlog.

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |     4437 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (5.47 sec)

and then I executed more SQL, but redis_cache.py ignored the rest (such as updates and deletes).

SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |     9430 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

as we can see, the MySQL server's binlog had grown.

FormatDescriptionEvent has zero log_pos

Hey,

I just updated to the latest commit and since then my tests are failing. It seems that the parsing of a FORMAT_DESCRIPTION_EVENT fails and the stream gets a zero log_pos. Subsequent requests will fail, because zero is not an allowed position in the file.

Here is the dump of the binlog:


mysqlbinlog --base64-output=auto --hexdump mysql-bin.000003 
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#130525 21:42:04 server id 1  end_log_pos 107 
# Position  Timestamp   Type   Master ID        Size      Master Pos    Flags 
#        4 0c 14 a1 51   0f   01 00 00 00   67 00 00 00   6b 00 00 00   00 00
#       17 04 00 35 2e 35 2e 33 31  2d 6c 6f 67 00 00 00 00 |..5.5.31.log....|
#       27 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00 |................|
#       37 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00 |................|
#       47 00 00 00 00 0c 14 a1 51  13 38 0d 00 08 00 12 00 |.......Q.8......|
#       57 04 04 04 04 12 00 00 54  00 04 1a 08 00 00 00 08 |.......T........|
#       67 08 08 02 00  |....|
#   Start: binlog v 4, server v 5.5.31-log created 130525 21:42:04 at startup
ROLLBACK/*!*/;
BINLOG '
DBShUQ8BAAAAZwAAAGsAAAAAAAQANS41LjMxLWxvZwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAMFKFREzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
'/*!*/;
# at 107

This is the packet:

binlog_event.event.dump()
print vars(binlog_event)

generates

=== FormatDescriptionEvent ===
Date: 2013-05-25T22:27:33
Event size: 84
Read bytes: 0
()
{'server_id': 1, 'event_type': 15, 'timestamp': 1369513653, 'log_pos': 0, 'charset': 'utf8', 'packet': <pymysql.connections.MysqlPacket object at 0x2ca9b90>, 'read_bytes': 0, 'flags': 0, '_BinLogPacketWrapper__data_buffer': '', 'event': <pymysqlreplication.event.FormatDescriptionEvent object at 0x2ca9d50>, 'event_size': 103}
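While the parsing bug is open, one defensive workaround on the consumer side is to never checkpoint a position the server could refuse. This is a sketch; `next_resume_pos` is a hypothetical helper, and the constant 4 is the first valid position after the binlog file's magic header:

```python
def next_resume_pos(current_pos, event_log_pos):
    """Advance the saved resume position only when the event carries a
    usable log_pos. A FormatDescriptionEvent may report log_pos 0, and
    any position below 4 (the binlog file's magic header) is invalid."""
    if event_log_pos >= 4:
        return event_log_pos
    return current_pos


pos = 107
pos = next_resume_pos(pos, 0)    # FormatDescriptionEvent reporting log_pos 0: keep 107
pos = next_resume_pos(pos, 214)  # normal event: advance to 214
```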

stream loop stop abnormally

here is the log:

2014-06-07 11:04:01 in16-118 : list_article_category start
2014-06-07 11:06:01 in16-118 : list_article_category start
2014-06-07 11:08:02 in16-118 : list_article_category start
2014-06-07 11:10:01 in16-118 : list_article_category start
2014-06-07 11:12:01 in16-118 : list_article_category start
2014-06-07 11:14:01 in16-118 : list_article_category start
2014-06-07 11:16:01 in16-118 : list_article_category start
2014-06-07 11:18:01 in16-118 : list_article_category start
2014-06-07 11:20:01 in16-118 : list_article_category start
2014-06-07 11:22:01 in16-118 : list_article_category start
2014-06-07 11:24:01 in16-118 : list_article_category start
2014-06-07 11:26:01 in16-118 : list_article_category start

Looks like some MySQL timeout setting; any idea?
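When the server silently drops an idle connection (e.g. after slave_net_timeout), the client can stall without raising any exception, which matches the log above. A simple cross-check is a watchdog that records when the last event arrived; the class below is a sketch (names are mine, not the library's), written so a fake clock can be injected for testing:

```python
import time


class StallWatchdog:
    """Detects a silently stalled stream by tracking when the last event
    arrived. When stalled() returns True, the caller can tear down and
    reopen the stream instead of waiting forever on a dead socket."""

    def __init__(self, limit_seconds, clock=time.monotonic):
        self.limit = limit_seconds
        self.clock = clock
        self.last_event = clock()

    def tick(self):
        """Call once per received binlog event."""
        self.last_event = self.clock()

    def stalled(self):
        return self.clock() - self.last_event > self.limit


# Demonstrate with a fake clock instead of real sleeps:
now = [0.0]
dog = StallWatchdog(limit_seconds=120, clock=lambda: now[0])
now[0] = 300.0  # five minutes pass with no events arriving
```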

blocking=False doesn't appear to work.

When I set blocking to False, I expect the print statement at the bottom of this function to be executed, but it never makes it there (just hangs after reading the bin log). Am I misunderstanding the blocking parameter?

def main():
    stream = BinLogStreamReader(
        connection_settings=config.MYSQL_SETTINGS,
        only_events=[DeleteRowsEvent, WriteRowsEvent, UpdateRowsEvent],
        blocking=False)

    for bin_log_event in stream:
        [log_row_operation(bin_log_event, row) for row in bin_log_event.rows]

    print "This statement is never executed."
    stream.close()
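For reference, blocking=False is expected to end the iteration once the reader catches up with the end of the binlog, while blocking=True waits for new events; if the loop never returns, the reader is effectively behaving as if it were blocking. The intended difference can be sketched with a stand-in fetch function (`read_events` is illustrative, not the library's code):

```python
def read_events(fetch, blocking=False):
    """Sketch of the two iteration modes. `fetch` returns the next event,
    or None once the reader has caught up with the end of the binlog."""
    while True:
        event = fetch()
        if event is None:
            if not blocking:
                return  # non-blocking: generator ends, the for-loop exits
            continue    # blocking: keep polling for new events
        yield event


pending = iter(["write", "update", "delete", None])
seen = list(read_events(lambda: next(pending), blocking=False))
# With blocking=False the loop terminates, so statements after it do run.
```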

Values missing if database rights aren't granted

It took me a lot of time to discover that a user without database rights (only REPLICATION SLAVE and REPLICATION CLIENT) can't see the table structure from information_schema, and that is why event values aren't dumped.

Is it possible to read data from the stream even without those rights?

KeyError when getting a table from table map

Traceback (most recent call last):
  File "essay_category_sync.py", line 68, in main
    for binlogevent in stream:
  File "/opt/tiger/ss_lib/python_package/lib/python2.7/site-packages/pymysqlreplication/binlogstream.py", line 98, in fetchone
    binlog_event = BinLogPacketWrapper(pkt, self.table_map, self._ctl_connection)
  File "/opt/tiger/ss_lib/python_package/lib/python2.7/site-packages/pymysqlreplication/packet.py", line 68, in __init__
    self.event = event_class(self, event_size_without_header, table_map, ctl_connection)
  File "/opt/tiger/ss_lib/python_package/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 397, in __init__
    super(UpdateRowsEvent, self).__init__(from_packet, event_size, table_map, ctl_connection)
  File "/opt/tiger/ss_lib/python_package/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 31, in __init__
    self.columns = self.table_map[self.table_id].columns
KeyError: 40
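KeyError: 40 means the row event referenced table_id 40, but no TableMapEvent for that id had been seen, which typically happens when resuming from a log position past the map event. A defensive lookup can at least make the failure self-explanatory; this is a sketch with a stand-in entry class, not a patch to the library:

```python
class TableMapEntry:
    """Stand-in for the library's per-table map entry."""

    def __init__(self, columns):
        self.columns = columns


def columns_for(table_map, table_id):
    """Replace the bare KeyError with a message explaining the likely
    cause: the TableMapEvent preceding this row event was skipped."""
    entry = table_map.get(table_id)
    if entry is None:
        raise LookupError(
            f"table id {table_id} missing from table map; resume from an "
            f"earlier position that includes its TableMapEvent"
        )
    return entry.columns
```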

`IndexError` in TableMapEvent

Traceback (most recent call last):
  File "/srv/virtualenvs/zeus/local/lib/python2.7/site-packages/pymysqlreplication/binlogstream.py", line 118, in fetchone
    self._ctl_connection)
  File "/srv/virtualenvs/zeus/local/lib/python2.7/site-packages/pymysqlreplication/packet.py", line 88, in __init__
    ctl_connection)
  File "/srv/virtualenvs/zeus/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 488, in __init__
    column_schema = self.column_schemas[i]
IndexError: tuple index out of range

More info:

>>> print column_types
['\x03', '\x03', '\x03', '\x0f', '\x0f', '\x03', '\x0c', '\x0c']
>>> print self.columns
[]
>>> print table_map
{}
>>> print self.column_schemas
()

And the self.packet.read_length_coded_binary() at L485 returns 4.

Putting this line at the start of TableMapEvent __init__ shows this:

>>> print repr(self.packet.read(58))
'N\x01\x00\x00\x00\x00\x01\x00\x06sentry\x00\x19sentry_messagefiltervalue\x00\x08\x03\x03\x03\x0f\x0f\x03\x0c\x0c\x04`\x00X\x02\xe0'

Connection Lost, I use the newest version 0.1.0

First, thanks for this project; it is helpful to me.

I remember the bug from #2 was fixed, and I use the fixed version, but my project still throws an exception.

The fixed code is:

try:
    pkt = self._stream_connection.read_packet()
except pymysql.OperationalError as error:
    code, message = error.args
    # 2013: Connection Lost
    if code == 2013:
        self.__connected_stream = False
        continue

My exception is thrown from this:

binlog_event = BinLogPacketWrapper(pkt, self.table_map,
                                   self._ctl_connection)

I found the pymysql connection gets lost when it has existed for a long time. In my project the connection is lost once a day.

This is my exception log:

  File "/u/app/poself-mysql-to-redis/redis_cache.py", line 61, in mysql_to_redis
    for binlogevent in stream:
  File "/usr/local/lib/python2.7/site-packages/pymysqlreplication/binlogstream.py", line 118, in fetchone
    self._ctl_connection)
  File "/usr/local/lib/python2.7/site-packages/pymysqlreplication/packet.py", line 88, in __init__
    ctl_connection)
  File "/usr/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 481, in __init__
    self.table)
  File "/usr/local/lib/python2.7/site-packages/pymysqlreplication/row_event.py", line 508, in __get_table_information
    """, (schema, table))
  File "/usr/local/lib/python2.7/site-packages/PyMySQL-0.5-py2.7.egg/pymysql/cursors.py", line 262, in execute
    result = super(DictCursor, self).execute(query, args)
  File "/usr/local/lib/python2.7/site-packages/PyMySQL-0.5-py2.7.egg/pymysql/cursors.py", line 117, in execute
    self.errorhandler(self, exc, value)
  File "/usr/local/lib/python2.7/site-packages/PyMySQL-0.5-py2.7.egg/pymysql/connections.py", line 189, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (2013, 'Lost connection to MySQL server during query')
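Since the control connection (used for the information_schema lookup in __get_table_information) can also time out after a long idle period, a generic remedy is to catch error 2013 around the operation, reconnect, and retry once. A sketch with stub callables; `with_retry` and `LostConnection` are hypothetical names, not the library's API:

```python
class LostConnection(Exception):
    """Stand-in for pymysql.OperationalError with code 2013."""


def with_retry(op, reconnect, retries=1):
    """Run `op`; on a lost connection, reconnect and retry up to
    `retries` times before re-raising."""
    for attempt in range(retries + 1):
        try:
            return op()
        except LostConnection:
            if attempt == retries:
                raise
            reconnect()  # re-establish the control connection


calls = {"op": 0, "reconnect": 0}


def flaky_query():
    # Fails once (simulating a dropped connection), then succeeds.
    calls["op"] += 1
    if calls["op"] == 1:
        raise LostConnection("2013: Lost connection")
    return "rows"


def do_reconnect():
    calls["reconnect"] += 1


result = with_retry(flaky_query, do_reconnect)
```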

Strange error in row_event.py

Hi,

Nice library.

Here is a problem that has started happening, and I don't know why. Here is the Python error:

  File "/usr/lib/python2.6/site-packages/pymysqlreplication/binlogstream.py", line 146, in fetchone
    self.use_checksum)
  File "/usr/lib/python2.6/site-packages/pymysqlreplication/packet.py", line 93, in __init__
    ctl_connection)
  File "/usr/lib/python2.6/site-packages/pymysqlreplication/row_event.py", line 491, in __init__
    column_schema = self.column_schemas[i]
IndexError: tuple index out of range

I think this is due to some MySQL internal query that pymysqlreplication has trouble with. The EventType that appears to trigger the failure is a QueryEvent, but the event is an internal MySQL event. Here are some events that occur just before the failure; you can see that these are internal MySQL queries:

=== QueryEvent ===
Date: 2013-12-11T16:31:15
Log position: 30510
Event size: 80
Read bytes: 80
Schema: mysql
Execution time: 0
Query: TRUNCATE TABLE time_zone_transition
=== QueryEvent ===
Date: 2013-12-11T16:31:15
Log position: 30614
Event size: 85
Read bytes: 85
Schema: mysql
Execution time: 0
Query: TRUNCATE TABLE time_zone_transition_type
=== QueryEvent ===
Date: 2013-12-11T16:31:15
Log position: 30683
Event size: 50
Read bytes: 50
Schema: mysql
Execution time: 0
Query: BEGIN

^^ error occurs immediately after this

You can see that the date is 2013.12.11.

I looked at the source code where the error occurs:

485        for i in range(0, len(column_types)):
486            column_type = column_types[i]
487            column_schema = self.column_schemas[i]
488            col = Column(byte2int(column_type), column_schema, from_packet)
489            self.columns.append(col)

The problem is that self.column_schemas is None, so the indexing fails.

Any ideas on this? Thanks.

No SELECT privileges result in silent failure

Steps to reproduce:

  1. GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'a_user'@'%'
  2. REVOKE SELECT ON your_db.* FROM 'a_user'@'%' (or just don't grant it).
  3. use pymysqlreplication

Expected result:
Raising an exception stating that the user has no privilege to read the table's column information.

Actual result:
Receiving RowEvents with no rows associated with them.

Background:
BinLogStreamReader.__get_table_information tries to access column information. Without privileges this does not return an error, but rather no rows. This information is later used in RowEvent.__init__.
https://github.com/vartec/python-mysql-replication/blob/a4aa08c746cf69ed141301db81b6660655eff3c4/pymysqlreplication/row_event.py#L55-L58

Having no column information causes the event to be silently marked as completed = False, and subsequently no rows are fetched.
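Until the library validates this itself, a caller-side guard can turn the silent failure into a loud one. `check_table_information` and its message are hypothetical; `rows` stands for the result of the information_schema query:

```python
def check_table_information(rows, schema, table):
    """Raise instead of silently yielding empty row events when the
    control connection got no column metadata back (typically a missing
    SELECT privilege on the replicated schema)."""
    if not rows:
        raise PermissionError(
            f"no column information for {schema}.{table}; grant SELECT "
            f"on {schema}.* to the replication user"
        )
    return rows
```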

Stream loop stop abnormally

Here are the logs. No exception is caught; it is just as if the stream reached its end.

2013-12-15 01:32:06,440, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 32335177 info= exit
2013-12-15 01:32:07,714, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 32449264 info= exit
2013-12-15 01:32:08,924, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 32546593 info= exit
2013-12-15 01:32:10,137, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 32651193 info= exit
2013-12-15 01:32:11,380, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 32738018 info= exit
2013-12-15 01:32:12,623, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 32832237 info= exit
2013-12-15 01:32:13,835, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 32965993 info= exit
2013-12-15 01:32:15,045, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 33229939 info= exit
2013-12-15 01:32:16,280, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 33518281 info= exit
2013-12-15 01:32:17,349, INFO SAVE_POS: log_file= mysql-bin.119561 pos= 33774337 info= exit

Here is my code:

class MySQLBinlogSubject(object):

    def __init__(self, server_id, host, port, user, passwd, pos_file=".current_pos"):
        self.host = host
        self.port = port
        self.user = user
        self.passwd = passwd
        self.pos_file = pos_file
        self.server_id = server_id
        self.is_run = threading.Event()

    def load(self):
        try:
            with open(self.pos_file, 'a+') as fp:
                self.bin_log_file, self.file_pos = json.load(fp)
        except:
            self.bin_log_file, self.file_pos = None, 0

    def save(self, info=""):
        logging.info('SAVE_POS: log_file= %s pos= %s info= %s', self.bin_log_file, self.file_pos, info)

        with open(self.pos_file + '.tmp', 'w+') as fp:
            json.dump((self.bin_log_file, self.file_pos), fp)

        shutil.copy(self.pos_file + '.tmp', self.pos_file)

    def start(self):

        self.load()

        mysql_settings = {
            'host': self.host,
            'port': self.port,
            'user': self.user,
            'passwd': self.passwd,
        }

        self.is_run.set()

        stream = BinLogStreamReader(
            connection_settings=mysql_settings,
            server_id=self.server_id,
            blocking=True,
            resume_stream=True,
            only_events={UpdateRowsEvent, WriteRowsEvent, RotateEvent},
            log_file=self.bin_log_file, log_pos=self.file_pos
        )

        try:
            for binlogevent in stream:

                try:
                    if isinstance(binlogevent, RotateEvent):
                        self.bin_log_file = stream.log_file
                        self.file_pos = stream.log_pos

                        # update when rotate
                        self.save("rotate")
                        continue

                    self.process(binlogevent)
                    if not self.is_run.is_set():
                        break

                finally:
                    self.file_pos = stream.log_pos

        except Exception as ex:
            logging.exception("binlog monitor error: ex= %s", ex)
        finally:
            self.save("exit")
            stream.close()

    def stop(self):
        self.is_run.clear()

    def process(self, binlogevent):
        raise NotImplementedError()
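One remark on save() above: writing a temp file and then shutil.copy into the live position file is not atomic, so a crash mid-copy can leave a torn JSON file, after which load() silently resets to position 0. A sketch of an atomic variant using os.replace (function names are mine, not part of the class above):

```python
import json
import os
import tempfile


def save_position(path, log_file, pos):
    """Write the position to a temp file in the same directory, fsync it,
    then atomically swap it into place with os.replace (atomic on POSIX)."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as fp:
            json.dump([log_file, pos], fp)
            fp.flush()
            os.fsync(fp.fileno())
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise


def load_position(path, default=(None, 0)):
    """Return (log_file, pos), or `default` if the file is missing or corrupt."""
    try:
        with open(path) as fp:
            return tuple(json.load(fp))
    except (OSError, ValueError):
        return default
```

Because the swap is a single rename, a reader at any moment sees either the old complete file or the new complete file, never a partial one.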

binlogevent fetch rows fail when meeting chinese chars coded by utf-8

This is the error message, thanks.

File "./go.py", line 14, in <module>
    for row in binlogevent.rows:
  File "/home/lifei/.local/lib/python2.7/site-packages/PyMySQLReplication-0.0.1-py2.7.egg/pymysqlreplication/row_event.py", line 230, in __getattr__
    self._fetch_rows()
  File "/home/lifei/.local/lib/python2.7/site-packages/PyMySQLReplication-0.0.1-py2.7.egg/pymysqlreplication/row_event.py", line 225, in _fetch_rows
    self.__rows.append(self._fetch_one_row())
  File "/home/lifei/.local/lib/python2.7/site-packages/PyMySQLReplication-0.0.1-py2.7.egg/pymysqlreplication/row_event.py", line 287, in _fetch_one_row
    row["before_values"] = self._read_column_data(null_bitmap)
  File "/home/lifei/.local/lib/python2.7/site-packages/PyMySQLReplication-0.0.1-py2.7.egg/pymysqlreplication/row_event.py", line 76, in _read_column_data
    values[name] = self.__read_string(column.length_size, column)
  File "/home/lifei/.local/lib/python2.7/site-packages/PyMySQLReplication-0.0.1-py2.7.egg/pymysqlreplication/row_event.py", line 107, in __read_string
    str = str.decode(column.character_set_name)
  File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x84 in position 11717: invalid start byte
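The 0x84 byte suggests the column's bytes are not actually valid UTF-8 (often latin1 or GBK data stored in a column declared utf8). Fixing the data or the declared charset is the real solution, but a consumer can keep the stream alive with a fallback decode. `decode_column` is a hypothetical helper, not the library's function:

```python
def decode_column(raw, charset="utf-8"):
    """Try the declared charset first; on failure, substitute U+FFFD for
    the bad bytes instead of killing the whole replication stream."""
    try:
        return raw.decode(charset)
    except UnicodeDecodeError:
        return raw.decode(charset, errors="replace")


text = decode_column(b"\x84abc")  # 0x84 is an invalid UTF-8 start byte
```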

Connection lost

Traceback (most recent call last):
    for binlogevent in stream:
  File "/home/noplay/code/python-mysql-replication/pymysqlreplication/binlogstream.py", line 70, in fetchone
    pkt = self.__stream_connection.read_packet()
  File "/home/noplay/.virtualenvs/pymysqlreplication/local/lib/python2.7/site-packages/PyMySQL-0.5-py2.7.egg/pymysql/connections.py", line 685, in read_packet
    packet = packet_type(self)
  File "/home/noplay/.virtualenvs/pymysqlreplication/local/lib/python2.7/site-packages/PyMySQL-0.5-py2.7.egg/pymysql/connections.py", line 200, in __init__
    self.__recv_packet()
  File "/home/noplay/.virtualenvs/pymysqlreplication/local/lib/python2.7/site-packages/PyMySQL-0.5-py2.7.egg/pymysql/connections.py", line 217, in __recv_packet
    raise OperationalError(2013, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')

Memory leak

In my case, I need to read new data from a few tables of a MySQL slave server. I am using your library only for WriteRowsEvent, catching the data and inserting it into a cache DB (Redis).
The program runs continuously on a server, waiting for the next filtered event.

The standard libraries used in the program are sys, os, logging, OptionParser, cPickle, etc.
At this point of my development, I have not started the Redis part yet.
For debugging, I am using the WMI lib (on Windows) to see the memory size at each new event.
If you need, I can send you the program and logs.

Yves

