
dbdeployer's Introduction

The end of dbdeployer

dbdeployer

dbdeployer is a tool that deploys MySQL database servers easily. It is a port of MySQL-Sandbox, originally written in Perl, redesigned from the ground up in Go. See the features comparison for more detail.

Documentation updated for version 1.66.0 (26-Jul-2022 10:30 UTC)



dbdeployer's Issues

Failed to install and start

Describe the bug
dbdeployer deploy replication 5.7.22 --nodes 2 --force fails.

To Reproduce
Steps to reproduce the behavior:

docker pull circleci/ruby:2.5.0
docker run -it --name ruby25 circleci/ruby:2.5.0
# now inside the container
bash
cd /home/circleci
wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz
wget https://github.com/datacharmer/dbdeployer/releases/download/1.8.0/dbdeployer-1.8.0.linux.tar.gz
tar -xzf dbdeployer-1.8.0.linux.tar.gz
sudo mv ./dbdeployer-1.8.0.linux /usr/local/bin/dbdeployer
mkdir -p opt/mysql
dbdeployer unpack mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz
dbdeployer deploy replication 5.7.22 --nodes 2 --force

Expected behavior
Expected nodes to start, but instead:

circleci@f7c244e897bb:~$ dbdeployer deploy replication 5.7.22 --nodes 2 --force
Creating directory /home/circleci/sandboxes
Installing and starting master
err: exit status 1
cmd: &exec.Cmd{Path:"/home/circleci/sandboxes/rsandbox_5_7_22/master/start", Args:[]string{"/home/circleci/sandboxes/rsandbox_5_7_22/master/start", ""}, Env:[]string(nil), Dir:"", Stdin:io.Reader(nil), Stdout:(*bytes.Buffer)(0x1a820360), Stderr:(*exec.prefixSuffixSaver)(0x1a795e30), ExtraFiles:[]*os.File(nil), SysProcAttr:(*syscall.SysProcAttr)(nil), Process:(*os.Process)(0x1a75c120), ProcessState:(*os.ProcessState)(0x1a65e5c0), ctx:context.Context(nil), lookPathErr:error(nil), finished:true, childFiles:[]*os.File{(*os.File)(0x1a669d80), (*os.File)(0x1a669da0), (*os.File)(0x1a669dc0)}, closeAfterStart:[]io.Closer{(*os.File)(0x1a669d80), (*os.File)(0x1a669da0), (*os.File)(0x1a669dc0)}, closeAfterWait:[]io.Closer{(*os.File)(0x1a669d98), (*os.File)(0x1a669db8)}, goroutine:[]func() error{(func() error)(0x813dfe0), (func() error)(0x813dfe0)}, errch:(chan error)(0x1a838440), waitDone:(chan struct {})(nil)}
stdout: ................................................................................................................................................................................... sandbox server not started yet

err: exit status 1
cmd: &exec.Cmd{Path:"/home/circleci/sandboxes/rsandbox_5_7_22/master/load_grants", Args:[]string{"/home/circleci/sandboxes/rsandbox_5_7_22/master/load_grants", ""}, Env:[]string(nil), Dir:"", Stdin:io.Reader(nil), Stdout:(*bytes.Buffer)(0x1a8204e0), Stderr:(*exec.prefixSuffixSaver)(0x1a795e90), ExtraFiles:[]*os.File(nil), SysProcAttr:(*syscall.SysProcAttr)(nil), Process:(*os.Process)(0x1a75c240), ProcessState:(*os.ProcessState)(0x1a65e740), ctx:context.Context(nil), lookPathErr:error(nil), finished:true, childFiles:[]*os.File{(*os.File)(0x1a669e40), (*os.File)(0x1a669e60), (*os.File)(0x1a669e80)}, closeAfterStart:[]io.Closer{(*os.File)(0x1a669e40), (*os.File)(0x1a669e60), (*os.File)(0x1a669e80)}, closeAfterWait:[]io.Closer{(*os.File)(0x1a669e58), (*os.File)(0x1a669e78)}, goroutine:[]func() error{(func() error)(0x813dfe0), (func() error)(0x813dfe0)}, errch:(chan error)(0x1a838740), waitDone:(chan struct {})(nil)}
stdout:
Installing and starting slave1
err: exit status 1
cmd: &exec.Cmd{Path:"/home/circleci/sandboxes/rsandbox_5_7_22/node1/start", Args:[]string{"/home/circleci/sandboxes/rsandbox_5_7_22/node1/start", ""}, Env:[]string(nil), Dir:"", Stdin:io.Reader(nil), Stdout:(*bytes.Buffer)(0x1a93d8c0), Stderr:(*exec.prefixSuffixSaver)(0x1a676d80), ExtraFiles:[]*os.File(nil), SysProcAttr:(*syscall.SysProcAttr)(nil), Process:(*os.Process)(0x1a75c150), ProcessState:(*os.ProcessState)(0x1a65e000), ctx:context.Context(nil), lookPathErr:error(nil), finished:true, childFiles:[]*os.File{(*os.File)(0x1a8936c8), (*os.File)(0x1a8936e8), (*os.File)(0x1a893708)}, closeAfterStart:[]io.Closer{(*os.File)(0x1a8936c8), (*os.File)(0x1a8936e8), (*os.File)(0x1a893708)}, closeAfterWait:[]io.Closer{(*os.File)(0x1a8936e0), (*os.File)(0x1a893700)}, goroutine:[]func() error{(func() error)(0x813dfe0), (func() error)(0x813dfe0)}, errch:(chan error)(0x1a50f5c0), waitDone:(chan struct {})(nil)}
stdout: ................................................................................................................................................................................... sandbox server not started yet

$HOME/sandboxes/rsandbox_5_7_22/initialize_slaves
initializing slave 1
Replication directory installed in $HOME/sandboxes/rsandbox_5_7_22
run 'dbdeployer usage multiple' for basic instructions'
circleci@f7c244e897bb:~$

Environment:

  • OS:
circleci@f7c244e897bb:~$ cat /proc/version
Linux version 4.9.87-linuxkit-aufs (root@95fa5ec30613) (gcc version 6.4.0 (Alpine 6.4.0) ) #1 SMP Wed Mar 14 15:12:16 UTC 2018

Hardware: (if applicable)

  • Free storage
circleci@f7c244e897bb:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          59G   29G   27G  52% /
tmpfs            64M     0   64M   0% /dev
tmpfs          1000M     0 1000M   0% /sys/fs/cgroup
/dev/sda1        59G   29G   27G  52% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs          1000M     0 1000M   0% /sys/firmware
  • Total RAM
circleci@f7c244e897bb:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           2.0G        143M         72M         10M        1.7G        1.6G
Swap:          1.0G        133M        890M

Having the possibility to use different instances of the same environment

It is not possible to have, for example, two different setups of MySQL Group Replication for the same version:

$ dbdeployer-0.1.11.linux --topology=group replication 8.0.4
Directory /home/fred/sandboxes/group_msb_8_0_4 already exists

It would be great to be able to do this. (or maybe I didn't find the right option to achieve this)

Thank you for such a useful tool :)

dbdeployer fails when .mylogin.cnf is found

When the file $HOME/.mylogin.cnf exists, it bypasses the effects of --no-defaults and --defaults-file, which are the foundation of some of the isolation features of MySQL sandboxes.

The behavior is documented in the MySQL manual.

To overcome this problem, dbdeployer must alter the environment variable MYSQL_TEST_LOGIN_FILE so that it points to a non-existing path.
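The fix described above can be sketched as follows. This is a minimal Python illustration of the idea (dbdeployer itself is written in Go); the function name and the chosen placeholder path are illustrative, not dbdeployer's actual code:

```python
import os

def isolated_env(base_env=None):
    """Return an environment where the MySQL client cannot read ~/.mylogin.cnf.

    Pointing MYSQL_TEST_LOGIN_FILE at a path that does not exist makes the
    client skip the login-path file, restoring the isolation that
    --no-defaults and --defaults-file are supposed to provide.
    """
    env = dict(base_env if base_env is not None else os.environ)
    # Any non-existent path works; this name is arbitrary.
    env["MYSQL_TEST_LOGIN_FILE"] = "/tmp/dont_break_my_sandboxes"
    return env
```

The returned dictionary would then be passed as the environment of every client process the sandbox scripts spawn.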

Add support for --client-from=X

Is your feature request related to a problem? Please describe.

I am working on adding TiDB support to dbdeployer. One of its differences from regular MySQL packages is that it ships without clients (so, for example, the dbdeployer use command will fail).

I could work around this by requiring a mysql client to be available in the $PATH, but in a discussion at FOSDEM @datacharmer suggested there is also a use case in MySQL, both for Docker and for using a newer client against an older server and vice versa.

Describe the solution you'd like

dbdeployer deploy single 8.0.14 --client-from=5.7.23

Describe alternatives you've considered

The alternative is to require MYSQL_EDITOR to be specified, or require a global mysql in $PATH.

Additional context

Suggestion: my.sandbox.cnf to include report_port by default

When setting up a replication environment (dbdeployer deploy replication), and since everything is obviously installed on the same box, it makes sense for dbdeployer to add report_port=<same-as-port> automatically. There is no harm in doing so, and it is a manual step I always end up taking after installing a sandbox setup.

report_host is a bit different, and I think it makes sense that I would include --my-cnf-options report_host=127.0.0.1 if need be.
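The suggestion above amounts to a one-line addition when the per-node my.sandbox.cnf is generated. A minimal Python sketch of that generation step (dbdeployer is written in Go; the function name is hypothetical):

```python
def render_node_options(port, report_host=None):
    """Build the replication-related lines for a node's my.sandbox.cnf.

    report_port always mirrors the node's own port, since every node of a
    sandbox runs on the same host; report_host stays opt-in, as suggested.
    """
    lines = [f"port={port}", f"report_port={port}"]
    if report_host is not None:
        lines.append(f"report_host={report_host}")
    return lines
```

For a node on port 20322 this emits `report_port=20322` with no extra input from the user, while `report_host` is only added when explicitly requested.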

Options --db-user and --rpl-user should not allow 'root' as value

We can customize the default user ('msandbox') with the options --db-user (default database user) and --rpl-user (user for replication).

Neither of these options should accept "root" as a value, as it would clash with the configuration of the real root user.

To Reproduce

dbdeployer deploy single --db-user=root 8.0

Expected behavior
dbdeployer should refuse the "root" value with a clear error indicating the reason.
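The requested validation is a simple guard on the option values. A minimal Python sketch (dbdeployer itself is Go; names here are illustrative):

```python
RESERVED_USERS = {"root"}

def validate_sandbox_user(name, option):
    """Reject user names that would clash with the real root account.

    Raises a clear, reasoned error as the issue requests, instead of
    letting the deployment fail later in an obscure way.
    """
    if name.lower() in RESERVED_USERS:
        raise ValueError(
            f"option {option} cannot be '{name}': it would clash with the "
            "privileged root account configured during initialization"
        )
    return name
```

Both `--db-user` and `--rpl-user` would run their values through the same check before any sandbox files are written.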

zsh: exec format error: /usr/local/bin/dbdeployer

Describe the bug
I ran the following script:

set -ex

VERSION=1.8.0
OS=osx
origin=https://github.com/datacharmer/dbdeployer/releases/download/$VERSION
filename=dbdeployer-$VERSION.$OS.tar.gz
wget $origin/$filename
tar -xzf $filename
chmod +x $filename
sudo mv $filename /usr/local/bin/dbdeployer
rm dbdeployer-$VERSION.$OS

When I invoke dbdeployer I get the error: zsh: exec format error: /usr/local/bin/dbdeployer

Environment:

Hardware: (if applicable)

  • Free storage: 300GB
  • Total RAM: 16GB

can't enable mysqlx and group replication at the same time

Describe the bug
Enabling mysqlx and group replication at the same time fails: the server does not start.

The plugin-load parameter is overwritten.

To Reproduce

dbdeployer deploy --enable-mysqlx --topology=group replication 5.7 --single-primary -n 6

plugin-load=group_replication.so
group_replication=FORCE_PLUS_PERMANENT
group_replication_start_on_boot=OFF
group_replication_bootstrap_group=OFF
transaction_write_set_extraction=XXHASH64
report-host=127.0.0.1
loose-group_replication_group_name="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
loose-group-replication-single-primary-mode=on
gtid_mode=ON
enforce-gtid-consistency
# replication crash-safe options
master-info-repository=table
relay-log-info-repository=table
relay-log-recovery=on
loose-group-replication-local-address=127.0.0.1:21048
loose-group-replication-group-seeds=127.0.0.1:21048,127.0.0.1:21049,127.0.0.1:21050,127.0.0.1:21051,127.0.0.1:21052,127.0.0.1:21053

plugin_load=mysqlx=mysqlx.so
mysqlx-port=30923
mysqlx-socket=/var/folders/2p/6cyrmdx10b98cwgrpqz2y26r0000gn/T//mysqlx-30923.sock
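In the generated file above, the group-replication block and the mysqlx block each write their own plugin-load line, and mysqld honors only the last one it reads. A fix is to merge all plugin-load values into one semicolon-separated directive before writing the file. A Python sketch of that merge (illustrative, not dbdeployer's Go code):

```python
def merge_plugin_load(options):
    """Collapse multiple plugin-load lines into a single directive.

    mysqld accepts a semicolon-separated list of plugins in one
    plugin-load option, so 'plugin-load=group_replication.so' and
    'plugin_load=mysqlx=mysqlx.so' can be combined instead of one
    overwriting the other.
    """
    plugins, rest = [], []
    for line in options:
        key, _, value = line.partition("=")
        # MySQL treats '-' and '_' in option names as equivalent.
        if key.replace("_", "-") == "plugin-load":
            plugins.append(value)
        else:
            rest.append(line)
    if plugins:
        rest.append("plugin-load=" + ";".join(plugins))
    return rest
```

Applied to the two conflicting lines above, this yields `plugin-load=group_replication.so;mysqlx=mysqlx.so`, which loads both plugins.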

Actual behavior
[Note] Plugin mysqlx reported: 'Scheduler "network" started.'
2019-02-02T11:32:24.211820Z 0 [ERROR] unknown variable 'group_replication=FORCE_PLUS_PERMANENT'
2019-02-02T11:32:24.211836Z 0 [ERROR] Aborting

Environment:

  • OS: macOS 10.14
  • dbdeployer version 1.17.0
  • mysql 5.7.22

Add support for Galera/PXC

This addition should work, in principle, in the same way that group replication does.

The implementation will require:

  • identifying which tarballs support this feature out of the box (i.e. no separate downloads);
  • pre-defining the my.cnf options needed for the installation;
  • deploying a multiple sandbox (similar to what we do for multi-source replication);
  • running a configuration script.

defaults.go already has potential support for Galera and PXC port handling (commented out).

Add build files to .gitignore

Is your feature request related to a problem? Please describe.

May I suggest adding build artifacts to .gitignore? This will prevent them from being accidentally committed by a contributor.

Describe the solution you'd like

diff --git a/.gitignore b/.gitignore
index 786de81..31f5865 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,12 @@
 *~
 .idea/
-
+dbdeployer-*-docs.linux
+dbdeployer-*-docs.linux.tar.gz
+dbdeployer-*-docs.osx
+dbdeployer-*-docs.osx.tar.gz
+dbdeployer-*.linux
+dbdeployer-*.linux.tar.gz
+dbdeployer-*.osx
+dbdeployer-*.tar.gz
+test/sort_versions.Darwin
+test/sort_versions.linux

Describe alternatives you've considered

N/A

Single deployment doesn't show the location of the sandbox.

When a new sandbox is deployed, the installer should show where it is.

$ dbdeployer deploy single 8.0.4
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started

It should say "Sandbox directory installed in $HOME/sandboxes/msb_8_0_4"

Supposedly simple command fails

First time trying the new dbdeployer tool. I used MySQL sandbox before with success.

I downloaded a 5.6 server and unpacked it, which gives me a folder mysql-5.6.41-macos10.13-x86_64. Now I run deploy with this result:

mike$ ./dbdeployer deploy single --port 5600 -u root -p localRoot#123 sandbox-packages/mysql-5.6.41-macos10.13-x86_64
No version detected for directory /Users/mike/sandbox-packages/mysql-5.6.41-macos10.13-x86_64

No idea what the tool wants to tell me with that message, but the setup failed. I ran this in my home directory, which contains the two folders sandbox-packages and sandboxes. Obviously, this is on macOS 10.13.

--gtid should include relay-log-recovery

The option --gtid is meant to put the server in a crash-safe replication state. For that reason, it includes:

master-info-repository=table
relay-log-info-repository=table
gtid_mode=ON
log-slave-updates
enforce-gtid-consistency

As shown in MySQL bug#92093, the crash-safe state should also include the option relay-log-recovery=ON.

Add feature to extract and merge a mysql-shell tarball into a regular one

MySQL distributes MySQL Shell as a separate tarball. Yet, the shell is needed to run operations on MySQL document stores and InnoDB clusters. As of today, the MySQL team does not want to merge the two products.

It is possible to unpack the MySQL shell and add the pieces to an unpacked binary tarball, but it's a time-consuming and error-prone operation.

There should be a new option under unpack:

dbdeployer unpack --shell mysql-shell-8.0.12-macos10.13-x86-64bit.tar.gz 

This command will unpack the shell tarball in a temporary directory, then merge the files under the directory $SANDBOX_BINARY/8.0.12 (which contains the database binaries and must already exist.)

If we want to put the shell under a different database tree, we can use the additional option:

dbdeployer unpack --shell \
    --target-server=labs8.0.21 \
    mysql-shell-8.0.12-macos10.13-x86-64bit.tar.gz 

Also in this case, the directory $SANDBOX_BINARY/labs8.0.21 must exist and contain valid server binary files.
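The merge step described above can be sketched as a non-destructive copy: walk the unpacked shell tree and copy each file into the existing server tree, refusing to run if the target is missing and never overwriting server files. This is a Python illustration under those assumptions (dbdeployer is Go; the function name is hypothetical):

```python
import os
import shutil

def merge_shell_tarball(extracted_shell_dir, target_server_dir):
    """Copy the shell's files into an existing server directory tree.

    target_server_dir (e.g. $SANDBOX_BINARY/8.0.12) must already exist and
    contain server binaries; files already present there are left untouched.
    """
    if not os.path.isdir(target_server_dir):
        raise FileNotFoundError(
            f"target directory {target_server_dir} does not exist"
        )
    for root, _dirs, files in os.walk(extracted_shell_dir):
        rel = os.path.relpath(root, extracted_shell_dir)
        dest_dir = os.path.join(target_server_dir, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in files:
            dest = os.path.join(dest_dir, name)
            if not os.path.exists(dest):  # never overwrite server files
                shutil.copy2(os.path.join(root, name), dest)
```

With `--target-server`, only the value of `target_server_dir` changes; the merge logic stays the same.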

unpack command fails when tarball name doesn't include a version

The current unpack command fails with a tarball whose name does not contain a version.

To Reproduce

wget http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
dbdeployer unpack --prefix=tidb --unpack-version=3.0.0 ./tidb-latest-linux-amd64.tar.gz

Expected behavior
the command should extract the tarball contents into a directory $SANDBOX_BINARY/tidb3.0.0

Instead, we get a panic error.
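The expected behavior amounts to falling back on the --unpack-version value when no version can be extracted from the file name, and failing with a clear error rather than a panic when neither source provides one. A Python sketch of that logic (illustrative only; dbdeployer is Go):

```python
import re

def version_for_tarball(tarball_name, unpack_version=None):
    """Return the version to use for an unpacked tarball.

    Falls back to the --unpack-version value when the file name (such as
    tidb-latest-linux-amd64.tar.gz) carries no version, and raises a clear
    error instead of panicking when neither source provides one.
    """
    match = re.search(r"(\d+\.\d+\.\d+)", tarball_name)
    if match:
        return match.group(1)
    if unpack_version:
        return unpack_version
    raise ValueError(
        f"no version found in '{tarball_name}': "
        "please provide one with --unpack-version"
    )
```
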

cc @morgo

Change evaluation of version to include a flavor

Problem.
The database server capabilities (such as GTID, semi-sync, MySQLX) are now evaluated only by comparing the server version with the minimal version where the feature was enabled.

This method works well as long as we limit ourselves to MySQL server, but it will fail once we start adding other flavors, such as NDB, Percona Server + PXC, or MariaDB with or without Galera.
The method will also fail when we try to add non-MySQL forks such as TiDB.

What can we do
Instead of simply comparing the version, which quickly becomes untenable, we can use a method made of the flavor (e.g. "mysql", "mariadb", "ndb", "tidb", and so on) and a version.
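The flavor-plus-version idea can be sketched as a lookup table keyed by (flavor, feature). The version thresholds below are illustrative placeholders, not dbdeployer's real capability tables:

```python
MIN_VERSIONS = {
    # (flavor, feature) -> first version where the feature is available.
    # Illustrative values only; the real tables live in dbdeployer's code.
    ("mysql", "gtid"): (5, 6, 9),
    ("mysql", "mysqlx"): (5, 7, 12),
    ("mariadb", "gtid"): (10, 0, 2),
}

def has_capability(flavor, version, feature):
    """Check a feature against the minimum version for this flavor.

    Flavors without an entry (e.g. tidb) simply lack the feature,
    instead of inheriting MySQL's version thresholds.
    """
    minimum = MIN_VERSIONS.get((flavor, feature))
    return minimum is not None and tuple(version) >= minimum
```

The key point is that `("tidb", "gtid")` is absent, so TiDB never appears to support GTID just because its version number is high.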

cc @morgo

Creating a large multiple sandbox fails after node 24 with the following error

dbdeployer deploy multiple 5.7.21 -n 121 --sandbox-binary=/usr/local/mysql --sandbox-home=/data/mysql --base-port=10000
Installing and starting node 1
. sandbox server started
Installing and starting node 2
. sandbox server started
Installing and starting node 3
. sandbox server started
Installing and starting node 4
. sandbox server started
Installing and starting node 5
. sandbox server started
Installing and starting node 6
. sandbox server started
Installing and starting node 7
. sandbox server started
Installing and starting node 8
. sandbox server started
Installing and starting node 9
. sandbox server started
Installing and starting node 10
. sandbox server started
Installing and starting node 11
. sandbox server started
Installing and starting node 12
. sandbox server started
Installing and starting node 13
. sandbox server started
Installing and starting node 14
. sandbox server started
Installing and starting node 15
. sandbox server started
Installing and starting node 16
. sandbox server started
Installing and starting node 17
. sandbox server started
Installing and starting node 18
. sandbox server started
Installing and starting node 19
. sandbox server started
Installing and starting node 20
. sandbox server started
Installing and starting node 21
. sandbox server started
Installing and starting node 22
. sandbox server started
Installing and starting node 23
. sandbox server started
Installing and starting node 24
. sandbox server started
Installing and starting node 25
err: exit status 1
cmd: &exec.Cmd{Path:"/data/mysql/multi_msb_5_7_21/node25/start", Args:[]string{"/data/mysql/multi_msb_5_7_21/node25/start", ""}, Env:[]string(nil), Dir:"", Stdin:io.Reader(nil), Stdout:(*bytes.Buffer)(0x19de2600), Stderr:(*exec.prefixSuffixSaver)(0x19f9c480), ExtraFiles:[]*os.File(nil), SysProcAttr:(*syscall.SysProcAttr)(nil), Process:(*os.Process)(0x1a27c2d0), ProcessState:(*os.ProcessState)(0x19e66000), ctx:context.Context(nil), lookPathErr:error(nil), finished:true, childFiles:[]*os.File{(*os.File)(0x19e704f0), (*os.File)(0x19e70510), (*os.File)(0x19e70530)}, closeAfterStart:[]io.Closer{(*os.File)(0x19e704f0), (*os.File)(0x19e70510), (*os.File)(0x19e70530)}, closeAfterWait:[]io.Closer{(*os.File)(0x19e70508), (*os.File)(0x19e70528)}, goroutine:[]func() error{(func() error)(0x813d2b0), (func() error)(0x813d2b0)}, errch:(chan error)(0x1a0c6440), waitDone:(chan struct {})(nil)}
stdout: ................................................................................................................................................................................... sandbox server not started yet

err: exit status 1
cmd: &exec.Cmd{Path:"/data/mysql/multi_msb_5_7_21/node25/load_grants", Args:[]string{"/data/mysql/multi_msb_5_7_21/node25/load_grants", ""}, Env:[]string(nil), Dir:"", Stdin:io.Reader(nil), Stdout:(*bytes.Buffer)(0x19de2180), Stderr:(*exec.prefixSuffixSaver)(0x19f9c060), ExtraFiles:[]*os.File(nil), SysProcAttr:(*syscall.SysProcAttr)(nil), Process:(*os.Process)(0x1a27c300), ProcessState:(*os.ProcessState)(0x19e661f0), ctx:context.Context(nil), lookPathErr:error(nil), finished:true, childFiles:[]*os.File{(*os.File)(0x19e700f0), (*os.File)(0x19e70110), (*os.File)(0x19e70130)}, closeAfterStart:[]io.Closer{(*os.File)(0x19e700f0), (*os.File)(0x19e70110), (*os.File)(0x19e70130)}, closeAfterWait:[]io.Closer{(*os.File)(0x19e70108), (*os.File)(0x19e70128)}, goroutine:[]func() error{(func() error)(0x813d2b0), (func() error)(0x813d2b0)}, errch:(chan error)(0x1a0c62c0), waitDone:(chan struct {})(nil)}
stdout:
Installing and starting node 26

Add TiDB single sandbox support

Is your feature request related to a problem? Please describe.

I would like to propose single sandbox support for TiDB, to allow developers to easily test compatibility without having to install a full distributed system (tidb+tikv+pd).

Describe the solution you'd like

TiDB can run with something called mocktikv (goleveldb) instead of tikv, which means that only one server component needs to be set up. In my own testing it is 5x slower than tikv, but the advantage is that you only need a tidb-server setup, and no pd or tikv.

The memory footprint is smaller than MySQL, so it's possible some might even prefer it for development.

Describe alternatives you've considered

I've looked at alternative installation methods, such as Homebrew, but they do not solve the use case of having multiple versions installed concurrently. The TiDB release schedule is every 6 months, so dbdeployer is really useful in providing a system for organizing binaries for TiDB developers.

Additional context

I have looked at the initial work, and much of it can be implemented with sandbox templates and some core changes that @datacharmer already has planned (discussed at FOSDEM). Let me try to comment on each:

Core

  • Requires support for detecting TiDB flavor #53 #52
  • Requires support for unnumbered tarballs #51
  • Requires support for --client-from (or similar) #49
  • Requires support for 'capabilities', of which TiDB will have only 'single', with no GTIDs, binlog (initially), relay log, show log (initially, because there is no general log), add option, semisync, or expose dd tables. #50
  • Tarball inspection for files lib/libmysqlclient.so and scripts/mysql_install_db needs to be moved into either "capabilities" or "flavors".

Templates

I will send a pull request after some discussion, but my initial testing shows that the major template changes are:

TBD:
I think that these two templates can be kept vanilla, assuming support for --client-from is added:

error creating sandbox: 'error updating catalog: error decoding catalog: unexpected end of JSON input'

Describe the bug
Not able to create a sandbox.

To Reproduce
Steps to reproduce the behavior:

1. dbdeployer remote download 8.0.14
2. dbdeployer unpack 8.0.14.tar.xz
3. dbdeployer deploy single 8.0.14

The last command fails with the error below:

Database installed in $HOME/sandboxes/msb_8_0_14
run 'dbdeployer usage single' for basic instructions'
error creating sandbox: 'error updating catalog: error decoding catalog: unexpected end of JSON input'

Expected behavior
A sandbox to be created

Environment:

Hardware: (if applicable)

cat /etc/fstab && free -h && lscpu && sudo hdparm -I /dev/sda |grep -i model && sudo hdparm -I /dev/sdb |grep -i model

Static information about the filesystems.

See fstab(5) for details.

/dev/sda2

UUID=f5e21fab-e921-4e37-ac58-bf2ee4eefb2c / ext4 rw,noatime,barrier=0,data=ordered 0 1

/dev/sda1

UUID=f30d9013-a27e-4195-a05c-4fc61f56764c /boot ext4 rw,relatime,stripe=4,data=ordered 0 2

/dev/sda3

/dev/sda3 none swap defaults,pri=-2 0 0

/dev/sdb5 /data ext4 rw,noatime,barrier=0,data=ordered 0 2
total used free shared buff/cache available
Mem: 11Gi 3.1Gi 5.6Gi 345Mi 2.9Gi 8.1Gi
Swap: 511Mi 0B 511Mi
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 36 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 42
Model name: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
Stepping: 7
CPU MHz: 797.367
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 4786.60
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 6144K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts flush_l1d
Model Number: CT120BX300SSD1
Model Number: Hitachi HTS541616J9SA00

[martin@anarchy]: ~>$ locate /home/ json |grep deployer
/home/martin/.dbdeployer
/home/martin/.dbdeployer/sandboxes.json
[martin@anarchy]: ~>$ cat /home/martin/.dbdeployer/sandboxes.json
[martin@anarchy]: ~>$ stat /home/martin/.dbdeployer/sandboxes.json
  File: /home/martin/.dbdeployer/sandboxes.json
  Size: 0         	Blocks: 0          IO Block: 4096   regular empty file
Device: 802h/2050d	Inode: 3149784     Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 1000/  martin)   Gid: (  985/   users)
Access: 2018-03-28 14:51:50.754754304 +0300
Modify: 2018-04-20 11:34:33.440563175 +0300
Change: 2018-04-20 11:34:33.440563175 +0300
 Birth: -
[martin@anarchy]: ~>$ ls -lht /home/martin/.dbdeployer/sandboxes.json
-rw-r--r-- 1 martin users 0 Apr 20  2018 /home/martin/.dbdeployer/sandboxes.json
dbdeployer --version
dbdeployer version 1.17.0
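The diagnostics above show that sandboxes.json is a zero-byte file, which is not valid JSON, hence "unexpected end of JSON input". A tolerant catalog reader would treat a missing or empty file as an empty catalog. A Python sketch of that behavior (illustrative; dbdeployer's catalog code is Go):

```python
import json

def read_catalog(path):
    """Load the sandbox catalog, treating a missing or empty file as empty.

    A zero-byte sandboxes.json is not valid JSON, so a plain json.load
    fails with 'unexpected end of JSON input'; an empty catalog is the
    sensible interpretation of an empty file.
    """
    try:
        with open(path) as f:
            text = f.read().strip()
    except FileNotFoundError:
        return {}
    return json.loads(text) if text else {}
```
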


dbdeployer deploy single hangs

Describe the bug

Running dbdeployer deploy single 3.0.0 hangs when using TiDB.

To Reproduce
Steps to reproduce the behavior (on Linux):

wget http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
dbdeployer unpack tidb-latest-linux-amd64.tar.gz --unpack-version=3.0.0
git clone git@github.com:morgo/dbdeployer-tidb-template.git
dbdeployer defaults template import single dbdeployer-tidb-template/new-templates/
dbdeployer deploy single 3.0.0 

Notes:

  • It doesn't hang if I add --skip-start, and manually starting afterwards works fine.
  • It is not related to load_grants_template as I suspected. I set that to exit 0 at the start of the script to no effect.
  • Adding --client-from has no effect.

Expected behavior

The deployment should complete without hanging.

Environment:

  • Linux ryzen 4.18.0-15-generic #16-Ubuntu SMP Thu Feb 7 10:56:39 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • dbdeployer version: https://github.com/datacharmer/dbdeployer/tree/capabilities (up to date)

Also reproduced on Debian Linux container (my laptop)

Hardware: (if applicable)

morgo@ryzen:~/go/src/github.com/morgo/dbdeployer-tidb-template$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       166G   77G   81G  49% /
morgo@ryzen:~/go/src/github.com/morgo/dbdeployer-tidb-template$ free -g
              total        used        free      shared  buff/cache   available
Mem:             62           7          47           0           7          54
Swap:             0           0           0


Option to download MySQL

I suspect there are good reasons this isn't done now (technical and/or non-technical), but just in case this request is for a download feature.

For example I imagine executing dbdeployer download 8.0.11 (or somesuch) that'd use common defaults (e.g., 64-bit, generic glibc 2.12, ...) to download 8.0.11. It could become more complicated in the future.

On Linux it'd probably download this:
https://dev.mysql.com/get/Downloads/MySQL-8.0/mysql-8.0.11-linux-glibc2.12-x86_64.tar.gz

On macOS:
https://dev.mysql.com/get/Downloads/MySQL-8.0/mysql-8.0.11-macos10.13-x86_64.dmg

But that brings us to our first (and main) issue; how does it know the path? For example, might it ever be 8.0.11-1 instead of 8.0.11? Or 10.15 instead of 10.13? What about RC releases? Good questions and there are many more :)

Thankfully this functionality is a nice-to-have (not critical), so unexpected changes can cause failure without too much pain. But download paths can (for the most part) be predicted. I'm not yet sure whether other commands should use it (e.g., whether dbdeployer deploy single 8.0.11 might prompt "I could not find v8.0.11; shall I attempt to download it?"). Maybe.
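Using the two example URLs above, the path construction can be sketched as below. The short-version directory (MySQL-8.0) and the platform suffixes are assumptions extrapolated from those two examples; as the questions above note, the real feature would have to verify them (and handle RC releases, -1 suffixes, and so on):

```python
def download_url(version, os_name):
    """Guess the download URL for a generic 64-bit build of MySQL.

    Mirrors the two example URLs in the text; the directory and file
    naming scheme is an assumption, not a guaranteed API.
    """
    major, minor, _patch = version.split(".")
    base = f"https://dev.mysql.com/get/Downloads/MySQL-{major}.{minor}"
    if os_name == "linux":
        return f"{base}/mysql-{version}-linux-glibc2.12-x86_64.tar.gz"
    if os_name == "macos":
        return f"{base}/mysql-{version}-macos10.13-x86_64.dmg"
    raise ValueError(f"unsupported OS: {os_name}")
```
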

Before I (or anyone) does further research, I'm curious if you see this as a possibility. If so then I'll dig a little deeper.

Switch build system to go modules + make file

Is your feature request related to a problem? Please describe.

I took a look at ./scripts/build.sh and it appears to be doing dependency management. There is nothing wrong with it, but I wonder if it would be more accessible to contributors if it were replaced by a makefile and Go 1.11 modules.

Describe the solution you'd like

./scripts/build.sh would become make all or make linux.

Describe alternatives you've considered

The alternative would be to implement part of this (go modules only), or leave it as is. This is a fairly function-neutral enhancement.

Additional context

I am happy to contribute. Just wanted to create an issue to discuss first.

Add an option to enable replication crash safety without GTID

Currently, to set up replication crash safety, we use --gtid, which adds all the needed options (but see issue #35 for more).

We want an option that enables crash safety without using GTID, and without spelling out all the options explicitly.

The new option, called --repl-crash-safe will include:

master-info-repository=table
relay-log-info-repository=table
relay-log-recovery=on

Option to override the default binaries path for a specific sandbox

I'm missing an option that allows overriding the location of the MySQL binaries when deploying a new sandbox. MySQL::Sandbox has the option -b that does just that.

My requirements include the possibility of having MySQL binaries in directories other than x.x.xx (it seems dbdeployer requires this directory naming pattern, right?). Let's say I want to compile some MySQL version using different options. In the end I'd like to save each result in directories named 5.7.22_A, 5.7.22_B, 5.7.22_C, etc., and be able to create sandboxes from each one.

AFAIS, with dbdeployer, I can achieve the same if I save each compiled server in a different path using the same, normalized, version name, for example: A/5.7.22, B/5.7.22..., and then provide the binaries path using the option --sandbox-binary=<path_to_binaries_parent> or prefix the command with SANDBOX_BINARY=<path_to_binaries_parent> dbdeployer .... Although this seems to work, it would be more convenient if I could keep all binary folders in the same place.

Or am I missing something?

error log was not created in data directory

After a recent improvement in 1.8.1, the error log was no longer created as before.
I need to change the path to include a "/" (it was missing in the template) and add a test that makes sure the file is created in the data directory.

unpack command should create a FLAVOR file for the extracted tarball

Problem
We want to be able to detect which flavor a given tarball is (MySQL, Percona Server, NDB, etc)

Describe the solution you'd like
To facilitate the task, we could tell the unpack command to create a FLAVOR file when extracting from the tarball, indicating what flavor we are dealing with.
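The proposal boils down to writing a one-line marker file at unpack time and reading it back later. A Python sketch under that assumption (dbdeployer is Go; the fallback default here is illustrative):

```python
import os

def write_flavor_file(binary_dir, flavor):
    """Record the detected flavor next to the unpacked binaries.

    Later commands can read $SANDBOX_BINARY/<version>/FLAVOR instead of
    re-inspecting the tarball contents.
    """
    path = os.path.join(binary_dir, "FLAVOR")
    with open(path, "w") as f:
        f.write(flavor + "\n")
    return path

def read_flavor_file(binary_dir, default="mysql"):
    """Read the recorded flavor, falling back for pre-existing unpacks."""
    try:
        with open(os.path.join(binary_dir, "FLAVOR")) as f:
            return f.read().strip()
    except FileNotFoundError:
        return default
```

Defaulting to "mysql" keeps directories unpacked by older dbdeployer versions working unchanged.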

cc @morgo

dbdeployer should check whether the binaries are for the current OS

Describe the bug
If we use a tarball that is for an operating system different from the current one, dbdeployer shows an unclear error.

To Reproduce
Steps to reproduce the behavior:

  1. Unpack MySQL 5.7.22 for Linux on a Mac
  2. Try deploying a sandbox from that package
  3. See that the server start fails.

Expected behavior
There should be a clear error when we try using a tarball that is not for the current OS.

Environment:

  • OS: any macOS, any Linux
  • dbdeployer version: 1.7.0

Additional context
The easiest solution is for dbdeployer to check whether libmysqlclient (or libperconaclient) has the appropriate extension for our OS: .dylib or .so.
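The heuristic above checks the extension of the bundled client library against the running OS. A Python sketch of that check (illustrative; dbdeployer performs this in Go, and the "return True when nothing can be inspected" choice is an assumption to avoid false alarms):

```python
import os
import platform

# Client-library extension expected per operating system.
EXPECTED_EXTENSION = {"Linux": ".so", "Darwin": ".dylib"}

def check_binaries_match_os(binary_dir, system=None):
    """Detect a tarball unpacked for the wrong operating system.

    Looks under lib/ for a client library whose extension matches the
    running OS; returns False when the library exists but carries the
    wrong extension (e.g. a Linux .so on macOS).
    """
    system = system or platform.system()
    wanted = EXPECTED_EXTENSION.get(system)
    if wanted is None:
        return True  # unknown OS: skip the check
    lib_dir = os.path.join(binary_dir, "lib")
    if not os.path.isdir(lib_dir):
        return True  # nothing to inspect
    names = os.listdir(lib_dir)
    if not any("mysqlclient" in n for n in names):
        return True  # no client library shipped; cannot decide
    return any("mysqlclient" in n and wanted in n for n in names)
```
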

build.sh does not handle dependencies

The installation instructions at https://github.com/datacharmer/dbdeployer/ basically get you to pull down a pre-built set of binaries. That's fine, but I like to build my Go binaries directly from source. Other projects I use, such as orchestrator or vitess, provide a command-line tool that pulls in any required dependencies to enable the build, yet this does not happen with dbdeployer.

This leads to something like:

[myuser@myhost ~/src/dbdeployer/src/github.com/datacharmer/dbdeployer]$ ./build.sh OSX 1.1.1
+ env GOOS=darwin GOARCH=386 go build -o dbdeployer-1.1.1.osx .
main.go:19:2: cannot find package "github.com/datacharmer/dbdeployer/abbreviations" in any of:
	/usr/local/Cellar/go/1.10/libexec/src/github.com/datacharmer/dbdeployer/abbreviations (from $GOROOT)
	/Users/smudd/src/orchestrator/src/github.com/datacharmer/dbdeployer/abbreviations (from $GOPATH)
main.go:20:2: cannot find package "github.com/datacharmer/dbdeployer/cmd" in any of:
	/usr/local/Cellar/go/1.10/libexec/src/github.com/datacharmer/dbdeployer/cmd (from $GOROOT)
	/Users/smudd/src/orchestrator/src/github.com/datacharmer/dbdeployer/cmd (from $GOPATH)
tar: dbdeployer-1.1.1.osx: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.

One of the issues with go is tracking/handling of dependencies. It looks like this may change soon but given the dependencies you are using are quite small putting them under vendor/ and in your tree means there is no external dependency to your packages.

Failing that looking for the missing dependencies and pulling them down can work but then it's not clear if you used the same version of these dependencies that I may be using. Orchestrator has broken because of this and thus keeps its dependencies under vendor/ and these can be tracked in various ways. Vitess has an initial bootstrap.sh script (as this is more complex and includes other things like zookeeper etc) and pulls down the specific versions that are needed.

I see that:

$ go get github.com/datacharmer/dbdeployer/abbreviations
$ go get github.com/spf13/cobra

is all that is required to allow dbdeployer to build, but I do worry that if either of these libraries changes unexpectedly you may see breakage.

So commenting on the best way to build from source would be good as it adds a bit more information in addition to the offered binaries.

Define criteria to detect the flavor of a tarball contents

We need some criteria that would help us identify which flavor of database server we are dealing with.

Possible values are:

  • mysql
  • percona server
  • percona server with PXC
  • ndb cluster
  • mariadb
  • mariadb with Galera
  • tidb

There are three methods to detect the flavor:

  1. Using an explicit option during the deployment (--flavor=X)
  2. Using a FLAVOR file inside the extracted tarball (Issue #52)
  3. Evaluating the files in the tarball to decide what we are dealing with.

The difficult part is item n. 3, because sometimes the difference between flavors is not easily detectable.

Related to Issues #50 and #52.

cc @morgo
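A sketch of method n. 3, under the assumption that each flavor ships at least one distinctive file. The marker names below are plausible (e.g. bin/ndbd for NDB, bin/garbd for Galera-based builds, the renamed client library for Percona Server, the Aria tools for MariaDB) but not verified against every release:

```go
package main

import "fmt"

// detectFlavor inspects the set of file names extracted from a tarball
// and guesses the flavor from marker files. The order matters: more
// specific markers are checked before the generic bin/mysqld.
func detectFlavor(files map[string]bool) string {
	switch {
	case files["bin/tidb-server"]:
		return "tidb"
	case files["bin/ndbd"]:
		return "ndb"
	case files["bin/garbd"]:
		return "pxc"
	case files["lib/libperconaserverclient.so"]:
		return "percona"
	case files["bin/aria_chk"]:
		return "mariadb"
	case files["bin/mysqld"]:
		return "mysql"
	}
	return "unknown"
}

func main() {
	fmt.Println(detectFlavor(map[string]bool{"bin/mysqld": true}))
}
```

Methods 1 and 2 would simply take precedence over this evaluation when an explicit --flavor option or a FLAVOR file is present.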

deploy --force does not check for lock status

When overwriting an existing sandbox using --force, dbdeployer should check whether the sandbox is locked, and refuse to overwrite, even if force was specified.

To Reproduce

  1. dbdeployer deploy single 5.7.23
  2. dbdeployer admin lock msb_5_7_23
  3. dbdeployer deploy single 5.7.23 --force
    Operation #3 should fail. Currently, it doesn't.

Include all from --my-cnf-file

When deploying sandboxes with the option --my-cnf-file please include all contents (properties, comments and blank lines) in the corresponding section.

This is important because some options can't be set in the setup process, for example default-time-zone requires the time zones to be loaded beforehand. Including the original file allows me to come back later and tweak the generated my.sandbox.cnf and adjust it as needed.

Thanks,

rename during unpack command fails

Using dbdeployer in a local install fails to rename the extracted tarball.
Because of this, the deploy also fails, as it does not accept the full directory path but expects the version string as the directory name.

./dbdeployer-1.5.2.linux --version
dbdeployer version 1.5.2

./dbdeployer-1.5.2.linux unpack --sandbox-binary sandbox-bin --verbosity 3 Percona-Server-5.7.21-21-Linux.x86_64.ssl102.tar.gz 
Unpacking tarball Percona-Server-5.7.21-21-Linux.x86_64.ssl102.tar.gz to sandbox-bin/5.7.21
.... lists all extracted files ...
Percona-Server-5.7.21-21-Linux.x86_64.ssl102/mysql-test/README
Files 18502
rename sandbox-bin/Percona-Server-5.7.21-21-Linux.x86_64.ssl102 sandbox-bin/5.7.21: no such file or directory

Same behaviour with the mysql-8 tar ball:

./dbdeployer-1.5.2.linux unpack --sandbox-binary sandbox-bin --verbosity 3 mysql-8.0.11-linux-glibc2.12-i686.tar.gz 
Unpacking tarball mysql-8.0.11-linux-glibc2.12-i686.tar.gz to sandbox-bin/8.0.11
.... lists all extracted files ...
Files 289
rename sandbox-bin/mysql-8.0.11-linux-glibc2.12-i686 sandbox-bin/8.0.11: no such file or directory

Add support for MySQL Cluster

This addition should work, in principle, in the same way that group replication does.

The implementation will require:

  • identifying which tarballs support this feature out of the box (i.e. no separate downloads);
  • pre-define the my.cnf options needed for the installation
  • deploy a multiple sandbox (similar to what we do for multi-source replication);
  • run a configuration script.

defaults.go has already potential support for MySQL Cluster ports handling (commented out).

Port management glitch

It seems that when installing two consecutive versions of the same MySQL series, from higher to lower, the ports are not chosen correctly.

Steps to reproduce the behavior:

  1. Deploy MySQL 5.7.22 with default port. It runs at 5722.
  2. Deploy MySQL 5.7.21 with default port. It runs at 5723, but 5721 is available.

I also tried deploying the second server with the option --port 5721 but with no luck, it was overridden with 5723.

To work around this I had to move the "conflicting" sandbox msb_5_7_22 out of the sandboxes directory, then deploy 5.7.21 with defaults again (now runs at 5721) and finally move the other sandbox back.

I'm using dbdeployer version 1.8.0, on Linux.
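The expected behavior can be sketched as: start from the requested (or default) port and advance only while that exact port is already claimed by an existing sandbox. This is an illustration, not dbdeployer's actual allocator:

```go
package main

import "fmt"

// findFreePort returns the first port at or above the requested one
// that is not already claimed by a deployed sandbox.
func findFreePort(requested int, used map[int]bool) int {
	port := requested
	for used[port] {
		port++
	}
	return port
}

func main() {
	used := map[int]bool{5722: true} // msb_5_7_22 already deployed
	fmt.Println(findFreePort(5721, used))
}
```

With this logic, deploying 5.7.21 after 5.7.22 would get 5721, since only 5722 is taken.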

Add support for database upgrade

dbdeployer should offer an easy way of upgrading a database to a new version.

The easiest implementation goes as follows:

  1. Deploy the old version (e.g. 5.7.23)
  2. Deploy the new version (e.g. 8.0.12)
  3. Ask dbdeployer to upgrade the database in 5.7.23 to 8.0.12.
  4. dbdeployer will do the following:
    a. Check that the target can run mysql_upgrade
    b. Check that the source database has a lower version than the target one.
    c. Check that there is only one version interval between the two databases (5.6 to 5.7 is OK, but 5.6 to 8.0 is not.)
    d. stop both databases
    e. rename the new version data directory;
    f. move the old version data directory to the new sandbox
    g. start the new version database
    h. run mysql_upgrade
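Checks (b) and (c) above lend themselves to a small helper. The sketch below compares series only, assuming the usual MySQL upgrade order; patch-level ordering within a series is left out:

```go
package main

import "fmt"

// seriesOrder lists the MySQL series in upgrade order.
var seriesOrder = []string{"5.0", "5.1", "5.5", "5.6", "5.7", "8.0"}

func seriesIndex(series string) int {
	for i, s := range seriesOrder {
		if s == series {
			return i
		}
	}
	return -1
}

// canUpgradeSeries implements check (c): same series, or exactly one
// interval apart. 5.6 -> 5.7 is fine; 5.6 -> 8.0 skips a series.
func canUpgradeSeries(from, to string) bool {
	i, j := seriesIndex(from), seriesIndex(to)
	if i < 0 || j < 0 {
		return false
	}
	return j-i == 0 || j-i == 1
}

func main() {
	fmt.Println(canUpgradeSeries("5.7", "8.0"))
}
```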

undocumented go deps when building with MKDOCS=1 and build should fail properly

Describe the bug
MKDOCS=1 build.sh linux 1.8.0 fails because of unmet dependencies

To Reproduce
Steps to reproduce the behavior:

  1. Not having the following dependencies in your GOPATH
    github.com/cpuguy83/go-md2man/md2man
    gopkg.in/yaml.v2
  2. Run the command 'MKDOCS=1 build.sh linux 1.8.0'
  3. See error:
+ env GOOS=linux GOARCH=386 go build --tags docs -o dbdeployer-1.8.0-docs.linux .
../../spf13/cobra/doc/man_docs.go:26:2: cannot find package "github.com/cpuguy83/go-md2man/md2man" in any of:
	/usr/lib/go/src/github.com/cpuguy83/go-md2man/md2man (from $GOROOT)
	/home/john/go/src/github.com/cpuguy83/go-md2man/md2man (from $GOPATH)
tar: dbdeployer-1.8.0-docs.linux: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors

+ env GOOS=linux GOARCH=386 go build --tags docs -o dbdeployer-1.8.0-docs.linux .
../../spf13/cobra/doc/yaml_docs.go:26:2: cannot find package "gopkg.in/yaml.v2" in any of:
	/usr/lib/go/src/gopkg.in/yaml.v2 (from $GOROOT)
	/home/john/go/src/gopkg.in/yaml.v2 (from $GOPATH)

Expected behavior

  1. docs should state
    github.com/cpuguy83/go-md2man/md2man
    and
    gopkg.in/yaml.v2
    Although they seem to be dependencies of the cobra library, not of dbdeployer itself

  2. build should properly fail

Environment:

  • OS: Arch Linux
  • dbdeployer version 1.8.0
  • tarball full name: NA
  • tarball origin: NA

Additional context
My GO SRC only contained:
spf13/cobra
jteeuwen/go-bindata
datacharmer/dbdeployer
before trying to build

Add support for Docker containers

dbdeployer should be able to deploy docker containers with almost the same syntax used to deploy local sandboxes.

INTERFACE

The deployment interface for this extension could be one of the following:

  1. dbdeployer deploy --docker {single|replication} 5.7.22. This has the advantage of reusing all the options already defined for sandbox deployment. A minor inconvenience would be disabling the options that aren't applicable to containers.
  2. dbdeployer docker-deploy {single|replication} 5.7.22. The advantage here is that we define only the options that are relevant for containers, although there will be many duplications.

As for working with the containers, dbdeployer would create the same directory used for local sandboxes, with the same scripts, which allows accessing the container with the same ease. Of course, the contents of the scripts need to change to reach the containers appropriately.

INFRASTRUCTURE

Deploying containers presents challenges different from the ones faced with sandboxes. We need to detect the IP address of the container for further usage (e.g. to set up replication) and forward container ports to normal ones.

A properly defined container needs the following customization volumes:

  • data directory: redirected to the ./data directory of the sandbox;
  • socket directory: redirected to the new ./var directory;
  • user initialization scripts: placed in /docker-entrypoint-initdb.d/;
  • secure password file;
  • A directory to upload/download data.

Practical considerations

This extension requires the adaptation of many templates to be used with containers. Some of them, however, will work well just the way they are now, due to the port forwarding that will enable the client to reach the database in the current host.

An important point is where to get the client from. I'd like to avoid using docker exec whenever possible, and therefore we could use the client from MySQL tarball binaries (same as with local sandboxes) with the server running in a container. If an appropriate MySQL client (= same version or higher than the server) cannot be found in the local host, we need to fall back to docker exec.

Cc: @jaypipes

error reading from file /path/sandboxes/lost+found/sbdescription.json

Creating a new sandbox fails when the sandbox directory is the root of an ext4 filesystem (/home/jgagne/sandboxes in my case).

To Reproduce
Steps to reproduce the behavior:

  1. Mount an ext4 filesystem in ~/sandboxes (expected setup in [1] below)
  2. Run "dbdeployer deploy single ps_5.7.24" (error message in [2] below)

[1]:

$ ls -la ~/sandboxes
total 24
drwxr-xr-x 3 jgagne jgagne  4096 Jan  5 21:31 .
drwxr-xr-x 7 jgagne jgagne  4096 Jan  5 21:27 ..
drwx------ 2 root   root   16384 Jan  5 21:31 lost+found

[2]:

$ dbdeployer --version
dbdeployer version 1.14.0
$ dbdeployer deploy single ps_5.7.24 --gtid --repl-crash-safe -c "bind-address = *"
error reading from file /home/jgagne/sandboxes/lost+found/sbdescription.json: open /home/jgagne/sandboxes/lost+found/sbdescription.json: permission denied

Expected behavior
I would expect dbdeployer to skip directories that it cannot read from the sandbox directory.

Environment:

Hardware

  • GCP vm

Additional context

  • Deleting the lost+found directory solves the problem (as shown in [3] below)
  • Maybe another solution could be to have a .dbdeployer_ignore file in the sandbox directory containing directories to ignore when scanning for sandboxes

[3]:

$ rm -rf sandboxes/lost+found
$ dbdeployer deploy single ps_5.7.24 --gtid --repl-crash-safe -c "bind-address = *"
Database installed in $HOME/sandboxes/msb_ps_5_7_24
run 'dbdeployer usage single' for basic instructions'
. sandbox server started

dbdeployer versions looks for 'mysqld' executable

Is your feature request related to a problem? Please describe.

Currently dbdeployer versions will only list unpacked tarballs that have a file in the path ./bin/mysqld. This is safe for MariaDB/Percona/MySQL, but not TiDB.

Describe the solution you'd like

It would be nice to have a wider list that included:

mysqld-debug
mysqld
tidb-server

I don't think any packages exist that ship a debug build without a regular one, but it might be nice to be more inclusive.

Describe alternatives you've considered

An alternative might be that this can probe flavors to find a list of binary names, but I am not sure if the complexity is warranted given TiDB is the only outlier.

Additional context

If I manually touch a file in ./bin/mysqld it shows in the list, so I can confirm that it is this and not another issue.

Add support for directly using path to binaries

Hi Giuseppe!

First of all, thanks for creating another great tool :)

Is your feature request related to a problem? Please describe.
We use /opt/ (/opt/mysql/, /opt/percona-server/, etc) to store binaries in a shared-user testing server (mainly to avoid wasting disk space by storing repeated binaries in home directories). In this case, the only way to run dbdeployer seems to be by using the --sandbox-binary argument, like:

dbdeployer --sandbox-binary=/opt/mysql/ deploy single 8.0.4

This works ok, but what I'm lacking (that we had with mysql_sandbox) is the ability to "find" binaries while writing the command (by using Tab autocompletion on the path), like:

dbdeployer deploy single /opt/mysql/^tab^tab
... get list of dirs under /opt/mysql/ ...
dbdeployer deploy single /opt/mysql/8.0.4

Describe the solution you'd like
To be able to use commands like:

dbdeployer deploy single /opt/mysql/8.0.4

Describe alternatives you've considered
Maybe using a conf file, or templating? But I haven't checked that further, until discussing with you first. Those seem like hacks that will probably not work correctly for what I explained above.

Thanks!
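The requested behavior could be layered on top of the existing options by splitting a path-like version argument into the (--sandbox-binary, version) pair. A sketch, with the "argument contains a path separator" heuristic being an assumption:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// splitVersionPath maps a full path such as /opt/mysql/8.0.4 onto the
// existing (--sandbox-binary, version) pair, while leaving plain
// version strings untouched.
func splitVersionPath(arg string) (basedir, version string) {
	if !strings.ContainsRune(arg, filepath.Separator) {
		return "", arg // plain version: keep current behavior
	}
	clean := filepath.Clean(arg)
	return filepath.Dir(clean), filepath.Base(clean)
}

func main() {
	basedir, version := splitVersionPath("/opt/mysql/8.0.4")
	fmt.Println(basedir, version)
}
```

This also preserves shell Tab completion, since the argument is an ordinary filesystem path.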

Unexpected "test" replication user on replication master

When setting up a replication sandbox, as follows:

dbdeployer deploy replication 5.7.21 --nodes 7 --gtid --my-cnf-options log_slave_updates --my-cnf-options log_bin --my-cnf-options binlog_format=ROW --my-cnf-options performance_schema=0

I see the following:

$ ./m -e "select User_name,User_password from mysql.slave_master_info;"
+-----------+---------------+
| User_name | User_password |
+-----------+---------------+
| test      |               |
+-----------+---------------+

While on replicas the value is as expected:

$ ./s1 -e "select User_name,User_password from mysql.slave_master_info;"
+-----------+---------------+
| User_name | User_password |
+-----------+---------------+
| rsandbox  | rsandbox      |
+-----------+---------------+

The fact that the master has a username/password combo is confusing to begin with, and causes further misunderstandings. In my particular use case, orchestrator sees the master's credentials and assumes they must be correct.

Suggestion: make the username '' on the master.

Ring replication (master-master, master-master-master)

Is your feature request related to a problem? Please describe.
Deploy MASTER-MASTER or MASTER-MASTER-MASTER ring replication

Describe the solution you'd like
-> deploy MASTER-MASTER replication
dbdeployer deploy ring2 8.0.14

-> deploy MASTER-MASTER-MASTER ring replication
dbdeployer deploy ring3 8.0.14

Additional context
Adding a few parameters would be cool:

  • the first that comes to mind is auto increment step
  • second one would be to auto load semi-sync plugin and set it up

Now, ring replication is not the best option, and how useful it is in production is debatable, but for testing purposes it's often needed.

Sandbox code is not easy to use outside dbdeployer

Description

Sandbox creation functions should return an error instead of running an Exit instruction directly. This behavior may be correct for dbdeployer, but not for other tools that want to use sandboxes as part of their operations. Additionally, calling functions cannot set a custom logger.

To Reproduce

  • Use a sandbox from a program other than dbdeployer.
  • Try getting the result of a failure (the program exits).
  • Try writing a Go test that checks on failures

Expected behavior

  • Creation and removal functions should return an error in addition to what they return now.
  • Explicit Exit should never happen (this may require abandoning some of the shortcut functions in common.fileutils.go).
  • Calling functions should be able to set a custom logger, instead of the one instantiated by current Create* calls.

cc: @percona-csalguero
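The expected shape can be sketched with hypothetical types. SandboxDef and CreateSandbox below are illustrative stand-ins, not the current dbdeployer API:

```go
package main

import (
	"errors"
	"fmt"
	"log"
	"os"
)

// SandboxDef is a stand-in for dbdeployer's sandbox definition; the
// field names here are illustrative only.
type SandboxDef struct {
	Version string
	Basedir string
	Logger  *log.Logger // injected by the caller instead of a package default
}

// CreateSandbox returns an error instead of calling os.Exit, so that
// callers (including Go tests) can observe and handle failures.
func CreateSandbox(def SandboxDef) error {
	if def.Version == "" {
		return errors.New("sandbox creation: no version provided")
	}
	if def.Logger != nil {
		def.Logger.Printf("creating sandbox for version %s", def.Version)
	}
	return nil
}

func main() {
	err := CreateSandbox(SandboxDef{Logger: log.New(os.Stderr, "sandbox: ", 0)})
	if err != nil {
		fmt.Println("caller decides:", err)
	}
}
```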
