
Pillar

Pillar manages migrations for your Cassandra data stores.

Pillar grew from a desire to automatically manage Cassandra schema as code. Managing schema as code enables automated build and deployment, a foundational practice for an organization striving to achieve Continuous Delivery.

Pillar is to Cassandra what Rails ActiveRecord migrations or Play Evolutions are to relational databases, with one key difference: Pillar is completely independent of any application development framework.

Installation

Prerequisites

  1. Java SE 6 or more recent runtime environment
  2. Cassandra 2.x or 3.x

From Source

This method requires Simple Build Tool (sbt). Building an RPM also requires Effing Package Management (fpm).

% sbt assembly   # builds just the jar file in the target/ directory

% sbt rh-package # builds the jar and the RPM in the target/ directory
% sudo rpm -i target/pillar-1.0.0-DEV.noarch.rpm

The RPM installs Pillar to /opt/pillar.

Packages

Pillar is available at Maven Central under the GroupId com.chrisomeara and ArtifactId pillar_2.10 or pillar_2.11. The current version is 2.3.0.

sbt

libraryDependencies += "com.chrisomeara" % "pillar_2.10" % "2.3.0"

Gradle

compile 'com.chrisomeara:pillar_2.10:2.3.0'
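
Maven

For Maven builds, the same Maven Central coordinates translate to:

<dependency>
  <groupId>com.chrisomeara</groupId>
  <artifactId>pillar_2.11</artifactId>
  <version>2.3.0</version>
</dependency>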

Usage

Terminology

Data Store : A logical grouping of environments. You will likely have one data store per application.

Environment : A context or grouping of settings for a single data store. You will likely have at least development and production environments for each data store.

Migration : A single change to a data store. A migration has a description and a time stamp indicating when it was authored. Migrations are applied in ascending time-stamp order and reversed in descending order.

Command Line

Here's the short version:

  1. Write migrations, place them in conf/pillar/migrations/myapp.
  2. Add pillar settings to conf/application.conf.
  3. % pillar -e development initialize myapp
  4. % pillar -e development migrate myapp

Migration Files

Migration files contain metadata about the migration, a CQL statement used to apply the migration and, optionally, a CQL statement used to reverse the migration. Each file describes one migration. You probably want to name your files according to time stamp and description, 1370028263000_creates_views_table.cql, for example. Pillar reads and parses all files in the migrations directory, regardless of file name.

Pillar supports three kinds of migration: reversible, irreversible, and reversible with a no-op down statement. Here is an example of each.

Reversible migrations have up and down properties.

-- description: creates views table
-- authoredAt: 1370028263000
-- up:

CREATE TABLE views (
  id uuid PRIMARY KEY,
  url text,
  person_id int,
  viewed_at timestamp
)

-- down:

DROP TABLE views

Irreversible migrations have an up property but no down property.

-- description: creates events table
-- authoredAt: 1370023262000
-- up:

CREATE TABLE events (
  batch_id text,
  occurred_at uuid,
  event_type text,
  payload blob,
  PRIMARY KEY (batch_id, occurred_at, event_type)
)

Reversible migrations with no-op down statements have an up property and an empty down property.

-- description: adds user_agent to views table
-- authoredAt: 1370028264000
-- up:

ALTER TABLE views
ADD user_agent text

-- down:

Each migration may optionally specify multiple stages. Stages are executed in the order specified.

-- description: creates users and groups tables
-- authoredAt: 1469630066000
-- up:

-- stage: 1
CREATE TABLE groups (
  id uuid,
  name text,
  PRIMARY KEY (id)
)

-- stage: 2
CREATE TABLE users (
  id uuid,
  group_id uuid,
  username text,
  password text,
  PRIMARY KEY (id)
)


-- down:

-- stage: 1
DROP TABLE users

-- stage: 2
DROP TABLE groups

The Pillar command line interface expects to find migrations in conf/pillar/migrations unless overridden by the -d command-line option.

Configuration

Pillar uses the Typesafe Config library for configuration. The Pillar command-line interface expects to find an application.conf file in ./conf or ./src/main/resources. Given a data store called faker, the application.conf might look like the following:

pillar.faker {
    development {
        cassandra-seed-address: "127.0.0.1"
        cassandra-keyspace-name: "pillar_development"
    }
    test {
        cassandra-seed-address: "127.0.0.1"
        cassandra-keyspace-name: "pillar_test"
    }
    acceptance_test {
        cassandra-seed-address: ${?PILLAR_SEED_ADDRESS}
        cassandra-port: ${?PILLAR_PORT}
        cassandra-keyspace-name: "pillar_acceptance_test"
        cassandra-keyspace-name: ${?PILLAR_KEYSPACE_NAME}
        cassandra-ssl: ${?PILLAR_SSL}
        cassandra-username: ${?PILLAR_USERNAME}
        cassandra-password: ${?PILLAR_PASSWORD}
    }
}

Notice the use of environment variables in the acceptance_test environment example. This is a feature of Typesafe Config that can greatly increase the security and portability of your Pillar configuration.

Transport Layer Security (TLS/SSL)

Pillar will optionally enable TLS/SSL for client-to-node communications. As Pillar runs on the Java virtual machine, normal JVM TLS/SSL configuration options apply. If the JVM executing Pillar does not already trust the certificate presented by the Cassandra cluster, you may need to configure the trust store as documented by Oracle and DataStax.

Pillar does not install a custom trust manager but rather relies on the default trust manager implementation. Configuring the default trust store requires setting two system properties, like this:

JAVA_OPTS='-Djavax.net.ssl.trustStore=/opt/pillar/conf/truststore -Djavax.net.ssl.trustStorePassword=cassandra'

The pillar executable passes $JAVA_OPTS through to the JVM.

The pillar Executable

The package installs to /opt/pillar by default. The /opt/pillar/bin/pillar executable usage looks like this:

Usage: pillar [OPTIONS] command data-store

OPTIONS

-d directory
--migrations-directory directory  The directory containing migrations

-e env
--environment env                 The target environment

-t time
--time-stamp time                 The migration time stamp

PARAMETERS

command     migrate or initialize

data-store  The target data store, as defined in application.conf

Examples

Initialize the faker datastore development environment

% pillar -e development initialize faker

Apply all migrations to the faker datastore development environment

% pillar -e development migrate faker

Library

You can also integrate Pillar directly into your application as a library. Reference the acceptance spec suite for details.
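
A minimal sketch of library integration, using the names that appear in the jar-loading issue below (Parser, Registry, Migrator, LoggerReporter); exact factory and method signatures vary between Pillar versions, so treat this as illustrative rather than authoritative:

import java.io.{File, FileInputStream}
import com.chrisomeara.pillar.{LoggerReporter, Migrator, Parser, Registry}
import com.datastax.driver.core.Cluster

// Parse every migration file in the data store's migrations directory.
val parser = Parser()
val migrations = new File("conf/pillar/migrations/faker").listFiles.toList.map { file =>
  val stream = new FileInputStream(file)
  try parser.parse(stream) finally stream.close()
}

val migrator = Migrator(Registry(migrations), new LoggerReporter)

// Connect to the target keyspace and apply all pending migrations. Since
// 2.0.0 the Migrator interface accepts a Session object; the exact migrate
// signature differs across versions (the acceptance specs are authoritative).
val session = Cluster.builder().addContactPoint("127.0.0.1").build().connect("pillar_development")
migrator.migrate(session)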

Forks

Several organizations and people have forked the Pillar code base. The most actively maintained alternative is the Galeria-Kaufhof fork.

Release Notes

1.0.1

  • Add a "destroy" method to drop a keyspace (iamsteveholmes)

1.0.3

  • Clarify documentation (pvenable)
  • Update DataStax Cassandra driver to version 2.0.2 (magro)
  • Update Scala to version 2.10.4 (magro)
  • Add cross-compilation to Scala version 2.11.1 (magro)
  • Shutdown cluster in migrate & initialize (magro)
  • Transition support from StreamSend to Chris O'Meara (comeara)

2.0.0

  • Allow configuration of Cassandra port (fkoehler)
  • Rework Migrator interface to allow passing a Session object when integrating Pillar as a library (magro, comeara)

2.0.1

  • Update the argot dependency to version 1.0.3 (magro)

2.1.0

  • Update DataStax Cassandra driver to version 3.0.0 (MarcoPriebe)
  • Fix documentation issue where authored_at was represented as seconds rather than milliseconds (jhungerford)
  • Introduce PILLAR_SEED_ADDRESS environment variable (comeara)

2.1.1

  • Fix deduplicate error during merge, ref. issue #32 (ilovezfs)

2.2.0

  • Add feature to read registry from files (sadowskik)
  • Add TLS/SSL support (bradhandy, comeara)
  • Add authentication support (bradhandy, comeara)

2.3.0

  • Add multiple stages per migration (sadowskik)

Contributors

comeara, fkoehler, magro, marcopriebe, pvenable, sadowskik


Issues

Auto USE keyspace in session once Pillar is initialized

With 2.0.0 I run into the issue that when creating a new keyspace with Pillar.initialize I get

[error] An error occurred while performing task: com.datastax.driver.core.exceptions.InvalidQueryException: no keyspace has been specified

when running the real migrations. One solution is to execute

session.execute(s"USE $keyspace")

AFTER Pillar.initialize runs but before Pillar.migrate.

I think Pillar should do this automatically, meaning Pillar.migrate should itself execute "USE $keyspace".

problem with cassandra 3

Hi,

It seems that pillar is not compatible with Cassandra 3.
I'm using
"com.datastax.cassandra" % "cassandra-driver-core" % "3.0.0",

And when using latest version of pillar, I get the following error:

Exception encountered when attempting to run a suite with class name: common.utils.cassandra.ConnectionAndQuerySpec *** ABORTED ***
[info]   java.lang.NoSuchMethodError: com.datastax.driver.core.Row.getDate(Ljava/lang/String;)Ljava/util/Date;
[info]   at com.chrisomeara.pillar.AppliedMigrations$$anonfun$apply$1.apply(AppliedMigrations.scala:12)
[info]   at com.chrisomeara.pillar.AppliedMigrations$$anonfun$apply$1.apply(AppliedMigrations.scala:12)
[info]   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
[info]   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
[info]   at scala.collection.Iterator$class.foreach(Iterator.scala:743)
[info]   at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
[info]   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
[info]   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
[info]   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
[info]   at scala.collection.AbstractTraversable.map(Traversable.scala:104)

Allow to specify consistency levels for reads/writes of applied migrations

We just ran into an issue that (most probably) was caused by the consistency level used by default (ONE): running migrate fails with an AlreadyExistsException, which (most probably) is caused by an incomplete read at the default consistency level ONE.

We just fixed this issue (for us) by changing the default/session consistency level in the sbt-pillar-plugin to QUORUM.

Because we're running in a multi-DC environment, it would be optimal to be able to specify different consistency levels for reads and writes: writes should be done using EACH_QUORUM by default, which allows reads with LOCAL_QUORUM. Assuming that applied migrations are read more often than written, this combination provides the best overall performance (relatively fast reads).
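
The driver-level workaround described above looks roughly like this (a sketch against the DataStax Java driver 2.x/3.x API; Pillar itself does not expose these settings):

import com.datastax.driver.core.{Cluster, ConsistencyLevel, QueryOptions}

// Raise the default consistency level for every statement issued on this
// session, including Pillar's reads and writes of applied_migrations.
val cluster = Cluster.builder()
  .addContactPoint("127.0.0.1")
  .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.QUORUM))
  .build()
val session = cluster.connect("pillar_development")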

support copy command

Support 'copy' command.

Cassandra doesn't support renaming a table, so to rename one we need to:

  • export the data to a file with the COPY command;
  • drop the table;
  • recreate it with the new name;
  • use COPY again to load the data from the file into the renamed table.

Example:
-- description: recreate foobar table
-- authoredAt: lucasoliveiracampos
-- up:

-- stage: 1
copy foobar to 'foobar_2017_08_19_11_17_00_data.csv';

-- stage: 2
drop table foobar;

-- stage: 3
create table foobar (
foo text,
bar bigint,
primary key ((foo))
);

-- stage: 4
copy foobar from 'foobar_2017_08_19_11_17_00_data.csv';

-- down:

Config file has to be in install location

The README says the command line tool can be used by placing an application.conf in ./conf, but that doesn't seem to be the case: it only works if JAVA_OPTS="-Dconfig.file=conf/application.conf" is set. Otherwise the conf file has to be placed in /opt/pillar.

How to ignore applied scripts?

Hi
Having trouble with pillar when using it in a Play app.
If I have some scripts already applied, how do I add a new one such that the applied ones are ignored and only the new ones are applied?

I plan to use this in a continuous integration environment so that I can simply add a script, launch, and upgrade, a bit like the Play evolutions.
https://github.com/typesafehub/playframework/blob/master/framework/src/play-jdbc/src/main/scala/play/api/db/evolutions/Evolutions.scala

Thanks
Peter

Add migration lock table

When running migrations in parallel (e.g. from integration tests running in parallel, or parallel app deployments), conflicts can occur because the same migration is run from different clients. The error might look like this:

com.datastax.driver.core.exceptions.InvalidQueryException: Invalid
column name mycolumn because it conflicts with an existing column

To prevent this, a separate migrations_lock table should/could be used, in which a lock is held for the duration of the migrations (Liquibase has this, for example).
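
A sketch of the proposed lock using Cassandra lightweight transactions; the migrations_lock table and its columns are hypothetical, not part of Pillar, and wasApplied requires driver 2.0.9 or later:

// Hypothetical table: CREATE TABLE migrations_lock (name text PRIMARY KEY, owner text)
val owner = java.util.UUID.randomUUID().toString

// IF NOT EXISTS makes the insert a linearizable compare-and-set, so only
// one client at a time can hold the lock, even across data centers.
val acquired = session.execute(
  "INSERT INTO migrations_lock (name, owner) VALUES ('pillar', ?) IF NOT EXISTS",
  owner
).wasApplied()

if (acquired)
  try migrator.migrate(session)
  finally session.execute("DELETE FROM migrations_lock WHERE name = 'pillar'")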

Provide SLF4J binding when running command line application

When running the command line, SLF4J outputs the following.

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

The command line app should bind SLF4J such that log messages are output correctly.

This should be done in a way that does not conflict with SLF4J bindings for applications that include Pillar as a library.
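
Until then, any SLF4J binding on the classpath silences the warning; a build that bundles the CLI could, for example, add (version illustrative):

libraryDependencies += "org.slf4j" % "slf4j-simple" % "1.7.25"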

Migrations and Consistency

Hi,

I am wondering: given that migrations often involve schema changes followed by data inserts or updates (e.g. an update on a column just added by ALTER TABLE), what are your experiences with the fact that C* does not provide a way to ensure the schema change has actually occurred on all nodes before the update hits the cluster?

Do you have any pointers to community discussions suggesting that such migrations are even possible in a predictable fashion?

Jan
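
For what it's worth, the DataStax driver can at least poll for cluster-wide schema agreement before follow-up statements are issued (checkSchemaAgreement is available in recent 2.x and 3.x driver versions); a sketch:

// Poll until every reachable node reports the same schema version.
while (!cluster.getMetadata.checkSchemaAgreement())
  Thread.sleep(200)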

Multi-stage queries fail on C* 3.0.9

Running the example multi-stage migration from the documentation fails on C* 3.0.9:

-- description: creates users and groups tables
-- authoredAt: 1469630066000
-- up:

-- stage: 1
CREATE TABLE groups (
  id uuid,
  name text,
  PRIMARY KEY (id)
)

-- stage: 2
CREATE TABLE users (
  id uuid,
  group_id uuid,
  username text,
  password text,
  PRIMARY KEY (id)
)


-- down:

-- stage: 1
DROP TABLE users

-- stage: 2
DROP TABLE groups

Is 3.0.9 not supported?

cqlsh 5.0.1
CQL spec 3.4.0

Support authenticated Cassandra

Are there any plans to support authentication for the Cassandra cluster? I suspect many might have this requirement, especially in a production environment.

Migration fails silently when no directory with datastore name is found

Pillar looks for a folder with the datastore's name and, if it is not found, terminates silently. The problem is, when migration scripts are placed in some custom directory whose path is then passed to Pillar's CLI, the migration is not executed and no warning or error message is shown either. This seems to be counterintuitive and obscure behavior.

Please change it (make Pillar load files from the root of the folder), or explicitly note this behavior in the README and log a warning in the Registry class when the datastore directory does not exist.

Multiple DML or DDL Statements per migration?

Hi,

I wonder whether it is possible to place more than one statement in a single migration file, either as two migrations in one file (like a YAML multi-doc) or simply as separate statements in a single migration.

I tried, but it did not work, and I am unsure whether it should.

Can anyone clarify?

Jan

authored_at timestamp is set to a long time in the past during migration

I have been playing with Pillar for Cassandra schema migrations. I noticed that the applied_migrations.authored_at column is not set up correctly.

For instance, my migration CQL files have the following authoredAt markup:

rdawe@cstar:~/MO-3530/test-pillar$ grep -i authoredat conf/pillar/migrations/mydata/*
conf/pillar/migrations/mydata/1420779600_create_test.cql:-- authoredAt: 1420779600
conf/pillar/migrations/mydata/1420783200_add_column_test.cql:-- authoredAt: 1420783200
rdawe@cstar:~/MO-3530/test-pillar$ perl -e 'print gmtime(1420779600)."\n";'
Fri Jan  9 05:00:00 2015
rdawe@cstar:~/MO-3530/test-pillar$ perl -e 'print gmtime(1420783200)."\n";'
Fri Jan  9 06:00:00 2015

and this results in the following in the applied migrations table:

rdawe@cstar:~/MO-3530/test-pillar$ cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.8 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> USE test;
cqlsh:test> SELECT * from applied_migrations ;

 authored_at              | description                 | applied_at
--------------------------+-----------------------------+--------------------------
 1970-01-17 05:39:43-0500 | add column to example table | 2015-01-13 08:30:45-0500
 1970-01-17 05:39:39-0500 |           create test table | 2015-01-13 08:30:44-0500

(2 rows)
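
The output above is the clue: Pillar interprets authoredAt as milliseconds since the epoch. Taken as milliseconds, 1420779600 is only about 16.4 days after 1970-01-01, which is exactly the 1970-01-17 05:39:39-0500 value recorded in applied_migrations. Writing authoredAt: 1420779600000 produces the intended 2015-01-09 timestamp. (The 2.1.0 release notes above include a documentation fix for this seconds-versus-milliseconds confusion.)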

Allow loading migrations from a jar file

We are using Pillar inside a Play2 application where it is deployed as a jar. One cannot read migrations from a directory inside a jar, so we worked around this with something like the following. I am not creating a pull request as I am not sure how to integrate this, whether it's desired behaviour, or how to test it correctly. Sorry.

  private val registry = Registry(loadMigrationsFromJarOrFilesystem())
  private val migrator = Migrator(registry, new LoggerReporter)

  private def loadMigrationsFromJarOrFilesystem() = {
    val migrationsDir = "migrations/"
    val migrationNames = JarUtils.getResourceListing(getClass, migrationsDir).toList.filter(_.nonEmpty)
    val parser = Parser()

    migrationNames.map(name => getClass.getClassLoader.getResourceAsStream(migrationsDir + name)).map {
      stream =>
        try {
          parser.parse(stream)
        } finally {
          stream.close()
        }
    }.toList
  }

where JarUtils.getResourceListing is taken from the top answer here: http://stackoverflow.com/questions/6247144/how-to-load-a-folder-from-a-jar

Hope that helps

Failing to install pillar using Maven

I've tried to install pillar using Maven, and I got this error:

› mvn com.chrisomeara:pillar_2.10:2.0.1
[INFO] Scanning for projects...
Downloading: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/maven-metadata.xml
Downloaded: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/maven-metadata.xml (395 B at 0.6 KB/sec)
Downloading: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.pom
Downloaded: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.pom (3 KB at 35.6 KB/sec)
Downloading: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.jar
Downloaded: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.jar (92 KB at 345.8 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.578s
[INFO] Finished at: Tue Feb 03 16:58:53 PST 2015
[INFO] Final Memory: 4M/81M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to parse plugin descriptor for com.chrisomeara:pillar_2.10:2.0.1 (/Users/itay/.m2/repository/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.jar): No plugin descriptor found at META-INF/maven/plugin.xml -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginDescriptorParsingException

Also, is it possible to install pillar as a binary?

Applied migration table is not ordered by the correct authored_at time

-- description: V001 - creates experiment table
-- authoredAt: 1370028263000
-- up:
-- stage: 1
create table experiment (
    id uuid,
...
)
-- description: V025 - creates user assignment export 
-- authoredAt: 1370028263024
-- up:

create table user_assignment_export(
...
)
cqlsh:wassabi_experiment_local> select * from applied_migrations  ;

 authored_at              | description                                                                         | applied_at
--------------------------+-------------------------------------------------------------------------------------+--------------------------
 2013-05-31 19:24:23+0000 |                                               V025 - creates user assignment export | 2016-08-05 00:17:01+0000
 2013-05-31 19:24:23+0000 |                                                               V002 - creates bucket | 2016-08-05 00:16:33+0000
 2013-05-31 19:24:23+0000 |                                                        V020 - creates user feedback | 2016-08-05 00:16:56+0000
 2013-05-31 19:24:23+0000 |                                              V004 - creates experiment label lookup | 2016-08-05 00:16:36+0000
 2013-05-31 19:24:23+0000 |                                                            V023 - creates audit log | 2016-08-05 00:16:59+0000
 2013-05-31 19:24:23+0000 |                      V026 - insert admin user to user_roles, app_users, superadmins | 2016-08-05 00:17:01+0000
 2013-05-31 19:24:23+0000 |                                                            V010 - creates exclusion | 2016-08-05 00:16:43+0000
 2013-05-31 19:24:23+0000 |                                                 V007 - creates experiment audit log | 2016-08-05 00:16:40+0000
 2013-05-31 19:24:23+0000 |                                                            V017 - creates app roles | 2016-08-05 00:16:50+0000
 2013-05-31 19:24:23+0000 |                                                              V021 - creates staging | 2016-08-05 00:16:57+0000
 2013-05-31 19:24:23+0000 |                                                     V001 - creates experiment table | 2016-08-05 00:16:32+0000
 2013-05-31 19:24:23+0000 |                                             V022 - creates bucket assingment counts | 2016-08-05 00:16:58+0000
 2013-05-31 19:24:23+0000 |                                                           V016 - creates user roles | 2016-08-05 00:16:49+0000
 2013-05-31 19:24:23+0000 |                                                       V015 - creates app page index | 2016-08-05 00:16:48+0000
 2013-05-31 19:24:23+0000 |                                              V011 - creates experiment user look up | 2016-08-05 00:16:44+0000
 2013-05-31 19:24:23+0000 |                                                      V005 - creates user assignment | 2016-08-05 00:16:37+0000
 2013-05-31 19:24:23+0000 |                                                      V014 - creates experiment page | 2016-08-05 00:16:47+0000
 2013-05-31 19:24:23+0000 |                                                            V019 - creates user info | 2016-08-05 00:16:52+0000
 2013-05-31 19:24:23+0000 |           V007 - creates user bucket lookup by experiment id, context, bucket label | 2016-08-05 00:16:39+0000
 2013-05-31 19:24:23+0000 | V006 - creates user experiment look up by app name, user id, context, experiment id | 2016-08-05 00:16:38+0000
 2013-05-31 19:24:23+0000 |                                                     V027 - creates applicaiton list | 2016-08-05 00:17:02+0000
 2013-05-31 19:24:23+0000 |                                              V003 - creates experiment label lookup | 2016-08-05 00:16:35+0000
 2013-05-31 19:24:23+0000 |                                                          V018 - creates superadmins | 2016-08-05 00:16:51+0000
 2013-05-31 19:24:23+0000 |                                                V013 - creates page experiment index | 2016-08-05 00:16:46+0000
 2013-05-31 19:24:23+0000 |                                               V024 - creates user assingment lookup | 2016-08-05 00:17:00+0000
 2013-05-31 19:24:23+0000 |                                                     V009 - creates bucket audit log | 2016-08-05 00:16:41+0000
 2013-05-31 19:24:23+0000 |                                              V012 - creates experiment label lookup | 2016-08-05 00:16:45+0000

missing EOF at 'CREATE' when running a multi-stage migration script

Trying to execute a migration script with multiple stages (from the documentation) and getting the following error:

com.datastax.driver.core.exceptions.SyntaxError: line 8:0 missing EOF at 'CREATE' (... KEY (id))-- stage: 2[CREATE] TABLE...)
	at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:58)
	at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:24)
	at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
	at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
	at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
	at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
	at de.kaufhof.pillar.Migration$class.executeUpStatement(Migration.scala:38)
	at de.kaufhof.pillar.ReversibleMigration.executeUpStatement(Migration.scala:75)
	at de.kaufhof.pillar.CassandraMigrator$$anonfun$migrate$2.apply(CassandraMigrator.scala:12)
	at de.kaufhof.pillar.CassandraMigrator$$anonfun$migrate$2.apply(CassandraMigrator.scala:12)
	at scala.collection.immutable.List.foreach(List.scala:318)
	at de.kaufhof.pillar.CassandraMigrator.migrate(CassandraMigrator.scala:12)
	at io.ino.sbtpillar.Plugin$Pillar$.migrate(Plugin.scala:194)
	at io.ino.sbtpillar.Plugin$Pillar$$anonfun$migrate$1.apply(Plugin.scala:187)
	at io.ino.sbtpillar.Plugin$Pillar$$anonfun$migrate$1.apply(Plugin.scala:186)
	at io.ino.sbtpillar.Plugin$Pillar$.withSession(Plugin.scala:158)
	at io.ino.sbtpillar.Plugin$Pillar$.migrate(Plugin.scala:186)
	at io.ino.sbtpillar.Plugin$$anonfun$taskSettings$3$$anonfun$apply$5.apply(Plugin.scala:59)
	at io.ino.sbtpillar.Plugin$$anonfun$taskSettings$3$$anonfun$apply$5.apply(Plugin.scala:55)
	at io.ino.sbtpillar.Plugin$Pillar$.withCassandraUrl(Plugin.scala:131)
	at io.ino.sbtpillar.Plugin$$anonfun$taskSettings$3.apply(Plugin.scala:55)
	at io.ino.sbtpillar.Plugin$$anonfun$taskSettings$3.apply(Plugin.scala:51)
	at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
	at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
	at sbt.std.Transform$$anon$4.work(System.scala:63)
	at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
	at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
	at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
	at sbt.Execute.work(Execute.scala:237)
	at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
	at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
	at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
	at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 8:0 missing EOF at 'CREATE' (... KEY (id))-- stage: 2[CREATE] TABLE...)
	at com.datastax.driver.core.Responses$Error.asException(Responses.java:132)
	at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
	at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
	at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
	at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
	at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
	at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
	at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
	at java.lang.Thread.run(Thread.java:745)
[error] An error occurred while performing task: com.datastax.driver.core.exceptions.SyntaxError: line 8:0 missing EOF at 'CREATE' (... KEY (id))-- stage: 2[CREATE] TABLE...)
[error] (sharingAdapter/*:migrate) com.datastax.driver.core.exceptions.SyntaxError: line 8:0 missing EOF at 'CREATE' (... KEY (id))-- stage: 2[CREATE] TABLE...)
[error] Total time: 1 s, completed Jan 16, 2017 5:43:40 PM

Publish new release / project progress

There were some changes that are waiting to get released. It would be great if #8 could make it into the release as well.

What can we do to get changes more quickly into pillar, and to get them released?

I can offer to take care of releases to sonatype if that's an issue (I should have commit rights for this). Is there anything else I could do to speed up progress?

Cheers,
Martin

Build for Scala 2.12

This looks like it might take some work. At the very least, Argot needs to be replaced with a maintained CLI argument parser that is published for 2.12.
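
Cross-building itself is a one-line change in build.sbt (Scala versions illustrative); the Argot replacement is the real work:

crossScalaVersions := Seq("2.10.6", "2.11.8", "2.12.1")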

Support migrations with multiple statements (batch)

It would be great if a single migration file could contain multiple statements.

I tried to create 2 tables in a migration file, but this seems not to be supported. The migration failed with this error:

com.datastax.driver.core.exceptions.SyntaxError: line 9:0 missing EOF at 'CREATE'
    at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:35) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.datastax.driver.core.SessionManager.execute(SessionManager.java:91) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.datastax.driver.core.SessionManager.execute(SessionManager.java:83) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.streamsend.pillar.Migration$class.executeUpStatement(Migration.scala:38) ~[pillar_2.10-1.0.3.jar:1.0.3]

Looking at the Parser and Migration classes it seems obvious that it's just not supported.

To support this (without really parsing CQL), perhaps a pragmatic/simple solution would be to use a statement separator, e.g. a line containing only -- with an empty line above/below, or something like that.

What do you think?
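
A sketch of the separator idea, assuming a hypothetical splitStatements helper and a separator line containing only --; this is not part of Pillar's parser:

// Split a migration body into statements on lines that contain only "--",
// without really parsing CQL.
def splitStatements(body: String): List[String] =
  body.split("(?m)^--\\s*$").toList.map(_.trim).filter(_.nonEmpty)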

When using multi-stage migrations, a failure in stage 2 leaves stage 1 committed, even with a proper down statement

Based on this project's README.md documentation on staged migrations, given the migration file:

-- description: Example multistage
-- authoredAt: 1474034307117
-- up:

-- stage: 1
CREATE TYPE foo (
  username text,
  comments text
);

-- stage: 2
CREATE TYPE bar (
  line_one text,
  line_two text
);

-- stage: 3
CREATE TYPE invalid (
  herp derp,
  omg syntax
);

-- down:

-- stage: 1
DROP TYPE invalid
;

-- stage: 2
DROP TYPE bar
;

-- stage: 3
DROP TYPE foo
;

When run, it errors on the obvious syntax bomb in stage 3 and does not write a row into the applied_migrations table (meaning this file will re-execute on the next run).

However, this leaves behind types foo and bar, so on the next run you get a different CQL error: "type foo already exists".

It feels like the tool should attempt to run stage downs if stage ups failed. This can be seen as trying to bring it back into a state where the whole thing can be run again.

This behavior forces you to use "IF NOT EXISTS" on pretty much all statements to mitigate; that is an effective workaround, but could be undesirable in practice.
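
The mitigation in migration-file form (Cassandra 2.x and later accept IF NOT EXISTS on CREATE TYPE and CREATE TABLE):

-- stage: 1
CREATE TYPE IF NOT EXISTS foo (
  username text,
  comments text
);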
