
Strong Migrations

Catch unsafe migrations in development

  ✓  Detects potentially dangerous operations
  ✓  Prevents them from running by default
  ✓  Provides instructions on safer ways to do what you want

Supports PostgreSQL, MySQL, and MariaDB

🍊 Battle-tested at Instacart


Installation

Add this line to your application’s Gemfile:

gem "strong_migrations"

And run:

bundle install
rails generate strong_migrations:install

Strong Migrations sets a long statement timeout for migrations so you can set a short statement timeout for your application.

How It Works

When you run a migration that’s potentially dangerous, you’ll see an error message like:

=== Dangerous operation detected #strong_migrations ===

Active Record caches attributes, which causes problems
when removing columns. Be sure to ignore the column:

class User < ApplicationRecord
  self.ignored_columns += ["name"]
end

Deploy the code, then wrap this step in a safety_assured { ... } block.

class RemoveColumn < ActiveRecord::Migration[7.1]
  def change
    safety_assured { remove_column :users, :name }
  end
end

An operation is classified as dangerous if it either:

  • Blocks reads or writes for more than a few seconds (after a lock is acquired)
  • Has a good chance of causing application errors

Checks

The checks are grouped into potentially dangerous operations, Postgres-specific checks, config-specific checks, and best practices, each covered in the sections below.

You can also add custom checks or disable specific checks.

Removing a column

Bad

Active Record caches database columns at runtime, so if you drop a column, it can cause exceptions until your app reboots.

class RemoveSomeColumnFromUsers < ActiveRecord::Migration[7.1]
  def change
    remove_column :users, :some_column
  end
end

Good

  1. Tell Active Record to ignore the column from its cache

class User < ApplicationRecord
  self.ignored_columns += ["some_column"]
end

  2. Deploy the code
  3. Write a migration to remove the column (wrap in safety_assured block)

class RemoveSomeColumnFromUsers < ActiveRecord::Migration[7.1]
  def change
    safety_assured { remove_column :users, :some_column }
  end
end

  4. Deploy and run the migration
  5. Remove the line added in step 1

Changing the type of a column

Bad

Changing the type of a column causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.

class ChangeSomeColumnType < ActiveRecord::Migration[7.1]
  def change
    change_column :users, :some_column, :new_type
  end
end

Some changes don’t require a table rewrite and are safe in Postgres:

  • cidr: changing to inet
  • citext: changing to text if not indexed, changing to string with no :limit if not indexed
  • datetime: increasing or removing :precision, changing to timestamptz when session time zone is UTC in Postgres 12+
  • decimal: increasing :precision at same :scale, removing :precision and :scale
  • interval: increasing or removing :precision
  • numeric: increasing :precision at same :scale, removing :precision and :scale
  • string: increasing or removing :limit, changing to text, changing to citext if not indexed
  • text: changing to string with no :limit, changing to citext if not indexed
  • time: increasing or removing :precision
  • timestamptz: increasing or removing :limit, changing to datetime when session time zone is UTC in Postgres 12+

And some in MySQL and MariaDB:

  • string: increasing :limit from under 63 up to 63, increasing :limit from over 63 to the max (the threshold can be different if using an encoding other than utf8mb4 - for instance, it’s 85 for utf8mb3 and 255 for latin1)

Good

A safer approach is to:

  1. Create a new column
  2. Write to both columns
  3. Backfill data from the old column to the new column
  4. Move reads from the old column to the new column
  5. Stop writing to the old column
  6. Drop the old column
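The first two steps can be sketched as follows (the column name some_column_new is hypothetical, and the exact conversion logic depends on the types involved):

```ruby
# Step 1: add the new column (no default value, so no table rewrite)
class AddSomeColumnNewToUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :users, :some_column_new, :new_type
  end
end

# Step 2: write to both columns until the transition is complete
class User < ApplicationRecord
  before_save :sync_some_column, if: :some_column_changed?

  def sync_some_column
    self.some_column_new = some_column
  end
end
```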

Renaming a column

Bad

Renaming a column that’s in use will cause errors in your application.

class RenameSomeColumn < ActiveRecord::Migration[7.1]
  def change
    rename_column :users, :some_column, :new_name
  end
end

Good

A safer approach is to:

  1. Create a new column
  2. Write to both columns
  3. Backfill data from the old column to the new column
  4. Move reads from the old column to the new column
  5. Stop writing to the old column
  6. Drop the old column

Renaming a table

Bad

Renaming a table that’s in use will cause errors in your application.

class RenameUsersToCustomers < ActiveRecord::Migration[7.1]
  def change
    rename_table :users, :customers
  end
end

Good

A safer approach is to:

  1. Create a new table
  2. Write to both tables
  3. Backfill data from the old table to the new table
  4. Move reads from the old table to the new table
  5. Stop writing to the old table
  6. Drop the old table

Creating a table with the force option

Bad

The force option can drop an existing table.

class CreateUsers < ActiveRecord::Migration[7.1]
  def change
    create_table :users, force: true do |t|
      # ...
    end
  end
end

Good

Create tables without the force option.

class CreateUsers < ActiveRecord::Migration[7.1]
  def change
    create_table :users do |t|
      # ...
    end
  end
end

If you intend to drop an existing table, run drop_table first.
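For example, an explicit drop in its own migration might look like this (the table name is hypothetical):

```ruby
class DropLegacyUsers < ActiveRecord::Migration[7.1]
  def up
    drop_table :legacy_users
  end

  def down
    # dropping a table discards its data, so there is no automatic rollback
    raise ActiveRecord::IrreversibleMigration
  end
end
```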

Adding an auto-incrementing column

Bad

Adding an auto-incrementing column (serial/bigserial in Postgres and AUTO_INCREMENT in MySQL and MariaDB) causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.

class AddIdToCitiesUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :cities_users, :id, :primary_key
  end
end

With MySQL and MariaDB, this can also generate different values on replicas if using statement-based replication.

Good

Create a new table and migrate the data with the same steps as renaming a table.

Adding a stored generated column

Bad

Adding a stored generated column causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres, and writes are blocked in MySQL and MariaDB.

class AddSomeColumnToUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :users, :some_column, :virtual, type: :string, as: "...", stored: true
  end
end

Good

Add a non-generated column and use callbacks or triggers instead (or a virtual generated column with MySQL and MariaDB).
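As a sketch, a hypothetical full_name column kept in sync with a model callback instead of a stored generated column (column and attribute names are assumptions):

```ruby
class AddFullNameToUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :users, :full_name, :string
  end
end

class User < ApplicationRecord
  # maintain the derived value in application code rather than in the database
  before_save :set_full_name

  def set_full_name
    self.full_name = [first_name, last_name].compact.join(" ")
  end
end
```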

Adding a check constraint

🐢 Safe by default available

Bad

Adding a check constraint blocks reads and writes in Postgres and blocks writes in MySQL and MariaDB while every row is checked.

class AddCheckConstraint < ActiveRecord::Migration[7.1]
  def change
    add_check_constraint :users, "price > 0", name: "price_check"
  end
end

Good - Postgres

Add the check constraint without validating existing rows:

class AddCheckConstraint < ActiveRecord::Migration[7.1]
  def change
    add_check_constraint :users, "price > 0", name: "price_check", validate: false
  end
end

Then validate them in a separate migration.

class ValidateCheckConstraint < ActiveRecord::Migration[7.1]
  def change
    validate_check_constraint :users, name: "price_check"
  end
end

Good - MySQL and MariaDB

Let us know if you have a safe way to do this (check constraints can be added with NOT ENFORCED, but enforcing blocks writes).

Executing SQL directly

Strong Migrations can’t ensure safety for raw SQL statements. Make really sure that what you’re doing is safe, then use:

class ExecuteSQL < ActiveRecord::Migration[7.1]
  def change
    safety_assured { execute "..." }
  end
end

Backfilling data

Bad

Active Record creates a transaction around each migration, and backfilling in the same transaction that alters a table keeps the table locked for the duration of the backfill.

class AddSomeColumnToUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :users, :some_column, :text
    User.update_all some_column: "default_value"
  end
end

Also, running a single query to update data can cause issues for large tables.

Good

There are three keys to backfilling safely: batching, throttling, and running it outside a transaction. Use the Rails console or a separate migration with disable_ddl_transaction!.

class BackfillSomeColumn < ActiveRecord::Migration[7.1]
  disable_ddl_transaction!

  def up
    User.unscoped.in_batches do |relation|
      relation.update_all some_column: "default_value"
      sleep(0.01) # throttle
    end
  end
end

Adding an index non-concurrently

🐢 Safe by default available

Bad

In Postgres, adding an index non-concurrently blocks writes.

class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
  def change
    add_index :users, :some_column
  end
end

Good

Add indexes concurrently.

class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
  disable_ddl_transaction!

  def change
    add_index :users, :some_column, algorithm: :concurrently
  end
end

If you forget disable_ddl_transaction!, the migration will fail. Also, note that indexes on new tables (those created in the same migration) don’t require this.

With gindex, you can generate an index migration instantly with:

rails g index table column

Adding a reference

🐢 Safe by default available

Bad

Rails adds an index non-concurrently to references by default, which blocks writes in Postgres.

class AddReferenceToUsers < ActiveRecord::Migration[7.1]
  def change
    add_reference :users, :city
  end
end

Good

Make sure the index is added concurrently.

class AddReferenceToUsers < ActiveRecord::Migration[7.1]
  disable_ddl_transaction!

  def change
    add_reference :users, :city, index: {algorithm: :concurrently}
  end
end

Adding a foreign key

🐢 Safe by default available

Bad

In Postgres, adding a foreign key blocks writes on both tables.

class AddForeignKeyOnUsers < ActiveRecord::Migration[7.1]
  def change
    add_foreign_key :users, :orders
  end
end

or

class AddReferenceToUsers < ActiveRecord::Migration[7.1]
  def change
    add_reference :users, :order, foreign_key: true
  end
end

Good

Add the foreign key without validating existing rows:

class AddForeignKeyOnUsers < ActiveRecord::Migration[7.1]
  def change
    add_foreign_key :users, :orders, validate: false
  end
end

Then validate them in a separate migration.

class ValidateForeignKeyOnUsers < ActiveRecord::Migration[7.1]
  def change
    validate_foreign_key :users, :orders
  end
end

Adding a unique constraint

Bad

In Postgres, adding a unique constraint creates a unique index, which blocks reads and writes.

class AddUniqueConstraint < ActiveRecord::Migration[7.1]
  def change
    add_unique_constraint :users, :some_column
  end
end

Good

Create a unique index concurrently, then use it for the constraint.

class AddUniqueConstraint < ActiveRecord::Migration[7.1]
  disable_ddl_transaction!

  def up
    add_index :users, :some_column, unique: true, algorithm: :concurrently
    add_unique_constraint :users, using_index: "index_users_on_some_column"
  end

  def down
    remove_unique_constraint :users, :some_column
  end
end

Adding an exclusion constraint

Bad

In Postgres, adding an exclusion constraint blocks reads and writes while every row is checked.

class AddExclusionConstraint < ActiveRecord::Migration[7.1]
  def change
    add_exclusion_constraint :users, "number WITH =", using: :gist
  end
end

Good

Let us know if you have a safe way to do this (exclusion constraints cannot be marked NOT VALID).

Adding a json column

Bad

In Postgres, there’s no equality operator for the json column type, which can cause errors for existing SELECT DISTINCT queries in your application.

class AddPropertiesToUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :users, :properties, :json
  end
end

Good

Use jsonb instead.

class AddPropertiesToUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :users, :properties, :jsonb
  end
end

Setting NOT NULL on an existing column

🐢 Safe by default available

Bad

In Postgres, setting NOT NULL on an existing column blocks reads and writes while every row is checked.

class SetSomeColumnNotNull < ActiveRecord::Migration[7.1]
  def change
    change_column_null :users, :some_column, false
  end
end

Good

Instead, add a check constraint.

For Rails 6.1+, use:

class SetSomeColumnNotNull < ActiveRecord::Migration[7.1]
  def change
    add_check_constraint :users, "some_column IS NOT NULL", name: "users_some_column_null", validate: false
  end
end

For Rails < 6.1, use:

class SetSomeColumnNotNull < ActiveRecord::Migration[6.0]
  def change
    safety_assured do
      execute 'ALTER TABLE "users" ADD CONSTRAINT "users_some_column_null" CHECK ("some_column" IS NOT NULL) NOT VALID'
    end
  end
end

Then validate it in a separate migration. A NOT NULL check constraint is functionally equivalent to setting NOT NULL on the column (but it won’t show up in schema.rb in Rails < 6.1). In Postgres 12+, once the check constraint is validated, you can safely set NOT NULL on the column and drop the check constraint.

For Rails 6.1+, use:

class ValidateSomeColumnNotNull < ActiveRecord::Migration[7.1]
  def change
    validate_check_constraint :users, name: "users_some_column_null"

    # in Postgres 12+, you can then safely set NOT NULL on the column
    change_column_null :users, :some_column, false
    remove_check_constraint :users, name: "users_some_column_null"
  end
end

For Rails < 6.1, use:

class ValidateSomeColumnNotNull < ActiveRecord::Migration[6.0]
  def change
    safety_assured do
      execute 'ALTER TABLE "users" VALIDATE CONSTRAINT "users_some_column_null"'
    end

    # in Postgres 12+, you can then safely set NOT NULL on the column
    change_column_null :users, :some_column, false
    safety_assured do
      execute 'ALTER TABLE "users" DROP CONSTRAINT "users_some_column_null"'
    end
  end
end

Adding a column with a default value

Bad

In earlier versions of Postgres, adding a column with a default value to an existing table causes the entire table to be rewritten. During this time, reads and writes are blocked in Postgres.

class AddSomeColumnToUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :users, :some_column, :text, default: "default_value"
  end
end

In Postgres 11+, this no longer requires a table rewrite and is safe (except for volatile functions like gen_random_uuid()).

Good

Instead, add the column without a default value, then change the default.

class AddSomeColumnToUsers < ActiveRecord::Migration[7.1]
  def up
    add_column :users, :some_column, :text
    change_column_default :users, :some_column, "default_value"
  end

  def down
    remove_column :users, :some_column
  end
end

Then backfill the data.
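The backfill can reuse the batched, throttled pattern from the Backfilling data section, run outside a DDL transaction:

```ruby
class BackfillSomeColumn < ActiveRecord::Migration[7.1]
  disable_ddl_transaction!

  def up
    # only touch rows that still have the old NULL value
    User.unscoped.where(some_column: nil).in_batches do |relation|
      relation.update_all some_column: "default_value"
      sleep(0.01) # throttle
    end
  end
end
```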

Changing the default value of a column

Bad

Rails < 7 enables partial writes by default, which can cause incorrect values to be inserted when changing the default value of a column.

class ChangeSomeColumnDefault < ActiveRecord::Migration[6.1]
  def change
    change_column_default :users, :some_column, from: "old", to: "new"
  end
end

User.create!(some_column: "old") # can insert "new"

Good

Disable partial writes in config/application.rb. For Rails < 7, use:

config.active_record.partial_writes = false

For Rails 7+, use:

config.active_record.partial_inserts = false

Keeping non-unique indexes to three columns or fewer

Bad

Adding a non-unique index with more than three columns rarely improves performance.

class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
  def change
    add_index :users, [:a, :b, :c, :d]
  end
end

Good

Instead, start an index with columns that narrow down the results the most.

class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
  def change
    add_index :users, [:b, :d]
  end
end

For Postgres, be sure to add them concurrently.
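Putting the two recommendations together, the Postgres version of the example above would be:

```ruby
class AddSomeIndexToUsers < ActiveRecord::Migration[7.1]
  disable_ddl_transaction!

  def change
    # a short, selective column list, added without blocking writes
    add_index :users, [:b, :d], algorithm: :concurrently
  end
end
```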

Assuring Safety

To mark a step in the migration as safe, despite using a method that might otherwise be dangerous, wrap it in a safety_assured block.

class MySafeMigration < ActiveRecord::Migration[7.1]
  def change
    safety_assured { remove_column :users, :some_column }
  end
end

Certain methods like execute and change_table cannot be inspected and are prevented from running by default. Make sure what you’re doing is really safe and use this pattern.

Safe by Default

Strong Migrations can make the following operations safe by default:

  • adding and removing an index
  • adding a foreign key
  • adding a check constraint
  • setting NOT NULL on an existing column

Add to config/initializers/strong_migrations.rb:

StrongMigrations.safe_by_default = true

Custom Checks

Add your own custom checks with:

StrongMigrations.add_check do |method, args|
  if method == :add_index && args[0].to_s == "users"
    stop! "No more indexes on the users table"
  end
end

Use the stop! method to stop migrations.

Note: Since remove_column always requires a safety_assured block, it’s not possible to add a custom check for remove_column operations.

Opt-in Checks

Removing an index non-concurrently

Postgres supports removing indexes concurrently, but removing them non-concurrently shouldn’t be an issue for most applications. You can enable this check with:

StrongMigrations.enable_check(:remove_index)

Disable Checks

Disable specific checks with:

StrongMigrations.disable_check(:add_index)

Check the source code for the list of keys.

Down Migrations / Rollbacks

By default, checks are disabled when migrating down. Enable them with:

StrongMigrations.check_down = true

Custom Messages

To customize specific messages, create an initializer with:

StrongMigrations.error_messages[:add_column_default] = "Your custom instructions"

Check the source code for the list of keys.

Migration Timeouts

It’s extremely important to set a short lock timeout for migrations. This way, if a migration can’t acquire a lock in a timely manner, other statements won’t be stuck behind it. We also recommend setting a long statement timeout so migrations can run for a while.

Create config/initializers/strong_migrations.rb with:

StrongMigrations.lock_timeout = 10.seconds
StrongMigrations.statement_timeout = 1.hour

Or set the timeouts directly on the database user that runs migrations. For Postgres, use:

ALTER ROLE myuser SET lock_timeout = '10s';
ALTER ROLE myuser SET statement_timeout = '1h';

Note: If you use PgBouncer in transaction mode, you must set timeouts on the database user.

App Timeouts

We recommend adding timeouts to config/database.yml to prevent connections from hanging and individual queries from taking up too many resources in controllers, jobs, the Rails console, and other places.

For Postgres:

production:
  connect_timeout: 5
  variables:
    statement_timeout: 15s
    lock_timeout: 10s

Note: If you use PgBouncer in transaction mode, you must set the statement and lock timeouts on the database user as shown above.

For MySQL:

production:
  connect_timeout: 5
  read_timeout: 5
  write_timeout: 5
  variables:
    max_execution_time: 15000 # ms
    lock_wait_timeout: 10 # sec

For MariaDB:

production:
  connect_timeout: 5
  read_timeout: 5
  write_timeout: 5
  variables:
    max_statement_time: 15 # sec
    lock_wait_timeout: 10 # sec

For HTTP connections, Redis, and other services, check out this guide.

Lock Timeout Retries [experimental]

There’s the option to automatically retry statements for migrations when the lock timeout is reached. Here’s how it works:

  • If a lock timeout happens outside a transaction, the statement is retried
  • If it happens inside the DDL transaction, the entire migration is retried (only applicable to Postgres)

Add to config/initializers/strong_migrations.rb:

StrongMigrations.lock_timeout_retries = 3

Set the delay between retries with:

StrongMigrations.lock_timeout_retry_delay = 10.seconds

Existing Migrations

To mark migrations as safe that were created before installing this gem, create an initializer with:

StrongMigrations.start_after = 20230101000000

Use the version from your latest migration.

Target Version

If your development database version is different from production, you can specify the production version so the right checks run in development.

StrongMigrations.target_version = 10 # or "8.0.12", "10.3.2", etc

The major version works well for Postgres, while the full version is recommended for MySQL and MariaDB.

For safety, this option only affects development and test environments. In other environments, the actual server version is always used.

If your app has multiple databases with different versions, with Rails 6.1+, you can use:

StrongMigrations.target_version = {primary: 13, catalog: 15}

Analyze Tables

Analyze tables automatically (to update planner statistics) after an index is added. Create an initializer with:

StrongMigrations.auto_analyze = true

Faster Migrations

Only dump the schema when adding a new migration. If you use Git, add to config/environments/development.rb:

config.active_record.dump_schema_after_migration = `git status db/migrate/ --porcelain`.present?

Schema Sanity

Columns can flip order in db/schema.rb when you have multiple developers. One way to prevent this is to alphabetize them. Add to config/initializers/strong_migrations.rb:

StrongMigrations.alphabetize_schema = true

Permissions

We recommend using a separate database user for migrations when possible so you don’t need to grant your app user permission to alter tables.

Smaller Projects

You probably don’t need this gem for smaller projects, as operations that are unsafe at scale can be perfectly safe on smaller, low-traffic tables.

Additional Reading

Credits

Thanks to Bob Remeika and David Waller for the original code and Sean Huber for the bad/good readme format.

Contributing

Everyone is encouraged to help improve this project - report bugs, fix issues, improve the documentation, or suggest new features.

To get started with development:

git clone https://github.com/ankane/strong_migrations.git
cd strong_migrations
bundle install

# Postgres
createdb strong_migrations_test
bundle exec rake test

# MySQL and MariaDB
mysqladmin create strong_migrations_test
ADAPTER=mysql2 bundle exec rake test


strong_migrations's Issues

Are unsafe operations stackable between themselves?

Hi!

First, thanks for that Gem and the neat documentation.

We're looking for best practices here, we're unsure of a use case so let me try to explain it.

When reading the unsafe operations documentation, to rename a column we need to:
1st deployment:

  • Create the new column,
  • Write to both columns
  • Backfill the data from old to new column

2nd deployment

  • Write only on new column
  • Drop the old column <<<<===

Question is pretty simple: when we drop the column, should we apply the same rules as described here for removing a column: https://github.com/ankane/strong_migrations#removing-a-column
(So a 2-step deployment, to first ignore the column in the AR cache, then a second deployment to actually drop the column)?

So, when we want to rename a column, the process is really a 4-step deployment rather than a 2-step one.

The same question applies for all operations described in the documentation.

What do you think?

LMK if something is obscure, happy to clarify.

[Idea] Make it easier to disable rules

Awesome gem! We would like to ignore one of the rules, but there is no configuration for that. We thought about monkeypatching the code, but the hash of warnings and the method_missing implementation are huge. We will end up forking the gem, but that isn't ideal. The gem should be better modularized so it's easier to override.

NoMethodError: undefined method `perform' for nil:NilClass

Hi,

We are in process of upgrading a rails 4.2. app to latest rails version. We successfully upgraded to 5.0 and while upgrading to 5.1 we are blocked by this error, which seems to be coming from strong_migrations.

If i remove strong_migrations from the app, everything works.

Ruby: 2.6.3
Rails: 5.1.7

Caused by:
NoMethodError: undefined method `perform' for nil:NilClass
/.../.rvm/gems/ruby-2.6.3/gems/strong_migrations-0.4.1/lib/strong_migrations/migration.rb:14:in `method_missing'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:600:in `method_missing'
/....../db/migrate/20190802132723_add_xyz_table.rb:2:in `<class:AddXyzTable>'
/....../db/migrate/20190802132723_add_xyz.rb:1:in `<top (required)>'
/.../.rvm/gems/ruby-2.6.3/gems/activesupport-5.1.7/lib/active_support/dependencies.rb:292:in `require'
/.../.rvm/gems/ruby-2.6.3/gems/activesupport-5.1.7/lib/active_support/dependencies.rb:292:in `block in require'
/.../.rvm/gems/ruby-2.6.3/gems/activesupport-5.1.7/lib/active_support/dependencies.rb:258:in `load_dependency'
/.../.rvm/gems/ruby-2.6.3/gems/activesupport-5.1.7/lib/active_support/dependencies.rb:292:in `require'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:962:in `load_migration'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:958:in `migration'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:953:in `disable_ddl_transaction'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1305:in `use_transaction?'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1297:in `ddl_transaction'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1229:in `execute_migration_in_transaction'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1201:in `block in migrate_without_lock'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1200:in `each'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1200:in `migrate_without_lock'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1148:in `block in migrate'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1317:in `with_advisory_lock'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1148:in `migrate'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:1007:in `up'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/migration.rb:985:in `migrate'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/tasks/database_tasks.rb:171:in `migrate'
/.../.rvm/gems/ruby-2.6.3/gems/strong_migrations-0.4.1/lib/strong_migrations/database_tasks.rb:4:in `migrate'
/.../.rvm/gems/ruby-2.6.3/gems/activerecord-5.1.7/lib/active_record/railties/databases.rake:58:in `block (2 levels) in <top (required)>'
/.../.rvm/gems/ruby-2.6.3/gems/rake-12.3.3/exe/rake:27:in `<top (required)>'

Our migration is simple, something like:

class AddXyzTable < ActiveRecord::Migration[4.2]
  create_table :xyz do |t|
    t.integer "abc_id",             null: false
    t.integer "efg_id", null: false
  end

  add_index "xyz", ["abc_id", "efg_id"], name: "index_xyz_on_abc_id_and_efg_id", using: :btree
  add_index "xyz", ["efg_id", "abc_id"], name: "index_xyz_on_efg_id_and_abc_id", using: :btree
end

Till now, it seems to be nothing related to our code, if it turns out to be, I will report here.

Thanks!

When adding a column with a non-null default the suggested migration does not work

PostgreSQL version 9.5.13, Ruby 2.3.8 and Rails 4.2.11.

When running this migration

class AddIsAdminToUsers < ActiveRecord::Migration
  def change
    add_column :users, :is_admin, :boolean, default: false, null: false
  end
end

I get this from strong_migrations:

== 20190314162335 AddIsAdminToUsers: migrating - Shard: master ================
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:

=== Dangerous operation detected #strong_migrations ===

Adding a column with a non-null default causes the entire table to be rewritten.
Instead, add the column without a default value, then change the default.

class AddIsAdminToUsers < ActiveRecord::Migration
  def up
    add_column :users, :is_admin, :boolean, null: false
    change_column_default :users, :is_admin, false
  end

  def down
    remove_column :users, :is_admin
  end
end

Then backfill the existing rows in the Rails console or a separate migration with disable_ddl_transaction!.

class BackfillAddIsAdminToUsers < ActiveRecord::Migration
  disable_ddl_transaction!

  def change
    User.find_in_batches do |records|
      User.where(id: records.map(&:id)).update_all is_admin: false
    end
  end
end

But when I create those suggested migrations and run them, the first one fails with the error:

== 20190314162335 AddIsAdminToUsers: migrating - Shard: master ================
-- add_column(:users, :is_admin, :boolean, {:null=>false})
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:

PG::NotNullViolation: ERROR:  column "is_admin" contains null values
: ALTER TABLE "users" ADD "is_admin" boolean NOT NULL

This is because of null: false in the suggested line

add_column :users, :is_admin, :boolean, null: false

Should instead this line be the following instead?

add_column :users, :is_admin, :boolean, null: true

and an additional third migration should be suggested to set the NOT NULL constraint?

change_column_null :users, :is_admin, false


undefined method 'columns' for class

First off, I love the gem and appreciate all of your open source work.

When I run a DB migration, I get the following error:

NameError: undefined method `columns' for class `#<Class:0x00007f9c1b3dea78>'
bin/rails:4:in `<main>'
Tasks: TOP => db:schema:dump => strong_migrations:alphabetize_columns

I am using makara with the makara_postgis adapter and version 0.1.9 of strong_migrations; I also updated to 0.2.2 and tried again with the same results. The columns are alphabetized after a schema dump, and that is the part that fails. The migration "succeeds" but looks as if it fails because the schema doesn't dump.

Full backtrace

NameError: undefined method `columns' for class `#<Class:0x00007fd33a842320>'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/strong_migrations-0.2.2/lib/tasks/strong_migrations.rake:14:in `alias_method'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/strong_migrations-0.2.2/lib/tasks/strong_migrations.rake:14:in `singleton class'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/strong_migrations-0.2.2/lib/tasks/strong_migrations.rake:13:in `block (2 levels) in <main>'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:271:in `block in execute'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:271:in `each'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:271:in `execute'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:213:in `block in invoke_with_call_chain'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:193:in `invoke_with_call_chain'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:237:in `block in invoke_prerequisites'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:235:in `each'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:235:in `invoke_prerequisites'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:212:in `block in invoke_with_call_chain'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:193:in `invoke_with_call_chain'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:182:in `invoke'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/activerecord-5.2.0/lib/active_record/railties/databases.rake:68:in `block (2 levels) in <main>'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:271:in `block in execute'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:271:in `each'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:271:in `execute'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:213:in `block in invoke_with_call_chain'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:193:in `invoke_with_call_chain'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:182:in `invoke'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/activerecord-5.2.0/lib/active_record/railties/databases.rake:61:in `block (2 levels) in <main>'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:271:in `block in execute'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:271:in `each'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:271:in `execute'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:213:in `block in invoke_with_call_chain'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:193:in `invoke_with_call_chain'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/task.rb:182:in `invoke'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/application.rb:160:in `invoke_task'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/application.rb:116:in `block (2 levels) in top_level'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/application.rb:116:in `each'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/application.rb:116:in `block in top_level'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/application.rb:125:in `run_with_threads'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/application.rb:110:in `top_level'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/railties-5.2.0/lib/rails/commands/rake/rake_command.rb:23:in `block in perform'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.1/lib/rake/application.rb:186:in `standard_exception_handling'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/railties-5.2.0/lib/rails/commands/rake/rake_command.rb:20:in `perform'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/railties-5.2.0/lib/rails/command.rb:48:in `invoke'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/railties-5.2.0/lib/rails/commands.rb:18:in `<main>'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/bootsnap-1.3.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/bootsnap-1.3.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/bootsnap-1.3.0/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/bootsnap-1.3.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/bootsnap-1.3.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/activesupport-5.2.0/lib/active_support/dependencies.rb:283:in `block in require'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/activesupport-5.2.0/lib/active_support/dependencies.rb:249:in `load_dependency'
/Users/chadwilken/.rbenv/versions/2.5.0/lib/ruby/gems/2.5.0/gems/activesupport-5.2.0/lib/active_support/dependencies.rb:283:in `require'
bin/rails:4:in `<main>'

Customized warning messages for each unsafe migration

Has there been any thought to allowing customization of the warning messages for each type of migration? A few migrations (rename_table, for instance) have workarounds onerous enough that we instead just recommend scheduling downtime.

rake db:migrate -> fatal: exception reentered

After running:

rake db:migrate --trace

I got:

Running via Spring preloader in process 20919
** Invoke db:migrate (first_time)
** Invoke environment (first_time)
** Execute environment
** Invoke db:load_config (first_time)
** Execute db:load_config
** Execute db:migrate
rake aborted!
can't modify frozen object
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/task.rb:188:in `extend_object'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/task.rb:188:in `extend'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/task.rb:188:in `add_chain_to'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/task.rb:182:in `rescue in invoke_with_call_chain'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/task.rb:171:in `invoke_with_call_chain'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/task.rb:165:in `invoke'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:150:in `invoke_task'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:106:in `block (2 levels) in top_level'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:106:in `each'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:106:in `block in top_level'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:115:in `run_with_threads'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:100:in `top_level'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:78:in `block in run'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:176:in `standard_exception_handling'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:75:in `run'
~/code/beautydate/bin/rake:12:in `<top (required)>'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:268:in `load'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:268:in `block in load'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:240:in `load_dependency'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:268:in `load'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/spring-1.6.2/lib/spring/command_wrapper.rb:40:in `call'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/spring-1.6.2/lib/spring/application.rb:185:in `block in serve'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/spring-1.6.2/lib/spring/application.rb:156:in `fork'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/spring-1.6.2/lib/spring/application.rb:156:in `serve'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/spring-1.6.2/lib/spring/application.rb:131:in `block in run'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/spring-1.6.2/lib/spring/application.rb:125:in `loop'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/spring-1.6.2/lib/spring/application.rb:125:in `run'
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/spring-1.6.2/lib/spring/application/boot.rb:18:in `<top (required)>'
~/.rbenv/versions/2.2.3/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
~/.rbenv/versions/2.2.3/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
-e:1:in `<main>'
fatal: exception reentered
~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:228:in `display_exception_backtrace': undefined method `join' for nil:NilClass (NoMethodError)
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:210:in `display_exception_details'
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:211:in `display_exception_details'
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:198:in `display_error_message'
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:185:in `rescue in standard_exception_handling'
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:176:in `standard_exception_handling'
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/rake-10.5.0/lib/rake/application.rb:75:in `run'
  from ~/code/beautydate/bin/rake:12:in `<top (required)>'
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:268:in `load'
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:268:in `block in load'
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:240:in `load_dependency'
  from ~/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:268:in `load'
  from ~/.rbenv/versions/2.2.3/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
  from ~/.rbenv/versions/2.2.3/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
  from -e:1:in `<main>'

I am using Ruby 2.2.3 on Mac OS X Yosemite, with Rails 4.2.5.1.

[Idea] Including dropping an index as a potentially dangerous operation

Hi!

Thanks for your work with strong_migrations - it's a great gem and I'm glad this exists!

Proposal: add drop_index to the list of dangerous operations that strong_migrations catches. As you probably know, this is dangerous because it can turn index lookups into table scans if done improperly.

We've definitely incurred downtime as a result of migrations dropping indexes while using strong_migrations, and would LOVE to roll this check into this gem. Thanks for your work! 👍

[Proposal]: Make all reference constraints deferrable

Foreign key constraints often raise issues when moving data, requiring programmers to remove them to complete data load tasks. A better approach is to make them deferrable (Postgres only) and run SET CONSTRAINTS ALL DEFERRED; in a transaction block before running your data load script.

The proposed addition would warn the user if the migration includes a non-deferrable foreign_key constraint.
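
To illustrate, a deferrable foreign key and a deferred data load might look like this sketch (raw SQL via execute, since older Rails versions have no DSL option for deferrable constraints; the orders/users tables and constraint name are made up):

```ruby
class AddDeferrableFkToOrders < ActiveRecord::Migration[5.2]
  def up
    safety_assured do
      execute <<~SQL
        ALTER TABLE orders
          ADD CONSTRAINT fk_orders_user_id
          FOREIGN KEY (user_id) REFERENCES users (id)
          DEFERRABLE INITIALLY IMMEDIATE
      SQL
    end
  end

  def down
    safety_assured do
      execute "ALTER TABLE orders DROP CONSTRAINT fk_orders_user_id"
    end
  end
end

# Later, in a data load script, checks can be postponed to commit time:
# ActiveRecord::Base.transaction do
#   ActiveRecord::Base.connection.execute("SET CONSTRAINTS ALL DEFERRED")
#   # ... bulk load ...
# end
```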

Instructions for adding a column with a default value are different in README and in the warning message

The instructions for adding a column with a default value in the README are:

1. Add the column without a default value
2. Add the default value
3. Commit the transaction
4. Backfill the column

Meanwhile, running an unsafe migration will return this list of instructions:

1. Add the column without a default value
2. Commit the transaction
3. Backfill the column
4. Add the default value

I think the console instructions are correct, because you need to commit the transaction before adding the default value; otherwise the DB will start backfilling inside the transaction, probably locking the table.
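
Following the console ordering, the migrations might look like this sketch (a hypothetical products table and featured column, purely for illustration):

```ruby
# 1 & 2: add the column without a default, outside a DDL transaction
class AddFeaturedToProducts < ActiveRecord::Migration[5.2]
  disable_ddl_transaction!

  def change
    add_column :products, :featured, :boolean
  end
end

# 3: backfill in batches (console or a separate migration)
# Product.in_batches.update_all featured: false

# 4: finally, set the default for new rows
class SetFeaturedDefaultOnProducts < ActiveRecord::Migration[5.2]
  def change
    change_column_default :products, :featured, false
  end
end
```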

Don't know if I've fixed the problem.

WARNING:  there is no transaction in progress
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:


 __          __     _____ _______ _
 \ \        / /\   |_   _|__   __| |
  \ \  /\  / /  \    | |    | |  | |
   \ \/  \/ / /\ \   | |    | |  | |
    \  /\  / ____ \ _| |_   | |  |_|
     \/  \/_/    \_\_____|  |_|  (_)

Adding a non-concurrent index locks the table. Instead, use:

  def change
    commit_db_transaction
    add_index :users, :some_column, algorithm: :concurrently
  end

Here's my code:

class IndexShippingInfosOnDropOffManifestId < ActiveRecord::Migration
  def change
    commit_db_transaction
    add_index :shipping_infos, column: :drop_off_manifest_id, algorithm: :concurrently, where: %("shipping_infos"."drop_off_manifest_id" IS NOT NULL)
  end
end

Would it be possible to catch enum columns that don't have a NOT NULL constraint and default value?

I just realized that I wasn't setting a NOT NULL constraint for any of my enum columns. This came to my attention when my API test returned null for one of my state columns, and I realized that null is never a valid value (this state must always start at pending.) I think it's a much better practice to define an integer value that represents nothing, e.g. none: 0, if that's necessary. This way you can also control the string value that represents 0, and the ActiveRecord helper methods are consistent when you use a _prefix. E.g. you can call model.foo_type_none? instead of !model.foo? (which checks for the opposite of presence.)

I suppose this is up for debate, because technically nil is a valid value for an enum column. It just means that the column hasn't been defined yet. But I think that this will lead to errors more often than being a useful default.

My idea is to scan the Rails model source code for enum calls and parse out the attribute. It would find the following enums:

class Model < ApplicationRecord
  enum state: {
    pending: 0,
    complete: 1,
  }

  enum :foo => %w[bar baz]
  enum 'bar' => %w[hello world]
  enum "baz" => %w[one last example]
end

Then it would raise 4 errors for the following migration:

class CreateModels < ActiveRecord::Migration[5.2]
  def change
    create_table :models do |t|
      t.integer :state
      t.integer :foo
      t.integer :bar
      t.integer :baz
    end

    change_column_default :models, :state, from: 0, to: 0
    change_column_default :models, :foo, from: 0, to: 0
    change_column_default :models, :bar, from: 0, to: 0
    change_column_default :models, :baz, from: 0, to: 0
  end
end

Saying that all of the integer columns must be written as:

      t.integer :state, null: false
      t.integer :foo, null: false
      t.integer :bar, null: false
      t.integer :baz, null: false

This would have been a very helpful heuristic for me, because I just forgot that integers can be nil, even when I'm setting the default to 0. Obviously it wouldn't handle all of the cases (custom DSLs, metaprogramming, or calls to add_column and change_column_null), but it would be nice to catch 99% of these cases.

And also maybe as a configurable option, if you would rather leave it disabled by default.

Fresh install, no warning

Apologies, I must be missing something obvious... I installed the gem and created this migration:

class AddBadStuff < ActiveRecord::Migration[5.2]
  def change
    add_column :events, :dumb_text, :text, default: "lame"
  end
end

and ran rake db:migrate... no warning. Is this not how the gem is intended to be used? Are there additional installation instructions?

Rails 5.2.2.1
Postgres 9.x

Consider running arbitrary SQL a dangerous operation

This gem provides awesome protection against dangerous operations when using the Rails migration DSL. Unfortunately it's easy for developers to inadvertently bypass this safety net when running arbitrary SQL e.g.

class AddNumWheelsToCars < ActiveRecord::Migration
  def up
    execute('ALTER TABLE cars ADD COLUMN num_wheels integer DEFAULT 4 NOT NULL')
  end

  def down
    remove_column(:cars, :num_wheels)
  end
end

It would be great if execute was considered a dangerous operation that required developers to explicitly indicate that they know what they're doing. I'd be happy to put up a PR if you're open to the change.

Use of 'commit_db_transaction'

While this speeds things up when everything works according to plan, I've found that it breaks migrations if anything that runs after the command goes wrong. If the migration fails and reverts to the previous schema version, the add_column isn't undone, meaning the database is out of sync with the schema. Running rake db:migrate again yields a "column already exists" error.

Do you have an official stance on this situation?

Ignore Migrations before a certain Timestamp

As a developer adding StrongMigrations to my long-existing application, I would like the ability to only use StrongMigrations moving forward, and not on historical migrations.

StrongMigrations.start_after does not appear to work.
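
For reference, the setting is expected to go in an initializer; the timestamp below is an example value (use the version of your last pre-existing migration):

```ruby
# config/initializers/strong_migrations.rb
# Only check migrations created after this version
StrongMigrations.start_after = 20190101000000
```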

[Idea] suggest developers to use timestamptz not timestamp type

Description:
Suggest developers use the timestamptz type instead of timestamp when creating any new migration. Having timestamp columns without a time zone is always a pain, especially when migrating data between two different databases.

@ankane would it be OK to merge this if I create a PR for it?

Documentation for tri state problem for booleans

Hello,
It's not clear to me how to add null: false and default: false to a boolean column using strong_migrations. Using your approach, I would 1) create the column, 2) add a default, 3) flip everything to false, and 4) add null: false, but when I try to add the null: false, it complains. How would you recommend handling this?
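
One sequence that should satisfy the checks (a sketch under the assumptions above, with a hypothetical flag column on users; not an official answer):

```ruby
class AddFlagToUsers < ActiveRecord::Migration[5.2]
  def up
    add_column :users, :flag, :boolean          # 1) nullable column
    change_column_default :users, :flag, false  # 2) default for new rows
  end

  def down
    remove_column :users, :flag
  end
end

class BackfillFlagOnUsers < ActiveRecord::Migration[5.2]
  disable_ddl_transaction!

  def change
    User.in_batches.update_all flag: false      # 3) flip existing rows
  end
end

class AddFlagNullConstraintToUsers < ActiveRecord::Migration[5.2]
  def change
    change_column_null :users, :flag, false     # 4) enforce NOT NULL
  end
end
```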

Rails 5 in_batches method

First off, thanks a ton for this fantastic guide.

I was recently following the instructions for adding a column and wondered if there was a cleaner way to batch update yet.

Turns out in rails 5 there's a new in_batches method that can yield enumerators of relation objects, which lets you rewrite

User.find_in_batches do |users|
  User.where(id: users.map(&:id)).update_all some_column: "default_value"
end

as

User.in_batches.update_all some_column: "default_value"

If you also prefer the aesthetics of this new method, I can go ahead and open a PR with that option for Rails 5. But if you don't want to clutter the guide with more "For Rails < 5 use this, for Rails >= 5 use that", I understand. Just thought I'd share this new method :)

[Idea] Allow setting postgresql_version in config

Add a target_postgresql_version (or something like that) to StrongMigrations config.
i.e. StrongMigrations.target_postgres_version = 10.7 which is checked in postgresql_version

Background:
Adding a column with a default is unsafe in PostgreSQL < 11 and will raise an error. This passed on my local machine because I'm running PostgreSQL 11.3. However, on our staging/production servers we're running PostgreSQL 10.7. So strong_migrations let the migration through in development but failed when building on staging.

It would be safer to allow setting a production target version in StrongMigrations config so this doesn't slip through in development.
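
The proposed configuration might look something like this (the option name is hypothetical, mirroring the issue; it is not an existing setting):

```ruby
# config/initializers/strong_migrations.rb
# Hypothetical option from this proposal: check migrations against the
# Postgres version running in production, not the local one
StrongMigrations.target_postgresql_version = "10.7"
```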

What is an extremely large table?

Thanks for an amazing project! There is this message:

Adding an index with more than three columns only helps on extremely large tables

How large is extremely large? 1G? 1TB?

Have any reading I can do on this? I can then do a PR adding some info to this message and to the readme.

Thanks!
John

change_column is overly inclusive

The change_column check raises for any column change, not just the unsafe ones I'm aware of (changing the column type or name, adding a default value, or adding a constraint of some kind). I am in fact removing a constraint from the column (null: true). Is this meant to be flagged as unsafe for some reason?

Stacktrace with remove_column and optional column type

Hi there,

I just added this gem and immediately bumped into an issue. :-)

When using remove_column with the column type (to allow the down migration to recreate the column) such as:

def change
  remove_column :table, :column, :integer
end

It is failing with a stack trace:

NoMethodError: undefined method `each' for :integer:Symbol
.../gems/strong_migrations-0.3.0/lib/strong_migrations/migration.rb:191:in `options_str'
.../gems/strong_migrations-0.3.0/lib/strong_migrations/migration.rb:47:in `method_missing'
.../db/migrate/20181015190940_remove_column_from_table.rb:3:in `change'
...

From looking at the code, it appears to assume the third argument will always be an option hash instead of a column type symbol.

How to create a field with null: false and default: 0

Hello guys! Before Rails 1_000 it was possible to create a field with a default value and without null values. I know it's now the 21st century, we send Teslas to space, and the whole world has changed, but how can I now create a simple field? Just want to know. Also, will this be possible in Rails 100_000? Thanks!

add_column should raise warning

I was wondering why add_column does not always raise a warning to add self.ignored_columns = [:column].

ActiveRecord::PreparedStatementCacheExpired gets raised if you run the migration and roll out your servers.

The structure of the `find_in_batches` example in README.md

Given that this gem is designed to prevent us all from shooting ourselves in the foot, I was curious why the example of backfilling a default column in the README.md was structured in this way:

User.find_in_batches do |users|
  User.where(id: users.map(&:id)).update_all some_column: "default_value"
end

There seems to be some hidden knowledge in doing it this way vs. the simpler

User.find_in_batches do |users|
  users.update_all some_column: "default_value"
end

I was wondering if this is important in making good migrations, or if I could use the simpler version.

[QUESTION]: Do custom checks not run for safety_assured blocks?

I am removing a column as below:

def up
  remove_column :sample, :region_id
end

Strong Migrations forces me to use a safety_assured block as below, and once I do that it runs successfully:

Deploy the code, then wrap this step in a safety_assured { ... } block.

class RemoveColumnrulesets < ActiveRecord::Migration[5.1]
  def change
    safety_assured { remove_column :sample, :region_id }
  end
end

In the initializer I have a custom check that should stop anyone from removing a column on the above table, but it doesn't actually stop it; the migration runs successfully.

Custom check below:

StrongMigrations.add_check do |method, args|
  if method == :remove_column && args[0].to_s == "sample"
    stop! "Don't remove columns on the sample table"
  end
end

[Idea] drop_table as a dangerous operation?

I know this was also mentioned briefly in #49, but I think drop_table should be included in the list of dangerous operations. When a drop_table call is detected, we could recommend that the user first removes any code that uses the table and then deploys the migration separately.

I've just been bitten by this (I dropped a table that was still used by the rest of the servers, and the load balancer kicked them all out) and it would have been really useful to have strong_migrations warn me about it.

NOT AN ISSUE, JUST A QUERY

Regarding bigint primary keys (Postgres & MySQL): is there a way I can raise an error for any new primary key additions that don't use BIGINT as the primary key?

Similarly, if I am adding a new column like created_at to a table, can I raise an error if it doesn't follow a standard, like requiring a time zone on the date type?

Default column with not null constraint recommendation is incorrect

With a migration that looks like this:

class AddMetricUnitsToPassengerProfiles < ActiveRecord::Migration[5.1]
  def change
    add_column :passenger_profiles, :metric_units, :boolean, default: true, null: false
  end
end

We are getting this suggestion:

== 20190507182930 AddMetricUnitsToPassengerProfiles: migrating ================
rails aborted!
StandardError: An error has occurred, this and all later migrations canceled:

=== Dangerous operation detected #strong_migrations ===

Adding a column with a non-null default causes the entire table to be rewritten.
Instead, add the column without a default value, then change the default.

class AddMetricUnitsToPassengerProfiles < ActiveRecord::Migration[5.1]
  def up
    add_column :passenger_profiles, :metric_units, :boolean, null: false
    change_column_default :passenger_profiles, :metric_units, true
  end

  def down
    remove_column :passenger_profiles, :metric_units
  end
end

Then backfill the existing rows in the Rails console or a separate migration with disable_ddl_transaction!.

class BackfillAddMetricUnitsToPassengerProfiles < ActiveRecord::Migration[5.1]
  disable_ddl_transaction!

  def change
    PassengerProfile.in_batches.update_all metric_units: true
  end
end

The problem is that the first operation in the suggestion removes the default but keeps the null: false constraint, so the first line (add_column :passenger_profiles, :metric_units, :boolean, null: false) will explode due to null values in the new column.

Removing null: false from that line makes it work, but also removes the constraint from the column.

I looked at previous issues but didn't find anything. We're running the most recent version of the gem: 0.3.1.

Anyone have any insights into this? I know how to write the migration, but I just want to make the recommendation better for other engineers that stumble into this.

[Idea] Warn on the addition of a validated foreign key

I recently ran into a problem adding a new foreign key constraint at scale. It seems that by default, when adding a foreign key, the entire table is validated for the new constraint, which involves table locking.

From the PostgreSQL docs

If the constraint is marked NOT VALID, the potentially-lengthy initial check to verify that all rows in the table satisfy the constraint is skipped.

In my particular case, I was adding the foreign key to a new column which would be NULL for all existing records anyway thus not needing the validation at all. Here's the full example that caused me problems:

add_column :foos, :bar_id, :integer
add_foreign_key :foos, :bars

And here is the preferred way:

add_column :foos, :bar_id, :integer
add_foreign_key :foos, :bars, validate: false

Note: validate: false which results in NOT VALID option being added to the produced ALTER TABLE statement.
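
The existing rows can then be checked later under a much weaker lock. A sketch (assuming Rails 5.2+, which provides validate_foreign_key; table names from the example above):

```ruby
# Migration 1: add the constraint without scanning existing rows
add_column :foos, :bar_id, :integer
add_foreign_key :foos, :bars, validate: false

# Migration 2 (later, outside peak hours): validate existing rows,
# which takes only a SHARE UPDATE EXCLUSIVE lock in Postgres
validate_foreign_key :foos, :bars
```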

What do you think?

Internal use of `execute` preventing `change` autoreversibility

Hi!

There are paths when the Migration module internally uses connection.execute. When these paths are hit while rolling back a migration that uses change, it causes ActiveRecord to throw IrreversibleMigration: https://github.com/rails/rails/blob/master/activerecord/lib/active_record/migration/command_recorder.rb#L88. ActiveRecord is ignorant of strong_migrations and thinks the user is trying to automatically reverse an arbitrary execute statement.

The offending lines are: https://github.com/ankane/strong_migrations/blob/master/lib/strong_migrations/migration.rb#L84

https://github.com/ankane/strong_migrations/blob/master/lib/strong_migrations/migration.rb#L97

so for now this prevents:

  1. Rolling back add_index within change if StrongMigrations.auto_analyze = true and using PostgreSQL.
  2. Rolling back add_column with type JSON within change if using PostgreSQL.

I'm working around these by using explicit up/down for now, but it's easy to forget and get tripped up.

[Proposal]: unsafe change_column_null default value

Rails' change_column_null method accepts four arguments, with the fourth one replacing existing NULLs with some other value. Looks like a kinda useful feature, although the implementation of such a replacement might be pretty dangerous on large tables (Postgres, MySQL):

UPDATE #{quote_table_name(table_name)}
SET #{quote_column_name(column_name)}=#{quote_default_expression(default, column)}
WHERE #{quote_column_name(column_name)} IS NULL

What we're doing here is a WHERE query on a table that might lack the needed index on column_name, plus an UPDATE of (potentially) millions of records at once.
A slightly safer way might be to require the user to run this UPDATE in batches manually before firing change_column_null.
WDYT?
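
A batched version of that replacement, run manually before change_column_null, might look like this (a sketch; the User model, role column, and batch size are made up):

```ruby
# Backfill NULLs in batches instead of one giant UPDATE
User.where(role: nil).in_batches(of: 10_000) do |relation|
  relation.update_all(role: "member")
  sleep(0.01) # brief pause to reduce lock/replication pressure
end

# Then set the constraint without the 4th argument:
# change_column_null :users, :role, false
```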

change_column_null (without 4th argument) still locks postgres

This is related to:
#63
#69

I've just run the following on a postgres DB with 7M rows.

class AddUserVisibleToEvents < ActiveRecord::Migration[5.1]
  def up
    add_column :events, :user_visible, :boolean, null: true
    change_column_default :events, :user_visible, false
  end

  def down
    remove_column :events, :user_visible
  end
end

The above was fast. 👍

Then I ran:

class BackfillAddUserVisibleToEvents < ActiveRecord::Migration[5.1]
  disable_ddl_transaction!

  def change
    Event.in_batches.update_all user_visible: true
  end
end

As expected, this took several hours, however it didn't stop the production DB running 👍

Then I ran this:

class AddNullFalseToUserVisibleInEvents < ActiveRecord::Migration[5.1]
  def change
    change_column_null(:events, :user_visible, false)
  end
end

... and it locked the DB 👎

Note that I did not use the 4th argument, but it was still "dangerous".

I removed that migration and then ran:

class AddIndexToUserVisibleInEvents < ActiveRecord::Migration[5.1]
  disable_ddl_transaction!

  def change
    add_index :events, :user_visible, algorithm: :concurrently
  end
end

And it ran for about 15 minutes and did not lock the DB 👍

So my question... now that the index is in place, will change_column_null(:events, :user_visible, false) be faster now (since there are no NULL values!) or is there some other way I can SET NOT NULL on the column?

If the index will help, can we warn not to run change_column_null until the index is in place? Alternatively, if there is some SQL that will add NOT NULL without checking all the existing values, could that be suggested if change_column_null is used?

Thanks

Adding table constraints

Hi, I just ran across a scenario that might be interesting to add to this gem.

We are using Postgres, and adding a new constraint to an existing table locks the whole table with an exclusive lock while the existing rows are checked. For example:

ALTER TABLE products ADD CONSTRAINT positive_price CHECK (price > 0);

However, a much better approach is using the NOT VALID option. With that option it's possible to add a new constraint that only applies to new and updated records while temporarily skipping the expensive check for all existing rows. The existing rows can be checked using a separate VALIDATE CONSTRAINT command with a much less intrusive SHARE UPDATE EXCLUSIVE lock. For example:

ALTER TABLE products ADD CONSTRAINT positive_price CHECK (price > 0) NOT VALID;
ALTER TABLE products VALIDATE CONSTRAINT positive_price;

Source: https://www.postgresql.org/docs/9.6/static/sql-altertable.html

I'm not sure if this scenario is a good fit for this gem, though, as there is no default Rails method for adding such constraints and thus intercepting it is much harder. Also it probably does not apply to all SQL database systems. What do you think?
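Even without a dedicated Rails method, the two-step approach can be driven from a migration via execute. A small plain-Ruby sketch of the SQL generation (the helper name is mine; the statements match the Postgres docs above):

```ruby
# Sketch: build the two statements for the safer two-step approach.
# Step 1 adds the constraint without scanning existing rows (NOT VALID);
# step 2 validates existing rows under a SHARE UPDATE EXCLUSIVE lock.
def check_constraint_sql(table, name, expression)
  [
    "ALTER TABLE #{table} ADD CONSTRAINT #{name} CHECK (#{expression}) NOT VALID",
    "ALTER TABLE #{table} VALIDATE CONSTRAINT #{name}"
  ]
end
```

In a migration, each statement could be run with execute, ideally with the VALIDATE step in a separate migration so the first can deploy independently.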

Safer defaults for lock timeouts

Commit b57486d added some helpful suggestions around setting lock timeouts in migrations. The activerecord-safer_migrations gem provides some nice capabilities to set default lock/statement timeouts and override them on a per-migration basis. Do you think it would be worth adding similar functionality to this gem? I'd love to only include one gem in my projects for safe/strong migrations :)
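For reference, a sketch of what such an initializer could look like. These settings are assumptions about the installed version of strong_migrations (recent versions do expose them; check the README of the version you use):

```ruby
# config/initializers/strong_migrations.rb
# Assumed settings; available in recent strong_migrations versions.
StrongMigrations.statement_timeout = 1.hour   # applied to migrations
StrongMigrations.lock_timeout = 10.seconds    # fail fast instead of queueing behind locks
```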

No way to disable for migration as a whole

Very excited about this gem and adding it to a large app. Unfortunately, I am stuck combing through hundreds of migrations to mark them as safe. Is there an easier way, such as marking the whole migration as safe without having to edit individual lines?
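One option that avoids editing old migrations: strong_migrations supports a start_after config so checks only apply to migrations newer than a given version. A sketch, assuming the installed version supports it (check the README):

```ruby
# config/initializers/strong_migrations.rb
# Skip checks for all migrations up to and including this version
# (use the timestamp of your most recent existing migration).
StrongMigrations.start_after = 20180101000000
```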

For adding new columns, warning messages are mixed

__          __     _____ _______ _
 \ \        / /\   |_   _|__   __| |
  \ \  /\  / /  \    | |    | |  | |
   \ \/  \/ / /\ \   | |    | |  | |
    \  /\  / ____ \ _| |_   | |  |_|
     \/  \/_/    \_\_____|  |_|  (_)

Adding a column with a non-null default requires
the entire table and indexes to be rewritten. Instead:

1. Add the column without a default value
2. Commit the transaction
3. Backfill the column
4. Add the default value

(screenshot of the warning output, taken 2017-10-11)

Steps 2–4 seem to be in a mixed order between the two messages; which ordering is correct?

[Idea] Add unless index_exists? on index creation

I recently used this gem to concurrently add an index on a large, busy table on Heroku.

My connection dropped midway, but the index creation finished on the server.

It seems some of Rails' bookkeeping failed, and now new builds fail: Rails thinks the migration still needs to run, but the index already exists.

I think adding unless index_exists? to the recommendation would be useful for others who hit similar issues.
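A sketch of what the guarded recommendation could look like (table and column names are hypothetical; index_exists? is standard Active Record):

```ruby
class AddEmailIndexToUsers < ActiveRecord::Migration[5.1]
  disable_ddl_transaction!

  def change
    # Guard against the index already existing on the server,
    # e.g. after a dropped connection during a previous run.
    unless index_exists?(:users, :email)
      add_index :users, :email, algorithm: :concurrently
    end
  end
end
```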

`ANALYZE` after creating an index, is it useful?

Hi there!

I've noticed that you suggest running ANALYZE after creating an index, but is it really useful? Of course, it always makes sense to run ANALYZE frequently, or to tune it to be more efficient (a separate topic), but how is it connected to the newly created index? I'm afraid it isn't.

What ANALYZE does is gather statistics about the contents of table columns, which the planner uses to estimate query costs. Since column contents are not affected by an index, in the majority of cases it doesn't make sense to run ANALYZE after index creation. It would be more useful to run it after bulk updates/inserts/deletes.

Am I wrong in my reasoning?
