
puppet-consul


This module manages Consul servers and agents.

Compatibility

WARNING: Backwards incompatible changes happen in order to more easily support new versions of consul. Pin to the version that works for your setup!

Consul Version    Recommended Puppet Module Version
>= 1.11.x         >= 6.0.0
1.1.0 - 1.10.x    4.0.0 - 7.0.x
0.9 - 1.1.0       <= 3.4.2
0.8.x             <= 3.2.4
0.7.0             <= 2.1.1
0.6.0             <= 2.1.1
0.5.x             1.0.3
0.4.x             0.4.6

What This Module Affects

  • Installs the consul daemon (via url or package)
    • If installing from zip, you must ensure the unzip utility is available (see the example after this list).
    • If installing from docker, you must ensure puppetlabs-docker_platform module is available.
    • If installing on windows, you must install the puppetlabs/powershell module.
  • Optionally installs a user to run it under
    • NOTE: users enabling this and just starting with Consul should consider setting manage_user_home_location to true. It defaults to false for backwards compatibility.
  • Installs a configuration file (/etc/consul/config.json)
  • Manages the consul service via upstart, sysv, systemd, or nssm.
  • Optionally installs the Web UI
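
For example, a minimal sketch (not taken from this module's documentation) of a zip-based install that ensures unzip is present; the version shown is illustrative and should be pinned to whatever you actually need:

ensure_packages(['unzip'])

class { 'consul':
  install_method => 'url',
  version        => '1.15.4', # illustrative version
  config_hash    => {
    'data_dir' => '/opt/consul',
  },
}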

Usage

To set up a single Consul server with several agents attached, configure the server as follows:

class { 'consul':
  config_hash => {
    'bootstrap_expect' => 1,
    'data_dir'         => '/opt/consul',
    'datacenter'       => 'east-aws',
    'log_level'        => 'INFO',
    'node_name'        => 'server',
    'server'           => true,
  },
}

On the agent(s):

class { 'consul':
  config_hash => {
    'data_dir'   => '/opt/consul',
    'datacenter' => 'east-aws',
    'log_level'  => 'INFO',
    'node_name'  => 'agent',
    'retry_join' => ['172.16.0.1'],
  },
}

Disable install and service components:

class { 'consul':
  install_method => 'none',
  init_style     => false,
  manage_service => false,
  config_hash => {
    'data_dir'   => '/opt/consul',
    'datacenter' => 'east-aws',
    'log_level'  => 'INFO',
    'node_name'  => 'agent',
    'retry_join' => ['172.16.0.1'],
  },
}

Install the (HashiCorp) packages:

class { 'consul':
  install_method  => 'package',
  manage_repo     => $facts['os']['name'] != 'Archlinux',
  init_style      => 'unmanaged',
  manage_data_dir => true,
  manage_group    => false,
  manage_user     => false,
  config_dir      => '/etc/consul.d/',
  config_hash     => {
    'server'   => true,
  },
}
systemd::dropin_file { 'foo.conf':
  unit           => 'consul.service',
  content        => "[Unit]\nConditionFileNotEmpty=\nConditionFileNotEmpty=/etc/consul.d/config.json",
  notify_service => true,
}

Web UI

To install and run the Web UI on the server, include ui => true in the config_hash. You may also want to change the client_addr to 0.0.0.0 from the default 127.0.0.1, for example:

class { 'consul':
  config_hash => {
    'bootstrap_expect' => 1,
    'client_addr'      => '0.0.0.0',
    'data_dir'         => '/opt/consul',
    'datacenter'       => 'east-aws',
    'log_level'        => 'INFO',
    'node_name'        => 'server',
    'server'           => true,
    'ui'               => true,
  },
}

For more security, consider leaving client_addr set to 127.0.0.1 and fronting the UI with a reverse proxy:

$aliases = ['consul', 'consul.example.com']

# Reverse proxy for Web interface
include 'nginx'

$server_names = [$facts['networking']['fqdn']] + $aliases

nginx::resource::vhost { $facts['networking']['fqdn']:
  proxy       => 'http://localhost:8500',
  server_name => $server_names,
}

Service Definition

To declare the availability of a service, you can use the service define. This will register the service through the local consul client agent and optionally configure a health check to monitor its availability.

consul::service { 'redis':
  checks  => [
    {
      script   => '/usr/local/bin/check_redis.py',
      interval => '10s'
    },
  ],
  port    => 6379,
  tags    => ['master'],
  meta    => {
    SLA => '1'
  },
}

See the service.pp docstrings for all available inputs.

You can also use consul::services which accepts a hash of services, and makes it easy to declare in hiera. For example:

consul::services:
  service1:
    address: "%{facts.networking.ip}"
    checks:
      - http: http://localhost:42/status
        interval: 5s
    port: 42
    tags:
      - "foo:%{facts.custom.bar}"
    meta:
      SLA: 1
  service2:
    address: "%{facts.networking.ip}"
    checks:
      - http: http://localhost:43/status
        interval: 5s
    port: 43
    tags:
      - "foo:%{facts.custom.baz}"
    meta:
      SLA: 4

Watch Definitions

consul::watch { 'my_watch':
  handler     => 'handler_path',
  passingonly => true,
  service     => 'serviceName',
  service_tag => 'serviceTagName',
  type        => 'service',
}

See the watch.pp docstrings for all available inputs.

You can also use consul::watches which accepts a hash of watches, and makes it easy to declare in hiera.
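
A sketch of the equivalent Hiera data, assuming the keys mirror the consul::watch parameters shown above:

consul::watches:
  my_watch:
    handler: 'handler_path'
    passingonly: true
    service: 'serviceName'
    service_tag: 'serviceTagName'
    type: 'service'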

Check Definitions

consul::check { 'true_check':
  interval => '30s',
  script   => '/bin/true',
}

See the check.pp docstrings for all available inputs.

You can also use consul::checks which accepts a hash of checks, and makes it easy to declare in hiera.
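
A sketch of the equivalent Hiera data, assuming the keys mirror the consul::check parameters shown above:

consul::checks:
  true_check:
    interval: '30s'
    script: '/bin/true'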

Removing Service, Check and Watch definitions

Set ensure => absent when removing existing service, check, and watch definitions. This ensures Consul is reloaded via SIGHUP rather than restarted. If you have purge_config_dir set to true and simply remove the definition, Consul will restart instead.
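
For example, a minimal sketch that removes the redis service declared earlier:

consul::service { 'redis':
  ensure => absent,
}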

ACL Definitions

Policy/Token system

Starting with Consul 1.4.0, a new ACL system was introduced that separates rules (policies) from tokens.

Both tokens and policies can be managed by Puppet:

consul_policy {'test_policy':
  description   => 'test description',
  rules         => [
    {
      'resource'    => 'service_prefix',
      'segment'     => 'test_service',
      'disposition' => 'read'
    },
    {
      'resource'    => 'key',
      'segment'     => 'test_key',
      'disposition' => 'write'
    },
  ],
  acl_api_token => 'e33653a6-0320-4a71-b3af-75f14578e3aa',
}

consul_token {'test_token':
  accessor_id       => '7c4e3f11-786d-44e6-ac1d-b99546a1ccbd',
  policies_by_name  => [
   'test_policy'
  ],
  policies_by_id    => [
    '652f27c9-d08d-412b-8985-9becc9c42fb2'
  ],
}

Here is an example that automatically creates a policy and token for each host. For development environments, acl_api_token can be the bootstrap token; for production it should be a dedicated token with read/write access to the ACLs.

accessor_id must be provided. It is a UUID and can be generated in several ways:

  1. Statically generated and assigned to the resource. See /usr/bin/uuidgen on Unix systems.
  2. Dynamically derived from the $facts['dmi']['product']['uuid'] fact in Puppet (useful when a consul_token maps 1:1 to a host).
  3. Dynamically derived from an arbitrary string using fqdn_uuid() (useful for giving all instances of a resource a unique ID).
  # Create ACL policy that allows nodes to update themselves and read others
  consul_policy { $facts['networking']['hostname']:
    description => "${facts['networking']['hostname']}, generated by puppet",
    rules => [
      {
        'resource' => 'node',
        'segment' => $facts['networking']['hostname'],
        'disposition' => 'write'
      },
      {
        'resource' => 'node',
        'segment' => '',
        'disposition' => 'read'
      }
    ],
    acl_api_token => $acl_api_token,
  }

  consul_token { $facts['networking']['hostname']:
    accessor_id => fqdn_uuid($facts['networking']['hostname']),
    policies_by_name => [$facts['networking']['hostname']],
    acl_api_token => $acl_api_token,
  }

Predefining the token secret is supported by setting the secret_id property.
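
For example (the accessor and secret UUIDs below are placeholders):

consul_token { 'test_token':
  accessor_id      => '7c4e3f11-786d-44e6-ac1d-b99546a1ccbd',
  secret_id        => '68b439e6-6a14-43a4-a539-1a4a8eac2d23',
  policies_by_name => ['test_policy'],
}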

Externally created tokens and policies may be used by referencing them by ID (token: accessor_id property; policy: ID property; linking: policies_by_id property).

Legacy system

consul_acl { 'ctoken':
  ensure => 'present',
  rules  => {
    'key' => {
      'test' => {
        'policy' => 'read'
      },
    },
  },
  type   => 'client',
}

Do not use duplicate names, and remember that the ACL ID (a read-only property for this type) is used as the token for requests, not the name.

Optionally, you may supply an acl_api_token. This will allow you to create ACLs if the anonymous token doesn't permit ACL changes (which is likely). The api token may be the master token, another management token, or any client token with sufficient privileges.
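
A minimal sketch of supplying acl_api_token to the legacy type (the token value is a placeholder):

consul_acl { 'ctoken':
  ensure        => 'present',
  rules         => { 'key' => { 'test' => { 'policy' => 'read' } } },
  type          => 'client',
  acl_api_token => '<your management token>',
}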

NOTE: This module currently cannot parse ACL tokens generated through means other than this module. Don't mix Puppet and non-Puppet ACLs for best results! (Pull requests are welcome to allow co-existence with ACLs generated from normal HCL.)

Prepared Queries and Prepared Query Templates

consul_prepared_query { 'consul':
  ensure               => 'present',
  service_name         => 'consul',
  service_failover_n   => 1,
  service_failover_dcs => [ 'dc1', 'dc2' ],
  service_only_passing => true,
  service_tags         => [ 'tag1', 'tag2' ],
  service_meta         => { 'version' => '1.2.3' },
  ttl                  => 10,
}

or a prepared query template:

consul_prepared_query { 'consul':
  ensure               => 'present',
  service_name         => 'consul-${match(1)}', # lint:ignore:single_quote_string_with_variables
  service_failover_n   => 1,
  service_failover_dcs => [ 'dc1', 'dc2' ],
  service_only_passing => true,
  service_tags         => [ '${match(2)}' ], # lint:ignore:single_quote_string_with_variables
  node_meta            => { 'is_virtual' => 'false' },
  template             => true,
  template_regexp      => '^consul-(.*)-(.*)$',
  template_type        => 'name_prefix_match',
}

Key/Value Objects

Example:

consul_key_value { 'key/path':
  ensure     => 'present',
  value      => 'myvaluestring',
  flags      => 12345,
  datacenter => 'dc1',
}

This provider allows you to manage key/value pairs. It tries to be smart in two ways:

  1. It caches the data accessible from the kv store with the specified acl token.
  2. It does not update the key if the value and flags are already correct.

These parameters are mandatory when using consul_key_value:

  • name Name of the key/value object (the path in the key/value store).
  • value Value of the key.

The optional parameters only need to be specified if you require changes from default behaviour.

  • flags {Integer} an opaque unsigned integer that can be attached to each entry. Clients can choose to use this however makes sense for their application. Default is 0.
  • acl_api_token {String} Token for accessing the ACL API. Default is ''.
  • datacenter {String} Use the key/value store in specified datacenter. If '' (default) it will use the datacenter of the Consul agent at the HTTP address.
  • protocol {String} protocol to use. Either 'http' (default) or 'https'.
  • port {Integer} consul port. Defaults to 8500.
  • hostname {String} consul hostname. Defaults to 'localhost'.
  • api_tries {Integer} number of tries when contacting the Consul REST API. Timeouts are not retried because a timeout already takes long. Defaults to 3.
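
A sketch combining several of the optional parameters; the token, hostname, and port values are placeholders:

consul_key_value { 'key/path':
  ensure        => 'present',
  value         => 'myvaluestring',
  flags         => 12345,
  datacenter    => 'dc1',
  acl_api_token => '<your acl token>',
  protocol      => 'https',
  port          => 8500,
  hostname      => 'consul.example.com',
  api_tries     => 3,
}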

Limitations

Depends on the JSON gem or a modern Ruby (Ruby 1.8.7 is not officially supported). Current versions of Puppet Server are new enough (2.0.3 and greater are known to work).

Windows Experimental Support

The Windows service no longer needs NSSM to host the service. Consul is installed as a native Windows service using the built-in sc.exe. The following caveats apply:

  • By default, everything is installed into c:\ProgramData\Consul\ and $consul::config_hash['data_dir'] defaults to that location, so you don't need to set it in your config_hash.
  • The service user needs the 'log on as a service' right to run Consul as a service (granting this right is not yet supported by this module); therefore consul::manage_user and consul::manage_group default to false.
  • consul::user defaults to NT AUTHORITY\NETWORK SERVICE (which has the 'log on as a service' right by default).
  • consul::group defaults to Administrators.

Example:

class { 'consul':
  config_hash => {
    'bootstrap_expect' => 1,
    'datacenter'       => 'dc1',
    'log_level'        => 'INFO',
    'node_name'        => 'server',
    'server'           => true,
  },
}

Telemetry

The Consul agent collects various runtime metrics about the performance of different libraries and subsystems. These metrics are aggregated on a ten second interval and are retained for one minute.

To view this data, you must send a signal to the Consul process: on Unix, this is USR1 while on Windows it is BREAK. Once Consul receives the signal, it will dump the current telemetry information to the agent's stderr.

This telemetry information can be used for debugging or otherwise getting a better view of what Consul is doing.

Example:

class { 'consul':
  config_hash => {
    'bootstrap_expect' => 1,
    'data_dir'         => '/opt/consul',
    'datacenter'       => 'east-aws',
    'log_level'        => 'INFO',
    'node_name'        => 'server',
    'server'           => true,
    'telemetry' => {
      'statsd_address' => 'localhost:9125',
      'prefix_filter' => [
        '+consul.client.rpc',
        '+consul.client.rpc.exceeded',
        '+consul.acl.cache_hit',
        '+consul.acl.cache_miss',
        '+consul.dns.stale_queries',
        '+consul.raft.state.leader',
        '+consul.raft.state.candidate',
        '+consul.raft.apply',
        '+consul.raft.commitTime',
        '+consul.raft.leader.dispatchLog',
        '+consul.raft.replication.appendEntries',
        '+consul.raft.leader.lastContact',
        '+consul.rpc.accept_conn',
        '+consul.catalog.register',
        '+consul.catalog.deregister',
        '+consul.kvs.apply',
        '+consul.leader.barrier',
        '+consul.leader.reconcile',
        '+consul.leader.reconcileMember',
        '+consul.leader.reapTombstones',
        '+consul.rpc.raft_handoff',
        '+consul.rpc.request_error',
        '+consul.rpc.request',
        '+consul.rpc.query',
        '+consul.rpc.consistentRead',
        '+consul.memberlist.msg.suspect',
        '+consul.serf.member.flap',
        '+consul.serf.events',
        '+consul.session_ttl.active',
      ],
    },
  },
}

Descriptions of all Consul metrics are available on the official Consul site: https://www.consul.io/docs/agent/telemetry.html

Consul Template

Consul Template is a piece of software to dynamically write out config files using templates that are populated with values from Consul. This module does not configure consul template. See gdhbashton/consul_template for a module that can do that.

Development

Open an issue or fork and open a Pull Request

Transfer Notice

This module was originally authored by solarkennedy. The maintainer preferred that Vox Pupuli take ownership of the module for future improvement and maintenance. Existing pull requests and issues were transferred over; please fork and continue to contribute here instead of at KyleAnderson.

Previously: https://github.com/solarkennedy/puppet-consul


Contributors

amiryal, asasfu, bastelfreak, benoniecarette, benschw, duritong, evankrall, genebean, gozer, hopperd, jardleex, jfroche, jlambert121, l-lotz, madandroid, marius-meissner, maxadamo, mrwulf, natemccurdy, nukemberg, potto007, robrankin, solarkennedy, sorenisanerd, spuder, tayzlor, tmu-sprd, tylerwalts, vchan2002, zxjinn


puppet-consul's Issues

Configuring consul client nodes

Can you point me in the right direction for configuring a consul client node? I'd like to add a consul agent and configure it to register a service and track its health.

Thanks!

Meta stuff Not up to snuff

It had been a while since I've been to the forge, I didn't realize they were exposing quality metrics!

https://forge.puppetlabs.com/KyleAnderson/consul/scores

Lint Results: 0 errors, 6 warnings, and 0 notices.

 Double quoted string containing no variables - 8 occurrences.
 Ensure found on line but it's not the first attribute - 1 occurrence.
 Indentation of => is not properly aligned - 13 occurrences.
 Selector inside resource block - 1 occurrence.
 String containing only a variable - 3 occurrences.
 Unquoted file mode - 1 occurrence.
Puppet Lint analyzed this module and found something that can be improved. Learn more about linting modules at the project's website.


Metadata Quality: 1.5
Metadata Quality: 1 error, 2 warnings, and 3 notices.

 Does not contain os_support information.
 Dependencies contain unbounded ranges.
 Unrecognized license in metadata.
 Issues url verified.
 Project page verified.
 Source url verified.

I should work on this junk.

Send SIGHUP to consul agent when new checks/services are detected

Currently, when a new check or service is added, Consul does not load the check/service unless a "consul reload" is manually run on the node (or SIGHUP sent to the agent process).

Could we start a discussion on the appropriate times to trigger a reload of configuration files (i.e. new check added)?

Tests need ruby >= 1.9.2

This is somewhat frustrating for those of us stuck on older OSes.

Gem::InstallError: celluloid requires Ruby version >= 1.9.2.

In the meantime, I'm just doing development in a Trusty Vagrant box.

Watches key in config_hash should expect an array of hashes?

Hey, so: looking at the Consul configuration syntax, it looks like the watches value should be an array of hashes; each watcher is a hash indexed in an array. I'm not sure how to configure my watches using this Puppet module, since the watches key in the configuration hash expects a hash value instead of an array value. If this is by design, would you mind briefly explaining how to configure this key properly? Otherwise, this is probably a bug.

Thanks.

join_cluster doesn't seem to work in some cases

In the following example to set up a client agent, join_cluster doesn't work (but including start_join does.)

note: using join_cluster works fine for creating my server cluster in this same environment

#doesn't work:
class { 'consul':
    join_cluster => hiera('join_addr'),
    config_hash => {
        'datacenter' => 'dc1',
        'data_dir'   => '/opt/consul',
        'log_level'  => 'INFO',
        'node_name'  => $::hostname,
        'bind_addr'  => $::ipaddress_eth1,
        'server'     => false,
    }
}

#works
class { 'consul':
    config_hash => {
        'datacenter' => 'dc1',
        'data_dir'   => '/opt/consul',
        'log_level'  => 'INFO',
        'node_name'  => $::hostname,
        'bind_addr'  => $::ipaddress_eth1,
        'server'     => false,
        'start_join' => [hiera('join_addr')],
    }
}

Is this a bug? is there some reason I should be joining the cluster differently as a client than as a server?

Debian: /var/run/consul/consul.pid user affinity inconsistent

The init template for Debian uses start-stop-daemon to create a pidfile owned by the daemon user root. https://github.com/solarkennedy/puppet-consul/blob/master/templates/consul.debian.erb#L57

But:

It passes the -pid-file parameter along to the consul agent. See: https://github.com/solarkennedy/puppet-consul/blob/master/templates/consul.debian.erb#L21

This causes the launch of the consul agent to fail, since it cannot write to /var/run/consul/consul.pid using the user:group => consul:consul

There are 2 solutions:

  • Make /var/run/consul/consul.pid writeable by the consul user, which runs consul agent
  • Let start-stop-daemon write the pid to /var/run/consul/consul.pid (this means removing the -pid-file param from consul agent).

Support consul-template

It'd be nice if this module supported consul-template

I'm thinking this would be a consul::template resource, with source, destination, command, etc. parameters. We'd also need to manage the consul-template service, which will involve making an init script, etc.

Strange (probably unnecessary) behavior in sysv stop script

The sysv stop script has this in it for the stop section:

stop() {
        echo -n "Shutting down consul: "
        # If consul is not acting as a server, exit gracefully
        if ("${CONSUL}" info 2>/dev/null | grep -q 'server = false' 2>/dev/null) ; then
            "$CONSUL" leave
        fi

        # If acting as a server, or if leave failed, kill it.
        mkpidfile
        killproc $KILLPROC_OPT $CONSUL -9

        retcode=$?
        rm -f /var/lock/subsys/consul $PID_FILE
        return $retcode
}

I read the original PR (#87), and I agree with the theory to have clients leave but servers stay in "failed" state to preserve their state for a rejoin. However, the implementation doesn't seem to address this correctly, and "kill -9" is extremely heavy-handed for a distributed consensus system. It doesn't seem like that's ever going to be the right move.

There's also a problem where if the leave works on a client, the kill will fail, resulting in a "FAILED" response from $retcode.

I also have observed some cases of clients in a "failed" state where they should have left, which I think is down to a race condition between issuing a leave and the subsequent 'kill -9'.

I have a PR almost ready to go for this, but then I saw #173 in the queue working in exactly the same files (and same lines), so to avoid a conflict I've held off.

I also figured some discussion about what the actual effect should be was in order. Consul will normally quit quickly (without issuing a leave) when given a TERM, however this can be controlled by the leave_on_terminate config option. Seems like issuing a TERM is correct for servers wanting to preserve state, and can still be controlled if desired in the config_hash.

What to do with a client that fails to leave is a little harder. In a few cases, I've seen a failure to leave immediately, which manifested as this message.

Shutting down consul: Error leaving: client closed

However, looking in the logs it appears this is a temporary issue in resending gossip, and doesn't actually affect the leave process.

The logic I've used is like this (irrelevant stuff removed):

  if client
     leave OR kill -TERM
     retcode = $?
  else
    kill -TERM
    retcode = $?
  end
  return retcode $?

But honestly, I'm questioning the use of the TERM case at all in the client section. Any thoughts on this before I send in a PR?

As far as I can tell, this "kill -9" usage is unique to the sysv script. Every other method uses "consul" leave, or possibly TERM. Debian escalates TERM to KILL after a timeout, but doesn't start there.

tl;dr : The sysv script uses kill -9 on consul and I don't think it should.

umask feature breaks CentOS init scripts

The newly added umask feature causes failures on startup with the following error:

/etc/init.d/consul: Usage: daemon [+/-nicelevel] {program} [FAILED]

This happens with CentOS 6.5 and 6.6

GOMAXPROCS discarded by upstart init due to sudo's env_reset option

Hi,

In upstart init version GOMAXPROCS is set correctly but then the actual consul-agent process is started via sudo.
In Ubuntu the default sudo configuration has env_reset configured and GOMAXPROCS is not part of the env_keep configuration.
I'd suggest adding a default for the consul user which has GOMAXPROCS as part of env_keep configuration to avoid it being ignored.

Allow Consul clients to join cluster

It appears that consul agents running as clients are unable to join the cluster because of the check here. When I configure the agent as a client (server => false), consul info does not output num_peers so the linked check returns 1.

Example configuration:

class { 'consul':
  join_cluster => '172.16.78.100',
  config_hash => {
    'datacenter' => 'dc1',
    'data_dir'   => '/opt/consul',
    'server'     => false,
  }
}

Client output of consul info:

agent:
        check_monitors = 0
        check_ttls = 0
        checks = 0
        services = 0
build:
        prerelease =
        revision =
        version = 0.4.1
consul:
        known_servers = 0
        server = false
runtime:
        arch = amd64
        cpu_count = 1
        goroutines = 32
        max_procs = 2
        os = linux
        version = go1.3.1
serf_lan:
        event_queue = 0
        event_time = 1
        failed = 0
        intent_queue = 0
        left = 0
        member_time = 1
        members = 1
        query_queue = 0
        query_time = 1

Server output of consul info:

agent:
        check_monitors = 0
        check_ttls = 0
        checks = 0
        services = 1
build:
        prerelease =
        revision =
        version = 0.4.1
consul:
        bootstrap = false
        known_datacenters = 1
        leader = false
        server = true
raft:
        applied_index = 0
        commit_index = 0
        fsm_pending = 0
        last_contact = never
        last_log_index = 0
        last_log_term = 0
        last_snapshot_index = 0
        last_snapshot_term = 0
        num_peers = 0
        state = Follower
        term = 0
runtime:
        arch = amd64
        cpu_count = 1
        goroutines = 52
        max_procs = 2
        os = linux
        version = go1.3.1
serf_lan:
        event_queue = 0
        event_time = 1
        failed = 0
        intent_queue = 0
        left = 0
        member_time = 7
        members = 3
        query_queue = 0
        query_time = 1
serf_wan:
        event_queue = 0
        event_time = 1
        failed = 0
        intent_queue = 0
        left = 0
        member_time = 1
        members = 1
        query_queue = 0
        query_time = 1

Multiple consul::service with same name causes ArgumentError

I need to create multiple services with the same name (but unique ID) on the same agent, e.g. a service definition like:

{
  "service": {
    "name": "gateway",
    "id": "foo"
  }
}

{
  "service": {
    "name": "gateway",
    "id": "bar"
  }
}

So I created the resources:

consul::service { 'foo':
  name => 'gateway',
}

consul::service { 'bar':
  name => 'gateway',
}

And puppet (2.7) is failing with:

Puppet::Parser::AST::Resource failed with error ArgumentError: Cannot alias Consul::Service[bar] to ["gateway"] at /etc/puppet/roles/app/manifests/gateway.pp:10; resource ["Consul::Service", "gateway"] already defined at /etc/puppet/roles/app/manifests/gateway.pp:10 at /etc/puppet/roles/app/manifests/gateway.pp:10 on node ip-xx-xx-xx-xx.ec2.internal

README for consul::service is out of date

In the README the Service Definition section code example looks like it's showing an older way of configuration. The redis example should look like so:

  ::consul::service { 'redis':
    checks  => [
      { 
        script   => "/usr/local/bin/check_redis.py",
        interval => "10s"
      }
    ],
    port    => 8000,
    tags    => ['master']
  }

Just like my other issue I just made, I'm more than willing to submit a PR if you don't want to do the legwork.

Consul init scripts sometimes not installed in the correct order

Because we install the init scripts in install.pp, sometimes the init script will get put in place and then the package will be installed over the top, blowing away the init script from the module. If the package that was installed has a slightly different init script, this could lead to the service running under unexpected circumstances (as I have experienced). This is because there is no dependency between the installation and the placing of the init script.

To solve this the easiest fix would be to just move the logic from install.pp into either config.pp or run_service.pp. This would ensure the dependency works out correctly because of the ordering in init.pp

delete_undef_values requires stdlib 4.2.0, dependency not set properly

Hey guys, first of all: I appreciate all the hard work you're doing here, this module is great.

I noticed a small problem when pulling from master (specifically, I'm working off of 0bbbea6). When pulling the module via librarian-puppet, it checks the dependencies in the metadata.json file. I currently have puppetlabs-stdlib 4.1.0 installed, which passed the dependency check in the metadata.json file, because that file claims your module only requires puppetlabs-stdlib version 0.1.6 or greater. However, the new function delete_undef_values introduced in this commit requires at least puppetlabs-stdlib 4.2.0.

It seems that the metadata.json file needs to be updated to reflect that fact. I can submit a PR if you desire, I'm more than willing to do that.

Thanks!

passingonly needs to be a boolean for watch type

Problem

The documentation contains the example:

consul::watch { 'my_watch':
  ...
  passingonly => 'true',
}

This results in a config file like:

{
  "watches": [
    {
     ...
     "passingonly": "true"
    }
  ]
}

This will in turn cause an issue for Consul starting with an error similar to:

==> Failed to parse watch (map[string]interface {}{"passingonly":"true", "handler":"/usr/bin/runpuppet.sh"}): Expecting %!s(MISSING) to be a boolean

Changing the config file by hand to the following fixes the problem.

"passingonly": true

Passing in the value to Puppet as a boolean also fixes the problem.

consul::watch { 'my_watch':
  ...
  passingonly => true,
}

Fix

  • Docs could be updated to show working values
  • Manifests could validate values being passed are booleans

Additional thoughts

  • This may affect other types which take booleans and currently show passing strings in the README.
  • This might be version specific, just in case:
consul --version
Consul v0.4.1
Consul Protocol: 2 (Understands back to: 1)

add maintenance mode option to init scripts

In order to not have critical health checks (and the duration of the TTL as a window during which a stopped service still shows up as available), would it make sense to integrate calls to consul maint -enable and consul maint -disable in the init script start and stop actions?

https://consul.io/docs/commands/maint.html

This command can take either an entire Consul agent or just a single service into and out of maintenance mode, where it continues to show up in the Consul cluster but isn't made available to the discovery services.

service definition file will be changed frequently

Hi,

for some reason my service definition files change every few runs, even though nothing changes.

The following file sometimes looks like this:

{
  "service": {
    "port": 80,
    "name": "clientinterface",
    "id": "clientinterface",
    "tags": [

    ]
  }
}

and sometimes like this:

{
  "service": {
    "tags": [

    ],
    "port": 80,
    "name": "clientinterface",
    "id": "clientinterface"
  }
}

This will result in consul restarting every few runs :-(

join_cluster not working on agents

Or maybe I'm using this module wrong.
I've setup one server with

class { 'consul':
  config_hash => {
      'datacenter'  => 'car',
      'data_dir'    => '/opt/consul',
      'ui_dir'      => '/opt/consul/ui',
      'client_addr' => '0.0.0.0',
      'log_level'   => 'INFO',
      'node_name'   => "${hostname}",
      'server'      => true,
      'bootstrap_expect' => 1
  }
}

and three agents with

class { 'consul':
join_cluster => 'test-cluster-vm.car.dmz',
  config_hash => {
      'datacenter'  => 'car',
      'data_dir'    => '/opt/consul',
      'client_addr' => '0.0.0.0',
      'log_level'   => 'INFO',
      'node_name'   => "${hostname}",
      'server'      => false,
  }
}
    consul::service { 'puppetmaster':
        port           => 8140,
    }

But the exec { 'join consul cluster': } resource is not working, because

/usr/local/bin/consul info | grep num

shows nothing on those three agents

Debian support

Good work on starting a module for consul.

It would be great if this could support Debian as well as Ubuntu.
Looking at the module, I think it's only in need of an init script to get it off the ground.

If I get time this week I'll submit a pull request.

Validate and document all params that could be passed to `consul`

The documentation on what params we could pass is kinda lacking. I think we should document all of the params (most are missing right now) and perhaps update the readme as well.

Also, while doing this we need to ensure these params are properly validated, that is also lacking, only a few are being validated.

I can work on this, but if someone else wants to, feel free; let me know if I can help 👍

Invalid resource type staging::file

Hi,

After successfully running ./deps.sh and build.sh, doing vagrant up halts with a puppet error:
nicu@nmarasoiu:~/tools/consul-cluster-puppet$ vagrant up
Bringing machine 'consul0' up with 'virtualbox' provider...
Bringing machine 'consul1' up with 'virtualbox' provider...
Bringing machine 'consul2' up with 'virtualbox' provider...
Bringing machine 'webui' up with 'virtualbox' provider...
Bringing machine 'demo' up with 'virtualbox' provider...
Bringing machine 'foo0' up with 'virtualbox' provider...
Bringing machine 'foo1' up with 'virtualbox' provider...
==> consul0: VirtualBox VM is already running.
==> consul1: VirtualBox VM is already running.
==> consul2: Importing base box 'trusty64'...
==> consul2: Matching MAC address for NAT networking...
==> consul2: Setting the name of the VM: consul-cluster-puppet_consul2_1431592945090_95794
==> consul2: Clearing any previously set forwarded ports...
==> consul2: Fixed port collision for 22 => 2222. Now on port 2202.
==> consul2: Clearing any previously set network interfaces...
==> consul2: Preparing network interfaces based on configuration...
consul2: Adapter 1: nat
consul2: Adapter 2: hostonly
==> consul2: Forwarding ports...
consul2: 22 => 2202 (adapter 1)
==> consul2: Running 'pre-boot' VM customizations...
==> consul2: Booting VM...
==> consul2: Waiting for machine to boot. This may take a few minutes...
consul2: SSH address: 127.0.0.1:2202
consul2: SSH username: vagrant
consul2: SSH auth method: private key
consul2: Warning: Connection timeout. Retrying...
consul2: Warning: Remote connection disconnect. Retrying...
consul2:
consul2: Vagrant insecure key detected. Vagrant will automatically replace
consul2: this with a newly generated keypair for better security.
consul2:
consul2: Inserting generated public key within guest...
consul2: Removing insecure key from the guest if its present...
consul2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> consul2: Machine booted and ready!
==> consul2: Checking for guest additions in VM...
==> consul2: Setting hostname...
==> consul2: Configuring and enabling network interfaces...
==> consul2: Mounting shared folders...
consul2: /vagrant => /home/nicu/tools/consul-cluster-puppet
consul2: /tmp/vagrant-puppet/modules-6a51abd267c5ea0234a37cae97c7e310 => /home/nicu/tools/consul-cluster-puppet/puppet/modules
consul2: /tmp/vagrant-puppet/manifests-768747907b90c39ab6f16fcb3320897a => /home/nicu/tools/consul-cluster-puppet/puppet
==> consul2: Running provisioner: puppet...
==> consul2: Running Puppet with server.pp...
==> consul2: stdin: is not a tty
==> consul2: Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type staging::file at /tmp/vagrant-puppet/modules-6a51abd267c5ea0234a37cae97c7e310/consul/manifests/install.pp:23 on node consul2.local
==> consul2: Wrapped exception:
==> consul2: Invalid resource type staging::file
==> consul2: Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type staging::file at /tmp/vagrant-puppet/modules-6a51abd267c5ea0234a37cae97c7e310/consul/manifests/install.pp:23 on node consul2.local

Host OS: Ubuntu 14.04 LTS
uname: 3.13.0-52-generic #86-Ubuntu SMP Mon May 4 04:32:59 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Vagrant version: 1.7.2

Please advise,
Nicu

Dependency cycle in Hiera-based config

  • ruby 1.9.3p194
  • puppet 3.7.3

Yaml-based hiera setup, applied in order (merge deeper @ hiera):

Main config shared everywhere (init style = false because we have a local .deb repo where init is already taken care of when installing consul):

consul::config_dir: '/etc/consul.d'
consul::install_method: 'package'
consul::init_style: false
consul::ui_package_name: 'consul-web-ui'
consul::package_name: 'consul'
consul::config_hash:
  datacenter: 'datacentername'
  data_dir: '/var/lib/consul'
  domain: 'dcs.consul'
  retry_join: 
    - '10.0.0.1'
    - '10.0.0.2'
    - '10.0.0.3'

Consul-master config:

consul::config_hash:
  log_level: 'INFO'
  server: true
  bootstrap_expect: 3
  ui_dir: '/usr/share/consul/web-ui'
  client_addr: '0.0.0.0'

consul::services:
  'zookeeper':
    tags: [ 'mesos' , 'master' ]
    port: 2181

Apply results in:

Error: Could not apply complete catalog: Found 1 dependency cycle:
(File[/etc/consul.d/service_zookeeper.json] => Class[Consul::Run_service] => Service[consul] => Class[Consul::Run_service] => Class[Consul] => Consul::Service[zookeeper] => File[/etc/consul.d/service_zookeeper.json])

Unless I'm doing something wrong in Hiera of course...

Log rotation?

Something I noticed while working in the sysv startup script is that the output is redirected into /var/log/consul, but there isn't any way provided to rotate that file.

This is also the case in the sles script.

Should there be a logrotate.d file included? Is this something considered outside the scope of the module?

If it's outside the scope, then feel free to close this issue.

init script doesn't have data-dir (0.5.0)

I never used consul 0.4.0, so this may be me missing something, but the current init script bundled here doesn't work.

consul expects a data-dir option to be passed to it, which isn't included:

[root@host tmp]# tail /var/log/consul 
==> Must specify data directory using -data-dir
==> Must specify data directory using -data-dir
==> Must specify data directory using -data-dir
==> Must specify data directory using -data-dir

The module should probably include support for setting this up, too

consul should not 'leave' for init script 'stop' action

When consul is provided with the leave subcommand, the node is removed from the cluster. This requires that the cluster be added back in at reboot with a join action. This breaks expected behavior for the service, in which the node automatically rejoins the cluster upon service start. A more acceptable init-style script behavior would be to kill the consul process so it does not remove itself from the cluster.

Dependency cycle using consul::services

Just took a stab at adding a service (1) using the consul::services hash, via hiera. I started getting a dependency cycle right away.

Hiera:

consul::services:
  puppet_httpd:
    tags:
      - httpd
      - puppet
    port: 8140
    check_script: '/my/check/script'

output:

Info: Applying configuration version '1425588453'
Error: Could not apply complete catalog: Found 1 dependency cycle:
(File[/etc/consul/service_puppet_httpd.json] => Class[Consul::Run_service] => Service[consul] => Class[Consul::Run_service] => Class[Consul] => Consul::Service[puppet_httpd] => File[/etc/consul/service_puppet_httpd.json])
Cycle graph written to /var/lib/puppet/state/graphs/cycles.dot.

Support for Consul 0.5.0 and multiple check configuration

I just reviewed the Consul 0.5.0-rc1 change log and it looks like there will be a few changes that will require changes in puppet-consul. Some quick notes:

  • 'consul::service' will need to be updated as the checks property for services is now an array instead of a hash.
  • 'consul::check' will also need updates for the new HTTP check type as well as optional service_id property.

ensure_packages unzip with 'before' breaks interoperability

In install.pp there is

ensure_packages(['unzip'], { 'before' => Staging::File['consul.zip'] })

which breaks interoperability since if you have

ensure_packages(['unzip'])

in any other place it's a redeclaration error (because attributes are different). Minimal code that shows similar problem:

puppet apply -e 'ensure_packages(["unzip"]) ensure_packages(["unzip"], {stage=>"main"})'

Output excerpt:

Error: Duplicate declaration: Package[unzip] is already declared

Path to /home/kyle is hard coded, somewhere

I'm trying to install the consul module on my OSX Laptop using librarian-puppet, the puppet module install command fails citing an non-existent path of /home/kyle.

Librarian puppet is generating the following module install command: https://gist.github.com/bhourigan/6fc71c3506fdd9cede81

Output of running it with --debug:
https://gist.github.com/bhourigan/8c3b9c1e72ad54e4fc66

I must be missing something important, as I can't find a telling location of the substrings 'home', or 'kyle'. Running puppet module installer under dtruss did give me more information about the full path, but alas, it has not given me more insight into where the problem lies.

Debug command: sudo dtruss -b 32m -f <command in gist 6fc71c3506fdd9cede81>

Interesting snippet (full output available upon request):
65195/0x3bfd22: symlink("/home/kyle/Projects/puppet_modules/puppet-consul\0", "KyleAnderson-consul-0.4.2/spec/fixtures/modules/consul\0") = 0 0

I'm running ruby 2.0.0p481, and according to https://docs.puppetlabs.com/guides/platforms.html#ruby-versions it is supported.

config_hash converts integers to strings => breaks port mappings

In the consul config_hash it is possible to disable the http port by setting it to -1, and enabling the https service instead by setting it to the appropiate port, 8500:

      'ports' =>              {
        'http'  =>  -1,
        'https' => 8500,
      },

The generated config.json results in:

  "ports": {
    "https": "8500",
    "http": -1
  },

Current behaviour (Release 1.0.0)

-1 looks good and is still an integer, whilst the https port was converted to a string. The Consul agent does not support strings as port numbers and therefore fails to launch.

Expected behaviour

-1 as well as 8500 should be integers.

Workaround

Tell Puppet to treat 8500 as an integer by multiplying it by 1:

      'ports' =>              {
        'http'  =>  -1,
        'https' =>  1 * 8500,
      },

Setting consul::version in hiera does not change the download_url

I attempted to set consul::version to 0.5.2 in hiera, but the zip file downloaded is for 0.5.0. I added the following to install.pp:

notify{"Download url: ${$consul::download_url}": }
notify{"Version: ${$consul::version}": }

And this is the output:

Notice: Download url: https://dl.bintray.com/mitchellh/consul/0.5.0_linux_amd64.zip
Notice: Version: 0.5.2

new function sorted_json does not work if keys are set to undef

in the case where hash keys are explicitly set to undef, the following exception is encountered:

       undefined method `Exception' for #<Puppet::Parser::Scope:0x007fe77ec64d38> at /Users/danbode/dev/reliance/apply_resources/rjil/spec/fixtures/modules/consul/manifests/config.pp:35 on node danslaptop-2.local
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:22:in `sorted_json'
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:16:in `block in sorted_json'
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:15:in `each'
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:15:in `sorted_json'
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:42:in `block in <module:Functions>'
     # ./spec/classes/jiocloud_spec.rb:37:in `block (3 levels) in <top (required)>'

cannot generate right retry_join string

The only form Consul accepts looks like

"retry_join": ["test-cluster-vm.example.org"],

With this module I cannot get that string in the config file, because it always generates something else, for example

"retry_join": "[\"test-cluster-vm.caravan.dmz\"]",

when in puppet manifest I specify

 'retry_join'  => '["test-cluster-vm.example.org"]'

Staging missing dependency on `Package['unzip']`

For obvious reasons this package is needed. It doesn't seem like staging itself manages this (since it doesn't manage the unzip package), so consul should manage this dependency if it is managing the unzip package (ie, on non-darwin).

It should be sufficient (probably better) only to supply this to the files that consul itself downloads. For myself I am currently using Package['unzip'] -> Staging::File <| |> as a workaround.

Ruby 1.8 support

Ruby 1.8 isn't supported at the moment, as within https://github.com/solarkennedy/puppet-consul/blob/master/lib/puppet/parser/functions/consul_validate_checks.rb it is trying to call 'Puppet::ParseError' as a function, which results in

undefined method `ParseError' for Puppet:Module at /etc/puppet/ext_modules/consul/manifests/service.pp:36 on node ip-172-18-0-234.ec2.internal

It appears that I can fix this by using raise(Puppet::ParseError, 'message') rather than raise Puppet::ParseError('message'), however I'm not a ruby developer so I'm not sure if this is the correct change to make or not.

Ruby 1.8 is the default ruby on Ubuntu 12.04 (which is why this is causing me issues). Would you be interested in a patch to fix support for 1.8 on this system configuration?

Add support for joining multiple datacenters

It would be nice if this module allows joining a cluster to another datacenter. This is accomplished via consul join -wan <server1> <server2>. I think this can be accomplished very similarly to what the join_cluster parameter does.

Are you open to this feature? I would be happy to try and put together a PR.
