
puppet-consul's Issues

consul should not 'leave' for init script 'stop' action

When consul is given the leave subcommand, the node is removed from the cluster. This means the node has to be joined back to the cluster at reboot with a join action, which breaks the expected behavior of the service: the node should automatically rejoin the cluster on service start. A more acceptable init-style script behavior would be to kill the consul process so it does not remove itself from the cluster.

Setting consul::version in hiera does not change the download_url

I attempted to set consul::version to 0.5.2 in hiera, but the zip file downloaded is for 0.5.0. I added the following to install.pp:

notify{"Download url: ${$consul::download_url}": }
notify{"Version: ${$consul::version}": }

And this is the output:

Notice: Download url: https://dl.bintray.com/mitchellh/consul/0.5.0_linux_amd64.zip
Notice: Version: 0.5.2
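
Until the root cause is tracked down, one possible workaround (a sketch, assuming the class exposes the version and download_url parameters that the notify above reads, and that the 0.5.2 URL follows the same pattern as the 0.5.0 one):

# Workaround sketch: pin both version and download_url explicitly so the
# default URL cannot fall back to 0.5.0. The 0.5.2 URL below is assumed to
# follow the pattern shown in the Notice output above.
class { 'consul':
  version      => '0.5.2',
  download_url => 'https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip',
}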

service definition files change frequently

Hi,

For some reason my service definition files change every few runs, even though nothing in the manifests changes.

The following file sometimes looks like this:

{
  "service": {
    "port": 80,
    "name": "clientinterface",
    "id": "clientinterface",
    "tags": [

    ]
  }
}

and sometimes like this:

{
  "service": {
    "tags": [

    ],
    "port": 80,
    "name": "clientinterface",
    "id": "clientinterface"
  }
}

This will result in consul restarting every few runs :-(
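
For reference, the declaration behind that file has this shape (a sketch using the values from the rendered JSON):

# Sketch of the consul::service resource that renders the file above; the
# empty tags array matches the generated output.
consul::service { 'clientinterface':
  port => 80,
  tags => [],
}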

Ruby 1.8 support

Ruby 1.8 isn't supported at the moment: https://github.com/solarkennedy/puppet-consul/blob/master/lib/puppet/parser/functions/consul_validate_checks.rb tries to call 'Puppet::ParseError' as a function, which results in

undefined method `ParseError' for Puppet:Module at /etc/puppet/ext_modules/consul/manifests/service.pp:36 on node ip-172-18-0-234.ec2.internal

It appears that I can fix this by using raise(Puppet::ParseError, 'message') rather than raise Puppet::ParseError('message'); however, I'm not a Ruby developer, so I'm not sure whether this is the correct change to make.

Ruby 1.8 is the default ruby on Ubuntu 12.04 (which is why this is causing me issues). Would you be interested in a patch to fix support for 1.8 on this system configuration?

Debian support

Good work on starting a module for consul.

It would be great if this could support Debian as well as Ubuntu.
Looking at the module, I think it's only in need of an init script to get it off the ground.

If I get time this week I'll submit a pull request.

Support for Consul 0.5.0 and multiple check configuration

I just reviewed the Consul 0.5.0-rc1 change log, and it looks like a few of the changes will require updates to puppet-consul. Some quick notes:

  • 'consul::service' will need to be updated as the checks property for services is now an array instead of a hash.
  • 'consul::check' will also need updates for the new HTTP check type as well as optional service_id property.

join_cluster not working on agents

Or maybe I'm using this module wrong.
I've set up one server with

class { 'consul':
  config_hash => {
      'datacenter'  => 'car',
      'data_dir'    => '/opt/consul',
      'ui_dir'      => '/opt/consul/ui',
      'client_addr' => '0.0.0.0',
      'log_level'   => 'INFO',
      'node_name'   => "${hostname}",
      'server'      => true,
      'bootstrap_expect' => 1
  }
}

and three agents with

class { 'consul':
  join_cluster => 'test-cluster-vm.car.dmz',
  config_hash  => {
      'datacenter'  => 'car',
      'data_dir'    => '/opt/consul',
      'client_addr' => '0.0.0.0',
      'log_level'   => 'INFO',
      'node_name'   => "${hostname}",
      'server'      => false,
  }
}

consul::service { 'puppetmaster':
  port => 8140,
}

But the exec { 'join consul cluster': ... } resource is not working, because

/usr/local/bin/consul info | grep num

shows nothing on those three agents

Strange (probably unnecessary) behavior in sysv stop script

The sysv stop script has this in it for the stop section:

stop() {
        echo -n "Shutting down consul: "
        # If consul is not acting as a server, exit gracefully
        if ("${CONSUL}" info 2>/dev/null | grep -q 'server = false' 2>/dev/null) ; then
            "$CONSUL" leave
        fi

        # If acting as a server, or if leave failed, kill it.
        mkpidfile
        killproc $KILLPROC_OPT $CONSUL -9

        retcode=$?
        rm -f /var/lock/subsys/consul $PID_FILE
        return $retcode
}

I read the original PR (#87), and I agree with the theory to have clients leave but servers stay in "failed" state to preserve their state for a rejoin. However, the implementation doesn't seem to address this correctly, and "kill -9" is extremely heavy-handed for a distributed consensus system. It doesn't seem like that's ever going to be the right move.

There's also a problem where if the leave works on a client, the kill will fail, resulting in a "FAILED" response from $retcode.

I also have observed some cases of clients in a "failed" state where they should have left, which I think is down to a race condition between issuing a leave and the subsequent 'kill -9'.

I have a PR almost ready to go for this, but then I saw #173 in the queue working in exactly the same files (and same lines), so to avoid a conflict I've held off.

I also figured some discussion about what the actual effect should be was in order. Consul will normally quit quickly (without issuing a leave) when given a TERM, however this can be controlled by the leave_on_terminate config option. Seems like issuing a TERM is correct for servers wanting to preserve state, and can still be controlled if desired in the config_hash.

What to do with a client that fails to leave is a little harder. In a few cases, I've seen a failure to leave immediately, which manifested as this message.

Shutting down consul: Error leaving: client closed

However, looking in the logs it appears this is a temporary issue in resending gossip, and doesn't actually affect the leave process.

The logic I've used is like this (irrelevant stuff removed):

  if client
     leave OR kill -TERM
     retcode = $?
  else
    kill -TERM
    retcode = $?
  end
  return $retcode

But honestly, I'm questioning the use of the TERM case at all in the client section. Any thoughts on this before I send in a PR?

As far as I can tell, this "kill -9" usage is unique to the sysv script. Every other method uses "consul leave", or possibly TERM. Debian escalates TERM to KILL after a timeout, but doesn't start there.

tl;dr : The sysv script uses kill -9 on consul and I don't think it should.
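
For reference, the leave_on_terminate option mentioned above can already be driven from Puppet; a minimal sketch, with unrelated keys omitted:

# Minimal sketch: opt a node into leaving the cluster on SIGTERM via the
# existing config_hash parameter (other keys omitted for brevity).
class { 'consul':
  config_hash => {
    'data_dir'           => '/opt/consul',
    'leave_on_terminate' => true,
  },
}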

Dependency cycle using consul::services

I just took a stab at adding a single service using the consul::services hash, via Hiera, and started getting a dependency cycle right away.

Hiera:

consul::services:
  puppet_httpd:
    tags:
      - httpd
      - puppet
    port: 8140
    check_script: '/my/check/script'

output:

Info: Applying configuration version '1425588453'
Error: Could not apply complete catalog: Found 1 dependency cycle:
(File[/etc/consul/service_puppet_httpd.json] => Class[Consul::Run_service] => Service[consul] => Class[Consul::Run_service] => Class[Consul] => Consul::Service[puppet_httpd] => File[/etc/consul/service_puppet_httpd.json])
Cycle graph written to /var/lib/puppet/state/graphs/cycles.dot.

cannot generate right retry_join string

The only form consul accepts looks like:

"retry_join": ["test-cluster-vm.example.org"],

With this module I cannot get that string into the config file, because it always generates something else, for example:

"retry_join": "[\"test-cluster-vm.caravan.dmz\"]",

when in the Puppet manifest I specify:

 'retry_join'  => '["test-cluster-vm.example.org"]'
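
A hedged sketch of what should produce the intended output: pass a native Puppet array and let the module serialise it, rather than a pre-quoted JSON string:

# Sketch: hand config_hash a real array so the generated JSON contains
# "retry_join": ["test-cluster-vm.example.org"] instead of an escaped string.
class { 'consul':
  config_hash => {
    'retry_join' => ['test-cluster-vm.example.org'],
  },
}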

Log rotation?

Something I noticed while working in the sysv startup script is that the output is redirected into /var/log/consul, but there isn't any way provided to rotate that file.

This is also the case in the sles script.

Should there be a logrotate.d file included? Is this something considered outside the scope of the module?

If it's outside the scope, then feel free to close this issue.
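
If it does stay out of scope, a minimal workaround sketch from the profile that declares consul (the path and rotation policy are assumptions):

# Sketch: rotate /var/log/consul ourselves until (or unless) the module
# ships a logrotate.d file. Path and policy are assumptions.
file { '/etc/logrotate.d/consul':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => "/var/log/consul {\n  daily\n  rotate 7\n  compress\n  missingok\n  notifempty\n  copytruncate\n}\n",
}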

Send SIGHUP to consul agent when new checks/services are detected

Currently, when a new check or service is added, Consul does not load the check/service unless a "consul reload" is manually run on the node (or SIGHUP sent to the agent process).

Could we start a discussion on the appropriate times to trigger a reload of configuration files (i.e. new check added)?
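
As a starting point for that discussion, a hedged sketch of wiring a reload from the manifest side (the service definition path is illustrative):

# Sketch: a refresh-only exec that runs `consul reload` whenever a service
# definition file changes. The file path below is illustrative.
exec { 'reload consul':
  command     => 'consul reload',
  path        => ['/usr/local/bin', '/usr/bin', '/bin'],
  refreshonly => true,
}

File['/etc/consul/service_myservice.json'] ~> Exec['reload consul']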

Staging missing dependency on `Package['unzip']`

For obvious reasons this package is needed. Staging itself doesn't manage the unzip package, so consul should manage this dependency when it is the one managing unzip (i.e., on non-Darwin platforms).

It should be sufficient (and probably better) to apply this only to the files that consul itself downloads. For now I am using Package['unzip'] -> Staging::File <| |> as a workaround.

Support consul-template

It'd be nice if this module supported consul-template

I'm thinking this would be a consul::template resource, with source, destination, command, etc. parameters. We'd also need to manage the consul-template service, which will involve making an init script, etc.
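
A hypothetical sketch of what that interface could look like (none of this exists in the module yet):

# Hypothetical interface sketch for a consul::template defined type; the
# resource and its parameters are proposals, not current module API.
consul::template { 'nginx-upstreams':
  source      => '/etc/consul-template/nginx.ctmpl',
  destination => '/etc/nginx/conf.d/upstreams.conf',
  command     => 'service nginx reload',
}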

Multiple consul::service with same name causes ArgumentError

I need to create multiple services with the same name (but unique ID) on the same agent, e.g. a service definition like:

{
  "service": {
    "name": "gateway",
    "id": "foo"
  }
}

{
  "service": {
    "name": "gateway",
    "id": "bar"
  }
}

So I created the resources:

consul::service { 'foo':
  name => 'gateway',
}

consul::service { 'bar':
  name => 'gateway',
}

And puppet (2.7) is failing with:

Puppet::Parser::AST::Resource failed with error ArgumentError: Cannot alias Consul::Service[bar] to ["gateway"] at /etc/puppet/roles/app/manifests/gateway.pp:10; resource ["Consul::Service", "gateway"] already defined at /etc/puppet/roles/app/manifests/gateway.pp:10 at /etc/puppet/roles/app/manifests/gateway.pp:10 on node ip-xx-xx-xx-xx.ec2.internal
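
One possible direction would be to keep the resource titles unique and carry the Consul-facing name in a dedicated parameter; a hypothetical sketch (the service_name parameter does not exist in the module today):

# Hypothetical sketch: unique titles, with the Consul service name moved out
# of the namevar. service_name is an assumed parameter.
consul::service { 'foo':
  service_name => 'gateway',
}

consul::service { 'bar':
  service_name => 'gateway',
}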

README for consul::service is out of date

In the README, the Service Definition section's code example shows an older configuration style. The redis example should look like this:

  ::consul::service { 'redis':
    checks  => [
      { 
        script   => "/usr/local/bin/check_redis.py",
        interval => "10s"
      }
    ],
    port    => 8000,
    tags    => ['master']
  }

As with the other issue I just opened, I'm more than willing to submit a PR if you don't want to do the legwork.

Validate and document all params that could be passed to `consul`

The documentation on what params we could pass is kinda lacking. I think we should document all of the params (most are missing right now) and perhaps update the readme as well.

Also, while doing this we need to ensure these params are properly validated; that is also lacking, as only a few are validated right now.

I can work on this, but if someone else wants to, feel free, and let me know if I can help 👍

passingonly needs to be a boolean for watch type

Problem

The documentation contains the example:

consul::watch { 'my_watch':
  ...
  passingonly => 'true',
}

This results in a config file like:

{
  "watches": [
    {
     ...
     "passingonly": "true"
    }
  ]
}

This in turn causes Consul to fail to start, with an error similar to:

==> Failed to parse watch (map[string]interface {}{"passingonly":"true", "handler":"/usr/bin/runpuppet.sh"}): Expecting %!s(MISSING) to be a boolean

Changing the config file by hand to the following fixes the problem.

"passingonly": true

Passing in the value to Puppet as a boolean also fixes the problem.

consul::watch { 'my_watch':
  ...
  passingonly => true,
}

Fix

  • Docs could be updated to show working values
  • Manifests could validate values being passed are booleans

Additional thoughts

  • This may affect other types which take booleans and currently show passing strings in the README.
  • This might be version specific, just in case:
consul --version
Consul v0.4.1
Consul Protocol: 2 (Understands back to: 1)

config_hash converts integers to strings => breaks port mappings

In the consul config_hash it is possible to disable the http port by setting it to -1 and enable the https service instead by setting it to the appropriate port, 8500:

      'ports' =>              {
        'http'  =>  -1,
        'https' => 8500,
      },

The generated config.json results in:

  "ports": {
    "https": "8500",
    "http": -1
  },

Current behaviour (Release 1.0.0)

-1 looks good and is still an integer, whilst the https port was converted to a string. The consul agent does not accept strings as port numbers and therefore fails to launch.

Expected behaviour

-1 as well as 8500 should be integers.

Workaround

Tell puppet to treat 8500 as an integer by multiplying it by 1:

      'ports' =>              {
        'http'  =>  -1,
        'https' =>  1 * 8500,
      },

add maintenance mode option to init scripts

In order to avoid critical health checks (and the TTL duration acting as a window during which a stopped service still shows up as available), would it make sense to integrate calls to consul maint -enable and consul maint -disable into the init script's start and stop actions?

https://consul.io/docs/commands/maint.html

This command can take either an entire consul agent or just a single service into and out of maintenance mode, where it continues to show up in the consul cluster but isn't made available to service discovery.

Tests need ruby >= 1.9.2

This is somewhat frustrating for those of us stuck on older OSes.

Gem::InstallError: celluloid requires Ruby version >= 1.9.2.

In the meantime, I'm just doing development in a Trusty vagrant box.

GOMAXPROCS discarded by upstart init due to sudo's env_reset option

Hi,

In the upstart init version, GOMAXPROCS is set correctly, but the actual consul agent process is then started via sudo.
On Ubuntu the default sudo configuration has env_reset enabled, and GOMAXPROCS is not part of the env_keep list.
I'd suggest adding a sudoers default for the consul user that includes GOMAXPROCS in env_keep, to avoid it being dropped.
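
A hedged workaround sketch from the Puppet side, dropping a sudoers fragment that keeps GOMAXPROCS (the global Defaults scope is an assumption; narrow it if preferred):

# Workaround sketch: keep GOMAXPROCS through sudo's env_reset by managing a
# sudoers drop-in. Scoping the Defaults line globally is an assumption.
file { '/etc/sudoers.d/consul':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0440',
  content => "Defaults env_keep += \"GOMAXPROCS\"\n",
}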

Watches key in config_hash should expect an array of hashes?

Hey, so. Looking at the consul configuration syntax, it appears the watches value should be an array of hashes: each watch is a hash, indexed in an array. I'm not sure how to configure my watches using this puppet module, since the watches key in the configuration hash expects a hash value instead of an array. If this is by design, would you mind briefly explaining how to configure this key properly? Otherwise, this is probably a bug.

Thanks.
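
For reference, the shape described above expressed through config_hash (a sketch; the watch fields follow the upstream watch documentation, and whether config_hash accepts an array here is exactly what this issue is asking):

# Sketch: watches as an array of hashes, matching the upstream configuration
# syntax. The key and handler values are placeholders.
class { 'consul':
  config_hash => {
    'watches' => [
      {
        'type'    => 'key',
        'key'     => 'foo/bar',
        'handler' => '/usr/bin/my-handler.sh',
      },
    ],
  },
}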

init script doesn't have data-dir (0.5.0)

I never used consul 0.4.0, so this may be me missing something, but the current init script bundled here doesn't work.

consul expects a data-dir option to be passed to it, which isn't included:

[root@host tmp]# tail /var/log/consul 
==> Must specify data directory using -data-dir
==> Must specify data directory using -data-dir
==> Must specify data directory using -data-dir
==> Must specify data directory using -data-dir

The module should probably include support for setting this up, too.
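
In the meantime, setting data_dir through config_hash (as the other examples in these issues do) gets the agent past this error, assuming the init script points the agent at the rendered config; a minimal sketch:

# Minimal sketch: provide the mandatory data directory via config_hash.
class { 'consul':
  config_hash => {
    'data_dir' => '/opt/consul',
  },
}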

Allow Consul clients to join cluster

It appears that consul agents running as clients are unable to join the cluster because of the check here. When I configure the agent as a client (server => false), consul info does not output num_peers, so the linked check returns 1.

Example configuration:

class { 'consul':
  join_cluster => '172.16.78.100',
  config_hash => {
    'datacenter' => 'dc1',
    'data_dir'   => '/opt/consul',
    'server'     => false,
  }
}

Client output of consul info:

agent:
        check_monitors = 0
        check_ttls = 0
        checks = 0
        services = 0
build:
        prerelease =
        revision =
        version = 0.4.1
consul:
        known_servers = 0
        server = false
runtime:
        arch = amd64
        cpu_count = 1
        goroutines = 32
        max_procs = 2
        os = linux
        version = go1.3.1
serf_lan:
        event_queue = 0
        event_time = 1
        failed = 0
        intent_queue = 0
        left = 0
        member_time = 1
        members = 1
        query_queue = 0
        query_time = 1

Server output of consul info:

agent:
        check_monitors = 0
        check_ttls = 0
        checks = 0
        services = 1
build:
        prerelease =
        revision =
        version = 0.4.1
consul:
        bootstrap = false
        known_datacenters = 1
        leader = false
        server = true
raft:
        applied_index = 0
        commit_index = 0
        fsm_pending = 0
        last_contact = never
        last_log_index = 0
        last_log_term = 0
        last_snapshot_index = 0
        last_snapshot_term = 0
        num_peers = 0
        state = Follower
        term = 0
runtime:
        arch = amd64
        cpu_count = 1
        goroutines = 52
        max_procs = 2
        os = linux
        version = go1.3.1
serf_lan:
        event_queue = 0
        event_time = 1
        failed = 0
        intent_queue = 0
        left = 0
        member_time = 7
        members = 3
        query_queue = 0
        query_time = 1
serf_wan:
        event_queue = 0
        event_time = 1
        failed = 0
        intent_queue = 0
        left = 0
        member_time = 1
        members = 1
        query_queue = 0
        query_time = 1

ensure_packages unzip with 'before' breaks interoperability

In install.pp there is

ensure_packages(['unzip'], { 'before' => Staging::File['consul.zip'] })

which breaks interoperability since if you have

ensure_packages(['unzip'])

anywhere else, it's a redeclaration error (because the attributes differ). Minimal code that shows a similar problem:

puppet apply -e 'ensure_packages(["unzip"]) ensure_packages(["unzip"], {stage=>"main"})'

Output excerpt:

Error: Duplicate declaration: Package[unzip] is already declared

Consul init scripts sometimes not installed in the correct order

Because we install the init scripts in install.pp, the init script can get put in place and then the package installed over the top of it, blowing away the init script from the module. If the installed package ships a slightly different init script, this can leave the service running under unexpected conditions (as I have experienced). The root cause is that there is no dependency between the package installation and the placing of the init script.

The easiest fix would be to move the logic from install.pp into either config.pp or run_service.pp. This would ensure the dependency works out correctly because of the ordering in init.pp; an alternative is sketched below.
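
If moving the logic is not desired, the missing edge could also be expressed as an explicit relationship; a sketch with illustrative resource titles:

# Sketch of the missing ordering (resource titles are illustrative): make
# sure the package lands before the module drops its init script on top.
Package['consul'] -> File['/etc/init.d/consul']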

Configuring consul client nodes

Can you point me in the right direction for configuring a consul client node? I'd like to add a consul agent and configure it to register a service and track its health.

Thanks!
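
A hedged sketch assembled from the other examples in this issue list: a client agent that joins existing servers and registers one service with a health check (the address, port, and script path are placeholders):

# Sketch of a client node: join existing servers and register one service
# with a script check. Address, port, and script path are placeholders.
class { 'consul':
  config_hash => {
    'datacenter' => 'dc1',
    'data_dir'   => '/opt/consul',
    'server'     => false,
    'retry_join' => ['consul-server.example.org'],
  },
}

consul::service { 'myapp':
  port   => 8080,
  checks => [
    {
      script   => '/usr/local/bin/check_myapp',
      interval => '30s',
    },
  ],
}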

Debian: /var/run/consul/consul.pid user affinity inconsistent

The init template for Debian uses start-stop-daemon to create a pidfile owned by root (the user running the init script): https://github.com/solarkennedy/puppet-consul/blob/master/templates/consul.debian.erb#L57

But:

It passes the -pid-file parameter along to the consul agent. See: https://github.com/solarkennedy/puppet-consul/blob/master/templates/consul.debian.erb#L21

This causes the launch of the consul agent to fail, since it cannot write to /var/run/consul/consul.pid while running as consul:consul.

There are 2 solutions:

  • Make /var/run/consul/consul.pid writeable by the consul user, which runs consul agent
  • Let start-stop-daemon write the pid to /var/run/consul/consul.pid (this means removing the pid-file param from consul agent)

umask feature breaks CentOS init scripts

The newly added umask feature causes failures on startup with the following error:

/etc/init.d/consul: Usage: daemon [+/-nicelevel] {program} [FAILED]

This happens with CentOS 6.5 and 6.6

Add support for joining multiple datacenters

It would be nice if this module allowed joining a cluster to another datacenter. This is accomplished via consul join -wan <server1> <server2>. I think it could be implemented very similarly to the existing join_cluster parameter.

Are you open to this feature? I would be happy to try and put together a PR.
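
A hypothetical sketch of what the interface could look like, mirroring join_cluster (the join_wan parameter name is an assumption):

# Hypothetical sketch: a WAN counterpart to join_cluster. The join_wan
# parameter is assumed and does not exist in the module today.
class { 'consul':
  join_wan    => ['dc2-server1.example.org', 'dc2-server2.example.org'],
  config_hash => {
    'datacenter' => 'dc1',
    'data_dir'   => '/opt/consul',
    'server'     => true,
  },
}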

join_cluster doesn't seem to work in some cases

In the following example, which sets up a client agent, join_cluster doesn't work (but including start_join does).

Note: using join_cluster works fine for creating my server cluster in this same environment.

#doesn't work:
class { 'consul':
    join_cluster => hiera('join_addr'),
    config_hash => {
        'datacenter' => 'dc1',
        'data_dir'   => '/opt/consul',
        'log_level'  => 'INFO',
        'node_name'  => $::hostname,
        'bind_addr'  => $::ipaddress_eth1,
        'server'     => false,
    }
}

#works
class { 'consul':
    config_hash => {
        'datacenter' => 'dc1',
        'data_dir'   => '/opt/consul',
        'log_level'  => 'INFO',
        'node_name'  => $::hostname,
        'bind_addr'  => $::ipaddress_eth1,
        'server'     => false,
        'start_join' => [hiera('join_addr')],
    }
}

Is this a bug? Is there some reason I should be joining the cluster differently as a client than as a server?

Cyclic dependency in Hiera-based config

  • ruby 1.9.3p194
  • puppet 3.7.3

YAML-based Hiera setup, applied in order (deeper merge in Hiera):

Main config shared everywhere (init_style = false because we have a local .deb repo where the init script is already taken care of when installing consul):

consul::config_dir: '/etc/consul.d'
consul::install_method: 'package'
consul::init_style: false
consul::ui_package_name: 'consul-web-ui'
consul::package_name: 'consul'
consul::config_hash:
  datacenter: 'datacentername'
  data_dir: '/var/lib/consul'
  domain: 'dcs.consul'
  retry_join: 
    - '10.0.0.1'
    - '10.0.0.2'
    - '10.0.0.3'

Consul-master config:

consul::config_hash:
  log_level: 'INFO'
  server: true
  bootstrap_expect: 3
  ui_dir: '/usr/share/consul/web-ui'
  client_addr: '0.0.0.0'

consul::services:
  'zookeeper':
    tags: [ 'mesos' , 'master' ]
    port: 2181

Apply results in:

Error: Could not apply complete catalog: Found 1 dependency cycle:
(File[/etc/consul.d/service_zookeeper.json] => Class[Consul::Run_service] => Service[consul] => Class[Consul::Run_service] => Class[Consul] => Consul::Service[zookeeper] => File[/etc/consul.d/service_zookeeper.json])

Unless I'm doing something wrong in Hiera of course...

new function sorted_json does not work if keys are set to undef

In the case where hash keys are explicitly set to undef, the following exception is encountered:

       undefined method `Exception' for #<Puppet::Parser::Scope:0x007fe77ec64d38> at /Users/danbode/dev/reliance/apply_resources/rjil/spec/fixtures/modules/consul/manifests/config.pp:35 on node danslaptop-2.local
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:22:in `sorted_json'
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:16:in `block in sorted_json'
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:15:in `each'
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:15:in `sorted_json'
     # ./spec/fixtures/modules/consul/lib/puppet/parser/functions/consul_sorted_json.rb:42:in `block in <module:Functions>'
     # ./spec/classes/jiocloud_spec.rb:37:in `block (3 levels) in <top (required)>'
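
A minimal reproduction sketch (the key set to undef is illustrative):

# Minimal reproduction sketch: a config_hash key explicitly set to undef,
# which the current consul_sorted_json cannot serialise. The key chosen
# here is illustrative.
class { 'consul':
  config_hash => {
    'data_dir' => '/opt/consul',
    'ui_dir'   => undef,
  },
}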

Invalid resource type staging::file

Hi,

After successfully running ./deps.sh and build.sh, doing vagrant up halts with a puppet error:
nicu@nmarasoiu:~/tools/consul-cluster-puppet$ vagrant up
Bringing machine 'consul0' up with 'virtualbox' provider...
Bringing machine 'consul1' up with 'virtualbox' provider...
Bringing machine 'consul2' up with 'virtualbox' provider...
Bringing machine 'webui' up with 'virtualbox' provider...
Bringing machine 'demo' up with 'virtualbox' provider...
Bringing machine 'foo0' up with 'virtualbox' provider...
Bringing machine 'foo1' up with 'virtualbox' provider...
==> consul0: VirtualBox VM is already running.
==> consul1: VirtualBox VM is already running.
==> consul2: Importing base box 'trusty64'...
==> consul2: Matching MAC address for NAT networking...
==> consul2: Setting the name of the VM: consul-cluster-puppet_consul2_1431592945090_95794
==> consul2: Clearing any previously set forwarded ports...
==> consul2: Fixed port collision for 22 => 2222. Now on port 2202.
==> consul2: Clearing any previously set network interfaces...
==> consul2: Preparing network interfaces based on configuration...
consul2: Adapter 1: nat
consul2: Adapter 2: hostonly
==> consul2: Forwarding ports...
consul2: 22 => 2202 (adapter 1)
==> consul2: Running 'pre-boot' VM customizations...
==> consul2: Booting VM...
==> consul2: Waiting for machine to boot. This may take a few minutes...
consul2: SSH address: 127.0.0.1:2202
consul2: SSH username: vagrant
consul2: SSH auth method: private key
consul2: Warning: Connection timeout. Retrying...
consul2: Warning: Remote connection disconnect. Retrying...
consul2:
consul2: Vagrant insecure key detected. Vagrant will automatically replace
consul2: this with a newly generated keypair for better security.
consul2:
consul2: Inserting generated public key within guest...
consul2: Removing insecure key from the guest if its present...
consul2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> consul2: Machine booted and ready!
==> consul2: Checking for guest additions in VM...
==> consul2: Setting hostname...
==> consul2: Configuring and enabling network interfaces...
==> consul2: Mounting shared folders...
consul2: /vagrant => /home/nicu/tools/consul-cluster-puppet
consul2: /tmp/vagrant-puppet/modules-6a51abd267c5ea0234a37cae97c7e310 => /home/nicu/tools/consul-cluster-puppet/puppet/modules
consul2: /tmp/vagrant-puppet/manifests-768747907b90c39ab6f16fcb3320897a => /home/nicu/tools/consul-cluster-puppet/puppet
==> consul2: Running provisioner: puppet...
==> consul2: Running Puppet with server.pp...
==> consul2: stdin: is not a tty
==> consul2: Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type staging::file at /tmp/vagrant-puppet/modules-6a51abd267c5ea0234a37cae97c7e310/consul/manifests/install.pp:23 on node consul2.local
==> consul2: Wrapped exception:
==> consul2: Invalid resource type staging::file
==> consul2: Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type staging::file at /tmp/vagrant-puppet/modules-6a51abd267c5ea0234a37cae97c7e310/consul/manifests/install.pp:23 on node consul2.local

Host OS: Ubuntu 14.04 LTS
uname: 3.13.0-52-generic #86-Ubuntu SMP Mon May 4 04:32:59 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Vagrant version: 1.7.2

Please advise,
Nicu

delete_undef_values requires stdlib 4.2.0, dependency not set properly

Hey guys, first of all: I appreciate all the hard work you're doing here, this module is great.

I noticed a small problem when pulling from master (specifically, I'm working off of 0bbbea6). When pulling the module via librarian-puppet, it checks the dependencies in the metadata.json file. I currently have puppetlabs-stdlib 4.1.0 installed, which passed the dependency check, because metadata.json claims this module only requires puppetlabs-stdlib version 0.1.6 or greater. However, the new function delete_undef_values introduced in this commit requires at least puppetlabs-stdlib 4.2.0.

It seems that the metadata.json file needs to be updated to reflect that fact. I can submit a PR if you'd like; I'm more than willing to do that.

Thanks!

Meta stuff not up to snuff

It had been a while since I'd been to the Forge; I didn't realize they were exposing quality metrics!

https://forge.puppetlabs.com/KyleAnderson/consul/scores

Lint Results: 0 errors, 6 warnings, and 0 notices.

 Double quoted string containing no variables - 8 occurrences.
 Ensure found on line but it's not the first attribute - 1 occurrence.
 Indentation of => is not properly aligned - 13 occurrences.
 Selector inside resource block - 1 occurrence.
 String containing only a variable - 3 occurrences.
 Unquoted file mode - 1 occurrence.


Metadata Quality: 1.5
Metadata Quality: 1 error, 2 warnings, and 3 notices.

 Does not contain os_support information.
 Dependencies contain unbounded ranges.
 Unrecognized license in metadata.
 Issues url verified.
 Project page verified.
 Source url verified.

I should work on this junk.

Path to /home/kyle is hard-coded somewhere

I'm trying to install the consul module on my OS X laptop using librarian-puppet, and the puppet module install command fails, citing a non-existent path of /home/kyle.

Librarian puppet is generating the following module install command: https://gist.github.com/bhourigan/6fc71c3506fdd9cede81

Output of running it with --debug:
https://gist.github.com/bhourigan/8c3b9c1e72ad54e4fc66

I must be missing something important, as I can't find a telling occurrence of the substrings 'home' or 'kyle'. Running the puppet module installer under dtruss gave me more information about the full path, but alas, it hasn't given me more insight into where the problem lies.

Debug command: sudo dtruss -b 32m -f <command in gist 6fc71c3506fdd9cede81>

Interesting snippet (full output available upon request):
65195/0x3bfd22: symlink("/home/kyle/Projects/puppet_modules/puppet-consul\0", "KyleAnderson-consul-0.4.2/spec/fixtures/modules/consul\0") = 0 0

I'm running ruby 2.0.0p481, and according to https://docs.puppetlabs.com/guides/platforms.html#ruby-versions it is supported.
