voxpupuli / puppet-system
Manage Linux system resources and services from hiera configuration
Home Page: https://forge.puppet.com/puppet/system
License: Apache License 2.0
It would be nice to have the option to also include these variables in ifcfg-ethX for manual IPv6 configuration:
IPV6INIT=yes
USERCTL=no
IPV6_AUTOCONF=no
IPV6ADDR=
IPV6ADDR_SECONDARIES=
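A hypothetical hiera shape for the requested feature might look like the following. None of these keys exist in the module today; both the hash name and the ipv6* parameters are only a sketch of the requested interface:

```yaml
system::network::interfaces:
  eth0:
    ipv6init: true              # would emit IPV6INIT=yes
    userctl: false              # would emit USERCTL=no
    ipv6_autoconf: false        # would emit IPV6_AUTOCONF=no
    ipv6addr: '2001:db8::10/64' # would emit IPV6ADDR=...
    ipv6addr_secondaries:       # would emit IPV6ADDR_SECONDARIES=...
      - '2001:db8::11/64'
```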
/etc/puppetlabs/code/environments/production/data/common.yaml:
system::templates:
/etc/motd:
owner: root
group: root
mode: '0644'
template: '/etc/puppetlabs/code/environments/production/site/profile/templates/motd.erb'
/etc/puppetlabs/code/environments/production/site/profile/templates/motd.erb
# Hostname : <%= @fqdn %>
####################################
I've read & consent to terms in IS user agreem't.
The file is not updated between 23:00 and 00:00. I believe this is due to the following line:
puppet-system/manifests/schedules.pp
Line 15 in 93d9c6b
If I change that line to range => '0:00 - 23:59',
it executes correctly.
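For reference, the proposed fix as a standalone schedule resource (a sketch of the corrected range, not the module's actual code):

```puppet
# A daily schedule whose range covers the whole day,
# including the 23:00-00:00 window
schedule { 'always':
  period => 'daily',
  range  => '0:00 - 23:59',
}
```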
File contents to be replaced with template
Hello erwbgy,
If I use the future parser I will get an error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Evaluation Error: Error while evaluating a Function Call, create_resources(): second argument must be a hash at /etc/puppet/environments/development/modules/system/manifests/yumrepos.pp:16:7 on node
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
My Env:
Puppet Master: 3.7.1
Puppet Agent: 3.7.1
Thank you.
Regards,
Florian
The problem affects any users of system::network::dns with search or options.
The template is incorrect and produces malformed output. Given the following hiera data:
system::network::dns:
nameservers: [ 8.8.8.8 ]
domains:
- internal.domain.eu
- domain.eu
options:
- optionA
- optionB
the module produces the following resolv.conf, which is not valid resolv.conf syntax:
# File managed by Puppet
nameserver 8.8.8.8
search internal.domain.eu
search domain.eu
options optionA
options optionB
This is incorrect behavior: the resolver honors only the last search and last options line, so the earlier ones are ignored. That is, in the above case, the internal.domain.eu domain will not be searched.
Rather, according to the resolv.conf(5) man page, the produced file should be:
# File managed by Puppet
nameserver 8.8.8.8
search internal.domain.eu domain.eu
options optionA optionB
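One way to get that output is to join the arrays inside the template rather than iterating over them. A minimal ERB sketch, assuming the template receives @nameservers, @domains, and @options as arrays (the actual variable names in the module's template may differ):

```erb
# File managed by Puppet
<% @nameservers.each do |ns| -%>
nameserver <%= ns %>
<% end -%>
<% unless @domains.empty? -%>
search <%= @domains.join(' ') %>
<% end -%>
<% unless @options.empty? -%>
options <%= @options.join(' ') %>
<% end -%>
```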
I am a bit new to hiera. Does anyone know if it's possible to use system::network when you need to set the network IP on multiple nodes (a cluster), with the values pulled from hiera as in my example below? Is there a way to accomplish this using this module?
I was hoping for something like this:
system::network:
server1:
interface: 'eth0'
ipaddress: '10.10.10.10'
netmask: '255.255.255.0'
server2:
interface: 'eth0'
ipaddress: '10.10.10.20'
netmask: '255.255.255.0'
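One common approach is to add a per-node level such as "nodes/%{::fqdn}" to the hiera hierarchy and give each node its own file, instead of nesting all nodes under one key. A sketch, reusing the keys from the example above (the exact hash name the module reads should be checked against its documentation):

```yaml
# data/nodes/server1.example.com.yaml
system::network:
  interface: 'eth0'
  ipaddress: '10.10.10.10'
  netmask: '255.255.255.0'
```

with a matching data/nodes/server2.example.com.yaml carrying 10.10.10.20.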
Add support for processing templates in content values as you can with normal file resources.
Hi,
So I'm using hiera as per some of the examples, and attempting to install a package that requires EPEL to be installed first. I seem to have found myself in a bit of a loop, or maybe a misunderstanding of how require should work.
Basic hiera snippet:
site.pp:
node default {
include stdlib
hiera_include('classes','')
}
myhost.yaml:
---
classes: [' system ']
system::yumrepos:
epel:
mirrorlist: 'http://mirrors.fedoraproject.org/mirrorlist?repo=epel-${::os_maj_version}&arch=\$basearch'
gpgcheck: '0'
enabled: '1'
redis:
ensure: installed
require: Yumrepo[ 'epel' ]
When I run puppet agent on a client, I get:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid relationship: Package[redis] { require => Yumrepo[ epel ] }, because Yumrepo[ epel ] doesn't seem to be in the catalog
My assumption is that because I'm pointing a require at a Yumrepo that is being defined in hiera, it doesn't see it in the catalog?
Any advice?
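For the require to resolve, Yumrepo['epel'] has to end up in the same catalog. One thing worth trying is declaring the package through the module's own hash, so both resources are created by the same mechanism. A sketch, assuming a system::packages key that mirrors system::yumrepos (check the module docs for the exact key name):

```yaml
system::yumrepos:
  epel:
    mirrorlist: 'http://mirrors.fedoraproject.org/mirrorlist?repo=epel-${::os_maj_version}&arch=\$basearch'
    gpgcheck: '0'
    enabled: '1'
system::packages:
  redis:
    ensure: 'installed'
    require: 'Yumrepo[epel]'
```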
Hi, this is a very basic modification; I'm sure you can find a more elegant way to get it done while still validating the MAC address, but without requiring it for aliased interfaces.
Here is my svn diff, from the puppetlabs module to what I just modified.
--- system/manifests/network/interface.pp (revision 3)
+++ system/manifests/network/interface.pp (working copy)
@@ -25,9 +25,9 @@
else {
$hwaddr = inline_template("<%= scope.lookupvar('macaddress${_interface}') %>")
}
DEVICE=<%= @_interface %>
BOOTPROTO=<% if @_dhcp %>dhcp<% else %>none<% end %>
-HWADDR=<%= @_hwaddr %>
+<% if @_hwaddr %>HWADDR=<%= @_hwaddr %><% else %><% end %>
ONBOOT=<% if @_onboot %>yes<% else %>no<% end %>
HOTPLUG=<% if @_hotplug %>yes<% else %>no<% end %>
TYPE=<%= @_type %>
The system module currently requires a very minimal ntp provider. There are many, many other ntp providers which have significantly more features. Please make your module optional for this, not required.
Furthermore, the ntp.pp manifest invokes any top-level ntp class, which can lead to confusing and hilarious results.
If you really want to include ntp, why not remove the dependency from the module metadata and just require that any top-level ntp module be installed, passing all options through to it? You're pretty much there already. You could just blindly pass all parameters through, allowing the user to use any ntp module they want, whether yours or someone else's.
If you just do an include system as the documentation suggests, you run into some trouble.
e.g.
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Must pass gateway to Class[System::Network] at /etc/puppet/modules/system/manifests/init.pp:74
// Michael
The system::sysctl configuration currently only updates /etc/sysctl.conf. It should also check the current sysctl settings and dynamically update them if required.
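A minimal sketch of the runtime half of that idea, using an exec guarded by the current value (the key and value below are illustrative only):

```puppet
# Illustrative only: apply net.ipv4.ip_forward=1 immediately,
# in addition to persisting it in /etc/sysctl.conf
exec { 'sysctl-net.ipv4.ip_forward':
  command => '/sbin/sysctl -w net.ipv4.ip_forward=1',
  unless  => '/usr/bin/test "$(/sbin/sysctl -n net.ipv4.ip_forward)" = "1"',
}
```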
system::sysconfig::selinux
Changing the state breaks the symbolic link to /etc/selinux/config, which stops SELinux from being correctly configured.
class:
class os_build::l1::hosts{
notify {"*** Applying ${name} ***":}
contain system::hosts
}
hiera data:
system::hosts:
puppet:
ensure: 'present'
ip: "10.0.0.3"
host_aliases: []
The code I had to change to get this working was to comment out the lines that reference "sys_schedule", as shown below:
class system::hosts (
$config = undef,
# $sys_schedule = 'always',
) {
$defaults = {
ensure => 'present',
# schedule => $sys_schedule,
}
if $config {
create_resources(host, $config, $defaults)
}
else {
$hiera_config = hiera_hash('system::hosts', undef)
if $hiera_config {
create_resources(host, $hiera_config, $defaults)
}
}
}
The following error message is displayed instead of the entries being added to /etc/hosts:
Error: Failed to apply catalog: Could not find schedule always
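Another workaround may be to make sure the class that defines the schedule is evaluated, rather than commenting the parameter out (this assumes the module defines 'always' in a system::schedules class, per its schedules.pp; adjust to the real class name):

```puppet
# Pull in the schedule definitions before the host resources are applied
include system::schedules
contain system::hosts
```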
in hiera:
system::ntp::servers:
in module:
include system::ntp
Error while evaluating a Resource Statement, Class[Ntp]: has no parameter named 'iburst'
Hiera_hash (all hiera_* functions)
This function is deprecated in favor of the lookup function. While this function continues to work, it does not support:
lookup_options stored in the data
lookup across global, environment, and module layers
The data merge doesn't work across the hiera data layers (environment -> module): it can only look up environment data, and module data doesn't work.
lookup can merge across the global -> environment -> module data layers.
I can replace https://github.com/voxpupuli/puppet-system/blob/master/manifests/users.pp
$hiera_config = hiera_hash('system::users', undef)
by
$hiera_config = lookup( { 'name' => 'system::users',
'merge' => {
'strategy' => 'deep',
},
})
The lookup then works fine.
So I guess all hiera_* functions need to be replaced by the lookup function.
Having this yaml:
system::hosts:
host1:
ip: '1.2.3.4'
host_aliases: [ 'system.example.com', 'system' ]
host2:
ip: '1.2.3.5'
host_aliases: [ 'site.example.com' ]
It produces an /etc/hosts that always contains aliases ordered from shortest to longest, without keeping the order specified in host_aliases. As a result, the puppet agent thinks the domain is not set when starting, because the hostname resolves to the shortest alias (system in this example).
facter --puppet | grep domain
=> should be example.com
Is this a bug? How can I guarantee the order, with the longest alias first?
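If the ordering really cannot be preserved, one workaround may be to make the fully-qualified name the resource title, so it becomes the canonical name in /etc/hosts, and keep only the short name as an alias. A sketch:

```yaml
system::hosts:
  system.example.com:
    ip: '1.2.3.4'
    host_aliases: ['system']
```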
Hi,
The idea is to update a config file using a system::templates resource that triggers a system::services resource via subscribe, but it fails because system::templates is in the last stage and so cannot be applied before system::services.
Example of a hiera yaml to show the limitation:
system::templates:
'/etc/ssh/sshd_config':
owner: 'root'
group: 'root'
mode: '0600'
template: "system/sshd_config.erb"
system::services:
'sshd':
ensure: 'running'
subscribe: 'File[/etc/ssh/sshd_config]'
To fix the issue we updated init.pp and made system::services be ordered in the last stage as well, but I would like to understand why system::templates is ordered in the last stage while system::services is not.
Regards,
Rafael
Consider making system::packages virtual like users and groups. They can then be declared in one place and realized as many times as required which should help to avoid conflicts and messy 'if defined' checks.
stdlib 8.3.0 now includes the stdlib::manage class, which can replace a lot of the type-specific logic in use within this module.
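For example, file and package entries similar to this module's could be fed through stdlib::manage's create_resources hash. This is a sketch based on my reading of that class; check the stdlib documentation for the exact key names:

```yaml
stdlib::manage::create_resources:
  file:
    /etc/motd:
      ensure: 'file'
      owner: 'root'
      group: 'root'
      mode: '0644'
  package:
    redis:
      ensure: 'installed'
```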
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::augeasproviders for (file: /etc/puppetlabs/code/environments/development/modules/system/manifests/sysctl.pp, line: 15, column: 7) on node
Hi,
Kindly guide me in configuring this in Foreman 1.2.
How can I create a file on a target Puppet client using Foreman 1.2 and this module? Specifically, how can I pass the parameters in the configuration?
The puppet-ntp dependency is only allowed on RedHat systems, causing installation to fail on non-RedHat systems.
Please update the README to reflect this limitation, preferably with a HUGE warning sign to save others' time. Or, please update the README to describe a workaround.
I've only been using Puppet for less than a week, so there may be something I don't know.
Running 3.5.1.1 and version 7.4 of the system module.
For example, if I have a node-level yaml file (just testing at this point) with nothing but this:
class:
I get this error with the node's client runs:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find data item system::augeas in any Hiera data file and no default supplied at /etc/puppet/modules/system/manifests/augeas.pp:12 on node jobs-6f-vm-01q.xxx.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
This remains the case if I set system::augeas::schedule: 'never' in the node's yaml or in common.yaml, or if I include system in the site.pp instead of in hiera.
To get around this I can set the class to the submodule I want to use, e.g. system::augeas, and then ensure that I have at least one config defined.
My expectation is that I can include all of system for my nodes and then enable submodules by creating configs at the relevant point in my hiera hierarchy.
Is this a bug, or am I guilty of improper usage? Thanks.
Hi all,
Thanks for the updates to this module; unfortunately we're experiencing some difficulties. It appears the install of 0.8.0 doesn't work on RedHat; we get the message "Error: Could not install module 'puppet-system' (???) No version of 'puppet-system' can satisfy all dependencies".
I've tried to locate the required dependencies, but herculesteam-augeasproviders seems really old, and when specifying 1.0.0 on the puppet module install command it says one can't be found.
[root@dbsms01 puppet]# puppet module list --tree --modulepath=./modules/live/
Warning: Module 'herculesteam-augeasproviders' (v2.1.3) fails to meet some dependencies:
'puppet-system' (v0.8.0) requires 'herculesteam-augeasproviders' (>= 0.5.1 < 1.0.0)
/etc/puppetlabs/puppet/modules/live
├── facts (???)
├── profile (???)
├── role (???)
└─┬ puppet-system (v0.8.0)
├── puppetlabs-stdlib (v4.24.0)
├── puppetlabs-concat (v4.1.1)
├── erwbgy-limits (v0.3.1)
├── erwbgy-ntp (v0.7.3)
└─┬ herculesteam-augeasproviders (v2.1.3) invalid
The error we're seeing is this:
Error: Evaluation Error: Error while evaluating a Function Call, could not create resource of unknown type user at /etc/puppetlabs/puppet/modules/live/system/manifests/users.pp:23:7 on node dbsms01.test.nl.local
/etc/puppetlabs/puppet/modules/live/system/lib/puppet/parser/functions/system_create_resources.rb:66:in `block in <top (required)>'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/parser/functions.rb:174:in `block (2 levels) in newfunction'
It looks like the user resource isn't being passed or there is something missing.
Thanks
Dave
It seems this is not currently possible.
If you like, I can submit a patch.
Is there a reason for this, or has it just not been added yet? The NTP section behaves this way; I think SSHD should as well.