- Mastering Puppet (Second Edition)
- Thomas Uphill
Organizing the nodes with an ENC
An ENC is a process that is run on the Puppet master, or on the host compiling the catalog, to determine which classes are applied to the node. The most common form of ENC is a script run through the exec node terminus. When using the exec node terminus, the script can be written in any language, and it receives the certname (certificate name) of the node as a command-line argument. In most cases, this will be the Fully Qualified Domain Name (FQDN) of the node. We will assume that the certname setting has not been explicitly set and that the FQDN of our nodes is being used.
We will only use the hostname portion, as the FQDN can be unreliable in some instances. Your enterprise's naming convention should not allow multiple machines to share the same hostname. The FQDN is determined by a fact; this fact is the union of the hostname fact and the domain fact. The domain fact on Linux is determined by running the hostname -f command. If DNS is not configured correctly or reverse records do not exist, the domain fact will not be set, and the FQDN will not be set either, as shown:
# facter domain
example.com
# facter fqdn
node1.example.com
# mv /etc/resolv.conf /etc/resolv.conf.bak
# facter domain
# facter fqdn
#
The output of the ENC script is a YAML document that defines the classes, variables, and environment for the node. Unlike site.pp, the ENC script can only assign classes, make top-scope variables, and set the environment of the node. The environment can only be set by the ENC on Puppet versions 3 and above.
A simple example
To use an ENC, we need to make one small change on our Puppet master machine. We'll have to add the node_terminus and external_nodes lines to the [master] section of puppet.conf, as shown in the following code (we only need to make this change on the master machines, as this setting concerns catalog compilation only):
[master]
node_terminus = exec
external_nodes = /usr/local/bin/simple_node_classifier
Note
The puppet.conf files need not be the same across our installation; Puppet masters and CA machines can have different settings. Having different configuration settings is advantageous in a Master-of-Master (MoM) configuration, where a top-level Puppet master machine is used to provision all of the Puppet master machines.
Our first example, shown in the following code snippet, will be written in Ruby and live in the file /usr/local/bin/simple_node_classifier:
#!/bin/env ruby
require 'yaml'

# create an empty hash to contain everything
@enc = Hash.new
@enc["classes"] = Hash.new
@enc["classes"]["base"] = Hash.new
@enc["parameters"] = Hash.new
@enc["environment"] = 'production'

# convert the hash to yaml and print
puts @enc.to_yaml
exit(0)
Make this script executable and test it on the command line, as shown in the following example:
# chmod 755 /usr/local/bin/simple_node_classifier
# /usr/local/bin/simple_node_classifier
---
classes:
  base: {}
environment: production
parameters: {}
Puppet version 4 no longer requires the Ruby system package; Ruby is installed in /opt/puppetlabs/puppet/bin. The preceding script relies on Ruby being found in the current $PATH. If Ruby is not in the current $PATH, either modify your $PATH to include /opt/puppetlabs/puppet/bin or install the Ruby system package.
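For example, one quick way to make the bundled Ruby visible in the current shell session might be the following (this assumes the default AIO install location):

# assumes the default AIO install location for Puppet 4
export PATH=/opt/puppetlabs/puppet/bin:$PATH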
The previous script returns a properly formatted YAML document. YAML documents start with three dashes (---); they use colons (:) to separate parameters from values and hyphens (-) to separate multiple values (arrays). For more information on YAML, visit http://www.yaml.org/.
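For instance, a classifier that assigned multiple classes as an array, rather than as a hash, might emit something like the following (a hand-written illustration of the syntax, not output from our scripts):

---
classes:
- base
- web
environment: production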
If you use a language such as Ruby or Python, you do not need to know the syntax of YAML, as the built-in libraries take care of the formatting for you. The following is the same example in Python. To use the Python example, you will need to install PyYAML, the Python YAML library, using the following command:
# yum install PyYAML
Installed:
  PyYAML.x86_64 0:3.10-3.el6
The Python version starts with an empty dictionary. We then use sub-dictionaries to hold the classes, parameters, and environment. We will call our Python example /usr/local/bin/simple_node_classifier_2, as follows:
#!/bin/env python
import yaml
import sys

# create an empty hash
enc = {}
enc["classes"] = {}
enc["classes"]["base"] = {}
enc["parameters"] = {}
enc["environment"] = 'production'

# output the ENC as yaml
print "---"
print yaml.dump(enc)
sys.exit(0)
Make /usr/local/bin/simple_node_classifier_2 executable and run it using the following commands:
worker1# chmod 755 /usr/local/bin/simple_node_classifier_2
worker1# /usr/local/bin/simple_node_classifier_2
---
classes:
  base: {}
environment: production
parameters: {}
The order of the lines following --- may be different on your machine; the order is not specified when Python dumps the hash of values.
The Python script outputs the same YAML as the Ruby code. We will now define the base class referenced in our ENC script, as follows:
class base {
  file {'/etc/motd':
    mode    => '0644',
    owner   => '0',
    group   => '0',
    content => inline_template("Managed Node: <%= @hostname %>\nManaged by Puppet version <%= @puppetversion %>\n"),
  }
}
Now that our base class is defined, modify the external_nodes setting to point at the Python ENC script. Restart puppetserver to ensure that the change is implemented.
Now, run Puppet on the client node. Notice that the message of the day (/etc/motd) has been updated using an inline template, as shown in the following command-line output:
[thomas@client ~]$ sudo puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for client
Info: Applying configuration version '1441950102'
Notice: /Stage[main]/Base/File[/etc/motd]/ensure: defined content as '{md5}df3dfe6fe2367e36f0505b486aa24da5'
Notice: Applied catalog in 0.05 seconds
[thomas@client ~]$ cat /etc/motd
Managed Node: client
Managed by Puppet version 4.2.1
Since the ENC is only given one piece of data, the certname (FQDN), we need to create a naming convention that provides us with enough information to determine the classes that should be applied to the node.
Hostname strategy
In an enterprise, it's important that your hostnames are meaningful. By meaningful, I mean that the hostname should give you as much information as possible about the machine. When you encounter a machine in a large installation, it is likely that you did not build the machine. You need to be able to know as much as possible about the machine just from its name. The following key points should be readily determined from the hostname:
- Operating system
- Application/role
- Location
- Environment
- Instance
It is important that the convention should be standardized and consistent. In our example, let us suppose that the application is the most important component for our organization, so we put that first and the physical location comes next (which data center), followed by the operating system, environment, and instance number. The instance number will be used when you have more than one machine with the same role, location, environment, and operating system. Since we know that the instance number will always be a number, we can omit the underscore between the operating system and environment; thus, making the hostname a little easier to type and remember.
Your enterprise may have more or less information, but the principle will remain the same. To delineate our components, we will use underscores (_). Some companies rely on a fixed length for each component of the hostname, so as to mark the individual components of the hostname by position alone.
In our example, we have the following environments:
- p: This stands for production
- n: This stands for non-production
- d: This stands for development/testing/lab
Our applications will be of the following types:
- web
- db
Our operating system will be Linux, which we will shorten to just l, and our location will be our main datacenter (main). So, a production web server on Linux in the main datacenter will have the hostname web_main_lp01.
Note
If you think you are going to have more than 99 instances of any single service, you might want to add another leading zero to the instance number (001).
Based only on the hostname, we know that this is a web server in our main datacenter. It's running on Linux and it's the first such machine in production. Now that we have this nice convention, we need to modify our ENC to utilize this convention to glean all the information from the hostname.
Modified ENC using hostname strategy
We'll build our Python ENC script (/usr/local/bin/simple_node_classifier_2) and update it to use the new hostname strategy, as follows:
#!/bin/env python
# Python ENC
# receives fqdn as argument

import yaml
import sys

def output_yaml(enc):
    """output_yaml renders the hash as yaml and exits cleanly"""
    # output the ENC as yaml
    print "---"
    print yaml.dump(enc)
    quit()
Python is very particular about spacing; if you are new to Python, take care to copy the indentations exactly as given in the previous snippet.
We define a function to print the YAML and exit the script. We'll exit the script early if the hostname doesn't match our naming standards, as shown in the following example:
# create an empty hash
enc = {}
enc["classes"] = {}
enc["classes"]["base"] = {}
enc["parameters"] = {}

try:
    hostname = sys.argv[1]
except:
    # need a hostname
    sys.exit(10)
We exit the script early if the hostname is not defined. Receiving the certname is the minimum requirement, so we should never reach this point in practice.
We first split the hostname using underscores (_) into an array called parts, and then assign indexes of parts to role, location, os, environment, and instance, as shown in the following code snippet:
# split hostname on _
try:
    parts = hostname.split('_')
    role = parts[0]
    location = parts[1]
    os = parts[2][0]
    environment = parts[2][1]
    instance = parts[2][2:]
We are expecting hostnames to conform to the standard. If you cannot guarantee this, then you will have to use something like the re (regular expression) module to deal with exceptions to the naming standard:
except:
    # hostname didn't conform to our standard
    # include a class which notifies us of the problem
    enc["classes"]["hostname_problem"] = {'enc_hostname': hostname}
    output_yaml(enc)
    raise SystemExit
We wrapped the previous assignments in a try statement. In this except statement, we exit after printing the YAML, assigning a class named hostname_problem. This class will be used to alert us, in the console or reporting system, that the host has a problem. We send the enc_hostname parameter to the hostname_problem class with the {'enc_hostname': hostname} code.
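If you need to parse nonconforming hostnames rather than just flag them, a validation step built on the re module might look something like the following sketch (the pattern encodes our role_location_os+environment+instance convention and is only an illustration, not part of our classifier as written):

import re

# role_location_os+environment+instance, for example web_main_lp01
PATTERN = re.compile(r'^(?P<role>[a-z]+)_(?P<location>[a-z]+)_'
                     r'(?P<os>[a-z])(?P<environment>[a-z])(?P<instance>\d+)$')

match = PATTERN.match(hostname)
if match:
    role = match.group('role')
    location = match.group('location')
    os = match.group('os')
    environment = match.group('environment')
    instance = match.group('instance')
else:
    # fall back to flagging the node, as in the except block above
    enc["classes"]["hostname_problem"] = {'enc_hostname': hostname}
    output_yaml(enc)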
The environment is a single character in the hostname; hence, we use a dictionary to assign a full name to the environment, as shown here:
# map environment from hostname into environment
environments = {}
environments['p'] = 'production'
environments['n'] = 'nonprod'
environments['d'] = 'devel'
environments['s'] = 'sbx'
try:
    enc["environment"] = environments[environment]
except:
    enc["environment"] = 'undef'
The following is used to map the role portion of the hostname to a class of the same name:
# map role from hostname into role
enc["classes"][role] = {}
Next, we assign top-scope variables to the node based on the values we obtained from the parts array previously:
# set top scope variables
enc["parameters"]["enc_hostname"] = hostname
enc["parameters"]["role"] = role
enc["parameters"]["location"] = location
enc["parameters"]["os"] = os
enc["parameters"]["instance"] = instance

output_yaml(enc)
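With all the pieces in place, running the classifier by hand against a conforming hostname should produce output along the following lines (as noted earlier, the key order may differ on your machine):

# /usr/local/bin/simple_node_classifier_2 web_main_lp01
---
classes:
  base: {}
  web: {}
environment: production
parameters:
  enc_hostname: web_main_lp01
  instance: '01'
  location: main
  os: l
  role: web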
We will have to define the web class to be able to run the Puppet agent on our web_main_lp01 machine, as shown in the following code:
class web {
  package {'httpd':
    ensure => 'installed',
  }
  service {'httpd':
    ensure  => true,
    enable  => true,
    require => Package['httpd'],
  }
}
Heading back to web_main_lp01, we run Puppet, sign the certificate on our puppetca machine, and then run Puppet again to verify that the web class is applied, as shown here:
[thomas@web_main_lp01 ~]$ sudo puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for web_main_lp01.example.com
Info: Applying configuration version '1441951808'
Notice: /Stage[main]/Web/Package[httpd]/ensure: created
Notice: /Stage[main]/Web/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Web/Service[httpd]: Unscheduling refresh on Service[httpd]
Notice: Applied catalog in 16.03 seconds
Our machine has been installed as a web server without any intervention on our part. The system knew which classes were to be applied to the machine based solely on the hostname. Now, if we try to run Puppet against our client machine created earlier, our ENC will include the hostname_problem class with the hostname passed to it as a parameter. We can create this class to capture the problem and notify us. Create the hostname_problem module in /etc/puppet/modules/hostname_problem/manifests/init.pp, as shown in the following snippet:
class hostname_problem ($enc_hostname) {
  notify {"WARNING: $enc_hostname ($::ipaddress) doesn't conform to naming standards": }
}
Now, when we run Puppet on our client machine, we will get a useful warning that its hostname isn't a good one for our enterprise, as shown here:
[thomas@client ~]$ sudo puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for client.example.com
Info: Applying configuration version '1442120036'
Notice: WARNING: client.example.com (10.0.2.15) doesn't conform to naming standards
Notice: /Stage[main]/Hostname_problem/Notify[WARNING: client.example.com (10.0.2.15) doesn't conform to naming standards]/message: defined 'message' as 'WARNING: client.example.com (10.0.2.15) doesn't conform to naming standards'
Notice: Applied catalog in 0.03 seconds
Your ENC can be customized much further than this simple example. You have the power of Python, Ruby, or any other language you wish to use. You could connect to a database and run some queries to determine the classes to be installed. For example, if you have a configuration management database (CMDB) in your enterprise, you could connect to the CMDB, retrieve information based on the FQDN of the node, and apply classes based on that information. You could also connect to a URI and retrieve a catalog (Puppet Dashboard and Foreman do something similar). There are many ways to expand this concept.
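As a sketch of the database idea, an ENC that looks up classes in a local SQLite database might look like the following. The database path and the nodes table (mapping certname to class) are invented for illustration; a real CMDB would more likely sit behind a network database or an API:

#!/bin/env python
import sqlite3
import sys
import yaml

enc = {"classes": {}, "parameters": {}, "environment": "production"}

try:
    hostname = sys.argv[1]
except IndexError:
    # need a hostname
    sys.exit(10)

# hypothetical CMDB: a "nodes" table with certname and class columns
conn = sqlite3.connect('/var/lib/cmdb/nodes.db')
for (puppet_class,) in conn.execute(
        "SELECT class FROM nodes WHERE certname = ?", (hostname,)):
    enc["classes"][puppet_class] = {}
conn.close()

# output the ENC as yaml
print "---"
print yaml.dump(enc)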
In the next section, we'll look at using LDAP to store class information.
LDAP backend
If you already have an LDAP implementation in which you can extend the schema, then you can use the LDAP node terminus that is shipped with Puppet. Support for this backend in puppetserver has not been maintained as well as it was in previous releases of Puppet, but I still feel that this backend is useful for certain installations, so I will outline the steps to be taken to have it operate with a puppetserver installation. The Puppet schema adds a new objectclass called puppetClient; using this objectclass, you can set the environment, set top-scope variables, and include classes. The schema defines the puppetClass, parentNode, environment, and puppetVar attributes, which are assigned to the objectclass named puppetClient. LDAP experts should note that all four of these attributes are marked as optional and that the puppetClient objectclass is non-structural (auxiliary). To use the LDAP terminus, you must have a working LDAP implementation, apply the Puppet schema to that installation, and add the ruby-ldap package to your Puppet masters (to allow the masters to query for node information).
OpenLDAP configuration
We'll begin by setting up a fresh OpenLDAP implementation and adding the Puppet schema. Create a new machine and install openldap-servers; my installation installed the openldap-servers-2.4.39-6.el7.x86_64 version. This version requires configuration with OLC (OpenLDAP configuration, also known as runtime configuration), which configures LDAP using LDAP itself. Further information on OLC can be obtained at http://www.openldap.org/doc/admin24/slapdconf2.html.
After installing openldap-servers, your configuration will be in /etc/openldap/slapd.d/cn=config. There is a file named olcDatabase={2}hdb.ldif in this directory; edit the file and change the following lines:
olcSuffix: dc=example,dc=com
olcRootDN: cn=Manager,dc=example,dc=com
olcRootPW: packtpub
Note that the olcRootPW line is not present in the default file, so you will have to add it here. If you're going into production with LDAP, you should set the olcDbConfig parameters as outlined at http://www.openldap.org/doc/admin24/slapdconf2.html.
These lines set the top-level location for your LDAP directory and the password for RootDN. This password is in plain text; a production installation would use SSHA encryption. You will be making schema changes, so you must also edit olcDatabase={0}config.ldif and set RootDN and RootPW. For our example, we will use the default RootDN value and set the password to packtpub, as shown here:
olcRootDN: cn=config
olcRootPW: packtpub
These two lines will not exist in the default configuration file provided by the RPM. You might want to keep this RootDN value separate from the previous one, so that this RootDN is the only one that can modify the schema and the top-level configuration parameters.
Next, use ldapsearch (provided by the openldap-clients package, which has to be installed separately) to verify that LDAP is working properly. Start slapd with the systemctl start slapd.service command, and then verify with the following ldapsearch command:
# ldapsearch -LLL -x -b'dc=example,dc=com'
No such object (32)
This result indicates that LDAP is running but the directory is empty. To import the Puppet schema into this version of OpenLDAP, copy puppet.schema from https://github.com/puppetlabs/puppet/blob/master/ext/ldap/puppet.schema to /etc/openldap/schema.
Tip
To download the file from the command line directly, use the following command:
# curl -O https://raw.githubusercontent.com/puppetlabs/puppet/master/ext/ldap/puppet.schema
Then create a configuration file named /tmp/puppet-ldap.conf with an include line pointing to that schema, as shown in the following snippet:
include /etc/openldap/schema/puppet.schema
Then run slaptest against that configuration file, specifying a temporary directory as storage for the configuration files created by slaptest, as shown here:
# mkdir /tmp/puppet-ldap
# slaptest -f puppet-ldap.conf -F /tmp/puppet-ldap/
config file testing succeeded
This will create an OLC structure in /tmp/puppet-ldap. The file we need is /tmp/puppet-ldap/cn=config/cn=schema/cn={0}puppet.ldif. To import this file into our LDAP instance, we need to remove the ordering information (the braces and numbers: {0}, {1}, and so on) from this file. We also need to set the location for our schema to cn=schema,cn=config, and all the lines after structuralObjectClass should be removed. The final version of the file will be as follows:
dn: cn=puppet,cn=schema,cn=config
objectClass: olcSchemaConfig
cn: puppet
olcAttributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.10 NAME 'puppetClass' DESC 'Puppet Node Class' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )
olcAttributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.9 NAME 'parentNode' DESC 'Puppet Parent Node' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE )
olcAttributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.11 NAME 'environment' DESC 'Puppet Node Environment' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )
olcAttributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.12 NAME 'puppetVar' DESC 'A variable setting for puppet' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )
olcObjectClasses: ( 1.3.6.1.4.1.34380.1.1.1.2 NAME 'puppetClient' DESC 'Puppet Client objectclass' SUP top AUXILIARY MAY ( puppetclass $ parentnode $ environment $ puppetvar ) )
Now add this new schema to our instance using ldapadd, as follows, using the RootDN value cn=config:
# ldapadd -x -f cn\=\{0\}puppet.ldif -D'cn=config' -W
Enter LDAP Password: packtpub
adding new entry "cn=puppet,cn=schema,cn=config"
Now we can start adding nodes to our LDAP installation. We'll need to add some containers and a top-level organization to the database before we can do that. Create a file named start.ldif with the following contents:
dn: dc=example,dc=com
objectclass: dcObject
objectclass: organization
o: Example
dc: example

dn: ou=hosts,dc=example,dc=com
objectclass: organizationalUnit
ou: hosts

dn: ou=production,ou=hosts,dc=example,dc=com
objectclass: organizationalUnit
ou: production
If you are unfamiliar with how LDAP is organized, review the information at http://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol#Directory_structure.
Now add the contents of start.ldif to the directory using ldapadd, as follows:
# ldapadd -x -f start.ldif -D'cn=manager,dc=example,dc=com' -W
Enter LDAP Password: packtpub
adding new entry "dc=example,dc=com"
adding new entry "ou=hosts,dc=example,dc=com"
adding new entry "ou=production,ou=hosts,dc=example,dc=com"
At this point, we have a container for our nodes at ou=production,ou=hosts,dc=example,dc=com. We can add an entry to our LDAP with the following LDIF, which we will name web_main_lp01.ldif:
dn: cn=web_main_lp01,ou=production,ou=hosts,dc=example,dc=com
objectclass: puppetClient
objectclass: device
puppetClass: web
puppetClass: base
puppetvar: role='Production Web Server'
We then add this LDIF to the directory using ldapadd again, as shown here:
# ldapadd -x -f web_main_lp01.ldif -D'cn=manager,dc=example,dc=com' -W
Enter LDAP Password: packtpub
adding new entry "cn=web_main_lp01,ou=production,ou=hosts,dc=example,dc=com"
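Before pointing the Puppet masters at the directory, it is worth confirming that the entry can be read back anonymously, since that is how the masters will query it. An ldapsearch similar to the following should return the entry (the exact attribute order and casing may vary):

# ldapsearch -LLL -x -b 'ou=hosts,dc=example,dc=com' cn=web_main_lp01
dn: cn=web_main_lp01,ou=production,ou=hosts,dc=example,dc=com
objectClass: puppetClient
objectClass: device
puppetClass: web
puppetClass: base
puppetVar: role='Production Web Server'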
With our entry in LDAP, we are ready to configure our worker nodes to look in LDAP for node definitions. Change /etc/puppetlabs/puppet/puppet.conf to have the following lines in the [master] section:
node_terminus = ldap
ldapserver = ldap.example.com
ldapbase = ou=hosts,dc=example,dc=com
We are almost ready; puppetserver runs Ruby within a Java process, so to have that process access our LDAP server, we need to install the jruby-ldap gem. puppetserver includes a gem installer for this purpose, as shown here:
# puppetserver gem install jruby-ldap
Fetching: jruby-ldap-0.0.2.gem (100%)
Successfully installed jruby-ldap-0.0.2
1 gem installed
There is a bug in the jruby-ldap gem that we just installed; it was discovered by my colleague Steve Huston on the following Google group: https://groups.google.com/forum/#!topic/puppet-users/DKu4e7dzhvE. To patch the jruby-ldap module, edit the conn.rb file in /opt/puppetlabs/server/data/puppetserver/jruby-gems/gems/jruby-ldap-0.0.2/lib/ldap and add the following lines to the beginning:
# under JRuby, add the to_hash method that is missing from LDAP::Entry
if RUBY_PLATFORM =~ /^java.*/i
  class LDAP::Entry
    def to_hash
      h = {}
      get_attributes.each { |a| h[a.downcase.to_sym] = self[a] }
      h[:dn] = [dn]
      h
    end
  end
end
Restart the puppetserver process after making this modification with the systemctl restart puppetserver.service command.
Note
The LDAP backend is clearly not a priority project for Puppet. There are still a few unresolved bugs with using this backend. If you wish to integrate with your LDAP infrastructure, I believe writing your own script that references LDAP will be more stable and easier for you to support.
To convince yourself that the node definition is now coming from LDAP, modify the base class in /etc/puppet/modules/base/manifests/init.pp to include the role variable, as shown in the following snippet:
class base {
  file {'/etc/motd':
    mode    => '0644',
    owner   => '0',
    group   => '0',
    content => inline_template("Role: <%= @role %>\nManaged Node: <%= @hostname %>\nManaged by Puppet version <%= @puppetversion %>\n"),
  }
}
You will also need to open port 389, the standard LDAP port, on your LDAP server, ldap.example.com, to allow the Puppet masters to query the LDAP machine.
Then, run Puppet on web_main_lp01 and verify the contents of /etc/motd using the following command:
# cat /etc/motd
Role: 'Production Web Server'
Managed Node: web_main_lp01
Managed by Puppet version 4.2.1
Keeping your class and variable information in LDAP makes sense if you already have all your nodes in LDAP for other purposes, such as DNS or DHCP. One potential drawback of this is that all the class information for the node has to be stored within a single LDAP entry. It is useful to be able to apply classes to machines based on criteria. In the next section, we will look at Hiera, a system that can be used for this type of criteria-based application.
Before starting the next section, comment out the LDAP ENC lines in /etc/puppetlabs/puppet/puppet.conf, as follows:
# node_terminus = ldap
# ldapserver = ldap.example.com
# ldapbase = ou=hosts,dc=example,dc=com