Wednesday, November 25, 2020

Error: Facter: error while resolving custom facts in /opt/puppetlabs/puppet/cache/lib/facter/packages_of_interest.rb: undefined local variable or method `package' for main:Object Did you mean? packages

 

PROBLEM:

puppet agent -t

Info: Using configured environment 'production'

Info: Retrieving pluginfacts

Info: Retrieving plugin

Info: Retrieving locales

Info: Loading facts

Error: Facter: error while resolving custom facts in /opt/puppetlabs/puppet/cache/lib/facter/packages_of_interest.rb: undefined local variable or method `package' for main:Object

Did you mean?  packages

Info: Caching catalog for server.com

Info: Applying configuration version '1606329246'

Notice: Applied catalog in 1.40 seconds

SOLUTION:

Looking at the method in packages_of_interest.rb, replacing "package" with "matching_package" gets rid of that error.

Adding the package name to the error message shows which package is involved.  I changed line 57 from

      puts "unknown provider passed (#{package[:provider]})"

to

      puts "unknown provider passed (#{matching_package[:provider]}) for package (#{matching_package[:name]})"

The agent replaces this file, so copy the modified file back into place after the agent reports that it has updated packages_of_interest.rb.

This is the error message now:

"unknown provider passed (pip2) for package (javapackages)"

even though pip2 exists on the system

I'm opening a ticket with Support since this seems like a bug.
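For reference, the failure mode is easy to reproduce: inside a block whose parameter is named matching_package, a bare package is an undefined local variable. A minimal Ruby sketch of the bug class (the package list and providers here are made up, not the real fact):

```ruby
# Minimal reproduction of the bug class: the block parameter is
# `matching_package`, so referring to a bare `package` inside the
# block raises NameError ("undefined local variable or method `package'").
packages = [
  { name: 'javapackages', provider: 'pip2' },
  { name: 'vim-enhanced', provider: 'yum'  },
]

messages = packages.map do |matching_package|
  case matching_package[:provider]
  when 'yum', 'gem'
    "known provider (#{matching_package[:provider]})"
  else
    # Including the package name shows which entry hit this branch.
    "unknown provider passed (#{matching_package[:provider]}) for package (#{matching_package[:name]})"
  end
end

puts messages
```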

EOS

4/14/21 Posted at https://stackoverflow.com/questions/67097723/puppet-agent-t-throws-error-facter-error-while-resolving-custom-facts-p/67097724#67097724

Tuesday, November 24, 2020

Puppet executable missing puppet gem

 

PROBLEM:

[root@MoM 2019.8.4]# puppet agent -t;

Traceback (most recent call last):

        1: from /opt/puppetlabs/server/apps/bolt-server/bin/puppet:23:in `<main>'

/opt/puppetlabs/server/apps/bolt-server/bin/puppet:23:in `load': cannot load such file -- /opt/puppetlabs/puppet/lib/ruby/gems/2.5.0/gems/puppet-6.19.1/bin/puppet (LoadError)

[root@MoM 2019.8.4]#

SOLUTION:

Check the PATH environment variable.  If the problem persists after fixing PATH, back up the executable and reinstall the gem:


cp -p /opt/puppetlabs/puppet/bin/puppet /opt/puppetlabs/puppet/bin/puppet.orig

 

/opt/puppetlabs/puppet/bin/gem install puppet -v 6.19.1

 

[root@MoM 2019.8.4]# puppet agent -t;

Info: Using configured environment 'production'

Info: Retrieving pluginfacts

Info: Retrieving plugin

Info: Retrieving locales

Info: Loading facts

Info: Caching catalog for MoM.wrk.fs.usda.gov

Info: Applying configuration version '1606189208'

Notice: Applied catalog in 33.42 seconds

[root@MoM 2019.8.4]#
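Why reinstalling the gem helps: the wrapper at /opt/puppetlabs/server/apps/bolt-server/bin/puppet loads a pinned gem's executable, and if puppet-6.19.1 is missing from the gem directory the load fails exactly as above. A rough Ruby sketch of how such a wrapper can resolve the executable (a sketch of the mechanism, not the actual wrapper):

```ruby
require 'rubygems'

# Sketch of what a gem wrapper does: resolve the named gem's executable
# path for a pinned version. When that gem version is not installed,
# RubyGems raises, which the real wrapper surfaces as a LoadError.
def puppet_bin(version)
  Gem.bin_path('puppet', 'puppet', version)
rescue Gem::Exception, LoadError => e
  "not installed: #{e.message}"
end
```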

EOS

Monday, November 16, 2020

Warning: MCollective and Activemq have been removed from PE 2019.0+, but the puppet_enterprise::profile::master::mcollective class is still being applied.

PROBLEM:

$ puppet agent -t

Info: Using configured environment 'production'

Info: Retrieving pluginfacts

Info: Retrieving plugin

Info: Retrieving locales

Info: Loading facts

Info: Caching catalog for fsxopsx1697.wrk.fs.usda.gov

Info: Applying configuration version '1605550196'

Warning: MCollective and Activemq have been removed from PE 2019.0+, but the puppet_enterprise::profile::master::mcollective class is still being applied. Please remove this class from your classification.

Warning: /Stage[main]/Puppet_enterprise::Profile::Master::Mcollective/Notify[puppet_enterprise::profile::master::mcollective-still-applied]/message: defined 'message' as 'MCollective and Activemq have been removed from PE 2019.0+, but the puppet_enterprise::profile::master::mcollective class is still being applied. Please remove this class from your classification.'

Warning: MCollective and Activemq have been removed from PE 2019.0+, but the puppet_enterprise::profile::mcollective::peadmin class is still being applied. Please remove this class from your classification.

Warning: /Stage[main]/Puppet_enterprise::Profile::Mcollective::Peadmin/Notify[puppet_enterprise::profile::mcollective::peadmin-still-applied]/message: defined 'message' as 'MCollective and Activemq have been removed from PE 2019.0+, but the puppet_enterprise::profile::mcollective::peadmin class is still being applied. Please remove this class from your classification.'

Notice: Applied catalog in 32.16 seconds

SOLUTION:

Query the node groups into q.hocon, then search it for node groups that still have mcollective classes:

$ curl -k -X GET https://$(hostname -f):4433/classifier-api/v1/groups --cert /etc/puppetlabs/puppet/ssl/certs/$(hostname -f).pem --key /etc/puppetlabs/puppet/ssl/private_keys/$(hostname -f).pem --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem -H "Content-Type: application/json" | python -m json.tool > q.hocon

$ grep -i mcollective q.hocon

$ vim q.hocon
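The same check can be scripted. A sketch that filters the classifier's JSON group list for mcollective classes (the sample data below is made up; q.hocon holds JSON despite its extension):

```ruby
require 'json'

# Returns names of node groups whose classes mention mcollective.
# Expects the JSON array returned by GET /classifier-api/v1/groups.
def mcollective_groups(json_text)
  JSON.parse(json_text)
      .select { |g| g.fetch('classes', {}).keys.any? { |k| k.match?(/mcollective/i) } }
      .map { |g| g['name'] }
end

sample = <<~JSON
  [
    {"name": "PE Master", "classes": {"puppet_enterprise::profile::master::mcollective": {}}},
    {"name": "Roles", "classes": {}}
  ]
JSON

puts mcollective_groups(sample)  # => PE Master
```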

EOS

Friday, November 13, 2020

Class[Puppet_enterprise]: has no parameter named 'mcollective_middleware_hosts' on node mom.com

PROBLEM:

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Class[Puppet_enterprise]: has no parameter named 'mcollective_middleware_hosts' on node mom.com

Warning: Not using cache on failed catalog

Error: Could not retrieve catalog; skipping run

SOLUTION:
https://puppet.com/docs/pe/2018.1/removing_mcollective.html
Essentially, go into the console >> PE Infrastructure >> Classes >> Class: puppet_enterprise
and remove the mcollective parameter (for Puppet > 5), or set it to "disabled" for Puppet 5.

Wednesday, July 15, 2020

Console started, but browser won't connect. Curl gets 404

PROBLEM:

Neither Chrome nor Firefox would load the console login screen.

Curling the URL on port 80 gives 404 errors, and port 443 gives 303 errors (or vice versa).

SOLUTION:

Did this from L-Ubuntu commandline, and it started working:

wget https://192.168.1.138 -O /dev/tty --no-check-certificate

EOS


Monday, June 15, 2020

Client Error: /Stage[main]/Puppet_agent::Install/Package[puppet-agent]/ensure: change from '5.5.18-1.el7' to '5.5.17'

PROBLEM:

[root@client ~]# puppet agent -t

Info: Using configured environment 'production'

Info: Retrieving pluginfacts

Info: Retrieving plugin

Info: Retrieving locales

Info: Loading facts

Info: Caching catalog for client.com

Info: Applying configuration version '1592088269'

Error: Could not update: Execution of '/bin/yum -d 0 -e 0 -y downgrade puppet-agent-5.5.17' returned 1: Error: Nothing to do

Error: /Stage[main]/Puppet_agent::Install/Package[puppet-agent]/ensure: change from '5.5.18-1.el7' to '5.5.17' failed: Could not update: Execution of '/bin/yum -d 0 -e 0 -y downgrade puppet-agent-5.5.17' returned 1: Error: Nothing to do

Notice: Applied catalog in 3.95 seconds

[root@client ~]#  

SOLUTION:

https://puppet.com/docs/pe/latest/installing_agents.html

Add classes to the PE Master node group for each agent platform used in your environment. For example, pe_repo::platform::el_7_x86_64.

 

If that doesn't work, try deploying the production environment.

puppet access login --service-url https://$(hostname -f):4433/rbac-api --lifetime 5d

puppet code deploy production --wait



Saturday, June 13, 2020

Error 500 on SERVER: Server Error: Could not find class puppet_agent for mom.com on node mom.com

PROBLEM:
[root@MoM]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Could not find class puppet_agent for mom.com on node mom.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
SOLUTION:
Deploy the production environment.
puppet access login --service-url https://$(hostname -f):4433/rbac-api --lifetime 5d
puppet code deploy production --wait
puppet agent -t

Also, make sure these lines are in Puppetfile:  
   mod 'puppetlabs-puppet_agent'
   mod 'puppetlabs-stdlib'

Monday, April 27, 2020

Unrelated error when a parent ID is needed

PROBLEM:
[root:~]# cat Roles__puppet_node_group.rb
node_group { 'Roles':
  ensure               => 'present',
  classes              => {},
  environment          => 'production',
  override_environment => false,
  parent               => '00000000-0000-4000-8000-000000000000',
  rule                 => [''],
}
[root:~]# puppet apply Roles__puppet_node_group.rb

Error: node_manager failed with error type 'schema-violation': The object(s) in your submitted request did not conform to the schema. The problem is: ([:environment-trumps (not (instance? Boolean "false"))])
SOLUTION:
Specify the parent ID instead of the parent name.  For example:
[root@fsxopsx1697 ~]# cat /nfsroot/work/sysinfra/puppet/2018.1.13.el6/backup/fsxopsx0618/Roles__puppet_node_group.rb
node_group { 'Roles':
  ensure               => 'present',
  classes              => {},
  environment          => 'production',
  override_environment => false,
  parent               => '00000000-0000-4000-8000-000000000000',
  rule                 => [''],
}
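A quick way to tell whether a parent value will pass: group UUIDs match the standard 8-4-4-4-12 hex pattern, while group names like 'All Nodes' do not. A hypothetical helper:

```ruby
# Hypothetical check: node_group's `parent` must be a group UUID,
# not a group name. \h matches a hex digit in Ruby regexps.
UUID_RE = /\A\h{8}-\h{4}-\h{4}-\h{4}-\h{12}\z/

def parent_uuid?(value)
  UUID_RE.match?(value)
end

puts parent_uuid?('00000000-0000-4000-8000-000000000000')  # true
puts parent_uuid?('All Nodes')                             # false
```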

Tuesday, January 7, 2020

Puppet console can't run tasks from the console

PROBLEM:
Running a task from the console returns:
Failed - error...
Error: fsxopsx1031.wrk.fs.usda.gov is not connected to the PCP broker


On a broken node, run:
/opt/puppetlabs/puppet/bin/pxp-agent --foreground --loglevel debug
tail /var/log/puppetlabs/pxp-agent/pxp-agent.log

It returns:
2020-01-03 11:48:34.877539 INFO  puppetlabs.pxp_agent.main:189 - pxp-agent 1.9.11 started at debug level
2020-01-03 11:48:34.877742 ERROR puppetlabs.pxp_agent.main:208 - Fatal configuration error: broker-ws-uri or broker-ws-uris must be defined; cannot start pxp-agent

SOLUTION:
Same solution as last post, repeated here:

Documentation says to set master_uris and pcp_broker_list for the PE Agent and PE Infrastructure Agent groups in the console (https://puppet.com/docs/pe/2018.1/installing_compile_masters.html).  For example:

Change the PXP agent to connect directly to the MoM, not the compile masters' loadbalancer.
Classification --> PE Infrastructure --> PE Agent --> Configuration tab
Class: puppet_enterprise::profile::agent
master_uris = ["https://<MoM_FQDN>/"]
pcp_broker_list = ["<MoM_FQDN>:8142"]

However, our experience is that removing master_uris and pcp_broker_list had a much higher success rate.

After removing them, run the agent on the managed node.  Successful connections should start appearing in 
tail -F /var/log/puppetlabs/pxp-agent/pxp-agent.log

Also, the new master_uris and pcp_broker_list values should appear in the pxp-agent.conf file that the MoM manages remotely on the managed node.  Look on the managed node:
cat /etc/puppetlabs/pxp-agent/pxp-agent.conf | python -m json.tool
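The fatal "broker-ws-uri or broker-ws-uris must be defined" error from the previous post can be checked for directly from that same config. A sketch, assuming pxp-agent.conf is plain JSON:

```ruby
require 'json'

# True when the pxp-agent config defines a broker endpoint, i.e.
# either "broker-ws-uri" or "broker-ws-uris" is present.
def broker_configured?(json_text)
  conf = JSON.parse(json_text)
  conf.key?('broker-ws-uri') || conf.key?('broker-ws-uris')
end

puts broker_configured?('{"broker-ws-uris": ["wss://mom.example.com:8142/pcp2/"]}')  # true
puts broker_configured?('{"loglevel": "debug"}')                                     # false
```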
EOS

Thursday, January 2, 2020

Can't run remote agents from Puppet console. Also shuts down soon after starting.

PROBLEM:
Console won't run the agent saying, "Run Puppet has been disabled because Node Manager cannot connect to <fqdn>".
Also 'puppet job' can't run anything on any nodes from the MoM command line.

Tried:
Turn on debugging in activemq.

cp -p /etc/puppetlabs/activemq/log4j.properties /etc/puppetlabs/activemq/log4j.properties.orig
vim /etc/puppetlabs/activemq/log4j.properties
Comment this line:          #log4j.rootLogger=INFO, console, logfile
Uncomment this line:     log4j.rootLogger=DEBUG, logfile, console

Bounce the service:
sv=pe-activemq;               echo == $sv; puppet resource service $sv ensure=stopped
sv=pe-activemq;               echo == $sv; puppet resource service $sv ensure=running

The log rotates, so use -F
tail -F /var/log/puppetlabs/activemq/activemq.log


Certificate expiration messages started appearing:


2019-12-31T08:44:01.440-06:00 | WARN | Transport Connection to: tcp://<node_ip>:44166 failed: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ Transport: ssl:///<node_ip>:44166

Puppet support says to disable MCollective if we're not using it.

Trying to rekey Orchestration on the MoM:
mv /etc/puppetlabs/orchestration-services/ssl/<files> /tmp
cp <puppet>/ssl <orchestration>
chown -R pe-orchestration-services:pe-orchestration-services /etc/puppetlabs/orchestration-services/ssl

CAUSE:
The real cause was the PXP agent.  On one of the failing managed nodes, the PXP agent was throwing errors that it couldn't connect to wss://<compile_master_load_balancer>:8140/pcp2/agent

SOLUTION:
Documentation says to set master_uris and pcp_broker_list for the PE Agent and PE Infrastructure Agent groups in the console (https://puppet.com/docs/pe/2018.1/installing_compile_masters.html).  For example:

Change the PXP agent to connect directly to the MoM, not the compile masters' loadbalancer.
Classification --> PE Infrastructure --> PE Agent --> Configuration tab
Class: puppet_enterprise::profile::agent
master_uris = ["https://<MoM_FQDN>/"]
pcp_broker_list = ["<MoM_FQDN>:8142"]

However, our experience is that removing master_uris and pcp_broker_list had a much higher success rate.

After removing them, run the agent on the managed node.  Successful connections should start appearing in 
tail -F /var/log/puppetlabs/pxp-agent/pxp-agent.log

Also, the new master_uris and pcp_broker_list values should appear in the pxp-agent.conf file that the MoM manages remotely on the managed node.  Look on the managed node:
cat /etc/puppetlabs/pxp-agent/pxp-agent.conf | python -m json.tool

EOS