Automate all the things across your multi-cloud infrastructure!

Puppet Enterprise (64-bit)

Puppet Enterprise 2019.8.5 6.21.1 (64-bit)

  -  43.2 MB  -  Demo

Sometimes the latest version of an app can cause problems when installed on older devices or devices running an older version of the operating system.

Software makers usually fix these issues, but it can take them some time. In the meantime, you can download and install an older version of Puppet Enterprise 2019.8.5 6.21.1 (64-bit).


For those interested in downloading the most recent release of Puppet Enterprise (64-bit) or reading our review, simply click here.


All old versions distributed on our website are completely virus-free and available for download at no cost.


We would love to hear from you

If you have any questions or ideas that you want to share with us, head over to our Contact page and let us know. We value your feedback!


What's new in this version:

- A new command, puppet infrastructure run remove_old_pe_packages pe_version=current, cleans up old PE packages remaining at /opt/puppetlabs/server/data/packages/public. For pe_version, you can specify a SHA, a version number, or current; all packages older than the specified version are removed. See the usage sketch after this list.
- Get better insight into replica sync status after upgrade
- Improved error handling for replica upgrades now results in a warning instead of an error if re-syncing PuppetDB between the primary and replica nodes takes longer than 15 minutes.
- Fix replica enablement issues
- When provisioning and enabling a replica (puppet infra provision replica --enable), the command now times out if there are issues syncing PuppetDB, and it provides instructions for fixing the issues and provisioning the replica separately. A usage sketch appears after this list.
- Patch nodes with built-in health checks
- The new group_patching plan patches nodes with pre- and post-patching health checks. The plan verifies that Puppet is configured and running correctly on target nodes, patches the nodes, waits for any reboots, and then runs Puppet on the nodes to verify that they're still operational. An invocation sketch appears after this list.
- Run a command after patching nodes
- A new parameter in the pe_patch class, post_patching_scriptpath, enables you to run an executable script or binary on a target node after patching is complete. Additionally, the pre_patching_command parameter has been renamed to pre_patching_scriptpath to make clear that you must provide the file path to a script rather than an actual command. See the configuration sketch after this list.
- Patch nodes despite certain read-only directory permissions
- The file sync client uses SHAs corresponding to the branches of the control repository to name versioned directories. You must deploy an environment to update the directory names.
- Configure failed deployments to display r10k stacktrace in error output
- Configure the new r10k_trace parameter to include the r10k stack trace in the error output of failed deployments. The parameter defaults to false. To enable it, use the console: in the PE Master node group, in the puppet_enterprise::master::code_manager class, set r10k_trace to true.
- Reduce query time when querying nodes with a fact filter
- Queries that the console sends to PuppetDB to populate the Status page now use the optimize_drop_unused_joins feature in PuppetDB, which improves performance when filtering on facts. You can disable drop-joins by setting the environment variable PE_CONSOLE_DISABLE_DROP_JOINS=yes in /etc/sysconfig/pe-console-services and restarting the console service; see the sketch after this list.
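
The sketches below illustrate the commands and settings mentioned in this list. They are hedged examples rather than authoritative documentation; any placeholder names are called out in the comments.

Cleaning up old PE packages with the new command, using the arguments described above:

    # Remove every PE package older than the currently installed version
    puppet infrastructure run remove_old_pe_packages pe_version=current

    # Or specify a version number (or a SHA) as the cutoff instead
    puppet infrastructure run remove_old_pe_packages pe_version=2019.8.5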
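
Provisioning and enabling a replica in one step; the hostname is a placeholder:

    # Provisions the replica and then enables it. The command now times out
    # with remediation instructions if PuppetDB fails to sync.
    puppet infra provision replica replica.example.com --enable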
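
Running the group_patching plan from a node with PE client tools installed. The patch_group parameter name and value are assumptions here, so verify the plan's actual parameters in the console before running it:

    # Patch a group of nodes with the built-in pre- and post-patching
    # health checks described above (parameter name is an assumption)
    puppet plan run group_patching patch_group=week3_patching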
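
Setting the renamed and new pe_patch script parameters. Hiera is one possible place to set class parameters; this sketch assumes a standard production environment layout, and the file paths and scripts are hypothetical:

    # Append the parameters to a Hiera layer (path is an assumption)
    cat >> /etc/puppetlabs/code/environments/production/data/common.yaml <<'EOF'
    # renamed from pre_patching_command; must be a file path to a script
    pe_patch::pre_patching_scriptpath: /usr/local/bin/drain_services.sh
    # new parameter: runs on the target node after patching completes
    pe_patch::post_patching_scriptpath: /usr/local/bin/verify_services.sh
    EOF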
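
Disabling drop-joins for console queries, using the file and variable named above:

    # Opt out of the optimize_drop_unused_joins behavior
    echo 'PE_CONSOLE_DISABLE_DROP_JOINS=yes' >> /etc/sysconfig/pe-console-services

    # Restart the console service so the setting takes effect
    systemctl restart pe-console-services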

Resolved issues:

- PuppetDB restarted continually after upgrade with deprecated parameters
- After upgrade, if the deprecated parameters facts_blacklist or cert_whitelist_path remained, PuppetDB restarted after each Puppet run.
- Tasks failed when specifying both as the input method
- In task metadata, using both for the input method caused the task run to fail. A metadata sketch appears after this list.
- Patch task misreported success when it timed out on Windows nodes
- If the pe_patch::patch_server task took longer than the timeout setting to apply patches on a Windows node, the debug output noted the timeout, but the task erroneously reported that it completed successfully. Now, the task fails with an error noting that the task timed out. Any updates in progress continue until they finish, but remaining patches aren't installed.
- Orchestrator created an extra JRuby pool
- During startup, the orchestrator created two JRuby pools - one for scheduled jobs and one for everything else. This is because the JRuby pool was not yet available in the configuration passed to the post-migration-fa function, which created its own JRuby pool in response. These JRuby pools accumulated over time because the stop function didn't know about them.
- Console install script installed non-FIPS agents on FIPS Windows nodes
- The command provided in the console to install Windows nodes installed a non-FIPS agent regardless of the node's FIPS status.
- Unfinished sync reported as finished when clients shared the same identifier
- Because the orchestrator and puppetserver file-sync clients shared the same identifier, Code Manager reported an unfinished sync as "all-synced": true. Whichever client finished polling first notified the storage service that the sync was complete, regardless of the other client's sync status. This premature report could cause attempts to access tasks and plans before the newly deployed code was available.
- Refused connection in orchestrator startup caused PuppetDB migration failure
- A condition on startup failed to delete stale scheduled jobs and prevented the orchestrator service from starting.
- Upgrade failed with Hiera data based on certificate extensions
- If your Hiera hierarchy contained levels based on certificate extensions, like trusted.extensions.pp_role, upgrade could fail if that Hiera entry was vital to running services, such as setting java_args. The failure occurred because the puppet infrastructure recover_configuration command, which runs during upgrade, did not recognize the hierarchy level.
- File sync issued an alert when a repository had no commits
- When a repository had no commits, the file-sync status treated that repository's state as invalid and issued an alert. A repository without any commits is a valid state, and the service is fully functional even when there are no commits.
- Upgrade failed with infrastructure nodes classified based on trusted facts
- If your infrastructure nodes were classified into an environment based on a trusted fact, the recover configuration command used during upgrade could choose an incorrect environment when gathering data about infrastructure nodes, causing upgrade to fail.
- Patch task failed on Windows nodes with old logs
- When patching Windows nodes, if an existing patching log file was 30 or more days old, the task failed trying to both write to and clean up the log file.
- Backups failed if a Puppet run was in progress
- The puppet-backup command failed if a Puppet run was in progress. A usage sketch appears after this list.
- Default branch override did not deploy from the module's default branch
- A default branch override did not deploy from the module’s default branch if the branch override specified by Impact Analysis did not exist.
- Module-only environment updates did not deploy in Versioned Deploys
- Module-only environment updates did not deploy with versioned deploys if you tracked a module's branch and redeployed the same control repository SHA, even though the redeploy should have pulled in new versions of the modules.
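
For the input-method fix above, a minimal task metadata file that uses both, which previously caused task runs to fail. The module and task names are hypothetical:

    # Write task metadata that reads parameters from both stdin and
    # environment variables (file path is an assumption)
    cat > mymodule/tasks/example.json <<'EOF'
    {
      "description": "Echoes a message; reads parameters via stdin and environment",
      "input_method": "both",
      "parameters": {
        "message": { "description": "Message to echo", "type": "String" }
      }
    }
    EOF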
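
For the backup fix, creating a backup with the puppet-backup command; the output directory is a placeholder:

    # Previously failed if a Puppet run was in progress
    puppet-backup create --dir=/var/puppet-backups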