#83 – DPK – What We’d Do Different Next Time

This week on the podcast, Dan and Kyle discuss the changes they would make if they were starting work on the DPK for the first time again. They also discuss setting up Elasticsearch clusters, monitoring Tuxedo queues with Elasticsearch, and an annoying Bash on Windows bug.

Show Notes

  • Bash on Windows VPN Bug @ 3:15
  • 9.2 Upgrade Process @ 10:00
  • Monitoring Tuxedo Queueing in Elasticsearch @ 13:30
  • Tuxbeat Project @ 17:00
  • Elasticsearch DPK @ 21:45
  • Elasticsearch Clustering @ 27:00
  • Puppet on Linux @ 32:00
  • DPK Extract Only @ 34:30
  • What we’d do differently with the DPK @ 37:30
    • Co-locating Regions
    • Admin-only Test Servers
    • Config Homes
    • Use Git for managing code and YAML changes
    • Integrate all Puppet code into Classes and Modules
    • DPK Course
    • Use ACM via DPK
    • Go Vanilla

App Capacity Visualization

#82 – Embracing Fluid Navigation

This week on the podcast, Kyle and Dan revisit Fluid Navigation and why they are fans of the new navigation model. Kyle shares his experiences working with the DPK and his method for managing the YAML and Puppet files on servers. Dan shares a top-notch “Dad Joke”.

Show Notes

Apply CPU Patches with Deployment Packages

We have talked on the podcast about different ways to apply CPU patches, but with the DPK we have another tool to help us apply them quickly. This post and the video demos will show you how to use the DPK to quickly apply CPU patches to your servers.

Deployment Workflow

When you run the DPK, it will deploy WebLogic, Java, Tuxedo (and more) on your server. The DPK uses archives (also known as “tarballs”) of prepackaged installations and extracts those archives to your server. There is one big problem: the archives included in the DPKs do not contain the latest security patches. So, let’s make our own tarballs that include the security patches to deploy. This process is also a great exercise to better understand how the DPK deploys software.

If you are on Linux, you can use the patching functionality built into the DPK, but that code has not been written for Windows. I’m not covering that feature in this post, but the DPK Install Guide has a section on using it (Task 6-3-1: Using the DPK Setup Script to Apply Fixes).

Movement Scripts

The DPK uses Fusion Middleware “movement scripts” to deploy WebLogic and Tuxedo. (Thanks to Eric Bolinger for pointing me in this direction.) The movement scripts allow you to take a current install of WebLogic, package it up, and deploy it to additional servers. This is how the DPK deploys WebLogic: the PeopleTools team packages up a WebLogic installation and we deploy that install to our servers. The movement scripts also manage the Oracle Inventory file for you.

There are many parts to the movement scripts, but we’ll be using just one part: copyBinary. This script will take a current installation and create a .jar file from that installation. We’ll use copyBinary to package our patched WebLogic installation.

If you have errors with pasteBinary.cmd on the target system, you may need to configure the $ORACLE_HOME\oui\oraparam.ini file. This is a configuration file used by the OUI software. To make this simple, I copied the settings from the oraparam.ini inside the current $BASE\dpk\archives\weblogic12.1.3.0.tgz to my $ORACLE_HOME\oui\oraparam.ini using Beyond Compare. (Yes, Beyond Compare can read inside a tarball and compare against a directory!) Then I recreated my tarball with the updated oraparam.ini file.
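
If you don’t have Beyond Compare, here is a rough sketch of pulling the delivered oraparam.ini out of the tarball with 7-Zip (which we use later in this post) so you can compare it by hand. The working folder, archive name, and $env:DPK_BASE are examples; adjust them for your environment.

# Extract oraparam.ini from the delivered WebLogic tarball for a manual compare.
# Paths and the archive name are examples; $env:DPK_BASE is your DPK base directory (e.g. e:\psft).
$work = "${env:TEMP}\weblogic_extract"
7z x "${env:DPK_BASE}\dpk\archives\weblogic12.1.3.0.tgz" -o"$work"   # gunzip: produces the inner .tar
7z x "$work\*.tar" -o"$work"                                         # untar the contents
$delivered = Get-ChildItem $work -Recurse -Filter oraparam.ini | Select-Object -First 1
Compare-Object (Get-Content $delivered.FullName) (Get-Content "${env:ORACLE_HOME}\oui\oraparam.ini")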

Create a Patched WebLogic Tarball

 

It’s time to install the CPU patches and run the copyBinary.cmd script. Stop all of your PIA services on the server so the existing installations aren’t in use while you patch them.

First, let’s patch Java. For demonstration, I’m using the jdk-7u141-windows-x64 installer and installing it to e:\psoft\pt\jdk1.7.0_141 (the path we set as JAVA_HOME below).

Then, we’ll use OPatch to apply the CPU to WebLogic:

# $PATCH is the directory where you unzipped the CPU patches
cd $PATCH
$env:ORACLE_HOME = "e:\psoft\pt\bea"
& "${env:ORACLE_HOME}\OPatch\opatch.bat" napply

Once OPatch is done, we’ll use the movement scripts to package up our installation.

$env:JAVA_HOME = "e:\psoft\pt\jdk1.7.0_141"
& "${env:ORACLE_HOME}\oracle_common\bin\copyBinary.cmd" -javaHome "${env:JAVA_HOME}" -archiveLoc "${env:TEMP}\pt-weblogic-copy.jar" -sourceMWHomeLoc "${env:ORACLE_HOME}"

The output file from this command needs to be named pt-weblogic-copy.jar; the DPK expects that name for the .jar file. Next, we create a tarball that contains pt-weblogic-copy.jar plus the two files that perform the deploy portion of the movement scripts: cloningclient.jar and pasteBinary.cmd. The DPK uses these movement scripts to deploy WebLogic. I used 7-Zip to create my tarball with these three files:

$WL_VERSION="12.1.3.170418"
7z a -ttar "${env:TEMP}\pt-weblogic${WL_VERSION}.tar" "${env:ORACLE_HOME}\oracle_common\jlib\cloningclient.jar"
7z a -ttar "${env:TEMP}\pt-weblogic${WL_VERSION}.tar" "${env:ORACLE_HOME}\oracle_common\bin\pasteBinary.cmd"
7z a -ttar "${env:TEMP}\pt-weblogic${WL_VERSION}.tar" "${env:TEMP}\pt-weblogic-copy.jar"

Last, we gzip the archive and drop it in the $BASE\dpk\archives folder:

$env:DPK_BASE="e:\psft"
7z a -tgzip "${env:DPK_BASE}\dpk\archives\pt-weblogic${WL_VERSION}.tgz" "${env:TEMP}\pt-weblogic${WL_VERSION}.tar"

One thing to note here – the DPK doesn’t handle multiple versions of software in the dpk\archives folder well. So, only have one pt-weblogic* file in there.
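
A quick way to check what the DPK will see, using the same $env:DPK_BASE variable from above:

# List the WebLogic and JDK archives the DPK will pick up; there should be exactly one of each.
Get-ChildItem "${env:DPK_BASE}\dpk\archives\pt-weblogic*.tgz", "${env:DPK_BASE}\dpk\archives\pt-jdk*.tgz"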

For Java, we don’t need to use the movement scripts. We’ll simply tarball up the new directory and include that in our $BASE\dpk\archives folder.

$JDK_VERSION="1.7.0_141"
7z a -ttar "${env:TEMP}\pt-jdk${JDK_VERSION}.tar" "${env:JAVA_HOME}\*"
7z a -tgzip "${env:DPK_BASE}\dpk\archives\pt-jdk${JDK_VERSION}.tgz" "${env:TEMP}\pt-jdk${JDK_VERSION}.tar"

Deploy CPU Patches

 

Copy your updated tarballs to the new server. You’ll want to remove the existing tarballs from the $BASE\dpk\archives folder to prevent the DPK from raising an error.
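
Here is a rough sketch of that swap in PowerShell; the staging share is a placeholder for wherever you keep the patched tarballs.

# Remove the delivered WebLogic and JDK tarballs, then copy in the patched ones.
# \\fileserver\staging is a placeholder; $env:DPK_BASE is the DPK base directory on this server.
Remove-Item "${env:DPK_BASE}\dpk\archives\pt-weblogic*.tgz", "${env:DPK_BASE}\dpk\archives\pt-jdk*.tgz"
Copy-Item "\\fileserver\staging\pt-weblogic*.tgz", "\\fileserver\staging\pt-jdk*.tgz" "${env:DPK_BASE}\dpk\archives"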

We have two options for telling the DPK we want to install WebLogic. The first option is to delete the existing WebLogic and Java folders. If you stop your PeopleSoft domains, you can delete both folders. When you run the DPK it will see that WebLogic and Java are missing and reinstall them from the patched tarballs in the $BASE\dpk\archives folder.

The other option is to use the redeploy: true flag in psft_customizations.yaml. If you set the redeploy variable to true, the DPK will redeploy all the software in your $BASE\dpk\archives folder. This option requires less work – set a variable in psft_customizations.yaml and run the DPK – but it can take longer because you will redeploy Java, Tuxedo, WebLogic, PS_HOME and more. I think of this option as “the Puppet way”.

For this post and demo, we’ll use the redeploy: true option in our psft_customizations.yaml file. We’ll also use one other trick for testing: we will only run the part of the DPK that handles the middleware. Instead of running the entire DPK, which touches the OS, middleware, and domains, the manifest we call includes only the DPK role that ensures the middleware is installed and does not touch other parts of the system. This will also speed up our CPU patch deployment.

middleware.pp

Let’s create a new file under c:\programdata\puppetlabs\puppet\etc\manifests called middleware.pp. You can start by cloning the site.pp file. Change the file to look like this:

node default {
  include ::pt_role::pt_tools_deployment
}

Save the file. That’s it!

What we have done is tell Puppet to only run the DPK role pt_tools_deployment instead of running a larger role like pt_hcm_pum.
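
If you want to test the manifest on its own before wrapping it in a script, you can apply it directly from an elevated PowerShell prompt (this assumes puppet is on your PATH, which the DPK install normally handles):

cd C:\ProgramData\PuppetLabs\puppet\etc\manifests
puppet apply .\middleware.pp --trace --debug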

In the video demo, we are applying patches to a PeopleSoft Image, which is a Fulltier setup. The default pt_tools_deployment.pp manifest won’t run on a Fulltier system. To get around that, I created a copy of pt_tools_deployment.pp manifest called io_tools_deployment.pp and removed the check on env_type: fulltier.

cpu.ps1

We have a few tasks to do before we can run the middleware.pp manifest. We’ll wrap those tasks in a Powershell script we can run on each server.

At a high level, here are the tasks our cpu.ps1 script will do (a simplified sketch follows the list):

  1. Copy new DPK archives to server
  2. Stop PeopleSoft Services
  3. Remove current Java and WebLogic installs (if redeploy: false)
  4. Run middleware.pp to install patched Java and WebLogic
  5. Start PeopleSoft Services
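
The full script is in the repository linked in the next section; here is a simplified sketch of the flow, with service names and paths as placeholders you would adjust for your environment.

# cpu.ps1 - simplified sketch; see the ps-dpk-tarballs repository for the full script.
# Service names and paths are placeholders.

# 1. Copy the new DPK archives to this server
#    (the tarball swap shown in the "Deploy CPU Patches" section above)

# 2. Stop the PeopleSoft services
Get-Service -Name "Psft*" | Stop-Service

# 3. If you are not using redeploy: true, remove the current Java and WebLogic installs
# Remove-Item "e:\psoft\pt\jdk1.7.0_141", "e:\psoft\pt\bea" -Recurse -Force

# 4. Run the middleware-only manifest to install the patched Java and WebLogic
Set-Location C:\ProgramData\PuppetLabs\puppet\etc\manifests
puppet apply .\middleware.pp

# 5. Start the PeopleSoft services
Get-Service -Name "Psft*" | Start-Service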

Get the Sample Code

The full code is in the ps-dpk-tarballs GitHub repository; you can find all the scripts from this post and demo there.

ps-availability Version 2.0

About a year ago, I posted a project to check on the status of all our PeopleSoft environments. Our status page has become an important part of monitoring our environments. I check that page every morning, and the email alerts let me be proactive in addressing environment issues. We also embedded the status page into our support team’s homepage, so any team member can quickly see the status of our environments.

I made some improvements to the script over the last year and am releasing them as version 2.0. The improvements for version 2.0 are:

  • Checking for Stale process schedulers. In version 1, I simply grabbed the process scheduler status and reported on that. But if your scheduler crashes, it stops updating the server status table, so the page can say your scheduler is running when it isn’t. In version 2.0, you can configure a Stale Interval to compare against the last updated time. If the last update is older than the interval, the scheduler is reported as “Stale” (see the sketch after this list).
  • I removed the interim Markdown tables that were used to create the HTML table, which let me remove the Redcarpet gem dependency. In version 2.0, the HTML tables are built as the data is collected. This lets the script dynamically add classes for formatting, and it also lets us build more complex tables.
  • IB Domain Status reporting is in version 2.0. The status of the IB domains doesn’t impact the notifications (partly because we have some domains Inactive on purpose), but you can click on an environment row to see a report from the IB Domains page. The row embeds a table with your IB domain status so you can quickly check the status.
  • You can specify a homepage check for both Classic and Fluid so you don’t have to use the same title for both homepages. This is also useful if you are starting to roll out Fluid in some environments, but have Classic in others.
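
The project itself is written in Ruby, so this is only an illustration of the stale check in PowerShell terms, assuming $lastUpdate holds the scheduler’s last-updated timestamp from the server status table.

# Illustration only - psavailability.rb implements this check in Ruby.
$staleInterval = New-TimeSpan -Minutes 10          # the configurable Stale Interval
$lastUpdate    = [datetime]"2017-06-01 08:05:00"   # placeholder for the scheduler's last update

if ((Get-Date) - $lastUpdate -gt $staleInterval) {
    "Stale"      # scheduler hasn't updated its status within the interval
} else {
    "Running"    # otherwise report the status the scheduler wrote to the table
}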

Here is a screenshot of the new status page:

[Screenshot: ps-availability version 2.0 status page]

There are three new configuration options with version 2.0 (currently set in the psavailability.rb script – moving them to a separate configuration file is on the list for future improvements):

  • Fluid Homepage Title Check (default: Homepage)
  • Time Zone (default: US Central)
  • Stale Interval (default: 10 minutes)

Go visit the GitHub repository to download the project and get started.

#71 – DevOps and Ansible w/ Jason Gilfoil and Eric Bolinger

This week we podcast from the Alliance 2017 Conference in Las Vegas. Jason Gilfoil and Eric Bolinger join us to talk about DevOps and Ansible with PeopleSoft. We talk about application orchestration, mixing Ansible and Puppet, customizing the DPK and more.

Show Notes

  • Introducing Jason Gilfoil @ 1:30
  • Introducing Eric Bolinger @ 2:45
  • What is Ansible? @ 3:30
  • What is Orchestration? @ 8:00
  • Differences between Puppet and Ansible @ 12:00
  • Puppet Master, Hiera Hash and the DPK @ 15:15
  • Managing infrastructure code with Git @ 20:45
  • Adjusting to a DevOps culture @ 27:00
  • Getting automation started in your organization @ 30:30
  • Calculating Time Saving for Automation @ 35:30
  • Choosing an automation tool @ 41:30
  • Docker @ 43:00
  • Personal Development Environments @ 45:45
  • Starting to think “cloud” @ 50:00

#66 – Catch-up Projects w/ Charlie Sinks

This week on the podcast, Charlie Sinks joins us to talk about Image “Catch-up Projects”. We talk about different strategies to simplify customization retrofits, reviewing bugs and new features, and application testing strategies. We also talk about making architecture changes to work with the DPK, and Charlie shares an Oh No! story.

Show Notes

  • OS Change Update @ 1:30
  • First “catch-up” projects @ 6:45
    • Strategies to Simplify Retrofitting @ 9:30
  • Reviewing New Images @ 17:30
  • Testing “catch-up” projects @ 23:30
  • Charlie’s Introduction @ 29:00
  • Implementing Rundeck @ 34:00
  • Architecture Changes with the DPK @ 42:00
  • What is one app you hate to close? @ 51:00
  • Charlie’s Oh No! Story @ 60:00

Refreshes with Data Guard and Pluggable Databases

A PeopleSoft refresh is one of the more common tasks for a PeopleSoft Administrator or DBA. There are many ways to accomplish this task, but it usually involves a database restore from backup, custom SQL refresh scripts, and potentially ACM steps. Depending on the level of effort put into the refresh scripts, there can also be manual steps involved. This approach is tried and true, but it tends to lack the speed and flexibility that we are starting to expect with the delivery of the PeopleSoft Cloud Architecture toolset. Nightly or ad-hoc refresh environments and quickly provisioned temporary environments are just a few use cases that would benefit greatly from refresh process improvements. I have been doing some exploring in this area recently and would like to share a few thoughts. First, a quick overview of some Oracle tools and features that I have been leveraging.

Data Guard

Oracle Data Guard is a tool that gives you high availability, data protection and disaster recovery for your databases. At a high level, it consists of a primary database and one or more standby databases. These standby databases are transactionally consistent copies of the primary database. Therefore, if the primary database goes down, the standby can be switched to primary and your application can keep on rolling.

Physical vs. Snapshot Standby

There are multiple types of standby databases that can be used with Data Guard. I’d like to briefly explain the difference between Physical Standby and Snapshot Standby. A physical standby is a database that is kept in sync with a primary database via Redo Apply. The redo data is shipped from the primary and then applied to the physical standby. A snapshot standby is basically a physical standby that was converted to a snapshot, which is like a point in time clone of the primary. At this point we can use the snapshot to do development, testing, etc. When we are done with our snapshot, we can then convert it back to a physical standby and it will once again be in sync with the primary database. This is accomplished by taking a restore point when the snapshot conversion happens. The whole time the standby is in snapshot mode, the redo data is still being shipped from the primary. However, it is NOT being applied. Once we convert back to physical, the restore point is used to restore and then all waiting redo is applied.
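
If the standby happens to be managed with Data Guard Broker (the post doesn’t require it), the conversion in each direction is a single DGMGRL command. A rough sketch, with the connect string and database name as placeholders:

# Convert the physical standby to a snapshot standby; a guaranteed restore point is created automatically.
dgmgrl sys/password@primary "CONVERT DATABASE 'refresh_stby' TO SNAPSHOT STANDBY"

# ...use the snapshot standby for testing or refreshes...

# Convert back to a physical standby; the restore point is used and the waiting redo is applied.
dgmgrl sys/password@primary "CONVERT DATABASE 'refresh_stby' TO PHYSICAL STANDBY"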

Pluggable Databases

With Oracle 12c, we have the introduction of multitenant architecture. This architecture consists of Container (CDB) and Pluggable (PDB) databases. This setup makes consolidating databases much more efficient. It also gives us the ability to clone a PDB very easily. Cloning a PDB between different CDBs can even be done via a database link. Having a true multitenant setup does require additional licensing, but you can have a CDB-PDB setup without this extra licensing cost if you use a single instance (only one PDB per CDB). Here is a great video overview of multitenant.

Refresh Approach

Now that we have an idea of what these tools and features gain us, let’s think about how to put them to use with database refreshes. Both of these approaches assume the use of Data Guard and PDBs. Having a true multitenant setup would be most efficient but a single instance setup will work just fine. I would recommend you have a dedicated standby database for your refreshes, versus using the same standby you rely on for HA\DR. It would also make sense for the standby to be located on the same storage as the PDBs you will be refreshing. Neither of these are requirements, but I think you will see better performance and lessen the risk to your HA\DR plan.

The use case we will use for this example is a sandbox PeopleSoft database. This sandbox will be scheduled to refresh nightly, giving the business an environment to test and troubleshoot in with data from the day before. The refresh could also be run adhoc, if there is a need during the business day. So the goal is to have this fully automated and complete as fast as possible.

Clone Standby Approach

This approach will be to take a snapshot of our refresh standby database and clone it, overlaying our previous sandbox PDB. After this is completed, we will need to run custom SQL scripts or ACM steps to prepare the refreshed PDB. Finally, we will restore the refresh standby back to a physical standby database. This blog post by Franck Pachot gives a quick overview of the SQL commands needed to accomplish most of these steps, and a rough sketch of the core clone step follows the list below.

  1. Convert the refresh source physical standby to a snapshot standby.
  2. Open the refresh source PDB as read only.
  3. Create database link between the sandbox target CDB and the refresh source PDB.
  4. Drop the sandbox target PDB and create a clone from the refresh source PDB.
  5. Open the new clone sandbox PDB.
  6. Cleanup the sandbox PDB.
    • Check for errors.
    • Patch the PDB to the patch level of the CDB, if needed.
  7. Run custom SQL scripts or ACM steps against sandbox PDB for PeopleSoft setup.
  8. Convert the refresh source standby back to a physical standby.
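
Here is a rough sketch of steps 4 and 5, run with SQL*Plus against the sandbox target CDB. SANDBOX, REFRESH_PDB, and the refresh_link database link are placeholders, and it assumes the refresh source standby is already in snapshot mode with its PDB open read only and the database link already created.

# Re-clone the sandbox PDB from the refresh source over an existing database link.
# Names are placeholders; run this as SYSDBA on the sandbox target CDB host.
$cloneSql = @"
ALTER PLUGGABLE DATABASE SANDBOX CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE SANDBOX INCLUDING DATAFILES;
CREATE PLUGGABLE DATABASE SANDBOX FROM REFRESH_PDB@refresh_link;
ALTER PLUGGABLE DATABASE SANDBOX OPEN;
EXIT;
"@
$cloneSql | sqlplus -S "/ as sysdba"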

Snapshot Standby Approach

This approach is somewhat similar, except we won’t be doing any cloning. Instead, we will be using the actual snapshot standby itself as our database. Since we know this sandbox database will be refreshed nightly, we can stay in snapshot standby mode all day and then switch to physical standby mode briefly at night, applying redo data to sync up with our primary production database. After that is done, we will switch back to snapshot mode and run our custom SQL scripts and ACM steps. This will require a dedicated standby database and should only be used with a frequent refresh schedule. Since the redo data continues to ship while in snapshot standby mode, it will start to back up. The volume of waiting redo could become an issue if it gets too large, so you will need to do some analysis to make sure you can handle it based on your refresh interval.

  1. Create a sandbox PDB as a physical standby, with primary database being production.
  2. Convert sandbox to a snapshot standby.
  3. Run custom SQL scripts or ACM steps against sandbox for PeopleSoft setup.
  4. Use the snapshot standby sandbox PDB as your normal database; connecting app and batch domains, etc.
  5. Wait until next refresh interval.
  6. Convert sandbox from snapshot standby to physical standby.
    • Restore point will be used and redo data applied, syncing up with current primary database state in production.
  7. Convert sandbox from physical standby to snapshot standby.
  8. Run custom SQL scripts or ACM steps against sandbox for PeopleSoft setup.
  9. Repeat.

Conclusion

Those are just two ideas, but you can see that there are probably many variations of these approaches that will work. Leveraging Data Guard and PDBs really gives you many options to choose from. I have been using the Clone Standby approach recently and have packaged up the scripts, including bouncing the app and batch domains, as a job in Oracle Enterprise Manager. This gives me push-button refreshes with a turnaround time under 20 minutes. I have been able to provide ad-hoc refreshes for emergency production troubleshooting to the business multiple times in just a few months since implementing this approach. This is a very powerful tool to have and is well worth the effort to get your refreshes fast, efficient, and automated.

#57 – REST Services

This week on the podcast, Dan and Kyle talk about Elasticsearch, using Powershell with SSH libraries, why the DPK doesn’t merge Hiera data, and how they format SQL files. Then Dan explains a new REST-based web service he built with new features in PeopleTools 8.55.

We want to make this podcast part of the community discussion on PeopleSoft administration. If you have comments, feedback, or topics you’d like us to talk about, we want to hear from you! You can email us at podcast@psadmin.io, tweet us at @psa_io, or use the Twitter hashtag #psadminpodcast.

You can listen to the podcast here on psadmin.io or subscribe with your favorite podcast player using the URL below, or subscribe in iTunes.

Podcast RSS Feed

Show Notes

#52 – Vagabond and Rundeck w/ JR Bing

This week on the podcast we celebrate 1 year of podcasting with JR Bing. JR talked with us about his Vagabond project for managing PeopleSoft Images, how he uses RunDeck to simplify daily administration tasks, being a Mac user and using GitHub to manage projects and code.


Show Notes

Encrypt psft_customizations.yaml Passwords

In the psft_customizations.yaml file we store configuration information for a server, including passwords. There is a project, hiera-eyaml, that supports encrypting and decrypting sensitive data in Hiera YAML files. Out of the box, the Windows-based DPK doesn’t work with hiera-eyaml. For the Linux DPK, check out MOS Doc ID 2188771.1 – there is better support for hiera-eyaml in the Linux DPK.

In this post, we’ll walk through the steps to get hiera-eyaml working on Windows and how to encrypt data in the psft_customizations.yaml file.

Update RubyGems

The version of Ruby, and RubyGems, that ships with the DPK can’t install new gems; the bundled RubyGems doesn’t trust the SSL certificate that rubygems.org uses. To fix that, download the root certificate and tell RubyGems to trust it.

  1. Download the newer SSL certificate.
  2. Save the file as RubyGemsRootCA.pem
  3. Copy the new certificate to C:\Program Files\Puppet Labs\Puppet\sys\ruby\lib\ruby\2.0.0\rubygems\ssl_certs

Copying the new certificate to ssl_certs will tell RubyGems to trust any certificate signed by it. Now we can use RubyGems to install hiera-eyaml on the server.
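
Here is a minimal sketch of steps 2 and 3 in PowerShell, assuming you saved the downloaded certificate to your Downloads folder as RubyGemsRootCA.pem:

# Copy the downloaded RubyGems root certificate into the bundled RubyGems cert store.
$cert      = "$env:USERPROFILE\Downloads\RubyGemsRootCA.pem"
$certStore = "C:\Program Files\Puppet Labs\Puppet\sys\ruby\lib\ruby\2.0.0\rubygems\ssl_certs"
Copy-Item $cert $certStore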

Install hiera-eyaml

When Puppet is installed, it includes Ruby and RubyGems binaries because Puppet is written in Ruby. We’ll use the gem utility to install the hiera-eyaml RubyGem. First, we should update PATH to include Puppet’s Ruby binaries:

  1. $env:PATH += ";C:\Program Files\Puppet Labs\Puppet\sys\ruby\bin"
  2. gem install hiera-eyaml

RubyGems will install any dependencies and report the progress.

Fetching: trollop-2.1.2.gem (100%)
Successfully installed trollop-2.1.2
Fetching: highline-1.6.21.gem (100%)
Successfully installed highline-1.6.21
Fetching: hiera-eyaml-2.1.0.gem (100%)
Successfully installed hiera-eyaml-2.1.0
Parsing documentation for trollop-2.1.2
Installing ri documentation for trollop-2.1.2
Parsing documentation for highline-1.6.21
Installing ri documentation for highline-1.6.21
Parsing documentation for hiera-eyaml-2.1.0
Installing ri documentation for hiera-eyaml-2.1.0
3 gems installed

Keys

Hiera-eyaml uses its own public and private keys to encrypt and decrypt data. If you have inspected the puppet\ssl directory, you will see folders for public and private keys. Those keys are used by Puppet to communicate with a Puppet Server; we use different keys for encrypting data in psft_customizations.yaml.

The keys should be created in the folder C:\ProgramData\PuppetLabs\puppet\etc\secure\keys\. To ensure the keys are created in the correct location, and that Hiera-eyaml and Hiera know where to find them, we’ll create a configuration file for Hiera-eyaml.

  1. Create eyaml.yaml under C:\ProgramData\PuppetLabs\hiera\etc and add these values:

    ---    
    pkcs7_private_key: C:\ProgramData\PuppetLabs\puppet\etc\secure\keys\private_key.pkcs7.pem  
    pkcs7_public_key: C:\ProgramData\PuppetLabs\puppet\etc\secure\keys\public_key.pkcs7.pem
    
  2. Set the EYAML_CONFIG environment variable:

    $env:EYAML_CONFIG="C:\ProgramData\PuppetLabs\hiera\etc\eyaml.yaml"
    
  3. Create new encryption keys for Hiera-eyaml to use:

    eyaml createkeys
    

Keep these new keys safe and locked down; they decrypt your passwords!

Encrypt Passwords

Now that we have installed Hiera-eyaml and created keys, let’s do a quick test to make sure we can encrypt passwords. This test will encrypt the text “VP1”:

eyaml encrypt -s VP1

The output will look similar to this:

string: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEALsKtTfAXyHyE/k5r2U2ZZU98SqaQ5/ukfNR/FkOt9bNhoZ1EomqmqIc/06l7Tk5W4BYJA0mXV6ykLgOHYTAJbVPM8gXBuHsw1jh+/VC0er7evlzqtf7UjIvu3rTo+0LUm2X3imjbWHGhyrs2bxm0L1qpC2atlTSzEYrSc6OxkTpZA19Y8iEJxFb+F0fGwsQ3SRVJD1J3Jwf0hAsHN/SXX/p2ywn5qz2BnlJl4wa7ragYv4aVBGbGF3ThvYMCTzNiFHtyHdCFvPX9i/t0fpDUJY76ndAl/T4q/Stopnq6Gm9vLJH5EC6KMUQZzb0ssDHriojQgUH7uFt8/Wn9vFeTQTA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBBFcUOesdHoJgYi5PnXGmkAgBDbEJsr/tDXbDpJu7+xz9uL]

OR

block: >
    ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEw
    DQYJKoZIhvcNAQEBBQAEggEALsKtTfAXyHyE/k5r2U2ZZU98SqaQ5/ukfNR/
    FkOt9bNhoZ1EomqmqIc/06l7Tk5W4BYJA0mXV6ykLgOHYTAJbVPM8gXBuHsw
    1jh+/VC0er7evlzqtf7UjIvu3rTo+0LUm2X3imjbWHGhyrs2bxm0L1qpC2at
    lTSzEYrSc6OxkTpZA19Y8iEJxFb+F0fGwsQ3SRVJD1J3Jwf0hAsHN/SXX/p2
    ywn5qz2BnlJl4wa7ragYv4aVBGbGF3ThvYMCTzNiFHtyHdCFvPX9i/t0fpDU
    JY76ndAl/T4q/Stopnq6Gm9vLJH5EC6KMUQZzb0ssDHriojQgUH7uFt8/Wn9
    vFeTQTA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBBFcUOesdHoJgYi5PnX
    GmkAgBDbEJsr/tDXbDpJu7+xz9uL]

Hiera-eyaml gives two options for output: string and block. For psft_customizations.yaml I’m using the string output. It’s cleaner and easier to insert into the file. We can request string output only and assign a label to the encrypted password:

eyaml encrypt -s VP1 -o string -q -l db_user_pwd

The output should look like this:

db_user_pwd: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAV2y+yriBfuFlXspBIzZ8eBEOow7FU7mcwYL1HCpHd+XrwIliMTgDj+4X47XXQ3bce4WRvaezHUNahJQF4OZrwlGdCgXYeFG4dYvMEg/75T0704I2+y/XmLpI3Y5swd3L9LnHfxpAm6x8AJpf2yybSP4rsD1IxZgrpjy1CjFe3GuRW9ZcNFkNq5WofRweoX4C9QgNp1bmXQnJym+ZnVe1y7vQ9iEY336vF2FH3wJNgqRIy+74RWj9F+OaAg78meSxM0eM7jm4fLa32cMmOLzfU/FGFhLFcQJ2FaAa5/SWmBSgtwDUXsGaLcSa0R2nfQZrbRWmlP+s1WYL9MzkLFTEoDA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBDMAfrBMvJ+HRA+iL4zyQppgBBcHlLe8hUl6JD1jFXH/N22]

We can pipe the output of this command to the clipboard and then paste the line directly into psft_customizations.yaml:

eyaml encrypt -s VP1 -o string -q -l db_user_pwd | set-clipboard

set-clipboard requires WMF 5.0

Edit YAML Files

Encrypting passwords on the command line is great, but what if you want to edit all of the passwords in your psft_customizations.yaml file at once? Hiera-eyaml has an edit command that will decrypt the passwords in psft_customizations.yaml and open the file in a text editor for you. First, we need to set the EDITOR environment variable:

  1. $env:EDITOR="notepad.exe"
  2. eyaml edit .\psft_customizations.yaml

Notepad will open the psft_customizations.yaml file. At the top of the file, you will see a large comment block explaining how to add and edit passwords. (The comment block will go away when you close Notepad.)

Add New Passwords

To add a new password, you wrap the plain text password inside the brackets in this syntax: DEC::PKCS7[plaintextpassword]! For example,

db_user_pwd:     DEC::PKCS7[VP1]!

If you save and close the file, and open psft_customizations.yaml directly in Notepad, you will see the db_user_pwd: password is encrypted.

db_user_pwd:  ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAdGWx7WeuGw3lULsdDANpYotX66B1wzO3U9H47RLJA+s4cIVg5z2JtzTp+uHOp9L9SdcNyzsvo6+uPY29DxMsaIUv9Dfa5LWKv+GZypH4myJYxbNfhtRE5TcLWxxwTSji9WYxDyFu8FFJGIkdNcEzN4svG6CknDhmA/od/NPanQg+xWbjP2qJkiOMi2fDwPJd11dev7Qm4NcwkZzdcsMBpkSgL3eL2dZ/BzdJndWrsGlYfUAy0TLxJD9a4aBCiwYoWWmmS4smnmtmti0R1DPEs8BpAl5L76JItMUwzRsnmu5IZ8odxn8rQZQNJaOVk/oScp4SRIgCh5+tYp7FMvgM/jA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBCqrQ+GokeF23Of2odDkv5JgBBulnH4XkLOrQBEy+fa7cMr]

Excellent – we have encrypted passwords!

Edit Passwords

The next step is to edit existing encrypted passwords in the psft_customizations.yaml file. The eyaml edit command will open the file and decrypt passwords. The password syntax will be slightly different – it will have a number assigned to the password: db_user_pwd: DEC(1)::PKCS7[VP1]!

The (1) is used internally by Hiera-eyaml, so don’t change it. But you can change the password inside the square brackets. After changing the password, save and close the file and your updated passwords will be encrypted.

Enable eyaml with the DPK

When we push psft_customizations.yaml out to servers, we also need to ensure each server has the keys used to encrypt the passwords and knows about Hiera-eyaml. First, if you are using the encrypted passwords on more than one server, copy the puppet\etc\secure\keys folder to each server.
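
A minimal sketch of copying the key pair to a second server over the administrative share; the server name is a placeholder, and it assumes the puppet\etc\secure folder already exists on the target.

# Copy the eyaml keys folder to another server so it can decrypt the same values.
$keys = "C:\ProgramData\PuppetLabs\puppet\etc\secure\keys"
Copy-Item $keys "\\APPSRV02\c$\ProgramData\PuppetLabs\puppet\etc\secure\" -Recurse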

Next, Hiera needs to know that we are using Hiera-eyaml. In C:\ProgramData\PuppetLabs\hiera\etc\hiera.yaml, enable eyaml as a backend by adding - eyaml to the :backends: section:

:backends:
    - yaml
    - eyaml

Verify that the :eyaml: section is at the bottom of hiera.yaml. Change the paths to the Public and Private keys. If you followed the steps above and created them in puppet\etc\secure\keys, the paths will look like this:

:eyaml:
    :datadir: C:\ProgramData\PuppetLabs\puppet\etc\data
    :extension: yaml

    :pkcs7_private_key: C:\ProgramData\PuppetLabs\puppet\etc\secure\keys\private_key.pkcs7.pem
    :pkcs7_public_key:  C:\ProgramData\PuppetLabs\puppet\etc\secure\keys\public_key.pkcs7.pem

Save hiera.yaml and let’s test our configuration.

Testing Hiera-eyaml

To test Hiera-eyaml and Puppet working together, we’ll encrypt a password in psft_customizations.yaml and update the UserPswd= value in psappsrv.cfg.

  1. Open psft_customizations.yaml with eyaml edit and add the line:

    db_user_pwd: DEC::PKCS7[VP1]!
    
  2. Save and close psft_customizations.yaml.

  3. Save the code below as pwd.pp in puppet\etc\manifests. Change the $configFile path to point to your psappsrv.cfg file.

    $configFile = 'C:/Users/vagrant/psft/pt/8.55/appserv/APPDOM/psappsrv.cfg' 
    ini_setting { "eyaml_test": 
        ensure => present, 
        path => $configFile, 
        section => 'Startup', 
        setting => 'UserPswd', 
        value => hiera('db_user_pwd'), 
    }
    
  4. Change directories to puppet\etc\manifests.

  5. Run puppet apply .\pwd.pp --trace --debug

  6. After the run is done, open your psappsrv.cfg file. You should see UserPswd=VP1 in the file.

If the test above worked, you’re all set to use Hiera-eyaml with the DPK and Puppet. Once Hiera knows about Hiera-eyaml, any data in Hiera can be encrypted. Happy encrypting!