#158 – Federated User Preferences


This week on the podcast, Dan shares some new ideas to improve the DDDAudit and SYSAudit reports, and Kyle discusses how federated User Preferences work in a clustered environment.

Show Notes

Advanced PeopleSoft Administration – OpenWorld 2018

This week Kyle and I presented at OpenWorld 2018. Our session covered taking advantage of Deployment Packages, using ACM to simplify database refreshes, using psadmin-plus, and leveraging the new headless features of Change Assistant. You can view the slides here if you want to check out the presentation.

Using the DPK Redeploy Option for CPU Patching

There are lots of ways to apply CPU patches and to automate the process. We’ve covered a few methods on the blog, but I wanted to show another way with the DPK. This method combines using patched archives (discussed here) and an option in the DPK that isn’t really talked about: redeploying software.

Redeploy Option

In the DPK, there is an option to redeploy software that you have already installed. The redeploy option is supported for all the middleware, PS_HOME, and PS_APP_HOME. If you include

redeploy: true

in your psft_customizations.yaml file, the DPK will uninstall all the middleware, PS_HOME, and PS_APP_HOME (if your DPK role is an app role), and then reinstall them from your DPK archives folder. This is slick if you want to fix a bad install, or in our case, apply a new version of the software. We can replace the archive file for JDK or WebLogic, trigger a redeploy by running Puppet, and the new version will be installed.
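
For example, picking up a new JDK build with a global redeploy looks roughly like this (the archive names and folder locations are illustrative; adjust them for your own DPK base folder):

# Swap the JDK archive in the DPK archives folder (names and paths are illustrative)
Remove-Item c:\psft\dpk\archives\pt-jdk1.8.0_141.tgz
Copy-Item \\fileshare\dpk\pt-jdk1.8.0_144.tgz c:\psft\dpk\archives\

# With redeploy: true set in psft_customizations.yaml, run Puppet to reinstall
puppet apply .\site.pp --confdir c:\psft\dpk\puppet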

But, there are some downsides to how the DPK handles this. First, it’s all or nothing; you can redeploy all of the middleware or none of the middleware. If you redeploy, your PS_HOME (and PS_APP_HOME) will be redeployed and you’ll need to recompile your COBOL files (if you still use them). That’s too sledgehammer-y for me.

Also, we don’t always want to redeploy our software. We may only want to run a redeploy for new patches. To enable a redeploy, we have to update our configuration, run Puppet, then remove the configuration change afterward. I don’t like making transactional changes to my configuration files; our configuration should be static.

A Better Redeploy Option

Despite these downsides, I still like the redeploy option in the DPK. I made some changes to the DPK to make it work better. The changes had two goals:

  1. Don't include transactional data in the configuration files
  2. Offer per-software redeploy options

To achieve the first goal, we’ll use Facter to control when we redeploy. Facter offers the ability to override facts by using environment variables. We can set a sane default for our fact (false), and when we want to run a redeploy we can set an environment variable.

For the second goal, we will modify some of the DPK code to read new variables. The global redeploy option will stay, but there will be new per-software redeploy options.

My end state for applying CPU patches looks like this:

$env:FACTER_weblogic_redeploy="true"
puppet apply .\site.pp --confdir c:\psft\dpk\puppet

Puppet will run, redeploy only WebLogic from an updated archive file, and bring my system back up. The rest of the software (PS_HOME, Tuxedo, etc.) is left untouched.

DPK Modifications

The next section is a bit code heavy, but hang tight. This all fits together really well.

Custom Facts

We can include custom facts in the DPK to report whatever we want. In this case, we’ll create facts that default to false for each piece of software we might redeploy. These go in the file modules/pt_role/lib/facter/redeploy.rb.

Facter.add(:jdk_redeploy) do
  setcode do
    'false'
  end
end

Facter.add(:weblogic_redeploy) do
  setcode do
    'false'
  end
end

Facter.add(:tuxedo_redeploy) do
  setcode do
    'false'
  end
end

We now have 3 custom facts:

  • ::jdk_redeploy
  • ::weblogic_redeploy
  • ::tuxedo_redeploy

If we want to set these values to true, we can set these environment variables:

  • FACTER_jdk_redeploy=true
  • FACTER_weblogic_redeploy=true
  • FACTER_tuxedo_redeploy=true

That’s all it takes to override these facts. (This isn’t true of all facts provided by Facter, but for our custom facts this works.)
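
As a quick sanity check, you can set the variable and ask Facter for the fact directly (this assumes facter is on your PATH from the Puppet installation; a bare facter run won't load module custom facts, but the FACTER_ override is picked up regardless):

# Override the fact for this session, then query it
$env:FACTER_weblogic_redeploy="true"
facter weblogic_redeploy     # prints: true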

YAML Configuration

Next, we will read these facts into our configuration. Include this section at the root of your psft_customizations.yaml file.

jdk_redeploy:           "%{::jdk_redeploy}"
weblogic_redeploy:      "%{::weblogic_redeploy}"
tuxedo_redeploy:        "%{::tuxedo_redeploy}"

Puppet Changes

Now the changes get a little more involved. First, we need to read our new variables into the pt_tools_deployment.pp manifest. We’ll set false as the default value in case we forget to add the values to our psft_customizations.yaml. Then, we need to update the call to ::pt_setup::tools_deployment so our new configuration options are passed to the class. (It’s a good practice to only do the hiera lookups in the pt_profile manifests and not in the pt_setup manifests.) These changes are made in modules/pt_profile/manifests/pt_tools_deployment.pp.

  $jdk_redeploy           = hiera('jdk_redeploy', false)
  $weblogic_redeploy      = hiera('weblogic_redeploy', false)
  $tuxedo_redeploy        = hiera('tuxedo_redeploy', false)


  class { '::pt_setup::tools_deployment':
    ensure                 => $ensure,
    deploy_pshome_only     => $deploy_pshome_only,
    db_type                => $db_type,
    pshome_location        => $pshome_location,
    pshome_remove          => $pshome_remove,
    inventory_location     => $inventory_location,
    oracleclient_location  => $oracleclient_location,
    oracleclient_remove    => $oracleclient_remove,
    jdk_location           => $jdk_location,
    jdk_remove             => $jdk_remove,
    jdk_redeploy           => $jdk_redeploy,
    weblogic_location      => $weblogic_location,
    weblogic_remove        => $weblogic_remove,
    weblogic_redeploy      => $weblogic_redeploy,
    tuxedo_location        => $tuxedo_location,
    tuxedo_remove          => $tuxedo_remove,
    tuxedo_redeploy        => $tuxedo_redeploy,
    ohs_location           => $ohs_location,
    ohs_remove             => $ohs_remove,
    redeploy               => $redeploy,
  }

Next, we have to update the API for ::pt_setup::tools_deployment so it knows what values we are passing. These changes are made in modules/pt_setup/manifests/tools_deployment.pp.

class pt_setup::tools_deployment (
  $db_type                = undef,
  $pshome_location        = undef,
  $pshome_remove          = true,
  $inventory_location     = undef,
  $oracleclient_location  = undef,
  $oracleclient_remove    = true,
  $jdk_location           = undef,
  $jdk_remove             = true,
  $jdk_redeploy           = false,
  $weblogic_location      = undef,
  $weblogic_remove        = true,
  $weblogic_redeploy      = false,
  $tuxedo_location        = undef,
  $tuxedo_remove          = true,
  $tuxedo_redeploy        = false,
  $ohs_location           = undef,
  $ohs_remove             = true,
  $redeploy               = false,
)

Last, we have to add some logic to handle our new redeploy options. We’ll keep the main redeploy option, but for each piece of software we’ll also evaluate the software-specific redeploy option. Here is the JDK example:

    if ($redeploy == true) or ($jdk_redeploy == true) {
      $java_redeploy = true
    } else {
      $java_redeploy = false
    }
    pt_deploy_jdk { $jdk_tag:
      ensure            => $ensure,
      deploy_user       => $tools_install_user,
      deploy_user_group => $tools_install_group,
      archive_file      => $jdk_archive_file,
      deploy_location   => $jdk_location,
      redeploy          => $java_redeploy,
      remove            => $jdk_remove,
      patch_list        => $jdk_patches_list,
    }
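
The WebLogic and Tuxedo blocks follow the same pattern. Here is a sketch of the WebLogic case; it assumes the delivered pt_deploy_weblogic resource accepts the same parameters as pt_deploy_jdk, so compare it against your DPK version before copying:

    # Sketch only: mirrors the JDK logic above for WebLogic
    if ($redeploy == true) or ($weblogic_redeploy == true) {
      $wl_redeploy = true
    } else {
      $wl_redeploy = false
    }
    pt_deploy_weblogic { $weblogic_tag:
      ensure            => $ensure,
      deploy_user       => $tools_install_user,
      deploy_user_group => $tools_install_group,
      archive_file      => $weblogic_archive_file,
      deploy_location   => $weblogic_location,
      redeploy          => $wl_redeploy,
      remove            => $weblogic_remove,
    }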

That’s the end of the changes. It’s a little more invasive than I’d like, but the end result is worth it. We can now tell Puppet to do software-specific redeploys. This lets us deploy new versions of the software by updating the archive file, setting an environment variable, and then running puppet apply.

#144 – Environment “Cluster”


This week on the podcast, Kyle has some follow-up on PIA installations and discovers some unpleasant Unified Navigation behavior. Dan shares some tips for Facter and testing out the redeploy option with the DPK.

Show Notes

  • PIA Installation FUP @ 1:30
  • Facter and FACTER_ override @ 4:30
  • Redeploy and WebLogic @ 8:00
  • CPU Patching @ 18:30
  • Unified Nav and SSO @ 23:00

#143 – Reconnect 2018 Recap


This week on the podcast, Kyle and Dan recap the Reconnect 2018 conference, what they learned about the PeopleTools Roadmap, and some highlights from their sessions.

Show Notes

  • PeopleTools Roadmap Session @ 6:00
  • Securing and Hardening PeopleSoft @ 15:30
  • Kyle’s CPU Talk @ 20:30
  • Dan’s Devops and DPK @ 27:00
  • Advanced PeopleSoft Administration @ 29:00
  • PeopleTools Product Panel @ 37:00

#139 – Redeploy


This week on the podcast, Dan and Kyle discuss the new PeopleSoft Support timeframe, controlling how much data the Search Framework indexes, and how to use Facter to redeploy software via the DPK.

Show Notes

  • 2030 Commitment @ 1:00
  • Auto Select Follow-up @ 7:15
    • http://www.peoplesoftwiki.com/lookup-exclusion
    • http://blog.psftdba.com/2006/05/lookup-exclusion-table.html?m=1
  • PeopleSoft Test Framework and TLS @ 8:45
  • Facter and Redeploy @ 12:00
  • Search Framework – Last X days @ 19:00
  • Puppet variable warnings @ 27:00

Improving Windows Services from the DPK

A common theme we write about on the blog is how to make the DPK work with multiple environments on the same machine. It’s common to run DEV and TST environments on the same server. The DPK can build those environments, but there are a few changes to make the setup run well. On Windows, the services the DPK creates make an assumption that breaks when we run multiple environments.

When starting a domain via Windows services, the service assumes that the environment variables are set for that environment. If you create your DEV environment via the DPK, that’s a good assumption. But, if you create a TST environment next, the environment variables are set to TST. When you attempt to start the DEV domain via Windows services, the domain start will fail.

To resolve this, we can improve the Ruby script that starts our domains. Under the ps_cfg_home\appserv\DOMAIN folder, there are Ruby scripts that are called by the Windows service. For the app server, it’s appserver_win_service.rb. These scripts will look for the PS_CFG_HOME environment variable and start the domains it finds under that home. We can add a line in the file to point to the correct PS_CFG_HOME location like this:

ENV["PS_CFG_HOME"] = "c:\\psft\\cfg\\DEV"

While we can modify the file directly, the DPK way of handling this is to update the template in the DPK. Then, whenever we rebuild our domains the code change is automatically included.

The Ruby scripts to start/stop domains are templates in the DPK. The templates are stored under peoplesoft_base\dpk\puppet\modules\pt_config\files\pt_appserver\appserver_win_service.erb (replace pt_appserver with pt_prcs or pt_pia for the batch and PIA services.)

To make the environment variables we add dynamic, we can reference variables that exist in the Ruby code that renders the ERB template. In the program appserver_domain_boot.rb, the variables ps_home and ps_cfg_home are set. We will use those variables to build our environment variables.

ENV["PS_HOME"] = "<%= ps_home %>"
ENV["PS_CFG_HOME"] = "<%= ps_cfg_home %>"
system("<%= ps_home %>/appserv/psadmin -c start -d <%= domain_name %>")

The <%= %> tags output the value of the enclosed variable or expression. So in our case, we are outputting the string value of ps_cfg_home.

The result of this file will look like this:

ENV["PS_HOME"] = "c:\\psft\\pt\\ps_home8.56.08"
ENV["PS_CFG_HOME"] = "c:\\psft\\cfg\\DEV"
system("c:\\psft\\pt\\ps_home8.56.08\\appserv\\psadmin -c start -d DEV")

When we run Puppet the next time, our Windows service will have its environment variables set before starting or stopping a domain.

#129 – Alliance 2018 Recap


This week on the podcast, Brad Carlson, Adam Wlad and Jim Marion join Dan to recap the Alliance 2018 conference. We discuss some CPU Patching issues, the 2027 date for PeopleSoft, and PeopleTools 8.57 announcements.

Show Notes

  • Technical and Reporting Advisory Group @ 1:45
  • CPU Patching and Security @ 5:15
  • PeopleSoft 2027 @ 15:00
  • PeopleTools 8.57 – Cloud First @ 19:30
  • PeopleTools 8.57 UI Changes @ 29:15
  • Page and Field Configurator @ 40:00
  • App Designer Improvements @ 49:45
  • Stop and Shares @ 56:30
  • Learning about Fluid @ 60:00

Improve the Management of DPK Archives

In Episode 127 of the podcast, Kyle and I discuss some strategies for managing the DPK archive files. The discussion started because I am starting a PeopleTools 8.56 upgrade. To upgrade each server, we copy the new DPK archive files to every server and run the DPK. Having a copy of those archives on every server is redundant and not the best solution.

A better solution is to store the DPK archives in a centralized place. But we run into a problem in the DPK when the archive folder has multiple versions of the same archive. For example, if we have two versions of the WebLogic archive, the DPK will return an error because it doesn’t know which one to deploy.

This error is generated by the function get_matched_file(), a custom DPK function that looks at the DPK archive location and returns a filename based on a tag. The tags are string values like weblogic, pshome, and jdk. The DPK looks for all the file names that match the tag, and if there is more than one match, the function returns an error.
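
Conceptually, the matching behaves something like the Ruby sketch below. This is not the actual DPK source, just an illustration of the behavior described above:

# Illustrative only: return the single archive that matches a tag,
# and fail when there are zero or multiple matches.
def get_matched_file(archive_location, tag)
  matches = Dir.glob(File.join(archive_location, "*#{tag}*.tgz"))
  raise "No archive found for tag '#{tag}' in #{archive_location}" if matches.empty?
  raise "Multiple archives match tag '#{tag}': #{matches.join(', ')}" if matches.size > 1
  matches.first
end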

Specifying Archive Files

Instead of letting the DPK search a directory for matching archive files, we could explicitly define which archive files we want to deploy. While this isn’t as dynamic, the method would give us more control over which version of each piece of software we deploy. It would also solve the issue of centralizing all our DPK archives.

To use this method, we have to make two changes to the DPK:

  • Create a new hash to store the archives we want to deploy
  • Update the Puppet code to read from the new hash

Hash to Store Archive Data

In the psft_customizations.yaml file, we create a new archive_files: hash. This hash is where we can explicitly define which archive files we want to use.

archive_files:
  weblogic:             "%{hiera('archive_location')}/pt-weblogic12.2.1.2.20180116.tgz"
  tuxedo:               "%{hiera('archive_location')}/pt-tuxedo12.2.2.0.tgz"
  jdk:                  "%{hiera('archive_location')}/pt-jdk1.8.0_144.tgz"
  pshome:               "%{hiera('archive_location')}/pt-pshome8.56.06.tgz"
  oracleclient:         "%{hiera('archive_location')}/pt-oracleclient12.1.0.2.tgz"
  psapphome:            "%{hiera('archive_location')}/fscm-psapphome027.tgz"

The keys in the hash are the DPK tags for each piece of software. This lets us leverage the built-in tagging system in the DPK. If you want to take it a step further, you can define the hash with additional variables for each version. For example, in a common.yaml file, configure the archive_files: hash like this:

archive_files:
  jdk:                  "%{hiera('archive_location')}/pt-jdk%{hiera('jdk_version')}.tgz"
  pshome:               "%{hiera('archive_location')}/pt-pshome%{hiera('tools_version')}.tgz"

Then, in your psft_customizations.yaml file you can specify:

jdk_version:    1.8.0_144
tools_version:  8.56.06

Update Puppet to Read the Hash

Next, in the tools_deployment.pp file (under dpk/puppet/modules/pt_setup/manifests) we need to modify the code to use our archive_files: hash. In the delivered code, the DPK archive location and software tag are passed to the Ruby function get_matched_file(). What we will do instead is check whether the archive_files: hash is defined. If it is, we will use the path from the hash instead of calling get_matched_file(). If the archive_files: hash does not have the tag defined (like pshome), the get_matched_file() function is still called.

Here is the code for deploying WebLogic:

$archive_files = hiera('archive_files', '')

if $archive_files {
  $weblogic_archive_file   = $archive_files[$weblogic_tag]
} 
if $weblogic_archive_file == '' {
  $weblogic_archive_file = get_matched_file($tools_archive_location, $weblogic_tag)
}
if $weblogic_archive_file == '' {
  fail("Unable to locate archive (tgz) file for Weblogic in ${tools_archive_location}")
}

You can copy this code for the rest of the archive types in tools_deployment.pp and app_deployment.pp. For archive types that you do not specify, the DPK will revert to the original behavior.
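
For example, a PS_HOME version of the block would look like this (a sketch that assumes $archive_files is only looked up once at the top of the manifest, and that the delivered code defines $pshome_tag and a $pshome_archive_file variable the same way the WebLogic block does):

if $archive_files {
  $pshome_archive_file   = $archive_files[$pshome_tag]
}
if $pshome_archive_file == '' {
  $pshome_archive_file = get_matched_file($tools_archive_location, $pshome_tag)
}
if $pshome_archive_file == '' {
  fail("Unable to locate archive (tgz) file for PS_HOME in ${tools_archive_location}")
}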

I like this setup better. We get more control over which DPK archives we deploy, we can store multiple archives in a central location, and we retain the delivered behavior as the default.