Speed up PeopleSoft Images

One complaint I have about using PeopleSoft Images is that logging in and opening pages is very slow. Behind the scenes, the application server is caching objects as you request them. The initial cache load can take time and that leads to very slow page loads. To eliminate the cache loading and slow performance, we can run the LOADCACHE process to pre-load all of the application cache.

We will solve this problem using a short Puppet manifest that runs after the DPK is finished. This lets the delivered DPK build the system, then we follow along and make the performance improvements.

Before we look at the Puppet code, let's look at the LOADCACHE process and configuration.

LOADCACHE

The LOADCACHE process is an App Engine that will pre-build all of the cache files an application server needs. You run the process from the page “PeopleTools > Utilities > Administration > Load App Server Cache”. Depending on the size of your database, this may take a long time. In my HR Image 32 VM, the process only took 10 minutes. In a large Finance production system, the process ran for 3 hours.

The LOADCACHE process will build the cache files in a directory named CACHE/STAGE/stage. The contents of the stage directory can be copied (or symlinked) to your application server domain. (There is an Output Destination box on the run control page, but it does not control the output location.) The pre-built cache files must be stored under a CACHE/STAGE directory. This directory can live anywhere, but it is easiest to store the cache files under the application server domain.
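For example, a manual symlink for a domain named psftdb (the domain used later in this post) might look like this, assuming the stage files were built under /home/psadm2/PS_CACHE:

ln -s /home/psadm2/PS_CACHE/CACHE/STAGE/stage $PS_CFG_HOME/appserv/psftdb/CACHE/SHARE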

Last, we need to tell our application server to use the shared cache files instead of per-process cache files. In psappsrv.cfg, set ServerCacheMode=1 and reconfigure the application server.
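The change in psappsrv.cfg looks like this (a sketch; in my copy the setting lives in the Cache Settings section and ships commented out):

[Cache Settings]
; delivered default: ;ServerCacheMode=0 (one cache directory per server process)
ServerCacheMode=1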

DPK

The steps we want to automate are:

  1. Run the LOADCACHE app engine
  2. Create a symlink from app domain to pre-built cache
  3. Change ServerCacheMode in psappsrv.cfg
  4. Reconfigure and start the app server domain

First, the DPK is built to handle multiple app server domains on a box, so let’s wrap our Puppet code in an appropriate loop. Create the file manifests/loadcache.pp:

#loadcache.pp
$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {


}

This code looks up our app server domains in Hiera and will iterate over each domain.
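For reference, here is a trimmed sketch of the Hiera data this loop consumes (key names follow the delivered psft_configuration.yaml; the values are illustrative):

appserver_domain_list:
  psftdb:
    ps_cfg_home_dir: /home/psadm2/psft/pt/8.57
    db_settings:
      db_name:        PSFTDB
      db_type:        ORACLE
      db_opr_id:      PS
      db_opr_pwd:     PS
      db_connect_id:  people
      db_connect_pwd: peop1e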

To run an app engine via the DPK, we could use the exec resource and build a command to run. The DPK also delivers a custom Puppet Type, pt_psae, that we can use to run App Engines. BUT, there is a bug in pt_psae! The provider is hard-coded to only run PTEM_CONFIG (the ACM App Engine). We can fix that pretty easily. It just so happens that the pt_psae type takes a program_id parameter, and we can use that instead of the hard-coded value.

In the file production/modules/pt_config/lib/puppet/provider/psae.rb, we change the line

ae_program_name="PTEM_CONFIG"

to

ae_program_name=resource[:program_id]

To make this easier, let's wrap this bug fix into a separate Puppet manifest so we can fix the bug on the fly. Create the file manifests/fixdpkbug.pp:

#fixdpkbug.pp
$dpk_location = hiera('dpk_location')

exec { 'fix-dpk-bug':
  command => "sed -i 's/ae_program_name=\"PTEM_CONFIG\"/ae_program_name=resource[:program_id]/' ${dpk_location}/puppet/production/modules/pt_config/lib/puppet/provider/psae.rb",
  path  => '/usr/bin',
}

Back in our loadcache.pp file, we will use the pt_psae type to call the LOADCACHE program. The type requires the database connection credentials, which live in the db_settings: hash. We need to convert the hash into an array of key=value pairs first. Then we can populate the parameters for pt_psae.
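For example, given a hypothetical two-key db_settings hash, join_keys_to_values (a puppetlabs-stdlib function) flattens it like this:

# illustrative values only
# $db_settings = {'db_name' => 'PSFTDB', 'db_type' => 'ORACLE'}
# join_keys_to_values($db_settings, '=')
#   returns ['db_name=PSFTDB', 'db_type=ORACLE']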

#loadcache.pp
$ps_home_dir = hiera('ps_home_location')

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings => $db_settings_array,
    run_control_id => 'BUILD',
    program_id => 'LOADCACHE',
    os_user =>  'psadm2',
    logoutput => 'true',
    ps_home_dir => $ps_home_dir,
  }

}

Next, we need to create the symlink in our app server domain to the location where our CACHE files are created. If you run LOADCACHE through the process scheduler, the files will be built under the PS_FILEDIR specified for the scheduler. Normally, this is stored under the process scheduler domain folder. Since we are running the process through the Puppet Type (that is, from the command line), it will use the psadm2 user's PS_FILEDIR location, which happens to be /home/psadm2/PS_CACHE.

To create the symlink, we can use the file resource built into Puppet.

#loadcache.pp
$ps_home_dir = hiera('ps_home_location')

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings => $db_settings_array,
    run_control_id => 'BUILD',
    program_id => 'LOADCACHE',
    os_user =>  'psadm2',
    logoutput => 'true',
    ps_home_dir => $ps_home_dir,
  }
  -> file {"${ps_cfg_home_dir}/appserv/${domain_name}/CACHE/SHARE":
    ensure  => link,
    target  => "/home/psadm2/PS_CACHE/CACHE/STAGE/stage"
  }

}

Notice that we use the -> resource chain between pt_psae and file. This tells Puppet that the app engine must finish before we create the symlink (so we know the source folder exists).

Next, let’s update psappsrv.cfg to set the Server Cache Mode. For that, we will use an exec resource running the sed command. You could use the file_line resource as well. The file_line method would offer better multi-platform support, but the sed command is really easy to use.
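For comparison, a file_line version might look something like this (a sketch; file_line comes from the puppetlabs-stdlib module):

file_line { "server-cache-mode-${domain_name}":
  path  => "${ps_cfg_home_dir}/appserv/${domain_name}/psappsrv.cfg",
  line  => 'ServerCacheMode=1',
  match => '^;?ServerCacheMode=',
}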

#loadcache.pp
$ps_home_dir = hiera('ps_home_location')

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings => $db_settings_array,
    run_control_id => 'BUILD',
    program_id => 'LOADCACHE',
    os_user =>  'psadm2',
    logoutput => 'true',
    ps_home_dir => $ps_home_dir,
  }
  -> file {"${ps_cfg_home_dir}/appserv/${domain_name}/CACHE/SHARE":
    ensure  => link,
    target  => "/home/psadm2/PS_CACHE/CACHE/STAGE/stage"
  }
  -> exec { "Set-Cache-Mode-${domain_name}": 
    command => "sed -i 's/^\;ServerCacheMode=0/ServerCacheMode=1/' ${ps_cfg_home_dir}/appserv/${domain_name}/psappsrv.cfg",
    path  => '/usr/bin',
  }

}

Last, we need to reconfigure the domain. I prefer using psadmin plus to handle domain restarts because it can bundle multiple actions into a single command. We can call psadmin plus from Puppet to do our reconfiguration.

#loadcache.pp
$ps_home_dir = hiera('ps_home_location')

$gem_home = '/opt/puppetlabs/puppet/bin'
exec { 'install-psadmin_plus':
  command => "${gem_home}/gem install psadmin_plus",
}

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings => $db_settings_array,
    run_control_id => 'BUILD',
    program_id => 'LOADCACHE',
    os_user =>  'psadm2',
    logoutput => 'true',
    ps_home_dir => $ps_home_dir,
  }
  -> file {"${ps_cfg_home_dir}/appserv/${domain_name}/CACHE/SHARE":
    ensure  => link,
    target  => "/home/psadm2/PS_CACHE/CACHE/STAGE/stage"
  }
  -> exec { "Set-Cache-Mode-${domain_name}": 
    command => "sed -i 's/^\;ServerCacheMode=0/ServerCacheMode=1/' ${ps_cfg_home_dir}/appserv/${domain_name}/psappsrv.cfg",
    path  => '/usr/bin',
  }
  -> exec { "Bounce ${domain_name} App Domain":
    command => "${gem_home}/psa bounce app ${domain_name}",
    require => Exec['install-psadmin_plus'],
  }
}

Using the exec resource, we install psadmin plus with the gem utility. Inside our domain loop, we bounce the app server after we build our cache and reconfigure the domain.

The bounce action for psadmin plus will stop the domain, clear cache, flush the IPC resources, reconfigure the domain, then start the domain from a single command.
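If you want to see the individual steps, a bounce is roughly equivalent to running this psadmin sequence by hand (a sketch; psadmin_plus also purges the domain cache between the stop and start):

psadmin -c shutdown -d psftdb
psadmin -c cleanipc -d psftdb
psadmin -c configure -d psftdb
psadmin -c boot -d psftdb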

Running loadcache.pp

We are ready to test our new manifest. We actually have two manifests to test:

  • loadcache.pp
  • fixdpkbug.pp

We need to run the fixdpkbug.pp manifest first. Since that manifest changes a line of the underlying Puppet provider code, we have to run it before we compile the Puppet catalog for our loadcache.pp run. Then, we will run the loadcache.pp manifest.

$ DPK_HOME="/opt/oracle/psft/dpk/puppet"
$ cd $DPK_HOME/production
$ sudo /opt/puppetlabs/puppet/bin/puppet apply manifests/fixdpkbug.pp --confdir $DPK_HOME
Notice: Compiled catalog for psvagabond in environment production in 0.09 seconds
Notice: /Stage[main]/Main/Exec[fix-dpk-bug]/returns: executed successfully
Notice: Applied catalog in 1.91 seconds
$ sudo /opt/puppetlabs/puppet/bin/puppet apply manifests/loadcache.pp --confdir $DPK_HOME
Notice: Compiled catalog for psvagabond in environment production in 0.15 seconds

Notice: /Stage[main]/Main/Exec[install-psadmin_plus]/returns: executed successfully
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: PeopleTools 8.57.08 - Application Engine
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: Copyright (c) 1988-2019 Oracle and/or its affiliates.
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: All Rights Reserved
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns:
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: Application Engine program LOADCACHE ended normally
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: executed successfully
Notice: /Stage[main]/Main/Exec[Set-Cache-Mode-psftdb]/returns: executed successfully
Notice: /Stage[main]/Main/Exec[Bounce psftdb App Domain]/returns: executed successfully
Notice: Applied catalog in 594.46 seconds

To verify that your application servers are using the shared cache, open your APPSRV_mmdd.LOG file and look for this line:

Cache Directory being used: /home/psadm2/psft/pt/8.57/appserv/psftdb/CACHE/SHARE/
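One quick way to search for it from the shell (assuming the default log location; adjust the path, domain, and date suffix for your system):

grep 'Cache Directory being used' /home/psadm2/psft/pt/8.57/appserv/psftdb/LOGS/APPSRV_*.LOG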

When you log into your PeopleSoft Image, all of the pages will load faster than before.

Vagabond and Automated Builds

This change has been added to the ps-vagabond project. If you build your PeopleSoft Images with Vagabond, you can pull down the latest changes in the master branch. If you want to see how to integrate these manifests into your PeopleSoft Image builds, you can look at this provisioning script.

The two files are shared as Gists on GitHub, so you can freely use them.

UPDATE: I created an Idea for this on the PeopleSoft Idea Space. Go vote for this if you want to see this included in future PeopleSoft Images.

#199 – Environment Facts


This week on the podcast, Kyle and Dan share some fun talks, YAML tips, and YAML history from a Red Hat User Group, Dan’s slow conversion over to Vim, and how to use the Puppet environment name in a custom Fact.

Show Notes

#139 – Redeploy


This week on the podcast, Dan and Kyle discuss the new PeopleSoft Support timeframe, controlling how much data the Search Framework indexes, and how to use Facter to redeploy software via the DPK.

Show Notes

  • 2030 Commitment @ 1:00
  • Auto Select Follow-up @ 7:15
    • http://www.peoplesoftwiki.com/lookup-exclusion
    • http://blog.psftdba.com/2006/05/lookup-exclusion-table.html?m=1
  • PeopleSoft Test Framework and TLS @ 8:45
  • Facter and Redeploy @ 12:00
  • Search Framework – Last X days @ 19:00
  • Puppet variable warnings @ 27:00

Improving Windows Services from the DPK

A common theme we write about on the blog is how to make the DPK work with multiple environments on the same machine. It's common to run DEV and TST environments on the same server. The DPK can build those environments, but there are a few changes to make the setup run well. On Windows, the services the DPK creates make an assumption that breaks when we run multiple environments.

When starting a domain via Windows services, the service assumes that the environment variables are set for that environment. If you create your DEV environment via the DPK, that’s a good assumption. But, if you create a TST environment next, the environment variables are set to TST. When you attempt to start the DEV domain via Windows services, the domain start will fail.

To resolve this, we can improve the Ruby script that starts our domains. Under the ps_cfg_home\appserv\DOMAIN folder, there are Ruby scripts that are called by the Windows service. For the app server, it's appserver_win_service.rb. These scripts look for the PS_CFG_HOME environment variable and start the domains they find under that home. We can add a line in the file to point to the correct PS_CFG_HOME location like this:

ENV["PS_CFG_HOME"]=c:\psft\cfg\DEV

While we can modify the file directly, the DPK way of handling this is to update the template in the DPK. Then, whenever we rebuild our domains the code change is automatically included.

The Ruby scripts to start/stop domains are templates in the DPK. The templates are stored under peoplesoft_base\dpk\puppet\modules\pt_config\files\pt_appserver\appserver_win_service.erb (replace pt_appserver with pt_prcs or pt_pia for the batch and PIA services).

To make the environment variables we add dynamic, we can reference variables that exist in the Ruby environment that calls the ERB template. In the program appserver_domain_boot.rb, the variables ps_home and ps_cfg_home are set. We will use those variables to build our environment variables.

ENV["PS_HOME"] = "<%= ps_home %>"
ENV["PS_CFG_HOME"] = "<%= ps_cfg_home %>"
system("<%= ps_home %>/appserv/psadmin -c start -d <%= domain_name %>")

The <%= %> tags will output the value of that command or variable. So in our case, we are outputting the string value of ps_cfg_home.

The result of this file will look like this:

ENV["PS_HOME"] = "c:\\psft\\pt\ps_home8.56.08"
ENV["PS_CFG_HOME"] = "c:\\psft\\cfg\\DEV"
system("c:\\psft\\pt\ps_home8.56.08\\appserv\\psadmin -c start -d DEV")

When we run Puppet the next time, our Windows service will have its environment variables set before starting or stopping a domain.

Improve the Management of DPK Archives

In Episode 127 of the podcast, Kyle and I discuss some strategies for managing the DPK archive files. The discussion started because I am starting a PeopleTools 8.56 upgrade. To upgrade each server, we copy the new DPK archive files to every server and run the DPK. Having a copy of those archives on every server is redundant and not the best solution.

A better solution is to store the DPK archives in a centralized place. We run into a problem in the DPK when the archive folder has multiple versions of the same archive. For example, if we have two versions of the Weblogic archive, the DPK will return an error because it doesn't know which one to deploy.

This error is generated from the function get_matched_file(). get_matched_file() is a custom DPK function that looks at the DPK Archive location and returns a filename based on a tag. The tags are string values like weblogic, pshome, and jdk. The DPK looks for all the file names that match the tag. If there is more than one file, the function returns an error.
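In pseudocode, the behavior looks something like this (a rough sketch of the logic described above, not the actual DPK source):

# sketch only -- the real function ships in the DPK's Ruby libraries
def get_matched_file(archive_dir, tag)
  matches = Dir.glob(File.join(archive_dir, "*#{tag}*.tgz"))
  fail "Multiple archive files match tag '#{tag}'" if matches.length > 1
  matches.empty? ? '' : matches.first
end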

Specifying Archive Files

Instead of letting the DPK search a directory for matching archive files, we could explicitly define which archive files we want to deploy. While this isn't as dynamic, the method gives us more control over which version of each software component we deploy. It could also solve the issue with centralizing all our DPK Archives.

To use this method, we have to make two changes to the DPK:

  • Create a new hash to store the archives we want to deploy
  • Update the Puppet code to read from the new hash

Hash to Store Archive Data

In the psft_customizations.yaml file, we create a new archive_files: hash. This hash is where we can explicitly define which archive files we want to use.

archive_files:
  weblogic:             "%{hiera('archive_location')}/pt-weblogic12.2.1.2.20180116.tgz"
  tuxedo:               "%{hiera('archive_location')}/pt-tuxedo12.2.2.0.tgz"
  jdk:                  "%{hiera('archive_location')}/pt-jdk1.8.0_144.tgz"
  pshome:               "%{hiera('archive_location')}/pt-pshome8.56.06.tgz"
  oracleclient:         "%{hiera('archive_location')}/pt-oracleclient12.1.0.2.tgz"
  psapphome:            "%{hiera('archive_location')}/fscm-psapphome027.tgz"

The keys in the hash are the DPK tags for each piece of software. This lets us leverage the built-in tagging system in the DPK. If you want to take it a step further, you can define the hash with additional variables for each version. For example, in a common.yaml file, configure the archive_files: hash like this:

archive_files:
  jdk:                  "%{hiera('archive_location')}/pt-jdk%{hiera('jdk_version')}.tgz"
  pshome:               "%{hiera('archive_location')}/pt-pshome%{hiera('tools_version')}.tgz"

Then, in your psft_customizations.yaml file you can specify:

jdk_version:    1.8.0_144
tools_version:  8.56.06

Update Puppet to Read the Hash

Next, in the tools_deployment.pp file (under dpk/puppet/modules/pt_setup/manifests) we need to modify the code to use our archive_files: hash. In the delivered code, the DPK archive location and software tag are passed to the Ruby function get_matched_file(). What we will do instead is check if the archive_files: hash is defined. If it is, we will use the path from the hash instead of calling get_matched_file(). If the archive_files: hash does not have the tag defined (like pshome), the get_matched_file() function is called.

Here is the code for deploying WebLogic:

$archive_files = hiera('archive_files', {})

if $archive_files[$weblogic_tag] {
  $weblogic_archive_file = $archive_files[$weblogic_tag]
} else {
  $weblogic_archive_file = get_matched_file($tools_archive_location, $weblogic_tag)
}
if $weblogic_archive_file == '' {
  fail("Unable to locate archive (tgz) file for Weblogic in ${tools_archive_location}")
}

You can copy this code for the rest of the archive types in tools_deployment.pp and app_deployment.pp. For any tags you do not specify, the DPK will revert to the original behavior.
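For example, a pshome version of the same block might look like this (a sketch; I'm assuming a $pshome_tag variable that parallels $weblogic_tag in the delivered manifest):

if $archive_files[$pshome_tag] {
  $pshome_archive_file = $archive_files[$pshome_tag]
} else {
  $pshome_archive_file = get_matched_file($tools_archive_location, $pshome_tag)
}
if $pshome_archive_file == '' {
  fail("Unable to locate archive (tgz) file for PS_HOME in ${tools_archive_location}")
}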

I like this setup better. We get more control over which DPK archives we deploy, we can store multiple archives in a central location, and we retain the delivered behavior as the default.

#114 – Git Workflows


This week on the podcast, Kyle shares some changes and improvements to using Git with the Deployment Packages. Kyle and Dan also talk about Continuous Integration and how it can be used to test Deployment Package changes.

Show Notes

#113 – Rundeck and Bolt

This week on the podcast, Kyle and Dan discuss Bolt, a new tool from Puppet to run commands and scripts remotely on servers. Dan also talks about setting up a new Rundeck server and using Bolt with Rundeck.

Show Notes

#105 – Agile PeopleSoft

This week on the podcast, Charlie Sinks joins us to talk about the Upper Midwest Regional User Group. We talk about using Agile with PeopleSoft, our experiences with Elasticsearch, the Idea Space for PeopleTools, and using Git with PeopleSoft.

Show Notes

  • Chatbots Demo @ 3:30
  • Agile and PeopleSoft @ 11:00
    • SCRUM
    • Kanban
    • SAFe
  • Elasticsearch Experiences @ 33:45
  • PeopleTools 8.56 Updates @ 43:30
  • Testing Effort for Certifications @ 45:30
    • AIX/Solaris: pspuppet.sh is used to prepare the environment. Need to invoke if the bootstrap or puppet failed and you need to re-run
  • Idea Space @ 50:15
  • Advanced PS Admin Talk @ 56:30
  • Orchestration @ 61:00
  • Using Git @ 67:30

Using Puppet Environments with the DPK

Since the Deployment Packages were released with PeopleTools 8.55, one of my criticisms has been that the DPK is a bit of a sledgehammer. If you define multiple PeopleSoft environments on a server and you want to configure one web server, ALL the domains that the DPK knows about are shut down.

Puppet has an Environments feature that lets you segregate your code and data. While the DPK does not support Puppet Environments out of the box, we can use them to make the DPK less of a sledgehammer when managing our domains. (There is still some sledgehammering going on, so go vote for this idea).

While environments let you separate the modules, manifests, and data folders, in this post we'll separate just the data folder. This lets us share a common set of code (the manifests and modules folders), but the configuration of each domain will be different.

If you want to extend this to the modules and manifests folder, copy those into the environment folders with the environment-specific changes. This is useful for testing new code changes or if you want an environment to use a different DPK Role in the site.pp file.

Create Environment Folders

  1. Make new dev and tst folders under c:\programdata\puppetlabs\puppet\etc\environments

You can have multiple environments under this folder – as many as you want. A strategy that I'm testing is using the database name as the environment name. For this post, I'll stick with dev and tst.

  2. Copy your YAML files from puppet\etc\data to puppet\etc\environments\dev\data and puppet\etc\environments\tst\data.

Configure Puppet Environment

Under the puppet\etc folder, add (or modify) the puppet.conf file to look like this:

[main]
environment=production
parser=future
environmentpath=c:\programdata\puppetlabs\puppet\etc\environments
hiera_config=c:\programdata\puppetlabs\hiera\etc\hiera.yaml
basemodulepath=c:\programdata\puppetlabs\puppet\etc\modules

This file tells Puppet where to look for your environments, your Hiera configuration, your default module location, and the default Puppet Environment.

Last, we’ll modify the hiera.yaml file in c:\programdata\puppetlabs\hiera\etc to include environments:

---
:backends:
  - yaml

:hierarchy:
  - "environments/%{::environment}/data/psft_customizations"
  - "environments/%{::environment}/data/psft_configuration"
  - "environments/%{::environment}/data/psft_deployment"
  - "environments/%{::environment}/data/psft_unix_system"
  - "environments/%{::environment}/data/defaults"

:yaml:
  :datadir: c:\programdata\puppetlabs\puppet\etc

If you want to share some of the files, like the defaults.yaml or the psft_unix_system.yaml file, you could keep those under the main puppet\etc\data folder. Your hiera.yaml file would look like this:

---
:backends:
  - yaml

:hierarchy:
  - "environments/%{::environment}/data/psft_customizations"
  - "environments/%{::environment}/data/psft_configuration"
  - "environments/%{::environment}/data/psft_deployment"
  - data/psft_unix_system
  - data/defaults

:yaml:
  :datadir: c:\programdata\puppetlabs\puppet\etc

Test the Environments

Once our Puppet changes are complete, we can test some builds. When we run puppet apply, we'll add an additional parameter: the environment. To build my dev environment domains, I'll use this procedure:

cd c:\programdata\puppetlabs\puppet\etc\manifests
puppet apply .\site.pp --environment=dev --debug

Once the dev domains are built and running, you can kick off the tst build with:

puppet apply .\site.pp --environment=tst --debug

As the tst environment is building, your dev domains should stay up and not be affected by the Puppet run. If they are affected, you may have some YAML changes to make. Make sure your configurations between the environments don't overlap (e.g., the same PS_CFG_HOME or domain names).

#95 – You are here

This week on the podcast, we share Eric Bolinger's DPK module for WebLogic, Graham's 5 Things about PeopleSoft Images, more Fluid Ideas, and dive into ELM's Find Learning page behavior. We finish the episode discussing Matt Tremblay's "Reverse Proxy Server with Docker" post.

Show Notes