psadmin.conf: System Performance Data Collection

In this session from psadmin.conf 2018, Frank Dolezal shares how he collects logs and system analytics to address a number of system issues. He explains how to track average performance trends in the system, identify errors and anomalies, catch new issues before they are reported, and more.

We released the videos from psadmin.conf as a free course so you can find the sessions in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.

Cloud Manager Configuration

Notes

If you haven’t installed Cloud Manager yet, watch this video first to learn how to install Cloud Manager.

  1. Install Chocolatey

    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
    
  2. Install Firefox

    choco install firefox -y
    

psadmin.conf: Elasticsearch Clusters on Kubernetes

In this session from psadmin.conf 2018, JR Bing gives an introduction to Kubernetes and then dives into why he is using it to run an Elasticsearch cluster. The session includes a great demo showing how PeopleSoft interacts with the Kubernetes cluster. He finishes the session with some ideas on how containers and Kubernetes could be used by PeopleSoft in the future.

We released the videos as a free course so you can find the videos in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.

Signing nVision Macros


If you have to support nVision reports, you’ve probably had to deal with getting nVision configured on developer workstations. To develop nVision reports, you need to run macros inside Excel. But many organizations are wary of letting users run just any macro. Macros are a common attack vector, so running Excel macros is something IT security often discourages.

How do we balance the need to run nVision with IT security’s concerns about macros? We can sign the nVision macros with a certificate from our organization so that the macros are trusted. To sign the macros, we will use tools that come with Microsoft Office.

Generate a Certificate

First, we need to generate a certificate. If you have Office 2016, you will find the selfcert.exe program here: C:\Program Files\Microsoft Office\root\Office16\

PS C:\> cd 'C:\Program Files\Microsoft Office\root\Office16' 
PS C:\Program Files\Microsoft Office\root\Office16> .\SELFCERT.EXE

Give your certificate a name, such as nVision, and click OK. Your certificate is stored in the Windows Certificate Manager, under the current user’s Personal store.
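
To confirm the certificate was created, you can list the current user’s Personal certificate store from PowerShell (the name filter below assumes you called the certificate nVision):

PS C:\> Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -like '*nVision*' }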

Sign the Excel Macro

Next, launch nVision and sign in. If nVision hangs, you can start Excel, temporarily set the macro settings to “Enable all macros” (File > Options > Trust Center > Trust Center Settings > Macro Settings), then relaunch nVision.

Once nVision has started, enable the Developer tab under File > Options > Customize Ribbon by checking the Developer option. Next, click on the Developer tab and select the Visual Basic button. In the VB Editor, click Tools > Digital Signature and select the nVision certificate. Save your changes.

Test the Signed Macros

Before we test, make sure your Excel macro settings are correct. Under File > Options > Trust Center > Trust Center Settings > Macro Settings, select the option “Disable all macros except digitally signed macros”. Close Excel and nVision.
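
If you need to roll this setting out to many developer workstations, the same option can be set in the registry. This is only a sketch: it assumes Office 2016/365 (the 16.0 registry hive), and a VBAWarnings value of 3 corresponds to “Disable all macros except digitally signed macros”:

# Assumes Office 2016/365 (the 16.0 hive); 3 = "Disable all macros except digitally signed macros"
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Office\16.0\Excel\Security' -Name 'VBAWarnings' -Value 3 -Type DWord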

Last, launch nVision and watch your digitally signed macros run in Excel.

Cloud Manager Installation

Sign up for an Oracle Cloud Trial Account

Create SSH key

ssh-keygen -f ~/.ssh/cmtrial
cat ~/.ssh/cmtrial.pub

Create OCI API Key

openssl genrsa -out ~/.oci/cmtrial.pem -aes128 2048
openssl rsa -pubout -in ~/.oci/cmtrial.pem -out ~/.oci/cmtrial.pub
openssl rsa -pubout -outform DER -in ~/.oci/cmtrial.pem | openssl md5 -c
cat ~/.oci/cmtrial.pub
base64 ~/.oci/cmtrial.pem | tr -d "\r\n"

More information on building OCI API Keys

Allow Public Access to CM

  1. Menu > Networking > VCN > cm Subnet > Security Lists > cm_sec
  2. Add a new ingress rule allowing source 0.0.0.0/0 to destination port 8000
  3. Add a hosts entry to access CM
echo "IPADDRESS psftcm.cm.psftcm.oraclevcn.com" | sudo tee -a /etc/hosts

Logs to view while waiting for Cloud Manager to finish.

ssh -i ~/.ssh/cmtrial opc@IPADDRESS
tail -f bootstrap/CloudManagerStatus.log
tail -f bootstrap/psft_oci_setup.log

psadmin.conf: Process Monitor 2.0

In this session from psadmin.conf 2018, David Vandiver demonstrates his Process Monitor 2.0 bolt-on. David redesigned the Process Monitor from scratch and added many improvements. His bolt-on is free too! You can download it at peoplesofttricks.com.

We released the videos as a free course so you can find the videos in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.

Speed up PeopleSoft Images

One complaint I have about using PeopleSoft Images is that logging in and opening pages is very slow. Behind the scenes, the application server is caching objects as you request them. The initial cache load can take time and that leads to very slow page loads. To eliminate the cache loading and slow performance, we can run the LOADCACHE process to pre-load all of the application cache.

We will solve this problem using a short Puppet manifest that runs after the DPK is finished. This lets the delivered DPK build the system, then we follow along and make the performance improvements.

Before we look at the Puppet code, let’s look at the LOADCACHE process and configuration.

LOADCACHE

The LOADCACHE process is an App Engine that will pre-build all of the cache files an application server needs. You run the process from the page “PeopleTools > Utilities > Administration > Load App Server Cache”. Depending on the size of your database, this may take a long time. In my HR Image 32 VM, the process only took 10 minutes. In a large Finance production system, the process ran for 3 hours.

The LOADCACHE process will build the cache files in a directory named CACHE/STAGE/stage. The contents of the stage directory can be copied (or symlinked) to your application server domain. (There is an Output Destination box on the run control page, but it does not control the output location.) The pre-built cache files must be stored under a CACHE/STAGE directory. This directory can live anywhere, but it is easiest to store the cache files under the application server domain.
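
For example, a manual version of that symlink on the HR Image used later in this post would look like this (the stage location depends on where the LOADCACHE output was written, so adjust both paths for your system):

# Paths are examples from an HR Image VM; adjust for your PS_FILEDIR and PS_CFG_HOME
ln -s /home/psadm2/PS_CACHE/CACHE/STAGE/stage /home/psadm2/psft/pt/8.57/appserv/psftdb/CACHE/SHARE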

Last, we need to tell our application server to use the shared cache files instead of per-process cache files. In psappsrv.cfg, set ServerCacheMode=1 and reconfigure the application server.
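
In a delivered psappsrv.cfg the setting lives under the [Cache Settings] section, so after the change the relevant lines should look roughly like this:

[Cache Settings]
;-------------------------------------------------------------------------
; 0 = each server process builds its own cache, 1 = use the shared, pre-built cache
ServerCacheMode=1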

DPK

The steps we want to automate are:

  1. Run the LOADCACHE app engine
  2. Create a symlink from app domain to pre-built cache
  3. Change ServerCacheMode in psappsrv.cfg
  4. Reconfigure and start the app server domain

First, the DPK is built to handle multiple app server domains on a box, so let’s wrap our Puppet code in an appropriate loop. Create the file manifests/loadcache.pp:

#loadcache.pp
$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {


}

This code looks up our app server domains in Hiera and will iterate over each domain.
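
For reference, here is an abbreviated sketch of the Hiera data that loop reads (the DPK builds this from its Hiera YAML files, such as psft_customizations.yaml). Only the keys used in this post are shown, and the values are illustrative:

# Illustrative values only; real domains carry many more keys
appserver_domain_list:
  psftdb:
    ps_cfg_home_dir: /home/psadm2/psft/pt/8.57
    db_settings:
      db_name:        psftdb
      db_type:        ORACLE
      db_opr_id:      PS
      db_opr_pwd:     PS
      db_connect_id:  people
      db_connect_pwd: peop1e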

To run an app engine via the DPK, we could use the exec resource and create a command to run. There is also a custom Puppet type, pt_psae, that the DPK delivers, and we can use it to run App Engines. BUT, there is a bug in pt_psae! It is hard-coded to only run PTEM_CONFIG (the ACM App Engine). We can fix that pretty easily. It just so happens that the pt_psae type takes a program_id parameter, and we can use that instead of the hard-coded value.

In the file production/modules/pt_config/lib/puppet/provider/psae.rb, we change the line

ae_program_name="PTEM_CONFIG"

to

ae_program_name=resource[:program_id]

To make this easier, let’s wrap this bug fix into a separate Puppet manifest so we can fix the bug on the fly. Create the file manifests/fixdpkbug.pp:

#fixdpkbug.pp
$dpk_location = hiera('dpk_location')

exec { 'fix-dpk-bug':
  command => "sed -i 's/ae_program_name=\"PTEM_CONFIG\"/ae_program_name=resource[:program_id]/' ${dpk_location}/puppet/production/modules/pt_config/lib/puppet/provider/psae.rb",
  path  => '/usr/bin',
}

Back in our loadcache.pp file, we will use the pt_psae type to call the LOADCACHE program. The type requires the database connection credentials that live in the db_settings hash. We need to convert the hash into an array of key=value pairs first. Then we can populate the parameters for pt_psae.

#loadcache.pp
$ps_home_dir = hiera('ps_home_location')

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings => $db_settings_array,
    run_control_id => 'BUILD',
    program_id => 'LOADCACHE',
    os_user =>  'psadm2',
    logoutput => 'true',
    ps_home_dir => $ps_home_dir,
  }

}

Next, we need to create the symlink in our app server domain to the location where our CACHE files are created. If you run LOADCACHE through the process scheduler, the files will be built under the PS_FILEDIR specified for the scheduler. Normally, this is stored under the process scheduler domain folder. Since we are running the process through the Puppet type (i.e., the command line), it will use the psadm2 user’s PS_FILEDIR location, which happens to be /home/psadm2/PS_CACHE.

To create the symlink, we can use the file resource built into Puppet.

#loadcache.pp
$ps_home_dir = hiera('ps_home_location')

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings => $db_settings_array,
    run_control_id => 'BUILD',
    program_id => 'LOADCACHE',
    os_user =>  'psadm2',
    logoutput => 'true',
    ps_home_dir => $ps_home_dir,
  }
  -> file {"${ps_cfg_home_dir}/appserv/${domain_name}/CACHE/SHARE":
    ensure  => link,
    target  => "/home/psadm2/PS_CACHE/CACHE/STAGE/stage"
  }

}

Notice that we use the -> resource chain between pt_psae and file. This tells Puppet that the app engine must run before we create the symlink (so we know the stage folder exists).

Next, let’s update psappsrv.cfg to set the Server Cache Mode. For that, we will use an exec resource running the sed command. You could use the file_line resource as well. The file_line method would offer better multi-platform support, but the sed command is really easy to use.
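
For reference, a minimal file_line version would look something like this. It is only a sketch: it assumes the puppetlabs-stdlib module is available to the DPK’s Puppet installation, and it would take the place of the exec resource inside the domain loop shown below.

  # Sketch only: assumes puppetlabs-stdlib; replaces the Set-Cache-Mode exec in the loop
  file_line { "servercachemode-${domain_name}":
    path  => "${ps_cfg_home_dir}/appserv/${domain_name}/psappsrv.cfg",
    line  => 'ServerCacheMode=1',
    match => '^;?ServerCacheMode=',
  }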

#loadcache.pp
$ps_home_dir = hiera('ps_home_location')

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings => $db_settings_array,
    run_control_id => 'BUILD',
    program_id => 'LOADCACHE',
    os_user =>  'psadm2',
    logoutput => 'true',
    ps_home_dir => $ps_home_dir,
  }
  -> file {"${ps_cfg_home_dir}/appserv/${domain_name}/CACHE/SHARE":
    ensure  => link,
    target  => "/home/psadm2/PS_CACHE/CACHE/STAGE/stage"
  }
  -> exec { "Set-Cache-Mode-${domain_name}": 
    command => "sed -i 's/^\;ServerCacheMode=0/ServerCacheMode=1/' ${ps_cfg_home_dir}/appserv/${domain_name}/psappsrv.cfg",
    path  => '/usr/bin',
  }

}

Last, we need to reconfigure the domain. I prefer using psadmin plus to handle domain restarts because it can bundle multiple actions into a single command. We can call psadmin plus from Puppet to do our reconfiguration.

#loadcache.pp
$ps_home_dir = hiera('ps_home_location')

$gem_home = '/opt/puppetlabs/puppet/bin'
exec { 'install-psadmin_plus':
  command => "${gem_home}/gem install psadmin_plus",
}

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings => $db_settings_array,
    run_control_id => 'BUILD',
    program_id => 'LOADCACHE',
    os_user =>  'psadm2',
    logoutput => 'true',
    ps_home_dir => $ps_home_dir,
  }
  -> file {"${ps_cfg_home_dir}/appserv/${domain_name}/CACHE/SHARE":
    ensure  => link,
    target  => "/home/psadm2/PS_CACHE/CACHE/STAGE/stage"
  }
  -> exec { "Set-Cache-Mode-${domain_name}": 
    command => "sed -i 's/^\;ServerCacheMode=0/ServerCacheMode=1/' ${ps_cfg_home_dir}/appserv/${domain_name}/psappsrv.cfg",
    path  => '/usr/bin',
  }
  -> exec { "Bounce ${domain_name} App Domain":
    command => "${gem_home}/psa bounce app ${domain_name}",
    require => Exec['install-psadmin_plus'],
  }
}

Using the exec resource, we can install psadmin plus using the gem utility. Inside our domain loop we can bounce the app server after we build our cache and reconfigure the domain.

The bounce action for psadmin plus will stop the domain, clear cache, flush the IPC resources, reconfigure the domain, then start the domain from a single command.
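
Under the covers, that bounce is roughly equivalent to running these psadmin commands against the domain (shown for the psftdb domain as an illustration; the gem’s exact invocation may differ):

# Roughly what 'psa bounce app psftdb' does; illustrative only
psadmin -c shutdown -d psftdb    # stop the domain
psadmin -c purge -d psftdb       # clear domain cache
psadmin -c cleanipc -d psftdb    # flush IPC resources
psadmin -c configure -d psftdb   # reconfigure, picking up the psappsrv.cfg change
psadmin -c boot -d psftdb        # start the domain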

Running loadcache.pp

We are ready to test our new manifest. We actually have two manifests to test:

  • loadcache.pp
  • fixdpkbug.pp

We need to run the fixdpkbug.pp manifest first. Since that manifest changes a line of the underlying Puppet provider code, we have to run it before we compile the Puppet catalog for our loadcache.pp run. Then, we will run the loadcache.pp manifest.

$ DPK_HOME="/opt/oracle/psft/dpk/puppet"
$ cd $DPK_HOME/production
$ sudo /opt/puppetlabs/puppet/bin/puppet apply manifests/fixdpkbug.pp --confdir $DPK_HOME
Notice: Compiled catalog for psvagabond in environment production in 0.09 seconds
Notice: /Stage[main]/Main/Exec[fix-dpk-bug]/returns: executed successfully
Notice: Applied catalog in 1.91 seconds
$ sudo /opt/puppetlabs/puppet/bin/puppet apply manifests/loadcache.pp --confdir $DPK_HOME
Notice: Compiled catalog for psvagabond in environment production in 0.15 seconds

Notice: /Stage[main]/Main/Exec[install-psadmin_plus]/returns: executed successfully
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: PeopleTools 8.57.08 - Application Engine
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: Copyright (c) 1988-2019 Oracle and/or its affiliates.
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: All Rights Reserved
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns:
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: Application Engine program LOADCACHE ended normally
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: executed successfully
Notice: /Stage[main]/Main/Exec[Set-Cache-Mode-psftdb]/returns: executed successfully
Notice: /Stage[main]/Main/Exec[Bounce psftdb App Domain]/returns: executed successfully
Notice: Applied catalog in 594.46 seconds

To verify that your application servers are using the shared cache, open your APPSRV_mmdd.LOG file and look for this line:

Cache Directory being used: /home/psadm2/psft/pt/8.57/appserv/psftdb/CACHE/SHARE/
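
A quick way to check from the command line, assuming the default DPK locations (adjust the path for your PS_CFG_HOME and domain name):

# Adjust the path to your PS_CFG_HOME and domain name
grep 'Cache Directory being used' /home/psadm2/psft/pt/8.57/appserv/psftdb/LOGS/APPSRV_*.LOG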

When you log into your PeopleSoft Image, all of the pages will load faster than before.

Vagabond and Automated Builds

This change has been added to the ps-vagabond project. If you build your PeopleSoft Images with Vagabond, you can pull down the latest changes in the master branch. If you want to see how to integrate these manifests into your PeopleSoft Image builds, you can look at this provisioning script.

The two files are shared as Gists on GitHub, so you can freely use them.

UPDATE: I created an Idea for this on the PeopleSoft Idea Space. Go vote for this if you want to see this included in future PeopleSoft Images.

Process Scheduler and Powershell

Process Scheduler and Powershell Scripts

The PeopleSoft Process Scheduler supports many different types of tools to run, but one thing it lacks is support for running shell scripts. Thankfully, David Kurtz offered a solution for that a while back, and revisited the solution recently. While making the process scheduler run shell scripts or Powershell scripts isn’t hard, the best part of David’s wrapper script is that it adds a layer to make your scripts API-aware with the Process Monitor.

In my case, I have a number of Powershell scripts to run so I re-implemented David’s solution in Powershell. For a detailed description of how to configure shell scripts to run, visit here. The instructions below are an abbreviated version with small changes specific to Powershell.

All of the code is posted on GitHub in the ps-powershell repository. This is an Oracle-specific script, but it could easily be updated to work with SQL Server or DB2. To get started, download the scripts in the repository to c:\psft\tools.

Use Cases

There are a number of uses for bringing Powershell scripts into the Process Monitor. The main use case I see is interfaces that run at the OS level: you can now schedule and execute them from within PeopleSoft. It’s common to use the Windows Task Scheduler to run Powershell scripts, but users have no visibility into when the scripts run or whether they completed successfully. Scheduling and executing those interfaces from the Process Monitor brings the visibility of the scripts into PeopleSoft, where all your batch jobs are logged.

Configure the Process Type

To configure your process scheduler to run Powershell scripts, we need to configure a new Process Type and enable the server to run the new type.

  1. Navigate to PeopleTools > Process Scheduler > Process Type
  2. Add a new type called Powershell
  3. Select ORACLE as the Database Type and the Platform as Windows
  4. Select Other as the Generic Process Type
  5. For the Command Line, set it to powershell.exe
  6. The Parameter List is where we pass all of the arguments to our wrapper script. This is a longer string that I’ll break down:

    • -NoProfile: This tells Powershell to not run any profile scripts that could change your environment. It also helps speed up the start of Powershell scripts.
    • -ExecutionPolicy Bypass: Depending on your system’s security policy, you may be able to exclude this option. It temporarily lowers the Powershell execution policy for the script you are running (and only for this script).
    • -File c:\psft\tools\psft.ps1: This is the location of the wrapper script.
    • -DBNAME %%DBNAME%% -ACCESSID %%ACCESSID%% -ACCESSPSWD %%ACCESSPSWD%% -PRCSINSTANCE %%INSTANCE%%: These are the required parameters the process scheduler will pass to the wrapper script.

    The full Parameter List is this: -NoProfile -ExecutionPolicy Bypass -File c:\psft\tools\psft.ps1 -DBNAME %%DBNAME%% -ACCESSID %%ACCESSID%% -ACCESSPSWD %%ACCESSPSWD%% -PRCSINSTANCE %%INSTANCE%%

  7. Set the working directory to c:\psft\tools

  8. Save the new Process Type

Next, we need to allow the new Process Types to run on your process scheduler.

  1. Navigate to PeopleTools > Process Scheduler > Servers
  2. Open your server definition (PSNT for me)
  3. In the “Process Types run on this Server” grid, add the Powershell Process Type and Save

Last, let’s configure the output format for the Powershell Process Types.

  1. Navigate to PeopleTools > Process Scheduler > System Settings
  2. Click the Process Type Output tab
  3. Select Other for the Process Type
  4. Check the Active checkbox for “Web” and also the Default checkbox
  5. Click the Process Output Format tab
  6. Select Other and Web, and then check the Active and Default checkboxes for the “Text Files (.txt)” row
  7. Save the changes

Our wrapper script is now configured for use.

Run Powershell Scripts

The wrapper script itself doesn’t do any processing besides updating the Process Monitor tables. Next, let’s build a test script to verify we can run Powershell scripts. From the GitHub repository, we will use the test.ps1 script. Copy the test.ps1 script to c:\psft\tools\test.ps1.
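
The real test.ps1 is in the ps-powershell repository; conceptually, all it needs to do is accept the database name parameter and write some output. A hypothetical minimal version looks something like this:

# Hypothetical sketch only -- use the actual test.ps1 from the ps-powershell repository
param(
    [string]$DBNAME
)

Write-Output "Test script started"
Write-Output "Database name passed from the Process Scheduler: $DBNAME"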

Next we will create a new Process Definition for the test script.

  1. Navigate to PeopleTools > Process Scheduler > Processes
  2. Click “Add a new value”
  3. Select Powershell as the Process Type
  4. Set the Process Name to PSTEST
  5. Add a Description: Test Powershell Script
  6. Click the Process Definition Options tab
  7. In the Process Security group box, set the Component to PRCSMULTI and the Process Group to TLSALL
  8. Click the Override Option tab
  9. Select “Append” for the Parameter List

    The append list is where we pass in commands that the psft.ps1 script will run for us. For our test script, we have one parameter to pass in addition to calling the script. We pass the database name to test.ps1 so it can print the database name.

    "c:\psft\tools\test.ps1 -DBNAME %%DBNAME%%"

    Make sure to wrap the parameter list in double quotes so that our wrapper script passes the entire string to our test.ps1 script.

  10. Save the new Process Definition.

Last, let’s test our new PSTEST process.

  1. Navigate to PeopleTools > Process Scheduler > System Process Request
  2. Add a new run control
  3. Click the Run button
  4. In the list of processes, select PSTEST and verify the output is “Web” and “TXT”
  5. Click OK

Go watch the process in Process Monitor and refresh the page. You should see the status of the process change from QUEUED to PROCESSING to DONE. When the process is done, go view the output under “View Log/Trace”.

psadmin.conf: Terraforming PeopleSoft

The next video from psadmin.conf 2018 is available! Kyle Benson and Dan Iverson look at Terraform, a cloud provisioning tool, and show how psadmin.io uses Terraform to build training environments. Terraform works with Amazon Web Services, Azure, Google Cloud Platform, Oracle Public Cloud, and many more cloud providers. Terraform can deploy a single server, or an infrastructure for 100 people, from a single command. The session will also talk about ps-terraform, an open source project to help you build PeopleSoft Images on the cloud.

We released the conference videos as a free course so you can find the videos in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.