Environment Template Security in Cloud Manager

Now that Cloud Manager is here, we have self-service ability to create PeopleSoft environments in OCI. Cue Uncle Ben… “With great power comes great responsibility.” Having a self-service portal that allows for the creation of these environments is fantastic, but how do we put some controls around this awesome power? This is where Environment Template security comes into play.

To create an Environment in Cloud Manager, you first need an Environment Template. This template is created using some General Details, a Topology, and finally Security. It is this security detail that will help us control who can use these templates to create environments in the future. When you are creating a template, you will see step 3 – Define Security in the wizard. Let’s break down what our options are.

Assign Template to Zone(s)

Templates can be assigned to a single or multiple Zones. As of Image 11, there are three zones to choose from:

  • Development
  • Test
  • Production

A Zone is just “a logical grouping of environments,” according to Oracle’s Cloud Manager FAQ. At this time, it doesn’t serve any other purpose outside of helping you organize your environments. I could see a level of security being added to Zones in the future. If not by Oracle, maybe a custom bolt-on?

Assign Template to Role(s)

Templates can also be assigned to PeopleSoft security Roles. Any user that has a Role specified in this section will have the ability to create an Environment based on this template. Cloud Manager delivers three roles intended to be used with templates:

  • Cloud Administrator (PACL_CAD)
  • Cloud PeopleSoft Administrator (PACL_PAD)
  • Self-Service User (PACL_SSC)

As you would expect with PeopleSoft security, you are free to create and use your custom roles here. I think the delivered roles make it clear how Oracle sees the breakdown of potential users: users who admin OCI resources, users who admin PeopleSoft, and users of PeopleSoft who might want ad-hoc environments (thinking developers or maybe even business staff looking for demos). I could see the OCI and PS admin roles combined often. Also, the self-service user might be split out into a technical and functional role or disabled altogether. Each organization will have to review this for themselves and come up with a good policy. Just keep in mind, you can add multiple roles to each template.

Creating Environments

Once the security and other details are added to a template, it will be available to use when creating an Environment.

Only the templates the user has access to will be in the Template Name dropdown. The Zone dropdown will also be populated with available zones from the selected template. If only a single zone was added, it will be auto-selected and read-only.

Overall, I feel that Environment Template security offers a lot of control. It gives us enough control to provide a level of self-service environment deployments if desired. I do look forward to seeing actual functionality added to Zones. It might be easier to manage this security if we could somehow control access by zone versus strictly individual template security.

Taking PeopleSoft Swimming in an OCI Instance Pool

I was recently studying for some OCI Certification exams when I came across the topic of Instance Pools. I’ve known about this OCI feature for a while but realized I somehow never thought seriously about using them with PeopleSoft. So, I decided to do a proof of concept and write about it. What I found was that with a few changes to a traditional PeopleSoft topology and configuration, you can get this working!

Please keep in mind that this is a proof of concept. I know I didn’t touch on all PeopleSoft configuration and features, so there is likely more work to be done to make this setup viable. I will try to call out some “gotchas” that come to mind but feel free to call out potential issues in the comments. If we can turn this POC into something usable, it might even open the door for container exploration!

Getting Started

To get started, we first need to understand what an Instance Pool is, and what is required to create one. An Instance Pool is a feature that allows you to create and manage multiple compute instances that share the same configuration. What makes them powerful is the ability to scale instance count and attach to a Load Balancer. You can scale on a schedule, with metric-based rules, with Terraform or a script using OCI CLI, or manually with the OCI Console. Adjusting the instance count will create or terminate instances automatically. Also, when the pool is attached to a load balancer, the instances are automatically added or removed from the load balancer’s backend set. As you can see, this type of functionality would be great in the PeopleSoft space, where the load on a system can vary day-to-day. Think “timesheet day,” open enrollment, course registration, etc.

To complete this POC, we first need a few things.

Load Balancer

We will leverage the Load Balancing service that OCI offers. This service’s integration with our Instance Pool will give us the automation that we need. Creating a Load Balancer is straightforward, but there are a few things to keep in mind for this POC.

  1. Select a visibility type of Public if you hope to access it from the internet.
  2. Don’t worry about adding Backends at the time of Load Balancer creation. This will be handled by the Instance Pool in the future.
  3. Make sure to enable Session Persistence on the Backend Set. This is needed for PeopleSoft to work correctly.

Custom Image

An Instance Pool uses an Instance Configuration to spin up new instances on demand. An Instance Configuration contains all the configuration details like network, OS, shape, and image needed to create an instance. The image can be an Oracle provided platform image (Oracle-Linux-7.x, etc.) or a custom image that you create.

This image marks the first real decision on how to approach this POC. What we are after is a server with PeopleSoft web and application domains fully deployed and running. I see two approaches to this. One, use a standard platform image and then use cloud-init to configure and deploy the domains using DPK at instance creation. Two, create a custom image with the domains already deployed. I chose to leverage the custom image approach for this POC. I felt this was the fastest, most reliable way to create these instances. With rule-based scaling, and maybe to a lesser extent scheduled scaling, we ideally want these instances created quickly, and DPK takes time.

If startup time at creation isn’t a concern, then an approach using cloud-init is probably the way to go. One thought I had was to keep all middleware installations on FSS. The cloud-init script could mount that and have DPK focus on deploying PS_CFG_HOME. That would really speed things up. Maybe something to try in a future blog post!

Create Base Instance

I found a few things that were needed when prepping a PeopleSoft webapp server for Custom Image creation. I started by creating a new instance using the latest Linux 7 platform image. Next, I downloaded the latest PeopleTools 8.58 DPK to the instance and ran the psft-dpk-setup.sh setup script, passing the --env_type midtier --domain_type all options. This gave me a server with working web and application domains.

Update PeopleSoft Configuration

If we stopped here and created a Custom Image from this base instance, we might have some success. However, every future instance created from this image would have the exact same PeopleSoft domain configuration. There are some parts of a PeopleSoft domain configuration that are required to be unique. Also, some parts of the configuration will have a hostname “hardcoded,” etc. Here is a list of changes I made to address these concerns before I created a Custom Image. As mentioned before, there are likely more changes needed. However, these were enough to get this POC working.

  1. Update Cookie Configuration
    • Make sure your cookie configuration in the weblogic.xml file is set not to include a hostname.
    • <cookie-name>DBNAME-PORTAL-PSJSESSIONID</cookie-name>
  2. Update configuration.properties
    • Set the psserver= value to use localhost, since we are using a webapp deployment approach.
    • This will pin each web domain to use its local app domain, with load balancing and failover handled strictly by the OCI Load Balancer.
  3. Update setEnv.sh
    • By default, the ADMINSERVER_HOSTNAME variable is set to the hostname the domain was installed on.
    • Change this to be a dynamic value, driving off of the $HOSTNAME environment variable.
  4. Update psappsrv.cfg
    • The [Domain Settings]\Domain ID value should be unique across all your PeopleSoft domains.
    • It is common for domain ids to follow a sequence number pattern. Example: APPDOM1, APPDOM2, etc.
    • Update the value to use an environment variable for the sequence number, ensuring a unique ID.
      • We will discuss ideas on how to set this %PS_DOMAIN_NBR% variable later.
  5. Update psft-appserver service script
    • To ensure our domain configurations are correct and domains start properly, we should enforce a domain configure during instance creation and boot.
    • For this POC, I simply added a psadmin configure command to the domain service script.
    • For each application domain installed by the DPK, there is a service setup using a script found here:
      • $PS_CFG_HOME/appserv/APPDOM/psft-appserver-APPDOM-domain-appdom.sh
    • Update this script in the start_application_server function and add the following configure command before the start command.
      • su - $APPSRV_ADMIN -c "$APPSRV_PS_HOME/bin/psadmin -c configure -d $APPSRVDOM" 1>>$LOG_FILE 2>&1
  6. Create $PS_DOMAIN_NBR
    • In the psadm2 ~/.bashrc file, export a $PS_DOMAIN_NBR variable.
    • The goal for this is to generate a unique number that can be used in your domain configuration.
    • When an Instance Pool creates an instance, a random number is appended to the hostname.
      • Example: webapp-634836, webapp-973369, etc.
    • To leverage this appended number for $PS_DOMAIN_NBR, you can use something like this:
      • export PS_DOMAIN_NBR=$(hostname | rev | cut -d- -f1 | rev)
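To sanity-check that pipeline, here is a quick sketch using a sample pool-generated hostname (the hostname value itself is just an example):

```shell
# Simulate a pool-generated hostname and extract the trailing number.
# "rev | cut -d- -f1 | rev" grabs the text after the LAST hyphen,
# which is why this still works if the base name contains hyphens.
pool_hostname="webapp-634836"
PS_DOMAIN_NBR=$(echo "$pool_hostname" | rev | cut -d- -f1 | rev)
echo "$PS_DOMAIN_NBR"   # → 634836
```

Since the pool appends the random suffix for you, each new instance picks up a unique PS_DOMAIN_NBR automatically at login.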

Create a Custom Image

Before creating the custom image, there are a few more things to clean up. First, remove any unwanted domains or software. Depending on your DPK deployment, a process scheduler domain may have been created. We won’t be needing this, so it can be manually removed using psadmin. Next, remove any unwanted entries in the /etc/hosts file. For this POC, I stripped it down to just the “localhost” entries. New entries will be added when instances are created with new hostnames. Last, stop all running PeopleSoft domains. This step is just to ensure a clean configuration.
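As a rough sketch of that /etc/hosts cleanup, here is the idea run against a scratch copy rather than the live file (the sample entries are made up):

```shell
# Work on a scratch copy of /etc/hosts; delete every line that does
# not mention localhost, leaving only the loopback entries.
hosts_copy=$(mktemp)
printf '127.0.0.1 localhost\n::1 localhost\n10.0.0.5 webapp-base\n' > "$hosts_copy"
sed -i '/localhost/!d' "$hosts_copy"
cat "$hosts_copy"
```

Run against the real file, the same sed filter would drop the base instance's own hostname entry so stale names don't carry into the image.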

Now we are ready to create the Custom Image. In the OCI Console, find your base instance. Under More Actions, select Create Custom Image.

Instance Configuration

After the Custom Image is done provisioning, we are ready to create our Instance Configuration. To start, we need to create an instance based on our new Custom Image. In the OCI Console, navigate to Compute > Custom Image and find the new image. Click on Create Instance. Enter a name, shape, networking, and other details we want to be used in our future Instance Configuration. Select Create and wait for the instance to provision.

Once provisioned, go ahead and validate that the instance looks good. Login to the PIA, make sure the domains started correctly, etc. If all looks well, go back to your newly created instance in the OCI Console. Under More Actions, select Create Instance Configuration. Select a compartment, name, and optional tags, then click Create Instance Configuration.

Instance Pool

Now that we have our Instance Configuration, we can finally create our Instance Pool. In OCI Console, navigate to the newly created Instance Configuration and click on Create Instance Pool. Select a name, the instance configuration, and the number of instances you would like in the pool. Next, we will configure the pool placement with Availability Domain details. Also, make sure to select Attach a Load Balancer. Select the Load Balancer details we created earlier. Lastly, review the details and click Create. You can now follow the provisioning steps under Work Requests.

Once the provisioning work requests are completed, you can validate everything worked as expected. Under Instance Pool Details, you can navigate to Created Instances. You should see the newly created instances listed here. Also, you can navigate to Load Balancers. You will see the attached load balancer. Click into this to review that the Backend Set was updated with the newly created instances.

After validating the initial deployment, you can play around with the instance count. Navigate back to your Instance Pool and click on Edit. Try adding or subtracting to the Number of Instances. Monitor the work requests then validate instances were added or subtracted correctly.


This blog post was a quick, high-level walkthrough of using Instance Pools with PeopleSoft on OCI. My goal was mainly to prove that this was possible in a POC. However, I also wanted to help start a conversation about how this might look in a Production setting. I listed out a few things that could be done to get a custom image approach to work. What other configuration values did I not mention? What ideas do you have for a more dynamic approach, using DPK installs at creation time? As the feature set of OCI grows relating to Load Balancers and Instance Pools, I think we will be more and more motivated to get a deployment like this working outside of the lab!

psadmin.conf: System Performance Data Collection

In this session from psadmin.conf 2018, Frank Dolezal shares how he collects logs and system analytics to address a number of system issues. He explains how they can track average performance trends in the system, identify errors and anomalies, catch new issues before they are reported, and more.

We released the videos from psadmin.conf as a free course so you can find the sessions in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.

Cloud Manager Configuration


If you haven’t installed Cloud Manager yet, watch this video first to learn how to install Cloud Manager.

  1. Install Chocolatey

    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
  2. Install Firefox

    choco install firefox -y

psadmin.conf: Elasticsearch Clusters on Kubernetes

In this session from psadmin.conf 2018, JR Bing gives an introduction to Kubernetes and then dives into why he is using it to run an Elasticsearch cluster. The session also includes a great demo showing how PeopleSoft interacts with the Kubernetes cluster. He finishes the session with some ideas on how containers and Kubernetes could be used by PeopleSoft in the future.

We released the videos as a free course so you can find the videos in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.

Signing nVision Macros

If you have to support nVision reports, you’ve probably had to deal with getting nVision configured on developer workstations. To develop nVision reports, you need to run macros inside Excel. But many organizations are concerned about allowing users to run any macro. Macros are often an attack vector for hackers, so running Excel macros is something that IT security often discourages.

How do we balance the need to run nVision and IT security discouraging macros? We can sign the nVision macros with a certificate from your organization so that the macros are trusted. To sign the macros, we will use tools that come with Microsoft Office.

Generate a Certificate

First, we need to generate a certificate. If you have Office 2016, you will find the selfcert.exe program here: C:\Program Files\Microsoft Office\root\Office16\

PS C:\> cd 'C:\Program Files\Microsoft Office\root\Office16' 
PS C:\Program Files\Microsoft Office\root\Office16> .\SELFCERT.EXE

Give your certificate a name (nVision in this example) and click OK. Your certificate is stored in the Windows Certificate Manager.

Sign the Excel Macro

Next, launch nVision and sign in. If nVision hangs, you can start Excel, set the macro settings to “All Macros enabled” for now (File> Options > Trust Center > Trust Center Settings > Macro Settings), then relaunch nVision.

Once nVision has started, enable the Developer tab under File > Options > Customize Ribbon. Select the Developer option and move it to the toolbar. Next, click on Developer tab and select the Visual Basic button. In the VB Editor, click on Tools > Digital Signature and select the nVision certificate. Save your changes.

Test the Signed Macros

Before we test, make sure your Excel macro settings are correct. Under File> Options > Trust Center > Trust Center Settings > Macro Settings, select the option “Disable all macros except digitally signed macros”. Close Excel and nVision.

Last, launch nVision and watch your digitally signed macros run in Excel.

Cloud Manager Installation

Sign up for an Oracle Cloud Trial Account

Create SSH key

ssh-keygen -f ~/.ssh/cmtrial
cat ~/.ssh/cmtrial.pub

Create OCI API Key

openssl genrsa -out ~/.oci/cmtrial.pem -aes128 2048
openssl rsa -pubout -in ~/.oci/cmtrial.pem -out ~/.oci/cmtrial.pub
openssl rsa -pubout -outform DER -in ~/.oci/cmtrial.pem | openssl md5 -c
cat ~/.oci/cmtrial.pub
base64 ~/.oci/cmtrial.pem | tr -d "\r\n"

More information on building OCI API Keys

Allow Public Access to CM

  1. Menu > Networking > VCN > cm Subnet > Security Lists > cm_sec
  2. Add a new ingress rule allowing traffic on port 8000
  3. Add a hosts entry to access CM
echo "IPADDRESS psftcm.cm.psftcm.oraclevcn.com" | sudo tee -a /etc/hosts

Logs to view while waiting for Cloud Manager to finish.

ssh -i ~/.ssh/cmtrial opc@IPADDRESS
tail -f bootstrap/CloudManagerStatus.log
tail -f bootstrap/psft_oci_setup.log

psadmin.conf: Process Monitor 2.0

In this session from psadmin.conf 2018, David Vandiver demonstrates his Process Monitor 2.0 bolt-on. David redesigned the Process Monitor from scratch and added many improvements. His bolt-on is free too! You can download it at peoplesofttricks.com.

We released the videos as a free course so you can find the videos in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.

Speed up PeopleSoft Images

One complaint I have about using PeopleSoft Images is that logging in and opening pages is very slow. Behind the scenes, the application server is caching objects as you request them. The initial cache load can take time and that leads to very slow page loads. To eliminate the cache loading and slow performance, we can run the LOADCACHE process to pre-load all of the application cache.

We will solve this problem using a short Puppet manifest that runs after the DPK is finished. This lets the delivered DPK build the system, then we follow along and make the performance improvements.

Before we look at the Puppet code, let’s look at the LOADCACHE process and configuration.


The LOADCACHE process is an App Engine that will pre-build all of the cache files an application server needs. You run the process from the page “PeopleTools > Utilities > Administration > Load App Server Cache”. Depending on the size of your database, this may take a long time. In my HR Image 32 VM, the process only took 10 minutes. In a large Finance production system, the process ran for 3 hours.

The LOADCACHE process will build the cache files in a directory named CACHE/STAGE/stage. The contents of the stage directory can be copied (or symlinked) to your application server domain. (There is an Output Destination box on the run control page, but it does not control the output location.) The pre-built cache files must be stored under a CACHE/STAGE directory. This directory can live anywhere, but it is easiest to store the cache files under the application server domain.
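The directory arrangement can be sketched like this, using throwaway paths under /tmp in place of the real PS_CFG_HOME and PS_FILEDIR locations (the APPDOM domain name is just an example):

```shell
# Recreate the CACHE/STAGE/stage -> CACHE/SHARE symlink arrangement
# using temporary directories in place of the real domain paths.
demo=$(mktemp -d)
mkdir -p "$demo/PS_CACHE/CACHE/STAGE/stage"   # where LOADCACHE writes its output
mkdir -p "$demo/appserv/APPDOM/CACHE"         # the app server domain side
ln -s "$demo/PS_CACHE/CACHE/STAGE/stage" "$demo/appserv/APPDOM/CACHE/SHARE"
readlink "$demo/appserv/APPDOM/CACHE/SHARE"
```

A symlink (rather than a copy) means a re-run of LOADCACHE refreshes the shared cache without touching the domain again.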

Last, we need to tell our application server to use the shared cache files instead of per-process cache files. In psappsrv.cfg, set ServerCacheMode=1 and reconfigure the application server.
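The edit amounts to flipping one line. Here is the idea against a scratch file standing in for psappsrv.cfg, assuming the delivered file ships the setting commented out as ;ServerCacheMode=0:

```shell
# Flip the commented-out cache mode setting to shared cache (mode 1)
# on a scratch file standing in for psappsrv.cfg.
cfg=$(mktemp)
echo ";ServerCacheMode=0" > "$cfg"
sed -i 's/^;ServerCacheMode=0/ServerCacheMode=1/' "$cfg"
cat "$cfg"   # ServerCacheMode=1
```

This is the same substitution the Puppet manifest below automates for each domain.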


The steps we want to automate are this:

  1. Run the LOADCACHE app engine
  2. Create a symlink from app domain to pre-built cache
  3. Change ServerCacheMode in psappsrv.cfg
  4. Reconfigure and start the app server domain

First, the DPK is built to handle multiple app server domains on a box, so let’s wrap our Puppet code in an appropriate loop. Create the file manifests/loadcache.pp:

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {
  # per-domain resources will go here
}

This code looks up our app server domains in Hiera and will iterate over each domain.

To run an app engine via the DPK, we could use the exec resource and create a command to run. There is a pt_psae custom Puppet Type that the DPK delivers, and we can use this to run App Engines. BUT, there is a bug in pt_psae! It is hard-coded to only run PTEM_CONFIG (the ACM App Engine). We can fix that pretty easily. It just so happens that the pt_psae type takes a program_id parameter, and we can use that instead of the hard-coded value.

In the file production/modules/pt_config/lib/puppet/provider/psae.rb, we change the hard-coded line

ae_program_name="PTEM_CONFIG"

to

ae_program_name=resource[:program_id]
To make this easier, let’s wrap this bug fix into a separate Puppet manifest so we can fix the bug on the fly. Create the file manifests/fixdpkbug.pp

$dpk_location = hiera('dpk_location')

exec { 'fix-dpk-bug':
  command => "sed -i 's/ae_program_name=\"PTEM_CONFIG\"/ae_program_name=resource[:program_id]/' ${dpk_location}/puppet/production/modules/pt_config/lib/puppet/provider/psae.rb",
  path    => '/usr/bin',
}
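You can sanity-check that substitution on a throwaway copy of the hard-coded line before pointing it at the real psae.rb:

```shell
# Verify the sed substitution against a scratch file containing the
# hard-coded line from psae.rb.
rb=$(mktemp)
echo 'ae_program_name="PTEM_CONFIG"' > "$rb"
sed -i 's/ae_program_name="PTEM_CONFIG"/ae_program_name=resource[:program_id]/' "$rb"
cat "$rb"   # ae_program_name=resource[:program_id]
```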

Back in our loadcache.pp file, we will use the pt_psae type to call the LOADCACHE program. The type requires the database connection credentials that live in the hash db_settings:. We need to convert the hash into an array of key=value pairs first. Then we can populate the parameters for pt_psae.

$ps_home_dir = hiera('ps_home_location')

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings    => $db_settings_array,
    run_control_id => 'BUILD',
    program_id     => 'LOADCACHE',
    os_user        => 'psadm2',
    logoutput      => 'true',
    ps_home_dir    => $ps_home_dir,
  }
}

Next, we need to create the symlink in our app server domain to the location where our CACHE files are created. If you run LOADCACHE through the process scheduler, the files will be built under the PS_FILEDIR specified for the scheduler. Normally, this is stored under the process scheduler domain folder. Since we are running the process through the Puppet Type (aka, command line), it will use the psadm2 user’s PS_FILEDIR location which happens to be /home/psadm2/PS_CACHE.

To create the symlink, we can use the file resource built into Puppet.

$ps_home_dir = hiera('ps_home_location')

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings    => $db_settings_array,
    run_control_id => 'BUILD',
    program_id     => 'LOADCACHE',
    os_user        => 'psadm2',
    logoutput      => 'true',
    ps_home_dir    => $ps_home_dir,
  }
  -> file {"${ps_cfg_home_dir}/appserv/${domain_name}/CACHE/SHARE":
    ensure => link,
    target => "/home/psadm2/PS_CACHE/CACHE/STAGE/stage",
  }
}


Notice that we use the -> resource chain between pt_psae and file. This tells puppet that the app engine must run first before we create the symlink (so we know the folder exists).

Next, let’s update psappsrv.cfg to set the Server Cache Mode. For that, we will use an exec resource running the sed command. You could use the file_line resource as well. The file_line method would offer better multi-platform support, but the sed command is really easy to use.

$ps_home_dir = hiera('ps_home_location')

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings    => $db_settings_array,
    run_control_id => 'BUILD',
    program_id     => 'LOADCACHE',
    os_user        => 'psadm2',
    logoutput      => 'true',
    ps_home_dir    => $ps_home_dir,
  }
  -> file {"${ps_cfg_home_dir}/appserv/${domain_name}/CACHE/SHARE":
    ensure => link,
    target => "/home/psadm2/PS_CACHE/CACHE/STAGE/stage",
  }
  -> exec { "Set-Cache-Mode-${domain_name}":
    command => "sed -i 's/^\;ServerCacheMode=0/ServerCacheMode=1/' ${ps_cfg_home_dir}/appserv/${domain_name}/psappsrv.cfg",
    path    => '/usr/bin',
  }
}


Last, we need to reconfigure the domain. I prefer using psadmin plus to handle domain restarts because it can bundle multiple actions into a single command. We can call psadmin plus from Puppet to do our reconfiguration.

$ps_home_dir = hiera('ps_home_location')

$gem_home = '/opt/puppetlabs/puppet/bin'
exec { 'install-psadmin_plus':
  command => "${gem_home}/gem install psadmin_plus",
}

$appserver_domain_list = hiera('appserver_domain_list')
$appserver_domain_list.each | $domain_name, $app_domain_info | {

  $db_settings = $app_domain_info['db_settings']
  $db_settings_array  = join_keys_to_values($db_settings, '=')
  $ps_cfg_home_dir = $app_domain_info['ps_cfg_home_dir']

  pt_psae {"LOADCACHE-${domain_name}":
    db_settings    => $db_settings_array,
    run_control_id => 'BUILD',
    program_id     => 'LOADCACHE',
    os_user        => 'psadm2',
    logoutput      => 'true',
    ps_home_dir    => $ps_home_dir,
  }
  -> file {"${ps_cfg_home_dir}/appserv/${domain_name}/CACHE/SHARE":
    ensure => link,
    target => "/home/psadm2/PS_CACHE/CACHE/STAGE/stage",
  }
  -> exec { "Set-Cache-Mode-${domain_name}":
    command => "sed -i 's/^\;ServerCacheMode=0/ServerCacheMode=1/' ${ps_cfg_home_dir}/appserv/${domain_name}/psappsrv.cfg",
    path    => '/usr/bin',
  }
  -> exec { "Bounce ${domain_name} App Domain":
    command => "${gem_home}/psa bounce app ${domain_name}",
    require => Exec['install-psadmin_plus'],
  }
}

Using the exec resource, we can install psadmin plus using the gem utility. Inside our domain loop we can bounce the app server after we build our cache and reconfigure the domain.

The bounce action for psadmin plus will stop the domain, clear cache, flush the IPC resources, reconfigure the domain, then start the domain from a single command.

Running loadcache.pp

We are ready to test our new manifest. We actually have two manifests to test:

  • loadcache.pp
  • fixdpkbug.pp

We need to run the fixdpkbug.pp manifest first. Since that manifest changes a line of the underlying Puppet provider code, we have to run it before we compile the Puppet catalog for our loadcache.pp run. Then, we will run the loadcache.pp manifest.

$ DPK_HOME="/opt/oracle/psft/dpk/puppet"
$ cd $DPK_HOME/production
$ sudo /opt/puppetlabs/puppet/bin/puppet apply manifests/fixdpkbug.pp --confdir $DPK_HOME
Notice: Compiled catalog for psvagabond in environment production in 0.09 seconds
Notice: /Stage[main]/Main/Exec[fix-dpk-bug]/returns: executed successfully
Notice: Applied catalog in 1.91 seconds
$ sudo /opt/puppetlabs/puppet/bin/puppet apply manifests/loadcache.pp --confdir $DPK_HOME
Notice: Compiled catalog for psvagabond in environment production in 0.15 seconds

Notice: /Stage[main]/Main/Exec[install-psadmin_plus]/returns: executed successfully
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: PeopleTools 8.57.08 - Application Engine
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: Copyright (c) 1988-2019 Oracle and/or its affiliates.
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: All Rights Reserved
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns:
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: Application Engine program LOADCACHE ended normally
Notice: /Stage[main]/Main/Pt_psae[LOADCACHE-psftdb]/returns: executed successfully
Notice: /Stage[main]/Main/Exec[Set-Cache-Mode-psftdb]/returns: executed successfully
Notice: /Stage[main]/Main/Exec[Bounce psftdb App Domain]/returns: executed successfully
Notice: Applied catalog in 594.46 seconds

To verify that your application servers are using the shared cache, open your APPSRV_mmdd.LOG file and look for these lines:

Cache Directory being used: /home/psadm2/psft/pt/8.57/appserv/psftdb/CACHE/SHARE/

When you log into your PeopleSoft Image, all of the pages will load faster than before.

Vagabond and Automated Builds

This change has been added to the ps-vagabond project. If you build your PeopleSoft Images with Vagabond, you can pull down the latest changes in the master branch. If you want to see how to integrate these manifests into your PeopleSoft Image builds, you can look at this provisioning script.

The two files are shared as Gists on GitHub, so you can freely use them:

UPDATE: I created an Idea for this on the PeopleSoft Idea Space. Go vote for this if you want to see this included in future PeopleSoft Images.

Process Scheduler and Powershell

Process Scheduler and Powershell Scripts

The PeopleSoft Process Scheduler supports many different types of tools to run, but one thing it lacks is support to run shell scripts. Thankfully, David Kurtz offered a solution for that a while back, and revisited the solution recently. While making the process scheduler run shell scripts or Powershell scripts isn’t hard, the best part of David’s wrapper script is that it adds a layer to make your scripts API-aware with the Process Monitor.

In my case, I have a number of Powershell scripts to run so I re-implemented David’s solution in Powershell. For a detailed description of how to configure shell scripts to run, visit here. The instructions below are an abbreviated version with small changes specific to Powershell.

All of the code is posted on GitHub in the ps-powershell repository. This is an Oracle-specific script, but it could easily be updated to work with SQL Server or DB2. To get started, download the scripts in the repository to c:\psft\tools.

Use Cases

There are a number of uses for bringing Powershell scripts into the Process Monitor. The main use case I see is OS-level interface scripts that you can now schedule and execute from within PeopleSoft. It’s common to use the Windows Task Scheduler to run Powershell scripts, but users have no visibility as to when the scripts run or if they completed successfully. Scheduling and executing those interfaces from the Process Monitor brings the visibility of the scripts into PeopleSoft where all your batch jobs are logged.

Configure the Process Type

To configure your process scheduler to run Powershell scripts, we need to configure a new Process Type and enable the server to run the new type.

  1. Navigate to PeopleTools > Process Scheduler > Process Type
  2. Add a new type called Powershell
  3. Select ORACLE as the Database Type and the Platform as Windows
  4. Select Other as the Generic Process Type
  5. For the Command Line, set it to powershell.exe
  6. The Parameter List is where we pass all of the arguments to our wrapper script. This is a longer string that I’ll break down:

    • -NoProfile: This tells Powershell to not run any profile scripts that could change your environment. It also helps speed up the start of Powershell scripts.
    • -ExecutionPolicy Bypass: Depending on your systems security level, you could exclude this option. This temporarily lowers the security policy for Powershell for the script you are running (but only for this script).
    • -File c:\psft\tools\psft.ps1: This is the location of the wrapper script.
    • -DBNAME %%DBNAME%% -ACCESSID %%ACCESSID%% -ACCESSPSWD %%ACCESSPSWD%% -PRCSINSTANCE %%INSTANCE%%: These are the required parameters the process scheduler will pass to the wrapper script.

    The full Parameter List is this: -NoProfile -ExecutionPolicy Bypass -File c:\psft\tools\psft.ps1 -DBNAME %%DBNAME%% -ACCESSID %%ACCESSID%% -ACCESSPSWD %%ACCESSPSWD%% -PRCSINSTANCE %%INSTANCE%%

  7. Set the working directory to c:\psft\tools

  8. Save the new Process Type

Next, we need to allow the new Process Types to run on your process scheduler.

  1. Navigate to PeopleTools > Process Scheduler > Servers
  2. Open your server definition (PSNT for me)
  3. In the “Process Types run on this Server” grid, add the Powershell Process Type and Save

Last, let’s configure the output format for the Powershell Process Types.

  1. Navigate to PeopleTools > Process Scheduler > System Settings
  2. Click the Process Type Output tab
  3. Select Other for the Process Type
  4. Check the Active checkbox for “Web” and also the Default checkbox
  5. Click the Process Output Format tab
  6. Select Other and Web, and then check the Active and Default checkboxes for the “Text Files (.txt)” row
  7. Save the changes

Our wrapper script is now configured for use.

Run Powershell Scripts

The wrapper script itself doesn’t do any processing besides updating the Process Monitor tables. Next, let’s build a test script to verify we can run Powershell scripts. From the GitHub repository, we will use the test.ps1 script. Copy the test.ps1 script to c:\psft\tools\test.ps1

Next we will create a new Process Definition for the test script.

  1. Navigate to PeopleTools > Process Scheduler > Processes
  2. Click “Add a new value”
  3. Select Powershell as the Process Type
  4. Set the Process Name to PSTEST
  5. Add a Description: Test Powershell Script
  6. Click the Process Definition Options tab
  7. In the Process Security group box, set the Component to PRCSMULTI and the Process Group to TLSALL
  8. Click the Override Option tab
  9. Select “Append” for the Parameter List

    The append list is where we pass in commands that the psft.ps1 script will run for us. For our test script, we have one parameter to pass in addition to calling the script. We pass the database name to test.ps1 so it can print the database name.

    "c:\psft\tools\test.ps1 -DBNAME %%DBNAME%%"

    Make sure to wrap the parameter list in double quotes so that our wrapper script passes the entire string to our test.ps1 script.

  10. Save the new Process Definition.

Last, let’s test our new PSTEST process.

  1. Navigate to PeopleTools > Process Scheduler > System Process Request
  2. Add a new run control
  3. Click the Run button
  4. In the list of processes, select PSTEST and verify the output is “Web” and “TXT”
  5. Click OK

Go watch the process in Process Monitor and refresh the page. You should see the status of the process change from QUEUED to PROCESSING to DONE. When the process is done, go view the output under “View Log/Trace”.