Process Scheduler and Powershell Scripts

The PeopleSoft Process Scheduler supports many different types of processes, but one thing it lacks is support for running shell scripts. Thankfully, David Kurtz offered a solution for that a while back, and revisited the solution recently. While making the process scheduler run shell scripts or Powershell scripts isn’t hard, the best part of David’s wrapper script is that it adds a layer to make your scripts API-aware with the Process Monitor.

In my case, I have a number of Powershell scripts to run so I re-implemented David’s solution in Powershell. For a detailed description of how to configure shell scripts to run, visit here. The instructions below are an abbreviated version with small changes specific to Powershell.

All of the code is posted on GitHub in the ps-powershell repository. This is an Oracle-specific script, but it could easily be updated to work with SQL Server or DB2. To get started, download the scripts in the repository to c:\psft\tools.

Use Cases

There are a number of uses for bringing Powershell scripts into the Process Monitor. The main use case I see is interfaces that run from the OS: you can now schedule and execute them from within PeopleSoft. It’s common to use the Windows Task Scheduler to run Powershell scripts, but users have no visibility into when the scripts run or whether they completed successfully. Scheduling and executing those interfaces from the Process Monitor brings the visibility of the scripts into PeopleSoft, where all your batch jobs are logged.

Configure the Process Type

To configure your process scheduler to run Powershell scripts, we need to configure a new Process Type and enable the server to run the new type.

  1. Navigate to PeopleTools > Process Scheduler > Process Type
  2. Add a new type called Powershell
  3. Select ORACLE as the Database Type and the Platform as Windows
  4. Select Other as the Generic Process Type
  5. For the Command Line, set it to powershell.exe
  6. The Parameter List is where we pass all of the arguments to our wrapper script. This is a longer string that I’ll break down:

    • -NoProfile: This tells Powershell not to run any profile scripts that could change your environment. It also helps speed up the start of Powershell scripts.
    • -ExecutionPolicy Bypass: Depending on your system’s security level, you could exclude this option. It temporarily lowers the security policy for Powershell, but only for the script you are running.
    • -File c:\psft\tools\psft.ps1: This is the location of the wrapper script.
    • -DBNAME %%DBNAME%% -ACCESSID %%ACCESSID%% -ACCESSPSWD %%ACCESSPSWD%% -PRCSINSTANCE %%INSTANCE%%: These are the required parameters the process scheduler will pass to the wrapper script.

    The full Parameter List is this: -NoProfile -ExecutionPolicy Bypass -File c:\psft\tools\psft.ps1 -DBNAME %%DBNAME%% -ACCESSID %%ACCESSID%% -ACCESSPSWD %%ACCESSPSWD%% -PRCSINSTANCE %%INSTANCE%% (a fully resolved example appears after these steps).

  7. Set the working directory to c:\psft\tools

  8. Save the new Process Type
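
To make the meta-strings concrete, here is roughly what the process scheduler executes at runtime. HDEV, SYSADM, and 12345 are example values; the scheduler fills in the real ones when it runs your process:

powershell.exe -NoProfile -ExecutionPolicy Bypass -File c:\psft\tools\psft.ps1 -DBNAME HDEV -ACCESSID SYSADM -ACCESSPSWD ***** -PRCSINSTANCE 12345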

Next, we need to allow the new Process Types to run on your process scheduler.

  1. Navigate to PeopleTools > Process Scheduler > Servers
  2. Open your server definition (PSNT for me)
  3. In the “Process Types run on this Server” grid, add the Powershell Process Type and Save

Last, let’s configure the output format for the Powershell Process Types.

  1. Navigate to PeopleTools > Process Scheduler > System Settings
  2. Click the Process Type Output tab
  3. Select Other for the Process Type
  4. Check the Active checkbox for “Web” and also the Default checkbox
  5. Click the Process Output Format tab
  6. Select Other and Web, and then check the Active and Default checkboxes for the “Text Files (.txt)” row
  7. Save the changes

Our wrapper script is now configured for use.
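
Before we run anything, it helps to see what the wrapper does for us. Below is a rough sketch of the pattern, not the actual psft.ps1 from the repository: mark the process instance as Processing, run the command it was given, then record Success or Error in the Process Monitor tables. It assumes sqlplus is on the PATH, and the -COMMAND parameter is a simplified stand-in for how the real script receives the appended command.

# Sketch of the wrapper pattern only -- use psft.ps1 from the repository.
param (
    [string]$DBNAME,
    [string]$ACCESSID,
    [string]$ACCESSPSWD,
    [string]$PRCSINSTANCE,
    [string]$COMMAND   # simplified stand-in for the appended command string
)

function Set-PrcsStatus ([int]$Status) {
    # RUNSTATUS values: 7 = Processing, 9 = Success, 3 = Error
    $sql = "UPDATE PSPRCSRQST SET RUNSTATUS = $Status WHERE PRCSINSTANCE = $PRCSINSTANCE;`nCOMMIT;`nEXIT;"
    $sql | sqlplus -S "$ACCESSID/$ACCESSPSWD@$DBNAME"
}

Set-PrcsStatus 7             # tell the Process Monitor we are running
Invoke-Expression $COMMAND   # run the script from the Parameter List
# $LASTEXITCODE relies on the called script setting an exit code
if ($LASTEXITCODE -eq 0) {
    Set-PrcsStatus 9         # Success
} else {
    Set-PrcsStatus 3         # Error
}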

Run Powershell Scripts

The wrapper script itself doesn’t do any processing besides updating the Process Monitor tables; the real work happens in the script you pass to it. Next, let’s build a test script to verify we can run Powershell scripts. From the GitHub repository, we will use the test.ps1 script. Copy it to c:\psft\tools\test.ps1.
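
If you want a picture of the test script without opening the repository, a minimal stand-in for test.ps1 looks something like this (the real script in the repository is the one to use):

param (
    [string]$DBNAME
)

# Print output that we can verify later under View Log/Trace.
Write-Output "Test script ran from the Process Scheduler."
Write-Output "Database name: $DBNAME"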

Next we will create a new Process Definition for the test script.

  1. Navigate to PeopleTools > Process Scheduler > Processes
  2. Click “Add a new value”
  3. Select Powershell as the Process Type
  4. Set the Process Name to PSTEST
  5. Add a Description: Test Powershell Script
  6. Click the Process Definition Options tab
  7. In the Process Security group box, set the Component to PRCSMULTI and the Process Group to TLSALL
  8. Click the Override Option tab
  9. Select “Append” for the Parameter List

    The append list is where we pass in commands that the psft.ps1 script will run for us. For our test script, we have one parameter to pass in addition to calling the script: the database name, so test.ps1 can print it.

    "c:\psft\tools\test.ps1 -DBNAME %%DBNAME%%"

    Make sure to wrap the parameter list in double quotes so that our wrapper script passes the entire string to our test.ps1 script.

  10. Save the new Process Definition.

Last, let’s test our new PSTEST process.

  1. Navigate to PeopleTools > Process Scheduler > System Process Request
  2. Add a new run control
  3. Click the Run button
  4. In the list of processes, select PSTEST and verify the output is “Web” and “TXT”
  5. Click OK

Go watch the process in Process Monitor and refresh the page. You should see the status of the process change from QUEUED to PROCESSING to DONE. When the process is done, go view the output under “View Log/Trace”.

psadmin.conf: Terraforming PeopleSoft

The next video from psadmin.conf 2018 is available! Kyle Benson and Dan Iverson look at Terraform, a cloud provisioning tool, and show how psadmin.io uses Terraform to build training environments. Terraform works with Amazon Web Services, Azure, Google Cloud Platform, Oracle Public Cloud, and many more cloud providers. Terraform can deploy a single server, or an infrastructure for 100 people, from a single command. The session will also talk about ps-terraform, an open source project to help you build PeopleSoft Images on the cloud.

We released the conference videos as a free course so you can find the videos in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.

psadmin.conf: Automation Challenges for Administrators

The next video from psadmin.conf 2018 is available! Nate Werner focuses on using change management (Git/GitHub) for your collection of admin and monitoring scripts, and on setting up GitHub branches for non-production and production with a change approval process and team code review. Nate also shares solutions for managing passwords with scripts: how to abstract them and prevent them from leaking into logs. He also shows how Rundeck can be used to protect and audit your script execution (streamline vault access).

We released the videos as a free course so you can find the videos in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.

Advanced PeopleSoft Administration – OpenWorld 2018

This week Kyle and I presented at OpenWorld 2018. Our session talked about how to take advantage of Deployment Packages, using ACM to simplify database refreshes, using psadmin-plus, and leveraging the new headless features of Change Assistant. You can view the slides here if you want to check out the presentation.

psadmin.conf: Monitoring PeopleSoft Integrations

The first video from psadmin.conf 2018 is available! Brad Carlson discusses the need for monitoring Integration Broker messages. The PeopleSoft Integration Broker Monitor Service was developed by the University of Minnesota to provide robust monitoring of asynchronous messaging into and out of the PeopleSoft Enterprise. Through simple XML configuration, the monitor checks at defined intervals for pre-defined scenarios and, when one is found, triggers configured actions such as notifications.

We released the videos as a free course so you can find the videos in one place. Head over to the psadmin.io courses page and sign up. If you already signed up for the course, you can log in and the new video will be available.

psadmin-plus 2.0

About 2 years ago, Kyle released psadmin-plus, a bash utility to help automate and simplify domain management. We’re happy to announce psadmin-plus is now at 2.0!

psadmin-plus makes it easy to start, stop, reconfigure, purge cache, and more for your PeopleSoft domains:

psa stop app HDEV
psa start web HDEV
psa bounce all HDEV

Version 2.0 is all command-line driven and has a number of changes and improvements from the original release.

Ruby Gem

First, psadmin-plus was re-written as a Ruby Gem. The original version was a bash script for Linux. A little while later, a Powershell version was created for Windows. We chose to re-write psadmin-plus in Ruby so it could be a cross-platform tool. A single codebase makes it much easier to implement features for both platforms.

The second benefit of re-writing psadmin-plus as a Ruby Gem is that installation is much easier. This command is all you need to install psadmin-plus:

gem install psadmin_plus

When updates to psadmin-plus are released, you can upgrade with:

gem update psadmin_plus

You will need Ruby installed on your system for the gem and psadmin-plus commands to work. If you used the DPK to build your system, Ruby is already available. (For Windows, if you use the DPK’s version of Ruby, the RubyGems SSL trusted certificate is out of date. Run these commands to update the RubyGems trusted certificate.)

Configuration File

psadmin-plus has a number of configuration options to customize the behavior of your system. You can set these options with environment variables, but version 2.0 also supports the use of a configuration file. Use the file ~/.psa.conf to set your configuration options. You can also change the file location by setting PS_PSA_CONF=/path/to/conf.file. Here is a sample of what your .psa.conf file might look like:

PS_POOL_MGMT=on
PS_MULTI_HOMES=c:\psft\cfg
PS_WIN_SERVICES=web
PS_HEALTH_FILE=host.html

For a full list of configuration options for psadmin-plus, check out the README on the GitHub repository.

Windows Service Support

The first version of psadmin-plus for Windows started and stopped domains via psadmin. Many Windows admins prefer to start their domains with Windows services. Setting the configuration option PS_WIN_SERVICES tells psadmin-plus to start and stop domains via the Start-Service and Stop-Service cmdlets. The PS_WIN_SERVICES setting has a number of options. You can specify all to start all domains with services, tux to start the Tuxedo domains with services, or one of web, app, or prcs if you want to call out a specific type.

If you start domains via the command line but have Windows services defined, we want to keep the service status in sync. The PS_TRAIL_SERVICE option does just that. If you enable the setting, psadmin-plus will start or stop the domain via psadmin and then start or stop the respective Windows service. This ensures that the Windows service status matches the status of your domain.

psadmin-plus assumes you are using the DPK service names, but you can override that if you used a different method for creating Windows services. The defaults are:

"Psft*Pia*#{domain}*"
"Psft*App*#{domain}*"
"Psft*Prcs*#{domain}*"

If you want to change those names, you can add these settings in your .psa.conf file:

WEB_SERVICE_NAME="#{domain}-pia"
APP_SERVICE_NAME="#{domain}-AppServer"
PRCS_SERVICE_NAME"#{domain}-Batch"

Fill out the value to match your naming convention, but you must include #{domain}; that is how psadmin-plus finds your service.

Multi-PS_CFG_HOME Support

One last feature we added to psadmin-plus is support for multiple PS_CFG_HOME folders. By default, psadmin-plus assumes you have one config home folder on the server for all your domains. If you have more than one config home, set the environment variable PS_MULTI_HOMES to the base folder where your config homes live. For example, if c:\psft\cfg stores your config homes, you can set PS_MULTI_HOMES=c:\psft\cfg and psadmin-plus will look under there for your config homes.

There is one assumption with multi-config home support: the domain name matches the config home name. If your domain name is HDEV and your base config home is c:\psft\cfg, psadmin-plus will look in c:\psft\cfg\HDEV\ for your domains.
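
In PowerShell terms, the lookup is roughly equivalent to this sketch (not the actual psadmin-plus code):

# Resolve the config home for a domain under the multi-home base folder.
$domain = "HDEV"
$env:PS_CFG_HOME = Join-Path $env:PS_MULTI_HOMES $domain   # c:\psft\cfg\HDEV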

Shell Versions

If you prefer the older shell script versions of psadmin-plus, don’t worry. We saved those versions into branches on the GitHub repository. You can still use the shell scripts, but we’ll only be adding new features to the Ruby Gem version of psadmin-plus.

Using the DPK Redeploy Option for CPU Patching

There are lots of ways to apply CPU patches and to automate the process. We’ve covered a few methods on the blog, but I wanted to show another way with the DPK. This method combines using patched archives (discussed here) and an option in the DPK that isn’t really talked about: redeploying software.

Redeploy Option

In the DPK, there is an option to redeploy software that you have already installed. The redeploy option is supported for all the middleware, PS_HOME, and PS_APP_HOME. If you include

redeploy: true

in your psft_customizations.yaml file, the DPK will uninstall all the middleware, PS_HOME, and PS_APP_HOME (if your DPK role is an app role), and then reinstall them from your DPK archives folder. This is slick if you want to fix a bad install or, in our case, apply a new version of the software. We can replace the archive file for the JDK or WebLogic, trigger a redeploy by running Puppet, and the new version will be installed.

But there are some downsides to how the DPK handles this. First, it’s all or nothing: you redeploy all of the middleware or none of it. If you redeploy, your PS_HOME (and PS_APP_HOME) will be redeployed too, and you’ll need to recompile your COBOL files (if you still use them). That’s too sledgehammer-y for me.

Also, we don’t always want to redeploy our software; we may only want to redeploy for new patches. To enable a redeploy, we have to update our configuration, run Puppet, then remove the configuration change. I don’t like making transactional changes to my configuration files. Our configuration should be static.

A Better Redeploy Option

Given these downsides, I still like the redeploy option in the DPK, so I made some changes to the DPK to make it work better. The changes had two goals:

  1. Keep transactional changes out of the configuration files
  2. Offer per-software redeploy options

To achieve the first goal, we’ll use Facter to control when we redeploy. Facter lets you override facts with environment variables, so we can set a sane default for our fact (false) and set an environment variable when we want to run a redeploy.

For the second goal, we will modify some of the DPK code to read new variables. The global redeploy option will stay, but there will be new per-software redeploy options.

My end state for applying CPU patches looks like this:

$env:FACTER_weblogic_redeploy="true"
puppet apply .\site.pp --confdir c:\psft\dpk\puppet

Puppet will run, redeploy only WebLogic from an updated archive file, and bring my system back up. The rest of the software (PS_HOME, Tuxedo, etc.) is left untouched.

DPK Modifications

The next section is a bit code heavy, but hang tight. This all fits together really well.

Custom Facts

We can include custom facts in the DPK to report whatever we want. In this case, we’ll create facts that report false for different software to redeploy. These go in the file modules/pt_role/lib/facter/redeploy.rb.

Facter.add(:jdk_redeploy) do
  setcode do
    'false'
  end
end

Facter.add(:weblogic_redeploy) do
  setcode do
    'false'
  end
end

Facter.add(:tuxedo_redeploy) do
  setcode do
    'false'
  end
end

We now have 3 custom facts:

  • ::jdk_redeploy
  • ::weblogic_redeploy
  • ::tuxedo_redeploy

If we want to set these values to true, we can set these environment variables:

  • FACTER_jdk_redeploy=true
  • FACTER_weblogic_redeploy=true
  • FACTER_tuxedo_redeploy=true

That’s all it takes to override these facts. (This isn’t true of all facts provided by Facter, but for our custom facts this works.)
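
You can check the override from PowerShell before running Puppet. This assumes facter is on your PATH and you run it from the puppet directory that contains the modules folder:

$env:FACTER_weblogic_redeploy = "true"
facter --custom-dir .\modules\pt_role\lib\facter weblogic_redeploy   # prints: true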

YAML Configuration

Next, we will read these facts into our configuration. Include this section at the root of your psft_customizations.yaml file.

jdk_redeploy:           "%{::jdk_redeploy}"
weblogic_redeploy:      "%{::weblogic_redeploy}"
tuxedo_redeploy:        "%{::tuxedo_redeploy}"

Puppet Changes

Now the changes get a little more involved. First, we need to read our new variables into the pt_tools_deployment.pp manifest. We’ll set false as the default value in case we forget to add the values to our psft_customizations.yaml. Then, we need to update the call to ::pt_setup::tools_deployment so our new configuration options are passed to the class. (It’s a good practice to only do the hiera lookups in the pt_profile manifests and not in the pt_setup manifests.) These changes are made in modules/pt_profile/manifests/pt_tools_deployment.pp.

  $jdk_redeploy           = hiera('jdk_redeploy', false)
  $weblogic_redeploy      = hiera('weblogic_redeploy', false)
  $tuxedo_redeploy        = hiera('tuxedo_redeploy', false)


  class { '::pt_setup::tools_deployment':
    ensure                 => $ensure,
    deploy_pshome_only     => $deploy_pshome_only,
    db_type                => $db_type,
    pshome_location        => $pshome_location,
    pshome_remove          => $pshome_remove,
    inventory_location     => $inventory_location,
    oracleclient_location  => $oracleclient_location,
    oracleclient_remove    => $oracleclient_remove,
    jdk_location           => $jdk_location,
    jdk_remove             => $jdk_remove,
    jdk_redeploy           => $jdk_redeploy,
    weblogic_location      => $weblogic_location,
    weblogic_remove        => $weblogic_remove,
    weblogic_redeploy      => $weblogic_redeploy,
    tuxedo_location        => $tuxedo_location,
    tuxedo_remove          => $tuxedo_remove,
    tuxedo_redeploy        => $tuxedo_redeploy,
    ohs_location           => $ohs_location,
    ohs_remove             => $ohs_remove,
    redeploy               => $redeploy,
  }

Next, we have to update the API for ::pt_setup::tools_deployment so it knows what values we are passing. These changes are made in modules/pt_setup/manifests/tools_deployment.pp.

class pt_setup::tools_deployment (
  $db_type                = undef,
  $pshome_location        = undef,
  $pshome_remove          = true,
  $inventory_location     = undef,
  $oracleclient_location  = undef,
  $oracleclient_remove    = true,
  $jdk_location           = undef,
  $jdk_remove             = true,
  $jdk_redeploy           = false,
  $weblogic_location      = undef,
  $weblogic_remove        = true,
  $weblogic_redeploy      = false,
  $tuxedo_location        = undef,
  $tuxedo_remove          = true,
  $tuxedo_redeploy        = false,
  $ohs_location           = undef,
  $ohs_remove             = true,
  $redeploy               = false,
)

Last, we have to add some logic to handle our new redeploy options. We’ll keep the main redeploy option, but for each piece of software we’ll also evaluate the software-specific redeploy option.

    if ($redeploy == true) or ($jdk_redeploy == true) {
      $java_redeploy = true
    } else {
      $java_redeploy = false
    }
    pt_deploy_jdk { $jdk_tag:
      ensure            => $ensure,
      deploy_user       => $tools_install_user,
      deploy_user_group => $tools_install_group,
      archive_file      => $jdk_archive_file,
      deploy_location   => $jdk_location,
      redeploy          => $java_redeploy,
      remove            => $jdk_remove,
      patch_list        => $jdk_patches_list,
    }

That’s the end of the changes. It’s a little more invasive than I’d like, but the end result is worth it. We can now tell Puppet to do software-specific redeploys. This lets us deploy new versions of the software by updating the archive file, setting an environment variable, and then running puppet apply.

Elasticsearch: Anatomy of a Search

Search is becoming more important in PeopleSoft and is often required for applications to work properly. We’ve talked before on how to configure Elasticsearch with PeopleSoft, but what happens when we perform a search? There are a number of moving parts that handle the search query, applying security and displaying the results. Let’s take a look into the anatomy of a search to understand the process.

Overview of the Search Flow

The user creates a search request. Generally, this happens when a user opens the Search Box and enters a term or terms to search on, but the process can also start from a component. A good example is the “Find Learning” page in ELM. When you open the component, it sends a query to Elasticsearch for data to populate the page.

Search Framework Builds the Query

The search request from the user needs to be parsed and formatted so that Elasticsearch understands the query. The app package PT_SEARCH has a number of methods that check for wildcards and partial words and build a valid JSON request for Elasticsearch.

IB Builds JSON Message

In the app package PTSF_ES, the JSON search query string is converted into an IB Message using the IB_GENERIC message. The app package also builds the HTTP headers for the transaction, such as the content-type: application/json and the Authorization header.

The Integration Broker POSTs the message against the alias for the Search Category. If you deployed the EP_ASSETS search in Finance for the database PSFTDB, the alias is ep_assets_psftdb_orcl_es_alias. When you search, you search against the Search Category instead of the individual indices; the alias is the Elasticsearch mechanism that lets you search the category rather than each index.
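
You can approximate the request the IB sends with a quick REST call. This is an illustrative sketch: the host and password are example values, and the body is a simplified stand-in for the JSON that PT_SEARCH actually builds:

# Basic Authentication uses the user/password from the Search Instance page.
$bytes = [Text.Encoding]::ASCII.GetBytes("esadmin:changeme")
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String($bytes) }

# POST a simplified query against the Search Category alias.
$body = '{ "query": { "query_string": { "query": "fleet" } } }'
Invoke-RestMethod -Method Post -ContentType "application/json" -Headers $headers `
    -Uri "http://elastic.psadmin.io:9200/ep_assets_psftdb_orcl_es_alias/_search" -Body $body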

Elasticsearch Validates Security

A PeopleSoft plugin running in Elasticsearch verifies the security of the user request before running the query. The orcl_es_auth plugin validates the Basic Authentication value in the HTTP POST from the IB. The user it validates is esadmin – the user and password defined on the Search Instance page. In your Elasticsearch logs, you can see the validation happening:

[authentication           ] Authentication plugin : authenticate method
[authentication           ] Authentication type : Base64
[authentication           ] Authentication successful

ES Runs the Query

Next, Elasticsearch will run the query. In the Elasticsearch logs, you can see which indices are used to perform the search:

[orclacl                  ] Indices/ aliases : [ ep_assets_psftdb_orcl_es_alias ]
[orclacl                  ] Index types retrieved : [ ep_am_asset_psftdb ]

In this example, the EP_ASSETS Search Category has only one index associated with it.

ES Checks the ACL Cache

After Elasticsearch has the search results for the query, the results are filtered based on PeopleSoft security. Search Definitions can implement different security types to lock down search data. There is User/Role security, as well as Document security. If a Search Definition has a security type, the security data is collected when the Search Index is built with PTSF_GENFEED. The security attributes are attached to each row of data sent to Elasticsearch.

If the index has security enabled, Elasticsearch needs to know what security the user running the query has access to. The orcl_es_acl plugin is responsible for managing the user security data inside Elasticsearch. If the user has run a query in the last two hours, the orcl_es_acl plugin uses the existing security data for the user stored in Elasticsearch.

In the Elasticsearch logs, you can see this in action:

[orclacl                  ] Document is secured for the type : ep_am_asset_psftdb
[orclacl                  ] Orcl ACL Plugin method: getAttrValFromCache

ES Performs Callback (if needed)

If the user has not run a query against the index recently, the plugin will perform a callback to PeopleSoft. The callback asks PeopleSoft what security values the user has and stores them in a special index: orcl_es_acl. To perform the callback, Elasticsearch creates a new cURL request to the IB URI /RESTListeningConnector/PSFTDB/getsecurityvalues.v1/. The callback URL and credentials are also stored in Elasticsearch under the orcl_es_acl index.

The request includes three parameters as well:

  • type=ep_am_asset_psftdb
  • user=PS
  • attribute=BU_SECURITY_SES

The type parameter is the name of the search index requesting the security data. The user is the name of the user running the search query. And the attribute is the security value configured on the Search Definition page.
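
You can simulate the callback by hand to see what Elasticsearch asks for. This sketch is illustrative: the gateway host is an example value, the three parameters combine into a single query string, and a real request also needs the credentials stored for the callback:

# Ask PeopleSoft which BU_SECURITY_SES values user PS has access to.
$uri = "http://ib.psadmin.io/PSIGW/RESTListeningConnector/PSFTDB/getsecurityvalues.v1/" +
       "?type=ep_am_asset_psftdb&user=PS&attribute=BU_SECURITY_SES"
Invoke-RestMethod -Uri $uri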

PS Runs the Callback Code

When the IB receives the callback request, it hands the request to the Application Package specified on the Search Definition. The App Package inspects the parameters to see what security attribute has been requested. The callback code then runs any appropriate code and SQL to build a list of Roles or Permission Lists assigned to the user.

PS Returns the ID and Security

The Permission Lists or Roles are assembled into an array, with each item prefixed with P: or R:. Once the array is assembled, the values are returned to Elasticsearch in a response message. For example, the response might contain an array like ["P:PTPT1000","P:PTPT1200","P:PTPT3100"], the Permission Lists assigned to that user.

ES Filters the Results

Elasticsearch has no concept of the security structures in PeopleSoft, so it uses the data returned from the callback to remove search results the user does not have access to. The callback array from above lists the Permission Lists assigned to a user. Elasticsearch will build a filter query to remove any rows from the search results that do not contain one of those Permission Lists.

Something to keep in mind: Elasticsearch can only filter the search results based on the Permission Lists stored in each search result. This means that Permission Lists are attached to each search result when the PTSF_GENFEED process is run. If you change your security dramatically (say, re-implement Business Unit security), you may need to run a full build to push your security changes into Elasticsearch.

You can view the permission lists attached to each document with the Elasticsearch API. In your browser, you can call the /_search REST method against any index URL. If you deployed the PTPORTALREGISTRY search definition with the default settings, you can call the URL http://elastic.psadmin.io:9200/ptportalregistry_psftdb_orcl_es_alias/_search?pretty=true to view the data in Elasticsearch. In each document, you will find the list of security attributes, like this: ["P:PTPT1000","S:Admin"].
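
From PowerShell, the same check looks like this, using the URL above:

# Fetch documents from the alias and inspect the first hit's source document.
$r = Invoke-RestMethod -Uri "http://elastic.psadmin.io:9200/ptportalregistry_psftdb_orcl_es_alias/_search?pretty=true"
$r.hits.hits[0]._source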

ES Returns the Results

After the filter query completes, Elasticsearch returns the search results as a JSON message to PeopleSoft. When the message is received, the results are parsed and displayed by the search results component. Depending on the application, you might see the generic search results page or an application-specific page.