psadmin-plus 2.0

About 2 years ago, Kyle released psadmin-plus, a bash utility to help automate and simplify domain management. We're happy to announce that psadmin-plus is now at 2.0!

psadmin-plus makes it easy to start, stop, reconfigure, purge cache, and more for your PeopleSoft domains:

psa stop app HDEV
psa start web HDEV
psa bounce all HDEV

Version 2.0 is all command-line driven and has a number of changes and improvements from the original release.

Ruby Gem

First, psadmin-plus was re-written as a Ruby Gem. The original version was a bash script for Linux. A little while later, a PowerShell version was created for Windows. We chose to re-write psadmin-plus in Ruby so it could be a cross-platform tool. A single code base makes it much easier to implement features for both platforms.

The second benefit of re-writing psadmin-plus as a Ruby Gem is that installation is much easier. This command is all you need to install psadmin-plus:

gem install psadmin_plus

When updates to psadmin-plus are released, you can upgrade with:

gem update psadmin_plus

You will need Ruby installed on your system for the gem and psadmin-plus commands to work. If you used the DPK to build your system, Ruby is already available. (For Windows, if you use the DPK's version of Ruby, the RubyGems SSL trusted certificate is out of date. Run these commands to update the RubyGems trusted certificate.)
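
If you want to confirm that Ruby and RubyGems are available before installing the gem, a quick version check from the command line will tell you:

ruby --version
gem --version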

Configuration File

psadmin-plus has a number of configuration options to customize its behavior on your system. You can set these options with environment variables, but version 2.0 also supports the use of a configuration file. Use the file ~/.psa.conf to set your configuration options. You can point psadmin-plus at a different file by setting PS_PSA_CONF=/path/to/conf.file. Here is a sample of what your .psa.conf file might look like:

PS_POOL_MGMT=on
PS_MULTI_HOMES=c:\psft\cfg
PS_WIN_SERVICES=web
PS_HEALTH_FILE=host.html

For a full list of configuration options for psadmin-plus, check out the README on the GitHub repository.

Windows Service Support

The first version of psadmin-plus for Windows started and stopped domains via psadmin. Many Windows admins prefer to manage their domains with Windows services. Setting the configuration option PS_WIN_SERVICES tells psadmin-plus to start and stop domains via the Start-Service and Stop-Service cmdlets. The PS_WIN_SERVICES setting has a number of options. You can specify all to start all domains with services, tux to start the Tuxedo domains with services, or one of web, app, or prcs if you want to call out a specific type.

If you start domains via the command line, but have Windows services defined, we want to keep the service status in sync. The PS_TRAIL_SERVICE option will do just that. If you enable that setting, psadmin-plus will start or stop the domain via psadmin and then start or stop the respective Windows service. This ensures that the Windows service status matches the status of your domain.

psadmin-plus assumes you are using the DPK service names, but you can override that if you used a different method for creating Windows services. The defaults are:

"Psft*Pia*#{domain}*"
"Psft*App*#{domain}*"
"Psft*Prcs*#{domain}*"

If you want to change those names, you can add these settings in your .psa.conf file:

WEB_SERVICE_NAME="#{domain}-pia"
APP_SERVICE_NAME="#{domain}-AppServer"
PRCS_SERVICE_NAME="#{domain}-Batch"

Fill out the value to match your naming convention, but you must include #{domain}. That is how psadmin-plus finds your service.
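
If you want to check which Windows services a pattern will match, you can test it with PowerShell's Get-Service cmdlet. Here is a quick sketch using the default DPK-style pattern for an HDEV app server; substitute your own pattern if you changed the names:

Get-Service "Psft*App*HDEV*"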

Multi-PS_CFG_HOME Support

One last feature we added to psadmin-plus is support for multiple PS_CFG_HOME folders. By default, psadmin-plus assumes you have one config home folder on the server for all your domains. If you have more than one config home, you can set the environment variable PS_MULTI_HOMES to the base folder where your config homes live. For example, if your config homes live under c:\psft\cfg, you can set PS_MULTI_HOMES=c:\psft\cfg, and psadmin-plus will look there for your config homes.

There is one assumption with multi-config-home support, though: the domain name must match the config home name. If your domain name is HDEV and your base config home is c:\psft\cfg, psadmin-plus will look in c:\psft\cfg\HDEV\ for your domains.

Shell Versions

If you prefer the older shell script version of psadmin-plus, don’t worry. We saved those versions into branches on the GitHub repository. You can still use the shell scripts, but we’ll only be adding new features to the RubyGems version of psadmin-plus.

#151 – psadmin_plus 2.0


This week on the podcast, Kyle shares a fix for deep links with the PIA and Dan discovers a bug with the Processing Indicator. Then Kyle and Dan talk about the new release of psadmin_plus and the improvements of version 2.0.

Show Notes

#150 – PIA URLs & IB Failover


This week on the podcast, Kyle discusses a workflow message tester and the thought process behind setting up IB failover. Dan explains what the ampersand at the end of a URL means for PIA URLs.

Show Notes

Using the DPK Redeploy Option for CPU Patching

There are lots of ways to apply CPU patches and to automate the process. We’ve covered a few methods on the blog, but I wanted to show another way with the DPK. This method combines using patched archives (discussed here) and an option in the DPK that isn’t really talked about: redeploying software.

Redeploy Option

In the DPK, there is an option to redeploy software that you have already installed. The redeploy option is supported for all the middleware, PS_HOME, and PS_APP_HOME. If you include

redeploy: true

in your psft_customizations.yaml file, the DPK will uninstall all the middleware, PS_HOME, and PS_APP_HOME (if your DPK role is an app role), and then reinstall them from your DPK archives folder. This is slick if you want to fix a bad install, or in our case, apply a new version of the software. We can replace the archive file for the JDK or WebLogic, trigger a redeploy by running Puppet, and the new version will be installed.

But, there are some downsides to how the DPK handles this. First, it’s all or nothing; you can redeploy all of the middleware or none of the middleware. If you redeploy, your PS_HOME (and PS_APP_HOME) will be redeployed and you’ll need to recompile your COBOL files (if you still use them). That’s too sledgehammer-y for me.

Also, we don’t always want to redeploy our software. We may only want to run a redeploy for new patches. To enable a redeploy, we have to update our configuration, run puppet, then remove our configuration change. I don’t like making transactional changes to my configuration files. Our configuration should be static.

A Better Redeploy Option

Given these downsides, I still like the redeploy option with the DPK. I made some changes to the DPK to make it work better. The changes had two goals:

  1. Don’t include transactional changes in the configuration files
  2. Offer per-software redeploy options

To meet the first goal, we’ll use Facter to control when we redeploy. Facter offers the ability to override facts by using environment variables. We can set a sane default for our fact (false), but when we want to run a redeploy we can set an environment variable to override it.

For the second goal, we will modify some of the DPK code to read new variables. The global redeploy option will stay, but there will be new per-software redeploy options.

My end state for applying CPU patches looks like this:

$env:FACTER_weblogic_redeploy="true"
puppet apply .\site.pp --confdir c:\psft\dpk\puppet

Puppet will run, redeploy only WebLogic from an updated archive file, and bring my system back up. The rest of the software (PS_HOME, Tuxedo, etc.) is left untouched.
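
On Linux the flow is the same, with the fact override passed as an environment variable on the puppet apply command. This is a sketch: the --confdir path below is an assumption, so point it at wherever your DPK puppet directory lives:

FACTER_weblogic_redeploy=true puppet apply site.pp --confdir /opt/oracle/psft/dpk/puppet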

DPK Modifications

The next section is a bit code heavy, but hang tight. This all fits together really well.

Custom Facts

We can include custom facts in the DPK to report whatever we want. In this case, we’ll create facts that report false for different software to redeploy. These go in the file modules/pt_role/lib/facter/redeploy.rb.

Facter.add(:jdk_redeploy) do
  setcode do
    'false'
  end
end

Facter.add(:weblogic_redeploy) do
  setcode do
    'false'
  end
end

Facter.add(:tuxedo_redeploy) do
  setcode do
    'false'
  end
end

We now have 3 custom facts:

  • ::jdk_redeploy
  • ::weblogic_redeploy
  • ::tuxedo_redeploy

If we want to set these values to true, we can set these environment variables:

  • FACTER_jdk_redeploy=true
  • FACTER_weblogic_redeploy=true
  • FACTER_tuxedo_redeploy=true

That’s all it takes to override these facts. (This isn’t true of all facts provided by Facter, but for our custom facts this works.)
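
You can sanity-check an override before running Puppet by asking Facter directly. This sketch assumes you run it from the directory that contains the DPK modules folder:

$env:FACTER_jdk_redeploy="true"
facter --custom-dir .\modules\pt_role\lib\facter jdk_redeploy
# returns true; without the environment variable set, it returns the default of false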

YAML Configuration

Next, we will read these facts into our configuration. Include this section at the root of your psft_customizations.yaml file.

jdk_redeploy:           "%{::jdk_redeploy}"
weblogic_redeploy:      "%{::weblogic_redeploy}"
tuxedo_redeploy:        "%{::tuxedo_redeploy}"

Puppet Changes

Now the changes get a little more involved. First, we need to read our new variables into the pt_tools_deployment.pp manifest. We’ll set false as the default value in case we forget to add the values to our psft_customizations.yaml. Then, we need to update the call to ::pt_setup::tools_deployment so our new configuration options are passed to the class. (It’s a good practice to only do the hiera lookups in the pt_profile manifests and not in the pt_setup manifests.) These changes are made in modules/pt_profile/manifests/pt_tools_deployment.pp.

  $jdk_redeploy           = hiera('jdk_redeploy', false)
  $weblogic_redeploy      = hiera('weblogic_redeploy', false)
  $tuxedo_redeploy        = hiera('tuxedo_redeploy', false)


  class { '::pt_setup::tools_deployment':
    ensure                 => $ensure,
    deploy_pshome_only     => $deploy_pshome_only,
    db_type                => $db_type,
    pshome_location        => $pshome_location,
    pshome_remove          => $pshome_remove,
    inventory_location     => $inventory_location,
    oracleclient_location  => $oracleclient_location,
    oracleclient_remove    => $oracleclient_remove,
    jdk_location           => $jdk_location,
    jdk_remove             => $jdk_remove,
    jdk_redeploy           => $jdk_redeploy,
    weblogic_location      => $weblogic_location,
    weblogic_remove        => $weblogic_remove,
    weblogic_redeploy      => $weblogic_redeploy,
    tuxedo_location        => $tuxedo_location,
    tuxedo_remove          => $tuxedo_remove,
    tuxedo_redeploy        => $tuxedo_redeploy,
    ohs_location           => $ohs_location,
    ohs_remove             => $ohs_remove,
    redeploy               => $redeploy,
  }

Next, we have to update the API for ::pt_setup::tools_deployment so it knows what values we are passing. These changes are made in modules/pt_setup/manifests/tools_deployment.pp.

class pt_setup::tools_deployment (
  $db_type                = undef,
  $pshome_location        = undef,
  $pshome_remove          = true,
  $inventory_location     = undef,
  $oracleclient_location  = undef,
  $oracleclient_remove    = true,
  $jdk_location           = undef,
  $jdk_remove             = true,
  $jdk_redeploy           = false,
  $weblogic_location      = undef,
  $weblogic_remove        = true,
  $weblogic_redeploy      = false,
  $tuxedo_location        = undef,
  $tuxedo_remove          = true,
  $tuxedo_redeploy        = false,
  $ohs_location           = undef,
  $ohs_remove             = true,
  $redeploy               = false,
)

Last, we have to add some logic to handle our new redeploy options. We’ll keep the main redeploy option, but for each software we’ll evaluate the software-specific redeploy option.

    if ($redeploy == true) or ($jdk_redeploy == true) {
      $java_redeploy = true
    } else {
      $java_redeploy = false
    }
    pt_deploy_jdk { $jdk_tag:
      ensure            => $ensure,
      deploy_user       => $tools_install_user,
      deploy_user_group => $tools_install_group,
      archive_file      => $jdk_archive_file,
      deploy_location   => $jdk_location,
      redeploy          => $java_redeploy,
      remove            => $jdk_remove,
      patch_list        => $jdk_patches_list,
    }

That’s the end of the changes. It’s a little more invasive than I’d like, but the end result is worth it. We can now tell Puppet to do software-specific redeploys. This lets us deploy new versions of the software by updating the archive file, setting an environment variable, and then running puppet apply.

#149 – Anatomy of a Search


This week on the podcast, Kyle gives some suggestions on how to make dynamic links to other PeopleSoft homepages and Dan dives into what happens when you perform a search with Elasticsearch.

Show Notes

#148 – Recent Places


This week on the podcast, Dan and Kyle talk about what happens when you add a node to the node network, tracing unified navigation, and how to increase the number of Recent Places stored in the system.

Show Notes

Elasticsearch: Anatomy of a Search

Search is becoming more important in PeopleSoft and is often required for applications to work properly. We’ve talked before about how to configure Elasticsearch with PeopleSoft, but what happens when we perform a search? There are a number of moving parts that handle the search query, apply security, and display the results. Let’s take a look into the anatomy of a search to understand the process.

Overview of the Search Flow

The user creates a search request. Generally, this happens when a user opens the Search Box and enters a term or terms to search on. But this process can also start from a component. A good example is the “Find Learning” page in ELM. When you open the component, it sends a query to Elasticsearch for data to populate the page.

Search Framework Builds the Query

The search request from the user needs to be parsed and formatted so that Elasticsearch understands the query. The app package PT_SEARCH has a number of methods that check for wildcards and partial words and build a valid JSON request for Elasticsearch.

IB Builds JSON Message

In the app package PTSF_ES, the JSON search query string is converted into an IB Message using the IB_GENERIC message. The app package also builds the HTTP headers for the transaction, such as the content-type: application/json and the Authorization header.

The Integration Broker POSTs the message against the alias for the Search Category. If you deployed the EP_ASSETS search in Finance for the database PSFTDB, the alias is ep_assets_psftdb_orcl_es_alias. When you search, you search against the Search Category instead of the individual indexes; the alias is the Elasticsearch mechanism that lets you search the category rather than each index.
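
You can mimic the POST that the Integration Broker sends with curl. This is a simplified sketch, not the exact JSON the Search Framework builds; the host name, password, and search term below are placeholders, so adjust them for your environment (the user is the search instance user covered in the next section):

curl -u esadmin:yourpassword -H "Content-Type: application/json" \
  -X POST "http://elastic.psadmin.io:9200/ep_assets_psftdb_orcl_es_alias/_search" \
  -d '{"query": {"query_string": {"query": "laptop"}}}'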

Elasticsearch Validates Security

A PeopleSoft plugin running in Elasticsearch verifies the security of the user request before running the query. The orcl_es_auth plugin will validate the Basic Authentication value in the HTTP POST from the IB. The user it is validating is esadmin – the user and password defined on the Search Instance page. In your Elasticsearch logs, you can see the validation happening:

[authentication           ] Authentication plugin : authenticate method
[authentication           ] Authentication type : Base64
[authentication           ] Authentication successful

ES Runs the Query

Next, Elasticsearch will run the query. In the Elasticsearch logs, you can see which indexes are used to perform the search:

[orclacl                  ] Indices/ aliases : [ ep_assets_psftdb_orcl_es_alias ]
[orclacl                  ] Index types retrieved : [ ep_am_asset_psftdb ]

In this example, the EP_ASSETS Search Category has only one index associated with it.

ES Checks the ACL Cache

After Elasticsearch has the search results for the query, the results are filtered based on PeopleSoft security. Search Definitions can implement different security types to lock down search data. There is User/Role security, as well as Document security. If a Search Definition has a security type, the security data is collected when the Search Index is built with PTSF_GENFEED. The security attributes are attached to each row of data sent to Elasticsearch.

If the index has security enabled, Elasticsearch needs to know what security the user running the query has access to. The orcl_es_acl plugin is responsible for managing the user security data inside Elasticsearch. If the user has run a query in the last two hours, the orcl_es_acl plugin uses the existing security data for the user stored in Elasticsearch.

In the Elasticsearch logs, you can see this in action:

[orclacl                  ] Document is secured for the type : ep_am_asset_psftdb
[orclacl                  ] Orcl ACL Plugin method: getAttrValFromCache

ES Performs Callback (if needed)

If the user has not run a query for the index before, the plugin will perform a callback to PeopleSoft. The callback asks PeopleSoft what security values the user has and stores them in a special index: orcl_es_acl. To perform the callback, Elasticsearch creates a new cURL request to the IB URI /RESTListeningConnector/PSFTDB/getsecurityvalues.v1/. The callback URL and security settings are stored in Elasticsearch under the orcl_es_acl index.

The request includes three parameters as well:

  • ?type=ep_am_asset_psftdb
  • ?user=PS
  • ?attribute=BU_SECURITY_SES

The type parameter is the name of the search index requesting the security data. The user is the name of the user running the search query. And the attribute is the security value configured on the Search Definition page.
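
Put together, the callback is an HTTP request that looks roughly like this. The gateway host, port, and PSIGW prefix are assumptions about a typical IB setup, and the real request from the plugin also carries the callback credentials defined on the Search Instance page:

curl "http://ibhost:8000/PSIGW/RESTListeningConnector/PSFTDB/getsecurityvalues.v1/?type=ep_am_asset_psftdb&user=PS&attribute=BU_SECURITY_SES"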

PS Runs the Callback Code

When the IB receives the callback request, it hands the request to the Application Package specified on the Search Definition. The App Package inspects the parameters to see what security attribute has been requested. The callback code then runs any appropriate code and SQL to build a list of Roles or Permission Lists assigned to the user.

PS Returns the ID and Security

The Permission Lists or Roles are assembled into an array, and each item is prefixed with P: or R:. Once the array is assembled, the values are returned to Elasticsearch in a response message. For example, the response might contain an array that looks like ["P:PTPT1000","P:PTPT1200","P:PTPT3100"] – the Permission Lists assigned to that user.

ES Filters the Results

Elasticsearch has no concept of the security structures in PeopleSoft, so it uses the data returned from the callback to remove search results the user does not have access to. The callback array from above lists the Permission Lists assigned to a user. Elasticsearch will build a filter query to remove any rows from the search results that do not contain one of those Permission Lists.

Something to keep in mind – Elasticsearch can only filter the search results based on the Permission Lists stored in each search result. This means that Permission Lists are attached to each search result when the PTSF_GENFEED process is run. If you change your security dramatically (say, re-implement Business Unit security), you may need to run a full build to push your security changes into Elasticsearch.

You can view the permission lists attached to each document with the Elasticsearch API. In your browser, you can call the /_search REST endpoint against any index or alias. If you deployed the PTPORTALREGISTRY search definition with the default settings, you can call the URL http://elastic.psadmin.io:9200/ptportalregistry_psftdb_orcl_es_alias/_search?pretty=true to view the data in Elasticsearch. In each document, you will find the list of security attributes, like this: ["P:PTPT1000","S:Admin"].
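
The same check works from the command line if you prefer curl over a browser; size=1 limits the response to a single document so the security attributes are easy to spot (add -u esadmin:yourpassword if your search instance prompts for authentication):

curl "http://elastic.psadmin.io:9200/ptportalregistry_psftdb_orcl_es_alias/_search?pretty=true&size=1"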

ES Returns the Results

After the filter query completes, Elasticsearch will return the search results as a JSON message back to PeopleSoft. When the message is received, the results are parsed and displayed by the search results component. Depending on the application, you might see the generic search results page, or an application specific page.

#147 – Federation and Clustered Apps


This week on the podcast, Dan and Kyle talk about running PeopleSoft applications in a cluster and with federated approvals and push notifications. We discuss some of the configuration needed for federation and our lessons learned.

Show Notes

  • Presence of IT Cloud White Paper @ 1:00
  • Fluid Home Button URL @ 5:30
  • Federated Push Notifications Bug @ 11:00
  • Clustered Application Concerns and Testing @ 17:00
  • Federated Approvals @ 21:45

#146 – Bad Migration Recommendations


This week on the podcast, Dan shares a fun tip to view hidden characters, and Kyle talks about Vim tabs and sessions. Then Kyle and Dan talk about change package behavior, some bad migration advice on Oracle Support, and what you should do instead.

Show Notes

#145 – Terraform and OCI Classic


This week on the podcast, Dan shares a Visual Studio Code plugin that edits files using SSHFS and some lessons learned from a recent PeopleTools 8.56 upgrade. Dan and Kyle also talk about using Terraform to build and create servers on Oracle Cloud Infrastructure Classic.

Show Notes