Announcing psadmin.conf 2019!


We are excited to announce the dates and location for psadmin.conf 2019! The conference will be held May 6-8, 2019, in Minneapolis, MN.

Visit the psadmin.conf page to learn more and listen to a special podcast about the 2019 conference.

Submit your presentation proposals now! Registration opens 12/15/2018.

psadmin-plus 2.0

About 2 years ago, Kyle released psadmin-plus, a bash utility to help automate and simplify domain management. We are happy to announce psadmin-plus is now at 2.0!

psadmin-plus makes it easy to start, stop, reconfigure, purge cache, and more for your PeopleSoft domains:

psa stop app HDEV
psa start web HDEV
psa bounce all HDEV

Version 2.0 is all command-line driven and has a number of changes and improvements from the original release.

Ruby Gem

First, psadmin-plus was re-written as a Ruby Gem. The original version was a bash script for Linux. A little while later, a PowerShell version was created for Windows. We chose to re-write psadmin-plus in Ruby so it could be a cross-platform tool. A single code base makes it much easier to implement features for both platforms.

The second benefit of re-writing psadmin-plus as a Ruby Gem is that installation is much easier. This command is all you need to install psadmin-plus:

gem install psadmin_plus

When updates to psadmin-plus are released, you can upgrade with:

gem update psadmin_plus

You will need Ruby installed on your system for the gem and psadmin-plus commands to work. If you used the DPK to build your system, Ruby is already available. (For Windows, if you use the DPK’s version of Ruby, the RubyGems SSL trusted certificate is out of date. Run these commands to update the RubyGems trusted certificate.)

Configuration File

psadmin-plus has a number of configuration options to customize the behavior of your system. You can set these options with environment variables, but version 2.0 also supports the use of a configuration file. By default, psadmin-plus reads the file ~/.psa.conf for your configuration options. You can change the file location by setting PS_PSA_CONF=/path/to/conf.file. Here is a sample of what your .psa.conf file might look like:

PS_POOL_MGMT=on
PS_MULTI_HOMES=c:\psft\cfg
PS_WIN_SERVICES=web
PS_HEALTH_FILE=host.html

For a full list of configuration options for psadmin-plus, check out the README on the GitHub repository.
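
If you are curious how a file like this gets consumed, here is a minimal Ruby sketch of reading a key=value configuration file, with environment variables taking precedence. This is only an illustration, not the actual psadmin_plus source:

# Illustrative sketch only -- not the psadmin_plus implementation.
conf_file = ENV.fetch('PS_PSA_CONF', File.join(Dir.home, '.psa.conf'))
settings = {}
if File.exist?(conf_file)
  File.readlines(conf_file).each do |line|
    key, value = line.strip.split('=', 2)
    settings[key] = value if key && value
  end
end
# Assumption for this sketch: environment variables win over the file.
pool_mgmt = ENV['PS_POOL_MGMT'] || settings['PS_POOL_MGMT'] || 'off'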

Windows Service Support

The first version of psadmin-plus for Windows started and stopped domains via psadmin. Many Windows admins prefer to start their domains with Windows services instead. Setting the configuration option PS_WIN_SERVICES tells psadmin-plus to start and stop domains via the Start-Service and Stop-Service cmdlets. The PS_WIN_SERVICES setting has a number of options. You can specify all to start all domains with services, tux to start the Tuxedo domains with services, or one of web, app, or prcs if you want to call out a specific type.

If you start domains via the command line, but have Windows services defined, we want to keep the service status in sync. The PS_TRAIL_SERVICE option will do just that. If you enable that setting, psadmin-plus will start or stop the domain via psadmin and then start or stop the respective Windows service. This ensures that the Windows service status matches the status of your domain.
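
Conceptually, the trail-service behavior looks like the following Ruby sketch (an illustration of the idea, not the actual gem code; the service name pattern is the DPK default covered below):

# Sketch of PS_TRAIL_SERVICE: start the domain via psadmin, then start the
# matching Windows service so the service status stays in sync.
domain = 'HDEV'
system("psadmin -c start -d #{domain}")
system(%Q(powershell -Command "Get-Service -Name 'Psft*App*#{domain}*' | Start-Service"))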

psadmin-plus assumes you are using the DPK service names, but you can override that if you used a different method for creating Windows services. The defaults are:

"Psft*Pia*#{domain}*"
"Psft*App*#{domain}*"
"Psft*Prcs*#{domain}*"

If you want to change those names, you can add these settings in your .psa.conf file:

WEB_SERVICE_NAME="#{domain}-pia"
APP_SERVICE_NAME="#{domain}-AppServer"
PRCS_SERVICE_NAME="#{domain}-Batch"

Fill out the value to match your naming convention, but you must include #{domain}. That is how psadmin-plus finds your service.
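
Behind the scenes, the #{domain} token is a placeholder that gets swapped for the real domain name at runtime. A quick Ruby sketch of that substitution (illustrative only):

# The literal "#{domain}" token from .psa.conf is replaced at runtime.
pattern = 'Psft*Pia*#{domain}*' # single quotes keep the token literal
service_name = pattern.gsub('#{domain}', 'HDEV')
# => "Psft*Pia*HDEV*"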

Multi-PS_CFG_HOME Support

One last feature we added to psadmin-plus is support for multiple PS_CFG_HOME folders. By default psadmin-plus assumes you have one config home folder on the server for all your domains. If you have more than one config home, you can set the environment variable PS_MULTI_HOMES to the base folder where your config homes live. For example, if c:\psft\cfg stored your config homes, you can set PS_MULTI_HOMES=c:\psft\cfg, and psadmin-plus will look under there for your config homes.

There is one assumption though with multi-config home support: the domain name matches the config home name. If your domain name is HDEV and your base config home is c:\psft\cfg, psadmin-plus will look in the path c:\psft\cfg\HDEV\ for your domains.
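
In code terms, the lookup is a simple path join (a sketch, assuming the example above):

# With PS_MULTI_HOMES set, the config home for a domain is <base>\<domain>.
base   = ENV['PS_MULTI_HOMES'] || 'c:\psft\cfg'
domain = 'HDEV'
ps_cfg_home = File.join(base, domain)
# => "c:\psft\cfg/HDEV" -- mixed separators are fine on Windows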

Shell Versions

If you prefer the older shell script versions of psadmin-plus, don’t worry. We saved those versions into branches on the GitHub repository. You can still use the shell scripts, but we’ll only be adding new features to the RubyGems version of psadmin-plus.

Using the DPK Redeploy Option for CPU Patching

There are lots of ways to apply CPU patches and to automate the process. We’ve covered a few methods on the blog, but I wanted to show another way with the DPK. This method combines using patched archives (discussed here) and an option in the DPK that isn’t talked about much: redeploying software.

Redeploy Option

In the DPK, there is an option to redeploy software that you have already installed. The redeploy option is supported for all the middleware, PS_HOME, and PS_APP_HOME. If you include

redeploy: true

in your psft_customizations.yaml file, the DPK will uninstall all the middleware, PS_HOME, and PS_APP_HOME (if your DPK role is an app role), and then reinstall those from your DPK archives folder. This is slick if you want to fix a bad install, or in our case, apply a new version of the software. We can replace the archive file for the JDK or WebLogic, trigger a redeploy by running Puppet, and the new version will be installed.

But, there are some downsides to how the DPK handles this. First, it’s all or nothing; you can redeploy all of the middleware or none of the middleware. If you redeploy, your PS_HOME (and PS_APP_HOME) will be redeployed and you’ll need to recompile your COBOL files (if you still use them). That’s too sledgehammer-y for me.

Also, we don’t always want to redeploy our software; we may only want to run a redeploy for new patches. To enable a redeploy, we have to update our configuration, run Puppet, and then remove our configuration change. I don’t like making transactional changes to my configuration files. Our configuration should be static.

A Better Redeploy Option

Despite these downsides, I still like the redeploy option in the DPK, so I made some changes to make it work better. The changes had two goals:

  1. Don’t include transactional changes in the configuration files
  2. Offer per-software redeploy options

To achieve the first goal, we’ll use Facter to control when we redeploy. Facter offers the ability to override facts by using environment variables. We can set a sane default for our fact (false), but when we want to run a redeploy, we can set an environment variable to change it.

For the second goal, we will modify some of the DPK code to read new variables. The global redeploy option will stay, but there will be new per-software redeploy options.

My end state for applying CPU patches looks like this:

$env:FACTER_weblogic_redeploy="true"
puppet apply .\site.pp --confdir c:\psft\dpk\puppet

Puppet will run, redeploy only WebLogic from an updated archive file, and bring my system back up. The rest of the software (PS_HOME, Tuxedo, etc.) is left untouched.

DPK Modifications

The next section is a bit code heavy, but hang tight. This all fits together really well.

Custom Facts

We can include custom facts in the DPK to report whatever we want. In this case, we’ll create facts that report false for different software to redeploy. These go in the file modules/pt_role/lib/facter/redeploy.rb.

Facter.add(:jdk_redeploy) do
  setcode do
    'false'
  end
end

Facter.add(:weblogic_redeploy) do
  setcode do
    'false'
  end
end

Facter.add(:tuxedo_redeploy) do
  setcode do
    'false'
  end
end

We now have 3 custom facts:

  • ::jdk_redeploy
  • ::weblogic_redeploy
  • ::tuxedo_redeploy

If we want to set these values to true, we can set these environment variables:

  • FACTER_jdk_redeploy=true
  • FACTER_weblogic_redeploy=true
  • FACTER_tuxedo_redeploy=true

That’s all it takes to override these facts. (This isn’t true of all facts provided by Facter, but for our custom facts this works.)
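
You can verify the override from the DPK’s own Ruby, since Facter turns any FACTER_-prefixed environment variable into a fact. A small demonstration (assuming the facter gem is available, as it is with the DPK’s Ruby):

require 'facter'

# Facter exposes FACTER_* environment variables as facts.
ENV['FACTER_weblogic_redeploy'] = 'true'
puts Facter.value(:weblogic_redeploy) # => "true"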

YAML Configuration

Next, we will read these facts into our configuration. Include this section at the root of your psft_customizations.yaml file.

jdk_redeploy:           "%{::jdk_redeploy}"
weblogic_redeploy:      "%{::weblogic_redeploy}"
tuxedo_redeploy:        "%{::tuxedo_redeploy}"

Puppet Changes

Now the changes get a little more involved. First, we need to read our new variables into the pt_tools_deployment.pp manifest. We’ll set false as the default value in case we forget to add the values to our psft_customizations.yaml. Then, we need to update the call to ::pt_setup::tools_deployment so our new configuration options are passed to the class. (It’s a good practice to only do the hiera lookups in the pt_profile manifests and not in the pt_setup manifests.) These changes are made in modules/pt_profile/manifests/pt_tools_deployment.pp.

  $jdk_redeploy           = hiera('jdk_redeploy', false)
  $weblogic_redeploy      = hiera('weblogic_redeploy', false)
  $tuxedo_redeploy        = hiera('tuxedo_redeploy', false)


  class { '::pt_setup::tools_deployment':
    ensure                 => $ensure,
    deploy_pshome_only     => $deploy_pshome_only,
    db_type                => $db_type,
    pshome_location        => $pshome_location,
    pshome_remove          => $pshome_remove,
    inventory_location     => $inventory_location,
    oracleclient_location  => $oracleclient_location,
    oracleclient_remove    => $oracleclient_remove,
    jdk_location           => $jdk_location,
    jdk_remove             => $jdk_remove,
    jdk_redeploy           => $jdk_redeploy,
    weblogic_location      => $weblogic_location,
    weblogic_remove        => $weblogic_remove,
    weblogic_redeploy      => $weblogic_redeploy,
    tuxedo_location        => $tuxedo_location,
    tuxedo_remove          => $tuxedo_remove,
    tuxedo_redeploy        => $tuxedo_redeploy,
    ohs_location           => $ohs_location,
    ohs_remove             => $ohs_remove,
    redeploy               => $redeploy,
  }

Next, we have to update the API for ::pt_setup::tools_deployment so it knows what values we are passing. These changes are made in modules/pt_setup/manifests/tools_deployment.pp.

class pt_setup::tools_deployment (
  $db_type                = undef,
  $pshome_location        = undef,
  $pshome_remove          = true,
  $inventory_location     = undef,
  $oracleclient_location  = undef,
  $oracleclient_remove    = true,
  $jdk_location           = undef,
  $jdk_remove             = true,
  $jdk_redeploy           = false,
  $weblogic_location      = undef,
  $weblogic_remove        = true,
  $weblogic_redeploy      = false,
  $tuxedo_location        = undef,
  $tuxedo_remove          = true,
  $tuxedo_redeploy        = false,
  $ohs_location           = undef,
  $ohs_remove             = true,
  $redeploy               = false,
)

Last, we have to add some logic to handle our new redeploy options. We’ll keep the main redeploy option, but for each piece of software we’ll also evaluate the software-specific redeploy option.

    if ($redeploy == true) or ($jdk_redeploy == true) {
      $java_redeploy = true
    } else {
      $java_redeploy = false
    }
    pt_deploy_jdk { $jdk_tag:
      ensure            => $ensure,
      deploy_user       => $tools_install_user,
      deploy_user_group => $tools_install_group,
      archive_file      => $jdk_archive_file,
      deploy_location   => $jdk_location,
      redeploy          => $java_redeploy,
      remove            => $jdk_remove,
      patch_list        => $jdk_patches_list,
    }

That’s the end of the changes. It’s a little more invasive than I’d like, but the end result is worth it. We can now tell Puppet to do software-specific redeploys. This lets us deploy new versions of the software by updating the archive file, setting an environment variable, and then running puppet apply.

Elasticsearch: Anatomy of a Search

Search is becoming more important in PeopleSoft and is often required for applications to work properly. We’ve talked before about how to configure Elasticsearch with PeopleSoft, but what happens when we perform a search? There are a number of moving parts that handle the search query, apply security, and display the results. Let’s take a look into the anatomy of a search to understand the process.

Overview of the Search Flow

The user creates a search request. Generally, this happens when a user opens the Search Box and enters a term or terms to search on. But this process can also start from a component. A good example is the “Find Learning” page in ELM. When you open the component, it sends a query to Elasticsearch for data to populate the page.

Search Framework Builds the Query

The search request from the user needs to be parsed and formatted so that Elasticsearch understands the query. The app package PT_SEARCH has a number of methods that check for wildcards and partial words, and create a valid JSON request for Elasticsearch.

IB Builds JSON Message

In the app package PTSF_ES, the JSON search query string is converted into an IB Message using the IB_GENERIC message. The app package also builds the HTTP headers for the transaction, such as Content-Type: application/json and the Authorization header.

The Integration Broker POSTs the message to the alias for the Search Category. If you deployed the EP_ASSETS search in Finance for the database PSFTDB, the alias is ep_assets_psftdb_orcl_es_alias. When you search, you search against the Search Category instead of the individual indices; the alias is the Elasticsearch mechanism that lets you search against the category instead of each index.
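
The request body itself is standard Elasticsearch query DSL. As a rough illustration (not the exact JSON that PT_SEARCH generates, and the search term here is made up), the body POSTed to the alias might be built like this:

require 'json'

# Illustrative ES 2.x query body -- not the exact shape PeopleSoft builds.
body = {
  query: {
    query_string: { query: 'laptop*' }
  }
}
puts JSON.pretty_generate(body)
# POSTed to http://es-server:9200/ep_assets_psftdb_orcl_es_alias/_search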

Elasticsearch Validates Security

A PeopleSoft plugin running in Elasticsearch verifies the security of the user request before running the query. The orcl_es_auth plugin validates the Basic Authentication value in the HTTP POST from the IB. The user it validates is esadmin, the user and password defined on the Search Instance page. In your Elasticsearch logs, you can see the validation happening:

[authentication           ] Authentication plugin : authenticate method
[authentication           ] Authentication type : Base64
[authentication           ] Authentication successful

ES Runs the Query

Next, Elasticsearch will run the query. In the Elasticsearch logs, you can see which indexes are used to perform the search:

[orclacl                  ] Indices/ aliases : [ ep_assets_psftdb_orcl_es_alias ]
[orclacl                  ] Index types retrieved : [ ep_am_asset_psftdb ]

In this example, the EP_ASSETS Search Category has only one index associated with it.

ES Checks the ACL Cache

After Elasticsearch has the search results for the query, the results are filtered based on PeopleSoft security. Search Definitions can implement different security types to lock down search data. There is User/Role security, as well as Document security. If a Search Definition has a security type, the security data is collected when the Search Index is built with PTSF_GENFEED. The security attributes are attached to each row of data sent to Elasticsearch.

If the index has security enabled, Elasticsearch needs to know what security the user running the query has access to. The orcl_es_acl plugin is responsible for managing the user security data inside Elasticsearch. If the user has run a query in the last two hours, the orcl_es_acl plugin uses the existing security data for the user stored in Elasticsearch.

In the Elasticsearch logs, you can see this in action:

[orclacl                  ] Document is secured for the type : ep_am_asset_psftdb
[orclacl                  ] Orcl ACL Plugin method: getAttrValFromCache

ES Performs Callback (if needed)

If the user has not run a query for the index before, the plugin will perform a callback to PeopleSoft. The callback asks PeopleSoft what security values the user has and stores them in a special index: orcl_es_acl. To perform the callback, Elasticsearch creates a new cURL request to the IB URI /RESTListeningConnector/PSFTDB/getsecurityvalues.v1/. The callback URL and security settings are stored in Elasticsearch under the orcl_es_acl index.

The request includes three parameters as well:

  • type=ep_am_asset_psftdb
  • user=PS
  • attribute=BU_SECURITY_SES

The type parameter is the name of the search index requesting the security data. The user is the name of the user running the search query. And the attribute is the security value configured on the Search Definition page.
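
Putting the URI and parameters together, the callback is a plain REST call. Here is a hypothetical Ruby reconstruction of the request (the IB host and port are assumptions):

require 'net/http'
require 'uri'

# Hypothetical reconstruction of the callback Elasticsearch sends to the IB.
uri = URI('http://ib-server:8000/RESTListeningConnector/PSFTDB/getsecurityvalues.v1/')
uri.query = URI.encode_www_form(type: 'ep_am_asset_psftdb',
                                user: 'PS',
                                attribute: 'BU_SECURITY_SES')
response = Net::HTTP.get_response(uri)
puts response.body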

PS Runs the Callback Code

When the IB receives the callback request, it hands the request to the Application Package specified on the Search Definition. The App Package inspects the parameters to see what security attribute has been requested. The callback code then runs any appropriate code and SQL to build a list of Roles or Permission Lists assigned to the user.

PS Returns the ID and Security

The Permission Lists or Roles are assembled into an array, with P: or R: prepended to each item. Once the array is assembled, the values are returned to Elasticsearch in a response message. For example, the response might contain an array that looks like ["P:PTPT1000","P:PTPT1200","P:PTPT3100"]: the Permission Lists assigned to that user.
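
The assembly itself is simple. The callback code is PeopleCode, but the equivalent logic in Ruby terms would be:

# Analogy only: prefix each Permission List with "P:" (Roles would get "R:").
permission_lists = %w[PTPT1000 PTPT1200 PTPT3100]
security_values  = permission_lists.map { |pl| "P:#{pl}" }
# => ["P:PTPT1000", "P:PTPT1200", "P:PTPT3100"]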

ES Filters the Results

Elasticsearch has no concept of the security structures in PeopleSoft, so it uses the data returned from the callback to remove search results that the user has no access to. The callback array from above lists the Permission Lists assigned to a user. Elasticsearch will build a filter query to remove any rows from the search results that do not contain one of those Permission Lists.
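
As an illustration of the concept, an Elasticsearch 2.x filtered query built in Ruby might look like the sketch below. The security field name is hypothetical; the real attribute is defined by the Search Definition:

require 'json'

# Hypothetical filter: keep only documents whose security values overlap
# the user's Permission Lists. The field name is an assumption.
filtered_query = {
  query: {
    filtered: {
      query:  { query_string: { query: 'laptop*' } },
      filter: { terms: { security_attributes: ['P:PTPT1000', 'P:PTPT1200', 'P:PTPT3100'] } }
    }
  }
}
puts JSON.generate(filtered_query)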

Something to keep in mind: Elasticsearch can only filter the search results based on the Permission Lists stored in each search result. This means that Permission Lists are attached to each search result when the PTSF_GENFEED process is run. If you change your security dramatically (say, re-implement Business Unit Security), you may need to run a full index build to push your security changes into Elasticsearch.

You can view the permission lists attached to each document with the Elasticsearch API. In your browser, you can call the /_search REST method against any index or alias. If you deployed the PTPORTALREGISTRY search definition with the default settings, you can call the URL http://elastic.psadmin.io:9200/ptportalregistry_psftdb_orcl_es_alias/_search?pretty=true to view the data in Elasticsearch. In each document, you will find the list of security attributes like this: ["P:PTPT1000","S:Admin"].

ES Returns the Results

After the filter query completes, Elasticsearch will return the search results as a JSON message back to PeopleSoft. When the message is received, the results are parsed and displayed by the search results component. Depending on the application, you might see the generic search results page, or an application specific page.

Improving Windows Services from the DPK

A common theme we write about on the blog is how to make the DPK work with multiple environments on the same machine. It’s common to run DEV and TST on the same server. The DPK can build those environments, but there are a few changes to make the setup run well. On Windows, the services the DPK creates make an assumption that breaks when we run multiple environments.

When starting a domain via a Windows service, the service assumes that the environment variables are set for that environment. If you create your DEV environment via the DPK, that’s a good assumption. But if you then create a TST environment, the environment variables are reset to TST. When you attempt to start the DEV domain via its Windows service, the domain start will fail.

To resolve this, we can improve the Ruby scripts that start our domains. Under the ps_cfg_home\appserv\DOMAIN folder, there are Ruby scripts that are called by the Windows service. For the app server, it’s appserver_win_service.rb. These scripts look for the PS_CFG_HOME environment variable and start the domains they find under that home. We can add a line in the file to point to the correct PS_CFG_HOME location, like this:

ENV["PS_CFG_HOME"] = "c:\\psft\\cfg\\DEV"

While we can modify the file directly, the DPK way of handling this is to update the template in the DPK. Then, whenever we rebuild our domains the code change is automatically included.

The Ruby scripts to start/stop domains are templates in the DPK. The templates are stored under peoplesoft_base\dpk\puppet\modules\pt_config\files\pt_appserver\appserver_win_service.erb (replace pt_appserver with pt_prcs or pt_pia for the batch and PIA services.)

To make the environment variables we add dynamic, we can reference variables that exist in the Ruby environment that calls the ERB template. In the program appserver_domain_boot.rb, the variables ps_home and ps_cfg_home are set. We will use those variables to build our environment variables.

ENV["PS_HOME"] = "<%= ps_home %>"
ENV["PS_CFG_HOME"] = "<%= ps_cfg_home %>"
system("<%= ps_home %>/appserv/psadmin -c start -d <%= domain_name %>")

The <%= %> tags will output the value of that command or variable. So in our case, we are outputting the string value of ps_cfg_home.
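
You can see that substitution in isolation with a few lines of Ruby:

require 'erb'

# Standalone demo of ERB substitution, outside the DPK.
ps_cfg_home = 'c:\psft\cfg\DEV'
template = ERB.new('ENV["PS_CFG_HOME"] = "<%= ps_cfg_home %>"')
puts template.result(binding)
# => ENV["PS_CFG_HOME"] = "c:\psft\cfg\DEV"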

The result of this file will look like this:

ENV["PS_HOME"] = "c:\\psft\\pt\\ps_home8.56.08"
ENV["PS_CFG_HOME"] = "c:\\psft\\cfg\\DEV"
system("c:\\psft\\pt\\ps_home8.56.08\\appserv\\psadmin -c start -d DEV")

When we run Puppet the next time, our Windows service will have its environment variables set before starting or stopping a domain.

Improve the Management of DPK Archives

In Episode 127 of the podcast, Kyle and I discuss some strategies for managing the DPK archive files. The discussion started because I am starting a PeopleTools 8.56 upgrade. To upgrade each server, we copy the new DPK archive files to every server and run the DPK. Having a copy of those archives on every server is redundant and not the best solution.

A better solution is to store the DPK archives in a centralized place. But we run into a problem with the DPK when the archive folder has multiple versions of the same archive. For example, if we have two versions of the WebLogic archive, the DPK will return an error because it doesn’t know which one to deploy.

This error is generated from the function get_matched_file(). get_matched_file() is a custom DPK function that looks at the DPK Archive location and returns a filename based on a tag. The tags are string values like weblogic, pshome, and jdk. The DPK looks for all the file names that match the tag. If there is more than one file, the function returns an error.
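
To make the problem concrete, here is a rough Ruby sketch of what a tag-based lookup like get_matched_file() does. This illustrates the behavior, but it is not the DPK’s actual code:

# Rough sketch of tag-based archive matching -- not the DPK's real function.
def find_archive(archive_location, tag)
  matches = Dir.glob(File.join(archive_location, "*#{tag}*.tgz"))
  raise "Found more than one #{tag} archive in #{archive_location}" if matches.length > 1
  matches.first
end

find_archive('c:\psft\dpk\archives', 'weblogic')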

Specifying Archive Files

Instead of letting the DPK search a directory for matching archive files, we could explicitly define which archive files we want to deploy. While this isn’t as dynamic, the method gives us more control over which version of each piece of software we deploy. It can also solve the issue with centralizing all our DPK archives.

To use this method, we have to make two changes to the DPK:

  • Create a new hash to store the archives we want to deploy
  • Update the Puppet code to read from the new hash

Hash to Store Archive Data

In the psft_customizations.yaml file, we create a new archive_files: hash. This hash is where we can explicitly define which archive files we want to use.

archive_files:
  weblogic:             "%{hiera('archive_location')}/pt-weblogic12.2.1.2.20180116.tgz"
  tuxedo:               "%{hiera('archive_location')}/pt-tuxedo12.2.2.0.tgz"
  jdk:                  "%{hiera('archive_location')}/pt-jdk1.8.0_144.tgz"
  pshome:               "%{hiera('archive_location')}/pt-pshome8.56.06.tgz"
  oracleclient:         "%{hiera('archive_location')}/pt-oracleclient12.1.0.2.tgz"
  psapphome:            "%{hiera('archive_location')}/fscm-psapphome027.tgz"

The keys in the hash are the DPK tags for each piece of software. This lets us leverage the built-in tagging system in the DPK. If you want to take it a step further, you can define the hash with additional variables for each version. For example, in a common.yaml file, configure the archive_files: hash like this:

archive_files:
  jdk:                  "%{hiera('archive_location')}/pt-jdk%{hiera('jdk_version')}.tgz"
  pshome:               "%{hiera('archive_location')}/pt-pshome%{hiera('tools_version')}.tgz"

Then, in your psft_customizations.yaml file you can specify:

jdk_version:    1.8.0_144
tools_version:  8.56.06

Update Puppet to Read the Hash

Next, in the tools_deployment.pp file (under dpk/puppet/modules/pt_setup/manifests) we need to modify the code to use our archive_files: hash. In the delivered code, the DPK archive location and software tag are passed to the Ruby function get_matched_file(). What we will do instead is check if the archive_files: hash is defined. If it is, we will use the path from the hash instead of calling get_matched_file(). If the archive_files: hash does not have the tag defined (like pshome), the get_matched_file() function is called.

Here is the code for deploying WebLogic:

$archive_files = hiera('archive_files', '')

if $archive_files {
  $weblogic_archive_file   = $archive_files[$weblogic_tag]
} 
if $weblogic_archive_file == '' {
  $weblogic_archive_file = get_matched_file($tools_archive_location, $weblogic_tag)
}
if $weblogic_archive_file == '' {
  fail("Unable to locate archive (tgz) file for Weblogic in ${tools_archive_location}")
}

You can copy this code for the rest of the archive types in tools_deployment.pp and app_deployment.pp. For DPK tags that you do not specify, the DPK will revert to the original behavior.

I like this setup better. We get more control over which DPK archives we deploy, we can store multiple archives in a central location, and we retain the delivered behavior as the default.

psadmin.conf 2018 Registration is Open

Registration for psadmin.conf 2018 is open!

Register for psadmin.conf

The Call for Presentations closes on February 15, 2018. We are still accepting proposals and would love to have you present!

Head over to psadmin.io/conference to see the agenda and learn more. If you want to re-watch the excellent presentations from psadmin.conf 2017, click here to view them all.

See you in May!

Change Assistant 8.56 Installation Changes

Starting in Change Assistant 8.56, a new file is used to track Change Assistant installations: C:\Windows\PeoplesoftCA.txt. While Change Assistant 8.55 let us install multiple versions at the same time, 8.56 now tracks each Change Assistant installation in this file.

C:\Windows\PeoplesoftCA.txt

1 8.56.05  c:\psft\ca\hr025-8.56.05
2 8.56.05  c:\psft\ca\hrtst-8.56.05

The Change Assistant installer scripts set the path for the file. On Windows, it uses the WINDIR environment variable, and on Linux it uses a temporary folder.

if isWin() :
  winpath = os.environ['WINDIR']
else :
  winpath = tempfile.gettempdir()

The C:\Windows location doesn’t seem like the best place to store this file though. C:\Windows is for Windows files, not for vendor configuration. I’d like to see this stored in a better location, like C:\ProgramData\Peoplesoft.

If you run into errors when installing Change Assistant, especially in early releases of 8.56, try deleting the PeoplesoftCA.txt file. Earlier versions of Change Assistant used this format of the file (this is from Change Assistant 8.56.03):

INSTALL_PATH=c:\psft\ca\hr024-8.56.03

There are multiple ways to install Change Assistant with 8.56.

  1. tools_client\setupPTClient.bat on the PeopleSoft Image
  2. Use the setup.bat file under PS_HOME\setup\PsCA
  3. Silent Install option:

    .\silentInstall.bat c:\psft\ca\hr025-8.56.05 NEW NOBACKUP
    .\silentInstall.bat c:\psft\ca\hcmpum UPGRADE NOBACKUP
    

A few changes to the installation process:

  1. Installing an older version of Change Assistant will error if the PeoplesoftCA.txt has a newer version in it.
  2. The PeopleTools Installation Guide for 8.56 references the installer as setup\PsCA\installCA.exe, but in the latest 8.56.05 client tools that file does not exist.
  3. Instead, there is a setup.bat script to perform the installation.
  4. At OpenWorld, we heard that 8.56 included Change Assistant for Linux but you had to look for it. Now we have a setup.sh script for Change Assistant on Linux.
  5. The installation process is powered by Python, not an installation wizard. This follows suit with the DPK changes and supports multiple platforms.

Using ElasticHQ to Monitor Elasticsearch

There are a variety of ways to monitor Elasticsearch with PeopleSoft: you can use the Health Center, use the Elasticsearch Interact page, or make API calls. But my favorite tool for keeping an eye on Elasticsearch is ElasticHQ. ElasticHQ is an open source plugin that you can easily install into Elasticsearch.

There are two ways to install ElasticHQ:

  1. Install as Plugin
  2. Download .zip

If your Elasticsearch server has access to download from the internet, you can use this command to install the ElasticHQ plugin:

PS C:\psft\elastic\pt\es2.3.3\bin> .\plugin install royrusso/elasticsearch-HQ/v2.0.2

-> Installing royrusso/elasticsearch-HQ/v2.0.2...
https://oss.sonatype.org/service/local/repositories/releases/content/royrusso/elasticsearch-HQ/v2.0.2/elasticsearch-HQ-v2.0.2.zip
Trying https://github.com/royrusso/elasticsearch-HQ/archive/v2.0.2.zip 
Downloading ..........................................................
..................................................................DONE
Verifying https://github.com/royrusso/elasticsearch-HQ/archive/v2.0.2.zip checksums if available ...
Installed hq into C:\psft\elastic\pt\es2.3.2\plugins\hq

You need to specify version 2.0.2 for PeopleSoft Elasticsearch. The latest version of the ElasticHQ plugin is designed for newer versions of Elasticsearch, not the version that ships with PeopleSoft.

If your server cannot download from the internet, you can download the .zip file, extract it, and copy it to base/pt/es2.3.2/plugins.

To access ElasticHQ,

  1. Open a web browser and go to the page http://server:9200/_plugin/hq
  2. Log in with the esadmin user
  3. Click the “Connect” button to get your Elasticsearch status

If you have an Elasticsearch cluster, ElasticHQ will show you statistics about every node in the cluster.