Extending psadmin with psadmin-plus

I have created a helper menu script to extend the delivered psadmin program. The script is called psadmin-plus and I have created a repository for it on psadmin.io’s GitHub account. This was built as a self-study side project while I was on paternity leave this summer. I wanted to learn a little more about bash scripting and how to use git, and at the same time try to make a useful tool for myself and others to use. As of this writing, the tool is usable but far from complete. At the moment it only supports Linux. I hope to make improvements over time and would invite others to submit issues on GitHub for questions, bugs, or enhancement ideas. If anyone wants to contribute themselves, that would be great too!

There are two main uses for psadmin-plus. The first is actually calling the delivered psadmin program. The value add here is that it will auto-discover all your PS_CFG_HOME directories for you and source environment variables as needed. This all assumes you follow a few conventions, which should be documented in the GitHub readme or wiki pages. As mentioned in a previous blog post, this is useful if you use a single user to run your PeopleSoft environments. If you have a different user for each environment and source at login, then this feature doesn’t really help.
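As a rough illustration of the convention-based discovery (this is not the actual psadmin-plus code, and /opt/pscfg is only an assumed base directory), the idea boils down to a small loop like this:

#!/bin/bash
# Illustrative sketch only - not the real psadmin-plus implementation.
# Assumption: every PS_CFG_HOME lives under a single base directory.
CFG_BASE=/opt/pscfg

for home in "$CFG_BASE"/*/; do
  # Treat any directory containing an appserv folder as a PS_CFG_HOME
  if [ -d "${home}appserv" ]; then
    echo "Found PS_CFG_HOME: ${home%/}"
  fi
done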

The second use is executing actions against multiple PS_CFG_HOMEs and domains at once. An example would be stopping all Process Scheduler domains on a machine. With this tool, you can do this with a few keystrokes. You also have the option to execute now or later. If you select later, a script will be generated to a file, which lets you run it at a future time, maybe during a maintenance window. Again, there are a few assumed conventions that must be followed.
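To give a sense of the “execute later” option, a generated stop script might look something like the sketch below (my own illustration, not the tool’s actual output). It uses the standard psadmin -p stop -d command, with made-up home and domain names:

#!/bin/bash
# Illustration of a generated "run later" script - not actual psadmin-plus output.
# Assumes the required environment variables (PS_HOME, etc.) have already been sourced.
export PS_CFG_HOME=/opt/pscfg/fdev    # hypothetical config home
psadmin -p stop -d PSUNX              # stop the Process Scheduler domain PSUNX

export PS_CFG_HOME=/opt/pscfg/ftst    # hypothetical second config home
psadmin -p stop -d PSUNX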

If you want to try it out for yourself, I have created a setup script to run against a PeopleSoft Image (VBox or Linux DPK install only). This will create a few extra PS_CFG_HOMEs and domains for you to play with in the menu. You can find instructions in the GitHub readme.

Below is a quick demo of psadmin-plus in use. For more information please see GitHub.

[psadmin-plus demo animation]

Managing Environment Variables When Using Decoupled Homes

As a reader of this blog or a listener of the podcast, you know I am a user of both Linux and decoupled homes. Traditionally with a Linux PeopleSoft installation you need to source the delivered psconfig.sh to set your environment variables. When an entire environment was contained under its own PS_HOME, you could tweak this psconfig.sh file if customizations were needed without fear of impacting other environments. Now with decoupled homes, the PS_HOME directory will likely be shared, so changing the psconfig.sh file located there is a bad idea.

When switching to decoupled homes, I was looking for a good way to manage sourcing the psconfig.sh file and the different environment variables. While attending Alliance 2015, I saw a presentation given by Eric Bolinger from the University of Colorado. He was talking about their approach to decoupled homes and he had some really good ideas. The approach I currently use is mostly based on these ideas, with a few tweaks. The main difference is that he has a different Linux user account specific to each environment. With this approach, he is able to store the environment-specific configuration file in the user’s home directory and source it at login time. This is similar to the approach Oracle suggests and uses with their PIs (see user psadm2). My organization didn’t go down the road of multiple users to run PeopleSoft. Instead, we have a single user that owns all the environments and we source our environment-specific configuration file before we start psadmin. We use a psadmin wrapper script to help with this sourcing (which I will discuss and share in a future post). The main thing to keep in mind is that regardless of how these files are sourced, the same basic approach can still be used.

The idea here is to keep as much delivered and common configuration in psconfig.sh as possible and keep environment-specific customizations in their own separate files. I like to keep these config files in a centralized location that each server has access to via an NFS mount. I usually refer to this directory as $PSCONFIGS_DIR. What I do is copy the delivered psconfig.sh file to $PSCONFIGS_DIR and rename it psconfig.common.sh. I then remove any configurations that I know I will always want to set in our custom environment-specific file, mainly PS_HOME. I then add any needed configuration that I know will be common across all environments. (Another approach would be to create a new psconfig.common.sh file from scratch, set a few variables, and then just source the delivered file with cd $PS_HOME && . psconfig.sh. Either way works, but I like the cloning approach.) This common file will be called at the end of every environment-specific file. Remember to take care when making any changes to this file, as it will impact any environment calling it. It is also a good idea to review this file when patching or upgrading your tools.
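As a rough sketch of that “from scratch” variant (the Tuxedo path below is a placeholder, not my actual value), psconfig.common.sh can be as small as this:

# psconfig.common.sh - minimal "from scratch" variant (illustrative only)
# Common settings shared by every environment go here, for example:
export TUXDIR=/opt/oracle/psft/pt/bea/tuxedo   # hypothetical Tuxedo home

# Finish by sourcing the delivered psconfig.sh from the shared PS_HOME,
# which the environment-specific file has already set.
cd $PS_HOME && . psconfig.sh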

Next, for the environment-specific files, I create a new file called psconfig.[env].sh, with the environment name in the filename. An example would be psconfig.fdev.sh. You could really choose any name for this, but I found this approach to be handy. In this file you will set the environment-specific variables as needed, then end by sourcing psconfig.common.sh. Here is an example file:
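The paths below are placeholders rather than my actual values, and the sketch assumes $PSCONFIGS_DIR (the NFS-mounted config directory) is already set:

# psconfig.fdev.sh - environment-specific settings for FDEV (placeholder paths)
export PS_HOME=/opt/psoft/pt/pt-8.55.03          # shared PeopleTools home
export PS_APP_HOME=/opt/psoft/app/hr-9.2         # shared application home
export PS_CUST_HOME=/opt/psoft/cust/fdev         # customization home
export PS_CFG_HOME=/opt/pscfg/fdev               # config home for this environment
export ORACLE_HOME=/opt/oracle/product/12.1.0/client64   # database client home

# Always end by sourcing the common file
. $PSCONFIGS_DIR/psconfig.common.sh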

This approach allows you to be a little more nimble when patching or upgrading. You can install new homes or middleware, then update the psconfig.[env].sh file and build new domains. When you get to go-live for Production, you can have the domains all built ahead of time. When ready, just update the config file, upgrade the database, and you are good to go!

One final note, regarding directory naming conventions. My organization tends to always have our PS_CFG_HOME directory match the environment or database name exactly, i.e. fdev. I’m considering changing this, however. During our last Tools patching project, I found it a little awkward to prebuild the domains and still end up with the same directory name. It seems to make much more sense to include the PeopleTools version in the directory name. That way you can prebuild the domains in a new PS_CFG_HOME, and when you go live just blow the old home away. Another great idea I took away from Eric’s presentation was how to dynamically generate a PS_CFG_HOME directory name:

export PS_CFG_HOME=/opt/pscfg/$ENV-`$PS_HOME/bin/psadmin -v | awk '{print $2}'`

If you use this technique, you will want this to be the last line in your config file – after sourcing the common file. What this does is concatenate your environment name with the PeopleTools version, taken from the psadmin version command – e.g. fdev-8.55.03. This gives you more clarity on which Tools version the domains under a given PS_CFG_HOME were built with, and it makes it easier to prebuild your domains.

#1 – Decoupled Homes

I’m really happy to announce a new project we are launching today:

The PeopleSoft Administrator Podcast

Kyle Benson and I sit down and talk about all things PeopleSoft Admin. Kyle and I would meet for lunch every few weeks to talk about our latest issues at work and the latest features in PeopleTools. After a year of this, we thought it would be fun to record our conversations and share them.


In episode 1 of the PeopleSoft Administrator Podcast, we talk about decoupled homes and how to use them. We discuss the advantages of using them, how we manage each home, and why you should look at using decoupled homes.

We want to make this podcast part of the community discussion on PeopleSoft administration. If you have comments, feedback, or topics you’d like us to talk about, we want to hear from you! You can email us at podcast@psadmin.io, tweet us at @psa_io, or use the Twitter hashtag #psadminpodcast.

You can listen to the podcast here on psadmin.io, subscribe with your favorite podcast player using the URL below, or subscribe in iTunes.

Podcast RSS Feed

Links from this episode:

All About COBOL

Time for everyone’s favorite language: COBOL! Well, it’s not my favorite, and probably not your favorite, but it is important to PeopleSoft. Many core programs in HR use COBOL and will most likely stay COBOL for a while. Those COBOL programs are stable, fast, and once you compile you rarely have to touch them. Because COBOL is important to PeopleSoft, let’s talk about setting up the compiler, runtime, and learn how to compile COBOL programs.

Installation and Configuration

Compiler

PeopleSoft delivers COBOL source files with some of the applications, but you need to install the COBOL compiler separately. Oracle will give you a license for MicroFocus, so you don’t need to buy your own; Oracle Support has a nice article on downloading MicroFocus and acquiring a license. As a PeopleSoft customer, you get a MicroFocus license included with the product to compile COBOL programs. Keep in mind, you can only use the compiler for PeopleSoft programs!

Runtime

After you install the compiler, you will need to install the Runtime License on any server where your compiled COBOL will run: process schedulers and application servers. To install the runtime, open a command prompt, navigate to the folder that contains setupMF.exe, and run these commands:

  1. setupMF.exe e:\psoft\psft-mf-nx-as-license to install the license folder
  2. Change to e:\psoft\pt-85x.xx (your path to PS_HOME)
  3. MFLMWin.exe -i to install the License Manager service
  4. For Windows, you may need to change the security for MFLMWin.exe. Right-click on that file, select the Compatibility tab, and check the box for “Run this program as an administrator”
  5. In the Services panel, verify the “MicroFocus License Manager” service has started and is set to start Automatically

Database Clients

Are you on PeopleTools 8.53 or lower? You’ll still need a 32-bit database client for COBOL. That means installing the 32-bit client on your process scheduler and app server (for the remote call COBOL programs).

Domain Configuration

If you need the 32-bit client on your process schedulers and app server, you’ll need to make sure the domains know where to find it. The simplest solution is to add the 32-bit client to the PATH in the psappsrv.cfg and psprcs.cfg files. I added the client path to the beginning of the PATH variable. In both the psappsrv.cfg and psprcs.cfg files, change this setting to your 32-bit client path:

Add to PATH=e:\oracle\product\12.1.0\client_32;[existing entries]

Compiling COBOL

PeopleSoft delivers scripts to simplify the compilation process. The scripts know about the decoupled homes and can compile all homes, or just your customizations. Under the PS_HOME\setup folder you’ll find the main script, cblbld.bat. Here is the basic usage for cblbld.bat:

  1. Set up your environment variables for MicroFocus and homes
  2. Tell the script where to compile
  3. Tell the script which homes to compile

The script will handle copying source files, compiling, and deploying to the CBLBINx folder. For example, let’s compile all the source COBOL in our demo environment:

set PS_HOME=e:\psoft\pt-file-8.5x.xx
set PS_APP_HOME=e:\psoft\hr-file-9.2.xxx
set PS_CUST_HOME=e:\psoft\HR92DMO
set COBROOT="e:\Program Files\MicroFocus\bin"

cd %PS_HOME%\setup
cblbld.bat e: temp\compile

The last command will compile PS_HOME first and deploy the PeopleTools COBOL programs to PS_HOME\CBLBINx. Then, it will compile PS_APP_HOME source files and deploy to PS_APP_HOME\CBLBINx. Last, it will compile any source files in PS_CUST_HOME\src\cbl\base and deploy to PS_CUST_HOME\CBLBINx. Let’s dig into this script to understand what is happening behind the scenes. The CBLBLD.bat script is powerful and can really help out when compiling programs.

CBLBLD

The CBLBLD.bat script takes four parameters:

  1. Drive Letter
  2. Compile Directory
  3. Encoding (Optional)
  4. Directories to compile (Optional)

CBLBLD.bat assumes you have set the environment variables mentioned above so it knows where to grab the source files for compilation. To compile only your custom COBOL source files, pass PS_CUST_HOME to CBLBLD.bat. For example:

cblbld.bat e: temp\compile PS_CUST_HOME 

You can also pass in PS_HOME, PS_APP_HOME or PS_CUST_APP_HOME to compile only those directories. The script uses the directory name to copy the COBOL source files to the compile directory. Then CBLBLD.bat calls CBLBLD_MF.bat, which sets up the directory and flags for the compiler and starts up MicroFocus. CBLBLD_MF.bat then calls CBLMAKE.bat to compile everything in the compile directory, and CBLBIN.bat to copy the .gnt, .int, and .exe files to the appropriate CBLBINx directory.

searchCOBOLPgms

“But what if I modify a copy book that is used in lots of delivered programs? Do they compile if I only pass in PS_CUST_HOME?” The CBLBLD.bat script also calls the Perl script searchCOBOLPgms.ps. This Perl script looks at the files in your PS_CUST_HOME and then searches PS_APP_HOME and PS_HOME to see if the copy books or programs you have modified are referenced by any delivered code. So, if you have modified a copy book (e.g., changing an array size), this script will copy any program that uses the copy book to the Compile Directory. This makes the copy book change apply to all programs that use the copy book. The source files will still live in PS_APP_HOME or PS_HOME, but the compiled programs will be located in PS_CUST_HOME\CBLBINx.

There is a bug with the searchCOBOLPgms.ps script. If you make a copy book change in a sub program, the searchCOBOLPgms.ps script will only copy the sub program to the Compile Directory; it doesn’t look for the sub program’s parent programs. So, it can miss programs that use a copy book during the compile. Another part of the bug (or design) is that programs that run out of PS_APP_HOME or PS_HOME do not use the CBLBIN search path in psprcs.cfg to find .gnt files. This is how we found the bug in searchCOBOLPgms.ps. We had a program that needed an array increase. We modified a copy book, compiled, and re-ran the program. But the program would still run from PS_APP_HOME\CBLBINx and ignore the new copy book in PS_CUST_HOME\CBLBINx. I filed an SR on that a few weeks ago; I’ll update the post when a fix is posted.

UPDATE: December 1, 2015 The Oracle analyst I worked with agrees that the searchCOBOLPgms.ps script does not account for the subprograms. When a fix is scheduled, I’ll post another update.

UPDATE: April 18, 2016 To address some of the issues with the searchCOBOLPgms.ps script, but to also make COBOL source programs work better with selective adoption, I am starting to use a PS_APP_PATCH_HOME in our decoupled-home setup.