
Monitoring PeopleSoft Uptime with Heartbeat

Heartbeat is a lightweight utility from Elastic that can help you monitor the status and uptime of different applications. In our case, we can use Heartbeat to tell us if a PeopleSoft environment is available or not. Using HTTP monitors, Heartbeat can request a specific URL and evaluate the response. While other utilities can do the same thing, Heartbeat was built to send data to Elasticsearch (and Opensearch) and comes with delivered dashboards. This makes it easy to build a quick dashboard to show our PeopleSoft environment status.

Running Heartbeat

There are multiple ways to run Heartbeat, but for this post we’ll run it as a container with podman-compose. You will also need an Opensearch or Elasticsearch instance to send the Heartbeat data to. If you don’t have either system running and want a quick installation, use this post to get you started.

First, create (or modify) a compose.yaml to set up our Heartbeat container.

$ cd ~
$ vi compose.yaml

We will use the heartbeat-oss image and the 7.12.1 release since it is compatible with Opensearch.

version: '3'
services:
  heartbeat:
    # heartbeat-oss 7.12.1 from Elastic's registry
    image: docker.elastic.co/beats/heartbeat-oss:7.12.1
    container_name: heartbeat
    volumes:
      - /home/opc/heartbeat/heartbeat.docker.yml:/usr/share/heartbeat/heartbeat.yml:Z
      - /home/opc/heartbeat/data:/usr/share/heartbeat/data:Z
    command:
      - "-e"
      - "--strict.perms=false"

Create the heartbeat/data/ directory and a heartbeat/heartbeat.docker.yml file.

$ mkdir -p heartbeat/data
$ vi heartbeat/heartbeat.docker.yml

We will create a simple icmp monitor in heartbeat.docker.yml so we can verify the container starts properly. Also set the Elasticsearch or Opensearch target system and credentials. We’ll encrypt the password later.

heartbeat.monitors:
- type: icmp
  name: containers
  schedule: '@every 30s'
  hosts:
    - heartbeat

output.elasticsearch:
  hosts: ['https://opensearch-node1:9200']
  username: 'admin'
  password: 'admin'
  ssl.verification_mode: 'none'

If you are using a newer release of Opensearch, you need to tell Opensearch to return an Elasticsearch-compatible version number. This will prevent template loading errors with Heartbeat.

$ curl -XPUT -ku admin:admin -H 'Content-Type: application/json' https://localhost:9200/_cluster/settings -d'{"persistent": {"compatibility": {"override_main_response_version": true}}}'


Start the container.

$ podman-compose up -d && podman-compose logs -f heartbeat

When Heartbeat connects to Opensearch, it will also install an index template for the data that we are sending. The template will ensure fields are the correct types (integers, strings, IP addresses, etc.).

View Heartbeat Data

Once Heartbeat starts, it will run the monitors listed in heartbeat.docker.yml and send the results to Opensearch. Log into Dashboards (http://<server>:5601) and navigate to Dashboards Management > Index Patterns to create a new Index Pattern for our heartbeat-* indexes.

Select @timestamp for the Time Field and click “Create index pattern”.

Next, we can view our Heartbeat data under the “Discover” page. Make sure heartbeat-* is selected for the Index Pattern.

While this is cool to see, we only have one monitor running and it’s pinging our “heartbeat” container to see if it’s running. We need to add more monitors to make it work for a PeopleSoft Status Dashboard.

HTTP Monitors

Heartbeat supports three types of monitors out of the box: ICMP (or ping), TCP, and HTTP. We used the ICMP type to check our Heartbeat container. We will start by using the HTTP type to check that our Opensearch Dashboards endpoint is up.

Open the heartbeat.docker.yml file and insert this monitor after the containers monitor.

- type: http
  schedule: '@every 60s'
  id: search-dashboards
  name: 'Opensearch Dashboards'
  username: admin
  password: admin
  hosts:
    - http://opensearch-dashboards:5601/api/status
  check.response:
    status: [200]
    json:
      - description: 'Dashboard Status'
        condition:
          equals:
            status.overall.state: green

The http monitor will run every minute and issue a GET request against the /api/status endpoint. That endpoint returns a large JSON document with the status of many different services. We only care about the overall status, so we use the check.response.json section to drill down to the specific field we want to validate. We also pass in credentials to log into Dashboards.
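The check.response.json condition is essentially a dotted-path lookup into the response document. Here is a minimal Python sketch of that idea; the sample document is a trimmed-down, hypothetical stand-in for the real /api/status payload:

```python
def get_dotted(doc: dict, path: str):
    """Walk a nested JSON document using a dotted path like 'status.overall.state'."""
    for key in path.split("."):
        doc = doc[key]
    return doc

# Trimmed-down stand-in for the /api/status response body
sample = {"status": {"overall": {"state": "green"}}}

print(get_dotted(sample, "status.overall.state"))  # green
```

If the extracted value equals the configured value ("green" here), the monitor reports the target as up.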

Save the file and reload the Heartbeat container.

$ podman-compose down heartbeat && podman-compose up -d heartbeat

If you go back to Dashboards and refresh the Discover page, you will see our new monitor shows up with the latest status.

PeopleSoft HTTP Monitors

Now for the fun part: using Heartbeat to check our PeopleSoft systems. We could check just the web server by looking at the login page, but that doesn’t tell us if the app server is running. A more practical check is whether someone can actually log in, which verifies that the web and app servers are running and that the login succeeds. To do this, you will need to create a basic account in PeopleSoft that has only enough access to log in.

When you log into PeopleSoft, a successful login will redirect the browser to the iScript WEBLIB_PTBR.ISCRIPT1.FieldFormula.IScript_StartPage. That means our monitor will need to POST credentials to the login page, wait for a redirect, and then validate if the redirect is to the iScript. Here is how that looks with Heartbeat:

- type: http
  schedule: '@every 60s'
  id: pia-lmdev
  name: 'ELM Development'
  hosts:
    - ''
  max_redirects: 2
  check.request:
    method: POST
    headers:
      'Content-Type': 'application/x-www-form-urlencoded'
    # URL Encoded
    body: "userid=HEARTBEAT&pwd=password"
  check.response:
    body: WEBLIB_PTBR.ISCRIPT1.FieldFormula.IScript_StartPage

Unlike the Dashboards HTTP monitor, we don’t use the username and password fields. Those are for sites that support BASIC authentication. With PeopleSoft, the credentials are URL encoded (that’s important for passwords) and passed to PeopleSoft in the POST form body. We then allow PeopleSoft to return a redirect and check each response to see if it is sending us to the iScript. If it is, we know the login was successful.
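The redirect test itself boils down to a substring match on where the login POST sends us. A small sketch of that validation (the URLs below are hypothetical examples):

```python
START_PAGE = "WEBLIB_PTBR.ISCRIPT1.FieldFormula.IScript_StartPage"

def login_succeeded(redirect_location: str) -> bool:
    """A successful PIA login redirects to the StartPage iScript."""
    return START_PAGE in redirect_location

# Hypothetical redirect targets
print(login_succeeded("https://lmdev.example.com/psc/lmdev/EMPLOYEE/ELM/s/" + START_PAGE))  # True
print(login_succeeded("https://lmdev.example.com/psp/lmdev/?cmd=login&errorCode=105"))      # False
```

Heartbeat does the equivalent check for us with check.response.body, applied to each response in the redirect chain.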

It is important to URL encode the password, as special characters will break the check. There are many places online that can help with encoding your password.
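If you’d rather not use an online encoder, Python’s standard library can do the form encoding for you (the credentials here are made up):

```python
from urllib.parse import quote_plus, urlencode

# Made-up example password with special characters
print(quote_plus("p@ss&word!"))  # p%40ss%26word%21

# Or encode the whole POST body at once
print(urlencode({"userid": "HEARTBEAT", "pwd": "p@ss&word!"}))  # userid=HEARTBEAT&pwd=p%40ss%26word%21
```

The second form is handy because it encodes every field, so an `&` or `=` in the password can’t break the body.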

You can also check the Integration Gateway to make sure it’s active.

- type: http
  schedule: '@every 60s'
  id: igw-lmdev
  name: 'ELM Development - Gateway'
  hosts:
    - ''
  check.response:
    body: ACTIVE

You may also want to check your Elasticsearch or Opensearch clusters.

- type: http
  schedule: '@every 60s'
  id: search-elasticsearch
  name: 'Elasticsearch'
  username: 'esadmin'
  password: 'esadmin'
  hosts:
    - ''
  check.response:
    status: [200]
    json:
      - condition:
          equals:
            status: green

You can use the Dashboards check above for Kibana or Insights (they are all the same tool – just different names).

We can also monitor if a specific web server is up and running by requesting the /index.html page and verifying the response code was 200.

- type: http
  schedule: '@every 60s'
  id: web-servers
  name: 'Web Servers'
  hosts: [ "" ]
  check.response:
    status: [200]

With the HTTP monitor, you can monitor quite a bit of the PeopleSoft stack. If you want to try something new, use your browser’s “Network” tab in the Debug Tools to see what the HTTP request/response is. That’s a great way to see what is happening behind the browser and will help you mock up the HTTP monitor in Heartbeat.

You can also use the TCP monitor to check if an application server is running. There are a number of processes that make up an application server, but a simple check is seeing if the JSL process is responding to connections.

- type: tcp
  id: app-servers
  name: 'App Servers'
  schedule: '@every 60s'
  hosts: [ "" ]

Before we build our Uptime Dashboard, we should protect the passwords in our heartbeat.docker.yml file.

Heartbeat Keystore

Heartbeat (and all the other Beats from Elastic) support a keystore out of the box. It’s easy to use and allows you to hide passwords from people who might be able to read your config file.

First, we need to create a new keystore file. We will create the keystore from inside our Heartbeat container. The keystore will be created in a directory that is volume mounted to our host file system. This allows you to back up the keystore (and config files) so you don’t have to recreate everything when the container restarts.

$ podman-compose exec heartbeat /bin/bash
bash-4.2$ heartbeat keystore create
Created heartbeat keystore

To add passwords (or any value) to the keystore, we use the keystore add command from inside the container.

bash-4.2$ echo 'admin' | heartbeat keystore add OS_ADMIN --stdin
Successfully updated the keystore

The Opensearch admin user password is now stored in the keystore under the key OS_ADMIN. We can reference that in our config files using this syntax: "${OS_ADMIN}"

Update the heartbeat.docker.yml file in the two places we have the password hard-coded.


- type: http
  schedule: '@every 60s'
  id: search-dashboards
  name: 'Opensearch Dashboards'
  username: admin
  password: "${OS_ADMIN}"


output.elasticsearch:
  hosts: ['https://opensearch-node1:9200']
  username: 'admin'
  password: "${OS_ADMIN}"
  ssl.verification_mode: 'none'

Verify the keystore is available on the host machine. Then you can restart the Heartbeat container to test our encrypted password.

$ ls heartbeat/data/*.keystore

$ podman-compose down heartbeat && podman-compose up -d heartbeat

Your monitors should still work after using the encrypted passwords with the keystore.

Uptime Dashboards

With our monitors defined and producing data, we can start building visualizations in Opensearch Dashboards or Kibana.

Navigate to Visualize and select “Create new visualization”. Select the “TSVB” type.

One tricky part of building a simple up/down visualization is that we only need the last check from Heartbeat. Most of the visualizations are designed to aggregate data by count, average, etc., not return just a single row. To get a single row, we need to use two aggregations together to limit the data.
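Under the hood, pairing those two aggregations corresponds roughly to a terms aggregation wrapped around a top_hits aggregation in the query DSL. A hypothetical sketch of the query body TSVB builds (the grouping field `` is an assumption here):

```python
# Roughly the query TSVB generates: group documents by monitor name,
# then keep only the single newest monitor.status document per group.
query = {
    "size": 0,
    "aggs": {
        "environments": {
            "terms": {"field": "", "size": 50},  # grouping field is an assumption
            "aggs": {
                "last_status": {
                    "top_hits": {
                        "size": 1,
                        "sort": [{"event.created": {"order": "desc"}}],
                        "_source": ["monitor.status"],
                    }
                }
            },
        }
    },
}
print(query["aggs"]["environments"]["aggs"]["last_status"]["top_hits"]["size"])  # 1
```

The `size: 1` plus the descending sort on event.created is what limits each environment to its most recent status document.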

In the TSVB visualization page:

  1. Select “Markdown” for the visualization style.
  2. Enter this text in the Markdown panel. This will be blank to start, and will show us sample output after we configure the query.

| Environment | Running |
| ----------- | ------- |
{{#each _all}}
| {{@key}} | {{status.last.raw}} |
{{/each}}

  3. Select the Data panel
  4. Enter Status for the Label and status for the Variable Name. The status variable is used by the Mustache code we used in the Markdown block.
  5. Select “Top Hit” for the first Aggregation
  6. Choose “monitor.status” for Field
  7. Enter 1 for Size
  8. Select “Concatenate” for Aggregate with
  9. Select “event.created” for Order by
  10. Choose “Desc” for Order
  11. Choose “Terms” for Group by
  12. Enter for By
  13. Enter 50 for Top
  14. Select “Terms” for Order by
  15. Select “Descending” for Direction

The Data panel will look like this when you are done.

The visualization will look something like this. The Running column might start out with 0 or 1 initially, but after a few checks the monitor.status field will start showing “up” and “down”. Save the visualization as “Environment Status”.

One issue with this chart is that it shows everything: web servers, app servers, PIA login checks, and server pings. We can break this visualization down into separate charts, one for each type.

In the Panel Options panel, you can add a “Panel filter”. You can use any field to filter, but if you look at the sample monitors above, I intentionally added a prefix to the id: field. We’ll use this prefix to separate the results.

  • pia-*: Environment Status
  • app-*: App Server Status
  • web-*: Web Server Status
  • search-*: Search Server Status
  • monitor.type: icmp: Server Status

Save each as a new visualization to use for our dashboard.

The last visualization we’ll make is a graph showing the historical status of our Environment uptime.

  1. Create a new Area visualization with heartbeat-* as the data source.
  2. Enter pia-* for the search filter.
  3. Add an X-axis Bucket and use @timestamp for the Date Histogram.
  4. Add another Bucket and select “Split Series”.
  5. Choose Filters for the Sub Aggregation
  6. Add monitor.status: down and monitor.status: up for Filters.
  7. Under the Metrics & Axes panel, expand “Y-axes”.
  8. Change the Mode to “Percentage” and click “Update”.
  9. Save the visualization as “Environment Status History”.

With our visualizations created, we can create our status dashboard. Navigate to Dashboards and create a new one. Add the visualizations we created so the dashboard looks like this:

This is a quick way to check on your PeopleSoft resources and get information about each environment’s uptime and availability.

11 thoughts on “Monitoring PeopleSoft Uptime with Heartbeat”

  1. Hi Dan,
    Thanks for this post, very useful. If I have an API endpoint that produces a large dataset in JSON format, would I be able to import it into PeopleSoft? Is it easy enough to set it up?

  2. Pingback: #339 – Uptime Monitoring

  3. Hi Dan (great name too),

I was just curious why you chose to run Heartbeat through the container tool podman-compose?
    To be honest I’d never heard of this container tool before. Are you a Linux shop?

In my situation we are a Windows shop and so will try this with the native Windows download of Heartbeat. I will more than likely also have to deal with certs, as both ES and Kibana are SSL.
    But I am very excited to give this a try.

    Thanks for another great share.

    Cheers, Dan

    1. Hi Dan – I’m moving anything non-PeopleSoft to running containers these days, so that’s why I picked podman-compose to run Heartbeat. I already had an Opensearch cluster running with podman-compose so it only made sense to add the Heartbeat container to the compose file. Over time I’d like to move these into a full Kubernetes cluster, but for now podman works well.

  4. Hi Dan,
    I was able to get opensearch-dashboards running after following your “Rootless Podman on Oracle Linux” example. Then I tried to follow your “Monitoring PS Uptime with Heartbeat”. However, heartbeat index was not created in opensearch instance after executing

$ podman-compose up -d && podman-compose logs -f heartbeat

    log is showing following ERROR:

    b80261f16a85 Received fatal alert: bad_certificate
    b80261f16a85 at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap( ~[netty-handler-4.1.100.Final.jar:4.1.100.Final]
    [remainder of netty SSL stack trace truncated]
    dcb08b4f775b 2023-11-14T18:35:53.151Z ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://opensearch-node1:9200)): Get “https://opensearch-node1:9200”: dial tcp connect: connection refused

    Any suggestion for resolving above error? Please advise.


  5. Hey Dan,

    I was able to setup monitors successfully using Docker images of Elasticsearch, Kibana, Heartbeat from its website.

    Thank you for the insightful sample monitors shared in your post.

    PeopleSoft Process Scheduler is different from others and I wonder if there is a workaround for Heartbeat to monitor Process Monitor.


    1. Hi James – glad to hear you got it all working!

When I built this setup, I purposefully ignored the process schedulers because I was more concerned with monitoring general availability of the system (e.g., can users log in and enter time/view a paycheck?). Making sure the schedulers are active is important, but they don’t affect the way I was looking at measuring uptime.

The other issue with monitoring the schedulers is that they don’t start a process that listens on a port (unlike the JSL/JSH), so there is no port to check. If you have Remote Monitoring enabled (for the Health Center), there will be a JMX process (rmiregistry) that runs alongside the domain. The issue with checking that process is it can be running when the domain is down, so you can’t do a simple tcp connection test. rmiregistry does return JMX data if you wanted to go down that route.

      If you come up with something, I’d love to hear what you did!

      1. Hi Dan – thanks for the response.

Thanks again for showing how to monitor PeopleSoft using the heartbeat approach, which is especially valuable with the example for monitoring user login from PIA. Over the years I’d dealt with PPM, PHC, and the OEM plugin for PeopleSoft, which mostly show CPU/memory usage and server up/down status, with not much practical value for customers, because they could not detect user login failure due to malfunctioning runaway webservers or appservers, not to mention the big effort associated with setting up PPM, PHC, and OEM.

        For heartbeat to monitor Process Scheduler, I wonder if HTTP POST can be used to query PeopleSoft table.

        Thanks again,

  6. Hi Dan,

I wonder if PSQuery can be used for heartbeat to monitor the status of the Process Scheduler. For example, PSQuery PRCS_STATUS is created with SQL


which would return the value of SERVERSTATUS (3=running; 1=down). Is there a way for a heartbeat monitor to evaluate the value of SERVERSTATUS and report its status?

    Please advise.


  7. Hi Dan,

    I am able to setup an http monitor for Process Scheduler by running PSQuery using REST Web Service.

