OCI – 02 – Connecting Terraform

In our fresh tenancy, the first thing we’ll do is connect Terraform. Terraform will be our primary tool for building resources, with the OCI-CLI as a fallback, which helps enforce an Infrastructure-as-Code approach to the deployment.

Before we write any code we need to create a Git repository to track our code. There will be a number of repositories for our tenancy, so I created a parent directory (named after my tenancy) to organize them.

Log into OCI and start a Cloud Shell session. We will create a new Git repository for a sample project called minecraft.

$ echo "export ocitenancy=<your tenancy name>" | tee -a ~/.bashrc
$ source ~/.bashrc
$ mkdir -p ~/repos/$ocitenancy/minecraft
$ cd ~/repos/$ocitenancy/minecraft
$ git init

Since this is the first time we are using Git in our shell, we need to configure our Git identity.

$ git config --global user.email "you@example.com" 
$ git config --global user.name "Your Name" 

Next, we create a README file, commit the change, and rename the primary branch to main.

$ echo "# Minecraft Project" | tee README.md
$ git add .
$ git commit -m "Initial Commit" 
$ git branch -m master main 
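If you’d like to rehearse these Git steps without touching your real repo, the same sequence works in a throwaway directory (the paths and values below are stand-ins):

```shell
# Scratch-directory rehearsal of the init/commit/rename steps above.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"
echo "# Minecraft Project" | tee README.md
git add .
git commit -qm "Initial Commit"
git branch -m main        # renames the initial branch, whatever git named it
git branch --show-current
```

The last command prints the current branch name, confirming the rename took effect.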

As a backup for my code, I’ve created a repository on GitHub that I set up as a remote. You can clone it and use the repository to follow along. Each post will have a branch that matches the post’s number (e.g., this post is saved to branch 02-terraform).

$ git remote add origin https://github.com/ocadmin-io/oci.minecraft.git
$ git push -u origin main

CloudShell Provider Configuration

A Terraform Provider is a plugin that translates generic Terraform code into API calls for a specific platform. In our case, there is an OCI Provider for Terraform that we can use to create anything in OCI.

First, we will set an environment variable so Terraform knows which OCI Region we are in. Any environment variable that starts with TF_VAR_ will be available in Terraform. Add the variables to our .bashrc file so they will always be available when we log in.

$ echo "export TF_VAR_region=$OCI_REGION" | tee -a ~/.bashrc
$ echo "export TF_VAR_tenancy_id=$OCI_TENANCY" | tee -a ~/.bashrc
$ source ~/.bashrc
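As a quick sanity check (no Terraform run required), you can list exactly which variables Terraform will inherit; the sample values below are placeholders, since Cloud Shell supplies the real $OCI_REGION and $OCI_TENANCY for you:

```shell
# Any exported TF_VAR_<name> becomes var.<name> inside Terraform.
export TF_VAR_region="us-chicago-1"                  # placeholder value
export TF_VAR_tenancy_id="ocid1.tenancy.oc1..xxxx"   # placeholder OCID
# List the variables Terraform will pick up:
env | grep '^TF_VAR_' | sort
```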

Create a new file called provider.tf and configure it to use the oci provider.

$ touch provider.tf

We can edit this file two ways: with vim in the shell, or with the Code Editor feature of the OCI Console. The Code Editor is a web-based Visual Studio Code editor that lets you work with any file in your Cloud Shell home directory. Launch the Code Editor by clicking the “Developer tools” icon and selecting “Code Editor”.

Another window will open in the OCI Console and display the text editor. You can drill down into our repos/$ocitenancy/minecraft folder and see the provider.tf file we created.

Add the following code to provider.tf so that Terraform will use the InstancePrincipal authentication built into Cloud Shell.

provider "oci" {
    auth = "InstancePrincipal"
    region = var.region

variable "region" {}
variable "tenancy_id" {}

data "oci_identity_availability_domains" "ads" {
  compartment_id = var.tenancy_id

output "ads" {
  value = data.oci_identity_availability_domains.ads.availability_domains

After you save the changes, go back into Cloud Shell and initialize our minecraft project.

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/oci...
- Installing hashicorp/oci v5.11.0...
- Installed hashicorp/oci v5.11.0 (unauthenticated)
Terraform has been successfully initialized!

Initializing tells Terraform to download any providers we use into the .terraform directory for our project. Next, we can run a plan to see what Terraform will do.

$ terraform plan

data.oci_identity_availability_domains.ads: Reading...
data.oci_identity_availability_domains.ads: Read complete after 0s [id=IdentityAvailabilityDomainsDataSource-42880534]

Changes to Outputs:
  + ads = [
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-1"
        },
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-2"
        },
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-3"
        },
    ]

You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.

Normally, after we get a plan from Terraform, we can apply those changes and Terraform will create resources in OCI. But our code only retrieves some values and doesn’t build anything yet, so there is no need to apply our changes. What we have done is verify the connection between Terraform and our OCI tenancy.

Before we move on, add your changes into the repository and commit them. We will also add the .terraform/* directory to our .gitignore file. The directory stores the provider binaries and we don’t need to include those in our repo.

$ echo ".terraform/**" | tee -a .gitignore 
$ echo ".terraform.lock.hcl" | tee -a .gitignore $ git add . 
$ git commit -m "OCI Cloud Shell Connection Complete" 
$ git push origin main

If you want to store your git provider username and password in the credential helper, you can store them like this.

$ git config --global credential.https://github.com.username <username> 
$ git config credential.helper store 
$ git push origin main 
Password for 'https://<username>@github.com':

Once you enter the password, Git will save it into the local credential store in Cloud Shell.

Terraform State

When you create resources using Terraform, Terraform will track the state of those objects so it knows if it needs to create or update any resources. Currently, Terraform is tracking the state locally on the Cloud Shell file system. This is useful if you are testing something, but for anything permanent or in a shared environment, we want to store our state in a secure remote location. The state files can also contain sensitive information like private key data and passwords, so it is best to keep the state files someplace more secure. There are many options for remote state storage including Terraform Cloud and OCI Object Storage.

We will use OCI Object Storage to store our state file, and we will use the OCI-CLI to create an Object Storage bucket. There is a bit of a chicken-and-egg situation here because we need to create the bucket before Terraform can use it to track state. We will use a script with the OCI-CLI to create our bucket and configure Terraform to use our OCI credentials to access the bucket.

First, we need to grab the Object Storage namespace for our tenancy. The OCI-CLI returns the data in JSON, so we use jq to parse it.

$ namespace=$(oci os ns get | jq -r .data) 
$ echo "export TF_VAR_namespace=$namespace" | tee -a ~/.bashrc 
$ source ~/.bashrc

Next, we create a new bucket in Object Storage.

$ oci os bucket create --name terraform_state --namespace $namespace --compartment-id $OCI_TENANCY

  "data": {
    "compartment-id": "ocid1.tenancy.oc1..xxxx",
    "name": "terraform_state",
    "namespace": "xxxx",
    "public-access-type": "NoPublicAccess",

We have a bucket created, but we need credentials to read and write the state files. OCI Object Storage is compatible with Amazon’s S3 API, and we will use the S3 state backend with Terraform. Create a new customer secret key and capture the S3-style credential values to use with Terraform.

$ read aws_key_id aws_key_value <<< $(oci iam customer-secret-key create --user-id $OCI_CS_USER_OCID --display-name 'terraform-state-rw' --query 'data.{AWS_ACCESS_KEY_ID: id, AWS_SECRET_ACCESS_KEY: key}' | jq -r '.AWS_ACCESS_KEY_ID,.AWS_SECRET_ACCESS_KEY')

In the command, we redirected the credentials into variables so that we can handle them securely. We will add the AWS-style access credentials to a new file, ~/.aws/credentials, that Terraform will automatically read.

$ mkdir -p ~/.aws
$ tee ~/.aws/credentials <<EOF >/dev/null
[default]
aws_access_key_id=$aws_key_id
aws_secret_access_key=$aws_key_value
EOF
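A quick aside on the `read ... <<<` idiom used above: you can rehearse it offline with canned values standing in for the command output (the key material below is fake):

```shell
# Capture two whitespace-separated values into two variables in one step.
read -r aws_key_id aws_key_value <<< "demo-access-key-id demo-secret-key-value"
echo "aws_access_key_id=$aws_key_id"
echo "aws_secret_access_key=$aws_key_value"
```

This is why the jq output is fed through the herestring: both values land in named variables without ever being written to disk unencrypted.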

In the GitHub repository, I also created a create_bucket.sh script that runs these commands.

Our bucket and credentials to access our bucket have been configured. Now we can update our Terraform provider to use the “s3” storage mechanism for our state files. Add this to the provider.tf file.

variable "namespace" {
  default = ""

terraform {
  backend "s3" {}

To keep our configuration repeatable, we will put our tenancy-specific settings in a config.s3.tfbackend file. The terraform {} block doesn’t allow variables, so you must add your specific namespace and region to the file.

bucket   = "terraform_state"
key      = "minecraft"
region   = "<your region>"
endpoint = "https://<your namespace>.compat.objectstorage.<your region>.oraclecloud.com"
skip_region_validation      = true
skip_credentials_validation = true
skip_metadata_api_check     = true
force_path_style            = true
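If you are unsure what the endpoint line should look like, you can template it in the shell; the namespace and region values below are placeholders for the ones from your tenancy:

```shell
# Build the S3-compatible Object Storage endpoint from namespace and region.
namespace="axaxnpcrorw5"   # placeholder; your value comes from `oci os ns get`
region="us-chicago-1"      # placeholder; your home region
echo "endpoint = \"https://${namespace}.compat.objectstorage.${region}.oraclecloud.com\""
```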

You should also exclude this file from our Git repo since it contains sensitive information. If you plan to run your Terraform code in multiple places, you will need to copy this file to your other servers.

$ echo "*.tfbackend" | tee -a .gitignore

We need to tell Terraform we have a new storage location for our state file.

$ terraform init -backend-config=config.s3.tfbackend

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/oci from the dependency lock file
- Using previously-installed hashicorp/oci v5.13.0

Terraform has been successfully initialized!

Terraform will now store its state files in OCI Object Storage in a bucket named “terraform_state”. We can use the OCI-CLI to verify the bucket exists.

$ oci os bucket list --compartment-id $OCI_TENANCY

  "data": [
      "compartment-id": "ocid1.tenancy.oc1..xxxx",
      "name": "terraform_state",
      "namespace": "xxxx",

We now have a Terraform provider configured for our OCI tenancy and we are ready to start building resources.

Local Shell Provider Connection

To connect Terraform to OCI from your local computer, we have to change how the provider gets credentials to authenticate. If you installed and configured the OCI-CLI locally, we can reuse the profile that was created when setting up the OCI-CLI. Change your provider.tf file to use the config file instead of InstancePrincipal authentication.

provider "oci" {
  tenancy_ocid = var.tenancy_id

variable "config_file_profile" {
  default = "DEFAULT"

Next, we need to set up the TF_VAR_ environment variables. Unlike in Cloud Shell, where we pulled the values from existing environment variables, here we pull the values from our OCI-CLI config file. (The tail -1 is optional; in my case it pulls in the last profile’s region from the OCI-CLI file since my local shell is configured for more than one OCI tenancy.)

echo "export TF_VAR_tenancy_id=$(grep tenancy ~/.oci/config | cut -d '=' -f2 | tail -1)" | tee -a .variables
echo "export TF_VAR_region=$(grep region ~/.oci/config | cut -d '=' -f2 | tail -1)" | tee -a .variables
echo ".variables" | tee -a .gitignore
source .variables

Unlike our Cloud Shell setup, we will need to run the source .variables command each time we run the Terraform code locally. You can add that command to your shell’s RC files if you want it to be automatic.
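The grep/cut extraction above can be verified offline against a throwaway file shaped like the OCI-CLI config (all values below are fake):

```shell
# Throwaway config mimicking ~/.oci/config with two profiles.
cat > /tmp/demo_oci_config <<'EOF'
[DEFAULT]
tenancy=ocid1.tenancy.oc1..aaaa
region=us-ashburn-1
[CHICAGO]
tenancy=ocid1.tenancy.oc1..bbbb
region=us-chicago-1
EOF
# `tail -1` picks the last profile's value when several profiles are present.
grep region /tmp/demo_oci_config | cut -d '=' -f2 | tail -1
```

With a single-profile config the tail -1 is a no-op, which is why it is safe to leave in.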

Before we generate a plan locally, we need to re-initialize our Terraform provider. Because we excluded the .terraform/ directory from our Git repo, initializing will download the correct provider binaries to our machine. This is very useful if you are switching platforms (e.g., Linux or Cloud Shell to Windows or Mac).

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/oci...
- Installing hashicorp/oci v5.13.0...
- Installed hashicorp/oci v5.13.0 (signed by HashiCorp)

Terraform has been successfully initialized!

Now we can run our Terraform code locally and test our OCI connection.

$ terraform plan

401-NotAuthenticated, Failed to verify the HTTP(S) Signature

On my computer, Terraform didn’t play nicely with the encrypted RSA API key generated by the OCI-CLI. While it is best to keep the API key encrypted, to make Terraform work with the OCI-CLI config I generated a clear API key file to use with Terraform. (https://github.com/oracle/terraform-provider-oci/issues/512)

$ openssl rsa -in ~/.oci/oci_api_key.pem -check | tee ~/.oci/oci_api_key_clear.pem 
Enter pass phrase for /Users/dan/.oci/oci_api_key.pem:

$ oci setup repair-file-permissions --file /Users/dan/.oci/oci_api_key_clear.pem
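To see the decrypt step end-to-end without touching your real API key, the same openssl calls work on a throwaway key (all files below live under /tmp and the passphrase is a stand-in):

```shell
# Generate a passphrase-protected test key, then write a decrypted copy of it.
openssl genrsa -aes128 -passout pass:demo -out /tmp/demo_api_key.pem 2048
openssl rsa -in /tmp/demo_api_key.pem -passin pass:demo -out /tmp/demo_api_key_clear.pem
chmod 600 /tmp/demo_api_key_clear.pem
# Confirm the decrypted key is valid (prints "RSA key ok").
openssl rsa -in /tmp/demo_api_key_clear.pem -check -noout
```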

Then, update your ~/.oci/config file to change the private_key_file to point to our new _clear.pem file.


Now we can test our Terraform provider again.

$ terraform plan                            
data.oci_identity_availability_domains.ads: Reading...
data.oci_identity_availability_domains.ads: Read complete after 0s [id=IdentityAvailabilityDomainsDataSource-42880534]

Changes to Outputs:
  + ads = [
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-1"
        },
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-2"
        },
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-3"
        },
    ]

OCI – 01 – Setting Up a New Tenancy

Recently, I created a new Oracle Cloud Infrastructure environment and decided to document the steps I went through to go from a brand-new tenancy to a functioning PeopleSoft environment. These blog posts can serve as a guide for administrators who are new to OCI, or for experienced administrators who want to look at a different way to build and manage their tenancy.

I’m taking a Terraform-first approach to building the tenancy, and strive to have the bare minimum manual steps when building objects in the cloud. I want to leverage infrastructure-as-code and automation every place we can so this setup can be replicated by anyone.

If you want to follow along, you can use an existing tenancy and build everything in a separate domain or compartment so it doesn’t interfere with any existing setup. Or, you can sign up for your own test OCI tenancy and start from scratch just like me.

If you don’t have a tenancy yet, you can sign up for an OCI tenancy here and get $300 in free credits, plus access to the Always Free resources. Once you have created your new tenancy and logged in, you can follow along.

OCI Cloud Shell

All of the configuration and setup for this tenancy will be done using Terraform (and the OCI Command Line, or OCI-CLI, if needed). These are command-line tools that require a little setup and configuration to use. To make life easier, we can use the OCI Cloud Shell that is included with OCI. The Cloud Shell has both the OCI-CLI and Terraform installed and is pre-configured for our tenancy.

To access the Cloud Shell, click on the Developer Tools icon in the header and select “Cloud Shell”.


The first time you launch the Cloud Shell, you are prompted to take a tour of its features. If you aren’t familiar with it, go ahead and take the tour. If you want to take the tutorial later, you can run the command cstutorial from any Cloud Shell session.


Let’s run a quick test in our new Cloud Shell by checking our availability domains in the new tenancy.

oci iam availability-domain list
{
  "data": [
    {
      "compartment-id": "ocid1.tenancy.oc1..xxxx",
      "id": "ocid1.availabilitydomain.oc1..xxxx",
      "name": "xxxx:US-CHICAGO-1-AD-1"
    },
    {
      "compartment-id": "ocid1.tenancy.oc1..xxxx",
      "id": "ocid1.availabilitydomain.oc1..xxxx",
      "name": "xxxx:US-CHICAGO-1-AD-2"
    },
    {
      "compartment-id": "ocid1.tenancy.oc1..xxxx",
      "id": "ocid1.availabilitydomain.oc1..xxxx",
      "name": "xxxx:US-CHICAGO-1-AD-3"
    }
  ]
}

You should see the Availability Domains assigned to your tenancy. The OCI-CLI requires authentication to connect to your account, and the Cloud Shell pre-configures that access so you can get started quickly. Cloud Shell uses “InstancePrincipal” authentication to connect to the tenancy.

The OCI-CLI also ships with a tab-completion setup command, which is worth enabling in the Cloud Shell.

oci setup autocomplete
To set up autocomplete, we would update few lines in rc/bash_profile file.

===> Enter a path to an rc file to update (file will be created if it does not exist) (leave blank to use '/home/dan/.bashrc'): 
Using tab completion script at: /home/oci/lib/oracle-cli/lib64/python3.8/site-packages/oci_cli/bin/oci_autocomplete.sh
We need to add a few lines to your /home/dan/.bashrc file. Please confirm this is ok. [Y/n]: Y
 Restart your terminal or Run '[[ -e "/home/dan/lib/oci_autocomplete.sh" ]] && source "/home/dan/lib/oci_autocomplete.sh"' for the changes to take effect.

Run the command listed to reload the shell session with autocomplete enabled.

The last change I made to the shell is to configure the prompt (the PS1 variable). Copy and paste the command below into your Cloud Shell session.

echo "PS1=$'\\[\\033]0;\\W\\007\\]\\[\E[1m\\]\\n\\[\E[38;5;166m\\]\\u\\[\E[97m\\] at \\[\E[38;5;136m\\]cloudshell(\$OCI_REGION)\\[\E[97m\\] in \\[\E[38;5;64m\\]\\w\\n\\[\E[97m\\]$ \\[\E(B\E[m\\]'" | tee -a ~/.bashrc
source ~/.bashrc

The prompt should now look similar to mine below. I prefer the descriptive language for the prompt, and the colors can be used on remote servers to differentiate production vs. non-production.


Local OCI Command Line

If you don’t want to use the Cloud Shell and would rather run the OCI-CLI and Terraform on your local machine, you can install and configure the OCI Command Line (oci) yourself. The Oracle Docs have a good reference for installing the tool. I’ll be installing the OCI-CLI on my Mac.

brew update && brew install oci-cli

oci --version

When running the OCI-CLI on your local machine, you have to configure authentication yourself (unlike Cloud Shell, which does this for you). There is a handy command with the OCI-CLI that will walk you through the setup. The setup will require a few manual steps in the OCI Console.

oci setup config
Enter a location for your config [/Users/dan/.oci/config]:
Enter a user OCID:

To get your user account’s OCID (Oracle Cloud ID), you need to log into the OCI Console. Click on your user icon (top-right) and click “My profile”. On the “My profile” page, click the “Copy” link for the OCID field.


Enter a user OCID: ocid1.user.oc1..xxxx
Enter a tenancy OCID: 

To find the Tenancy OCID, click on the user icon (top-right) and click on “Tenancy: xxxx”. Click the “Copy” link for the OCID field.

Enter a tenancy OCID: ocid1.tenancy.oc1..xxxx
Enter a region by index or name (e.g. 1: af-johannesburg-1 ...): 

Enter your home region and generate a new API Signing Key.

Enter a region by index or name: us-chicago-1
Do you want to generate a new API Signing RSA key pair? [Y/n]: Y
Enter a directory for your keys to be created [/Users/dan/.oci]: 
Enter a name for your key [oci_api_key]: 
Public key written to: /Users/dan/.oci/oci_api_key_public.pem
Enter a passphrase for your private key ("N/A" for no passphrase): 
Repeat for confirmation: 
Private key written to: /Users/dan/.oci/oci_api_key_acedan.pem
Fingerprint: 40:99:xx::xx
Do you want to write your passphrase to the config file? [y/N]: y
Config written to /Users/dan/.oci/config

Before we are done, we have to add our API public key to our OCI account. This will allow the OCI-CLI to authenticate as our user. Copy the public key that was generated in the prior step.

cat ~/.oci/oci_api_key_public.pem 
-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----

In the OCI Console, click on the user icon (top-right) and click “My profile”. Select the “API Keys” link, and click “Add API key”. Select the option “Paste a public key”, paste in the contents of oci_api_key_public.pem, and click Add.

The OCI Console will give you a preview of the configuration file you can use to connect. This should already be configured for you by the oci setup config command we ran. You can verify the files are the same.

cat ~/.oci/config

Now, we can test our locally installed OCI-CLI.

oci iam availability-domain list --output table
| compartment-id          | id                                 | name                   |
| ocid1.tenancy.oc1..xxxx | ocid1.availabilitydomain.oc1..xxxx | xxxx:US-CHICAGO-1-AD-1 |
| ocid1.tenancy.oc1..xxxx | ocid1.availabilitydomain.oc1..xxxx | xxxx:US-CHICAGO-1-AD-2 |
| ocid1.tenancy.oc1..xxxx | ocid1.availabilitydomain.oc1..xxxx | xxxx:US-CHICAGO-1-AD-3 |

At this point, we have a working OCI-CLI installation (both locally and in Cloud Shell). Next, we’ll start building resources in our tenancy.

#337 – ACM and Load Balancers

This week on the podcast, Kyle and Dan discuss securing your public user and leaking information, how to speed up change assistant upgrade projects, and new ACM plugins to work with Load Balanced gateways.

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson.

Show Notes

Build a PeopleSoft Image – On Your Laptop with Vagabond

YouTube player

You can run PeopleSoft Images on your laptop using three tools that work seamlessly together: VirtualBox, Vagrant, and Vagabond.

  • VirtualBox is virtualization software to run a VM on your laptop (any Intel-based laptop, not Apple Silicon-based Macs)
  • Vagrant is a tool to interact with VirtualBox that can automate VM builds
  • Vagabond is an open source tool that delivers automation to build a PeopleSoft Image.

Download and install VirtualBox and Vagrant on your laptop.

Next, you will need to download Vagabond from GitHub. The preferred way to download is via git (installing Git is optional). Without git, you can click on the latest Release and download the zip file.

cd ~/Downloads
# unzip ps-vagabond*.zip if downloading from the Releases
git clone https://github.com/psadmin-io/ps-vagabond.git
cd ps-vagabond

There are two configuration files we need to run Vagabond:

  • config.rb
  • psft_customizations.yaml

Vagabond delivers example versions of each of these files and we can copy the .example files.

cd config
copy-item ./psft_customizations.yaml.example psft_customizations.yaml
copy-item ./config.rb.example config.rb
cd ..

You only need to modify the psft_customizations.yaml file if you are building a Finance or Interaction Hub image. You need to modify the default user to be VP1 instead of PS.

You must modify the config.rb file to make Vagabond work. It needs two pieces of information: your My Oracle Support credentials, and a Patch ID for the PeopleSoft Image.



Now you are ready to build the PeopleSoft Image.

vagrant up

You will see output from Vagrant displaying the different tasks it handles setting up the VM. Vagrant will download a base Oracle Linux 8 VM, create a new VirtualBox VM, configure the networking, and start provisioning. Provisioning is the set of scripts that Vagabond provides to automate the download and building of PeopleSoft Images. Below is a (simplified) version of the output you will see as Vagabond runs.

Bringing machine 'ps-vagabond' up with 'virtualbox' provider...
==> ps-vagabond: Importing base box 'generic/oracle8'...
==> ps-vagabond: Setting the name of the VM: 34775556
==> ps-vagabond: Configuring and enabling network interfaces...
==> ps-vagabond: Mounting shared folders...
    ps-vagabond: /vagrant => /Users/dan/Downloads/ps-vagabond
    ps-vagabond: /media/sf_34775556 => /Users/dan/Downloads/ps-vagabond/dpks/download
==> ps-vagabond: Running provisioner: networking_setup (shell)...
    ps-vagabond:  ☆  INFO: 
    ps-vagabond:  ☆  INFO: 
    ps-vagabond:  ☆  INFO: ===> Add ' psvagabond psvagabond.psadmin.local' to your hosts file
    ps-vagabond:  ☆  INFO: 
    ps-vagabond:  ☆  INFO: 
==> ps-vagabond: Running provisioner: storage (shell)...
==> ps-vagabond: Running provisioner: bootstrap-lnx (shell)...
    ps-vagabond:                                       dP                               dP 
    ps-vagabond:                                       88                               88 
    ps-vagabond:   dP   .dP .d8888b. .d8888b. .d8888b. 88d888b. .d8888b. 88d888b. .d888b88 
    ps-vagabond:   88   d8' 88'  `88 88'  `88 88'  `88 88'  `88 88'  `88 88'  `88 88'  `88 
    ps-vagabond:   88 .88'  88.  .88 88.  .88 88.  .88 88.  .88 88.  .88 88    88 88.  .88 
    ps-vagabond:   8888P'   `88888P8 `8888P88 `88888P8 88Y8888' `88888P' dP    dP `88888P8 
    ps-vagabond:                          .88 
    ps-vagabond:                      d8888P 
    ps-vagabond:  ☆  INFO: Updating installed packages
    ps-vagabond:  ☆  INFO: Installing additional packages
    ps-vagabond:  ☆  INFO: Disable SELinux for PeopleSoft Images
    ps-vagabond:  ☆  INFO: Downloading patch files
    ps-vagabond:  ☆  INFO: Unpacking DPK setup scripts
    ps-vagabond:  ☆  INFO: Executing Pre setup script
    ps-vagabond:  ☆  INFO: Executing DPK setup script
    ps-vagabond:  ☆  INFO: Install psadmin_plus
    ps-vagabond:  ☆  INFO: Open Firewall Ports
    ps-vagabond:  TASK                         DURATION
    ps-vagabond: ========================================
    ps-vagabond:  install_psadmin_plus         00:00:01
    ps-vagabond:  download_patch_files         00:13:36
    ps-vagabond:  unpack_setup_scripts         00:00:41
    ps-vagabond:  execute_pre_setup            00:00:00
    ps-vagabond:  install_additional_packages  00:01:29
    ps-vagabond:  update_packages              00:03:40
    ps-vagabond:  open_firewall_ports          00:00:02
    ps-vagabond:  execute_psft_dpk_setup       01:00:56
    ps-vagabond:  generate_response_file       00:00:00
    ps-vagabond:  disable_selinux              00:00:00
    ps-vagabond: ========================================
    ps-vagabond:  TOTAL TIME:                  01:20:25
==> ps-vagabond: Running provisioner: cache-lnx (shell)...
    ps-vagabond:  ☆  INFO: Copying Manifests
    ps-vagabond:  ☆  INFO: Fix DPK App Engine Bug
    ps-vagabond:  ☆  INFO: Pre-load Application Cache
    ps-vagabond:  TASK                         DURATION
    ps-vagabond: ========================================
    ps-vagabond:  fix_dpk_bug                  00:00:02
    ps-vagabond:  load_cache                   00:21:03
    ps-vagabond:  download_manifests           00:00:00
    ps-vagabond: ========================================
    ps-vagabond:  TOTAL TIME:                  00:21:05

To access the PeopleSoft Image from a browser, you first need to add a hosts entry. In the output, you’ll see the text you need to add to the file.

For Windows, the file is located at C:\Windows\System32\drivers\etc\hosts, and for Mac/Linux the file is /etc/hosts. From our example above, you would add this line (prefixed with the VM’s IP address from the provisioning output) to the file: psvagabond psvagabond.psadmin.local
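To avoid typos, you can build the hosts line in the shell; the IP below is a stand-in for the one Vagrant prints, and the output goes to a demo file rather than the real /etc/hosts:

```shell
# Compose the hosts entry from the VM's IP (stand-in value) and the Vagabond names.
ip="192.168.56.10"   # replace with the address from the provisioning output
echo "$ip psvagabond psvagabond.psadmin.local" | tee -a /tmp/demo_hosts
```

On a real system you would pipe the same line through `sudo tee -a /etc/hosts` instead.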

Then you can access the PeopleSoft Image at http://psvagabond.psadmin.local:8000/ps/signon.html

A few tips for working with Vagrant VMs. It’s very easy to take a snapshot as a backup of the VM. I always take a snapshot right after building the image.

vagrant snapshot save build

If you need to stop or start the VM, use these commands:

vagrant halt        # stop the VM
vagrant up          # start the VM
vagrant suspend     # pause the VM
vagrant resume      # unpause the VM

You can also SSH into the VM if you want to access the server:

vagrant ssh

#336 – 8.60 Themes and How we use the DPK

This week on the podcast, Kyle and Dan talk about the new psadmin.io Themes for 8.60, and then they discuss how they are currently using the DPK to build new environments and what has changed in the DPK since it was released.

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson.

Show Notes

Themes for PeopleTools 8.60

We have released new branding themes for PeopleTools 8.60! The new themes use colors that better match the new Redwood UI color palette, and the underlying stylesheets use the new CSS variables in 8.60.

There are 10 colors to choose from in this release.

But, it’s also very easy to change the colors, or even make your own based on this project. Starting with PeopleTools 8.60, there are new CSS variables to simplify branding changes. There are only 2 color variables used in this project:

:root {
  --pt-banner-background-color: #1374BA; /* Primary */
  --pt-border-contrast-color: #2C526E; /* Accent */
  --pt-strip-height: 0px;
}

You can modify these colors to suit your needs and make the themes match your own branding requirements. The --pt-strip-height variable hides the Redwood color strip in the default theme.

The goal of our themes is to simply change the header color (for Fluid and Classic) so you know which environment you are working in. But you can take these new CSS variables and change so much more.

To install the themes, download the IO_STYLE_REDWOOD.zip file from GitHub and use the Data Migration Workbench to import the project. You can read more about the project and how to install it on GitHub.

#335 – io_homes DPK Module

This week on the podcast, Kyle and Dan talk about the changes to the 8.60 database upgrade and the new PPTLS860 project. Dan also shares some updates to the IO_STYLE_859 project and discusses his new io_homes DPK module.

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson.

Show Notes

#334 – byop – Bring Your Own Patches – an Infra-DPK Builder

This week on the podcast, Dan discusses a new tool he built to simplify CPU patches by using the same tools as the Infrastructure DPK. byop, or Build Your Own Patches, will take a list of patches to download and store them in a format that matches the Infrastructure DPK.

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson.

Show Notes

Dan’s method to apply CPU patches

Custom Fact to trigger Infrastructure-DPK processing

  • /puppet/production/modules/pt_role/lib/facter/cpu.rb

    # Set the env var APPLY_INFRA_CPU=true and run the DPK to apply the Infra-DPK patches
    Facter.add(:apply_infra_cpu) do
      setcode do
        ENV["APPLY_INFRA_CPU"] || 'false'
      end
    end

Bash alias I use to apply CPU patches via Infrastructure-DPK

$ alias applycpu='sudo APPLY_INFRA_CPU=true puppet apply -e "contain ::pt_profile::pt_tools_deployment" --confdir <dpk_home>/puppet -d'
$ applycpu
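The reason APPLY_INFRA_CPU=true rides on the same command line as puppet is that a child process only sees variables explicitly passed into its environment. You can see the mechanics locally with `env` standing in for `sudo` (both accept leading VAR=value assignments):

```shell
# `env VAR=value cmd` mirrors how `sudo VAR=value cmd` passes the variable through
# to the command it launches.
env APPLY_INFRA_CPU=true sh -c 'echo "child process sees: $APPLY_INFRA_CPU"'
```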

psadmin.conf 2023 Registration is Open!

You can now register for psadmin.conf 2023! Click here to see more information and register.

psadmin.conf is a conference specifically for PeopleSoft Administrators. The conference features talks from expert admins on a variety of topics, as well as hands-on training from Oracle ACEs. The goal of the conference is to expand your knowledge of PeopleSoft Administration and network with admins from around the country.

Registration is limited to 50 attendees, so register early to ensure your spot!

Building a PeopleSoft Image – OCI Marketplace

In this video we will build a new PeopleSoft Image to check out some of the newer features. We will use OCI to host our image because the PeopleSoft team provides us with images that are ready to build. We just need to provide some passwords and away we go.

YouTube player

OCI Marketplace Images

The OCI Marketplace is where you can find pre-packaged software ready to deploy on OCI. With each new image release, the PeopleSoft team pushes a new build for each application. For this demo, we will use Finance Image 46.

There are a few benefits to the OCI Marketplace-based PeopleSoft Images over other methods.

  • No need to download DPK files to a server
  • They come with Elasticsearch and Kibana pre-packaged
  • They are updated each release and easy to build for developers
  • They don’t require Cloud Manager or MOS Download rights

There are a few drawbacks to using these images though.

  • They don’t support the new VM.Standard Flex shapes
  • You need an OCI account and privileges to create a new instance (and virtual cloud network)

Boot Volume

When building a Marketplace-based Image, you must increase the boot volume to at least 200GB. This will ensure there is plenty of space to extract the DPK files and install PeopleSoft.

Generate Passwords

You can enter these by hand – it’s a JSON string – but there are different requirements for each password. You can use the sample JSON below for reference, but let’s take a quick tangent and I’ll show you how I generated my passwords.
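If you just want a quick throwaway value while reading along, openssl can produce one; note this is only a sketch, since it doesn’t enforce OCI’s per-field complexity rules the way a purpose-built tool does:

```shell
# Generate a 30-character alphanumeric password from random bytes.
openssl rand -base64 64 | tr -dc 'A-Za-z0-9' | head -c 30
echo
```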

The secret is psst

To run psst, you need Python and Git installed:

For Windows you can use Powershell and Chocolatey to install these:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; 
iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

choco install python3 -y
choco install git -y

For Linux (yum-based distributions):

yum install git -y
yum install python3 -y

On macOS, you can install both with Homebrew: brew install git python3

The psst tool is hosted on Github and you can clone the code to run it:

git clone https://github.com/psadmin-io/psst.git

cd psst
pip install .
psst secrets generate -oci

    "connect_pwd": "eu9P3HCj6WwI95vj498JX6Yzjk6VGS",
    "access_pwd": "hsRqmDFjyrntMEJ74fMBBwMKi",
    "admin_pwd": "0WkAoB531GXr#2AtvpNo9SZ5u-_gEh",
    "weblogic_admin_pwd": "#ma1Q4%7SrIyKmpfIT3iS!&1Q22o$x",
    "webprofile_user_pwd": "xSFb74gd2YeyvkXjh1s9tI7wDK9Dew",
    "gw_user_pwd": "xtc4IxtBkDiNpJCMT04wRXGUNHG4bQ",
    "domain_conn_pwd": "G2rxzYThC2BTKq5DfHc",
    "opr_pwd": "78rN8StJt8rvSaUwB1FAWgEMK"

You can copy the JSON and paste it directly into the OCI Console’s “cloud-init” section.

Host File

Our instance is created with a public IP address, but the DNS name is private to your OCI Virtual Cloud Network. To translate between the two, we will add a hosts entry on our computer. Grab both the Public IP and Internal FQDN values from the Instances page.

For Linux and macOS

echo "<ip address> <fqdn>" | sudo tee -a /etc/hosts

For Windows, add this line to the end of the file c:\Windows\System32\drivers\etc\hosts

<ip address> <fqdn>

For example, my hosts entry looks like this (with the instance’s public IP in front): fscm046.subnet12081732.vcn12081732.oraclevcn.com


Connect with SSH

Linux, macOS, and WSL users:

chmod 600 ~/Downloads/ssh-key-2022-12-08.key 
ssh -i ~/Downloads/ssh-key-2022-12-08.key opc@<IP>

PuTTY for Windows

  • Convert SSH Key to Putty Format with PuTTYGen
  • Connect with PuTTY

Ingress Rules in OCI

  1. PIA Rule

    • CIDR Block:
    • Destination Port: 8000
  2. Kibana Rule

    • CIDR Block:
    • Destination Port: 5601
  3. TNS Rule (Optional – Required for App Designer or SQL access)

    • CIDR Block:
    • Destination Port: 1521

(Optional) Add firewalld Rule for TNS

sudo firewall-cmd --permanent --zone=public --add-port=1521/tcp
sudo firewall-cmd --reload