OCI – 02 – Connecting Terraform

In our fresh tenancy, the first thing we’ll do is connect Terraform. We will use Terraform as the primary way to build resources, with the OCI CLI as a fallback. This will help enforce an Infrastructure-as-Code approach to our deployments.

Before we write any code we need to create a Git repository to track our code. There will be a number of repositories for our tenancy, so I created a parent directory (named after my tenancy) to organize them.

Log into OCI and start a Cloud Shell session. We will create a new Git repository for a sample project called minecraft.

$ echo "export ocitenancy=<your tenancy name>" | tee -a ~/.bashrc
$ source ~/.bashrc
$ mkdir -p ~/repos/$ocitenancy/minecraft
$ cd ~/repos/$ocitenancy/minecraft
$ git init

Since this is the first time we are using Git in our shell, we need to configure our Git identity.

$ git config --global user.email "you@example.com" 
$ git config --global user.name "Your Name" 

Next, we create a README file, commit the change, and change the primary branch to main.

$ echo "# Minecraft Project" | tee README.md
$ git add .
$ git commit -m "Initial Commit" 
$ git branch -m master main 

As a way to back up my code, I’ve created a repository on GitHub that I set up as a remote. You can clone it and use the repository to follow along. Each post will have a branch that matches the post’s number (e.g., this post will be saved to branch 02-terraform).

$ git remote add origin https://github.com/ocadmin-io/oci.minecraft.git
$ git push -u origin main

CloudShell Provider Configuration

A Terraform provider is a plugin that translates generic Terraform configuration into API calls for a specific platform. In our case, there is an OCI provider for Terraform that we can use to create anything in OCI.

First, we will set environment variables so Terraform knows which OCI region and tenancy we are working in. Any environment variable that starts with TF_VAR_ is exposed to Terraform as an input variable. Add the variables to our .bashrc file so they will always be available when we log in.

$ echo "export TF_VAR_region=$OCI_REGION" | tee -a ~/.bashrc
$ echo "export TF_VAR_tenancy_id=$OCI_TENANCY" | tee -a ~/.bashrc
$ source ~/.bashrc
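
To double-check that the variables are set in the current session (Terraform reads any variable that starts with TF_VAR_ automatically), you can grep the environment. The values shown here are examples; yours will match your own region and tenancy OCID.

$ env | grep ^TF_VAR_
TF_VAR_region=us-chicago-1
TF_VAR_tenancy_id=ocid1.tenancy.oc1..xxxx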

Create a new file called provider.tf and configure it to use the oci provider.

$ touch provider.tf

We can edit this file two ways: with vim in the shell, or with the Code Editor feature of the OCI Console. The Code Editor is a web-based Visual Studio Code editor that lets you work with any file in your Cloud Shell home directory. Launch the Code Editor by clicking the “Developer tools” icon and selecting “Code Editor”.

Another window will open in the OCI Console and display the text editor. You can drill down into the repos/$ocitenancy/minecraft folder and see the provider.tf file we created.

Add the following code to provider.tf so that Terraform will use the InstancePrincipal authentication built into Cloud Shell.

provider "oci" {
    auth = "InstancePrincipal"
    region = var.region
}

variable "region" {}
variable "tenancy_id" {}

data "oci_identity_availability_domains" "ads" {
  compartment_id = var.tenancy_id
}

output "ads" {
  value = data.oci_identity_availability_domains.ads.availability_domains
}
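
Two optional checks can catch typos in the HCL: terraform fmt normalizes the formatting in place, and terraform validate (once the provider has been installed by the init step below) confirms the configuration is syntactically valid.

$ terraform fmt
$ terraform validate    # run after terraform init; prints "Success! The configuration is valid."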

After you save the changes, go back into Cloud Shell and initialize our minecraft project.

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/oci...
- Installing hashicorp/oci v5.11.0...
- Installed hashicorp/oci v5.11.0 (unauthenticated)
Terraform has been successfully initialized!

Initializing will tell Terraform to download any providers we use into the .terraform directory for our project. Next, we can run a plan to see what Terraform will do.

$ terraform plan

data.oci_identity_availability_domains.ads: Reading...
data.oci_identity_availability_domains.ads: Read complete after 0s [id=IdentityAvailabilityDomainsDataSource-42880534]

Changes to Outputs:
  + ads = [
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-1"
        },
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-2"
        },
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-3"
        },
    ]

You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.

Normally, after we get a plan from Terraform, we would apply those changes and Terraform would create the resources in OCI. But our code only retrieves some values and doesn’t build anything yet, so there is no need to apply our changes. What we have done is verify the connection between Terraform and our OCI tenancy.

Before we move on, add your changes into the repository and commit them. We will also add the .terraform/* directory to our .gitignore file. The directory stores the provider binaries and we don’t need to include those in our repo.

$ echo ".terraform/**" | tee -a .gitignore 
$ echo ".terraform.lock.hcl" | tee -a .gitignore $ git add . 
$ git commit -m "OCI Cloud Shell Connection Complete" 
$ git push origin main

If you want Git to remember your Git provider username and password, you can store them with the credential helper like this.

$ git config --global credential.https://github.com.username <username> 
$ git config credential.helper store 
$ git push origin main 
Password for 'https://<username>@github.com':

Once you enter the password, Git will save it into the local credential store in Cloud Shell.
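
By default, the store helper keeps the credentials in plain text in ~/.git-credentials under your home directory, so treat that file like any other secret. You can confirm which helper is active and that the file exists:

$ git config --get credential.helper
store
$ ls ~/.git-credentials
/home/dan/.git-credentials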

Terraform State

When you create resources using Terraform, Terraform will track the state of those objects so it knows if it needs to create or update any resources. Currently, Terraform is tracking the state locally on the Cloud Shell file system. This is useful if you are testing something, but for anything permanent or in a shared environment, we want to store our state in a secure remote location. The state files can also contain sensitive information like private key data and passwords, so it is best to keep the state files someplace more secure. There are many options for remote state storage including Terraform Cloud and OCI Object Storage.
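
While the state is still local, Terraform’s own commands are the safest way to inspect it (rather than opening terraform.tfstate by hand). At this point the list will be nearly empty, but these become more useful as we add resources:

$ terraform state list    # every resource and data source currently tracked in state
$ terraform show          # a human-readable dump of the current state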

We will use OCI Object Storage to store our state file, and we will use the OCI-CLI to create the Object Storage bucket. There is a bit of a chicken-and-egg situation here because the bucket has to exist before Terraform can use it to track state. We will use a script with the OCI-CLI to create the bucket and configure Terraform to use our OCI credentials to access it.

First, we need to grab the Object Storage namespace for our tenancy. The OCI-CLI will return the data in JSON so we use jq to parse the returned data.

$ namespace=$(oci os ns get | jq -r .data) 
$ echo "export TF_VAR_namespace=$namespace" | tee -a ~/.bashrc 
$ source ~/.bashrc

Next, we create a new bucket in Object Storage.

$ oci os bucket create --name terraform_state --namespace $namespace --compartment-id $OCI_TENANCY

{
  "data": {
    "compartment-id": "ocid1.tenancy.oc1..xxxx",
        ...
    "name": "terraform_state",
    "namespace": "xxxx",
    "public-access-type": "NoPublicAccess",
    ...
}

We have a bucket created, but we need credentials to read and write the state files. OCI Object Storage is compatible with Amazon’s S3 API, so we will use Terraform’s S3 backend for state storage. Create a new customer secret key and pull the S3-style values to use with Terraform.

$ read aws_key_id aws_key_value <<< $(oci iam customer-secret-key create --user-id $OCI_CS_USER_OCID --display-name 'terraform-state-rw' --query 'data.{"AWS_ACCESS_KEY_ID":"id","AWS_SECRET_ACCESS_KEY":"key"}' | jq -r '.AWS_ACCESS_KEY_ID,.AWS_SECRET_ACCESS_KEY')

In the command, we redirected the credentials into variables so that we can handle them securely. We will add the AWS-style access credentials to a new file, ~/.aws/credentials, which Terraform will read automatically.

$ mkdir -p ~/.aws
$ tee ~/.aws/credentials <<EOF >/dev/null
[default]
aws_access_key_id=$aws_key_id 
aws_secret_access_key=$aws_key_value 
EOF

In the GitHub repository, I also created a create_bucket.sh script that runs these commands.
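
If you would rather script it yourself, here is a minimal sketch of what such a script could look like, simply chaining the commands from this section together (the bucket name, credential display name, and file locations are the ones used above; the only variation is parsing the secret JSON with jq into separate variables instead of the read one-liner):

#!/bin/bash
# Sketch: create the state bucket and an S3-compatible credential, then write ~/.aws/credentials
namespace=$(oci os ns get | jq -r .data)
oci os bucket create --name terraform_state --namespace "$namespace" --compartment-id "$OCI_TENANCY"

# Create the customer secret key and capture the id/key pair
secret_json=$(oci iam customer-secret-key create --user-id "$OCI_CS_USER_OCID" --display-name 'terraform-state-rw')
aws_key_id=$(echo "$secret_json" | jq -r '.data.id')
aws_key_value=$(echo "$secret_json" | jq -r '.data.key')

mkdir -p ~/.aws
tee ~/.aws/credentials <<EOF >/dev/null
[default]
aws_access_key_id=$aws_key_id
aws_secret_access_key=$aws_key_value
EOF
chmod 600 ~/.aws/credentials

Either way, the end result is the same ~/.aws/credentials file.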

Our bucket and the credentials to access it are now configured. Next, we update our Terraform configuration to use the “s3” backend for our state files. Add this to the provider.tf file.

variable "namespace" {
  default = ""
}

terraform {
  backend "s3" {}
}

To keep our configuration repeatable, we will put our tenancy-specific settings in a config.s3.tfbackend file. The terraform {} block doesn’t allow variables, so you must add your specific namespace and region directly to this file.

bucket   = "terraform_state"
key      = "minecraft"
region   = "<your region>"
endpoint = "https://<your namespace>.compat.objectstorage.<your region>.oraclecloud.com"
skip_region_validation      = true
skip_credentials_validation = true
skip_metadata_api_check     = true
force_path_style            = true

You can also exclude this file from our Git repo since it contains sensitive information. If you are planning to run your Terraform repo in multiple places, you will need to copy this file to your other servers.

$ echo "*.tfbackend" | tee -a .gitignore

We need to tell Terraform we have a new storage location for our state file.

$ terraform init -backend-config=config.s3.tfbackend

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/oci from the dependency lock file
- Using previously-installed hashicorp/oci v5.13.0

Terraform has been successfully initialized!
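
One note in case you already had resources tracked in the local state before switching backends: terraform init can migrate the existing terraform.tfstate into the new S3 backend for you. Our state is effectively empty, so it isn’t needed here, but the option looks like this:

$ terraform init -migrate-state -backend-config=config.s3.tfbackend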

Terraform will now store its state files in OCI Object Storage, in the bucket named “terraform_state”. We can use the OCI-CLI to verify the bucket exists.

$ oci os bucket list --compartment-id $OCI_TENANCY

{
  "data": [
    {
      "compartment-id": "ocid1.tenancy.oc1..xxxx",
      "name": "terraform_state",
      "namespace": "xxxx",
      ...
    }
  ]
}
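
Once Terraform actually writes state to the backend (after an apply or a state migration), the state object itself will show up as well; its name matches the key we set in config.s3.tfbackend. You can list the bucket’s contents with the OCI-CLI:

$ oci os object list --bucket-name terraform_state --namespace $namespace

{
  "data": [
    {
      "name": "minecraft",
      ...
    }
  ]
}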

We now have a Terraform provider configured for our OCI tenancy and we are ready to start building resources.

Local Shell Provider Connection

To connect Terraform to OCI from your local computer, we have to change how the provider gets the credentials it uses to authenticate. If you installed and configured the OCI-CLI locally, we can reuse the profile that was created when setting up the OCI-CLI. Change your provider.tf file to use the config file instead of InstancePrincipal authentication.

provider "oci" {
  tenancy_ocid = var.tenancy_id
  config_file_profile=var.config_file_profile
}

variable "config_file_profile" {
  default = "DEFAULT"
}

Next, we need to set up the TF_VAR_ environment variables. Unlike in Cloud Shell, where the values come from pre-set environment variables, here we pull them from our OCI-CLI config file. (The tail -1 is optional; in my case it pulls the last profile’s values from the OCI-CLI config file since my local shell is configured for more than one OCI tenancy.)

echo "export TF_VAR_tenancy_id=$(grep tenancy ~/.oci/config | cut -d '=' -f2 | tail -1)" | tee -a .variables
echo "export TF_VAR_region=$(grep region ~/.oci/config | cut -d '=' -f2 | tail -1)" | tee -a .variables
echo ".variables" | tee -a .gitignore
source .variables

Unlike our Cloud Shell setup, we will need to run the source .variables command each time we run the Terraform code locally. You can add that command to your shell’s RC files if you want it to be automatic.
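
If you do want it to be automatic, one option is to source the file from your shell’s rc file. The path below is only an example; point it at wherever you cloned the repo on your machine.

$ echo "source ~/repos/minecraft/.variables" | tee -a ~/.bashrc    # example path; adjust to your local clone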

Before we generate a plan locally, we need to re-initialize our Terraform provider. Because we excluded the .terraform/ directory from our Git repo, initializing will download the correct provider binaries to our machine. This is very useful if you are switching platforms (e.g., Linux or Cloud Shell to Windows or Mac).

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/oci...
- Installing hashicorp/oci v5.13.0...
- Installed hashicorp/oci v5.13.0 (signed by HashiCorp)

Terraform has been successfully initialized!

Now we can run our Terraform code locally and test our OCI connection.

$ terraform plan

401-NotAuthenticated, Failed to verify the HTTP(S) Signature

On my computer, Terraform didn’t play nicely with the encrypted RSA API key generated by the OCI-CLI. While it is best to keep the API key encrypted, to make Terraform work with the OCI-CLI config I generated an unencrypted (“clear”) copy of the API key to use with Terraform. (https://github.com/oracle/terraform-provider-oci/issues/512)

$ openssl rsa -in ~/.oci/oci_api_key.pem -check | tee ~/.oci/oci_api_key_clear.pem 
Enter pass phrase for /Users/dan/.oci/oci_api_key.pem:

$ oci setup repair-file-permissions --file /Users/dan/.oci/oci_api_key_clear.pem

Then, update your ~/.oci/config file and change the key_file entry to point to our new _clear.pem file.

key_file=/Users/dan/.oci/oci_api_key_clear.pem

Now we can test our Terraform provider again.

$ terraform plan                            
data.oci_identity_availability_domains.ads: Reading...
data.oci_identity_availability_domains.ads: Read complete after 0s [id=IdentityAvailabilityDomainsDataSource-42880534]

Changes to Outputs:
  + ads = [
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-1"
        },
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-2"
        },
      + {
          + compartment_id = "ocid1.tenancy.oc1..xxxx"
          + id             = "ocid1.availabilitydomain.oc1..xxxx"
          + name           = "xxxx:US-CHICAGO-1-AD-3"
        },
    ]

OCI – 01 – Setting Up a New Tenancy

Recently, I created a new Oracle Cloud Infrastructure environment and decided to document the steps I went through to go from a brand-new tenancy to a functioning PeopleSoft environment. These blog posts can serve as a guide for administrators who are new to OCI, or for experienced administrators who want to look at a different way to build and manage their tenancy.

I’m taking a Terraform-first approach to building the tenancy and strive to keep manual steps to a bare minimum when building objects in the cloud. I want to leverage infrastructure-as-code and automation every place we can so this setup can be replicated by anyone.

If you want to follow along, you can use an existing tenancy and build everything in a separate domain or compartment so it doesn’t interfere with your existing setup. Or, you can sign up for a test OCI tenancy and start from scratch just like me.

You can sign up for an OCI tenancy here and get $300 in free credits, plus access to the Always Free resources. Once you have created your new tenancy and logged in, you are ready to get started.

OCI Cloud Shell

All of the configuration and setup for this tenancy will be done using Terraform (and the OCI Command Line, or OCI-CLI, if needed). These are command-line tools that require a little setup and configuration to use. To make life easier, we can use the OCI Cloud Shell that is included with OCI. The Cloud Shell has both the OCI-CLI and Terraform installed and is pre-configured for our tenancy.

To access the Cloud Shell, click on the Developer Tools icon in the header and select “Cloud Shell”.

The first time you launch the Cloud Shell, you are prompted to take a tour of its features. If you aren’t familiar with it, go ahead and take the tour. If you’d rather do the tutorial later, you can start it from any Cloud Shell session with the cstutorial command.

Let’s run a quick test in our new Cloud Shell by checking our availability domains in the new tenancy.

oci iam availability-domain list
{
  "data": [
    {
      "compartment-id": "ocid1.tenancy.oc1..xxxx",
      "id": "ocid1.availabilitydomain.oc1..xxxx",
      "name": "xxxx:US-CHICAGO-1-AD-1"
    },
    {
      "compartment-id": "ocid1.tenancy.oc1..xxxx",
      "id": "ocid1.availabilitydomain.oc1..xxxx",
      "name": "xxxx:US-CHICAGO-1-AD-2"
    },
    {
      "compartment-id": "ocid1.tenancy.oc1..xxxx",
      "id": "ocid1.availabilitydomain.oc1..xxxx",
      "name": "xxx:US-CHICAGO-1-AD-3"
    }
  ]
}

You should see the Availability Domains assigned to your tenancy. The OCI-CLI requires authentication to connect to your account, and the Cloud Shell pre-configures that access so you can get started quickly. Cloud Shell uses “InstancePrincipal” authentication to connect to the tenancy.
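
Cloud Shell also pre-sets a handful of OCI_* environment variables that describe the session; we will lean on a few of them (OCI_REGION, OCI_TENANCY, and OCI_CS_USER_OCID) in later posts. Your output may include more than what is shown here.

env | grep ^OCI_
OCI_REGION=us-chicago-1
OCI_TENANCY=ocid1.tenancy.oc1..xxxx
OCI_CS_USER_OCID=ocid1.user.oc1..xxxx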

The OCI-CLI also supports tab completion, which you can enable with a setup command.

oci setup autocomplete
To set up autocomplete, we would update few lines in rc/bash_profile file.

===> Enter a path to an rc file to update (file will be created if it does not exist) (leave blank to use '/home/dan/.bashrc'): 
Using tab completion script at: /home/oci/lib/oracle-cli/lib64/python3.8/site-packages/oci_cli/bin/oci_autocomplete.sh
We need to add a few lines to your /home/dan/.bashrc file. Please confirm this is ok. [Y/n]: Y
Success! 
 Restart your terminal or Run '[[ -e "/home/dan/lib/oci_autocomplete.sh" ]] && source "/home/dan/lib/oci_autocomplete.sh"' for the changes to take effect.

Run the command listed to reload the shell session with autocomplete enabled.

The last change I made to the shell is to configure the prompt, or PS1 variable. Copy and paste the command below into your Cloud Shell session.

echo "PS1=$'\\[\\033]0;\\W\\007\\]\\[\E[1m\\]\\n\\[\E[38;5;166m\\]\\u\\[\E[97m\\] at \\[\E[38;5;136m\\]cloudshell(\$OCI_REGION)\\[\E[97m\\] in \\[\E[38;5;64m\\]\\w\\n\\[\E[97m\\]$ \\[\E(B\E[m\\]'" | tee -a ~/.bashrc
source ~/.bashrc
clear

The prompt should now show your user, the Cloud Shell region, and the working directory. I prefer the descriptive language for the prompt, and the colors can be used on remote servers to differentiate production vs. non-production.

Local OCI Command Line

If you don’t want to use the Cloud Shell and would rather run the OCI-CLI and Terraform on your local machine, you can install and configure the OCI Command Line (oci) yourself. The Oracle Docs have a good reference for installing the tool. I’ll be installing the OCI-CLI on my Mac.

brew update && brew install oci-cli

oci --version
3.33.1

When running the OCI-CLI on your local machine, you have to configure authentication yourself (unlike Cloud Shell, which does this for you). There is a handy command in the OCI-CLI that will walk you through the setup. The setup requires a few manual steps in the OCI Console.

oci setup config
Enter a location for your config [/Users/dan/.oci/config]:
Enter a user OCID:

To get your user account’s OCID (Oracle Cloud ID), you need to log into the OCI Console. Click on your user icon (top-right) and click “My profile”. On the “My profile” page, click the “Copy” link for the OCID field.

Enter a user OCID: ocid1.user.oc1..xxxx
Enter a tenancy OCID: 

To find the Tenancy OCID, click on the user icon (top-right) and click on “Tenancy: xxxx”. Click the “Copy” link for the OCID field.

Enter a tenancy OCID: ocid1.tenancy.oc1..xxxx
Enter a region by index or name(e.g.
1: af-johannesburg-1 ...): 

Enter your home region and generate a new API Signing Key.

Enter a region by index or name: us-chicago-1
Do you want to generate a new API Signing RSA key pair? [Y/n]: Y
Enter a directory for your keys to be created [/Users/dan/.oci]: 
Enter a name for your key [oci_api_key]: 
Public key written to: /Users/dan/.oci/oci_api_key_public.pem
Enter a passphrase for your private key ("N/A" for no passphrase): 
Repeat for confirmation: 
Private key written to: /Users/dan/.oci/oci_api_key_acedan.pem
Fingerprint: 40:99:xx::xx
Do you want to write your passphrase to the config file? [y/N]: y
Config written to /Users/dan/.oci/config

Before we are done, we have to add our API public key to our OCI account. This will allow the OCI-CLI to authenticate as our user. Copy the public key that was generated in the prior step.

cat ~/.oci/oci_api_key_public.pem 
-----BEGIN PUBLIC KEY-----
MIIBIjANBgxxxx
-----END PUBLIC KEY-----

In the OCI Console, click on the user icon (top-right) and click “My profile”. Select the “API Keys” link, and click “Add API key”. Select the “Paste a public key” option, paste in the contents of oci_api_key_public.pem, and click Add.

The OCI Console will give you a preview of the configuration file you can use to connect. This should already match the configuration created by the oci setup config command we ran. You can verify the files are the same.

cat ~/.oci/config
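
For reference, the config file is a small INI-style profile. Yours should look roughly like this (OCIDs redacted), with key_file pointing at the private key generated earlier and pass_phrase present only if you chose to save it:

[DEFAULT]
user=ocid1.user.oc1..xxxx
fingerprint=40:99:xx::xx
tenancy=ocid1.tenancy.oc1..xxxx
region=us-chicago-1
key_file=/Users/dan/.oci/oci_api_key.pem
pass_phrase=xxxx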

Now, we can test our locally installed OCI-CLI.

oci iam availability-domain list --output table
+-------------------------+------------------------------------+------------------------+
| compartment-id          | id                                 | name                   |
+-------------------------+------------------------------------+------------------------+
| ocid1.tenancy.oc1..xxxx | ocid1.availabilitydomain.oc1..xxxx | xxxx:US-CHICAGO-1-AD-1 |
| ocid1.tenancy.oc1..xxxx | ocid1.availabilitydomain.oc1..xxxx | xxxx:US-CHICAGO-1-AD-2 |
| ocid1.tenancy.oc1..xxxx | ocid1.availabilitydomain.oc1..xxxx | xxxx:US-CHICAGO-1-AD-3 |
+-------------------------+------------------------------------+------------------------+

At this point, we have a working OCI-CLI installation (both locally and in Cloud Shell). Next, we’ll start building resources in our tenancy.

Building a PeopleSoft Image – OCI Marketplace

In this video we will build a new PeopleSoft Image to check out some of the newer features. We will use OCI to host our image because the PeopleSoft team provides us with images that are ready to build. We just need to provide some passwords and away we go.

OCI Marketplace Images

The OCI Marketplace is where you can find pre-packaged software ready to deploy on OCI. With each new image release, the PeopleSoft team pushes a new build for each application. For this demo, we will use Finance Image 46.

There are a few benefits to the OCI Marketplace-based PeopleSoft Images over other methods.

  • No need to download DPK files to a server
  • They come with Elasticsearch and Kibana pre-packaged
  • They are updated each release and easy to build for developers
  • Doesn’t require Cloud Manager or MOS Download rights

There are a few drawbacks to using these images though.

  • They don’t support the new VM.Standard Flex shapes
  • You need an OCI account and privileges to create a new instance (and virtual cloud network)

Boot Volume

When building a Marketplace-based Image, you must increase the boot volume to at least 200GB. This will ensure there is plenty of space to extract the DPK files and install PeopleSoft.

Generate Passwords

You can enter these by hand – it’s a JSON string – but there are different requirements for each password. You can use the sample JSON below for reference, but let’s take a quick tangent and I’ll show you how I generated my passwords.

The secret is psst

To run psst, you need Python and Git installed:

For Windows, you can use PowerShell and Chocolatey to install them:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; 
iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

choco install python3 -y
choco install git -y

For Linux (macOS users can install both with Homebrew):

yum install git -y
yum install python3 -y

The psst tool is hosted on GitHub and you can clone the code to run it:

git clone https://github.com/psadmin-io/psst.git

cd psst
pip install .
psst secrets generate -oci

{
    "connect_pwd": "eu9P3HCj6WwI95vj498JX6Yzjk6VGS",
    "access_pwd": "hsRqmDFjyrntMEJ74fMBBwMKi",
    "admin_pwd": "0WkAoB531GXr#2AtvpNo9SZ5u-_gEh",
    "weblogic_admin_pwd": "#ma1Q4%7SrIyKmpfIT3iS!&1Q22o$x",
    "webprofile_user_pwd": "xSFb74gd2YeyvkXjh1s9tI7wDK9Dew",
    "gw_user_pwd": "xtc4IxtBkDiNpJCMT04wRXGUNHG4bQ",
    "domain_conn_pwd": "G2rxzYThC2BTKq5DfHc",
    "opr_pwd": "78rN8StJt8rvSaUwB1FAWgEMK"
}

You can copy the JSON and paste it directly into the OCI Console’s “cloud-init” section.

Host File

Our instance is created with a public IP address, but the DNS name is private to your OCI Virtual Cloud Network. To translate between the two, we will add a hosts entry on our computer. Grab both the Public IP and Internal FQDN values from the Instances page.

For Linux and macOS

echo "<ip address> <fqdn>" | sudo tee -a /etc/hosts

For Windows, add this line to the end of the file c:\Windows\System32\drivers\etc\hosts

<ip address> <fqdn>

For example, my hosts entry looks like this:

129.213.146.185 fscm046.subnet12081732.vcn12081732.oraclevcn.com

SSH Key

Linux, macOS, and WSL Users:

chmod 600 ~/Downloads/ssh-key-2022-12-08.key 
ssh -i ~/Downloads/ssh-key-2022-12-08.key opc@<IP>

PuTTY for Windows

  • Convert SSH Key to Putty Format with PuTTYGen
  • Connect with PuTTY

Ingress Rules in OCI

  1. PIA Rule

    • CIDR Block: 0.0.0.0/0
    • Destination Port: 8000
  2. Kibana Rule

    • CIDR Block: 0.0.0.0/0
    • Destination Port: 5601
  3. TNS Rule (Optional – Required for App Designer or SQL access)

    • CIDR Block: 0.0.0.0/0
    • Destination Port: 1521

(Optional) Add firewalld Rule for TNS

sudo firewall-cmd --permanent --zone=public --add-port=1521/tcp
sudo firewall-cmd --reload
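
To confirm the change, list the open ports for the zone; 1521/tcp should appear in the output (along with any other ports already open).

sudo firewall-cmd --zone=public --list-ports
1521/tcp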

#328 – psadmin.conf 2022 Recap

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson

This week on the podcast, Dan and Kyle recap the psadmin.conf 2022 conference.

Show Notes

  • Getting the Most Out of PeopleTools @ 2:30
  • Kibana Lab @ 10:15
  • Monday Open Lab @ 13:00
  • Automating Maintenance Windows @ 15:00
  • Journey to the Cloud @ 16:30
  • OCI Migration @ 18:45
  • Environment Validation Lab @ 21:30
  • Tuesday Open Lab @ 25:00
  • Securing PS with Apache Rules @ 27:00
  • PeopleTools Platform Overview @ 29:45
  • Lightning Talks @ 33:00
  • Real-Time Indexing @ 41:30

#327 – HAProxy and OCI Load Balancer

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson

This week on the podcast, Kyle and Dan talk about mapping remote client IPs to PeopleSoft logs and tables, and then discuss the benefits of load balancing with HAProxy and the OCI Load Balancer as a Service.

Show Notes

#325 – psadmin.io Themes

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson

This week on the podcast, Kyle and Dan talk about the new psadmin.io Themes for PeopleTools 8.59 (and 8.58), using the OCI Auto Scale project to save money on OCI, and the benefits of blogging.

Show Notes

#323 – patchBot

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson

This week on the podcast, Dan discusses his experience running a DR test in Oracle Cloud. Dan and Kyle also talk about patchBot, a tool to get notifications about new patches on Oracle Support, and some updates to their Rundeck OCI Node Classifier.

Show Notes

#319 – 8.59 Infrastructure DPK

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson

This week on the podcast, Dan and Kyle talk about the PeopleTools 8.59 Infrastructure DPK and how it works, and also discuss building PeopleSoft Images from the OCI Marketplace.

Show Notes

#312 – Stalling at 30%

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson

This week on the podcast, Kyle and Dan talk about the new Rundeck Cloud product, working with the PeopleSoft DDDAudit report, preventing App Designer from stalling at 30%, and updating the ACM with SQL.

Show Notes

  • Rundeck Cloud @ 4:15
    • Rundeck and OCI MySQL Service
  • DDDAudit Table Output @ 15:00
  • App Designer Stalling at 30% @ 18:45
  • Working with ACM via SQL @ 25:30

#311 – Cloud Manager 13

The PeopleSoft Administrator Podcast hosted by Dan Iverson and Kyle Benson

This week on the podcast, Dan shares some new PeopleTools ideas for Change Assistant and Control-J, and Kyle and Dan talk about the new PeopleSoft Cloud Manager 13 features.

Show Notes