Upcoming Speaking Sessions

Hello there! Just a quick update to let you know Dan and Kyle have some upcoming speaking sessions. These are some great sessions, so make sure to register and check them out!

Environment Template Security in Cloud Manager

Now that Cloud Manager is here, we have the self-service ability to create PeopleSoft environments in OCI. Cue Uncle Ben… “With great power comes great responsibility.” Having a self-service portal that allows for the creation of these environments is fantastic, but how do we put some controls around this awesome power? This is where Environment Template security comes into play.

To create an Environment in Cloud Manager, you first need an Environment Template. This template is created using some General Details, a Topology, and finally Security. It is this security detail that will help us control who can use these templates to create environments in the future. When you are creating a template, you will see step 3 – Define Security in the wizard. Let’s break down what our options are.

Assign Template to Zone(s)

Templates can be assigned to one or more Zones. As of Image 11, there are three zones to choose from:

  • Development
  • Test
  • Production

A Zone is just “a logical grouping of environments,” according to Oracle’s Cloud Manager FAQ. At this time, it doesn’t serve any purpose beyond helping you organize your environments. I could see a level of security being added to Zones in the future. If not by Oracle, maybe a custom bolt-on?

Assign Template to Role(s)

Templates can also be assigned to PeopleSoft security Roles. Any user that has a Role specified in this section will have the ability to create an Environment based on this template. Cloud Manager delivers three roles intended to be used with templates:

  • Cloud Administrator (PACL_CAD)
  • Cloud PeopleSoft Administrator (PACL_PAD)
  • Self-Service User (PACL_SSC)

As you would expect with PeopleSoft security, you are free to create and use your own custom roles here. I think the delivered roles make it clear how Oracle sees the breakdown of potential users: users who administer OCI resources, users who administer PeopleSoft, and PeopleSoft users who might want ad-hoc environments (think developers, or maybe even business staff looking for demos). I could see the OCI and PeopleSoft admin roles being combined often. Also, the self-service user might be split into technical and functional roles, or disabled altogether. Each organization will have to review this for itself and come up with a good policy. Just keep in mind that you can add multiple roles to each template.

Creating Environments

Once the security and other details are added to a template, it will be available to use when creating an Environment.

Only the templates the user has access to will appear in the Template Name dropdown. The Zone dropdown will also be populated with the available zones from the selected template. If only a single zone was added, it will be auto-selected and read-only.

Overall, I feel that Environment Template security gives us enough control to offer a level of self-service environment deployment if desired. I do look forward to seeing actual functionality added to Zones. It might be easier to manage this security if we could control access by zone, rather than strictly by individual template.

Taking PeopleSoft Swimming in an OCI Instance Pool

I was recently studying for some OCI Certification exams when I came across the topic of Instance Pools. I’ve known about this OCI feature for a while but realized I somehow never thought seriously about using them with PeopleSoft. So, I decided to do a proof of concept and write about it. What I found is that with a few changes to a traditional PeopleSoft topology and configuration, you can get this working!

Please keep in mind that this is a proof of concept. I know I didn’t touch on all PeopleSoft configuration and features, so there is likely more work to be done to make this setup viable. I will try to flag some “gotchas” that come to mind, but feel free to call out potential issues in the comments. If we can turn this POC into something usable, it might even open the door for container exploration!

Getting Started

To get started, we first need to understand what an Instance Pool is and what is required to create one. An Instance Pool is a feature that allows you to create and manage multiple compute instances that share the same configuration. What makes them powerful is the ability to scale instance count and attach to a Load Balancer. You can scale on a schedule, with metric-based rules, with Terraform or a script using the OCI CLI, or manually in the OCI Console. Adjusting the instance count will create or terminate instances automatically. Also, when the pool is attached to a load balancer, the instances are automatically added to or removed from the load balancer’s backend set. As you can see, this type of functionality would be great in the PeopleSoft space, where the load on a system can vary day-to-day. Think “timesheet day,” open enrollment, course registration, etc.
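
For example, a scheduled job could resize the pool from a script with the OCI CLI. A minimal sketch, assuming the CLI is configured and using a placeholder pool OCID:

    # Scale the pool to four instances; the pool OCID is a placeholder
    oci compute-management instance-pool update \
      --instance-pool-id ocid1.instancepool.oc1..example \
      --size 4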

To complete this POC, we first need a few things.

Load Balancer

We will leverage the Load Balancing service that OCI offers. This service’s integration with our Instance Pool will give us the automation that we need. Creating a Load Balancer is straightforward, but there are a few things to keep in mind for this POC.

  1. Select a visibility type of Public if you hope to access it from the internet.
  2. Don’t worry about adding Backends at the time of Load Balancer creation. This will be handled by the Instance Pool in the future.
  3. Make sure to enable Session Persistence on the Backend Set. This is needed for PeopleSoft to work correctly.

Custom Image

An Instance Pool uses an Instance Configuration to spin up new instances on demand. An Instance Configuration contains all the configuration details, like network, OS, shape, and image, needed to create an instance. The image can be an Oracle-provided platform image (Oracle-Linux-7.x, etc.) or a custom image that you create.

This image marks the first real decision on how to approach this POC. What we are after is a server with PeopleSoft web and application domains fully deployed and running. I see two approaches to this. One, use a standard platform image and then use cloud-init to configure and deploy the domains using DPK at instance creation. Two, create a custom image with the domains already deployed. I chose to leverage the custom image approach for this POC. I felt this was the fastest, most reliable way to create these instances. With rule-based scaling, and maybe to a lesser extent scheduled scaling, we ideally want these instances created quickly, and DPK takes time.

If startup time at creation isn’t a concern, then an approach using cloud-init is probably the way to go. One thought I had was to keep all middleware installations on FSS. The cloud-init script could mount that and have DPK focus solely on deploying PS_CFG_HOME. That would really speed things up. Maybe something to try in a future blog post!
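
As a rough sketch of that idea, the cloud-init user data could be a shell script along these lines (the mount target IP and export path are placeholders, and the DPK step is left as a comment):

    #!/bin/bash
    # Hypothetical cloud-init user data: mount shared middleware from FSS,
    # then let DPK build only PS_CFG_HOME on top of it
    mkdir -p /opt/psft
    mount -t nfs 10.0.0.10:/psft-middleware /opt/psft
    # A DPK run scoped to domain deployment would go here, creating
    # PS_CFG_HOME against the pre-installed middleware mounted above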

Create Base Instance

I found a few things that were needed when prepping a PeopleSoft webapp server for Custom Image creation. I started by creating a new instance using the latest Linux 7 platform image. Next, I downloaded the latest PeopleTools 8.58 DPK to the instance and ran the psft-dpk-setup.sh script, passing the --env_type midtier --domain_type all options. This gave me a server with working web and application domains.
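
For reference, the invocation looked like this (run from the directory where the DPK setup script was extracted):

    ./psft-dpk-setup.sh --env_type midtier --domain_type all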

Update PeopleSoft Configuration

If we stopped here and created a Custom Image from this base instance, we might have some success. However, every future instance created from this image would have the exact same PeopleSoft domain configuration. Some parts of a PeopleSoft domain configuration are required to be unique, and others will have a hostname “hardcoded.” Here is a list of changes I made to address these concerns before I created a Custom Image. As mentioned before, there are likely more changes needed, but these were enough to get this POC working.

  1. Update Cookie Configuration
    • Make sure your cookie configuration in the weblogic.xml file is set not to include a hostname.
    • <cookie-name>DBNAME-PORTAL-PSJSESSIONID</cookie-name>
  2. Update configuration.properties
    • Set the psserver= value to use localhost, since we are using a webapp deployment approach.
    • This will pin each web domain to use its local app domain, with load balancing and failover handled strictly by the OCI Load Balancer.
  3. Update setEnv.sh
    • By default, the ADMINSERVER_HOSTNAME variable is set to the hostname the domain was installed on.
    • Change this to be a dynamic value, driving off of the $HOSTNAME environment variable.
      • ADMINSERVER_HOSTNAME=${HOSTNAME}
  4. Update psappsrv.cfg
    • The [Domain Settings]\Domain ID value should be unique across all your PeopleSoft domains.
    • It is common for domain IDs to follow a sequence number pattern. Example: APPDOM1, APPDOM2, etc.
    • Update the value to use an environment variable for the sequence number, ensuring a unique ID.
      • Domain ID=APPDOM%PS_DOMAIN_NBR%
      • We will discuss ideas on how to set this %PS_DOMAIN_NBR% variable later.
  5. Update psft-appserver service script
    • To ensure our domain configurations are correct and domains start properly, we should enforce a domain configure during instance creation and boot.
    • For this POC, I simply added a psadmin configure command to the domain service script.
    • For each application domain installed by the DPK, there is a service setup using a script found here:
      • $PS_CFG_HOME/appserv/APPDOM/psft-appserver-APPDOM-domain-appdom.sh
    • Update the start_application_server function in this script, adding the following configure command before the start command (see the sketch after this list).
      • su - $APPSRV_ADMIN -c "$APPSRV_PS_HOME/bin/psadmin -c configure -d $APPSRVDOM" 1>>$LOG_FILE 2>&1
  6. Create $PS_DOMAIN_NBR
    • In the psadm2 ~/.bashrc file, export a $PS_DOMAIN_NBR variable.
    • The goal for this is to generate a unique number that can be used in your domain configuration.
    • When an Instance Pool creates an instance, a random number is appended to the hostname.
      • Example: webapp-634836, webapp-973369, etc.
    • To leverage this appended number for $PS_DOMAIN_NBR, you can use something like this:
      • export PS_DOMAIN_NBR=$(hostname | rev | cut -d- -f1 | rev)
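
Putting items 5 and 6 together, the relevant pieces look roughly like this (a sketch; the file locations and variable names follow the DPK defaults described above):

    # psadm2 ~/.bashrc - derive a unique domain number from the
    # suffix the Instance Pool appends to the hostname (webapp-634836 -> 634836)
    export PS_DOMAIN_NBR=$(hostname | rev | cut -d- -f1 | rev)

    # psft-appserver-APPDOM-domain-appdom.sh - inside start_application_server,
    # run a configure before the existing start command
    su - $APPSRV_ADMIN -c "$APPSRV_PS_HOME/bin/psadmin -c configure -d $APPSRVDOM" 1>>$LOG_FILE 2>&1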

Create a Custom Image

Before creating the custom image, there are a few more things to clean up. First, remove any unwanted domains or software. Depending on your DPK deployment, a process scheduler domain may have been created. We won’t be needing this, so it can be manually removed using psadmin. Next, remove any unwanted entries in the /etc/hosts file. For this POC, I stripped it down to just the “localhost” entries. New entries will be added when instances are created with new hostnames. Last, stop all running PeopleSoft domains. This step is just to ensure a clean configuration.

Now we are ready to create the Custom Image. In the OCI Console, find your base instance. Under More Actions, select Create Custom Image.

Instance Configuration

After the Custom Image is done provisioning, we are ready to create our Instance Configuration. To start, we need to create an instance based on our new Custom Image. In the OCI Console, navigate to Compute > Custom Image and find the new image. Click on Create Instance. Enter the name, shape, networking, and other details we want used in our future Instance Configuration. Select Create and wait for the instance to provision.

Once provisioned, go ahead and validate that the instance looks good. Log in to the PIA, make sure the domains started correctly, etc. If all looks well, go back to your newly created instance in the OCI Console. Under More Actions, select Create Instance Configuration. Select a compartment, name, and optional tags, then click Create Instance Configuration.

Instance Pool

Now that we have our Instance Configuration, we can finally create our Instance Pool. In the OCI Console, navigate to the newly created Instance Configuration and click on Create Instance Pool. Select a name, the instance configuration, and the number of instances you would like in the pool. Next, we will configure the pool placement with Availability Domain details. Also, make sure to select Attach a Load Balancer, then select the Load Balancer we created earlier. Lastly, review the details and click Create. You can now follow the provisioning steps under Work Requests.

Once the provisioning work requests are completed, you can validate everything worked as expected. Under Instance Pool Details, you can navigate to Created Instances. You should see the newly created instances listed here. Also, you can navigate to Load Balancers, where you will see the attached load balancer. Click into this to verify that the Backend Set was updated with the newly created instances.

After validating the initial deployment, you can play around with the instance count. Navigate back to your Instance Pool and click on Edit. Try increasing or decreasing the Number of Instances. Monitor the work requests, then validate that instances were added or removed correctly.

Conclusion

This blog post was a quick, high-level walkthrough of using Instance Pools with PeopleSoft on OCI. My goal was mainly to prove that this was possible in a POC. However, I also wanted to help start a conversation about how this might look in a Production setting. I listed out a few things that could be done to get a custom image approach to work. What other configuration values did I not mention? What ideas do you have for a more dynamic approach, using DPK installs at creation time? As the OCI feature set around Load Balancers and Instance Pools grows, I think we will be more and more motivated to get a deployment like this working outside of the lab!

Refreshes with Data Guard and Pluggable Databases

A PeopleSoft refresh is one of the more common tasks for a PeopleSoft Administrator or DBA. There are many ways to accomplish this task, but it usually involves a database restore from backup, custom SQL refresh scripts, and potentially ACM steps. Depending on the level of effort put into the refresh scripts, there can also be manual steps involved. This approach is tried and true, but it tends to lack the speed and flexibility that we are starting to expect with the delivery of the PeopleSoft Cloud Architecture toolset. Nightly or ad-hoc refresh environments and quickly provisioned temporary environments are just a few use cases that would benefit greatly from refresh process improvements. I have been doing some exploring in this area recently and would like to share a few thoughts. First, a quick overview of some Oracle tools and features that I have been leveraging.

Data Guard

Oracle Data Guard is a tool that gives you high availability, data protection and disaster recovery for your databases. At a high level, it consists of a primary database and one or more standby databases. These standby databases are transactionally consistent copies of the primary database. Therefore, if the primary database goes down, the standby can be switched to primary and your application can keep on rolling.

Physical vs. Snapshot Standby

There are multiple types of standby databases that can be used with Data Guard. I’d like to briefly explain the difference between Physical Standby and Snapshot Standby. A physical standby is a database that is kept in sync with a primary database via Redo Apply. The redo data is shipped from the primary and then applied to the physical standby. A snapshot standby is basically a physical standby that was converted to a snapshot, which is like a point-in-time clone of the primary. At this point, we can use the snapshot to do development, testing, etc. When we are done with our snapshot, we can convert it back to a physical standby and it will once again be in sync with the primary database. This is accomplished by taking a restore point when the snapshot conversion happens. The whole time the standby is in snapshot mode, the redo data is still being shipped from the primary. However, it is NOT being applied. Once we convert back to physical, the restore point is used to restore and then all waiting redo is applied.
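
If the standby is managed by the Data Guard broker, each conversion is a one-liner from DGMGRL (the connect string and database name here are placeholders; without the broker, the equivalent SQL is ALTER DATABASE CONVERT TO SNAPSHOT STANDBY):

    # Flip the standby to a snapshot for testing, then back to physical later
    dgmgrl sys@prod "CONVERT DATABASE 'refresh_stby' TO SNAPSHOT STANDBY;"
    dgmgrl sys@prod "CONVERT DATABASE 'refresh_stby' TO PHYSICAL STANDBY;"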

Pluggable Databases

With Oracle 12c, we have the introduction of multitenant architecture. This architecture consists of Container (CDB) and Pluggable (PDB) databases. This setup makes consolidating databases much more efficient. It also gives us the ability to clone a PDB very easily. Cloning a PDB between different CDBs can even be done via a database link. Having a true multitenant setup does require additional licensing, but you can have a CDB-PDB setup without this extra licensing cost if you use a single instance (only one PDB per CDB). Here is a great video overview of multitenant.

Refresh Approach

Now that we have an idea of what these tools and features give us, let’s think about how to put them to use with database refreshes. Both of these approaches assume the use of Data Guard and PDBs. Having a true multitenant setup would be most efficient, but a single instance setup will work just fine. I would recommend a dedicated standby database for your refreshes, versus using the same standby you rely on for HA\DR. It would also make sense for the standby to be located on the same storage as the PDBs you will be refreshing. Neither of these is a requirement, but I think you will see better performance and lessen the risk to your HA\DR plan.

The use case we will use for this example is a sandbox PeopleSoft database. This sandbox will be scheduled to refresh nightly, giving the business an environment to test and troubleshoot in with data from the day before. The refresh could also be run ad-hoc if there is a need during the business day. So the goal is to have this fully automated, completing as fast as possible.

Clone Standby Approach

This approach will be to take a snapshot of our refresh standby database and clone it, overlaying our previous sandbox PDB. After this is completed, we will need to run custom SQL scripts or ACM steps to prepare the refreshed PDB. Finally, we will restore the refresh standby back to a physical standby database. This blog post by Franck Pachot gives a quick overview of the SQL commands needed to accomplish most of these steps; a sketch of the key SQL follows the list below.

  1. Convert the refresh source physical standby to a snapshot standby.
  2. Open the refresh source PDB as read only.
  3. Create database link between the sandbox target CDB and the refresh source PDB.
  4. Drop the sandbox target PDB and create a clone from the refresh source PDB.
  5. Open the new clone sandbox PDB.
  6. Clean up the sandbox PDB.
    • Check for errors.
    • Patch the PDB to the patch level of the CDB, if needed.
  7. Run custom SQL scripts or ACM steps against sandbox PDB for PeopleSoft setup.
  8. Convert the refresh source back to a physical standby.
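
A sketch of the key SQL for steps 3 through 5, run on the sandbox target CDB (the link name, user, password, and PDB names are all placeholders):

    sqlplus -s "/ as sysdba" <<SQL
    -- Link from the sandbox target CDB to the refresh source
    CREATE DATABASE LINK refresh_link
      CONNECT TO c##clone_user IDENTIFIED BY secret USING 'REFRESHCDB';
    -- Replace the old sandbox with a fresh clone of the source PDB
    DROP PLUGGABLE DATABASE sandbox INCLUDING DATAFILES;
    CREATE PLUGGABLE DATABASE sandbox FROM psrefresh@refresh_link;
    ALTER PLUGGABLE DATABASE sandbox OPEN;
    SQL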

Snapshot Standby Approach

This approach is somewhat similar, except we won’t be doing any cloning. Instead, we will be using the actual snapshot standby itself as our database. Since we know this sandbox database will be refreshed nightly, we can stay in snapshot standby mode all day and then switch to physical standby mode briefly at night, applying redo data to sync up with our primary production database. After that is done, we will switch back to snapshot mode and run our custom SQL scripts and ACM steps. This will require a dedicated standby database and should only be used with a frequent refresh schedule. Since redo data continues to ship during snapshot standby mode, it will start to back up. The volume of this accumulating redo data could become an issue if it gets too large, so you will need to do some analysis to make sure you can handle it based on your refresh interval.

  1. Create a sandbox PDB as a physical standby, with primary database being production.
  2. Convert sandbox to a snapshot standby.
  3. Run custom SQL scripts or ACM steps against sandbox for PeopleSoft setup.
  4. Use the snapshot standby sandbox PDB as your normal database, connecting app and batch domains, etc.
  5. Wait until next refresh interval.
  6. Convert sandbox from snapshot standby to physical standby.
    • The restore point will be used and waiting redo applied, syncing up with the current primary database state in production.
  7. Convert sandbox from physical standby to snapshot standby.
  8. Run custom SQL scripts or ACM steps against sandbox for PeopleSoft setup.
  9. Repeat.

Conclusion

Those are just two ideas, but you can see that there are probably many variations of these approaches that will work. Leveraging Data Guard and PDBs really gives you many options to choose from. I have been using the Clone Standby approach recently and have packaged up the scripts, including bouncing app\batch domains, as a job in Oracle Enterprise Manager. This gives me push-button refreshes with a turnaround time under 20 minutes. I have been able to provide ad-hoc refreshes for emergency production troubleshooting to the business multiple times in just a few months since implementing this approach. This is a very powerful tool to have, and it is well worth the effort to get your refreshes fast, efficient, and automated.

Reconnect 2016 – Day 3

Day 3 was the last day of the conference, and it was a busy one! With travel, and a presentation of my own, I didn’t get a chance to see everything I wanted to. I also didn’t get this post written until now because it took me 23 hours to get home (O’Hare was a mess due to weather). What a nightmare! That said, I had a great time. Here are my highlights from the last day.

Publishing PeopleSoft through WAP for AD FS Single Sign-on

  • This was my session.
  • Used Web Application Proxy as a reverse proxy for Supplier Portal.
  • Non-claims-based pre-authentication was done with AD FS using Kerberos.
  • Our enterprise standard is SAML, so we would like to use claims-based pre-auth in the future.

REST Your Weary Head! Query can handle it!

  • HyperGen talked about REST and Query Access Services.
  • REST is a lot better now in 8.54 and 8.55.
  • Queries via QAS can be run Synchronously or Asynchronously.
  • QAS is a good quick and easy way to create REST web services.
  • If you have a more complicated web service, you may want to create your own message handler via App Packages.

Rethinking Excel to CI

  • Kevin Weaver talks about his alternative to ExcelToCI.
  • His solution creates CI templates online instead of within Excel.
  • You upload a data sheet online and then process it in an App Engine.
  • You can find this and more on his blog – http://pskcw.blogspot.com/

PeopleSoft Single Sign-On with SAML 2.0

  • Vlad from GNC walks us through his great SSO solution.
  • This uses an IdP-initiated SSO profile.
  • Built an Assertion Consumer Service in Signon PeopleCode
  • Uses openSAML Java package to do the validation.
  • His slides had a ton of code snippets.

PeopleTools Product Panel

  • A panel of Oracle employees discuss PeopleTools.
  • MS Office will be removing ActiveX from their products soon.
  • The Finance team will be delivering a new solution for Journal Upload and other solutions that use XmlLink, most likely using IB (similar to ExcelToCI?).
  • Talked more about Cloud Manager.
  • Will be used in Oracle private cloud at first, then maybe other cloud providers, then maybe YOUR datacenter.
  • Will be placed within an Interaction Hub instance.
  • Elasticsearch is coming soon, likely around patch 10.
  • Question about PS Admins needing Puppet skills.
  • Oracle will keep making the DPKs better, but you will need these skills to customize them to fit your organization.

Reconnect 2016 – Day 2

Day 2 at Reconnect featured an Oracle Keynote and a bunch of deep dive sessions. Here is a quick overview.

Oracle PeopleSoft Product Line Update

  • The format was a discussion between Marc Weintraub and Paco Aubrejuan.
  • Even with all the cloud talk, no change in commitment to PeopleSoft.
  • Support for 9.1 ends Jan 2018, won’t be extended again.
  • Discussed different options with Cloud. For example, move demo\dev first.
  • Paco guessing 50% of customers at Reconnect will be in the cloud in 5 years.
  • Discussed PeopleSoft Cloud Architecture.
  • Talked about a new offering coming soon – Cloud Manager.
  • This will be a self-service interface for doing psadmin-type tasks
    • Deployments, Refreshes, Start, Stop, Clear cache, etc.
  • Should be coming to 8.55
  • Selective Adoption discussion.
  • Confirmed they use Scrum internally to develop and deliver images.
  • We should see the size of images stabilizing now with this approach.
  • Discussion on Fluid.
  • Pushing more Related Content.
  • Confirmed again that Elasticsearch is coming soon.
  • Marc and Paco mentioned “cloud” 66 times.

PeopleSoft Technology Roadmap

  • This was given by Jeff Robbins
  • Similar to Paco’s talk
  • A lot more Cloud talk
  • Did show a screenshot of Cloud Manager – looks very nice, sorry no pic.
  • Fluid big on Homepages, Tiles and Personalization options.
  • NavBar now remembers where you were when you return.
  • The idea is not just putting a Fluid stylesheet on, but refactoring for Fluid.
  • Simplified Analytics discussion.
  • The line between search and analytics is blurring.
  • Rushed through some LCM and Security talk.

Leveraging PeopleSoft Test Framework to Minimize the Time to Test

  • How NYU Langone uses PTF to help with Selective Adoption.
  • Get a list of your manual test scripts first.
  • Do a PTF “project” before or after your upgrade project – not during.
  • Focus on tests that affect casual users.
  • Some power users like to do all manual testing. Let them if they can handle it.
  • Blank out field defaults when recording.
  • Documentation is key for your test scripts.
  • Layout a plan before you record.
  • Cannot simulate drag and drop.
  • They run in UAT, as well as dev and system test.
  • PI releases also tend to include Tools and middleware patching.
  • Not using Usage Monitor yet – leveraging normal compare reports, etc. to determine testing needs.
  • About 40% of test scripts in PTF.

Continuously Upgrade: Are they crazy? No, actually it’s really clever!

  • Mark and Wendy from NYU Langone chat Selective Adoption.
  • They had a clever Tetris theme for their presentation.
  • Business and IT working together key.
  • More of an agile approach, versus waterfall.
  • Getting current with images 4 times a year.
  • Turned capital costs into operational costs.
  • Estimated to save them 70% versus old upgrade methods.
  • The Cumulative Feature Overview (CFO) tool will compare features from image to image.
  • PTF was a big deal for them.
  • HR and FIN teams have their own sprint schedules.

Oracle’s Investment in PeopleSoft Fluid UI Technology

  • Oracle’s David Bain talks Fluid
  • Fluid not just mobile, it is UX.
  • Getting reports that power users are resistant to the new navigation, while casual users love it.
  • Multiple paths to access content, users can choose their own path.
  • Quick navigation from anywhere.
  • Homepages are role based, everyone gets a My Homepage.
  • Tiles are crefs, not pagelets.
  • Their primary job is navigation
  • Put anything in a tile
    • Component
    • iScript
    • External Source
    • Pivot Grid
    • NavCollections
  • There is a Tile wizard now
  • Activity Guides key in refactoring classic components into Fluid.
  • Native notifications can be set up via Oracle Mobile App Framework.
  • You get a restricted use license for MAF with PeopleSoft.
  • Fluid can be used in Interaction Hub
  • Can be a blend of tiles and homepages from multiple apps.
  • Page Designer is coming – data-driven Fluid page design.
  • Guided branding is a wizard to help with branding – only in IH.
  • Fluid standards DocID 2063602.1
  • Fluid UI best Practices DocID 2136404.1

Reconnect 2016 – Day 1

As mentioned before, I am at Reconnect again this year and will be presenting. Day 1 of the conference tends to be all about networking, and I did plenty of it! I also learned a thing or two, which is nice to say before the deep dives even start. Here is a quick rundown of the sessions I attended.

Phire Networking

This was basically a round-table discussion about Phire. We are hoping to create a SIG specifically for Phire, so be on the lookout for that. Thanks to Mark from Wells Fargo for hosting the discussion.

  • Some customers do NOT let Phire handle database object builds. Those tasks go to a DB team. Mostly DB2 users.
  • There were questions regarding refreshes – how to reapply Change Requests?
    • Solution was to run a query and manually re-apply. Also, Phire has a feature to handle this but no one was using it.
  • There was consensus that Phire did not slow down the migration process, but actually sped it up. Especially regarding SQL and DMS requests.
  • Some customers are packaging up maintenance out of PUM and applying via Phire
    • Only thing not working is Application Data Set steps
    • Large effort to take a Change Package and get it into a Phire Change Request
  • Questions about integrations into or out of Phire
    • Example: Central Change Management tool is system of record, can we integrate that with Phire?
    • Phire is built in PeopleTools, so you could build a solution using IB.
    • Phire Workflow has an API that gives you PeopleCode hooks into Workflow tasks.
    • Suggestion was to ask Phire about enhancements like this.
  • Question whether customers use Issues as well as Change Request management.
    • Almost everyone uses both.
    • One customer said they have 1 Issue Domain for System issues and 1 for Application issues.

PeopleSoft Technology SIG

This discussion was focused on PeopleSoft Test Framework (PTF). David Bain from Oracle was there, along with some customers who are having great success with PTF.

  • The Tech SIG has 757 members – you should join!
  • How do they use PTF?
    • Mainly regression testing.
    • Run in Dev or System Test, but not UAT
    • Used for major releases, like get current projects.
    • Mostly used in an overnight batch mode.
    • Tests are run on dedicated desktops or VMs.
  • What skill set is needed for PTF?
    • Developers or QA Team with tech skills.
    • Familiarity with business process and automation tools a plus.
  • What level of effort did it take to setup?
    • Helps to already have Test Scripts and\or a QA Team.
    • Ran into some system resource issues; adjustments needed for Citrix.
    • Would help if Oracle delivered Test Scripts […see David Bain section].
  • What level of effort does it take to maintain the scripts?
    • Not much effort…so far.
    • Maintenance and Coverage reports are helpful
      • Best to have Usage Monitor up and running when recording.
      • If not running while recording, you will have less data to work with on Coverage Reports.
  • What can be done to make test scripts more repeatable?
    • Create a best practice guide, including naming conventions.
    • Break large tests into smaller tests.
    • Create small reusable tests like login, search, etc.
    • Pass in data with variables versus hard coding.
  • What are some limitations with PTF?
    • Can only record on IE 11, can execute on other browsers starting in 8.55.
    • It will lock up your machine, so use dedicated machines.
  • Highlights from David Bain
    • In the past no delivered test scripts, since everyone’s data is different.
    • Now with PI’s, everyone has the same demo data.
    • Oracle plans to start shipping PTF test scripts with PI’s.
    • The delivered scripts will be built with PI demo data in mind.
    • Time-frame could be 2016 still, but no commitment there.
    • Oracle is a HEAVY user of PTF internally.
    • Oracle has many overnight PTF jobs running, with reports waiting in the morning for teams to review.
    • Repeated that PTF is NOT a volume or performance testing tool.
    • Talked about PTF tying in with PUM Dashboard now.
    • You can store PTF scripts in one database, but execute them in another.
    • PTF metadata stored in PTTST_ records.

Public Sector Industry Networking

This was a networking event for Public Sector employees. There was more of the same at this event. Major topics were how are you using PUM, when are you getting to 9.2, and are you using Fluid. Someone even got a cheap shot in on SES, which I gladly jumped on board with, ha.

Overall it was a great first day. Looking forward to some deep dives tomorrow!

Reconnect 2016

Summer is flying by and Reconnect 2016 is next week! I will be in Chicago and presenting again this year. I will be talking about a recent project to publish our Supplier Portal through Microsoft’s Web Application Proxy (WAP). This will cover our struggles to fit PeopleSoft in with our enterprise-wide SSO solution using AD FS. If you are interested, please check it out!

  • Publishing PeopleSoft through WAP for AD FS Single Sign-on
    • Session: 100280
    • Date: Thursday, 7/21/2016
    • Time: 8:00 AM
    • Room: Narita AB

For those of you who can’t make it, there should be some content on this same topic coming through the psadmin.io pipeline soon. If you have questions or would like to discuss this topic, feel free to reach out on the Community.

As always, I’m super excited about many sessions and knowledge sharing with a bunch of really smart people. I’d like to get a good psadmin discussion going at the Tuesday night reception. Hopefully I will see you there or later in the conference. If you see me, come say hello!

Linux DPK: Dealing with Missing Required OS Packages

For those of you using the NativeOS Linux install for Update Images, you have probably come across this scenario. You start the DPK install and once you get to the Puppet installation section, the script comes to an abrupt end. What the heck! Looking in the log file, you quickly figure out your OS is missing some required packages. So now what?

In the PeopleSoft Deployment Packages for Update Images Installation document, task 2-3-3 walks you through how to get the required OS packages needed for Puppet. They make it clear that it is your job to obtain these packages and install them – you’re on your own. They then list a few steps on how to accomplish this. The steps pretty much come down to this:

  1. Install DPK
  2. DPK will fail on missing packages
  3. Find missing OS packages by reviewing the log
    • $DPK_INSTALL/setup/psft-dpk-setup.log
  4. Run DPK cleanup
  5. Install missing OS packages
  6. Install DPK again

Following the steps is pretty straightforward, but I don’t like having to manually dig through a log file and pick out the missing OS packages. So, what I did was write a little shell script to extract them for me (a sketch appears after the steps below). This script generates the list of missing OS packages and writes it to a file. After reviewing the list, you can then use this file to install the packages.

Here are the steps I follow to ensure I have all the needed OS packages when installing NativeOS Linux DPKs. These steps assume your current directory is $DPK_INSTALL/setup.

  1. Install DPK
  2. DPK will fail on missing packages
  3. Generate missing packages list
    • grep "is needed by" psft-dpk-setup.log | awk '{print $1;}' >> os-packages.list
  4. Run DPK cleanup
  5. Review list, edit if needed
    • vi os-packages.list
  6. Install missing OS packages
    • sudo yum install $(cat os-packages.list)
  7. Install DPK again
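
Wrapped up, the little extraction script might look like this (a sketch, assuming you run it from $DPK_INSTALL/setup as noted above):

    #!/bin/bash
    # Pull the missing OS package names out of the DPK setup log,
    # de-duplicated and ready for review before feeding them to yum
    LOG=${1:-psft-dpk-setup.log}
    grep "is needed by" "$LOG" | awk '{print $1;}' | sort -u > os-packages.list
    echo "Review os-packages.list, then run: sudo yum install \$(cat os-packages.list)"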

Unfortunately, you may have to repeat this process a few times to identify all the missing packages. Once I have gotten through a DPK install on a particular OS, I save off the os-packages.list file for safe keeping. I then make sure I install this list of packages on any new VM that I am doing a fresh DPK install on. Doing this before DPK installs ensures we don’t see any missing OS package errors. I’m sure this list will need to be updated as time goes on and we see different versions of Puppet, etc., in our DPKs.

Hopefully you found this post helpful! This little tidbit was pulled out of the PeopleSoft Deployment Package QuickStart course. Feel free to enroll in this FREE course today for more DPK goodness.


Extending psadmin with psadmin-plus

I have created a helper menu script to extend the delivered psadmin program. The script is called psadmin-plus, and I have created a repository for it on psadmin.io’s GitHub account. This was built as a self-study side project while I was on paternity leave this summer. I wanted to learn a little more about bash scripting and how to use git, and at the same time try to make a useful tool for myself and others to use. As of this writing, the tool is usable but far from complete. At the moment it only has support for Linux. I hope to make improvements over time and would invite others to submit issues on GitHub for questions, bugs, or enhancement ideas. If anyone wants to contribute themselves, that would be great too!

There are two main uses for psadmin-plus. The first is actually calling the delivered psadmin program. The value add here is that it will auto-discover all your PS_CFG_HOME directories for you and source environment variables as needed. This all assumes you follow a few conventions, which should be documented in the GitHub readme or wiki pages. As mentioned in a previous blog post, this is useful if you use a single user to run your PeopleSoft environments. If you have a different user for each environment and source at login, then this feature doesn’t really help.
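
To illustrate the convention (this is a hypothetical sketch, not psadmin-plus’s actual code), the auto-discovery idea boils down to scanning a base directory that all PS_CFG_HOMEs live under:

    #!/bin/bash
    # Hypothetical sketch: assume every PS_CFG_HOME sits under one base directory
    PS_CFG_BASE=${PS_CFG_BASE:-/opt/psoft}
    for cfg in "$PS_CFG_BASE"/*/; do
      # A PS_CFG_HOME will contain an appserv directory
      [ -d "${cfg}appserv" ] && echo "Found PS_CFG_HOME: ${cfg%/}"
    done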

The second use is executing actions against multiple PS_CFG_HOMEs and domains at once. An example would be to stop all Process Scheduler domains on a machine. With this tool, you can do this with a few keystrokes. You also have the option to execute now or later. If you select later, a script is generated to a file. This allows you to run it at a future time, maybe during a maintenance window. Again, there are a few assumed conventions that must be followed.

If you want to try it out for yourself, I have created a setup script to run against a PeopleSoft Image (VBox or Linux DPK install only). This will create a few extra PS_CFG_HOMEs and domains for you to play with in the menu. You can find instructions in the GitHub readme.

Below is a quick demo of psadmin-plus in use. For more information please see GitHub.

[psadmin-plus demo animation]