Refreshes with Data Guard and Pluggable Databases

A PeopleSoft refresh is one of the more common tasks for a PeopleSoft Administrator or DBA. There are many ways to accomplish this task, but it usually involves a database restore from backup, custom SQL refresh scripts, and potentially ACM steps. Depending on the level of effort put into the refresh scripts, there can also be manual steps involved. This approach is tried and true, but it tends to lack the speed and flexibility that we are starting to expect with the delivery of the PeopleSoft Cloud Architecture toolset. Nightly or ad hoc refresh environments and quickly provisioned temporary environments are just a few use cases that would benefit greatly from refresh process improvements. I have been doing some exploring in this area recently and would like to share a few thoughts. First, a quick overview of some Oracle tools and features that I have been leveraging.

Data Guard

Oracle Data Guard is a tool that gives you high availability, data protection and disaster recovery for your databases. At a high level, it consists of a primary database and one or more standby databases. These standby databases are transactionally consistent copies of the primary database. Therefore, if the primary database goes down, the standby can be switched to primary and your application can keep on rolling.

Physical vs. Snapshot Standby

There are multiple types of standby databases that can be used with Data Guard. I’d like to briefly explain the difference between a Physical Standby and a Snapshot Standby. A physical standby is a database that is kept in sync with a primary database via Redo Apply: the redo data is shipped from the primary and then applied to the physical standby. A snapshot standby is a physical standby that has been converted into a fully updatable, point-in-time clone of the primary. At that point we can use the snapshot to do development, testing, etc. When we are done with our snapshot, we can convert it back to a physical standby and it will once again be in sync with the primary database. This is accomplished by taking a restore point when the snapshot conversion happens. The whole time the standby is in snapshot mode, redo data is still being shipped from the primary; however, it is NOT being applied. Once we convert back to physical, the restore point is used to restore the standby and then all of the waiting redo is applied.
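
If you use the Data Guard broker, the round trip between the two modes is a single command in each direction. Here is a minimal sketch, not my exact scripts; the broker database name refresh_stby and the connect identifier REFCDB are hypothetical:

    # Convert a physical standby to a snapshot standby
    # (the broker creates the restore point and handles the restart; prompts for the SYS password)
    dgmgrl sys@REFCDB "convert database 'refresh_stby' to snapshot standby"

    # ...use the snapshot for development or testing, then return it to a synchronized physical standby...
    dgmgrl sys@REFCDB "convert database 'refresh_stby' to physical standby"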

Pluggable Databases

With Oracle 12c, we have the introduction of the multitenant architecture. This architecture consists of Container (CDB) and Pluggable (PDB) databases. This setup makes consolidating databases much more efficient. It also gives us the ability to clone a PDB very easily; cloning a PDB between different CDBs can even be done via a database link. Having a true multitenant setup does require additional licensing, but you can have a CDB-PDB setup without this extra licensing cost if you stick to a single-tenant configuration (only one PDB per CDB). Here is a great video overview of multitenant.
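
As a rough sketch of how simple the remote clone is (the PDB name, TNS alias and link user below are hypothetical, and the link user needs the CREATE PLUGGABLE DATABASE privilege on the source):

    # Run on the target CDB host as the oracle user; REFCDB is a TNS alias for the source CDB
    sqlplus -s / as sysdba <<'EOF'
    create database link refresh_link
      connect to c##clone_user identified by "secret" using 'REFCDB';
    create pluggable database PSFTPDB from PSFTPDB@refresh_link;
    alter pluggable database PSFTPDB open;
    EOF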

Refresh Approach

Now that we have an idea of what these tools and features gain us, let’s think about how to put them to use with database refreshes. Both of these approaches assume the use of Data Guard and PDBs. Having a true multitenant setup would be most efficient, but a single-tenant setup will work just fine. I would recommend you have a dedicated standby database for your refreshes, versus using the same standby you rely on for HA\DR. It would also make sense for the standby to be located on the same storage as the PDBs you will be refreshing. Neither of these is a requirement, but I think you will see better performance and lower the risk to your HA\DR plan.

The use case we will use for this example is a sandbox PeopleSoft database. This sandbox will be scheduled to refresh nightly, giving the business an environment to test and troubleshoot in with data from the day before. The refresh could also be run ad hoc if there is a need during the business day. So the goal is to have this fully automated and completing as fast as possible.

Clone Standby Approach

With this approach, we take a snapshot of our refresh standby database and clone it over the previous sandbox PDB. After this is completed, we will need to run custom SQL scripts or ACM steps to prepare the refreshed PDB. Finally, we will restore the refresh standby back to a physical standby database. This blog post by Franck Pachot gives a quick overview of the SQL commands needed to accomplish most of these steps.

  1. Convert the refresh source physical standby to a snapshot standby.
  2. Open the refresh source PDB as read only.
  3. Create database link between the sandbox target CDB and the refresh source PDB.
  4. Drop the sandbox target PDB and create a clone from the refresh source PDB.
  5. Open the new clone sandbox PDB.
  6. Clean up the sandbox PDB (see the sketch after this list).
    • Check for errors.
    • Patch the PDB to the patch level of the CDB, if needed.
  7. Run custom SQL scripts or ACM steps against sandbox PDB for PeopleSoft setup.
  8. Convert the refresh source standby back to a physical standby.
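
Step 6 usually boils down to checking the plug-in violations view and, if the source was on a different patch level than the sandbox CDB, running datapatch against the new PDB. A rough sketch, again with hypothetical names:

    # Check the freshly cloned PDB for plug-in errors or warnings
    sqlplus -s / as sysdba <<'EOF'
    select name, type, message
      from pdb_plug_in_violations
     where name = 'PSFTPDB' and status <> 'RESOLVED';
    EOF

    # Bring the PDB up to the CDB's patch level, if needed
    # (assumes ORACLE_SID and ORACLE_HOME point at the sandbox CDB)
    $ORACLE_HOME/OPatch/datapatch -verbose -pdbs PSFTPDB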

Snapshot Standby Approach

This approach is somewhat similar, except we won’t be doing any cloning. Instead, we will use the actual snapshot standby itself as our database. Since we know this sandbox database will be refreshed nightly, we can stay in snapshot standby mode all day and then switch to physical standby mode briefly at night, applying redo data to sync up with our primary production database. After that is done, we switch back to snapshot mode and run our custom SQL scripts and ACM steps. This requires a dedicated standby database and should only be used with a frequent refresh schedule. Since redo data continues to ship during snapshot standby mode, it will start to back up. The accumulating redo could become an issue if it grows too large, so you will need to do some analysis to make sure you can handle the volume given your refresh interval (see the monitoring sketch after the steps below).

  1. Create the sandbox as a physical standby database, with production as the primary.
  2. Convert sandbox to a snapshot standby.
  3. Run custom SQL scripts or ACM steps against sandbox for PeopleSoft setup.
  4. Use the snapshot standby sandbox PDB as your normal database: connect app and batch domains, etc.
  5. Wait until next refresh interval.
  6. Convert sandbox from snapshot standby to physical standby.
    • The restore point is used and the waiting redo is applied, syncing up with the current state of the production primary.
  7. Convert sandbox from physical standby to snapshot standby.
  8. Run custom SQL scripts or ACM steps against sandbox for PeopleSoft setup.
  9. Repeat.
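
To keep an eye on how much redo is piling up while the sandbox sits in snapshot mode, a quick check of the fast recovery area on the standby is usually enough. A minimal sketch:

    # Rough check of recovery area usage on the standby while it is in snapshot mode
    sqlplus -s / as sysdba <<'EOF'
    select file_type, percent_space_used, percent_space_reclaimable
      from v$recovery_area_usage
     where file_type in ('ARCHIVED LOG', 'FLASHBACK LOG');
    EOF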

Conclusion

Those are just two ideas, but you can see that there are probably many variations of these approaches that will work. Leveraging Data Guard and PDBs really gives you many options to choose from. I have been using the Clone Standby approach recently and have packaged up the scripts, including bouncing app\batch domains, in Oracle Enterprise Manager as a job. This gives me push-button refreshes with a turnaround time of under 20 minutes. I have been able to provide ad hoc refreshes for emergency production troubleshooting multiple times in just a few months since implementing this approach. This is a very powerful tool to have, and it is well worth the effort to get your refreshes fast, efficient and automated.

Reconnect 2016 – Day 3

Day 3 was the last day of the conference, and it was a busy one! With travel and a presentation of my own, I didn’t get a chance to see everything I wanted to. I also didn’t get this post written until now because it took me 23 hours to get home (O’Hare was a mess due to weather) – what a nightmare. That said, I had a great time. Here are my highlights from the last day.

Publishing PeopleSoft through WAP for AD FS Single Sign-on

  • This was my session.
  • Used Web Application Proxy as a reverse proxy server for Supplier Portal.
  • Non-claims-based pre-authentication done with AD FS using Kerberos.
  • Our enterprise standard is SAML, so we would like to use claims-based pre-auth in the future.

REST Your Weary Head! Query can handle it!

  • HyperGen talked about REST and Query Access Services.
  • REST is a lot better now in 8.54 and 8.55.
  • Queries via QAS can be run Synchronously or Asynchronously.
  • QAS is a good quick and easy way to create REST web services.
  • If you have a more complicated web service, you may want to create your own message handler via App Packages.

Rethinking Excel to CI

  • Kevin Weaver talks about his alternative to ExcelToCI
  • His solution creates CI templates online instead of within Excel.
  • You upload a data sheet online and then process it in an App Engine.
  • You can find this and more on his blog – http://pskcw.blogspot.com/

PeopleSoft Single Sign-On with SAML 2.0

  • Vlad from GNC walks us through his great SSO solution.
  • This uses an IdP-initiated SSO profile.
  • Built an Assertion Consumer Service in Signon PeopleCode
  • Uses openSAML Java package to do the validation.
  • His slides had a ton of code snippets.

PeopleTools Product Panel

  • A panel of Oracle employees discuss PeopleTools.
  • MS Office will be removing ActiveX from their products soon.
  • The Finance team will be delivering a new solution for Journal Upload and other solutions that use XmlLink, most likely using IB (similar to ExcelToCI?).
  • Talked more about Cloud Manager.
  • Will be used in Oracle private cloud at first, then maybe other cloud providers, then maybe YOUR datacenter.
  • Will be placed within an Interaction Hub instance.
  • Elasticsearch is coming soon, likely around patch 10.
  • Question about PS Admins needing Puppet skills.
  • Oracle will keep making the DPKs better, but you will need these skills to customize them to fit your organization.

Reconnect 2016 – Day 2

Day 2 at Reconnect featured an Oracle Keynote and a bunch of deep dive sessions. Here is a quick overview.

Oracle PeopleSoft Product Line Update

  • The format was a discussion between Marc Weintraub and Paco Aubrejuan.
  • Even with all the cloud talk, no change in commitment to PeopleSoft.
  • Support for 9.1 ends Jan 2018, won’t be extended again.
  • Discussed different options with Cloud. Example, move demo\dev first.
  • Paco guessing 50% of customers at Reconnect will be in the cloud in 5 years.
  • Discussed PeopleSoft Cloud Architecture.
  • Talked about a new offering coming soon – Cloud Manager.
  • This will be a Self Service interface for doing psadmin type tasks
    • Deployments, Refreshes, Start, Stop, Clear cache, etc
  • Should be coming to 8.55
  • Selective Adoption discussion
  • Confirmed they use Scrum internally to develop and deliver images
  • We should see the size of images stabilizing now with this approach.
  • Discussion on Fluid.
  • Pushing more Related Content.
  • Confirmed again that Elasticsearch is coming soon.
  • Marc and Paco mentioned “cloud” 66 times.

PeopleSoft Technology Roadmap

  • This was given by Jeff Robbins
  • Similar to Paco’s talk
  • A lot more Cloud talk
  • Did show a screenshot of Cloud Manager – looks very nice, sorry no pic.
  • Fluid big on Homepages, Tiles and Personalization options.
  • NavBar now remembers where you were when you return.
  • Idea not just putting a Fluid stylesheet on, but refactoring for Fluid.
  • Simplified Analytics discussion.
  • Search and analytic line is blurring.
  • Rushed through some LCM and Security talk.

Leveraging PeopleSoft Test Framework to Minimize the Time to Test

  • How NYU Langone uses PTF to help with Selective Adoption.
  • Get a list of your manual test scripts first.
  • Do a PTF “project” before or after your upgrade project – not during.
  • Focus on tests that affect casual users.
  • Some power users like to do all manual testing. Let them if they can handle it.
  • Blank out field defaults when recording.
  • Documentation is key for your test scripts.
  • Layout a plan before you record.
  • Cannot simulate drag and drop.
  • They run in UAT, as well as dev and system test.
  • PI releases also tend to include Tools and middleware patching.
  • Not using Usage Monitor yet – leveraging normal compare reports, etc to determine testing needs.
  • About 40% of test scripts in PTF.

Continuously Upgrade: Are they crazy? No, actually it’s really clever!

  • Mark and Wendy from NYU Langone chat Selective Adoption.
  • They had a clever Tetris theme for their presentation.
  • Business and IT working together key.
  • More of an agile approach, versus waterfall.
  • Getting current with images 4 times a year.
  • Turned capital costs into operational costs.
  • Estimated to save them 70% versus old upgrade methods.
  • The Cumulative Feature Overview (CFO) tool will compare features from image to image.
  • PTF was a big deal for them.
  • HR and FIN teams have their own sprint schedules.

Oracle’s Investment in PeopleSoft Fluid UI Technology

  • Oracle’s David Bain talks Fluid
  • Fluid not just mobile, it is UX.
  • Getting reports that power users are resistant to the new navigation, while casual users love it.
  • Multiple paths to access content, users can choose their own path.
  • Quick navigation from anywhere.
  • Homepages are role based, everyone gets a My Homepage.
  • Tiles are crefs, not pagelets.
  • Their primary job is navigation
  • Put anything in a tile
    • Component
    • iScript
    • External Source
    • Pivot Grid
    • NavCollections
  • There is a Tile wizard now
  • Activity Guides key in refactoring classic components into Fluid.
  • Native notifications can be setup via Oracle Mobile App Framework.
  • You get a restricted use license for MAF with PeopleSoft.
  • Fluid can be used in Interaction Hub
  • Can be a blend of tiles and homepages from multiple apps.
  • Page Designer is coming – data driven Fluid page design.
  • Guided branding is a wizard to help with branding – only in IH.
  • Fluid standards DocID 2063602.1
  • Fluid UI best Practices DocID 2136404.1

Reconnect 2016 – Day 1

As mentioned before, I am at Reconnect again this year and will be presenting. Day 1 of the conference tends to be all about networking, and I did plenty of it! I also learned a thing or two, which is nice to say before the deep dives even start. Here is a quick run down on the sessions I attended.

Phire Networking

This was basically a round-table discussion about Phire. We are hoping to create a SIG specifically for Phire, so be on the lookout for that. Thanks to Mark from Wells Fargo for hosting the discussion.

  • Some customers do NOT let Phire handle database object builds. Those tasks go to a DB team. Mostly DB2 users.
  • There were questions regarding refreshes – how to reapply Change Requests?
    • Solution was to run a query and manually re-apply. Also, Phire has a feature to handle this but no one was using it.
  • There was consensus that Phire did not slow down the migration process, but actually sped it up – especially regarding SQL and DMS requests.
  • Some customers are packaging up maintenance out of PUM and applying via Phire
    • Only thing not working is Application Data Set steps
    • Large effort to take a Change Package and get it into a Phire Change Request
  • Questions about integrations into or out of Phire
    • Example: Central Change Management tool is system of record, can we integrate that with Phire?
    • Phire is built in PeopleTools, so you could build a solution using IB.
    • Phire Workflow has an API that gives you PeopleCode hooks into Workflow tasks.
    • Suggestion was to ask Phire about enhancements like this.
  • Question whether customers use Issues as well as Change Request management.
    • Almost everyone uses both.
    • One customer said they have 1 Issue Domain for System issues and 1 for Application issues.

PeopleSoft Technology SIG

This discussion was focused around PeopleSoft Test Framework(PTF). David Bain from Oracle was there, along with some customers who are having great success with PTF.

  • The Tech SIG has 757 members – you should join!
  • How do they use PTF?
    • Mainly regression testing only.
    • Run in Dev or System Test, but not UAT
    • Used for major releases, like get current projects.
    • Mostly used in an overnight batch mode.
    • Tests are run on dedicated desktops or VMs.
  • What skill set is needed for PTF?
    • Developers or QA Team with tech skills.
    • Familiarity with business process and automation tools a plus.
  • What level of effort did it take to setup?
    • Helps to already have Test Scripts and\or QA Team
    • Ran into some system resource issues; adjustments needed for Citrix.
    • Would help if Oracle delivered Test Scripts […see David Bain section]
  • What level of effort does it take to maintain the scripts?
    • Not much effort…so far.
    • Maintenance and Coverage reports are helpful
      • Best to have Usage Monitor up and running when recording.
      • If not running while recording, you will have less data to work with on Coverage Reports.
  • What can be done to make test scripts more repeatable?
    • Create a best practice guide, including naming conventions.
    • Break large tests into smaller tests.
    • Create small reusable tests like login, search, etc.
    • Pass in data with variables versus hard coding.
  • What are some limitations with PTF?
    • Can only record on IE 11, can execute on other browsers starting in 8.55.
    • It will lock up your machine, so use dedicated machines.
  • Highlights from David Bain
    • In the past no delivered test scripts, since everyone’s data is different.
    • Now with PI’s, everyone has the same demo data.
    • Oracle plans to start shipping PTF test scripts with PI’s.
    • The delivered scripts will be built with PI demo data in mind.
    • Time-frame could be 2016 still, but no commitment there.
    • Oracle is a HEAVY user of PTF internally.
    • Oracle has many overnight PTF jobs running, with reports waiting in the morning for teams to review.
    • Repeated that PTF is NOT a volume or performance testing tool.
    • Talked about PTF tying in with PUM Dashboard now.
    • You can store PTF scripts in one database, but execute them in another.
    • PTF metadata stored in PTTST_ records.

Public Sector Industry Networking

This was a networking event for Public Sector employees, and the discussion covered a lot of the usual ground. Major topics were how are you using PUM, when are you getting to 9.2, and are you using Fluid. Someone even got a cheap shot in on SES, which I gladly jumped on board with, ha.

Overall it was a great first day. Looking forward to some deep dives tomorrow!

Reconnect 2016

Summer is flying by and Reconnect 2016 is next week! I will be in Chicago and presenting again this year. I will be talking about a recent project to publish our Supplier Portal through Microsoft’s Web Application Proxy (WAP). This will cover our struggles to fit PeopleSoft in with our enterprise wide SSO solution using AD FS. If you are interested, please check it out!

  • Publishing PeopleSoft through WAP for AD FS Single Sign-on
    • Session: 100280
    • Date: Thursday, 7/21/2016
    • Time: 8:00 AM
    • Room: Narita AB

For those of you that can’t make it, there should be some content on this same topic coming through the psadmin.io pipeline soon. If you have questions or would like to discuss this topic, feel free to reach out on the Community.

As always, I’m super excited about many sessions and knowledge sharing with a bunch of really smart people. I’d like to get a good psadmin discussion going at the Tuesday night reception. Hopefully I will see you there or later in the conference. If you see me, come say hello!

Linux DPK: Dealing with Missing Required OS Packages

For those of you using the NativeOS Linux install for Update Images, you have probably come across this scenario. You start the DPK install and once you get to the Puppet installation section, the script comes to an abrupt end. What the heck! Looking in the log file, you quickly figure out your OS is missing some required packages. So now what?

In the PeopleSoft Deployment Packages for Update Images Installation document, task 2-3-3 walks you through how to get the required OS packages needed for Puppet. They make it clear that it is your job to obtain these packages and install them – you’re on your own. They then list a few steps on how to accomplish this. The steps pretty much come down to this:

  1. Install DPK
  2. DPK will fail on missing packages
  3. Find missing OS packages by reviewing the log
    • $DPK_INSTALL/setup/psft-dpk-setup.log
  4. Run DPK cleanup
  5. Install missing OS packages
  6. Install DPK again

Following the steps is pretty straightforward, but I don’t like having to manually dig through a log file and pick out the missing OS packages. So what I did was write a little shell script to extract them for me. This script will generate the list of missing OS packages and write it to a file. After reviewing the list, you can then use this file for installing the packages.

Here are the steps I follow to ensure I have all the needed OS packages when installing NativeOS Linux DPKs. These steps assume your current directory is $DPK_INSTALL/setup.

  1. Install DPK
  2. DPK will fail on missing packages
  3. Generate missing packages list
    • grep "is needed by" psft-dpk-setup.log | awk '{print $1;}' >> os-packages.list
  4. Run DPK cleanup
  5. Review list, edit if needed
    • vi os-packages.list
  6. Install missing OS packages
    • sudo yum install $(cat os-packages.list)
  7. Install DPK again

Unfortunately, you may have to repeat this process a few times to identify all the missing packages. Once I have gotten through a DPK install on a particular OS, I save off the os-packages.list file for safekeeping. I then make sure I install this list of packages on any new VM that I am doing a fresh DPK install on. Doing this before the DPK install ensures we don’t see any missing OS package errors. I’m sure this list will need to be updated as time goes on and we see different versions of Puppet, etc. in our DPKs.
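
If you end up with package lists from a few different runs (or different Update Images), it can be handy to merge them into one master list to pre-install on fresh VMs. A small sketch, assuming the per-run lists were saved under hypothetical names like os-packages-run1.list:

    # Merge package lists from multiple runs into one de-duplicated master list
    cat os-packages-run*.list | sort -u > os-packages.master.list

    # Pre-install the master list on a new VM before kicking off the DPK
    sudo yum install -y $(cat os-packages.master.list)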

Hopefully you found this post helpful! This little tidbit was pulled out of the PeopleSoft Deployment Package QuickStart course. Feel free to enroll in this FREE course today for more DPK goodness.

[Screenshot: os-packages list]

Extending psadmin with psadmin-plus

I have created a helper menu script to extend the delivered psadmin program. The script is called psadmin-plus and I have created a repository for it on psadmin.io’s GitHub account. This was built as a self-study side project while I was on paternity leave this summer. I wanted to learn a little more about bash scripting and how to use git, and at the same time try to make a useful tool for myself and others to use. As of this writing, the tool is usable but far from complete. At the moment it only has support for Linux. I hope to make improvements over time and would invite others to submit issues on GitHub for questions, bugs or enhancement ideas. If anyone wants to contribute themselves, that would be great too!

There are two main uses for psadmin-plus. The first is actually calling the delivered psadmin program. The value add here is that it will auto-discover all your PS_CFG_HOME directories for you and source environment variables as needed. This all assumes you follow a few conventions, which should be documented in the GitHub readme or wiki pages. As mentioned in a previous blog post, this is useful if you use a single user to run your PeopleSoft environments. If you have a different user for each environment and source at login, then this feature doesn’t really help.

The second use is executing actions against multiple PS_CFG_HOMEs and domains at once. An example would be to stop all Process Scheduler domains on a machine. With this tool, you can do this with a few keystrokes. You also have the option to execute now or later. If you select later, a script will be generated to a file, allowing you to run it at a future time – maybe during a maintenance window. Again, there are a few assumed conventions that must be followed.

If you want to try it out for yourself, I have created a setup script to run against a PeopleSoft Image (VBox or Linux DPK install only). This will create a few extra PS_CFG_HOMEs and domains for you to play with in the menu. You can find instructions in the GitHub readme.

Below is a quick demo of psadmin-plus in use. For more information please see GitHub.

[Demo animation: psadmin-plus]

Managing Environment Variables When Using Decoupled Homes

As a reader of this blog or a listener of the podcast, you know I am a user of both Linux and decoupled homes. Traditionally with a Linux PeopleSoft installation you need to source the delivered psconfig.sh to set your environment variables. When an entire environment was contained under its own PS_HOME, you could tweak this psconfig.sh file if customizations were needed without fear of impacting other environments. Now with decoupled homes, the PS_HOME directory will likely be shared, so changing the psconfig.sh file located there is a bad idea.

When switching to decoupled homes, I was looking for a good way to manage sourcing the psconfig.sh file and the different environment variables. While attending Alliance 2015, I saw a presentation given by Eric Bolinger from the University of Colorado. He was talking about their approach to decoupled homes and he had some really good ideas. The approach I currently use is mostly based on those ideas, with a few tweaks. The main difference is that he has a different Linux user account specific to each environment. With this approach, he is able to store the environment-specific configuration file in the user’s home directory and source it at login time. This is similar to the approach Oracle suggests and uses with their PIs (see user psadm2). My organization didn’t go down the road of multiple users to run PeopleSoft. Instead, we have a single user that owns all the environments, and we source our environment-specific configuration file before we start psadmin. We use a psadmin wrapper script to help with this sourcing (which I will discuss and share in a future post). The main thing to keep in mind is that regardless of how these files are sourced, the same basic approach can still be used.

The idea here is to keep as much delivered and common configuration in psconfig.sh as possible and keep environment-specific customizations in their own separate files. I like to keep these config files in a centralized location that each server has access to via an NFS mount. I usually refer to this directory as $PSCONFIGS_DIR. What I do is copy the delivered psconfig.sh file to $PSCONFIGS_DIR and rename it psconfig.common.sh. I then remove any configurations that I know I will always want to set in our custom environment-specific file, mainly PS_HOME. I then add any needed configuration that I know will be common across all environments. (Another approach would be to create a new psconfig.common.sh file from scratch, set a few variables and then just source the delivered file with cd $PS_HOME && . psconfig.sh. Either way works, but I like the cloning approach.) This common file will be called at the end of every environment-specific file. Remember to take care when making any changes to this file, as it will impact any environment calling it. It is also a good idea to review this file when patching or upgrading your tools.
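
For reference, the "from scratch" flavor of psconfig.common.sh mentioned above might look something like the sketch below; the ORACLE_HOME path is just a placeholder:

    #!/bin/sh
    # psconfig.common.sh - settings shared by every environment
    export ORACLE_HOME=/opt/oracle/product/12.1.0/client_1
    export PATH=$ORACLE_HOME/bin:$PATH

    # Pull in Oracle's delivered settings from the shared PS_HOME
    # (PS_HOME is expected to be set by the environment-specific file)
    cd $PS_HOME && . ./psconfig.sh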

Next, for the environment-specific files, I create a new file called psconfig.[env].sh, with the environment name in the filename. An example would be psconfig.fdev.sh. You could really choose any name for this, but I have found this approach to be handy. In this file you will set the environment-specific variables as needed, then end by calling psconfig.common.sh. Here is an example file:
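
(The homes and paths below are hypothetical – a minimal sketch of what a psconfig.fdev.sh might contain; adjust to your own install locations.)

    #!/bin/sh
    # psconfig.fdev.sh - environment-specific settings for the fdev environment
    export ENV=fdev
    export PS_HOME=/opt/psft/pt/ps_home8.55.03
    export PS_APP_HOME=/opt/psft/app/fscm_app_home
    export PS_CUST_HOME=/opt/psft/app/fscm_cust_home
    export PS_CFG_HOME=/opt/pscfg/fdev

    # Finish by pulling in the common settings
    # ($PSCONFIGS_DIR is assumed to be set for the psadmin user, e.g. in .bash_profile)
    . $PSCONFIGS_DIR/psconfig.common.sh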

This approach allows you to be a little more nimble when patching or upgrading. You can install new homes or middleware, then update the psconfig.[env].sh file and build new domains. When you get to go-live for Production, you can have the domains all built ahead of time. When ready, just update the config file, upgrade the database, and you are good to go!

One final note, regarding directory naming conventions. My organization tends to always have our PS_CFG_HOME directory match the environment or database name exactly, i.e. fdev. I’m considering changing this, however. During our last Tools patching project, I found it a little awkward to prebuild the domains and still end up with the same directory name. It seems to make much more sense to include the PeopleTools version in the directory name. That way you can prebuild the domains in a new PS_CFG_HOME, and when you go live just blow the old home away. Another great idea I took away from Eric’s presentation was how to dynamically generate a PS_CFG_HOME directory name:

export PS_CFG_HOME=/opt/pscfg/$ENV-`$PS_HOME/bin/psadmin -v | awk '{print $2}'`

If you use this technique, you will want this to be the last line in your config file – after sourcing the common file. What this does is concatenate your environment name with the PeopleTools version, using the psadmin version command – i.e. fdev-8.55.03. This gives you more clarity on which Tools version the domains under this PS_CFG_HOME were built with, and it makes it easier to prebuild your domains.

Switching File Attachment Storage

In previous posts I have talked about the File Attachment storage options in PeopleSoft. The three basic options are Database, FTP or HTTP. My organization initially went with HTTP, but now we are looking to move to Database storage. The requirement is not only to store future file attachments in the database, but to move attachments from the past as well. At first I thought this would require a lot of work, including custom conversion AE programs, etc. However, there is a delivered process called Copy File Attachments that does all the heavy lifting for you. There is an online and a batch method to run this, but I highly recommend the batch mode. There are a few other steps needed to fully convert all attachments, but it is fairly straightforward. Below are the steps I used in FSCM 9.2.17, PeopleTools 8.54.18.

Create new File Attachment Server

  1. Navigate to Main Menu > Set Up Financials/Supply Chain > Common Definitions > File Attachments > Administer File Attachments
  2. Click Add Database Server
  3. Record Name will default to PV_ATT_DB_SRV. The default is fine, but change if you would like.
  4. Set Pick Active Server to match the ID of your new server.
  5. Save. Future File Attachments will now be stored in your database.

Copy File Attachments

  1. If needed, create a new URL for Database storage – PeopleTools > Utilities > Administration > URLs. Values: URL Identifier: PSA_ATT_DB, URLID: record://PV_ATT_DB_SRV
  2. Navigate to run control page PeopleTools > Utilities > Administration > Administer File Processing > Copy File Attachments (Batch)
  3. Values: Source: URL.PSA_ATT_HTTP, Destination: URL.PSA_ATT_DB
  4. Run the App Engine COPYATTS through Process Scheduler.
  5. This will take a long time to run, possibly hours depending on your attachment count.
  6. After completion, you will want to review the AE_COPYATTS_[PIN].trc trace file. The AE should produce this automatically.
  7. I used this trick to generate a log of ONLY errors from the trace:

    grep "EvalCopyAttachments: failed getting file" AE_COPYATTS_*.trc > AE_COPYATTS.errors

  8. Take action on any errors that occurred.

Update previous File Attachment URL

  1. These steps repoint the old HTTP server setup to the new DB server.
  2. Navigate to PeopleTools > Utilities > Administration > URLs.
  3. Open the URL used for the previous File Attachment Server setup. Example: PSA_ATT_HTTP
  4. Change the URLID value to the new Database server URL values. Example: record://PV_ATT_DB_SRV
  5. I recommend adding comments describing this conversion process – especially if HTTP or FTP is in the URL ID and it now points to the DB. This can help avoid confusion in the future.

After we completed these steps, we decided to keep our old storage location around just in case we found any issues after the fact. We ended up renaming the directory path, just to make sure nothing was still referencing the old location. After a few weeks of no issues, we went ahead and destroyed this old storage location. As mentioned above, this was done in the FSCM application which has its own File Attachment framework built on top of the one Tools delivers. You should be able to take a similar approach with other applications, but the Create new File Attachment Server section above won’t be relevant. Instead, you can simply complete the Update previous File Attachment URL steps after your copy is complete.

PUM Dashboard Queries and Backlog Report

Hopefully everyone has had a chance to play around with the new PUM Dashboard delivered in the 8.55 PIs. If not, Logesh from leanITdesigns has a very good write-up on it. Dan and I also spent some time discussing it in Podcast #20 – PUM Dashboard. It is basically a new one-stop shop for managing the PUM maintenance process. It uses a Fluid dashboard to help keep track of BUGs, customizations and test cases.

I have found the dashboard to be a very nice tool with a lot of promise. That said, I don’t think it is mature enough yet to really work for ALL your PUM maintenance planning needs. In the near future I can see both Oracle (tweaking the dashboard) and organizations (tweaking their maintenance planning processes) working together to make this dashboard truly useful.

Even before this dashboard, one thing I have been doing for our organization is providing a spreadsheet report that lists all the BUGs we have not yet applied. In theory, this can now be replaced by the dashboard. However, our group is pretty used to this spreadsheet and it gives them a little more personalized control of this data. As the dashboard improves, I’m hoping this report can go away and everyone can just be directed to the dashboard.

Queries

In the past, to generate this report I had to create a “Get Current” change package in PUM. This would of course list all the unapplied BUGs in a grid online. I would export the grid to Excel, then copy and paste into the report template. This worked fine, but now there is a better way – leveraging the PUM Dashboard queries.

The PUM Dashboard drives off of a set of queries. These queries all have a prefix of PTIA and can be found under the normal Query Viewer or Manager component in your PI. Here is a list of a few of them:

  • PTIA_BUG_TARGET_DETAIL
  • PTIA_BUG_IMAGE_DETAIL
  • PTIA_BUG_PRODUCT_DETAIL
  • PTIA_STATUS_BY_IMAGE
  • PTIA_STATUS_BY_PROD

I found PTIA_BUG_TARGET_DETAIL to be the most useful. This query has basically the same output I used to get with my change package grid export. One thing I did add to this detail was a link directly to the BUG Matrix in My Oracle Support (https://support.oracle.com/epmos/faces/BugMatrix?id=YOUR_BUG_ID_HERE). Our group has found this very useful when researching a BUG and wanting a little more detail than what is listed in PUM.

PUM Backlog Report

I have packaged up my spreadsheet report and posted it to GitHub, in case anyone else is interested. Since this report shows all unapplied BUGs from PUM, I have titled it the PUM Backlog Report. The instructions on how to import your PUM data are included in the readme file on GitHub, as well as in the spreadsheet itself. If you have questions or ideas for improvement, feel free to open an issue on GitHub or post in the comments below.

You can find the report on the psadmin.io GitHub site HERE.

[Screenshot: PUM Backlog Report detail]

[Screenshot: PUM Backlog Report chart]