#64 – Testing Oracle Cloud w/ Sasank Vemana

This week on the podcast, Sasank Vemana returns to talk about his experience testing Oracle Cloud and using Google Analytics to analyze web traffic. We also discuss the differences in supporting students as end-users.

Show Notes

Refreshes with Data Guard and Pluggable Databases

A PeopleSoft refresh is one of the more common tasks for a PeopleSoft Administrator or DBA. There are many ways to accomplish this task, but it usually involves a database restore from backup, custom SQL refresh scripts and potentially ACM steps. Depending on the level of effort put into the refresh scripts, there can also be manual steps involved. This approach is tried and true, but tends to lack the speed and flexibility that we are starting to expect with the delivery of the PeopleSoft Cloud Architecture toolset. Nightly or ad hoc refresh environments and quickly provisioned temporary environments are just a few use cases that would benefit greatly from refresh process improvements. I have been doing some exploring in this area recently and would like to share a few thoughts. First, a quick overview of some Oracle tools and features that I have been leveraging.

Data Guard

Oracle Data Guard is a tool that gives you high availability, data protection and disaster recovery for your databases. At a high level, it consists of a primary database and one or more standby databases. These standby databases are transactionally consistent copies of the primary database. Therefore, if the primary database goes down, the standby can be switched to primary and your application can keep on rolling.

Physical vs. Snapshot Standby

There are multiple types of standby databases that can be used with Data Guard. I’d like to briefly explain the difference between Physical Standby and Snapshot Standby. A physical standby is a database that is kept in sync with a primary database via Redo Apply. The redo data is shipped from the primary and then applied to the physical standby. A snapshot standby is essentially a physical standby that was converted to a snapshot, which is like a point-in-time clone of the primary. At this point we can use the snapshot to do development, testing, etc. When we are done with our snapshot, we can convert it back to a physical standby and it will once again be in sync with the primary database. This is made possible by a restore point that is taken when the snapshot conversion happens. The whole time the standby is in snapshot mode, the redo data is still being shipped from the primary. However, it is NOT being applied. Once we convert back to physical, the restore point is used to restore the standby and then all waiting redo is applied.
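For a sense of what the conversion involves, here is a minimal SQL*Plus sketch, assuming you manage the standby manually rather than with the Data Guard broker (the broker automates these same steps):

    -- Physical to snapshot: stop Redo Apply, convert, then open read-write
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
    ALTER DATABASE OPEN;

    -- Snapshot back to physical: flash back to the restore point, resume Redo Apply
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

The guaranteed restore point is created automatically during the snapshot conversion, so there is nothing extra to set up for the flashback on the way back.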

Pluggable Databases

With Oracle 12c, we have the introduction of the multitenant architecture. This architecture consists of container (CDB) and pluggable (PDB) databases. This setup makes consolidating databases much more efficient. It also gives us the ability to clone a PDB very easily. Cloning a PDB between different CDBs can even be done via a database link. Having a true multitenant setup does require additional licensing, but you can have a CDB-PDB setup without this extra licensing cost if you use a single-tenant configuration (only one PDB per CDB). Here is a great video overview of multitenant.
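As a rough illustration of how simple the remote clone is (the link, user, and PDB names here are hypothetical), it boils down to a few statements on the target CDB:

    -- Link to the source CDB; the user needs CREATE PLUGGABLE DATABASE privilege
    CREATE DATABASE LINK refresh_link
      CONNECT TO c##clone_user IDENTIFIED BY secret USING 'SOURCECDB';
    -- Clone the source PDB across the link, then open it
    CREATE PLUGGABLE DATABASE hr_sandbox FROM hr_refresh@refresh_link;
    ALTER PLUGGABLE DATABASE hr_sandbox OPEN;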

Refresh Approach

Now that we have an idea of what these tools and features gain us, let’s think about how to put them to use with database refreshes. Both of these approaches assume the use of Data Guard and PDBs. Having a true multitenant setup would be most efficient, but a single-tenant setup will work just fine. I would recommend you have a dedicated standby database for your refreshes, versus using the same standby you rely on for HA/DR. It would also make sense for the standby to be located on the same storage as the PDBs you will be refreshing. Neither of these is a requirement, but I think you will see better performance and lower the risk to your HA/DR plan.

The use case we will use for this example is a sandbox PeopleSoft database. This sandbox will be scheduled to refresh nightly, giving the business an environment to test and troubleshoot in with data from the day before. The refresh could also be run ad hoc if there is a need during the business day. So the goal is to have this fully automated and complete as fast as possible.

Clone Standby Approach

This approach will be to take a snapshot of our refresh standby database and clone it, overlaying our previous sandbox PDB. After this is completed, we will need to run custom SQL scripts or ACM steps to prepare the refreshed PDB. Finally, we will restore the refresh standby back to a physical standby database. This blog post by Franck Pachot gives a quick overview of the SQL commands needed to accomplish most of these steps; a rough sketch of the core SQL also follows the list below.

  1. Convert the refresh source physical standby to a snapshot standby.
  2. Open the refresh source PDB as read only.
  3. Create database link between the sandbox target CDB and the refresh source PDB.
  4. Drop the sandbox target PDB and create a clone from the refresh source PDB.
  5. Open the new clone sandbox PDB.
  6. Clean up the sandbox PDB.
    • Check for errors.
    • Patch the PDB to the patch level of the CDB, if needed.
  7. Run custom SQL scripts or ACM steps against sandbox PDB for PeopleSoft setup.
  8. Convert the refresh source standby back to a physical standby.
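Here is a rough sketch of the SQL behind these steps, reusing the hypothetical names from above; the exact statements will vary with your environment and patch tooling:

    -- Steps 1-2: on the refresh standby CDB
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
    ALTER DATABASE OPEN;
    ALTER PLUGGABLE DATABASE hr_refresh OPEN READ ONLY;

    -- Steps 3-5: on the sandbox target CDB (refresh_link points at the standby)
    DROP PLUGGABLE DATABASE hr_sandbox INCLUDING DATAFILES;
    CREATE PLUGGABLE DATABASE hr_sandbox FROM hr_refresh@refresh_link;
    ALTER PLUGGABLE DATABASE hr_sandbox OPEN;

    -- Step 6: check for errors; run datapatch against the PDB if patch levels differ
    SELECT name, cause, message FROM pdb_plug_in_violations WHERE status <> 'RESOLVED';

Step 8 is the same convert-back sequence shown in the Physical vs. Snapshot Standby section above.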

Snapshot Standby Approach

This approach is somewhat similar, except we won’t be doing any cloning. Instead, we will be using the actual snapshot standby itself as our database. Since we know this sandbox database will be refreshed nightly, we can stay in snapshot standby mode all day and then switch to physical standby mode briefly at night, applying redo data to sync up with our primary production database. After that is done, we will switch back to snapshot mode and run our custom SQL scripts and ACM steps. This will require a dedicated standby database and should only be used with a frequent refresh schedule. Since the redo data continues to ship during snapshot standby mode, it will start to back up. That redo backlog could become an issue if it grows too large, so you will need to do some analysis to make sure you can handle it based on your refresh interval. A quick way to gauge the backlog is shown below, and a sketch of automating the conversion cycle follows the step list.
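This is a minimal example query for that analysis, run on the standby; it counts shipped redo that has not yet been applied:

    -- On the standby: archived redo received but not yet applied
    SELECT COUNT(*) AS logs_waiting,
           ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb_waiting
    FROM   v$archived_log
    WHERE  applied = 'NO';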

  1. Create a sandbox PDB as a physical standby, with the primary database being production.
  2. Convert sandbox to a snapshot standby.
  3. Run custom SQL scripts or ACM steps against sandbox for PeopleSoft setup.
  4. Use the snapshot standby sandbox PDB as your normal database, connecting app and batch domains, etc.
  5. Wait until next refresh interval.
  6. Convert sandbox from snapshot standby to physical standby.
    • The restore point will be used and the waiting redo applied, syncing up with the current primary database state in production.
  7. Convert sandbox from physical standby to snapshot standby.
  8. Run custom SQL scripts or ACM steps against sandbox for PeopleSoft setup.
  9. Repeat.
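If the Data Guard broker manages the configuration, steps 6 and 7 reduce to two DGMGRL commands; the broker handles the shutdowns, the restore point, and restarting Redo Apply for you. A sketch, assuming the sandbox standby’s db_unique_name is sandbox (a hypothetical name):

    DGMGRL> CONVERT DATABASE 'sandbox' TO PHYSICAL STANDBY;
    DGMGRL> CONVERT DATABASE 'sandbox' TO SNAPSHOT STANDBY;

Wait for the apply lag to reach zero between the two conversions, so the standby fully syncs with production before it is snapshotted again.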

Conclusion

Those are just two ideas, but you can see that there are many variations of these approaches that will work. Leveraging Data Guard and PDBs really gives you many options to choose from. I have been using the Clone Standby approach recently and have packaged up the scripts, including bouncing app/batch domains, in Oracle Enterprise Manager as a job. This gives me push-button refreshes with a turnaround time under 20 minutes. I have been able to provide ad hoc refreshes for emergency production troubleshooting to the business multiple times in just a few months since implementing this approach. This is a very powerful tool to have, and it is well worth the effort to get your refreshes fast, efficient and automated.

#63 – Revisiting PS_APP_HOME

This week, Dan and Kyle talk about writing PTF tests for PeopleTools, running multiple IB domains and trying ACM again. Then, we revisit our strategies for managing PS_APP_HOME when applying selective maintenance.

Show Notes

#62 – PeopleTools Patch Testing

This week on the podcast, Dan and Kyle talk about load balancing all environments or some environments, Diagnostic Plugins and syntax coloring code. Then, they dive into the getting current and how to test PeopleTools Patches.

Show Notes

Announcing the Deployment Packages Course

We are excited to announce a new course, Mastering PeopleSoft Administration: Deployment Packages. This course will teach you how to make Deployment Packages (DPK) work for you. We begin the course by exploring the different components that make up the DPK, how you can configure the DPK, and how to enhance it. Then, the course shows you how to extend the DPK beyond its current functionality and use it to build servers exactly the way you need them.

Out of the box, the DPK is great for building demo environments. But most organizations have requirements for their environments that go beyond what the DPK can do. Those requirements can include custom signon pages, deploying additional software, and more. Learning how to use the DPK to deploy those requirements as you build environments can significantly benefit you. With the DPK, environment and server builds are repeatable, consistent and fast.

To learn more about the course, check out the videos below.

  • A course introduction video
  • A segment from the podcast where Kyle and Dan discuss why we built the course and what you can learn from it
  • A video on why you should buy a course from psadmin.io
  • Dan’s 48-minute talk on Enhancing and Extending the DPK

Last, there are 5 free lectures from the course available to watch right now. Click here to visit the course page and make sure to watch all the free lectures.

Course Introduction


Podcast Segment about the Deployment Packages Course

Why Buy a psadmin.io Course


Enhancing and Extending Deployment Packages Talk

Dan presented at the Upper Midwest Regional User Group about Enhancing and Extending the DPK (October 2016). This 48-minute talk goes over some DPK basics, introduces configuring the DPK, and covers your possibilities for extending the DPK. If this talk is exciting and you want to know more, the Deployment Packages Course will go into the details of this talk and show you how to accomplish it all.


#61 – Jolt Failover

This week on the podcast, Dan and Kyle launch a new course about Deployment Packages. Dan tests out a new text editor and discovers you can run OPatch on MOS. Kyle digs into Jolt Failover options with the IB and brainstorms some great configuration ideas.

Show Notes

#60 – PeopleSoft Test Framework 101

This week on the podcast, Dan tries a different Remote Desktop tool, using RSS feeds to monitor PeopleSoft data and comparing SQL Explain Plans with SQL Developer. Then Kyle gives a great overview of the PeopleSoft Test Framework and what you need to know before using it.

Show Notes

#59 – Security Deployment

This week on the podcast Dan and Kyle talk about the new CFO tool, applying CPU patches with the DPK, and how the DPK could improve with newer versions of Puppet. Then Dan digs into the new Security Deployment tool and how you can use it to improve security migrations.

Show Notes

#58 – Pagelets and Complaints

This week on the podcast, Dan and Kyle talk about a ransomware attack, load balancer health checks, applying POC patches and complain about minor annoyances in Change Assistant. Kyle shares a story about a misbehaving pagelet and how he investigated the issue.

We want to make this podcast part of the community discussion on PeopleSoft administration. If you have comments, feedback, or topics you’d like us to talk about, we want to hear from you! You can email us at podcast@psadmin.io, tweet us at @psa_io, or use the Twitter hashtag #psadminpodcast.

You can listen to the podcast here on psadmin.io or subscribe with your favorite podcast player using the URL below, or subscribe in iTunes.

Podcast RSS Feed

Show Notes

Monitoring WebLogic and Java

Lately, I have had interest in monitoring WebLogic’s performance. Since WebLogic is built on Java, there are some standard tools we can use to look into the Java Virtual Machine (JVM). We’ll cover two of those tools in this post: JConsole and VisualVM. Both JConsole and VisualVM are included in the Java Development Kit, so they are already on your server. These tools will give you information about the JVM used to run WebLogic and can help you tune your web servers.

JMX

To get monitoring data out of WebLogic’s JVM, we need to enable JMX. Java Management Extensions (JMX) is a monitoring technology built into Java. Applications that run on Java can build instrumentation into the application to provide performance data. Even without application-specific instrumentation, the JVM will provide CPU, memory, thread, and heap statistics.

To enable JMX for WebLogic, we’ll update the setEnv.cmd or setEnv.sh file. At the end of the JAVA_OPTIONS line, add these flags:

-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8888 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false

There are 4 flags we pass to Java when the JVM is started:

  1. Enabling JMX Remote connections
  2. JMX Remote Port
  3. Requiring SSL Connections
  4. Authentication for JMX

For testing, I’ve turned off SSL and Authentication, but if you are enabling JMX across all your servers I recommend you turn both options on. For the JMX Port, pick a port value that is unique for each instance of the JVM. Each WebLogic and App Server instance will have its own JVM. For more information on configuring JMX, this is a link to the official documentation.

If you are on Windows and updated the setEnv.cmd file, you will want to re-install the service that starts the PIA domain. The JAVA_OPTIONS parameters are stored in the registry when you create the service. If you update setEnv.cmd, you need to recreate the service (or manually update the registry).

Now that JMX is enabled on our domains, let’s look at a few tools to help us monitor our JVMs.

JConsole

JConsole is a utility included with the JDK download; under JAVA_HOME\bin you’ll find jconsole.exe. To start, we’ll run JConsole from the web server where we enabled JMX (instead of our desktop). Open JConsole and it will ask you to connect to a JMX process. You have two options: Local Process and Remote Process. We’ll use the remote process option and connect to our web server with localhost:8888. We don’t need a username or password since we passed the flag jmxremote.authenticate=false.

jconsole01

After connecting, you’ll get a message asking about your insecure connection. Click “Insecure” to continue. On the main page, we see 4 graphs related to the JVM.

jconsole03

These graphs give you a good overview of the JVM status. The CPU graph shows how much of the CPU the JVM is using, and the Threads graph gives a good indication of the workload on the JVM. The most useful view is the Memory graph. Getting your JVM heap sized correctly can make a big difference in performance. The graph should follow a sawtooth pattern as Garbage Collection runs.

jconsole04

You don’t want Garbage Collection to run too often, or the usage to remain too high after Garbage Collection. This graph helps with getting the heap size right for your web server. (You can find more tuning information here.)

VisualVM

VisualVM is another utility included with the JDK download and is also under JAVA_HOME\bin. We’ll start VisualVM on the server as well by running jvisualvm.exe --console new.

visualmv02

When VisualVM opens, we create a new connection by right-clicking on “Local” and selecting “Add JMX Connection”. Fill in the port number and select “Do not require SSL connection”.

visualvm03

VisualVM shows us similar data to JConsole, but I think it looks a little nicer. Under the Monitor tab, you can also force the JVM to run a Garbage Collection. For the most part, these two applications are similar.

visualvm03b

Remote JMX Connections

We have run both applications on the server to connect to JMX, but these applications are more useful if we can connect to the servers remotely. By default, JMX will only accept local connections. To enable remote connections to JMX, we have to pass this flag:

 -Dcom.sun.management.jmxremote.local.only=false

After you add that parameter to your setEnv.cmd JAVA_OPTIONS line, restart the web server. On a different computer or server, launch VisualVM or JConsole. Create a remote connection to JMX on the server. In the Connection box, enter the server name and port for the JMX instance.

visualvm05

JMX Authentication

Once you get the basic configuration in place, you will want to enable authentication for connections to the JMX instance. The default JMX password files live in the JDK folder, and changes there affect all domains using that JDK. Instead, we will use a JMX password file for each web server domain.

  1. Open the file JAVA_HOME\jre\lib\management\jmxremote.access.
  2. Add the line psMonitor readonly to the bottom of the file and save. This adds a new user named psMonitor with read-only access to any JMX instance using this JAVA_HOME.
  3. Copy the file JAVA_HOME\jre\lib\management\jmxremote.password.template to PS_CFG_HOME\webserv\jmxremote.password.
  4. Open the new jmxremote.password file.
  5. Add the line psMonitor test123 to the bottom of the file and save. This line sets the password for the psMonitor user. To give each web server domain a different password, set a unique password in this file under each PS_CFG_HOME.
  6. Open the setEnv.cmd file and add these parameters:

    -Dcom.sun.management.jmxremote.password.file=PS_CFG_HOME\webserv\jmxremote.password -Dcom.sun.management.jmxremote.authenticate=true
    
  7. Restart the web server for the new parameters to take effect. (A consolidated example of the full flag set follows this list.)
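Putting the pieces from this post together, the complete set of JMX-related flags on the JAVA_OPTIONS line would look something like this, shown one per line for readability (replace PS_CFG_HOME with the real path, and consider jmxremote.ssl=true with proper certificates in production):

    -Dcom.sun.management.jmxremote
    -Dcom.sun.management.jmxremote.port=8888
    -Dcom.sun.management.jmxremote.ssl=false
    -Dcom.sun.management.jmxremote.local.only=false
    -Dcom.sun.management.jmxremote.authenticate=true
    -Dcom.sun.management.jmxremote.password.file=PS_CFG_HOME\webserv\jmxremote.password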

Now that we have a web server configured to run JMX with authentication, we will create another connection in VisualVM that uses the username and password.

  1. Right-click on the remote server and select “Add JMX Connection”.
  2. Enter the server name and port.
  3. Enter psMonitor for the Username and test123 for the Password.
  4. Select “Do not require SSL connection”.
  5. Click OK.

jmxauth01

jmxauth02