Tuesday, October 8, 2019

Adventures in the Land of Oracle and the PeopleSoft Test Framework

Hey everybody,

    I've been writing test cases with the PeopleSoft Test Framework for the past year and a half, and I've found it hard to find any real, objective feedback regarding how this framework operates and whether it's a tool that businesses actually want to use.  So, I thought I would scribble down my thoughts on the matter and talk about testing with this interesting tool.  Oh, and just to be clear, I have a strong opinion that I am sure you will understand by the end.  (Scroll to the last line if you want the spoiler.)

   First, what is this thing about which I speak?  Oracle bought PeopleSoft some years back, and a tester on that PeopleSoft team wrote a UI automation tool to test new features as they come out.  He called this thing the PeopleSoft Test Framework (which I will call PTF).  It is included with PeopleSoft, and it is a database-driven layer on top of Selenium, plus a UI to write/record and play back tests.

   When the County hired me to come on board and write their tests, I had never touched PeopleSoft before, and all I knew about it was that big companies use it for HR-type stuff.  I think they hired me because the PTF UI had just enough clues for me to look at it and see the similarities to Selenium development.

    Because I know Oracle gets a little litigious at times, I'll just describe the UI rather than post a screenshot.  The UI feels like a 90s-era multi-document interface, complete with awkward elements that won't quite fit right on the page, and the inevitable empty spaces that almost always take up a third of the screen.  A sidebar along the left side displays a folder tree, and inside that you will find tests and test shells.  Test shells are containers from which you execute tests in a specific order.  The main meat of the program comes from tests - which look a lot like test shells - and are displayed as a table of values.  Each row is a test step, and the steps have some general properties like ID, comments, and a checkbox to toggle a step active/inactive; however, the heart of each test is contained in the Type, Action, Recognition, Parameters, and Value fields.

     Type points your action at a type of element on the page, at the browser itself, or back at the PTF tool to run a function.  Each Type has a list of supported Actions (e.g. the type "button" has a "click" action).  Recognition is mostly used for HTML element selectors, but there are exceptions, like the "Page" type, which uses GoTo and a unique identifier for the page.  You can pass Parameters to a special type of test called a library test, and the "Get Property" Action on many Types will use Parameters to determine the exact property you want as well as which Variable you will store it in.  Finally, every test has a set of Test Cases, which use the Value field to create variants of the test.

Here is an example of what a test case would look like:
Type      Action       Recognition                   Parameters   Value
Browser   Start_Login
Page      Prompt       ManagePosition.GBL                         add
Page      PromptOk
Text      Set_Value    name=POSITION_DATA_EFFDT$0                 &effdt

     The above opens a new browser and logs in with a test user specified in the global PTF configuration (more on that later).  It loads a page with a value of "add", which actually loads a URL within the PTF environment (again from the global configuration), then clicks over to a section to create a new user.  PromptOk is a special action that clicks that add user command.  Finally, it enters text into the textbox on the page with the desired name.  The Value here is actually a variable, stored globally in the database.

     So, I said "globally" there a couple of times.  PTF uses a global configuration for each environment; this locks down the base URL for the tests, specifies a browser, and securely stores a username/password for use in your tests.  Also, because EVERYTHING is stored in the database, so are the variables; in fact, you can view your PTF variables using the regular PeopleSoft web UI.  There are also certain permissions in PeopleSoft security that allow a user to create, modify, or run PTF-related things.  I'm not getting into that here.

     Now, I described the UI a lot, but if you go down the road of learning about PTF, everyone promotes the test recorder feature.  It does its job well, but it only works in Internet Explorer, and not even Microsoft wants to support that anymore; still, it will record what you click on and where you go.  There are also a few configuration options for making it recognize when you are just clicking through menus, and it will deactivate those steps in favor of a single goto action (however, it saves both methods to the test).

     Once you record some tests, it is time to play them back, which is the part of PTF I hate the most.  When you play back tests, they run extremely slowly.  I don't know if it's something the County has done or if everyone has the same problem, but the tests are only capable of filling out about one text box per minute.  I have seen it take 5 minutes just to log into the system.  In theory, your tests would be running overnight when nobody is sitting around watching the paint dry, but I have had a lot of problems getting command-line execution to work correctly, and nobody can touch the keyboard or mouse while the tests are running.  I have also had a whole host of problems getting it to run in a Windows VM, and you can wave goodbye to Linux support.

    We found that things ran well as long as we had a dedicated computer sitting in a corner cube to run the tests on, and we didn't mind leaving the computer unlocked and signed in when we walked away... this part always makes me cringe.  I dread the day someone causes havoc while logged in with my credentials because of this issue.  I have found that if I log in using remote desktop, things work okay most of the time, but then I need to babysit the tests because it will occasionally throw errors when it fails to type a value into some random text box; I suspect this is some sort of network-related snafu, but it makes execution hard.  If you run in Hyper-V, get ready for random crashes and network failure messages, and god help you if you are connected to a VPN.

     As you might have picked up on at this point, running the tests requires constant babysitting.  I was told that, once upon a time, the batch of tests passed on to me ran smoothly and nothing ever went wrong, but in my experience, this has never been the case.  You will run a test shell, that shell will call five test cases that run great, and then the sixth one fails.  You review the XML log in PTF's internal log viewer and find that it didn't type in a required field, and the test failed halfway through creating a complex object.  Fixing the problem looks like this:

  1. Take the shell you are running and deactivate the tests that succeeded.
  2. Click into the failed test and make sure you are viewing the correct test case.
  3. Deactivate whatever steps succeeded.
  4. Possibly add an extra step to open the partially created object and edit it.
  5. Deactivate all the steps in the shell after the object is created.
  6. Re-run the test... the mysterious failure goes away!
  7. Remove or deactivate the extra step you may have added in step 4.
  8. Re-activate all the steps that were deactivated in step 3.
  9. Deactivate everything that has already been run in the shell.
  10. Re-activate anything that was not run in the shell.
  11. Run the shell again and pray you don't get another failure like that.
  12. Possibly repeat steps 1-11 several times, but for a different failure in another part of the shell.

As a result of running these automated tests a few times, I know in detail all of the steps created in each script.  I know that if one part of a test fails, I need to manually go into the database and correct the data by hand with SQL, or a compare report will fail.  I have an intimate knowledge of this kind of thing just from troubleshooting my tests and saying... well, let's see what happens if I just push forward with the tests even though that step failed... or I'll just repeat this step and create a new position rather than correcting an incorrect one.

Getting support from Oracle is painful.  You always get sent to an overseas support center where they ask the same 10 questions before escalating your issue; meanwhile, you only get one message per day unless you decide to work in India's time zone.  Oftentimes, issues will magically resolve themselves after a week with no action taken on your part.

There is also no community support.  Or very little.  Most forum topics are people asking for success stories with PTF, and the Oracle PTF focus group hasn't proven to be a source of information.  The only documentation Oracle has on the tool is a chapter in the PeopleTools documentation, plus a $2,000 course that pretty much just walks you through the UI, taught by an instructor who will not understand in-depth technical questions about the tool.

However, in the back of my mind, I keep looking back on my previous decade of experience before coming to work here.  I think back to the days of coding tests in SpecFlow/C#/Selenium... or Mocha/WebdriverIO/JavaScript... heck, the other day, just for laughs, I implemented a few of my tests in Robot Framework with Python... and that is when I lost all hope for PTF.

I admit, the idea of hiring a developer to write automated tests can be intimidating, but if you have a budget for PeopleSoft, you have a budget for that.  The thing about Robot Framework specifically that really won my admiration is actually the report it generates.  Just google Robot Framework tutorials on YouTube, or you might be able to access Lynda.com for free via your public library's website (you don't even need to leave home!)  Why do I like it so much?  I'm glad you appear to care!

     Robot Framework has a nice keyword-driven syntax.  So, you can start by describing your test in English and gradually convert each sentence into code.  For example, start by writing a user-story-like suite name: "As a ___ I want to do ___ so that ____ happens."  Then elaborate with a series of "Given ___ when someone does ____ then ____ happens" tests.  Each line of that file is a keyword, so you can create a hierarchy of .robot files that define each keyword as a series of steps.  There are tons of libraries to add database, API, and even Selenium/Appium functionality to your tests, and if you really need custom functionality, everything is open source, and there are even tutorials on how to create your own custom libraries written in Python.
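
To make that concrete, here is a rough sketch of what such a suite might look like.  This is an illustration, not code from our project: the URL, element locators, and variable values are all made up, and the web keywords come from the standard SeleniumLibrary.

*** Settings ***
Documentation     As an HR user I want to add a position so that I can hire someone into it.
Library           SeleniumLibrary

*** Variables ***
# Every value here is a placeholder for illustration only.
${BASE_URL}       https://hr.example.com/psp/HRTEST/
${BROWSER}        chrome
${HR_USER}        TESTUSER
${HR_PASS}        secret
${EFFDT}          01/01/2020

*** Test Cases ***
Add A New Position
    Given I am logged in as an HR administrator
    When I add a position effective on the test date
    Then the position should be saved

*** Keywords ***
I am logged in as an HR administrator
    # The element locators below are invented for this sketch, not real PeopleSoft ids.
    Open Browser    ${BASE_URL}    ${BROWSER}
    Input Text      id:userid    ${HR_USER}
    Input Text      id:pwd       ${HR_PASS}
    Click Button    id:Submit

I add a position effective on the test date
    Input Text      id:POSITION_DATA_EFFDT    ${EFFDT}
    Click Button    id:Save

The position should be saved
    Page Should Contain    Saved

The nice part is that the test case itself reads like plain English while the plumbing lives in the keywords underneath; in a bigger suite you would move those keywords into their own resource files and import them with a Resource setting.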

     The tests support running locally, pointing at your own Selenium grid, or pointing at a cloud provider like BrowserStack or Sauce Labs.  You can even run multiple browsers simultaneously.  Best of all, though, is that report file.  It gives you a great summary that breaks down each file run into a bar graph of how many tests in that file passed/failed.  When you hit failures, clicking on the failed test opens a detailed log page, and you can drill down to the exact step that failed (complete with screenshots.)
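
As a sketch of what that looks like in practice (the grid URL and site below are placeholders), SeleniumLibrary's Open Browser keyword takes an optional remote_url, and each browser gets an alias you can switch between:

*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Drive Two Browsers At Once
    # A local browser on the machine running the tests
    Open Browser    https://example.com    chrome    alias=local
    # The same keyword pointed at a Selenium Grid hub (a cloud provider works the same way)
    Open Browser    https://example.com    firefox    alias=grid
    ...    remote_url=http://grid.example.com:4444/wd/hub
    # Both sessions stay open; jump between them by alias
    Switch Browser    local
    Capture Page Screenshot
    Switch Browser    grid
    Capture Page Screenshot
    [Teardown]    Close All Browsers

Pointing at a cloud provider is mostly a matter of swapping remote_url for the provider's hub URL and credentials.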

     As a proper test automation framework, Robot Framework gives you the ability to write custom code that automatically runs before/after all the tests, as well as before/after each test.  Plus, there is a Jenkins plugin that integrates the whole thing into your build reporting pipeline perfectly.  Using a different CI server?  Don't worry, you just need to specify a folder to save the report in, and it will contain everything you need.  Results are in XML by default, but you can change formats if that works better for you.  I'm sure people will make more plugins for the Atlassian and Microsoft crowds too.
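
Those hooks are just settings at the top of a suite file; here is a minimal sketch, where the Log messages are stand-ins for whatever setup or data seeding you actually need:

*** Settings ***
Library           SeleniumLibrary
Suite Setup       Log    Runs once before all tests in this file (e.g. seed test data)
Suite Teardown    Close All Browsers
Test Setup        Log    Runs before every single test
Test Teardown     Run Keyword If Test Failed    Capture Page Screenshot

Run the suite with an output directory (robot --outputdir results tests/) and the results folder ends up with output.xml plus log.html and report.html, which is what the Jenkins plugin, or any other CI server, picks up.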

The final amazing part is that there is a ton of documentation and an active community.  While writing a few tests as a proof of concept to compare against PTF, I found a link to almost everything I needed on the main website, and nearly all of the GitHub documentation pages followed the same format.  Nearly every question I came up with already had an answer on Stack Overflow, and the couple I had to ask were answered pretty quickly.

So, my conclusion is that it's really hard to compare PTF with a Selenium framework.  The theory behind PTF is that your PeopleSoft experts will be able to record tests and play them back rather than running tests manually, and sometimes it works that way, but my experience has not gone that way at all.  The reality has been that I need to fall back on my Selenium experience and web development background constantly.  I frequently end up using the browser's developer tools to inspect elements and explore the page DOM to better define my selectors.  I've spent so much time debugging the tests that I constantly wonder if it really is faster than running the tests manually.

That said, let me circle back for a second to the topic of learning curve, because that seems to be the #1 thing going for PTF at this point.  For that, I point you to Selenium IDE, a wonderful tool that gives you almost all of the functionality of PTF.  Just record your tests in a browser, and you can play them back.  However, when troubleshooting, in both situations you will need a lot of technical knowledge to parse through the depths.  So, if you are writing Selenium tests in C#, PTF might be easier, but it's not nearly as powerful.  JavaScript probably has just as steep a learning curve.  However, I think Python with Robot Framework is the easiest way to go.

TLDR:

PTF has poor documentation, little-to-no community support, and inadequate corporate support; it's limited to controlling a single browser at a time, connected to a single PeopleSoft environment at a time, on a dedicated computer, and it has no Continuous Integration server support.  On the other hand, Selenium has tons of documentation, a massive community, supports multiple simultaneous browsers, can handle multiple URLs to any site simultaneously, can run in the background on your workstation, and was built to be run from a Continuous Integration server.  Additionally, Selenium is way faster, with only a slightly steeper learning curve in some cases.
   

Anyway... I just had to vent for a few minutes... PTF is driving me nuts right now.

Later,

     SteveO

Spoiler:  I have grown to hate PTF and will never recommend that anyone stray away from Selenium.