Tuesday, July 3, 2007

Wow, an Update!

Well, much has happened since my previous update.

The program is modeled after the OSCAR Wizard, where a few main steps lead you to an end result.

For OSCAR Bench there are 5 distinct steps:
  1. Selection
    1. [script] pre_configure
  2. Configuration
  3. Review
    1. [script] post_configure
  4. Execution
    1. [script] post_execute
  5. Results
Like OSCAR, different steps offer various scripts that may be run.
  • pre_configure
    • Prepare any default configuration
  • post_configure
    • Take the user input and do any final processing on it before the benchmark is executed.
  • post_execute
    • Prepare the results
For simplicity I may remove post_execute and just have a single execute script that is called. This script would be responsible for launching the benchmark as well as preparing the results, all in one go. I have a feeling the framework will continue to adapt throughout this project; no matter how much thought I put into it, I still notice glaring flaws later on.
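The hook mechanism above can be sketched in a few lines of Perl. This is a minimal sketch, not the actual OSCAR Bench code: the assumption that each benchmark directory holds optional executable scripts named after the hooks (pre_configure, post_configure, post_execute) is mine.

```perl
#!/usr/bin/perl
# Sketch of a per-step hook runner. Assumes (hypothetically) that each
# benchmark directory may contain optional executable scripts named
# pre_configure, post_configure, and post_execute.
use strict;
use warnings;
use File::Spec;

sub run_hook {
    my ($bench_dir, $hook) = @_;
    my $script = File::Spec->catfile($bench_dir, $hook);
    return 1 unless -x $script;    # hooks are optional; skip if absent
    system($script) == 0
        or die "hook '$hook' failed for $bench_dir: exit " . ($? >> 8) . "\n";
    return 1;
}
```

The wizard would then call something like `run_hook($dir, 'pre_configure')` when entering Selection, and so on for the later steps.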

The code is being written in 3 distinct layers:

  1. Top level Modules
    1. OSCARBench
    2. OSCARBench::Benchmark
  2. Interfaces
    1. OSCARBench::Benchmark::Detect
    2. OSCARBench::Benchmark::Configuration
    3. OSCARBench::Benchmark::Results
  3. Lower Level Tools
    1. XMLParser, XML Validator, Database Connectivity, Utilities
The user interfaces only reference the top-level modules. OSCARBench can determine what benchmarks are installed on a system, as well as whether they are valid and ready to use. Benchmark is a fully object-oriented class that provides an interface to everything there is to know about a benchmark: its results, its configuration, which scripts it needs executed, and where it is located. The top-level modules make use of the interfaces, which in turn call the low-level tools. This lets me answer questions like 'Where are benchmarks installed?' without having to know where that information comes from or how it is stored. A goal of this design is to let users save their results / configurations not only locally but possibly to another location, such as a website where others may see what type of performance they get.
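To make the layering concrete, here is a rough sketch of the kind of interface the Benchmark class might expose. The constructor arguments and method names here are illustrative assumptions on my part, not the real API; in the layered design described above, the data would come from the Detect/Configuration interfaces rather than from the caller.

```perl
#!/usr/bin/perl
use strict;
use warnings;

{
    # Illustrative shape only -- these method names are assumptions.
    package OSCARBench::Benchmark;

    sub new {
        my ($class, %args) = @_;
        # In the real design, location/scripts would be filled in by
        # the Detect and Configuration interfaces, not by the caller.
        my $self = {
            name     => $args{name},
            location => $args{location},
            scripts  => $args{scripts} || [],   # e.g. ['pre_configure']
        };
        return bless $self, $class;
    }

    sub name     { $_[0]->{name} }
    sub location { $_[0]->{location} }
    sub scripts  { @{ $_[0]->{scripts} } }
}
```

A user interface would ask the top-level OSCARBench module for the installed Benchmark objects and call accessors like these, never touching the XML parser or database code directly.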

Well, that stuff is boring and very academic, so on to the cool stuff... pictures!


The selector page showing the HPCC benchmark and HPL. I have not entered the descriptions, so they just show as blank ><. I will likely remove the 'Help' tab, as that was left over from a pre-Wizard GUI.


I need to determine good classifications for each benchmark. I also need to figure out how to get Qt to launch a web browser instead of trying to parse HTML links itself. It is not visible now, but 'On The Web' contains a collection of links to more information. My view is that many very smart people have already written about these tools; rather than try to compete with them, I would rather direct the user to those resources.



The configuration and results pages both appear empty, since I am still working on the Config panel. Anyone know how to break open a Qt layout and add elements via a loop? For some reason I get no error, yet they do not show -- very likely I am forgetting a key setting, but it will show up sooner or later!



The Execution panel is designed to be informative. First, there is a progress bar; benchmarks can take on the order of hours to complete, so signs that the system has not crashed are nice. Second, the output of whatever the execute command runs is displayed in the window. This separates OSCAR Bench output from benchmark output. Like OSCAR, the program spits out quite a bit of output to the command line, depending on DEBUG settings. The execute mechanism is also designed to allow for a kill switch: the kill button kills the execution without killing the window. This may prove useful to people who only want to see a few bits of a benchmark without letting it run its full course. Of course, killing it will prevent any results from being displayed, but I think it makes the program a little more user friendly since it's easy to tell what is going on.
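The run-and-kill idea can be sketched like this: launch the command as a child process whose pid we keep, stream its output line by line, and let a kill button send a signal to that pid. The real ExecuteFrame lives inside the Qt GUI, so treat this Perl sketch as an illustration of the mechanism only.

```perl
#!/usr/bin/perl
# Sketch: launch a command, stream its output line by line, and keep
# the pid so a kill button can stop it without closing the window.
use strict;
use warnings;

sub launch {
    my (@cmd) = @_;
    my $pid = open(my $out, '-|', @cmd)
        or die "cannot launch @cmd: $!\n";
    return ($pid, $out);    # caller reads $out; kill('TERM', $pid) to stop
}

my ($pid, $out) = launch('echo', 'benchmark output');
while (my $line = <$out>) {
    print $line;            # in the GUI this would append to the output window
}
close $out;
```

A kill button would simply do `kill 'TERM', $pid` on the stored pid, leaving the window (and the already captured output) intact.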



Here you can see the output from an rpmbuild command. The ExecuteFrame is designed to be independent of OSCAR Bench: it can execute anything. The only drawback is that determining the steps/progress for the progress bar is nearly impossible without writing additional code. -- Anyone know a good way to solve this? Pass an arbitrary command to something and somehow determine how long it will take! I feel that is impossible. ExecuteFrame does, however, have several public methods that allow you to control it from outside.

Well, that's all for now; tomorrow should finish the Configuration panel. Then I need to work more on the flow of execution. I also need to disable / enable the back/next buttons appropriately.
