Tuesday, July 3, 2007

Wow, an Update!

Well, much has happened since my previous update.

The program is modeled after the OSCAR Wizard, where a few main steps take you to an end result.

For OSCAR Bench there are 5 distinct steps:
  1. Selection
    1. [script] pre_configure
  2. Configuration
  3. Review
    1. [script] post_configure
  4. Execution
    1. [script] post_execute
  5. Results
Like OSCAR, different steps offer various scripts that may be run.
  • pre_configure
    • Prepare any default configuration
  • post_configure
    • Take the user input and do any final processing on it before the benchmark is executed.
  • post_execute
    • Prepare the results
For simplicity I may remove post_execute and just have a single execute script that is called. This script would be responsible for launching the benchmark as well as preparing the results, all in one blow. I have a feeling the framework will continue to adapt throughout this project; no matter how much thought I put into it, I still notice glaring flaws later on.
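The stage-to-script mapping above could be sketched roughly like this. This is a minimal sketch only: the hook names come from the list above, but the directory layout and everything else here is hypothetical (the real implementation is in Perl).

```python
import os
import subprocess

# Hooks a benchmark may provide, keyed by the wizard stage that
# triggers them (from the five-step list above). Stages with no
# entry (Configuration, Results) have no hook.
STAGE_HOOKS = {
    "selection": "pre_configure",
    "review": "post_configure",
    "execution": "post_execute",
}

def run_stage_hook(benchmark_dir, stage):
    """Run the hook script for `stage` if this benchmark provides one.

    Returns the script's exit code, or None when the stage has no
    hook or the benchmark does not ship that script, so missing
    hooks are simply skipped.
    """
    hook = STAGE_HOOKS.get(stage)
    if hook is None:
        return None
    script = os.path.join(benchmark_dir, "scripts", hook)  # hypothetical layout
    if not os.path.exists(script):
        return None
    return subprocess.call([script])
```

The nice property of this shape is that a benchmark only implements the hooks it cares about; the wizard just asks at each stage.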

The code is being written in 3 distinct layers

  1. Top level Modules
    1. OSCARBench
    2. OSCARBench::Benchmark
  2. Interfaces
    1. OSCARBench::Benchmark::Detect
    2. OSCARBench::Benchmark::Configuration
    3. OSCARBench::Benchmark::Results
  3. Lower Level Tools
    1. XMLParser, XML Validator, Database Connectivity, Utilities
The user interfaces only reference the top-level modules. OSCARBench can determine what benchmarks are installed on a system, as well as whether they are valid and ready to use. Benchmark is a fully object-oriented class that provides an interface to everything there is to know about a benchmark: its results, its configuration, which scripts it needs executed, and where it is located. The top-level modules make use of the interfaces, which in turn call the low-level tools. This lets me know things like 'where benchmarks are installed' without having to know where that information comes from or how it is stored. A goal of this design is to let users save their results / configurations not only locally but possibly to another location, such as a website where others may see what type of performance they get.
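The layering idea can be illustrated with a toy sketch. None of these class names match the real Perl modules (OSCARBench::Benchmark::Results etc.); the point is only how each layer sees just the one below it, so the storage backend can later become a database or a website without touching Benchmark.

```python
class ResultsStore:
    """Lower-level tool: where results actually live. Could be a
    local file, a database, or a remote site -- callers never know."""
    def __init__(self):
        self._data = {}

    def save(self, name, results):
        self._data[name] = results

    def load(self, name):
        return self._data.get(name, {})


class ResultsInterface:
    """Middle layer: translates benchmark-level requests into
    whatever calls the underlying tool needs."""
    def __init__(self, store):
        self._store = store

    def record(self, benchmark, results):
        self._store.save(benchmark, results)

    def fetch(self, benchmark):
        return self._store.load(benchmark)


class Benchmark:
    """Top level: the only thing a user interface ever touches."""
    def __init__(self, name, results_iface):
        self.name = name
        self._results = results_iface

    def results(self):
        return self._results.fetch(self.name)
```

Swapping ResultsStore for one backed by a remote database would change nothing above the bottom layer, which is exactly the property the design is after.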

Well, that stuff is boring and very academic, so on to the cool stuff...... pictures!


The selector page showing the HPCC benchmark and HPL. I have not entered the descriptions, so they just show as blank ><. I will likely remove the 'Help' tab, as it is left over from a pre-Wizard GUI.


I need to determine good classifications for each benchmark. I also need to figure out how to get Qt to launch a web browser instead of trying to parse HTML links itself. It is not visible now, but 'On The Web' contains a collection of links to more information. My view: many very smart people have written about these tools, and rather than try to compete with them, I would rather direct the user to those resources.



The configuration and results pages both appear empty since I am still working on the Config panel. Anyone know how to break into a Qt layout and add elements via a loop? For some reason I get no error, yet they do not show -- very likely I am forgetting a key setting, but it will show up sooner or later!



The Execution panel is designed to be informative. First, there is a progress bar; benchmarks can take on the order of hours to complete, so signs that the system has not crashed are nice. Second, the output of whatever the execute command runs is displayed in the window. This separates OSCAR Bench output from benchmark output. Like OSCAR, the program spits out quite a bit of output to the command line depending on DEBUG settings. The execute mechanism is also designed to allow for a kill switch: the kill button kills the execution without killing the window. This may prove useful to people who only want to see a few bits of a benchmark without letting it run its full course. Of course, killing it will prevent any results from being displayed, but I think it makes the program a little more user friendly since it's easy to tell what is going on.



Here you can see the output from an rpmbuild command. The ExecuteFrame is designed to be independent of OSCAR Bench; it can execute anything. The only drawback is that determining the steps/progress for the progress bar is nearly impossible without writing additional code. -- Anyone know a good way to solve this? Pass an arbitrary command to something and somehow determine how long it will take! I feel that is impossible, but ExecuteFrame has several public methods that allow you to control it from outside.
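The core of the ExecuteFrame pattern (run an arbitrary command, stream its output, allow a kill without tearing down the window) looks roughly like this in Python. This is only a sketch of the idea, not the actual Qt code: the class name and callback are made up, and a real GUI would read the pipe from a worker thread or an event loop so the window stays responsive.

```python
import subprocess

class Executor:
    """Run an arbitrary command, hand its output to a callback line
    by line, and support a kill switch that stops only the child."""

    def __init__(self):
        self._proc = None

    def run(self, argv, on_line):
        # Merge stderr into stdout so all benchmark output goes to
        # one place, separate from OSCAR Bench's own logging.
        self._proc = subprocess.Popen(
            argv,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
        )
        for line in self._proc.stdout:
            on_line(line.rstrip("\n"))  # e.g. append to the output window
        return self._proc.wait()

    def kill(self):
        # The kill switch: terminate the benchmark, keep the GUI alive.
        if self._proc is not None and self._proc.poll() is None:
            self._proc.terminate()
```

On the progress question: for a truly arbitrary command there is indeed no general answer; the usual compromise is to either switch the bar to an indeterminate "busy" mode, or let each benchmark's scripts report their own step count through those public methods.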

Well, that's all for now. Tomorrow I should finish the Configuration panel. Then I need to work on the flow of execution some more. Also I need to disable / enable the back/next buttons appropriately.

Friday, April 20, 2007

Packaging

I have been reviewing feedback from the developers, and the consensus is a single package with multiple benchmarks included. To do this I want to create a framework so that adding / removing benchmarks is easy. Each benchmark will have a config file (XML, likely) that specifies default parameters, GUI options, and other benchmark-specific configuration. There will also be a general config file that specifies each benchmark, its location, an MD5 sum for the rpm/tarball, and global options such as which benchmarks run by default when you install OSCAR Bench, where the output should go, where the online database is, and other values that pertain to every benchmark.
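A per-benchmark config file in that scheme might look something like the following. This is purely illustrative: no element or attribute names have been decided, and the file name and checksum are placeholders.

```xml
<!-- Hypothetical sketch only; the real schema is not designed yet. -->
<benchmark name="HPL">
  <package file="hpl.tar.gz" md5sum="(checksum of the tarball)"/>
  <defaults>
    <param name="NB" value="128"/>
  </defaults>
  <gui>
    <option param="NB" label="Block size" type="int"/>
  </gui>
</benchmark>
```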

When the user runs my package, the first thing to happen is that the global config is parsed; this gathers the information needed to let the user select which benchmarks to run. It also provides the location to find and verify each benchmark on the system. Then, for each benchmark, the benchmark-specific config is parsed to define that benchmark's configuration. When all of this data is loaded, it is presented to the user. The user then has the option to change settings (each benchmark's settings will be a separate screen/panel, not all clumped together, since that would make no sense). At some point during this stage the 'smart' configuration will take place -- smart meaning learning characteristics about the cluster that influence the benchmark, such as N, P, Q, and NB in HPL/HPCC. I would also like to give the user the option not to use my packaged ATLAS but instead run the ATLAS Makefile, since it is very good at configuring itself.
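For HPL's parameters, the 'smart' configuration could start from the usual rules of thumb: choose N so the N x N double-precision matrix fills roughly 80% of total memory (8 bytes per element, rounded to a multiple of NB), and pick P x Q as close to square as possible with P <= Q. A sketch, with the function names and the 80% fraction being my assumptions rather than anything decided:

```python
import math

def suggest_hpl_n(total_mem_bytes, fraction=0.80, nb=128):
    """Suggest HPL's problem size N from total cluster memory.

    Rule of thumb: the N x N matrix of doubles (8 bytes each)
    should fill about `fraction` of memory; N is then rounded
    down to a multiple of the block size NB.
    """
    n = math.sqrt(fraction * total_mem_bytes / 8)
    return int(n) // nb * nb

def suggest_pq(num_procs):
    """Pick a P x Q process grid with P <= Q, as square as possible."""
    p = int(math.sqrt(num_procs))
    while num_procs % p:
        p -= 1
    return p, num_procs // p
```

For example, a cluster with 8 GiB of total memory and 16 processes would get N = 29184 and a 4 x 4 grid under these defaults.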

After the user has reviewed the configurations they will then be taken to a screen where they can:
1) Run all benchmarks
2) Run a specific benchmark
3) Reconfigure a specific benchmark

I still need to put some more thought into the config files, since I want the package to be extensible. Ideally I want to be able to add benchmarks later simply by writing scripts to a) configure the benchmark, b) run the benchmark, and c) parse the results. Then, with those scripts and the tarball/rpm, I can just add it to the global config, create a benchmark config, and add it into the opkg.
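The a/b/c script convention would also make benchmark discovery trivial: scan a directory and accept anything that provides all three scripts. A sketch, assuming a flat layout and script names that are entirely my invention:

```python
import os

# The three scripts every benchmark must provide under this
# (hypothetical) convention: configure, run, parse the results.
REQUIRED_SCRIPTS = ("configure", "run", "parse_results")

def discover_benchmarks(root):
    """Return {name: path} for every directory under `root` that
    provides all three required scripts, skipping incomplete ones."""
    found = {}
    if not os.path.isdir(root):
        return found
    for name in sorted(os.listdir(root)):
        bdir = os.path.join(root, name)
        if all(os.path.isfile(os.path.join(bdir, s))
               for s in REQUIRED_SCRIPTS):
            found[name] = bdir
    return found
```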

In some ways this approach is like having many small opkgs within one large opkg. If I put time into the framework I believe this package can be used for a very long time.

Friday, March 30, 2007

Packaging

I feel my proposal was ambiguous in terms of packaging.

I think the most flexible solution would be to build binaries of each benchmark, and distribute those binaries in RPM format. This would be the same technique OSCAR uses to distribute core packages. The downside to this is that the benchmarks may not be tuned for the hardware. I don't see this as a major drawback since I can distribute the source, or provide a link to download it, explaining that the benchmark is intended to be compiled on the hardware.

I should be able to package a good variety of binaries. I have access to IA64, x86_64, and i386 architectures. Ideally I would like to package binaries compiled with SSE enabled/disabled and threading enabled/disabled. I am still not sure what the best approach to packaging is. My main concerns are 1) user-friendliness/usefulness and 2) maintainability.

For now I am continuing to read OSCAR documentation.

Tuesday, March 27, 2007

Abstract

As presented to Google Summer of Code:

Currently OSCAR can install a cluster and determine whether the cluster is usable. What OSCAR can not do is give an estimate of how powerful the cluster is. I propose to integrate ATLAS, HPL, and the new DARPA benchmarks into OSCAR as packages called OSCAR Bench. These packages will allow users to easily install the benchmarks, provide a mechanism to tune them, and then run them and report the results. Ideally these results may be submitted to a database so that users may see other results and the configurations that led to those results. Since each cluster is different, simply comparing the results of one to another is insufficient, which is why knowing the configuration of the benchmarking program is imperative. I believe this package will be a major enhancement to OSCAR, giving users an easy way to analyze their systems. As a side effect, this could also help developers identify areas where OSCAR can improve.

OSCAR Bench

This blog will contain status updates, and allow for anonymous feedback.

So what is OSCAR Bench?

OSCAR Bench is a group of packages that allows users to easily install and run some of the common benchmarking programs. Tentatively, the list includes HPL and HPCC (ATLAS would be installed as well).

For information on the benchmarks:

High Performance Computing Challenge:
http://icl.cs.utk.edu/hpcc/

High-Performance Linpack:
http://www.netlib.org/benchmark/hpl/

ATLAS:
http://www.netlib.org/atlas/

Information on OSCAR:
http://oscar.openclustergroup.org/


What I do:
http://hpci.latech.edu/