Friday, April 20, 2007

Packaging

I have been reviewing feedback from the developers, and the consensus is a single package with multiple benchmarks included. To support this I want to create a framework that makes adding and removing benchmarks straightforward. Each benchmark will have a config file (likely XML) that specifies default parameters, GUI options, and other benchmark-specific settings. There will also be a general config file that lists each benchmark, its location, and an MD5 sum for its rpm/tarball, along with global options that pertain to every benchmark: which benchmarks run by default when OSCAR Bench is installed, where the output should go, where the online database is, and so on.
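As a rough sketch of what the global config could look like, here is a hypothetical XML layout. All element names, paths, and the URL are illustrative assumptions, not a finalized schema; the `md5` values are placeholders.

```xml
<!-- Hypothetical global config; element names and values are illustrative only -->
<oscarbench>
  <options>
    <output_dir>/var/log/oscarbench</output_dir>
    <database_url>http://example.org/oscarbench/db</database_url>
  </options>
  <benchmark name="hpl" default="yes">
    <location>/opt/oscarbench/hpl.tar.gz</location>
    <md5>placeholder-md5-sum</md5>
  </benchmark>
  <benchmark name="bonnie" default="no">
    <location>/opt/oscarbench/bonnie.tar.gz</location>
    <md5>placeholder-md5-sum</md5>
  </benchmark>
</oscarbench>
```

Each `benchmark` element would then point at a per-benchmark config file carrying its default parameters and GUI options.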

When the user runs my package, the first thing to happen is that the global config is parsed. This gathers the information needed to let the user select which benchmarks to run, and it also provides the location used to find and verify each benchmark on the system. Then, for each benchmark, the benchmark-specific config is parsed to define that benchmark's configuration. Once all of this data is loaded it is presented to the user, who then has the option to change settings (each benchmark's settings will be on a separate screen/panel, not all clumped together, since that would make no sense). At some point during this stage the 'smart' configuration will take place, 'smart' meaning learning characteristics of the cluster that influence the benchmark, such as N, P, Q, and NB in HPL/HPCC. I would also like to give the user the option to skip my packaged ATLAS and instead run the ATLAS Makefile, since it is very good at configuring itself.
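The "parse the global config, then verify each benchmark" step above could be sketched like this. This is a minimal Python illustration under the assumed XML layout, not the actual implementation; the element and key names are made up.

```python
import hashlib
import xml.etree.ElementTree as ET

def md5sum(path, chunk_size=65536):
    """Compute the MD5 digest of a file, for verifying a benchmark tarball/rpm."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_global_config(xml_text):
    """Parse the (hypothetical) global config and return one dict per benchmark."""
    root = ET.fromstring(xml_text)
    benchmarks = []
    for node in root.findall("benchmark"):
        benchmarks.append({
            "name": node.get("name"),
            "default": node.get("default") == "yes",
            "location": node.findtext("location"),
            "md5": node.findtext("md5"),
        })
    return benchmarks

def verify_benchmark(entry):
    """Check a benchmark's on-disk tarball/rpm against its recorded MD5 sum."""
    return md5sum(entry["location"]) == entry["md5"]
```

After this pass, each surviving entry would have its own benchmark config parsed the same way before anything is shown to the user.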

After the user has reviewed the configurations they will then be taken to a screen where they can:
1) Run all benchmarks
2) Run a specific benchmark
3) Reconfigure a specific benchmark
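Those three options amount to a simple dispatch over the loaded benchmarks. A sketch, assuming each benchmark object exposes hypothetical `run()` and `configure()` methods:

```python
def main_menu(benchmarks, choice, target=None):
    """Dispatch the user's menu selection.

    'benchmarks' maps benchmark names to objects providing run() and
    configure() -- a hypothetical interface, not the real OSCAR Bench API.
    """
    if choice == "run_all":
        return [bench.run() for bench in benchmarks.values()]
    if choice == "run_one":
        return [benchmarks[target].run()]
    if choice == "reconfigure":
        return [benchmarks[target].configure()]
    raise ValueError("unknown choice: %s" % choice)
```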

I still need to put some more thought into the config files, since I want the package to be extensible. Ideally I want to be able to add benchmarks later simply by writing scripts to a) configure the benchmark, b) run the benchmark, and c) parse the results. Then, with those scripts and the tarball/rpm, I can just add an entry to the global config, create a benchmark config, and add it all to the opkg.
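The configure/run/parse split above suggests a plugin-style driver: each benchmark ships a directory containing those three scripts, and the framework just calls them in order. A minimal sketch, assuming that (hypothetical) on-disk layout:

```python
import os
import subprocess

def run_benchmark(bench_dir):
    """Drive one benchmark plugin.

    Assumes bench_dir contains executable 'configure', 'run', and 'parse'
    scripts (a hypothetical layout, not the real opkg structure).
    Returns the stdout of each stage, keyed by stage name.
    """
    results = {}
    for stage in ("configure", "run", "parse"):
        script = os.path.join(bench_dir, stage)
        proc = subprocess.run([script], capture_output=True, text=True)
        proc.check_returncode()  # stop if any stage fails
        results[stage] = proc.stdout
    return results
```

With this shape, adding a benchmark really is just dropping in three scripts plus its tarball/rpm and registering it in the global config.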

In some ways this approach is like having many small opkgs within one large opkg. If I put time into the framework I believe this package can be used for a very long time.

14 comments:

Anonymous said...

Hi James,

I wish you good luck with what you are trying to do. I have just completed my MSc IT, where my dissertation involved using OSCAR, and I was frustrated trying to get accurate and easily accessible benchmarks.
I have passed your project details to our Linux members at dundee@mailman.lug.org.uk

I wish you all the best in your project


