Crowd-Sourcing System Requirements For Free Software

Written by Michael Larabel in Phoronix on 7 March 2011 at 04:42 PM EST.
When purchasing commercial software for Windows and Mac OS X, you are almost always presented with the system requirements for the software and what the vendor recommends for an optimal experience. When dealing with open-source / Linux software, this is rarely the case. It's far less common to see free software projects list their recommended hardware/software configurations, even though for computationally and/or graphically intensive free software, the recommended system requirements are just as important.

In fact, with such software the system requirements are arguably even more important than on Windows due to the vast number of software/hardware combinations that are possible with an open-source operating system (e.g. open-source vs. closed-source GPU drivers and different compilers) and all the ways in which it can be modified and tweaked. Providing system requirements for free software projects is important, but it would place a large, unrealistic burden on the projects themselves, as they are often unfunded and have access to only limited software/hardware setups. So why not out-source the work to the community? It's entirely possible.

Most free software projects are unable to provide such system requirements since they are often unfunded -- particularly the open-source games and other hobby projects that don't have the backing of any corporation -- and maintained by just a handful of developers (or even a sole developer) without a large collection of hardware at their disposal, a formal QA process, etc. Beyond the hardware, there are many different Linux distributions shipping varying versions of compilers, different drivers, different kernels, etc. The status quo is that system requirements for free software are rarely provided, and when they are provided they are extremely vague. So let's make it better; it's now possible to do so efficiently with OpenBenchmarking.org.

With thousands of benchmarks going on and any independent party being free to submit their own test profiles and suites, there's nothing holding back free software projects (or even commercial software projects) from outsourcing this work to the community.

Over the weekend I pushed the OpenBenchmarking.org Performance Classification Index (OPCI) feature to OpenBenchmarking.org. Read that blog post for all of the details, but it comes down to indexing the most commonly tested hardware and classifying the performance of all test results into low, mid, and high-end segments. So you can easily see, across all tests hosted by OpenBenchmarking.org, a list of the rated processors, graphics cards, motherboards, and disks. The Performance Classification Index was then resolved down to the test profile level to be able to answer questions like: what is the best graphics card for this game?

As of my latest code push a few minutes ago, this is now evaluated into low, medium, and high-end hardware at the test and suite levels. In other words, for any test out there, you can now see what hardware would deliver a low, medium, or high-end experience based upon the collection of community data that's available.
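To make the bucketing idea concrete, here is a rough, purely illustrative sketch; this is not the actual OpenBenchmarking.org classification logic, and the results.tsv file with its tab-separated "hardware name, score" format is hypothetical. It averages the submitted scores per device for a single test and splits the sorted list into thirds:

    # Hypothetical input: one line per submitted result, tab-separated:
    # "<hardware name><TAB><score>", where a higher score is better.
    awk -F'\t' '{ sum[$1] += $2; n[$1]++ }
                END { for (h in sum) printf "%s\t%.2f\n", h, sum[h] / n[h] }' results.tsv |
      sort -t "$(printf '\t')" -k2,2n |
      awk -F'\t' '{ line[NR] = $0 }
                  END { for (i = 1; i <= NR; i++) {
                            tier = (i <= NR / 3) ? "low-end" : (i <= 2 * NR / 3) ? "mid-range" : "high-end"
                            print tier "\t" line[i] } }'

The real service obviously does far more (outlier rejection, per-component analysis, and so on), but the output of a sketch like this is the same kind of low / mid / high-end listing described above.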

As an example, at a glance you can easily see a variety of CPUs and how they perform with the Apache web-server. Not just how their performance is classified, but for many of them, what their real-time price is too. You can also see what graphics cards do the best with the OpenArena game, what CPUs yield the best SSL performance, or what GPUs and drivers are even capable of running Unigine Heaven. There are over 120 test profiles already and new ones can be easily created.

All OpenBenchmarking.org safeguards and features apply to this data too, including eliminating outliers, results that are potentially fudged, results with high standard deviations between runs, not showing hardware with limited result data, etc. All a project (or any individual) has to do is create a test profile, which is comprised of XML files and bash scripts, and then have the community run the tests for them. These tests, of course, are fully automated from installation to execution and reporting of the results by running a simple command like phoronix-test-suite benchmark nexuiz.
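For a community member, contributing a result to a project's test profile looks roughly like the following (nexuiz comes from the example above; the list-available-tests and install sub-commands are standard Phoronix Test Suite commands, though the exact set of sub-commands depends upon the version you have installed):

    # See which test profiles can be run on this system.
    phoronix-test-suite list-available-tests

    # Download and install a test profile along with its external dependencies.
    phoronix-test-suite install nexuiz

    # Run the test; at the end you are prompted to save and upload the result,
    # which is what feeds the OpenBenchmarking.org performance classifications.
    phoronix-test-suite benchmark nexuiz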

The useful benefits to the end-user don't stop there: these performance classifications can now also be examined at the suite level. If what matters to you in your next hardware upgrade isn't limited to a single type of workload, just find the suite that most closely matches your particular needs. For example, if you're building a new server or looking at upgrading select components, looking only at the Apache performance classifications may not be enough. But if you care about web-server, disk, database, cryptography, and some encoding performance, you would probably stumble across the server test suite. That page then shows the performance classifications computed across all of the test profiles contained within the suite. As another example, if you are a developer and code compilation is important to you, there is the compiler suite that provides a composite for all of the tests where code compiling is done.
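Running a full suite works the same way as a single test profile. As a sketch, assuming the server suite is published under a pts/server identifier (the exact name on OpenBenchmarking.org may differ):

    # Install and run every test profile contained in the suite and
    # report a composite result covering all of them.
    phoronix-test-suite benchmark pts/server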

When a test profile is made, OpenBenchmarking.org takes care of figuring out which hardware/software differences are important, etc. Right now all of this information is classified in one dimension (e.g. the CPU-bound tests only show the processor classifications, not the memory, disk, or other less vital classifications), but multi-dimensional classifications are coming soon. Since OpenBenchmarking.org can auto-determine which component changes are important by analyzing all of the community data (see this morning's look at exploring the 7-Zip data-set), it can know whether to also show the classifications for the motherboard, the compiler, etc. based upon whether changing them out actually yields a noticeable difference.

While performance is what we talk about most and what most of the test profiles measure, the Phoronix Test Suite already has support for carrying out image quality comparisons (looking for pixel-by-pixel differences while taking thresholds into account) and for measuring battery power consumption, CPU temperatures, CPU usage, and various other metrics. Not only are the test profiles abstracted so that any application / software component can be plugged in, but those metrics too can be plugged in and then leveraged by these performance classifications. All that's needed is the test profile.
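As a sketch of the sensor side, assuming your Phoronix Test Suite build includes the system_monitor module and its MONITOR environment variable (the sensor identifiers below are an assumption and may differ by version):

    # Log CPU temperature and CPU usage alongside the benchmark itself;
    # the recorded sensor data is attached to the generated result file.
    MONITOR=cpu.temp,cpu.usage phoronix-test-suite benchmark apache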

Software projects don't even need to rely upon community members to run the test profiles: if they use test profiles with the Phoronix Test Suite to carry out automated benchmarks for quality assurance purposes, that data too can be plugged into OpenBenchmarking.org for these performance classifications and other features. As alluded to a few weeks ago, Wine / CodeWeavers may do just that, having the Phoronix Test Suite monitor performance on a per-commit or per-day basis, using Phoromatic to manage their test nodes, and utilizing automatic detection of regressions. If this is done and the results are also collected on OpenBenchmarking.org, users wondering how different hardware / software is classified for running programs within Wine can learn that too from the automated test information.

I've also made offers before that any known community projects that are serious about automated performance management and regression monitoring can contact me (or reach out on Twitter), and chances are there's even hardware within the Phoronix test farm (a.k.a. my office) to dedicate to the cause for routine, automated testing. This is just like the daily Linux kernel benchmarks and daily Ubuntu benchmarks (those aren't currently active as the next-generation Phoromatic, which is to be powered by the OpenBenchmarking.org external API, is being worked on at the moment).

This, though, is only the beginning and more pivotal features are imminent. Contact us if interested (see the bottom of this blog post) or to learn more about our plans as we continue to drive crowd-sourced benchmarking on this open and collaborative testing platform.
