Audience Participation! Benchmark Edition


My company recently purchased new servers with dedicated storage, and any time this happens (and I have the luxury of time to do it), I take the opportunity to re-test my old assumptions about which storage configuration(s) work best.  The tests are ongoing and will continue for several more days, but I thought it would be fun to ask you all to guess what you think the results will be.*

First, the storage array details:

It's equipped with 13x 15K RPM spinning drives with 12 Gbit/sec SAS interfaces.  The storage is not shared, meaning the host to which it's connected is the ONLY one that's using it.  One drive is reserved as a hot spare, leaving 12 usable drives.  I'm testing the following configurations for performance (a rough back-of-envelope sketch of each layout follows the list):

  1. One Giant RAID-10 ("1x12R10"): Stripe and mirror all 12 drives in a single 12-drive stripeset
  2. One Giant RAID-6 ("1x12R6"): Stripe all the drives in a single 12-drive stripeset with two parity drives
  3. Two big RAID-10s ("2x6R10"): Two sets of six drives, each mirrored and striped as RAID-10
  4. Two big RAID-6s ("2x6R6"): Two sets of six drives, striped with two parity drives per set
  5. Three small RAID-10s ("3x4R10"): Three sets of four drives, each mirrored and striped as RAID-10
  6. Plain RAID-1 ("6xR1"): Six separate mirrored pairs, with no striping
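Before you place your bets, here's the textbook arithmetic as a quick back-of-envelope sketch in Python.  It compares the layouts on usable data drives and theoretical random IOPS; the per-drive IOPS figure and the classic RAID write penalties are generic assumptions for illustration, not measurements from this array.

```python
# Rough comparison of the six layouts. DRIVE_IOPS and the classic RAID write
# penalties (2 for mirroring, 6 for RAID-6 read-modify-write) are assumptions,
# not numbers measured on this array.

DRIVE_IOPS = 180          # assumed random IOPS for a single 15K RPM SAS drive
WRITE_PENALTY = {"R10": 2, "R6": 6, "R1": 2}

# (label, number of arrays, drives per array, RAID level, data drives per array)
layouts = [
    ("1x12R10", 1, 12, "R10", 6),
    ("1x12R6",  1, 12, "R6", 10),
    ("2x6R10",  2,  6, "R10", 3),
    ("2x6R6",   2,  6, "R6",  4),
    ("3x4R10",  3,  4, "R10", 2),
    ("6xR1",    6,  2, "R1",  1),
]

print(f"{'layout':8} {'data drives':>11} {'read IOPS':>10} {'write IOPS':>11}")
for label, arrays, drives, level, data in layouts:
    spindles = arrays * drives                     # every layout uses all 12 spindles
    read_iops = spindles * DRIVE_IOPS              # all spindles can serve reads
    write_iops = read_iops / WRITE_PENALTY[level]  # front-end writes after the penalty
    print(f"{label:8} {arrays * data:>11} {read_iops:>10.0f} {write_iops:>11.0f}")
```

The interesting question, of course, is how far the real array departs from that arithmetic once seek patterns, stripe widths, and the LUN layout come into play; that's what the benchmarks are for.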

In all cases, I've set up 6 equally-sized LUNs. In the 6xR1 case, each LUN corresponds neatly to a mirrored pair; in the 3x4R10 case, there are two LUNs on each stripeset; for the two-stripeset cases, there are three LUNs per set; and in the one-big-stripe cases, obviously, all six LUNs are on the single set.  In all cases, I have disabled the storage array's controller cache, because I want to emulate times of very high activity and test the disk, not the cache.
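For reference, here's how those six LUNs fall across the underlying arrays in each layout; the array numbers and LUN numbers are just labels for this sketch, not anything the controller reports.

```python
# Illustrative map of how the six equally-sized LUNs land on the underlying
# arrays in each layout. Array indices and LUN numbers are arbitrary labels.
luns_per_layout = {
    "1x12R10": {0: [1, 2, 3, 4, 5, 6]},           # all six LUNs on the one stripeset
    "1x12R6":  {0: [1, 2, 3, 4, 5, 6]},
    "2x6R10":  {0: [1, 2, 3], 1: [4, 5, 6]},      # three LUNs per stripeset
    "2x6R6":   {0: [1, 2, 3], 1: [4, 5, 6]},
    "3x4R10":  {0: [1, 2], 1: [3, 4], 2: [5, 6]}, # two LUNs per stripeset
    "6xR1":    {i: [i + 1] for i in range(6)},    # one LUN per mirrored pair
}

for layout, arrays in luns_per_layout.items():
    print(layout, "->", "; ".join(f"array {a}: LUNs {l}" for a, l in arrays.items()))
```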

I'm using two different tests as benchmarks: The first (call it the "OLTP test") is the AGS TPM-C test, which you're familiar with if you've ever participated in the Advanced DataTools "Fastest DBA" contest.  For this test, all of the large tables are fragmented by hash across six dbspaces, one on each LUN.  There are six index dbspaces, also one per LUN, across which the indexes have been distributed in round-robin fashion. I did not fragment/partition any of the indexes.  In this test, I have the buffer pools tuned very small because, to repeat, I'm trying to test the disk, not the rest of the system.
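The distinction between those two placement schemes matters for reading the results, so here's a toy sketch of it: rows hashed across six data dbspaces versus whole indexes dealt out round-robin across six index dbspaces. The dbspace names and the hash function are made up for illustration; this is not the AGS benchmark's actual setup script.

```python
# Toy illustration of the two placement schemes: table rows hashed across six
# data dbspaces, and whole (unfragmented) indexes placed round-robin across six
# index dbspaces. Dbspace names and the hash are assumptions for this sketch.
from zlib import crc32

data_dbspaces  = [f"datadbs{i}" for i in range(1, 7)]   # one per LUN
index_dbspaces = [f"idxdbs{i}"  for i in range(1, 7)]   # one per LUN

def row_dbspace(frag_key: str) -> str:
    """Hash the fragmentation key to pick the dbspace that holds the row."""
    return data_dbspaces[crc32(frag_key.encode()) % len(data_dbspaces)]

def place_indexes(index_names):
    """Deal each whole index into the next index dbspace, round-robin."""
    return {name: index_dbspaces[i % len(index_dbspaces)]
            for i, name in enumerate(index_names)}

print(row_dbspace("customer:42"))
print(place_indexes(["ix_orders_cust", "ix_orders_date", "ix_items_order"]))
```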

The second test (call it the "batch test") is a very large purge job that sequentially scans a table with two indexes and deletes about 2 million rows from that table.  The table itself is fragmented using a hash across three dbspaces, each on a separate LUN; there's a BYTE column that's stored in a multi-chunk blobspace; the chunks for the blobspace are distributed across all six LUNs, but no single chunk spans more than one LUN.  And both of the indexes are fragmented six ways, with one fragment on each LUN.  Three of these batch jobs are run simultaneously, one for each table fragment.
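The purge itself is an existing batch program, so the sketch below is just the shape of it, not the real code: one worker per table fragment, each deleting in batches until its fragment is clean. The batch size, the row counts, and the delete_batch() helper are all made up for illustration.

```python
# Shape of the batch test: three purge workers run simultaneously, one per
# table fragment, each deleting in batches. delete_batch() below is a stand-in
# that simulates the real DELETE work; row counts and batch size are assumed.
from concurrent.futures import ThreadPoolExecutor

FRAGMENTS = [0, 1, 2]     # one worker per table fragment
BATCH_SIZE = 5_000        # rows removed per round, so the logs can keep up

# Simulate roughly 2 million purgeable rows spread across the three fragments.
remaining = {f: 666_000 for f in FRAGMENTS}

def delete_batch(fragment: int, limit: int) -> int:
    """Stand-in for the real DELETE: remove up to `limit` rows from one fragment."""
    n = min(limit, remaining[fragment])
    remaining[fragment] -= n
    return n

def purge_fragment(fragment: int) -> int:
    total = 0
    while True:
        n = delete_batch(fragment, BATCH_SIZE)
        total += n
        if n < BATCH_SIZE:          # nothing left to purge in this fragment
            return total

with ThreadPoolExecutor(max_workers=len(FRAGMENTS)) as pool:
    print("rows purged:", sum(pool.map(purge_fragment, FRAGMENTS)))
```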

In both cases, there's a rootdbs, a dbspace for the physical log, and two logical log dbspaces.  All four of these dbspaces are on separate LUNs from one another, and the logical logs alternate between the two log dbspaces (so that log #2 is not in the same dbspace as log #1).
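The alternation is nothing fancier than odd-numbered logs in one dbspace and even-numbered logs in the other; the dbspace names here are made up.

```python
# Illustrative log placement: logical logs alternate between the two log
# dbspaces, so consecutive logs never share a dbspace. Names are assumed.
log_dbspaces = ["llogdbs1", "llogdbs2"]
placement = {f"log{n}": log_dbspaces[(n - 1) % 2] for n in range(1, 9)}
print(placement)   # log1 -> llogdbs1, log2 -> llogdbs2, log3 -> llogdbs1, ...
```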

So place your bets! Which storage configuration will work best for the OLTP test? Which will work best for the batch test?  I'm curious to hear what you think.  I'll post the complete results when I have them.  Feel free to ask questions if you have any.

* - This will also be a good test to see if anybody's actually bothering to read these posts. :)