As with many facets of life, database benchmarking has several myths or “urban legends” that need to be summarily dispelled.  So I’m going to write a few short blogs focusing, one by one, on some of these misunderstood database benchmarking issues. Note that I am not preaching that database benchmarking is a worthwhile task, because there are many who feel it’s not. In fact, I recently read an excellent Forrester paper by Noel Yuhanna on the decreased value of standardized database benchmarks. But if you do decide to experiment with or perform some standardized database benchmarks, you will want to avoid some common misconceptions and traps.

Let’s start by first examining the concepts and measurements of a standardized database benchmark. In its simplest form, a standardized database benchmark simply requires that a well-defined workload is submitted to a database server and that measurable results are observed. These results are then fed into some complex mathematical formulas which yield comparable numbers (i.e. numbers that can be used for “apples to apples” comparisons).
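To make that concrete, here is a rough Python sketch of what such a harness boils down to: submit a fixed, well-defined workload for a fixed interval and record measurable results. It is illustrative only, not any official TPC driver kit; the connect() factory and the workload callables are placeholders you would supply, and real kits layer transaction mixes, keying/think times and audit rules on top of this.

    import time

    def run_workload(connect, workload, duration_seconds=60):
        """Submit the workload repeatedly for a fixed interval and record results."""
        conn = connect()
        results = []  # each entry: (transaction_name, start_time, elapsed_seconds, succeeded)
        deadline = time.monotonic() + duration_seconds
        while time.monotonic() < deadline:
            for name, txn in workload:
                start = time.monotonic()
                try:
                    txn(conn)                  # execute one business transaction
                    ok = True
                except Exception:
                    conn.rollback()            # assumes a DB-API style connection
                    ok = False
                results.append((name, start, time.monotonic() - start, ok))
        conn.close()
        return results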

So in the case of the TPC-C (www.tpc.org/tpcc), an older on-line transaction processing (OLTP) benchmark, the metric used to report Maximum Qualified Throughput (MQTh) is the tpm-C: the number of “new orders” processed per minute. There is other concurrent database activity, and an order is more than just a single database operation, so tpm-C represents a true “business throughput” rather than just a discrete transaction execution rate. Thus all TPC-C test results should be reported in tpm-C. Yet most people focus on TPS instead, and many calculate TPS over the entire database workload rather than over successfully completed “new orders”, so they are wrong twice over (i.e. wrong metric, and transaction throughput calculated the wrong way). Even so, high TPS rates remain what most people chase as the true measure of success.
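Working from the result tuples collected by the sketch above, the difference between the two calculations looks roughly like this. The “new_order” transaction name is a placeholder, and the second function only approximates the spirit of tpm-C; a reportable number also requires the full audited TPC-C transaction mix and rules.

    def naive_tps(results, duration_seconds):
        """The common shortcut: every operation in the whole workload, per second."""
        return len(results) / duration_seconds

    def tpm_c_style(results, duration_seconds):
        """Closer to the TPC-C spirit: only successfully completed "new order"
        transactions, expressed per minute."""
        new_orders = sum(1 for name, _, _, ok in results
                         if name == "new_order" and ok)
        return new_orders / (duration_seconds / 60.0)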

I liken measuring standardized database benchmarks to something we all know fairly well: driving an automobile. We measure driving success and failure along several driving “business” metrics:

  • Did we arrive okay?
  • How long did it take?
  • How much fuel did we use?
  • Did we keep up with traffic?

In short, we generally care about the user’s perception of the experience rather than the internal workings of the automobile engine, so things such as RPM and MPH are less important. That may seem odd, that engine work effort and velocity are not critical, but they are just internal mechanics or side effects of what we really care about (i.e. results).

TPS rates are much like RPM or MPH, depending upon your viewpoint. Regardless, they are not the true measure. Express your results either in tpm-C or in something more useful from the real business perspective, such as average response time, which SLAs often require.
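For example, again assuming the result tuples from the earlier sketch, a response-time summary per transaction type could be computed along these lines (a simple illustration, not SLA reporting tooling):

    def response_time_summary(results):
        """Average and 95th-percentile response time (seconds) per transaction type."""
        latencies = {}
        for name, _, elapsed, ok in results:
            if ok:
                latencies.setdefault(name, []).append(elapsed)
        summary = {}
        for name, values in latencies.items():
            values.sort()
            p95 = values[int(0.95 * (len(values) - 1))]
            summary[name] = {"avg": sum(values) / len(values), "p95": p95}
        return summary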


About bscalzo2

Bert Scalzo is an Oracle ACE, blogger, author, speaker and database technology consultant. His work experience includes stints as product manager for DBArtisan and Rapid SQL at IDERA and chief architect for the popular Toad family of products at Quest Software. He has three decades of Oracle® database experience and previously worked for both Oracle Education and Oracle Consulting. Bert holds several Oracle Masters certifications and his academic credentials include a BS, MS and Ph.D. in computer science, as well as an MBA. He has presented at numerous Oracle conferences and user groups, including OOW, ODTUG, IOUG, OAUG, RMOUG and many others. Bert’s areas of interest include data modeling, database benchmarking, database tuning and optimization, “star schema” data warehouses, Linux® and VMware®. He has written for Oracle Technology Network (OTN), Oracle Magazine, Oracle Informant, PC Week (eWeek), Dell Power Solutions Magazine, The LINUX Journal, LINUX.com, Oracle FAQ and Toad World. Bert has also written the following books:

  • Oracle DBA Guide to Data Warehousing and Star Schemas
  • TOAD Handbook (1st Edition)
  • TOAD Handbook (2nd Edition)
  • TOAD Pocket Reference (2nd Edition)
  • Database Benchmarking: Practical Methods for Oracle & SQL Server
  • Advanced Oracle Utilities: The Definitive Reference
  • Oracle on VMware: Expert Tips for Database Virtualization
  • Introduction to Oracle: Basic Skills for Any Oracle User
  • Introduction to SQL Server: Basic Skills for Any SQL Server User
  • Toad Unleashed
  • Leveraging Oracle Database 12cR2 Testing Tools
  • Database Benchmarking and Stress Testing (coming 2018)
