Servers everywhere are being virtualized, yet many DBAs remain hesitant to embrace virtualized database servers. Mystified by this anomaly, I’ve asked those opposed for their rationale. While there are numerous arguments against it, two pervasive themes surface from among all the replies.
First and foremost, DBAs almost universally claim that their databases are “mission critical” and cannot suffer any performance hit that virtualization would necessarily impose. I suspect these people either consider shared resources inherently negative, or have read that virtualization overhead can run from 5% to 15% – a loss they feel they cannot afford.
However, those very same DBAs allowed the single most important database performance factor – disk I/O – to become shared well over a decade ago. We all quickly embraced Storage Area Network (SAN) disk arrays in order to get large pools of storage. Yet very few of those SANs were dedicated to a single database, or even to a single DBA’s multiple databases. SANs were generally shared resources, often without the DBA being fully aware of who else was sharing their spindles. We simply asked for “black box” amounts of space that were assigned for our use as LUNs.
Today we’re simply permitting the three remaining key components (CPU, memory, and networking) to be shared like our storage. If we so brazenly accepted sharing for disk I/O back then, how can we now say that far less important database performance factors cannot be shared? I believe it’s just resistance to change.
As for the virtualization overhead, it’s a non-factor. If we were simply going to virtualize the database server and place it back on the same physical hardware, then sure – there would be a slight performance reduction. But most virtualization efforts replace smaller servers with much larger shared ones, and since DBAs generally order excess capacity for growth, most servers sit idle more than 50% of the time anyway. So losing my four-CPU, 16GB RAM physical server and being allocated the same or more resources from a much larger shared server should be a non-issue. As long as resources on the physical virtualization servers (i.e. hosts) are not over-allocated, the negative performance impact should range from minimal to non-existent. Thus if four quad-CPU, 16GB database servers were re-hosted on a virtualization host with 32 CPUs and 128GB of memory, performance could actually be better (or at worst about the same).
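The over-allocation check described above amounts to simple arithmetic: sum the resources promised to the guests and compare against host capacity. Here is a minimal sketch in Python, using the hypothetical figures from the example (four quad-CPU, 16GB guests on a 32-CPU, 128GB host); the function name and numbers are illustrative, not from any particular hypervisor’s tooling.

```python
# Sketch: does consolidating several physical database servers onto one
# virtualization host over-allocate CPU or memory?

def is_over_allocated(host_cpus, host_mem_gb, guests):
    """Return True if summed guest allocations exceed host capacity.

    guests is a list of (cpus, mem_gb) tuples, one per virtual machine.
    """
    total_cpus = sum(cpus for cpus, _ in guests)
    total_mem = sum(mem for _, mem in guests)
    return total_cpus > host_cpus or total_mem > host_mem_gb

# Four quad-CPU, 16GB database servers on a 32-CPU, 128GB host:
guests = [(4, 16)] * 4
print(is_over_allocated(32, 128, guests))  # False – they fit comfortably
```

With only 16 of 32 CPUs and 64 of 128GB committed, this host has headroom; the same four guests on an 8-CPU, 32GB host would be over-allocated, which is where the performance complaints become legitimate.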
The second pervasive but veiled theme is “loss of control”. You’re not going to like this observation, nor be happy with me for making it. But in the good old days the DBA was a god. We often had unfettered access to our hardware platform; it was not uncommon to have “root” access. We performed numerous complex design and management tasks: hardware platform research and ordering, operating system configuration and tuning, storage design and allocation, capacity monitoring and projections, and so on. The DBA’s knowledge and responsibilities were Herculean – and we loved it that way.
But in a virtualized world, the DBA now simply treats everything as a “black box” that someone else both provides and manages. We can no longer venture into the server room and knowingly point to our static resources, such as disks. Nor can we really know exactly where our virtual machine is hosted, because it can move – sometimes dynamically. Plus we have to ask someone else for things we used to do for ourselves. It’s a bit unnerving for those who remember the good old days.
Yes – there are some very valid performance issues that must be addressed when you virtualize your database, and they cannot be left to defaults or chance. But most objections seem to be made in the abstract. You’re not going to stop virtualization, so you might as well learn to embrace it – and maybe even like it.