Databases help businesses access information more quickly. This, in turn, allows employees to get more done in less time, which helps businesses grow faster. When businesses grow faster, there are generally more jobs, higher pay and a healthier economy. This all sounds wonderful, doesn’t it? So why don’t we see databases helping more companies and organizations move their business forward faster?
Productivity is one major factor, but there are many others. Databases are genuinely difficult to understand on a holistic level: from programming to maintenance, and from deployment to day-to-day management, they can be hard to keep on top of from a business perspective. In this post, we’ll take a closer look at the challenges faced by virtualization admins who are, in one way or another, connected to Microsoft SQL Server (MSSQL), along with some of the advantages of the software.
The main reason Microsoft SQL Server is a favorite of developers and virtualization admins alike is its ease of use. Development and troubleshooting are typically the toughest parts of getting a SQL project into production, and MSSQL ships with excellent tools that save a lot of time in both areas – SQL Server Profiler, SQL Server Management Studio, the BI toolset and the Database Engine Tuning Advisor.
Setting up almost everything, from installing on a VM to initial query writing and editing, is remarkably easy with MSSQL – especially in comparison to other SQL products. If problems arise at any stage of development, there is a wealth of online support and documentation, plus live product support, whereas the support options for other SQL products are not nearly as robust.
MSSQL Memory Management Challenges
Although code can always be written a little (or a lot) more efficiently, the real struggles with MSSQL begin when managing it in a virtual environment. MSSQL tends to “hog” its allocated memory: by design, SQL Server grows its buffer pool up to the configured “max server memory” limit and does not readily release that memory back to the operating system, so from the VM’s perspective it appears to use close to 100% of its memory. A common practice is to simply ignore this monitored utilization and take the DBA’s recommendation for the VM’s memory configuration. This can lead to inefficiency or outright performance problems, and it gives the infrastructure team no application-tier insight. Allocate an arbitrary amount of resource capacity to the VM running the application, and any unused resources are wasted; size it too small, and application performance suffers. Without proper visibility into the application’s actual resource utilization, virtualization admins often become the scapegoat when something goes wrong.
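As a rough sketch of how this gap can be inspected from inside the database itself (this assumes the VIEW SERVER STATE and sysadmin permissions, and the 12288 MB cap is purely an illustrative value, not a recommendation):

```sql
-- Compare SQL Server's actual footprint with what the hypervisor sees.
-- Requires VIEW SERVER STATE permission.
SELECT physical_memory_in_use_kb / 1024 AS memory_in_use_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;

-- Cap the instance so it stops consuming the whole VM.
-- 12288 MB is illustrative only; size this to the actual workload.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```

Comparing `memory_in_use_mb` against the VM’s configured memory is one way for the DBA and the virtualization admin to agree on how much of that “100% utilization” is real demand versus buffer-pool caching.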
The Finger-Pointing Scenario
So, what if performance does become an issue? Who’s at fault, and how do you resolve it? Is it the code, the resource supply on the virtual machine, or something internal to the application? Without proper communication and awareness among all parties involved, it’s often difficult to pinpoint the bottleneck. This is when the finger-pointing begins. If the application’s middleware components were aware of the sizing of the underlying VM infrastructure, and vice versa, resource-allocation bottlenecks would be far less likely.
How is the VM supposed to know how much memory to give the application if the database memory isn’t sized correctly and response time or QoS is already unacceptable? In that situation, rather than spending time troubleshooting, the application’s performance would benefit most from a properly sized supply of computing resources to the consuming database server. Conversely, if the DBA had evidence that the underlying infrastructure was free of resource contention bottlenecks, they would be able to troubleshoot performance issues much more quickly. Eliminating bottlenecks on the infrastructure reduces the number of places a finger can point in a degraded-performance situation and lets teams refocus their time on innovative projects for the business.
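One way a DBA can gather that kind of evidence is SQL Server’s cumulative wait statistics. A sketch of such a query (the excluded wait types below are a small, illustrative subset of the benign system waits usually filtered out):

```sql
-- Top waits since the last service restart. Heavy RESOURCE_SEMAPHORE
-- waits suggest memory pressure, PAGEIOLATCH_* waits suggest storage
-- latency - signals that point at infrastructure rather than code.
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_s,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'XE_TIMER_EVENT', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

If the top waits are dominated by lock or compilation waits rather than memory or I/O waits, the DBA has a concrete basis for steering the conversation away from VM sizing and toward the application tier.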