What does the new Basefarm Service Desk mean?

In January, we introduced new procedures for how we handle cases at Basefarm Service Desk. Since you as our customers are probably wondering what this will mean for you, here is a more detailed explanation:

The new procedures mean that Basefarm Service Desk in Sweden now takes care of all cases (incident management & change management) for Swedish customers during the daytime, from 7 AM to 5 PM. At other times (nights and weekends) we have passive readiness, and Basefarm Service Desk in Sweden works in close collaboration with Basefarm Service Desk in Norway to resolve cases. Previously, Sweden was staffed from 8 AM to 11 PM on weekdays.

Overall, this means that we are always available 24/7 in both Sweden and Norway. This results in several benefits for you as a customer:

  • Accessibility increases at a local level, with 24/7 support now also in Sweden
  • We are physically closer to you as a customer
  • Our new model contributes to shorter resolution times
  • We increase our ability to work proactively

This means that Basefarm Service Desk in Sweden has a wider overall perspective and can be more proactive and resolve cases faster than before. Norway can now focus solely on cases from Norwegian customers and thus also increase availability locally.

It’s mainly our Swedish customers who will notice the changes. Basefarm Service Desk in Norway already solves cases at a local level during the daytime.

Do you have any questions regarding the changes? Post a comment below or contact your local Basefarm Service Desk.

Tracing select statements on specific objects in SQL Server without using Profiler

A developer of an application asked me an interesting question the other day. He wanted to know (for reasons not worth going into here) whether his application issued SELECT statements against a specific table in his database. The database was in production and under heavy load, so although we could have run a server-side SQL Profiler trace and then read through the results, that would have been a time-consuming process which could generate an extremely large amount of data (and a fairly heavy load on the server). We also wanted to run this monitoring for a number of days if possible, so we needed something more lightweight.

I thought about this for a while and realised that the best way to achieve it (assuming you are running SQL Server 2008 or later) would be through the new SQL Audit functionality. This uses the Extended Events framework as the basis for its tracing and therefore falls into the lightweight category.

Here’s an example of what I wrote, converted into simple test objects which you can try yourself. This example requires a table called dbo.audit_test to exist in a database named audit_db for you to test against.


USE master ;
GO
-- Create the server audit.
CREATE SERVER AUDIT test_server_audit
TO FILE ( FILEPATH =
'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA' ) ;
GO

-- Enable the server audit.
ALTER SERVER AUDIT test_server_audit
WITH (STATE = ON) ;
GO

-- Move to the target database.
USE audit_db ;
GO

-- Create the database audit specification.
CREATE DATABASE AUDIT SPECIFICATION audit_test_table
FOR SERVER AUDIT test_server_audit
ADD (SELECT, INSERT, UPDATE
ON dbo.audit_test BY dbo, guest, public, db_datareader)
WITH (STATE = ON) ;
GO

/*
do some work here
which will trigger the audit to record something
*/

/* uncomment these statements to turn off the audit at either DB or server level

-- turn off the database audit
USE audit_db
GO
ALTER DATABASE AUDIT SPECIFICATION audit_test_table
WITH (STATE = OFF)

USE master
GO
-- turn off the server audit
ALTER SERVER AUDIT test_server_audit
WITH (STATE = OFF) ;
GO

*/

Here are the key things to note about the example above:

1. It traces three types of table access: SELECT, INSERT and UPDATE
2. It traces specific users and roles – change these as relevant to your environment
3. It writes the output to the default DATA directory of a default SQL Server 2008 R2 install – change this as you see fit
4. Watch the file space this can take up, as the output can be very verbose on big systems
5. Watching the file space used in real time will not work, as most of the data is held in memory and flushed when you stop the trace
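If you want to keep an eye on the audit while it is running, a sketch like the following (assuming SQL Server 2008 R2 or later) checks its state and current output file through the catalog view and DMV rather than the file system:

```sql
-- Check whether the audit is enabled and where it is currently writing.
-- Note the caveat above: audit_file_size reflects what has been flushed
-- to disk, not data still buffered in memory.
SELECT s.name,
       s.is_state_enabled,
       d.status_desc,
       d.audit_file_path,
       d.audit_file_size
FROM sys.server_audits AS s
JOIN sys.dm_server_audit_status AS d
    ON s.audit_id = d.audit_id
WHERE s.name = 'test_server_audit' ;
```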

Once you have the output you need (and you have turned off the audit – don’t forget!) you simply run something like this to view the data (you’ll need to locate the exact file name, as a new one is created each time you turn the audit on or off).


SELECT COUNT(*), statement
FROM sys.fn_get_audit_file (
'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\test_server_audit_7E707DDD-03F3-4FFA-B24B-BB0DDBF4D5F3_0_129714455341990000.sqlaudit',
default, default)
GROUP BY statement ;
GO
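Rather than hunting for the exact file name each time, sys.fn_get_audit_file also accepts a wildcard in the file pattern, so a sketch like this should read every file the audit has produced in that directory (the GUID and timestamp parts of the name vary per file):

```sql
-- Read all audit files for this audit using a wildcard pattern,
-- avoiding the need to look up the generated file name.
SELECT COUNT(*), statement
FROM sys.fn_get_audit_file (
'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\test_server_audit_*.sqlaudit',
default, default)
GROUP BY statement ;
GO
```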

As you can see, the above does a simple count and aggregate of the results, but there are many columns in the output which you can write T-SQL against (although, since it’s backed by a file, access might be slow if you have large files or slow disks!).
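For instance, a sketch like this breaks the captured statements down by who ran them and when, using some of the other columns that sys.fn_get_audit_file returns (the file path here is illustrative – substitute your own generated file name):

```sql
-- Break the audited statements down by principal, object and time.
SELECT event_time,
       server_principal_name,
       database_name,
       object_name,
       statement
FROM sys.fn_get_audit_file (
'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\test_server_audit_*.sqlaudit',
default, default)
ORDER BY event_time DESC ;
GO
```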

I found this to be a most effective technique, and it had no noticeable impact on the performance of the server.

You can read more about SQL Server Audit at MSDN.