Presenting at SQL Saturday Paris #sqlsatparis

I got word yesterday that I will be presenting at SQL Saturday #420 in Paris. This will be my first international SQL Server event. I’ll be presenting Transaction Log Internals: Virtual Log Files. As many times as I’ve given this presentation, it never gets old. Although I will say that it’s a little less exciting in SQL 2014 than it was in earlier versions. I’ll also be representing PASS at this event.

Paris is an incredible city, and it’s been a long time since I’ve really spent any time there. So I’m going to spend a few days to give myself an opportunity to fall in love with the City of Light again.

Presenting at the PASS DBA Virtual Chapter

On July 8, I will be presenting for the PASS DBA Virtual Chapter. I haven’t presented for this group in a couple of years, so I’m pretty excited about this. I’ll be presenting Do More With Less: SQL CMS and MSX.

The hardest part of presenting for this group is that it’s done via webcast. I have trouble juggling the demo, the presentation, and questions all at the same time. This time around, I’ve asked my new friend Andy Mallon (blog | twitter) to be my moderator. I can only imagine how much fun this is going to be.

Human Factors of a DR Test

I’m going to go out on a limb and assume that everyone does a regular disaster recovery test. You DO have a disaster recovery plan, don’t you? What happens if a comet hits your data center? Or there’s a terrorist attack? Or the power grid goes offline and you run out of fuel for the generators? How do you recover from that? If you don’t have a plan, you have some catching up to do.

One of the phrases that dovetails nicely into disaster recovery is business continuity. As technology people, we tend to think about how we get the systems back online in case of a failure. But what about the business itself?

The company I work for is owned by a bank, so we don’t talk about disaster recovery and business continuity as individual constructs. We talk about disaster recovery and business continuity as a single entity. And our parent company takes it very seriously. A couple of years ago, we had hundreds of employees in New York and New Jersey who were impacted by Hurricane Sandy. Basically, all of our people based in and around New York City were out of commission. Our clients never knew it. Do you know why? It’s because we had business continuity plans. The two primary data centers weren’t impacted because they’re in the Midwest. But our people were. By leveraging our people in western Pennsylvania as well as a multitude of offshore resources, the only thing clients saw was that different team members were responding to their service requests at different times of the day. Tedious planning really paid off.

Every year, we do our regular “India out” exercise. This is in addition to regular technology DR tests. We simulate a situation where all of our offshore teams become unavailable, and our US-based teams fill in the gaps. It demonstrates to our auditors, our clients, and ourselves that we’re ready if a crisis hits. About a year ago, we had to implement those contingencies during a period of civil unrest that threatened to shut down our offices just outside of Mumbai. These “India out” exercises are what I refer to as a “scheduled bad day.” They really suck. Parts of our US teams work hours when we should be sleeping. It always confuses clients when they see me answering an email at 4 AM. When I explain that we’re doing a business continuity test, they always appreciate it.

What gets me is when we do a disaster recovery and business continuity test at the same time. Occasionally during a DR test, they’ll throw us for a loop by saying we can’t use offshore resources. Or they’ll say we can only use offshore resources. Or that everyone in a given facility should be assumed offline. The worst is when they tell us that certain tools, such as our ITSM, email, or IM tools, are unavailable during the test.

We train for crazy things. They’re inconvenient. There are always lessons learned. We hope they are things we never have to do in an actual disaster. But then again, nobody thought Hurricane Sandy would happen, either.

Look for me at #sqlsatpuertorico

Next weekend is SQL Saturday #373 in San Juan, Puerto Rico. There is something about having a training event on a lovely tropical island that appeals to me. This will be my second time speaking at their event. These guys run a great event.

I will be presenting Recovery and Backup for Beginners. That’s one of my favorite topics. So many presenters want to do 400-level topics, but I like to do the intro stuff as well. Sometimes people forget that we have a lot of people who are just getting their feet wet with SQL Server.

I’m also presenting Transaction Log Internals: Virtual Log Files. This will be my first time presenting it after learning about the new algorithm in SQL 2014 for VLF creation. I need to figure out if I want to demo that in both SQL 2012 and SQL 2014, or if I want to just mention that this has changed in SQL 2014.
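
For what it’s worth, a side-by-side demo could be as simple as growing a small log and counting VLFs before and after the growth. This is just a sketch; the database name and file paths are placeholders, not my actual demo scripts.

CREATE DATABASE VLFDemo
ON PRIMARY (NAME = VLFDemo_data, FILENAME = 'C:\SQLData\VLFDemo.mdf', SIZE = 64MB)
LOG ON (NAME = VLFDemo_log, FILENAME = 'C:\SQLData\VLFDemo_log.ldf', SIZE = 64MB);
GO
DBCC LOGINFO('VLFDemo');   -- each row returned is one VLF
GO
ALTER DATABASE VLFDemo MODIFY FILE (NAME = VLFDemo_log, SIZE = 128MB);   -- grow the log by 64MB
GO
DBCC LOGINFO('VLFDemo');   -- count the rows again; run on 2012 and 2014 and compare how many VLFs the growth added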

Handy Query: Is LPIM Enabled?

My counterpart in our R&D department is a strong believer that enabling Lock Pages in Memory is required for our product. I don’t always agree, especially for our small virtualized instances. However, when we’re trying to troubleshoot a problem, he always asks me if LPIM is enabled.

For ages, we had to look at the top of the SQL Server error log to see this. The problem is that we cycle our error logs weekly to keep them a manageable size. That means that after a few weeks, this data simply isn’t available. So one day, I took to the Twitterverse to find out whether there was another way to know. Glenn Berry from SQLskills rescued me.

SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;

The solution is brilliant in its simplicity. If the value is greater than zero, the service is locking pages in memory. If it’s zero, the service either isn’t locking pages in memory or can’t.
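
If you want the query to answer the question directly, a small wrapper does the trick. This is just a sketch of my own, not something from Glenn; the output strings are made up.

SELECT CASE
           WHEN locked_page_allocations_kb > 0 THEN 'Locking pages in memory'
           ELSE 'Not locking pages in memory (or unable to)'
       END AS lpim_status
FROM sys.dm_os_process_memory;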

Handy Query: When Did My Instance Start?

My company has a rule. We must reboot each of our Windows servers at least quarterly in order to ensure they have the latest patches. Finding out how long a Windows server has been up has long driven us nuts. However, SQL Server does know how long the instance has been up.

What is one of the things that SQL Server does when it starts? It recreates tempdb! That means if we can determine when tempdb was created, we know when SQL Server started.

SELECT crdate AS [INSTANCE START TIME]
FROM master.dbo.sysdatabases
WHERE name = 'tempdb';


This works in my environment because we have business rules in place that say when we stop a SQL instance, we reboot the server. Additionally, all of my instances are standalone (non-clustered) SQL instances.
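
If you’re on SQL Server 2008 or later, there are a couple of equivalents that avoid the deprecated sysdatabases view. Consider these a sketch of the alternatives rather than a replacement for the query above.

-- Start time exposed directly by a DMV
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

-- The same tempdb trick against the current catalog view
SELECT create_date AS [INSTANCE START TIME]
FROM sys.databases
WHERE name = 'tempdb';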
