When I interviewed for my current position, the hiring manager asked me if I knew anything about VLFs. My response was “Very large filesystem?” Clearly I missed that question. She was talking about SQL Server’s Virtual Log Files.
Until that day, I had never heard of VLFs, and to this day, a lot of DBAs don’t know what they are. But before I get too far into this, let me state that I’m hardly the expert on this. Kimberly Tripp from SQL Skills has written several really good articles on transaction log throughput where she talks about VLFs.
VLFs are a construct within the SQL Server transaction log file that breaks the transaction log file into chunks. Those are the segments that get marked as in use or as ready to be reused.
The bottom line is that if you have lots of really small VLFs in your log file, you probably have a performance problem and you may not even know it. The best way to avoid this is to avoid the default settings on a new database for the transaction log file size and growth. If you start with a 1 MB transaction log file that grows by 10%, you’re going to have lots of little VLFs before you know what hit you.
My suggestion for a good practice is to start with a larger transaction log file and grow it in larger increments. For our datamart environments, we start with an 8 GB transaction log file and grow it in 8 GB increments. For smaller databases with lower throughput, we set a minimum of a 256 MB transaction log file with a growth increment of 256 MB.
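As a sketch of those settings at database creation time (the database name, file names, and paths here are made up for illustration), you might create the database with an explicit log size and a fixed growth increment rather than the defaults:

```sql
-- Illustrative sketch only: names and paths are hypothetical.
-- Start the log at 8 GB and grow it in fixed 8 GB increments
-- instead of the default small size with 10% growth.
CREATE DATABASE DataMartDB
ON PRIMARY
    (NAME = DataMartDB_data, FILENAME = 'D:\Data\DataMartDB.mdf',
     SIZE = 10GB, FILEGROWTH = 1GB)
LOG ON
    (NAME = DataMartDB_log, FILENAME = 'L:\Log\DataMartDB.ldf',
     SIZE = 8GB, FILEGROWTH = 8GB)
GO
```

The same FILEGROWTH setting can be applied to an existing database with ALTER DATABASE ... MODIFY FILE.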
In this session, I talk about how this can be a Goldilocks problem. You might have too many VLFs or too few. Or they might be too big or too small. The real way to know is to baseline and test for your environment. The one thing I would suggest is that you don't grow your transaction log file by more than 8 GB at a time. A growth of that size creates 16 VLFs, so anything larger would mean VLFs greater than 512 MB each.
The real value in this session is the demo. The first script creates a database and starts stuffing it with data. It’s absolutely not a model of good design. In fact, it’s far from it. I have multiple columns that contain GUIDs. Worse yet, I have those columns indexed. The reason I’m doing this is because I want to create a ton of transaction log volume. Kick this thing off and let it run. It’ll grow your transaction log to about 12 GB.
The second script will show the VLFs in your database. This script executes
DBCC LOGINFO WITH TABLERESULTS
Notice that as the first script is running, the number of VLFs will continue to grow on each execution.
There are some cool details in this output, including the size of each VLF and the LSN from when it was created. Let's do a little analysis on it. What I'm doing is pumping that data into a temp table and then grouping on the LSN. I can then see how much the log file was grown each time and the number of VLFs each growth created. What you see can be quite staggering.
CREATE TABLE #vlfs (
    recoveryunit int, -- remove this column for SQL 2005/2008 environments
    fileid int,
    filesize bigint,
    startoffset bigint,
    fseqno bigint,
    status int,
    parity int,
    createlsn varchar(1000)
)

USE miketest
GO

INSERT INTO #vlfs
EXEC ('DBCC LOGINFO WITH TABLERESULTS')

SELECT createlsn,
    SUM(CAST(filesize AS float)) / 1024 / 1024 AS [size],
    COUNT(startoffset) AS [count]
FROM #vlfs
WHERE createlsn <> '0'
GROUP BY createlsn
ORDER BY 3, 2, 1

DROP TABLE #vlfs
GO

USE master
GO
Look at all of those growths. And they're tiny. That can't be good. You can see that we're growing our transaction log file by less than 1 MB several times. But look at what happens closer to the end of the output. We're growing our transaction log file in much larger increments. And the bigger the growth, the more VLFs.
Here is what I’ve been able to prove. If your growth is less than 512 KB, you create one VLF. From 512 KB to 768 KB, you create two VLFs. From 768 KB to 1 MB, three VLFs. From 1 MB to 64 MB, four VLFs. From 64 MB until you reach 1 GB, it’s eight VLFs. Anything over 1 GB creates 16 VLFs.
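That mapping can be sketched as a T-SQL expression. To be clear, this just encodes my observations above; the exact boundary behavior may vary, and SQL Server 2014 and later use a different VLF-creation algorithm:

```sql
-- Illustrative only: encodes the observed growth-to-VLF mapping above.
-- Boundary inclusivity is an assumption; test in your own environment.
DECLARE @growthKB bigint = 8 * 1024 * 1024  -- an 8 GB growth, in KB

SELECT CASE
        WHEN @growthKB < 512           THEN 1
        WHEN @growthKB < 768           THEN 2
        WHEN @growthKB < 1024          THEN 3
        WHEN @growthKB <= 64 * 1024    THEN 4
        WHEN @growthKB <= 1024 * 1024  THEN 8
        ELSE 16
    END AS [VLFs created by this growth]
```

For an 8 GB growth, that works out to 16 VLFs of 512 MB each.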
If you poke around your environment, you probably have databases that look just like this. The question is: how do I fix it? The answer is simple. It depends. If you’re not seeing a noticeable performance impact, you may want to leave it alone. This is where “if it isn’t broken, don’t fix it” could apply. If you are facing a performance problem from too many VLFs, I’d suggest doing the fix during a quiet period for your database.
First, note how big your transaction log file is. Because I know how my files grow, I’m going to put it back to the size it was before. Then I use DBCC SHRINKFILE to shrink the transaction log file and then grow it back to the original size, making sure I only grow it in 8 GB increments.
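As a sketch of that shrink-and-regrow process (the logical log file name here is an assumption, and under the FULL recovery model you may need a log backup first so the shrink can release space):

```sql
-- Illustrative sketch: assumes the log's logical name is miketest_log
-- and that the log was originally 16 GB.
USE miketest
GO

-- Shrink the log file as small as it will go
DBCC SHRINKFILE (miketest_log, TRUNCATEONLY)
GO

USE master
GO

-- Grow it back to the original size, no more than 8 GB per step
ALTER DATABASE miketest MODIFY FILE (NAME = miketest_log, SIZE = 8GB)
ALTER DATABASE miketest MODIFY FILE (NAME = miketest_log, SIZE = 16GB)
GO
```

Each manual growth of 8 GB creates 16 VLFs of 512 MB, so the regrown log ends up with a small number of reasonably sized VLFs.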
The real lesson here is something that I talk about in other posts: Defy the Defaults! If you’re creating databases with 1 MB transaction files and allowing them to grow in 10% increments, you’re probably doing it wrong.
Our event was pretty successful. And we learned a lot along the way. Here are a few lessons we certainly learned.
One of the challenges with our venue is that we don’t really have anywhere for people to sit down and properly eat lunch. So we wanted to do something active at lunch. This can be tricky. If you do something that people don’t like, you risk having them leave, and they’re gone for the rest of the day. I wanted to do something that kept people engaged.
That’s when we started kicking around the idea of doing an “Ask the Experts” session. It became our ATE Lunch idea.
One of the things I strongly believe is that people often come to any type of training event with one particular problem they want to solve or one particular question they want answered. This would be their opportunity to do so. We took our three big multi-purpose rooms and split them up by topic: one room each for Application Development, Database Administration, and Business Intelligence. We asked our speakers to field questions in their particular areas of expertise.
I poked my head into each of the rooms over the course of the hour, and people were talking. I heard questions being answered, and that was the goal. Was it the best use of our time? Maybe not. But it was an effective use, especially considering that we didn’t have to move any chairs around.
There are some stories that are just too absurd to believe, and they’re often too far-fetched to make up. This is one of them.
We were just kicking off our event. People were coming in the doors, they were visiting sponsors, they were getting coffee, and my stress level was starting to subside just a bit.
That’s when I heard the shouting. Someone was yelling for security. It seems that someone had crashed our party. He was shouting, flashing his badge around, telling us that he was from the Department of the Navy and that he had just compromised our event. I grabbed my cell phone and called the facility manager for Microsoft. The shouting continued. That’s when I confronted him, keeping my hands in my pockets to ensure I didn’t appear physically threatening, which isn’t much of a problem anyway at 5’8″ and about 30 pounds overweight.
What was really impressive is that this guy somehow got into our venue, picked up an attendee bag, lanyard and all, helped himself to coffee and donuts, and mingled with our participants. I verbally confronted him and started leading him to the door. That’s when the facility team took over the situation and escorted him out of the building.
The guy’s “badge” was really an ID card from the Department of Veterans Affairs. He was probably a patient at a VA hospital, and we should honor his service to the nation. And he was probably just happy that he got coffee and a donut.
This is probably one for the SQL Saturday record books.
One of my favorite things about a SQL Saturday event is the speaker room. Sure, there is always some good gossip going on, but you also get to take part in some really good conversations. I’ve heard people talk about doing stuff with SQL Server that I never dreamed would even be possible. And the storytelling that goes on is always fun.
As someone who likes to speak at these events, one of my top priorities was to take care of our speakers. Sure, we had a dinner the night before, and we had a little gift for them in addition to the event shirts. But I wanted to let them know that I appreciated them. Speakers give up a good chunk of their weekend to volunteer at these events. Oftentimes, they do it on their own dime. So I wanted to give them a little treat. I called on the good folks at Sweet Cupcake Bakery in Boston. The cupcakes are both delicious and beautiful. As the day progressed, I spoke to our sponsors and told them that we had cupcakes in the speaker room. I don’t think we threw away any cupcakes.
Don’t be surprised if we do that again next year.
One of the best decisions we made very early on was to enlist the help of Paresh Motiwala to coordinate our volunteers. That man kept me sane.
The backbone of a successful SQL Saturday event is the volunteers. They do everything from picking up the coffee to staffing the registration tables to helping people find the restrooms. And we had an amazing group of volunteers.
Being completely honest here, I’m not sure what Paresh did or how he did this. I just know that he had the list of things that needed to be done, the list of people willing to volunteer, and he made the rest happen. Paresh is a natural leader, and he’s incredibly organized and process driven. Getting him to lead our volunteers is something I hope to do again next year.
There is one thing that I did that I will never regret. I enlisted the help of one of my closest friends, Andrew Mannone. He’s not part of the SQL Server community. That also means he didn’t care about seeing certain speakers, and it meant that he could stay close to the registration desk all day. I introduced him to Paresh, handed him my credit card, some cash and said “Whatever you guys need, just take care of it. Don’t ask. Just do. And get me a receipt.” Those are the friends you know you can count on. And it worked well. Next year, I’d like to get a few more volunteers from outside the community for the registration desk. That will mean our volunteers can sit in on sessions as well.