Recently I ran into a bug related to autogrow of the log file. Because this was the second time I encountered this particular bug, I decided to blog about it.

The symptom of the problem was that one specific database took a very long time to come online after a simple service restart. The database simply stayed in the “in recovery” state for a long time. The reason was a very high number of Virtual Log Files (VLFs) in the database. Let me set up a demo for you, so you can see what this is all about:

  USE master
  GO
  CREATE DATABASE VLFHell
  GO

You can see how many VLFs are in your database by running this:

  DBCC LOGINFO
[Screenshot: DBCC LOGINFO output, 5 rows]

This shows that there are a total of 5 virtual log files within the log file of the new database, and you can see the size of each virtual log file. We will now take a full backup of the database so that the full recovery model takes effect (until the first full backup, the log file behaves as if the database were in simple recovery).
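The full backup can be taken like this (the path is just an example, adjust it to your environment):

```sql
-- First full backup: from here on, the log is retained until it is backed up.
BACKUP DATABASE [VLFHell] TO DISK = 'C:\SQLData\VLFHell.Bak' WITH INIT
GO
```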

The default configuration for SQL Server is that the log file will autogrow by 10%, so let’s see how this behaves if we generate some dummy data:

  USE [VLFHell]
  GO
  CREATE TABLE MakeMyLogFileGoCrazy
  (
      id bigint identity(1,1),
      WasteOfSpace1 uniqueidentifier default newid(),
      WasteOfSpace2 datetime2 default sysdatetime(),
      WasteOfSpace3 char(8000)
  )
  GO
  INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
  VALUES ('Geniiius - i-catching solutions')
  GO
  INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
  SELECT WasteOfSpace3 FROM MakeMyLogFileGoCrazy
  GO 17
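The GO 17 at the end reruns the doubling batch 17 times, so the table ends up with 2^17 = 131,072 rows. Since the char(8000) column makes each row fill an 8KB data page, that is roughly 1GB of data, which you can verify with a quick count:

```sql
-- One row per 8KB page (char(8000) column), doubled 17 times by GO 17.
SELECT COUNT(*) AS row_count,                 -- 2^17 = 131,072
       COUNT(*) * 8 / 1024 AS approx_data_mb  -- ~1,024 MB of data pages
FROM MakeMyLogFileGoCrazy
```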

Now let’s see the number of virtual log files:

  DBCC LOGINFO

[Screenshot: DBCC LOGINFO output, 259 rows]

The number of rows returned is now 259, which is not that bad. If you look at the FileSize column, you can see that the virtual log files at the bottom of the view are slightly bigger than the first ones. This is because of the percentage-based growth.
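If you don’t want to count grid rows by hand, you can capture the DBCC LOGINFO output into a temp table and count it. This is a sketch; the column layout below matches SQL Server 2005/2008 (SQL Server 2012 adds a leading RecoveryUnitId column):

```sql
-- Capture DBCC LOGINFO and count the VLFs.
-- Column layout matches SQL Server 2005/2008.
CREATE TABLE #loginfo (
    FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo int, Status int, Parity tinyint, CreateLSN numeric(38,0)
)
INSERT INTO #loginfo EXEC ('DBCC LOGINFO')
SELECT COUNT(*) AS vlf_count FROM #loginfo
DROP TABLE #loginfo
```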

You hit the bug if you set your log file to autogrow by a multiple of 4GB, and that was exactly the autogrow size in the case I ran into the other day.

Now let us see what happens if we reset the test setup, but make the log file grow by 4GB instead of the default 10%:

  USE [master]
  GO
  IF EXISTS (SELECT * FROM sys.databases WHERE name = 'VLFHell')
  BEGIN
      ALTER DATABASE [VLFHell] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
      DROP DATABASE VLFHell
  END
  GO
  CREATE DATABASE [VLFHell] ON PRIMARY
  (
      NAME = N'VLFHell',
      FILENAME = N'C:\SQLData\VLFHell.mdf',
      SIZE = 5MB,
      FILEGROWTH = 1MB
  )
  LOG ON
  (
      NAME = N'VLFHell_log',
      FILENAME = N'C:\SQLData\VLFHell_log.LDF',
      SIZE = 1MB,
      FILEGROWTH = 4GB --Notice the 4GB filegrowth
  )
  GO
  BACKUP DATABASE [VLFHell] TO DISK = 'C:\SQLData\VLFHell.Bak' WITH COMPRESSION, INIT
  GO
  USE [VLFHell]
  GO
  CREATE TABLE MakeMyLogFileGoCrazy
  (
      id bigint identity(1,1),
      WasteOfSpace1 uniqueidentifier default newid(),
      WasteOfSpace2 datetime2 default sysdatetime(),
      WasteOfSpace3 char(8000)
  )
  GO
  INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
  VALUES ('Geniiius - i-catching solutions')
  GO
  INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
  SELECT WasteOfSpace3 FROM MakeMyLogFileGoCrazy
  GO 17
  DBCC LOGINFO

And the results from the DBCC command:

[Screenshot: DBCC LOGINFO output, 4,933 rows]

Now the number of VLFs is 4,933. The table generated holds only around 1GB of data, so imagine what would happen if you were loading hundreds of gigabytes into it. Also notice the size of each added VLF: they are now only 248KB, which causes physical fragmentation, as well as overhead when analysing the log file in the process of bringing the database online.
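For reference, the commonly cited VLF allocation rules (for versions before SQL Server 2014) say that each growth event adds 4 VLFs when the growth is up to 64MB, 8 VLFs when it is up to 1GB, and 16 VLFs when it is larger than 1GB. A healthy 4GB growth should therefore add 16 VLFs of roughly 256MB each, not thousands of 248KB ones:

```sql
-- Pre-2014 allocation rules: growth <= 64MB -> 4 VLFs,
-- growth <= 1GB -> 8 VLFs, growth > 1GB -> 16 VLFs per growth event.
SELECT 4096 / 16 AS expected_vlf_size_mb   -- 256 (MB), versus the 248KB seen above
```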

The case I stumbled upon recently had hundreds of thousands of VLFs, and the database took hours to come online… not the most ideal situation!

If you have a database that stays a bit too long in the “in recovery” state after a reboot, I would certainly recommend looking at the DBCC LOGINFO output. If you have thousands of VLFs, the solution is quite simple: back up the log, shrink the log file, and grow it in reasonable sizes not equal to any multiple of 4GB:

  BACKUP LOG [VLFHell] TO DISK = 'G:\Data\VLFHell.trn'
  GO
  CHECKPOINT
  GO
  DBCC SHRINKFILE (N'VLFHell_log', 1)
  GO
  ALTER DATABASE [VLFHell]
  MODIFY FILE ( NAME = N'VLFHell_log', SIZE = 2GB )
  GO
  DBCC LOGINFO
  GO
You may need to back up the log a few times before you can shrink the file completely. But now the number of virtual log files is reduced to just 20:

[Screenshot: DBCC LOGINFO output, 20 rows]
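If you want to check for this problem across an entire instance, here is a sketch that runs DBCC LOGINFO in every database via the undocumented (but widely used) sp_MSforeachdb procedure; the column layout again matches SQL Server 2005/2008:

```sql
-- Server-wide VLF census (sketch; SQL Server 2012 adds a leading
-- RecoveryUnitId column to the DBCC LOGINFO output).
CREATE TABLE #loginfo (
    FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo int, Status int, Parity tinyint, CreateLSN numeric(38,0)
)
CREATE TABLE #vlfcounts (DatabaseName sysname, VLFCount int)
EXEC sp_MSforeachdb 'USE [?];
    TRUNCATE TABLE #loginfo;
    INSERT INTO #loginfo EXEC (''DBCC LOGINFO'');
    INSERT INTO #vlfcounts SELECT DB_NAME(), COUNT(*) FROM #loginfo;'
SELECT DatabaseName, VLFCount FROM #vlfcounts ORDER BY VLFCount DESC
DROP TABLE #loginfo
DROP TABLE #vlfcounts
```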

Conclusion

If you for some reason have configured your log file to grow in multiples of 4GB, you could suffer from this bug, which can lead to long recovery times after a reboot. The bug is confirmed in SQL Server 2005, SQL Server 2008, SQL Server 2008 SP1 and SQL Server 2008 R2. In Denali (SQL Server 2012) it is no longer an issue, so the bug has been fixed there.

@HenrikSjang

 
Geniiius ApS