The SQL Server platform is a robust system, designed to handle relational data through the “ACID” properties of transactions. That is, to ensure that any piece of work (a transaction) is atomic, consistent, isolated, and durable. Failing to achieve these properties would make the environment very hard to use, because you simply couldn’t guarantee that something you thought had happened actually had.
Except that there are times when we’re happy enough to relax these rules. Kind of. Like when we use the much-maligned NOLOCK hint. NOLOCK can give genuinely incorrect results – it’s like trying to count the number of people running around a sports field: you’re better off taking a photo and counting from that, so you can be sure to count everyone exactly once. Using NOLOCK effectively drops that query down to read uncommitted isolation, and we no longer have the full set of ACID properties with this hint.
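To make that concrete, here’s a minimal sketch (the table name is made up) of what the hint looks like, alongside the session-level setting it’s shorthand for:

```sql
-- Hypothetical table. NOLOCK lets the scan read rows that other sessions
-- have modified but not yet committed, and it can miss or double-count
-- rows while data moves around during the scan.
SELECT COUNT(*) AS RoughRowCount
FROM dbo.Orders WITH (NOLOCK);

-- The session-level equivalent, applied to every table in the query:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```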
Durability in SQL Server is handled by making sure that data changes are written to disk before the calling application is told that the transaction is complete. We don’t walk out of a shop with our goods before the cashier has confirmed that the credit card payment has worked. If we did, and the payment bounced, the cashier would be calling us back pretty quickly. In SQL Server this is about confirming that the transaction log entry has been hardened to disk, and it’s why you shouldn’t put databases on disks whose write-cache can lose data in a failure.
And yet the in-memory features of SQL Server, commonly called “Hekaton”, can handle transactions without writing to disk first. The durability is delayed. This month, Todd Kleinhans invites us to write about Hekaton.
For me, the danger associated with losing those ACID properties of transactions is pretty serious. If I need a set of database changes to be treated as an atomic block, I surround it with BEGIN TRAN and COMMIT TRAN. I only use NOLOCK if I’m okay with my results being impacted by other activity. And similarly with delayed durability, I have to be okay with a committed transaction not actually surviving.
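If I do decide that’s acceptable, the opt-in is per commit (once the database allows it). This is just a sketch, with made-up table names:

```sql
-- Assumes the database has been set to allow it:
-- ALTER DATABASE SomeDatabase SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRAN;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
-- Control returns before the log records are hardened to disk, so a
-- crash in that window can lose this "committed" transaction.
COMMIT TRAN WITH (DELAYED_DURABILITY = ON);
```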
If a SQL Server system goes down, the transaction log is the most important thing to save. If you can’t save it, then you’re already suffering from a lack of durability. The data files will have the data up to the last checkpoint, but not necessarily anything after that. So the log files are the things that DBAs really protect. Typically, when a SQL environment goes down, files aren’t lost, and nothing of consequence disappears.
But with delayed durability, any transaction that hasn’t been made durable yet is going to disappear. That’s huge!
Hekaton was announced with new speed records and demonstrations of in-memory stored procedures being compiled into well-optimised C code, where inserting into a table involved little more than placing an instance of a struct into an in-memory structure. The speed benefits were obvious. But without durability, the use cases seemed very limited. And with durability, the main benefits didn’t seem so significant. The promises of amazing performance were tempered by the danger.
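For context, that durability choice is made per table when it’s created. A hedged sketch, with hypothetical names and assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup:

```sql
-- Fully durable: changes are logged and the data survives a restart.
CREATE TABLE dbo.OrdersInMemory
(
    OrderId     INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerId  INT NOT NULL,
    OrderTotal  DECIMAL(18,2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Non-durable: only the table definition survives a restart; the rows don't.
CREATE TABLE dbo.SessionCache
(
    SessionId   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload     NVARCHAR(4000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```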
There were at least two situations where I was okay with this, though. And I still am.
One is in an ETL process that I can restart if needed (and my “at least two” is because of other processes of the same kind, such as logic testing in non-production environments). Suppose I have an overnight process which loads up a staging environment and then applies a bunch of transforms. If this gets interrupted, I’m going to start again from the beginning. This means I’m happy enough for that processing to disregard most of the ACID properties, especially if it means the work gets done quicker by not having to harden everything to the transaction log first.
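As a sketch of that trade-off (the database name is hypothetical), forcing delayed durability on a staging database means commits stop waiting on the log, and a deliberate flush can still mark a hard boundary at the end of the load:

```sql
-- Hypothetical staging database: every commit is lazily hardened, so the
-- load spends less time waiting on the transaction log. A crash can lose
-- the most recent work, which just means re-running the load.
ALTER DATABASE StagingDB SET DELAYED_DURABILITY = FORCED;

-- If a hard durability boundary is wanted at the end of the batch:
EXEC sys.sp_flush_log;
```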
The other is in tempdb. After all, if the SQL Server service has to restart, tempdb is reset. So by its very definition, tempdb isn’t durable. Don’t write anything to a temporary table, table variable, or anywhere else in tempdb if you want it to survive a restart!
And this is where I missed some stuff about Hekaton for quite some time. I couldn’t see a reason why the in-memory features and delayed durability wouldn’t be on automatically for tempdb, always configured that way, including the newer in-memory metadata features. Delayed durability is still a configuration option for tempdb, but it’s set to disabled.
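As an aside, the newer in-memory metadata feature I mentioned is a server-level switch. Checking and enabling it looks like this (SQL Server 2019 and later; the change takes effect after a restart):

```sql
-- Is tempdb metadata memory-optimized?
SELECT SERVERPROPERTY('IsTempDbMetadataMemoryOptimized') AS TempdbMetadataInMemory;

-- Turning it on is a server-level change and needs a service restart.
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
```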
I asked some Microsoft people about it last year, and found that it wasn’t just me who wasn’t clear about it. Bob Ward wrote about the fact that I’d asked him in this article: https://techcommunity.microsoft.com/t5/sql-server-blog/i-o-optimizations-for-tempdb-in-sql-server-and-azure-sql/ba-p/3894163
They did investigate it though, and found that the configuration option doesn’t really do anything. Even though the setting for delayed durability says disabled, tempdb uses lazy commits for its log, which is just another name for the same thing. More about this here: https://learn.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database
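You can see the setting being described for yourself:

```sql
-- tempdb still reports DISABLED, even though its log behaves as though
-- commits were lazy anyway.
SELECT name, delayed_durability_desc
FROM sys.databases
WHERE name = N'tempdb';
```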
What does this mean from a more practical viewpoint?
I think one idea that comes from this is the question of how tempdb gets used in batch processing like ETL. Should all objects that don’t need to persist be handled by tempdb? Staging data? Intermediate results? There are often times when people create tables in the local database to avoid problems like collation pain, but what is being lost by doing that? Larger log backups in the database that matters, perhaps? General performance?
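As one small, hypothetical example of keeping staging work in tempdb while dodging the collation pain:

```sql
-- Hypothetical staging step kept in tempdb rather than the user database.
-- Declaring string columns with DATABASE_DEFAULT collation avoids the
-- usual collation-conflict errors when joining back to local tables.
CREATE TABLE #StageProducts
(
    ProductCode VARCHAR(50) COLLATE DATABASE_DEFAULT NOT NULL,
    ProductName NVARCHAR(200) COLLATE DATABASE_DEFAULT NULL
);

INSERT INTO #StageProducts (ProductCode, ProductName)
SELECT ProductCode, ProductName
FROM SourceDB.dbo.Products;   -- made-up source
```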
I’m not saying to go and rewrite your code. I’m sure that’s no small effort. I’m just suggesting some things to ponder, research, and test.
And come back here to let me know what you discover.
@robfarley.com@bluesky (previously @rob_farley@twitter)