Greetings, stalwart technical nuonians! I am back with a follow-up to my previous MVCC post. Part 1 gave a high-level overview of the problem MVCC is meant to solve, along with a basic sketch of how a database uses MVCC to maximize concurrency. In part 2, I hope to make these concepts more concrete with specific examples.
Example 1: Reads and Updates
Consider a table with at least 3 live rows in it, each with some ID value. Given that, let's examine the case where one transaction is reading the entire table while another transaction is updating a row concurrently ('concurrently' meaning that both transactions are active and processing at the same time, so their effects can't be ordered into a simple before/after history).
Transaction 1 starts 'before' transaction 2 and begins reading every entry in the table it can find. Meanwhile, transaction 2 starts up and hunts for an updatable record. Transaction 2 outraces transaction 1 to record 100, slips in early, and updates it. Because transaction 2 is still active, the update is pending (until it commits, the new version isn't visible to other transactions). In a lock-based system, transaction 1 might have to lock the table as a whole, or validate that no record changed while it was reading. But with MVCC, when transaction 1 finally gets around to reading record 100 it just reads the most recent committed version (version 1). Hooray! No writer blocked a reader, no reader blocked a writer, and no one had to block or roll back.
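The version-visibility rule at work here can be sketched in a few lines of Python. This is a toy model under assumed names (`Version`, `Record`), not NuoDB's actual implementation: each record keeps a chain of versions, and a version installed by an uncommitted transaction is pending and invisible to everyone else.

```python
# Toy MVCC sketch (illustrative only; not NuoDB's implementation).
class Version:
    def __init__(self, value, txn_id, committed=False):
        self.value = value
        self.txn_id = txn_id        # transaction that wrote this version
        self.committed = committed  # pending versions are invisible to others

class Record:
    def __init__(self, value):
        # Newest version first; start with a single committed version.
        self.versions = [Version(value, txn_id=0, committed=True)]

    def read(self, txn_id):
        # A transaction sees its own pending write, otherwise the most
        # recent committed version -- never someone else's pending write.
        for v in self.versions:
            if v.txn_id == txn_id or v.committed:
                return v.value
        raise LookupError("no visible version")

    def write(self, value, txn_id):
        # Install a pending version at the head of the chain.
        self.versions.insert(0, Version(value, txn_id))

# Transaction 2 updates record 100 while transaction 1 is still reading.
record_100 = Record("v1")
record_100.write("v2", txn_id=2)      # T2's update is pending, not committed
print(record_100.read(txn_id=1))      # T1 still sees "v1": no blocking
print(record_100.read(txn_id=2))      # T2 sees its own pending write "v2"
```

The key point the sketch illustrates is that the reader never has to wait: visibility is decided by walking the version chain, not by acquiring a lock.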
Example 2: Multiple non-overlapping updates
Consider the same table as in example 1. However, in this case two separate transactions are going to attempt to update rows in the table.
Transactions 1 and 2 start simultaneously. Transaction 1 wants to update record 142; transaction 2 wants to update record 100. Both transactions find a fully committed version as the most recent, so each can install its update immediately. No blocking is required, and the two transactions can commit or fail in any order or combination without having to coordinate with each other.
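Because the pending-writer bookkeeping lives on each record rather than on the table, disjoint updates never touch shared state. A minimal sketch, again with hypothetical names rather than NuoDB's real API:

```python
# Sketch: writers on different records never coordinate (illustrative).
class Record:
    def __init__(self, value):
        self.committed_value = value
        self.pending_txn = None      # id of the txn holding a pending version
        self.pending_value = None

    def try_update(self, value, txn_id):
        if self.pending_txn is not None:
            return False             # another writer got here first
        self.pending_txn, self.pending_value = txn_id, value
        return True

record_100 = Record("old-100")
record_142 = Record("old-142")

# T1 updates 142 and T2 updates 100: disjoint rows, both succeed at once.
assert record_142.try_update("new-142", txn_id=1)
assert record_100.try_update("new-100", txn_id=2)
```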
Example 3: Update Conflict
Consider the same table as example 1. In this example, two transactions will attempt to update the same record. Here conflict is inevitable; it's just a question of how you detect it, when you detect it, and how you deal with it.
Transactions 1 and 2 are active at the same time, and both attempt to update record 100. Assume that transaction 2 wins the race and installs a new version of record 100 with its requested value. That version is 'pending' or 'locked', which means that transaction 1 will see that it lost the race when it hits record 100. At this point, the behavior depends on the isolation level requested by the user and on the concurrency controller in the database.

If the user has requested read consistency, the concurrency controller has two choices. It can fail T1 right away and let the user decide how to proceed (probably by retrying). Or it can 'pause' T1 until the final state of T2 is known. The reason you might want to wait is that T2 could still fail later on for some other reason. If that happens, the new version of record 100 is basically 'bad' and can be ignored by T1 (and everyone else), and T1 can start up again and update record 100 itself. If T2 doesn't fail, however, then T1 will have waited for a while and will have to fail anyway. Both options have merit, and the choice between them is an engineering trade-off that depends on the characteristics of the application.

NuoDB lets users choose the behavior they want. Failing early is the behavior in 'consistent read' mode (NuoDB-ese for snapshot isolation). For blocking behavior, the user can run in 'write committed' or 'read committed' mode, which ensures that highly contended data won't lead to massive numbers of transaction failures. For finer-grained control, the user can issue a 'SELECT FOR UPDATE' statement in 'consistent read' mode to switch to blocking behavior for a subset of the data being manipulated by the transaction.
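The two policies can be contrasted in a small sketch. The names here (`update`, `fail_fast`, the status table) are invented for illustration; in NuoDB these are SQL-level isolation settings, not an API like this. For brevity the "blocking" branch simply polls the holder's outcome once it is known, where a real system would suspend T1 until then.

```python
# Sketch of the two conflict policies: fail-fast vs. wait-for-outcome.
PENDING, COMMITTED, FAILED = "pending", "committed", "failed"

class Record:
    def __init__(self, value):
        self.value = value
        self.pending_txn = None      # id of the txn holding the pending version

def update(record, txn_id, txn_status, fail_fast=True):
    """Try to install a pending version; True on success, False if we lose."""
    holder = record.pending_txn
    if holder is None:
        record.pending_txn = txn_id  # no conflict: take the pending slot
        return True
    if fail_fast:
        return False                 # consistent-read style: fail immediately
    # Blocking style: decide once the holder's fate is known.
    if txn_status[holder] == FAILED:
        record.pending_txn = txn_id  # holder's version is 'bad'; take over
        return True
    return False                     # holder committed, so we lose anyway

record_100 = Record("v1")
status = {2: PENDING}
assert update(record_100, txn_id=2, txn_status=status)       # T2 wins the race
assert not update(record_100, txn_id=1, txn_status=status)   # fail-fast: T1 loses
status[2] = FAILED                                           # ...but T2 later fails
assert update(record_100, txn_id=1, txn_status=status, fail_fast=False)
```

Note how the waiting policy only pays off in the last step, when T2 actually fails; if T2 had committed, T1 would have waited and then failed anyway, which is exactly the trade-off described above.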
The following diagram illustrates the case where T1 waits for T2, and T2 fails later on:
T2 fails, and the system is then responsible for rolling back its changes. Note that all of the rollback work can happen concurrently with other transactions' forward progress. This is an important feature of NuoDB, because it means that transaction rollback doesn't require other transactions to block or pause in any way while the world is being 'reset'. Any performant modify-in-place transactional system must have a highly concurrent undo mechanism; otherwise everything would come to a screeching halt whenever a transaction failed.
NB: Write-locking record versions is an example of eager failure detection. A concurrency controller that uses a scheme like this can detect transactional interference when it happens. The other option is lazy failure detection, where the transaction double-checks its initial state at commit time and only detects conflicts after the fact. The semantics of SQL definitely make eager failure detection preferable (SQL statements can't return until their effects have succeeded, i.e. they have synchronous behavior). In general, eager detection schemes for write conflicts tend to perform better.
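The eager/lazy distinction can be boiled down to where the conflict check lives: at write time or at commit time. A toy sketch under assumed names (neither class reflects a real database's internals):

```python
# Illustrative contrast: eager vs. lazy write-conflict detection.
class EagerRecord:
    """Conflict is detected at write time, the moment interference happens."""
    def __init__(self):
        self.writer = None
    def write(self, txn_id):
        if self.writer is not None and self.writer != txn_id:
            raise RuntimeError("conflict detected eagerly, at write time")
        self.writer = txn_id

class LazyRecord:
    """Writes always 'succeed'; each txn validates its base state at commit."""
    def __init__(self):
        self.version = 0             # committed-version counter
        self.base = {}               # txn_id -> version the write was based on
    def write(self, txn_id):
        self.base[txn_id] = self.version
    def commit(self, txn_id):
        if self.base[txn_id] != self.version:
            raise RuntimeError("conflict detected lazily, at commit time")
        self.version += 1            # install the change

eager = EagerRecord()
eager.write(txn_id=2)
try:
    eager.write(txn_id=1)            # T1 learns it lost immediately
except RuntimeError as err:
    eager_error = str(err)

lazy = LazyRecord()
lazy.write(txn_id=1)
lazy.write(txn_id=2)
lazy.commit(txn_id=2)                # T2 commits first and succeeds
try:
    lazy.commit(txn_id=1)            # T1 only learns of the conflict now
except RuntimeError as err:
    lazy_error = str(err)
```

In the lazy case, T1 did a write's worth of work that was doomed from the start, which is one reason eager schemes tend to perform better under write contention.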