Context
A few years after I created the Elastic Binary Trees (ebtree), I started to get feedback from users who needed to use them in multi-threaded environments, asking if an MT-safe version was available. At the time nothing was available, and all I could suggest to them was to place locks around all the tree manipulation code. While this generally works well, it is sub-optimal because these trees feature some really nice properties that are partially impaired when surrounded by frequent, heavy locks. Thus it required some extra thinking, leading to a first minimal solution in 2012, which has been improved since.

Common use cases
It looks like the most common use cases of ebtrees are :
- data caches (like the LRU cache example from ebtree that ended up in HAProxy as well)
- log parsing / analysis (i.e. counting the number of occurrences of various criteria and their relation with other criteria)
- data conversion (like in HAProxy's maps, mostly used for IP-based Geolocation)
These use cases share some common traits :
- the trees are mostly read from, and seldom updated (typically less than 2-3% updates)
- these operations are performed as a very small part of something more complex, and often a single high-level operation will require multiple tree accesses. Thus these operations need to remain extremely fast.
- the workloads involving these operations tend to be heavy and to benefit from multi-threading
For users requiring parallelism together with some insertions (caches, log processing), R/W locks limit the scalability by the ratio of time spent writing to the time spent doing something else not requiring access to the tree. Indeed, since the write lock is exclusive, each time there is a write operation the tree is unusable and all readers have to wait.
Observing how the time is spent in ebtrees
Ebtrees present an average lookup and insertion time of O(log(N)), where N is the number of entries in the tree. In practice this is not totally exact for small trees, because an ebtree is a modified radix tree: the number of nodes visited during a lookup or an insertion actually corresponds to the number of common prefixes along the chain to the requested node, which can be as high as the number of bits in the data element. The tree descent is done by comparing some bit fields with the data found at each location and deciding to go left or right based on this. The final operation (returning the data found, or inserting/deleting a node) is very fast as it's only a matter of manipulating 4 pointers in the worst case. The sequence below shows how a node "3" would be added to a tree already containing nodes 1, 2, 4, and 5 :

On a Core i7-6700K running at 4.4 GHz, a lookup in a tree containing 1 million nodes over a depth of 20 levels takes 40 ns when all the traversed nodes are hot in the L1 cache, and up to 600 ns when looking up non-existing random values causing mostly cache misses. This sets a limit of 1.67 million lookups per second in the worst case (e.g. when the cache is intentionally attacked).
This ratio is interesting because it shows that for some use cases, a thread would like to benefit from the fast lookup of common keys (e.g. 40 ns to look up the source IP address of a packet belonging to a recent communication), without being too disturbed by another thread having to insert random values and locking the tree for 600 ns.
For a workload consisting of 1% writes in the worst-case conditions above, a single core would spend 87% of its time in reads and 13% in writes, for a total of 22 million lookups per second. This means that even if all reads could be parallelized over an infinite number of cores, the maximum performance that could be achieved would be around 160 million lookups per second, or less than 8 times the capacity of a single core.
Below is a graphical representation of how the time would be spent between fast reads and slow writes on this worst-case workload, on a single-core machine, then on an 8-core machine, showing that the 8-core machine would only be 4 times faster than the single-core one, spending 48% of its total CPU resources waiting while the write lock is held. The 100 operations are performed every 1095 ns, bringing 91 million operations per second.
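For reference, here is the arithmetic behind these figures, assuming batches of 100 operations containing exactly one 600 ns write and 99 reads of 40 ns each, with reads parallelizing perfectly across cores:

\[
\begin{aligned}
t_{1\,\mathrm{core}} &= 99 \times 40\,\mathrm{ns} + 1 \times 600\,\mathrm{ns} = 4560\,\mathrm{ns}
&&\Rightarrow\ 100 / 4560\,\mathrm{ns} \approx 22\,\mathrm{M\ ops/s} \\
\mathrm{reads} &= 3960/4560 \approx 87\,\%, \qquad \mathrm{writes} = 600/4560 \approx 13\,\% \\
t_{\infty\,\mathrm{cores}} &= 600\,\mathrm{ns\ (the\ serial\ write)}
&&\Rightarrow\ 100 / 600\,\mathrm{ns} \approx 167\,\mathrm{M\ ops/s}\ (\text{``around 160M''}) \\
t_{8\,\mathrm{cores,\ R/W\ lock}} &= 600\,\mathrm{ns} + \tfrac{99 \times 40}{8}\,\mathrm{ns} \approx 1095\,\mathrm{ns}
&&\Rightarrow\ 100 / 1095\,\mathrm{ns} \approx 91\,\mathrm{M\ ops/s}
\end{aligned}
\]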
Note that this problem is in fact not specific to trees. It also affects other structures such as hash tables, to a certain extent. Hash tables are made of linked lists attached to an array of list heads. Most of the time, elements are not ordered, so additions are performed at the head (or tail) of the list and it is possible to lock a list head (or the whole hash table, depending on requirements) during a simple insertion. But there is a special case which concerns uniqueness : if you want to insert an element and ensure it is unique (as a cache usually does), you need to atomically look it up then insert the new one, and the same problem happens, except that this time the worst-case lookup time is O(N) :
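To make the problem concrete, here is a minimal sketch of such an "insert if unique" operation on a single hash bucket protected by a mutex (hypothetical types and names, not taken from any particular project); the whole O(N) walk has to remain inside the exclusive section, so no reader can use the bucket in the meantime:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct hnode { struct hnode *next; char key[32]; };
struct bucket { pthread_mutex_t lock; struct hnode *head; };

/* returns the existing node, or inserts a new one; the whole walk is locked */
struct hnode *insert_unique(struct bucket *b, const char *key)
{
    struct hnode *n;

    pthread_mutex_lock(&b->lock);
    for (n = b->head; n; n = n->next)       /* O(N) lookup under the lock */
        if (strcmp(n->key, key) == 0)
            goto out;

    n = calloc(1, sizeof(*n));              /* not found: insert at the head */
    if (n) {
        strncpy(n->key, key, sizeof(n->key) - 1);
        n->next = b->head;
        b->head = n;
    }
out:
    pthread_mutex_unlock(&b->lock);
    return n;
}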
Designing a solution
Since most of the time is spent looking up a position in the storage structure (tree, list, etc.), and this lookup by itself doesn't modify the structure, it seems pointless to prevent other readers from accessing the structure during it. Ideally, we would like the ability to "upgrade" the lock from Read to Write only after the lookup. This cannot be done without some constraints: if it were universally possible, all readers could be upgraded to writers, and some of them would be stuck at certain positions while another writer is modifying these same locations.

But most of the time we know in advance that we're performing a lookup in preparation for a write operation. So we could have an intermediary lock state that does not exclude readers, and which at the same time is guaranteed to be atomically upgradable to a write lock once all readers are gone.
This is exactly what the initial design of the Progressive Locks (plock) does. The new intermediary lock state is called "S" for "seek" as it is only used while seeking over the storage area and without performing any modification. Just like for W locks, a single S lock may be held at a time, but while the W and R locks are exclusive, the S and R locks are compatible.
In the worst case above, if we consider that among the 600ns it takes to insert a key into the tree, 595ns are spent on the lookup and 5ns on updating the pointers, it becomes obvious that the whole tree only needs to be locked for 5ns every 600ns, so 595ns out of the 600ns are usable for read operations happening in parallel. Thus the sequence becomes the following :
- a slow writer starts to seek through the whole tree using the S lock, taking 595ns
- in parallel, other threads continue their read operations. During the same time frame, each thread may initiate 15 read operations of 40ns
- once the writer finds the location where it wants to insert its new node, it upgrades its S lock to W.
- other threads stop initiating read operations and complete existing ones
- the writer waits for all readers to be gone. In the worst case this takes as long as a full read operation, hence 40ns max (20ns on average)
- after 635ns max, the writer is alone and can modify the tree
- after 640ns max, the write operation is completed, the lock is removed and readers can enter again.
The writer will wait between 0 and 40ns to get exclusive access, thus 20ns on average. As a result, this allows one write every 620ns with all read requests served in parallel. This means that 100 operations are performed every 620ns, bringing a throughput of 161 million operations per second, 77% faster than with the R/W lock above :
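The corresponding arithmetic, assuming the reads of each batch fully overlap the seek phase of the single write:

\[
\begin{aligned}
t_{\mathrm{write}} &= \underbrace{595\,\mathrm{ns}}_{\text{seek under S}}
+ \underbrace{20\,\mathrm{ns}}_{\text{avg.\ wait for readers}}
+ \underbrace{5\,\mathrm{ns}}_{\text{update under W}} = 620\,\mathrm{ns} \\
\mathrm{throughput} &\approx 100 / 620\,\mathrm{ns} \approx 161\,\mathrm{M\ ops/s},
\qquad 161 / 91 \approx 1.77\ (\text{77\% faster})
\end{aligned}
\]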
Initial implementation
It appeared critical that a single atomic operation could be used to take a lock, so as not to repeat the performance problem of classical R/W locks. Given that the number of readers needs to be known at all times, the lock needs to be large enough to store a number as large as the maximum number of readers, in addition to other information like the other lock states, all within a single word-sized memory area guaranteeing the possibility of atomic accesses.

Usually locks are implemented using a compare-and-swap operation (e.g. CMPXCHG on x86), which allows checking for contention before performing an operation. But while compare-and-swap is convenient when dealing with a few distinct values, it's not convenient at all when dealing with an integer that can vary very quickly depending on the number of readers, because it would require many failed attempts and might even never succeed for some users.
On x86 systems, we have a very convenient XADD instruction. This is an example of a fetch-and-add operation : it atomically reads a value from a memory location, adds a register to it, and stores the result back to memory, while returning the original memory value in the register.
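For illustration, the GCC/Clang builtin __atomic_fetch_add() exposes this operation portably and compiles to a single locked XADD on x86 for suitable operand sizes; the value added here (1) is just a placeholder for a reader increment, not the actual plock encoding:

#include <stdio.h>

int main(void)
{
    unsigned int lock = 0;

    /* register ourselves by adding 1, and observe the previous value */
    unsigned int prev = __atomic_fetch_add(&lock, 1, __ATOMIC_SEQ_CST);

    printf("previous=%u current=%u\n", prev, lock); /* previous=0 current=1 */
    return 0;
}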
Using XADD definitely helps counting readers, but it is subject to overflows and doesn't appear very convenient for counting write requesters. While we do know that we'll never have many write requesters at once, a safe design must allow all threads to compete in parallel in this situation.
Thus the idea emerged of splitting an integer into 3 parts :
- one to count the readers
- one to count the demands for S locks
- one to count the demands for W locks
As mentioned, the W locks need to be exclusive to all other locks. The S locks need to be exclusive to S and W locks. The R locks need to be exclusive to W locks. This means that it is possible to let certain values overflow into the other ones without risk :
- if the R bits overflow into the S bits, since S locks are stricter than R locks, the guarantees offered to readers are preserved
- if the S bits overflow into the W bits, since W locks are stricter than S, the guarantees offered to S users are preserved
So in practice the limit is really dictated by the number of bits used to represent W. And in order to get optimal performance and avoid forcing readers to wait, it seems important to have the same number of bits for R. In theory a single bit would be enough for S, but since concurrent S attempts would then occasionally overflow into W and needlessly make some readers wait, the choice was made to use two bits for S, supporting up to 3 concurrent seek attempts without perturbing readers, which leaves enough time for these seekers to step back.
This provides the following distribution :
- on 32-bit platforms :
- 14 bits for W
- 2 bits for S
- 14 bits for R
- on 64-bit platforms :
- 30 bits for W
- 2 bits for S
- 30 bits for R
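As an illustration, a 32-bit layout consistent with this 14/2/14 split (keeping the two lowest bits free, the "application bits" mentioned in the Extra work section below) could be defined as follows. These constants are illustrative and not necessarily the exact ones used by plock:

/* Hypothetical 32-bit layout: 2 application bits, then 14 R bits,
 * 2 S bits and 14 W bits, from lowest to highest.
 */
#define RL_1   0x00000004u   /* one reader                 (bit 2)      */
#define RL_ANY 0x0000FFFCu   /* any reader present         (bits 2-15)  */
#define SL_1   0x00010000u   /* one seek request           (bit 16)     */
#define SL_ANY 0x00030000u   /* any seek request present   (bits 16-17) */
#define WL_1   0x00040000u   /* one write request          (bit 18)     */
#define WL_ANY 0xFFFC0000u   /* any write request present  (bits 18-31) */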
The initial naive encoding showed that certain combinations correspond to cases that do not exist in practice, and that a more advanced encoding could help store one extra state.
Indeed, the progress made on ebtrees makes it apparent that certain O(1) operations like the removal of a node might one day be implemented as atomic operations, without requiring any locking. But how is this related to a lock implementation then ? Porting an application or a design from locked to lock-free is never a straightforward operation and can sometimes span several years, making small progress at each attempt. Being able to use a special lock state to indicate that it is compatible only with itself and exclusive to all other ones can be very convenient during the time it takes to transform the application.
So a new "A" (for "atomic") state was introduced and all the encoding of the states based on the bits above was redefined to be a bit more complex but still fast and safe, as described below.
Current implementation
The 5 known states of Progressive Locks are now :
- U (unlocked) : nobody claims the lock
- R (read-locked) : some users are reading the shared resource
- S (seek-locked) : reading is still OK but nobody else may seek nor write
- W (write-locked) : exclusive access for writing
- A (atomic-locked) : some users are performing safe atomic operations which are potentially not compatible with the ones covered by R/S/W.
The S lock cannot be taken if another S or W lock is already held. But once the S lock is held, the owner is automatically granted the right to upgrade it to W without checking for other writers. And it can take and release the W lock multiple times atomically if needed. It must only wait for the last readers to leave.
The A lock supports concurrent write accesses and is used when certain atomic operations can be performed on a structure which also supports non-atomic operations. It is exclusive with the other locks. It is weaker than the S/W locks (S being a promise of "upgradability"), and will wait for any readers to leave before proceeding.
The lock compatibility matrix now looks like below. Compatible locks are immediately obtained. Incompatible locks simply result in the requester waiting for the incompatible ones to leave :
Many transitions are possible between these states for a given thread, as represented on the transition graph below :
Some additional state transitions may be attempted but are not guaranteed to succeed. This is the case when holding a Read lock and trying to turn it into another lock which is limited to a single instance : another thread might attempt exactly the same transition, and by definition this cannot work for both of them. Moreover, the one succeeding will wait for readers to quit, so the one failing will also have to drop its Read lock before trying again. Nevertheless, the plock API provides easy access to these uncommon features.
The transitions to W and A are performed in two phases :
- request the lock
- wait for previous incompatible users to leave
The locks are implemented using cumulable bit fields representing, from the lowest to the highest bits :
- R: the number of readers (read, seek, write)
- S: the number of seek requests
- W: the number of write requests
For concurrent reads, the maximum number of concurrent threads supported is dictated by the number of bits used to encode the readers.
Since the writers are totally exclusive to any other activity, they must support the worst imaginable scenario where all threads request a write lock at the exact same instant; thus the number of bits used to represent writers is the same as the number used to represent readers.
The seek requests are kept on a low bit count, placed just below the write bits, so that if the count overflows, it temporarily spills into the write bits and appears as a request for exclusive write access. This allows the number of seek bits to remain very low : technically 1 would suffice, but 2 are used to avoid needless lock/unlock sequences during common conflicts.
In terms of representation, we now have this :
- R lock is made of the R bits
- S lock is made of S+R bits
- W lock is made of W+S+R bits
- A lock is made of W bits only
Having only the W bits set in the W:S fields could be confused with the A lock, except that the A lock doesn't hold the R bits and is exclusive with regular locks, so this situation is unambiguous as well.
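As an illustration, reusing the hypothetical mask constants from the earlier sketch, and ignoring the exponential backoff and overflow handling of the real implementation, acquiring and upgrading the S lock could look like this:

/* What each lock adds to the word, per the composition above:
 *   R lock: RL_1                 A lock: WL_1
 *   S lock: SL_1 + RL_1          W lock: WL_1 + SL_1 + RL_1
 */

/* naive S lock acquisition: register, then roll back if a seeker or writer
 * was already there (the real code adds exponential backoff on failure)
 */
static void naive_take_s(unsigned int *lock)
{
    while (1) {
        unsigned int prev = __atomic_fetch_add(lock, SL_1 + RL_1, __ATOMIC_SEQ_CST);
        if (!(prev & (SL_ANY | WL_ANY)))
            return;                      /* no seeker/writer before us: granted */
        __atomic_fetch_sub(lock, SL_1 + RL_1, __ATOMIC_SEQ_CST);  /* step back */
    }
}

/* upgrading S to W then only needs to add WL_1 and wait for the R field to
 * drop back to the owner's own single R count before writing (sketch):
 *   __atomic_fetch_add(lock, WL_1, __ATOMIC_SEQ_CST);
 *   while ((__atomic_load_n(lock, __ATOMIC_ACQUIRE) & RL_ANY) != RL_1)
 *       ;  // spin (the real code uses exponential backoff)
 */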
In practice, it is useful to understand all states that can be observed in the lock, including the transient ones shown below. "N" means "anything greater than zero". "M" means "greater than 1 (many)". "X" means "anything including zero" :
The lock can be upgraded and downgraded between various states at the demand of the requester :
- U⇔A : pl_take_a() / pl_drop_a() (adds/subs W)
- U⇔R : pl_take_r() / pl_drop_r() (adds/subs R)
- U⇔S : pl_take_s() / pl_drop_s() (adds/subs S+R)
- U⇔W : pl_take_w() / pl_drop_w() (adds/subs W+S+R)
- S⇔W : pl_stow() / pl_wtos() (adds/subs W)
- S⇒R : pl_stor() (subs S)
- W⇒R : pl_wtor() (subs W+S)
In case of conflict, the requester has to periodically observe the lock's status by reading it, and try again once the lock's status appears to be compatible with the request. It is very important not to try to take the lock again without reading it first, because of the way CPU caches work : each write attempt to a shared memory location (here the lock) broadcasts a request to all other caches to flush their pending changes and to invalidate the cache line. If the lock is located close to a memory location being modified under this lock's protection, repeated write attempts to the lock will significantly slow down the progress of the thread holding the lock, since the cache line will have to bounce back and forth between competing threads.
Instead, by only reading the lock, the other caches will still have to flush their pending changes but not invalidate the cache line. This way the lock holder still makes progress, albeit not as fast as it could without this.
In order to improve the situation, Progressive locks implement an exponential backoff : each failed attempt causes the failing thread to wait twice as long as the previous time before trying again. This well known method used in locks and in network protocols comes with three immediate benefits :
- it limits the disturbance caused to the thread holding the lock
- it avoids cache storms when using many concurrent threads
- it provides a much better fairness to all threads by increasing the randomness of their attempts
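A schematic illustration of this read-then-retry loop with exponential backoff (hypothetical helpers, not the actual plock code):

/* Schematic wait loop: only *read* the lock while it is incompatible, and
 * double the pause after each failed check (capped), to keep the cache line
 * shared instead of bouncing it between competing cores.
 */
static inline void cpu_relax(void)
{
#if defined(__x86_64__) || defined(__i386__)
    __asm__ volatile("rep; nop" ::: "memory");   /* PAUSE */
#endif
}

static void wait_until_clear(const unsigned int *lock, unsigned int incompatible_mask)
{
    unsigned int delay = 1;

    while (__atomic_load_n(lock, __ATOMIC_ACQUIRE) & incompatible_mask) {
        for (unsigned int i = 0; i < delay; i++)
            cpu_relax();
        if (delay < 65536)
            delay <<= 1;                         /* exponential backoff */
    }
}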
Performance measurements
A new locking mechanism couldn't work without some performance benchmarks. One such utility implementing a simple LRU cache using a hash table is provided with the software package, under the name lrubench. This utility looks up random values in a cache and inserts them if they are not present yet. It allows setting the number of concurrent threads, the cache size (number of keys kept before eviction), the key space to control the cache hit ratio, the cost of a cache miss (number of loops to perform on a simple operation mimicking a real, more expensive operation like a regex call), and the locking strategy.

For each locking strategy, the principle is the same :
- perform a lookup of the value in the cache
- if the value is present, use it
- otherwise perform the expensive operation
- insert the result of this expensive operation into the cache if it was not added in the meantime by another thread, otherwise replace that older entry with the new one
- in case of insertion, remove possible excess elements from the cache to keep it to the desired size
The following locking strategies are compared :
- pthread spinlock : the first lookup is protected using pthread_spin_lock(). During the insertion, the lookup, the possible removal, and the insertion are atomically performed using pthread_spin_lock(). No parallel operation is possible on the cache.
- pthread rwlock : the first lookup is protected using pthread_rwlock_rdlock(). During the insertion, the lookup, the possible removal, and the insertion are atomically performed using pthread_rwlock_wrlock(). Initial lookups may be processed in parallel but not the second ones.
- plock W lock : the first lookup is protected using a W lock. During the insertion, the lookup, the possible removal, and the insertion are atomically performed using a W lock. No parallel operation is possible on the cache. This is comparable to the pthread spinlock strategy.
- plock S lock : the first lookup is protected using an S lock. During the insertion, the lookup, the possible removal, and the insertion are atomically performed using an S lock. No parallel operation is possible on the cache. This is comparable to the plock W lock above but is slightly faster because the S lock doesn't have to check for readers. It is the closest plock alternative to the pthread spinlock.
- plock R+W lock : the first lookup is protected using an R lock. During the insertion, the lookup, the possible removal, and the insertion are atomically performed using a W lock. Initial lookups may be processed in parallel but not the second ones. This is comparable to the pthread rwlock strategy, but is much faster due to the single atomic operation.
- plock R+S>W lock : the first lookup is protected using an R lock. During the insertion, the lookup is performed using an S lock, then the possible removal and the insertion are atomically performed by upgrading the S lock to a W lock. Initial lookups may be processed in parallel, as well as in parallel with the second ones. Only one second lookup may be performed at a time, and initial lookups are only blocked during the short write period at the end.
- plock R+R>S>W lock : the first lookup is protected using an R lock. During the insertion, the lookup is performed using an R lock, then the possible removal and the insertion are atomically performed by trying to upgrade the R lock to an S lock, or by performing the lookup again under an S lock in case of conflict. The S lock is then upgraded to a W lock to complete the operation. Initial lookups may be processed in parallel, as well as in parallel with the second ones. The second lookups may also be performed in parallel, unless a write access is already claimed, which forces the second lookup to be redone.
- plock R+R>W lock : the first lookup is protected using an R lock. During the insertion, the lookup is performed using an R lock, then the possible removal and the insertion are atomically performed by trying to upgrade the R lock to a W lock, or by performing the lookup again under a W lock in case of conflict. Initial lookups may be processed in parallel, as well as in parallel with the second ones. The second lookups may also be performed in parallel, unless a write access is already claimed, which forces the second lookup to be redone.
The graphs below represent the number of cache lookup operations per second achieved using these different locks, for 6 different cache hit ratios (50%, 80%, 90%, 95%, 98%, 99%), for 3 different cache miss costs (30, 100, 300 snprintf() loops), and with a varying number of threads. The left part of the graph uses only one thread per CPU core (no HyperThreading involved), and the right part of the graph uses two threads per CPU core (HyperThreading).
Low miss cost (30 loops)
Medium miss cost (100 loops)
High miss cost (300 loops)
It is obviously not a surprise that the difference between locking mechanisms fades away when the miss cost increases and the hit ratio diminishes since more time is proportionally spent computing a value than competing for a lock. But the difference at high hit ratios for large numbers of threads is stunning : at 24 threads, the plock can be respectively 5.5 and 8 times faster than R/W locks and spinlocks! And even at low hit ratios and high miss cost, the plocks are never below the pthread locks and become better past 8-10 threads.
The more time the lookup takes, the wider the gap between the different mechanisms, since in the case of spinlocks or R/W locks, exclusive access is required during the insertion, making even a single thread doing an occasional insert stop all other threads from using the cache. This is not the case with plocks, except in the R+R>W mode upon failure to upgrade due to two threads trying to concurrently insert an entry. This is what causes the R+R>W mechanism to converge to the R+W mechanism when the thread count increases.
It looks like the upgrade attempts from the R lock are pointless on this workload, because there is rarely contention between two insertions. They start to be slightly beneficial only when dealing with long lookup times and high cache miss ratios. But in this case one of the threads has to retry the operation, which is not much different from being serialized around an S lock.
At low hit ratios (50%), the spinlocks are cheaper than R/W locks (fewer operations; the balance shifts around an 80% hit ratio and 10 threads), and the plock S and W variants are slightly faster than the spinlocks and converge towards them with more threads. One explanation might be the use of exponential back-off in plock, which makes it behave slightly better for a comparable workload. But at even only an 80% cache hit ratio (hence 20% misses), all strict locks behave similarly and the benefit of the upgradable locks becomes obvious.
It is also visible on graphs with high hit ratios that the spinlock performance degrades as the number of threads increases. One could think that this would indicate that the spinlocks do not use an exponential back-off to avoid polluting the bus and caches, but the curve is perfectly merged with the plock-S and plock-W curves, and plock does use exponential back-off. In fact the reason here is different. It's simply that plock-S and plock-W start with an aggressive locked XADD operation, and that the spinlocks use a locked CMPXCHG operation. Both operations unconditionally issue a bus write cycle, even the CMPXCHG! This is explained in the Intel Software Developer's manual in the CMPXCHG instruction description : "To simplify the interface to the processor’s bus, the destination operand receives a write cycle without regard to the result of the comparison. The destination operand is written back if the comparison fails; otherwise, the source operand is written into the destination. (The processor never produces a locked read without also producing a locked write.)". These write operations have a negative effect on the other CPU's caches, forcing invalidations, which explains why the performance slowly degrades.
For plock, an attempt was made to read before writing, but this results in a small performance loss for most common cases which isn't worth the change. In the end this preliminary read is only performed before taking the R lock, because in general this lock is taken before a long operation so a double read is easily amortized, and because not doing it increases the time it takes for a W lock holder to see the last reader quit.
Integrating Progressive locks into a program
The plock Git tree can be consulted and cloned from here. The code was made to have the fewest possible dependencies. It only consists of an include file bringing all the code as macros, and a second include file providing the atomic operations. Macros were chosen so that the calls can automatically be used on both 32- and 64-bit locks. 32-bit platforms may only use 32-bit locks, but 64-bit platforms may use both. For most use cases, it's likely that even a 64-bit platform will not need more than 16383 threads and may prefer to save space by using only 32-bit locks.

A program only needs to do this to use plocks :
#include <plock.h>
From this point, adding a lock to a structure simply requires an integer, long or even a pointer :
struct cache { struct list head; int plock; };
A lock value of zero describes an unlocked state, which is easy to initialize since any structure allocated with calloc() starts with an initialized lock.
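Continuing the struct cache example above, a trivial sketch of such an initialization:

struct cache *cache_new(void)
{
    /* calloc() leaves cache->plock at 0, i.e. unlocked */
    struct cache *cache = calloc(1, sizeof(*cache));
    return cache;
}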
Appending an entry without requiring any lookup would consist in simply taking the W lock around the cache access :
struct cache_node *__cache_append(struct cache *, const char *);

void cache_insert(struct cache *cache, const char *str)
{
    struct cache_node *node;

    pl_take_w(&cache->plock);
    node = __cache_append(cache, str);
    pl_drop_w(&cache->plock);
}
Then reading from the cache would basically consist in taking the R lock before operating :
struct cache_node *__cache_find(const struct cache *, const char *);

int cache_find(struct cache *cache, const char *str)
{
    const struct cache_node *node;
    int ret;

    pl_take_r(&cache->plock);
    node = __cache_find(cache, str);
    ret = node ? node->value : -1;
    pl_drop_r(&cache->plock);
    return ret;
}
Deleting an entry from the cache would consist in taking the S lock during the lookup and upgrading it to a W lock for the final operation :
struct cache_node *__cache_find(const struct cache *, const char *);
void __cache_free(struct cache *, struct cache_node *);

void cache_delete(struct cache *cache, const char *str)
{
    struct cache_node *node;

    pl_take_s(&cache->plock);
    node = __cache_find(cache, str);
    if (!node) {
        pl_drop_s(&cache->plock);
        return;
    }
    pl_stow(&cache->plock);
    __cache_free(cache, node);
    pl_drop_w(&cache->plock);
}
Inserting an entry only if it doesn't already exist can be done in multiple ways (typically taking the S lock for the lookup and upgrading it if no node is found). One interesting way, when the risk of collision is low and the insertion rate is high (e.g. log indexing), consists in using a read lock and attempting an upgrade to W, falling back to an S lock in case of failure. This way many threads may insert in parallel and their uniqueness checks are parallelized while atomicity remains guaranteed :
struct cache_node *__cache_find(const struct cache *, const char *);
struct cache_node *__cache_append(struct cache *, const char *);

void cache_insert_unique(struct cache *cache, const char *str)
{
    struct cache_node *node;

    pl_take_r(&cache->plock);
    node = __cache_find(cache, str);
    if (node) {
        pl_drop_r(&cache->plock);
        return;
    }

    if (!pl_try_rtow(&cache->plock)) {
        /* upgrade conflict, must drop R and try again */
        pl_drop_r(&cache->plock);
        pl_take_s(&cache->plock);
        node = __cache_find(cache, str);
        if (node) {
            pl_drop_s(&cache->plock);
            return;
        }
        pl_stow(&cache->plock);
    }

    /* here W is held */
    node = __cache_append(cache, str);
    pl_drop_w(&cache->plock);
}
The lrubench.c test program provided with plock gives a good illustration of how the locks can be used with various strategies, compared to spinlocks and standard R/W locks.
Extra work
Some cleanup work remains to be done on the locks, particularly on the atomic operations which are confusingly prefixed with "pl_". Some of them will move to a distinct low-level library. This is not important since users should normally not use these functions directly.

It seems useless to implement progressive locks on 16-bit values (they would support at most 127 threads, or 63 if keeping the 2 application bits), but maybe some environments could make good use of them by reusing holes in existing structures.
I made a few attempts at using the same mechanism to implement tickets to grant an even better fairness to many threads, but didn't find a way to guarantee uniqueness of ticket numbers without performing multiple atomic operations.
It would seem that some sort of atomic "test-and-add" operation would be beneficial here, to avoid taking the lock and then rolling it back in case of conflict. It would be a mix of a test-and-set and an addition replacing the XADD, so that the addition is only performed if certain bits are not set (typically the S and W bits). While no such instruction exists for now, a few different options should probably be considered :
- HLE (Hardware Lock Elision), which is a new mechanism introduced with Intel's TSX extensions. The idea is to define a code section where locks are ignored and the operation is attempted directly into the L1 cache. If it succeeds, we've saved some bus locks. If it fails, it is attempted again with the real locks. It seems that it could be used for the read+XADD at least when taking an R lock, and avoid having to grab the bus lock at this point. Some experimentation is required. What's nice is that code written like this is backwards compatible with older processors.
- RTM (Restricted Transactional Memory) is a more complex mechanism which makes it possible to know whether a sequence of operations can succeed lockless or requires locking. In case of failure the operation will be attempted again. It matches very closely the R->S and R->W opportunistic upgrades that we have in the examples above. This could also be used to insert or delete a node in the tree after a lockless lookup.
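As an illustration of the classical transactional-elision pattern this refers to (a generic sketch, not part of plock), using the _xbegin()/_xend()/_xabort() intrinsics from <immintrin.h> (GCC/Clang with -mrtm, on a TSX-capable CPU); the fallback uses the plock calls shown earlier, the rest is hypothetical:

#include <immintrin.h>

/* try the critical section transactionally; fall back to the real lock when
 * the transaction aborts (conflict, capacity, syscall, context switch, ...)
 */
void update_with_rtm_fallback(unsigned int *lock, void (*do_update)(void *), void *arg)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        if (*lock == 0) {         /* lock must appear free inside the txn;   */
            do_update(arg);       /* reading it adds it to the read set, so  */
            _xend();              /* a concurrent real locker aborts us      */
            return;               /* commit: no bus lock was taken           */
        }
        _xabort(0xff);            /* someone holds the lock: abort */
    }
    /* fallback path: the same update under the regular W lock */
    pl_take_w(lock);
    do_update(arg);
    pl_drop_w(lock);
}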
Annex A: detailed locking rules
The rights and expectations these locks impose on their users are the following :
- unlocked (U) :
- nobody may perform visible modifications to the protected area at all
- anybody can read the protected area at their own risk
- anybody may immediately turn the lock to any other state
- read-locked (R) :
- requester must wait for writers to leave before being granted the lock
- owner may read any location from the protected structure without any requirement for ordering
- others may read any location from the protected structure without any requirement for ordering
- others may also be granted an R lock
- one other user may also be granted an S lock
- one other may take a W lock but will first have to wait for all readers to leave before proceeding with writes
- seek-locked (S) :
- requester must wait for writers and seekers to leave before being granted the lock
- owner may read any location from the protected structure without any requirement for ordering
- others may read any location from the protected structure without any requirement for ordering
- others may also be granted an R lock
- owner may upgrade the lock to a W lock at any instant without prior notification
- owner must wait for readers to leave after the lock is upgraded before performing any write
- owner may release the lock without performing an upgrade
- owner may also downgrade the S lock to an R lock
- write-locked (W) :
- requester must wait for writers and seekers to leave before being granted the lock (this is guaranteed when upgrading from S)
- others may not try to grab any lock once this lock is held
- the lock is granted once all other users have left
- owner may perform any modification to the protected structure
- others may not access the protected structure at all
- owner may downgrade the lock to an S lock
- owner may downgrade the lock to an R lock
- atomic (A) :
- requester must wait for writers and seekers to leave before being granted the lock, but doesn't exclude other write-atomic users
- the lock is granted once all other users have left
- owner is not guaranteed to be alone since other users may hold the A lock in parallel and perform modifications at the same time ; observations and retries might be necessary to complete certain operations
- owner may only carefully apply modifications to the protected structure using atomic operations and memory barriers in a way that is compatible with its own usage
- others may only carefully apply modifications to the protected structure using atomic operations and memory barriers in a way that is compatible with others' usage
- the protections involved with this type of lock are 100% application-specific. Most commonly this will be used to reinitialize some elements or release a series of pointers using atomic operations, and all compatible functions will use this lock instead of any of the 3 above.
Testing ebtree with Intel RTM (Restricted Transactional Memory) would be very interesting.
I worked a bit with it (see https://github.com/cosmos72/stmx), and it's very easy to use, provided that you have:
1) an Intel CPU that provides it
2) a compiler with inline assembly - gcc/g++ are fine
In my experience though, debugging RTM issues is completely another matter... since they completely rollback on abort, it's impossible to run them in a debugger, and it's almost impossible even to see where/why they aborted.
Hint: usually when allocating memory, performing system calls, or at context switches.
Thanks for your feedback. I haven't used RTM yet, however I've tried HLE (results not published yet). HLE already blows away all scores on random accesses, even for high write ratios. But I predict that for isolated, highly concurrent accesses it will misbehave. In the meantime I improved the locks to support guaranteed upgradability from R to A to support concurrent write access. I still need to document it.
Thanks for the idea of allocating more bits for exclusive and semi-exclusive states, to allow the use of XADD.
> Thus the choice was made to use two bits for S, supporting up to 3 concurrent seek attempts without perturbing readers, which leaves enough time for these seekers to step back.
This sentence is confusing. How could there be concurrent seekers?
Note that the most common term for your "Seek" state is "Update" (MS SQL Server, some IBM DBs, several Java libraries), or "Upgrade" (Boost). Also "Intent Exclusive" is used in Mongo. It might make sense to rename it to one of the existing names.
Regarding the concurrent seekers, you need to keep in mind that what matters is not the number of those who succeed, but the number of those who try. So between the moment a thread checks if it's alone and the moment it adds itself, another thread might have done the same and the second one has to step back. This is something that doesn't happen with a compare-and-swap operation.
Regarding the terminology, initially I went with "upgrade", but during the various experiments to get this done I identified a number of other types of upgrades, which would likely make this term confusing in the long term. I also thought about "shared write" but it's not a write, and it conflicts with the atomic one which allows writes to be shared. Here the goal was to indicate that the thread requires no change to be made while holding this state, while being guaranteed to be able to write, and ultimately I went with the name that matched the type of operation made in the trees. However I like the "intent" you suggest as it describes the purpose well. It's difficult to find appropriate terminology in this area as there are many possible use cases.
This looks really promising, thank you for posting this and providing access to the plock source. I'm busy testing plock inside a C++ project and ran into some compilation issues. The pl_deref_long and pl_deref_int macros do an implicit cast from (void *) to (unsigned long *) and (unsigned int *) respectively, however C++ forbids implicit conversion from (void *). I've changed those macros to:
#define pl_deref_long(p) ({ volatile unsigned long *__pl_l = (unsigned long *)(p); *__pl_l; })
#define pl_deref_int(p) ({ volatile unsigned int *__pl_i = (unsigned int *)(p); *__pl_i; })
Would the above change have any side effects that I need to be aware of?
Not any that I can think of. Anyway these are part of the elements I want to migrate to a distinct lib as they are not directly related to plocks. I think that an equivalent to the Linux kernel's READ_ONCE() would be better.