[v2] x86: Optimize atomic_compare_and_exchange_[val|bool]_acq [BZ #28537]
Checks

Context               | Check   | Description
dj/TryBot-apply_patch | success | Patch applied to master at the time it was sent
dj/TryBot-32bit       | success | Build for i686
Commit Message
From the CPU's point of view, getting a cache line for writing is more
expensive than reading.  See Appendix A.2 Spinlock in:
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf
A full compare and swap grabs the cache line exclusively and causes
excessive cache line bouncing.  Load the current memory value first
(a plain load, which is atomic on x86), and return immediately if it
already differs from the expected value, so the locked operation is
skipped when it would fail anyway.  This reduces cache line bouncing
on contended locks.
This fixes BZ #28537.
---
sysdeps/x86/atomic-machine.h | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
@@ -73,9 +73,18 @@ typedef uintmax_t uatomic_max_t;
#define ATOMIC_EXCHANGE_USES_CAS 0
#define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \
- __sync_val_compare_and_swap (mem, oldval, newval)
+ ({ __typeof (*(mem)) oldmem = *(mem), ret; \
+ ret = (oldmem == (oldval) \
+ ? __sync_val_compare_and_swap (mem, oldval, newval) \
+ : oldmem); \
+ ret; })
#define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \
- (! __sync_bool_compare_and_swap (mem, oldval, newval))
+ ({ __typeof (*(mem)) oldmem = *(mem); \
+ int ret; \
+ ret = (oldmem == (oldval) \
+ ? !__sync_bool_compare_and_swap (mem, oldval, newval) \
+ : 1); \
+ ret; })
#define __arch_c_compare_and_exchange_val_8_acq(mem, newval, oldval) \