You're trying to do something like :
Thread1 :
Produce 1
sem.post
Thread2 :
Produce 2
sem.post
Thread 3 :
sem.wait
Consume 1
Thread 4 :
sem.wait
Consume 2
and we assert that the Consume succeeds in both cases. Produce/Consume use a queue or some other kind of lock-free communication structure.
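In C++20 terms, the whole setup might look something like this minimal sketch. StubQueue is just a stand-in for "a queue or some other kind of lock-free communication structure" (here it's a mutexed std::queue purely so the sketch compiles), and the thread layout mirrors the four threads above :

#include <cassert>
#include <mutex>
#include <queue>
#include <semaphore>
#include <thread>

// stand-in for the real lock-free structure, just so this compiles
struct StubQueue {
    std::mutex m;
    std::queue<int> q;
    void push(int v) { std::lock_guard<std::mutex> g(m); q.push(v); }
    bool try_pop(int& v) {
        std::lock_guard<std::mutex> g(m);
        if (q.empty()) return false;
        v = q.front(); q.pop(); return true;
    }
};

std::counting_semaphore<2> sem(0);
StubQueue queue;

void produce(int item) { queue.push(item); sem.release(); }  // Produce ; sem.post

void consume() {
    sem.acquire();                  // sem.wait
    int item;
    bool ok = queue.try_pop(item);  // Consume
    assert(ok);                     // the assertion under discussion
}

int main() {
    std::thread t1(produce, 1), t2(produce, 2), t3(consume), t4(consume);
    t1.join(); t2.join(); t3.join(); t4.join();
}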
Why can this fail ?
1. A too-weak semaphore. Assuming our Produce and Consume are lock-free and not necessarily synchronized on a single variable with something strong like an acq_rel RMW op, we are relying on the semaphore to synchronize publication.
That is, in this model we assume that the semaphore has something like an "m_count" internal variable, and that both post and wait do an acq_rel RMW on that single variable. You could certainly make a correct counting semaphore which does not have this behavior - it would be correct in the sense of controlling thread flow, but it would not additionally act as a memory ordering sync point.
You usually have something like :
Produce :
store X = A
sem.post // sync point B
Consume:
sem.wait // sync point B
load X // <- expect to see A
You expect the Consume to get what was made in the Produce, but that is only guaranteed if the sem post/wait acts as a memory sync point.
There are two reasons I say sem should act like it has an internal "m_count" which is acq_rel, not just release at post and acquire at wait as you might think. One is that you want sem.wait to act like a #StoreLoad, so that the loads which occur after it in the Consume will see preceding stores in the Produce; an RMW acq_rel is one way to get a #StoreLoad. The other is that by using an acq_rel RMW on a single variable (or behaving as if you do), you create a total order on modifications to that variable. For example, if T3 sees T1.post and T2.post and then does its T3.wait, T4 cannot see T1.post, T3.wait, T4.wait or any other funny order.
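To make that concrete, here's a minimal (hypothetical) spinning semaphore whose post and wait are both acq_rel RMWs on a single m_count. A real semaphore would block instead of spinning, but the memory-ordering behavior is the point :

#include <atomic>
#include <thread>

class spin_semaphore {
    std::atomic<int> m_count{0};
public:
    void post() {
        // release alone would publish, but acq_rel makes post participate
        // in the single RMW chain on m_count
        m_count.fetch_add(1, std::memory_order_acq_rel);
    }
    void wait() {
        for (;;) {
            int c = m_count.load(std::memory_order_relaxed);
            if (c > 0 &&
                m_count.compare_exchange_weak(c, c - 1,
                        std::memory_order_acq_rel,
                        std::memory_order_relaxed))
                return;  // claimed one count; this RMW is the sync point
            std::this_thread::yield();
        }
    }
};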
Obviously if you're using an OS semaphore you aren't worrying about this, but there are lots of cases where you use this pattern with something "semaphore-like", such as maybe an "eventcount".
2. You're on POSIX and forget that sem_wait can wake spuriously - for example, returning -1 with EINTR when interrupted by a signal - so returning from the wait does not always mean a post happened. Oops.
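The usual guard (a sketch, not from any particular codebase) is to just retry the wait :

#include <semaphore.h>
#include <cerrno>

// sem_wait can return -1 with errno == EINTR when interrupted by a signal,
// so don't assume that returning from it means a post happened
void sem_wait_checked(sem_t* sem) {
    while (sem_wait(sem) == -1 && errno == EINTR) {
        // interrupted ; wait again
    }
}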
3. Your queue can temporarily appear smaller than it really is.
Say, as a toy example, adding a node is done something like this :
new_node->next = NULL;
old_head = queue->head($).exchange( new_node );
// (*)
new_node->next = old_head;
There is a moment at (*) where you have truncated the queue down to 1 element. Until you fix the next pointer, the queue has been made to appear smaller than it should be. So pop might not get the items it expects to get.
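Written out with std::atomic instead of the $ notation (names are illustrative), the window looks like this :

#include <atomic>

struct Node {
    std::atomic<Node*> next{nullptr};
    int payload{0};
};

struct ToyStack {
    std::atomic<Node*> head{nullptr};

    void push(Node* new_node) {
        new_node->next.store(nullptr, std::memory_order_relaxed);
        // swing head over to the new node first ...
        Node* old_head = head.exchange(new_node, std::memory_order_acq_rel);
        // (*) right here the rest of the list is unreachable from head,
        // so a concurrent pop sees a 1-element structure
        new_node->next.store(old_head, std::memory_order_release);
    }
};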
This looks like a bad way to do a queue, but actually lots of lock-free queues have this property in more or less obvious ways. Either the Push or the Pop can temporarily make the queue appear to be smaller than it really is. (For example a common pattern is to have a dummy node, and if Pop takes off the dummy node, it pushes it back on and tries again, but this causes the queue to appear one item smaller than it really is for a while.)
If you loop, you should find the item that you expected in the queue. However, this is a nasty form of looping because it's not just due to contention on a variable; if in the example above the thread is swapped out while it sits at point (*), then nobody can make progress on this queue until that thread gets time.
What I find in the end is that ensuring that waking from sem.wait always implies there is an item ready to pop is not worth the trouble. You can do it in isolated cases, but you have to be very careful. A much easier solution is to loop on the pop, as in the sketch below.
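Something like this, assuming the queue exposes a non-blocking try_pop (hypothetical name) :

#include <thread>

// wake from the semaphore, then keep retrying the pop : the wait tells us
// an item has been (or is about to be) published, but the queue may
// transiently look smaller than it really is
template <typename Queue, typename Sem, typename Item>
Item consume_one(Queue& q, Sem& sem) {
    sem.wait();
    Item item;
    while (!q.try_pop(item)) {
        std::this_thread::yield();  // someone is mid-push ; give them time
    }
    return item;
}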