"ad-hoc" multi-threading refers to sharing of data across threads without an explicit sharing mechanism (such as a queue or a mutex). There's nothing wrong per-se with ad-hoc multi-threading, but too often people use it as an excuse for "comment moderated handoff" which is no good.
The point of this post is : protect your threading! Use name-changes and protection classes to make access lifetimes very explicit and compiler (or at least assert) moderated rather than comment-moderated.
Let's look at some examples to be super clear. Ad-Hoc multi-threading is something like this :
int shared;

thread0 :
{
    shared = 7; // no atomics or protection or anything
    // shared is now set up

    start thread1;

    // .. do other stuff ..

    kill thread1;
    wait thread1;

    print shared;
}

thread1 :
{
    shared ++;
}

This code works (assuming that thread creation and waiting have some kind of memory barrier in them, which they usually do), but the hand-offs and synchronization are all ad-hoc and "comment moderated". This is terrible code.
I believe that even with something like a mutex, you should make the protection compiler-enforced, not comment-enforced.
Comment-enforced mutex protection is something like :
struct MyStruct s_data;
Mutex s_data_mutex; // lock s_data_mutex before touching s_data

That's okay, but comment-enforced code is always brittle and bug-prone. Better is something like :
struct MyStruct s_data_needs_mutex;
Mutex s_data_mutex;

#define MYSTRUCT_SCOPE(name)  MUTEX_IN_SCOPE(s_data_mutex); MyStruct & name = s_data_needs_mutex;

assuming you have some kind of mutex-scoper class and macro. This makes it impossible to accidentally touch the protected stuff outside of a lock.
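MUTEX_IN_SCOPE isn't shown in this post ; a minimal sketch of that kind of scoper, assuming a Mutex type with Lock()/Unlock() members (the names here are just illustrative, not real code from my library), might be :

// hypothetical sketch of a mutex-scoper, not part of the original code :
class MutexInScope
{
public:
    explicit MutexInScope(Mutex & m) : m_mutex(m) { m_mutex.Lock(); }
    ~MutexInScope() { m_mutex.Unlock(); }
private:
    MutexInScope(const MutexInScope &);              // non-copyable
    MutexInScope & operator = (const MutexInScope &);
    Mutex & m_mutex;
};

// one use per scope ; the fixed local name keeps the sketch simple
#define MUTEX_IN_SCOPE(mut)  MutexInScope local_mutex_scoper(mut)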
Even cleaner is to make a lock-scoper class that un-hides the data for you. Something like :
//-----------------------------------

template <typename t_data> class ThinLockProtectedHolder;

template <typename t_data> class ThinLockProtected
{
public:

    ThinLockProtected() : m_lock(0), m_data() { }
    ~ThinLockProtected() { }

protected:

    friend class ThinLockProtectedHolder<t_data>;

    OodleThinLock   m_lock;
    t_data          m_data;
};

template <typename t_data> class ThinLockProtectedHolder
{
public:

    typedef ThinLockProtected<t_data> t_protected;

    ThinLockProtectedHolder(t_protected * ptr) : m_protected(ptr)
    {
        OodleThinLock_Lock(&(m_protected->m_lock));
    }

    ~ThinLockProtectedHolder()
    {
        OodleThinLock_Unlock(&(m_protected->m_lock));
    }

    t_data & Data() { return m_protected->m_data; }

protected:

    t_protected * m_protected;
};

#define TLP_SCOPE(t_data,ptr,data)  ThinLockProtectedHolder<t_data> RR_STRING_JOIN(tlph,data) (ptr); t_data & data = RR_STRING_JOIN(tlph,data).Data();

//--------

/*
// use like :

    ThinLockProtected<int> tlpi;

    {
        TLP_SCOPE(int,&tlpi,shared_int);
        shared_int = 7;
    }
*/

//-----------------------------------
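OodleThinLock and RR_STRING_JOIN are specific to my codebase ; the same holder pattern sketched on plain std::mutex (just an illustration with invented names, not drop-in code) would be something like :

// rough sketch of the same pattern on std::mutex ; illustration only
#include <mutex>

template <typename t_data> class MutexProtected
{
public:

    MutexProtected() : m_data() { }

    // the Holder is the only way to reach m_data, and it holds the lock
    class Holder
    {
    public:
        explicit Holder(MutexProtected * ptr) : m_protected(ptr)
        {
            m_protected->m_mutex.lock();
        }
        ~Holder()
        {
            m_protected->m_mutex.unlock();
        }
        t_data & Data() { return m_protected->m_data; }
    private:
        MutexProtected * m_protected;
    };

private:

    std::mutex  m_mutex;
    t_data      m_data;
};

// use like :
//
//  MutexProtected<int> mp;
//  {
//      MutexProtected<int>::Holder h(&mp);
//      h.Data() = 7;
//  }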
Errkay. So the point of this whole post is that even when you are just doing ad-hoc thread ownership, you should still use a robustness mechanism like this. For example, by direct analogy, you could use something like :
//=========================================================================

template <typename t_data> class AdHocProtectedHolder;

template <typename t_data> class AdHocProtected
{
public:

    AdHocProtected() :
        #ifdef RR_DO_ASSERTS
        m_lock(0),
        #endif
        m_data() { }
    ~AdHocProtected() { }

protected:

    friend class AdHocProtectedHolder<t_data>;

    #ifdef RR_DO_ASSERTS
    U32     m_lock;
    #endif
    t_data  m_data;
};

#ifdef RR_DO_ASSERTS

void AdHoc_Lock( U32 * pb)
{
    U32 old = rrAtomicAddExchange32(pb,1);
    RR_ASSERT( old == 0 );
}

void AdHoc_Unlock(U32 * pb)
{
    U32 old = rrAtomicAddExchange32(pb,-1);
    RR_ASSERT( old == 1 );
}

#else

#define AdHoc_Lock(xx)
#define AdHoc_Unlock(xx)

#endif

template <typename t_data> class AdHocProtectedHolder
{
public:

    typedef AdHocProtected<t_data> t_protected;

    AdHocProtectedHolder(t_protected * ptr) : m_protected(ptr)
    {
        AdHoc_Lock(&(m_protected->m_lock));
    }

    ~AdHocProtectedHolder()
    {
        AdHoc_Unlock(&(m_protected->m_lock));
    }

    t_data & Data() { return m_protected->m_data; }

protected:

    t_protected * m_protected;
};

#define ADHOC_SCOPE(t_data,ptr,data)  AdHocProtectedHolder<t_data> RR_STRING_JOIN(tlph,data) (ptr); t_data & data = RR_STRING_JOIN(tlph,data).Data();

//==================================================================

which provides scoped checked ownership of variable hand-offs without any explicit mutex.
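(rrAtomicAddExchange32 is an atomic fetch-add-and-return-old from my codebase ; on stock C++ the same debug-only ownership check could be sketched with std::atomic, roughly like this - again just an illustration :)

// rough std::atomic equivalent of the RR_DO_ASSERTS ownership check
#include <atomic>
#include <cassert>

inline void AdHoc_Lock_std( std::atomic<unsigned> * pb )
{
    unsigned old = pb->fetch_add(1);    // returns the previous value
    assert( old == 0 );                 // fires if someone else already "owns" the data
    (void) old;
}

inline void AdHoc_Unlock_std( std::atomic<unsigned> * pb )
{
    unsigned old = pb->fetch_sub(1);
    assert( old == 1 );
    (void) old;
}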
We can now revisit our original example :
AdHocProtected<int> ahp_shared;

thread0 :
{
    {
        ADHOC_SCOPE(int,&ahp_shared,shared);
        shared = 7; // no atomics or protection or anything
        // shared is now set up
    }

    start thread1;

    // .. do other stuff ..

    kill thread1;
    wait thread1;

    {
        ADHOC_SCOPE(int,&ahp_shared,shared);
        print shared;
    }
}

thread1 :
{
    ADHOC_SCOPE(int,&ahp_shared,shared);
    shared ++;
}
And now we have code which is efficient, robust, and safe from accidents.