Why is the standard C# event invocation pattern thread-safe without a memory barrier or cache invalidation? What about similar code?
In C#, this is the standard code for invoking an event in a thread-safe way:

    var handler = SomethingHappened;
    if (handler != null)
        handler(this, e);
Where, potentially on another thread, the compiler-generated add method uses Delegate.Combine to create a new multicast delegate instance, which it then sets on the compiler-generated field (using an interlocked compare-exchange).
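Roughly, the generated accessors look something like the sketch below (the backing-field name is made up, and the exact code the compiler emits varies between compiler versions):

    using System;
    using System.Threading;

    public class Publisher
    {
        // Hypothetical backing field; the compiler generates its own.
        private EventHandler _somethingHappened;

        public event EventHandler SomethingHappened
        {
            add
            {
                // Lock-free loop: combine the new handler with the current list,
                // then publish it with Interlocked.CompareExchange. Retry if
                // another thread changed the field in the meantime.
                EventHandler current = _somethingHappened;
                EventHandler comparand;
                do
                {
                    comparand = current;
                    var combined = (EventHandler)Delegate.Combine(comparand, value);
                    current = Interlocked.CompareExchange(ref _somethingHappened, combined, comparand);
                } while (current != comparand);
            }
            remove
            {
                // Symmetric: Delegate.Remove in the same compare-exchange loop.
                EventHandler current = _somethingHappened;
                EventHandler comparand;
                do
                {
                    comparand = current;
                    var removed = (EventHandler)Delegate.Remove(comparand, value);
                    current = Interlocked.CompareExchange(ref _somethingHappened, removed, comparand);
                } while (current != comparand);
            }
        }
    }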
(Note: for the purposes of this question, I don't care about the code that runs in the event subscribers. Assume it's thread-safe and robust in the face of removal.)
In my own code, I want to do something similar, along these lines:

    var localFoo = this.memberFoo;
    if (localFoo != null)
        localFoo.Bar(localFoo.baz);
Where this.memberFoo could be set by another thread. (It's just the one thread, so I don't think it needs to be interlocked - but maybe there's a side-effect here?)

(And, obviously, assume that Foo is "immutable enough" that we're not actively modifying it while it is in use on this thread.)
Now I understand the obvious reason this is thread-safe: reads of reference fields are atomic, and copying to a local ensures we don't observe two different values. (Apparently this is only guaranteed from .NET 2.0 on, but I assume it's safe in any sane .NET implementation?)
But what I don't understand is: what about the memory occupied by the object instance that is being referenced? Particularly in regard to cache coherency? If the "writer" thread does this on one CPU:

    thing.memberFoo = new Foo(1234);

What guarantees that the memory where the new Foo is allocated doesn't happen to already be in the cache of the CPU the "reader" is running on, with uninitialized values? What ensures that localFoo.baz (above) doesn't read garbage? (And how well is this guaranteed across platforms? On Mono? On ARM?)
And what if the newly created Foo happens to come from a pool?

    thing.memberFoo = FooPool.Get().Reset(1234);

This seems no different, from a memory perspective, to a fresh allocation - but maybe the .NET allocator does some magic to make the first case work?
My thinking, in asking this, is that a memory barrier would be required - not so much to ensure that memory accesses cannot be moved around (given that the read is dependent) - but as a signal to the CPU to flush any cache invalidations. My source for this is Wikipedia, so make of that what you will.

(I might speculate that maybe the interlocked compare-exchange on the writer thread invalidates the cache on the reader? Or maybe all reads cause invalidation? Or maybe pointer dereferences cause invalidation? I'm particularly concerned about how platform-specific these things sound.)
Update: To make it more explicit that the question is about CPU cache invalidation and what guarantees .NET provides (and how those guarantees might depend on CPU architecture):

- Say we have a reference stored in field Q (a memory location).
- On CPU A (the writer), we initialize an object at memory location R, and then write a reference to R into Q.
- On CPU B (the reader), we dereference field Q, and get back memory location R.
- Then, on CPU B, we read a value from R.

Assume the GC does not run at any point. Nothing else interesting happens.

Question: What prevents R from being in B's cache, from before A modified it during initialisation, such that when B reads from R it gets stale values, in spite of getting a fresh enough version of Q to know where R is in the first place?

(Alternate wording: what makes the modification to R visible to CPU B at or before the point that the change to Q is visible to CPU B?)

(And does this only apply to memory allocated with new, or to any memory?)
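To make that scenario concrete, here is a rough sketch of the two sides in code (the names Payload, Writer and Reader are mine, purely for illustration):

    public class Payload
    {
        public int Value;
        public Payload(int value) { Value = value; }
    }

    public class Example
    {
        // This field plays the role of Q: it holds a reference to an object
        // living at some memory location R.
        private Payload q;

        // Runs on CPU A.
        public void Writer()
        {
            // Initialize the object at location R, then publish the reference into Q.
            q = new Payload(1234);
        }

        // Runs on CPU B.
        public void Reader()
        {
            var localQ = q;                 // dereference field Q, getting location R
            if (localQ != null)
            {
                int value = localQ.Value;   // read from R - can this be stale or garbage?
            }
        }
    }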
Note: I've posted a self-answer here.
This is a good question. Let us consider the first example.

    var handler = SomethingHappened;
    if (handler != null)
        handler(this, e);
Why is this safe? To answer that question you first have to define what you mean by "safe". Is it safe from a NullReferenceException? Yes, it is pretty trivial to see that caching the delegate reference locally eliminates that pesky race between the null check and the invocation. Is it safe to have more than one thread touching the delegate? Yes, delegates are immutable, so there is no way one thread can cause the delegate to get into a half-baked state. The first two are obvious. But what about a scenario where thread A is doing this invocation in a loop and thread B, at some later point in time, assigns the first event handler? Is it safe in the sense that thread A will eventually see a non-null value for the delegate? The somewhat surprising answer is probably. The reason is that the default implementations of the add and remove accessors for an event create memory barriers. I believe an early version of the CLR took an explicit lock, and later versions used Interlocked.CompareExchange. If you implemented your own accessors and omitted the memory barrier, then the answer could be no (see the sketch below). I think in reality it highly depends on whether Microsoft added memory barriers to the construction of the multicast delegate itself.
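To illustrate that last point, a hand-rolled accessor pair like the following hypothetical sketch has no lock, no interlocked operation, and no volatile access, so nothing obliges a thread polling the event in a loop to ever observe the subscription:

    using System;

    public class UnsafePublisher
    {
        private EventHandler _somethingHappened;

        // Custom accessors with no memory barrier of any kind. The plain write
        // to the backing field may never become visible to a reader spinning on
        // the field, and concurrent add/remove calls can also race with each other.
        public event EventHandler SomethingHappened
        {
            add { _somethingHappened = (EventHandler)Delegate.Combine(_somethingHappened, value); }
            remove { _somethingHappened = (EventHandler)Delegate.Remove(_somethingHappened, value); }
        }
    }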
On to the second and more interesting example.

    var localFoo = this.memberFoo;
    if (localFoo != null)
        localFoo.Bar(localFoo.baz);
Nope. Sorry, that is not safe. Let us assume memberFoo is of type Foo, defined as follows.

    public class Foo
    {
        public int baz = 0;
        public int daz = 0;

        public Foo()
        {
            baz = 5;
            daz = 10;
        }

        public void Bar(int x)
        {
            x = x / daz;   // throws DivideByZeroException if daz is still 0
        }
    }
And then let us assume another thread does the following.

    this.memberFoo = new Foo();
Despite what you may think, there is nothing that mandates these instructions be executed in the order they were defined in the code, as long as the intent of the programmer is logically preserved. The C# or JIT compilers could actually formulate the following sequence of instructions.

    /* 1 */ set register = alloc-memory-and-return-reference(typeof(Foo));
    /* 2 */ set register.baz = 0;
    /* 3 */ set register.daz = 0;
    /* 4 */ set this.memberFoo = register;
    /* 5 */ set register.baz = 5;  // Foo.ctor
    /* 6 */ set register.daz = 10; // Foo.ctor
Notice how the assignment to memberFoo occurs before the constructor has run. That is valid because it has no unintended side-effects from the perspective of the thread executing it. It could, however, have a major impact on other threads. What happens if the null check of memberFoo on the reading thread occurs just after the writing thread has finished instruction #4? The reader will see a non-null value and then attempt to invoke Bar before the daz variable has been set to 10. daz will still hold its default value of 0, leading to a divide-by-zero error. Of course, this is mostly theoretical, because Microsoft's implementation of the CLR creates a release-fence on writes, which would prevent this. But the specification would technically allow for it. See this question for related content.
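If you want the pattern to be safe by the rules of the specification rather than by implementation detail, one option (a minimal sketch, assuming .NET 4.5+ where System.Threading.Volatile is available; a volatile field would work similarly) is to make the publication and the read explicit:

    using System.Threading;

    public class Thing
    {
        private Foo memberFoo;   // uses the Foo type defined above

        // Writer thread: Volatile.Write is a release - the constructor's writes
        // to baz and daz cannot be reordered past the publication of the reference.
        public void Publish()
        {
            var foo = new Foo();
            Volatile.Write(ref memberFoo, foo);
        }

        // Reader thread: Volatile.Read is an acquire - the reads of the object's
        // fields cannot be reordered before the read of the reference.
        public void Consume()
        {
            var localFoo = Volatile.Read(ref memberFoo);
            if (localFoo != null)
                localFoo.Bar(localFoo.baz);
        }
    }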