Re: SUSv3's "memory location" and threads

On 28 Aug 2007 10:42:35 -0700, Ian Lance Taylor <iant@xxxxxxxxxx> wrote:
> "Adam Olsen" <rhamph@xxxxxxxxx> writes:
>
> > On 27 Aug 2007 23:52:08 -0700, Ian Lance Taylor <iant@xxxxxxxxxx> wrote:
> > > "Adam Olsen" <rhamph@xxxxxxxxx> writes:
> > >
> > > > I realize I'm "probably" safe if I ensure adjacent members are at
> > > > least as large as int (or long, or maybe long long, depending), but
> > > > I'm extremely mistrustful of C.  I'm hoping to find a way to be *sure*
> > > > I'm behaving in a correct, portable manner.
> > >
> > > Unfortunately there is no such way.  The only way you could be sure in
> > > a portable manner is for the C and/or C++ language standard to define
> > > how multi-threaded code should behave.  But they don't.
> > >
> > > The next C++ standard, C++0x is intended to include definitions for
> > > multi-threaded programs.  But it does not exist yet.
> >
> > I'm not concerned about C/C++ themselves, but rather how SUSv3 is
> > interpreted.  Even then I already know they don't actually specify
> > these details, so I'm looking for a de facto interpretation.
> >
> > If everybody (threaded application developers, kernel developers, *gcc
> > developers*) decide they'll use int's size and alignment, then it
> > might as well be written in stone.
>
> Modern processors really deal in cache line sizes when accessing
> memory.  The processors provide specific locking and/or cache control
> instructions to deal with this, and mutex implementations use them.
> There is ongoing work and research about how to best implement
> multi-threaded memory access.  I don't think it is yet possible to
> write anything in stone about this.

From what I've seen (at least on x86), cache line size only affects
performance, not semantics.  If two threads write to different parts
of a cache line they get "false sharing".  The writes themselves are
still only a small portion of each cache line.

Changing this would have far-reaching effects.  malloc, for instance,
would have to align blocks internally on 64-byte boundaries (or
whatever the local cache line size is).  In fact, the cache line size
varies from CPU to CPU, and even within a single CPU (L1 vs L2 vs L3).
Incorporating the cache line size into the semantics would effectively
create a new architecture (and the cache line size would have to be
permanently fixed for that architecture, preventing future changes
that might improve performance).

So I do still think that, for the purposes of C at least, it is set in
stone.  The most that might happen is for sizeof(void *), rather than
sizeof(int), to become the official size and alignment.

-- 
Adam Olsen, aka Rhamphoryncus
