Re: Semaphores in multiprocessor systems.

On Thu, Jul 17, 2003 at 01:51:55PM +0200, Magnus Myrefors wrote:
> 
> ----- Original Message ----- 
> From: "Jan Hudec" <bulb@ucw.cz>
> To: "Magnus Myrefors" <tkv764x@tninet.se>
> Cc: <kernelnewbies@nl.linux.org>
> Sent: Tuesday, July 15, 2003 8:28 PM
> Subject: Re: Semaphores in multiprocessor systems.
> 
> 
> > On Tue, Jul 15, 2003 at 03:08:16PM +0200, Magnus Myrefors wrote:
> > > I wonder if it is considered to be a multiprocessor system in Linux if
> > > you install Linux on two PCs and connect them with Ethernet or a bus?
> >
> > No, it's called a cluster. In a wider sense it is a multiprocessor
> > system, but I suppose you wanted to say "SMP system", which it is not.
> >
> > An SMP (Symmetric MultiProcessing) system is a system that contains several
> > processors connected to a common bus and sharing a single memory unit.
> >
> > Note: You _can't_ connect two computers with a local bus.
> >
> > > If it is, how do you synchronize access, using semaphores, to a memory
> > > unit also connected to the two PCs?
> >
> > They don't have a common memory unit. The only way processes can
> > communicate is by sending each other network packets.
> >
> > > Does this type of programming lie within linux?
> >
> > No, Linux is not a distributed system.
> >
> > > If it isn't, I suppose you could solve it with something
> > > like this:
> >
> > >    I guess you cannot have the code for the semaphore compiled on the
> > >    memory unit since the two PCs may have different hardware
> > >    architectures. So instead I believe you can have a memory address
> > >    on the memory unit to protect some code area. The address is
> > >    initially set to 1, decreased when one PC wants to access the
> > >    code area, and set back to 1 when finished.
> >
> > Not that I really know what you want to do, but I really fear it won't
> > work for the case you want.
> >
> > This will work for a *LOCAL* lock implementation:
> > - You have a shared variable initialized to 1.
> > - Process wanting to lock does:
> >     1 Atomic dec
> >     2 Check if it's zero
> >     3 If it is zero: locked ok.
> >     4 Else atomic inc and goto 1.
> > - Unlocking is just atomic inc.
> > The problem is with the "atomic" part - you can't get that on a distributed system.
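
For illustration, here is a minimal sketch of that local lock, assuming
C11 atomics and a genuinely shared variable; the names lock_var, lock()
and unlock() are made up for the example, not any existing kernel API:

    #include <stdatomic.h>

    static atomic_int lock_var = 1;      /* shared variable, initialized to 1 */

    static void lock(void)
    {
        for (;;) {
            /* 1: atomic dec; 2+3: if it became zero, we hold the lock */
            if (atomic_fetch_sub(&lock_var, 1) == 1)
                return;
            /* 4: else atomic inc and try again */
            atomic_fetch_add(&lock_var, 1);
        }
    }

    static void unlock(void)
    {
        atomic_fetch_add(&lock_var, 1);  /* unlocking is just atomic inc */
    }

This only works because the fetch-and-sub/fetch-and-add are atomic with
respect to every CPU that can see lock_var - which is exactly the
property two PCs on a network do not have.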
> >
> > So for a distributed system you need some distributed locking mechanism.
> > There is one implemented in any library for distributed computations.
> > Probably the best such library is LAM/MPI.
> >
> 
>   Continuing on the local approach:
>     (In the case when you are about to send data between pc1 and pc2
>     using a bus and don't use regular packets)

You are not going to. It can't be done with normal computers.

The local bus is the one to which the CPUs and memory are attached. You
can't connect these. You might be able to connect the peripheral buses
(like SCSI ones), but it would behave like a network as far as
multiprocessing is concerned.

>     Then if you use a single test to check whether it's ok for the
>     receiving PC to start reading the data protected by the "spin
>     lock" type of semaphore, it would be possible to lose data if the
>     sending PC works faster than the receiving one.  To avoid that you
>     could number the chunks of data in an ascending manner and perhaps
>     use some of the bandwidth of the bus to exchange information
>     about the current chunk number.  You may also want to use two
>     variables to check: one that is set to 1 (by the sender) if the
>     sender is accessing the data to be transmitted and 0 if the data
>     is available, and the other set to 1 (by the receiver) if the
>     receiver is accessing the data and to 0 if the data is available.

That's how the producer/consumer problem is solved.

However, you are missing several preconditions. Primarily, you don't have
a shared variable.

>        So the receiving PC does this:
>        (*) Check if it's ok to read the data (by checking the variable
>            "changing_data") AND compare "chunk_number_read" with
>            "current_chunk_number" (they should not be equal).
>               If the checks are ok:
>                 Set the variable "reading_data" to 1.
>                 Read the data.
>                 Increase the variable "chunk_number_read".
>                 Set the variable "reading_data" back to 0.
>               If they are not:
>                 Do nothing (wait and check (*) again).
> 
>        The sending PC does this:
>        (**) Check the variable "reading_data" AND compare "chunk_number_read"
>             with the sender's "current_chunk_number" ("current_chunk_number"
>             should be one over "chunk_number_read").
>               If the checks are ok:
>                 Set the variable "changing_data" to 1.
>                 Change the data.
>                 Set "changing_data" back to 0.
>               If they are not:
>                 Do nothing (wait and check (**) again).
> 
> 
>   Magnus

This is a correct algorithm if you have an implementation of shared memory
with synchronization variables. Go and get one ;-).
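
For completeness, here is a minimal sketch of what that handshake can
look like once the two sides really do share memory - two threads in one
process, with C11 atomics playing the roles of the chunk counters
("written" stands in for current_chunk_number, "consumed" for
chunk_number_read; all names are made up for the example):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define CHUNKS 5

    static int data;                     /* the buffer being handed over */
    static atomic_int written  = 0;      /* number of the last chunk written */
    static atomic_int consumed = 0;      /* number of the last chunk read */

    static void *sender(void *arg)
    {
        (void)arg;
        for (int i = 1; i <= CHUNKS; i++) {
            /* wait until the receiver has consumed the previous chunk */
            while (atomic_load(&consumed) != atomic_load(&written))
                ;                        /* spin */
            data = i * 10;               /* "change the data" */
            atomic_store(&written, i);   /* publish chunk number i */
        }
        return NULL;
    }

    static void *receiver(void *arg)
    {
        (void)arg;
        for (int i = 1; i <= CHUNKS; i++) {
            /* wait until a new chunk has been published */
            while (atomic_load(&written) == atomic_load(&consumed))
                ;                        /* spin */
            printf("chunk %d: %d\n", i, data);
            atomic_store(&consumed, i);  /* mark chunk i as read */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t s, r;
        pthread_create(&s, NULL, sender, NULL);
        pthread_create(&r, NULL, receiver, NULL);
        pthread_join(s, NULL);
        pthread_join(r, NULL);
        return 0;
    }

The same handshake works between two processes if the counters and the
buffer live in a shared memory segment - and it is precisely that shared
segment which two PCs connected only by Ethernet do not have.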

-------------------------------------------------------------------------------
						 Jan 'Bulb' Hudec <bulb@ucw.cz>
--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive:       http://mail.nl.linux.org/kernelnewbies/
FAQ:           http://kernelnewbies.org/faq/


