Performance Issue: GNBD Server Multithreaded per client / Async IO

I read the message from Benjamin Marzinski (Red Hat staff) replying to
Raz Ben-Jehuda in the "Reduced performance question" thread on this list,
and I have some comments about it.
Benjamin Marzinski states that the gnbd server uses only one thread per
client, and that scenario has the performance problems described in his
message. I believe that multithreading the server/client communication
would give better performance when several processes access the gnbd
device simultaneously, and would make better use of the bandwidth
available on the network interconnect. Is there any plan, or any
possibility, of implementing this?
And what about async IO? Is it planned on the gnbd roadmap? Is there a
public roadmap for Cluster Suite at all?
I believe that although gnbd and iSCSI share the same goal of giving
access to a block device over the network, they have different design
concepts and different places where each is the better fit.
Is Red Hat planning to replace gnbd with iSCSI? If not, I believe that
improving gnbd performance is critical and needs to be done.

my 2 cents.

Leonardo Mello

On Wed, 2006-01-04 at 15:36 -0600, Benjamin Marzinski wrote: 
> On Mon, Dec 19, 2005 at 08:37:56AM +0200, Raz Ben-Jehuda (caro) wrote:
> >    2. What is the GNBD IO model?
> 
> the gnbd driver works just like any block device driver, except that instead
> of writing the data to a locally attached device, it sends the data to the
> server over the network, with a small (28 byte) header attached.  Once the
> server has written the data to the underlying storage, a reply header is sent
> back. For reads, only a header is sent to the server, and the server sends
> back the data with a reply header attached. 
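
(Just to make sure I am reading this right: each request is a small fixed
header, followed by the data in the write case, and each reply is a header,
followed by the data in the read case. The structs below are only my
illustration of the shape of that exchange; they are not the real 28 byte
gnbd header layout, whose fields I do not know.)

    #include <stdint.h>

    /* Illustration only -- NOT the real gnbd wire format.  Field names
     * and sizes are invented for the sake of the discussion. */
    enum { FAKE_READ = 0, FAKE_WRITE = 1 };

    struct fake_gnbd_request {
        uint32_t magic;    /* sanity check                              */
        uint32_t type;     /* FAKE_READ or FAKE_WRITE                   */
        uint64_t handle;   /* lets the client match replies to requests */
        uint64_t offset;   /* byte offset on the exported device        */
        uint32_t length;   /* number of bytes to read or write          */
    };                     /* for writes, `length` bytes of data follow */

    struct fake_gnbd_reply {
        uint32_t magic;
        uint32_t error;    /* 0 on success                              */
        uint64_t handle;
    };                     /* for reads, `length` bytes of data follow  */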
> 
> On the server, there is one thread for each client/device pair. This thread
> simply looks at the header, performs the necessary IO on the local device,
> and sends back a reply header, along with the data for read requests. If you
> have cached mode on, the server will go through the cache, and use readahead.
> If you don't, the server will use direct IO to the device.  At any rate, in
> order to guarantee consistency in case of a crash, the server will not send
> the reply until the data is on disk (the underlying device is opened O_SYNC).
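
(So the per-client thread is essentially a loop like the sketch below. This
is my own simplification, reusing the illustrative structs from above and
ignoring error handling, short reads/writes and request size validation; it
is not the actual gnbd server code.)

    #include <unistd.h>

    /* One thread per client/device pair: strictly one request in flight. */
    static void serve_client(int sock, int dev_fd /* opened with O_SYNC */)
    {
        struct fake_gnbd_request req;
        struct fake_gnbd_reply reply;
        char buf[64 * 1024];    /* real code would size/validate req.length */

        for (;;) {
            if (read(sock, &req, sizeof(req)) != sizeof(req))
                return;                        /* client went away */

            if (req.type == FAKE_WRITE) {
                read(sock, buf, req.length);   /* data follows the header */
                /* O_SYNC: pwrite() does not return until the data is on
                 * disk, so sending the reply implies stable storage. */
                pwrite(dev_fd, buf, req.length, req.offset);
            }

            reply.magic = req.magic;
            reply.handle = req.handle;
            reply.error = 0;
            write(sock, &reply, sizeof(reply));

            if (req.type == FAKE_READ) {
                pread(dev_fd, buf, req.length, req.offset);
                write(sock, buf, req.length);  /* data follows the reply */
            }
            /* Only here does the loop pick up the next request -- the
             * one-at-a-time behaviour described above. */
        }
    }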
> 
> Most of the slowness comes from the server. There is only a single thread
> per client/device pair, so you will not start on the next request until the
> current one completes to disk.  The best solution to this would probably
> be async IO.  That way, the thread could pass the IO to the underlying device
> as quickly as it comes in (up till it runs short of memory) and then reply
> to the client as that IO completes to disk. 
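
(One possible shape for that, just to illustrate the idea: queue the write
as soon as it arrives, and send the reply only when the completion comes
back. The sketch below uses POSIX AIO purely for illustration and reuses
the fake request struct from above; submit_write(), reap_completions() and
the send_reply() helper are all invented names, flow control and error
handling are omitted, and it assumes the device is still opened O_SYNC so
that a completion means the data is on disk. A real implementation might
use kernel AIO instead.)

    #include <aio.h>
    #include <errno.h>
    #include <stdlib.h>

    struct inflight {
        struct aiocb cb;
        struct fake_gnbd_request req;   /* illustrative header from above */
    };

    void send_reply(int sock, struct fake_gnbd_request *req);  /* assumed helper */

    /* Called for every incoming write: queue the IO and return at once
     * instead of waiting for it to reach the disk.  `data` must stay
     * allocated until the completion is reaped. */
    static struct inflight *submit_write(int dev_fd,
                                         struct fake_gnbd_request *req,
                                         void *data)
    {
        struct inflight *io = calloc(1, sizeof(*io));

        io->req = *req;
        io->cb.aio_fildes = dev_fd;
        io->cb.aio_buf    = data;
        io->cb.aio_nbytes = req->length;
        io->cb.aio_offset = req->offset;
        aio_write(&io->cb);
        return io;
    }

    /* Called from the main loop: for every IO that has finished, send
     * the reply header back to the client. */
    static void reap_completions(int sock, struct inflight **ios, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            if (ios[i] && aio_error(&ios[i]->cb) != EINPROGRESS) {
                aio_return(&ios[i]->cb);
                send_reply(sock, &ios[i]->req);
                free(ios[i]);
                ios[i] = NULL;
            }
        }
    }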
