Re: Scaled down a bit.


 



On Wed, 28 Nov 2007, Anand Babu Periasamy wrote:

     MRI imaging.  The real programs suck in between 5 and 8 MB worth
of image, crunch on it for what seems like forever, and then write the
resulting image.  They do this for about 100 slices per scan.  We have a
300-node cluster that runs these things, which means the local private
gigabit net they're on and the servers are periodically swamped, to say
the least.  There's not a lot of client rereading going on, and I
discovered yesterday that caching on the server didn't really help.  I
tried read-ahead and it was statistically zero help; page-count was 2
and page-size was 128KB.  I tried a 512KB page size and for some reason
that apparently made matters worse.

     Based on what's below, I think io-threads might help in the
multi-node case.

     Any help appreciated.

io-threads on the client side improves performance when multiple files are read/written simultaneously by multiple applications or by a single multi-threaded application.
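
For example, something along these lines on the client side (just a sketch; the thread-count value is an assumption to tune, and it reuses the "client" subvolume name from your spec file) lets simultaneous reads/writes from several applications on a node proceed in parallel:

volume iothreads
  type performance/io-threads
  option thread-count 4     ### assumption: tune to the number of concurrent readers/writers
  subvolumes client         ### the protocol/client volume from your existing spec
end-volume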

As for the few ms difference between NFS and GlusterFS, there are some
more performance optimizations possible.  Booster will also help you.
We balance elegance and performance.  Our focus so far has been on
scalability and reliability, so it made sense for us to take advantage
of modern technologies like InfiniBand/10GigE RDMA, clustered I/O, and
high-level performance modules to show several times more performance
than NFS instead of worrying about little issues.  Avati is just testing
a new patch that will put FUSE on a separate thread on the client side.
This will improve responsiveness :)

If you tell us more about your application environment, we can suggest
ways to improve performance further.


Chris Johnson writes:

On Wed, 28 Nov 2007, Kevan Benson wrote:

      Ok, better.  It does work on the client side though.  It doesn't
seem to be too great on the server side for some reason.

      I just tried my simple test with read-ahead on the client side.
No difference.  Here's what I used.

volume readahead
  type performance/read-ahead
  option page-size 128kb    ### in bytes
  option page-count 2       ### memory cache size is page-count x page-size per file
  subvolumes client
end-volume
Maybe the page size or count needs to be bigger?
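
If it's the count rather than the size, a variant like this (untested; the numbers are just guesses) would widen the read-ahead window without going back to the 512KB page size that seemed to hurt:

volume readahead
  type performance/read-ahead
  option page-size 128kb    ### kept at 128KB since 512KB appeared to make things worse
  option page-count 4       ### read-ahead window = page-count x page-size per file
  subvolumes client
end-volume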

Krishna Srinivas wrote:
On Nov 28, 2007 11:11 PM, Chris Johnson <johnson@xxxxxxxxxxxxxxxxxxx> wrote:
On Wed, 28 Nov 2007, Kevan Benson wrote:
Chris Johnson wrote:
     I also tried the io-cache on the client side.  MAN does that
work.  I had a 256 MB cache defined.  A reread of my 24 MB file took 72
ms.  I don't think it even bothered the server much.  I need to
try that on the server.  It might help if a bunch of compute nodes
hammer on the same file at the same time.
Careful with io-cache and io-threads together; depending on where you define it (I think), the cache is per-thread.  So if you have 8 threads and a 256 MB
cache defined, be prepared for 2 GB of cache use...


No.  If you define one io-cache translator, there is only one cache.  All the threads will refer to the same io-cache translator with which they are associated.
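
To make that concrete, here is a rough sketch (the volume names and values are placeholders, not a tested recommendation) of io-cache stacked on top of io-threads.  cache-size is declared once on the io-cache volume, so it is not multiplied by the number of threads underneath:

volume iothreads
  type performance/io-threads
  option thread-count 8      ### placeholder thread count
  subvolumes client          ### placeholder: your protocol/client volume
end-volume

volume iocache
  type performance/io-cache
  option cache-size 256MB    ### one shared cache for the whole translator
  option page-size 128KB
  subvolumes iothreads
end-volume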

Ah. Is this newer? I thought I tried this a few months ago and saw a lot of memory usage. Maybe I just ASSumed. ;)

--

-Kevan Benson
-A-1 Networks





-------------------------------------------------------------------------------
Chris Johnson               |Internet: johnson@xxxxxxxxxxxxxxxxxxx
Systems Administrator       |Web:      http://www.nmr.mgh.harvard.edu/~johnson
NMR Center                  |Voice:    617.726.0949
Mass. General Hospital      |FAX:      617.726.7422
149 (2301) 13th Street |What the country needs is dirtier fingernails and
Charlestown, MA., 02129 USA |cleaner minds.  Will Rogers

-------------------------------------------------------------------------------


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel

--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://ab.freeshell.org]
The GNU Operating System [http://www.gnu.org]





-------------------------------------------------------------------------------
Chris Johnson               |Internet: johnson@xxxxxxxxxxxxxxxxxxx
Systems Administrator       |Web:      http://www.nmr.mgh.harvard.edu/~johnson
NMR Center                  |Voice:    617.726.0949
Mass. General Hospital      |FAX:      617.726.7422
149 (2301) 13th Street      |"Survival is insufficient"
Charlestown, MA., 02129 USA |             Seven of nine.
-------------------------------------------------------------------------------



