Performance optimization tips for Gluster 3.3? (small files / directory listings)

On Fri, Jun 08, 2012 at 02:23:57PM -0400, olav johansen wrote:
>    This is a single thread trying to process a sequential task, where the
>    latency really becomes a problem. With ls -aR I get similar speed:

That's interesting.

>    [@web1 files]# time ls -aR|wc -l
>    1968316
>    real    27m23.432s
>    user    0m5.523s
>    sys     0m35.369s
>    [@web1 files]# time ls -aR|wc -l
>    1968316
>    real    26m2.728s
>    user    0m5.529s
>    sys     0m33.779s

That's an average of 0.8ms per file, which isn't too bad if you're also
getting similar times with ls -laR.
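(That figure is just the run above divided out; a quick one-liner check, nothing Gluster-specific:)

```shell
# 27m23.432s of wall time over 1,968,316 entries:
awk 'BEGIN { printf "%.3f\n", (27*60 + 23.432) * 1000 / 1968316 }'
# prints 0.835 (milliseconds per entry)
```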

If you're getting much better figures with NFS then it may be down to
something like client-side caching, as you suggested.  To be sure, you may
need to look more directly at what's happening, e.g. with strace.
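A minimal sketch of that kind of check (/mnt/gluster is a hypothetical mount
point; substitute your own):

```shell
# Summarize which syscalls the recursive listing spends its time in:
strace -c -f ls -aR /mnt/gluster > /dev/null

# Or print the wall time spent in each individual call (-T); lstat()
# and getdents() dominate a recursive directory walk:
strace -T -e trace=lstat,getdents ls -aR /mnt/gluster > /dev/null 2> strace.log
tail strace.log
```

If the per-call times in the `-T` output are close to your network round-trip
time, the latency is in the protocol round trips rather than the disks.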

>    Don't get me wrong, Gluster rocks, but in our current case latency is
>    killing us, and I'm looking for help on solving this.
>    One idea I haven't had a chance to try in terms of latency is to split
>    the 6x1TB RAID 10 on each brick into 3x (2x1TB RAID 1); not sure if
>    Gluster can even do this.  (A1->B1, A2->B2, A3->B3 as one volume)

Sure it can do that - it's called a distributed replicated volume. It
doesn't care if the bricks are on the same node.  I very much doubt it will
make any difference in latency, but feel free to test.
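For what it's worth, a sketch of that layout as a volume-create command
(hostnames and brick paths here are hypothetical):

```shell
# Distribute-replicate volume: with "replica 2", each consecutive pair
# of bricks forms one replica set, so A1 mirrors B1, A2 mirrors B2,
# and A3 mirrors B3, with files distributed across the three sets.
gluster volume create gv0 replica 2 \
    serverA:/bricks/b1 serverB:/bricks/b1 \
    serverA:/bricks/b2 serverB:/bricks/b2 \
    serverA:/bricks/b3 serverB:/bricks/b3
gluster volume start gv0
```

The brick ordering matters: replica sets are formed from consecutive bricks
in the order you list them.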

If the latency is in the network then you could try 10GbE (but use SFP+
with fibre or direct-attach cables; don't use 10GBASE-T over CAT6, because
that has even higher latency than 1GbE), or InfiniBand.

Regards,

Brian.

