Re: Question about unify over afr

On Thursday 28 August 2008 15:29:06 Łukasz Mierzwa wrote:
> On Thursday 28 August 2008 12:39:03 you wrote:
> > > On Thu, Aug 28, 2008 at 3:01 PM, Łukasz Mierzwa <l.mierzwa@xxxxxxxxx> wrote:
> > > On Thursday 28 August 2008 07:06:30 Krishna Srinivas wrote:
> > >> On Wed, Aug 27, 2008 at 10:55 PM, Łukasz Mierzwa <l.mierzwa@xxxxxxxxx> wrote:
> > >> > On Tuesday 26 August 2008 16:28:41 Łukasz Mierzwa wrote:
> > >> >> Hi,
> > >> >>
> > >> >> I'm testing glusterfs for small-file storage. First I set up a
> > >> >> single-disk gluster server, connected to it from another machine,
> > >> >> and served the files with nginx. That worked fine: performance was
> > >> >> good, on average about 20ms slower per request, which is
> > >> >> acceptable. Now I've set up unify over afr (2 afr groups with 3
> > >> >> servers each, unify and afr on the client side; the namespace
> > >> >> directory is on every server and, like everything else, afr'ed on
> > >> >> the client side), mounted on one of those 6 servers. After writing
> > >> >> ~200GB of files from a production server I ran some tests and
> > >> >> noticed that a simple ls on the mount point causes as many writes
> > >> >> as reads. This must be related to either unify or afr; I suspect
> > >> >> the writes go to the namespace, but I need to do more debugging.
> > >> >> It's very annoying that simple reads cause so many writes. All my
> > >> >> servers are in sync, so there should be no need for self-healing.
> > >> >> Before I start debugging I wanted to ask: is this normal? Should
> > >> >> afr or unify generate so many writes to the namespace, or maybe to
> > >> >> xattrs, during reads (storage is on ext3 mounted with user_xattr)?
> > >> >
> > >> > I tested it a little more today and found that if I have 1 or 2
> > >> > nodes in my afr group for the namespace, there are no writes at
> > >> > all while doing ls; if I add one or more nodes, the writes start.
> > >> > WTF?
> > >>
> > >> Do you mean that your NS is getting write() calls when you do "ls"?
> > >
> > > It seems so. I will split my NS and DATA bricks onto different disks
> > > today so I can be 100% sure. What I am sure of now is that I get as
> > > many writes as reads when I run "ls" and have more than 2 NS bricks
> > > in the AFR.
> >
> > Reads/writes should not happen when you do an 'ls'. Where are you
> > seeing the reads and writes being done? How are you observing them?
> > Are you strace'ing the glusterfsd?
> >
> > Krishna
>
> I first noticed them when I looked at the rrd graphs for those machines;
> I wanted to see whether AFR was balancing reads. I can see the writes in
> the rrd graphs generated from collectd, and in dstat, iotop and iostat:
> they are definitely happening. I first tried to find something in my
> config and forgot about the obvious step of strace'ing glusterfs. I'm
> attaching a log from one of the servers; I straced gluster-server on
> that machine. You can see a lot of mkdir/chown/chmod calls on files that
> are already there, even though all bricks were online while I was
> writing files through the gluster client, so no self-heal should be
> needed. I've also attached the client and server configs.

I've attached an strace log from gluster-server. This time I removed all
but the last two ns servers and straced the brick holding the ns; there
was no mkdir/chmod this time.
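For reference, the client-side layout described at the top of the thread (unify over two 3-way afr groups, with the namespace itself afr'ed across bricks) might look roughly like this in a GlusterFS 1.x client volfile. This is only a sketch: every volume name and the choice of the rr scheduler are assumptions, not taken from the configs that were attached to the original mail:

```
# Hypothetical sketch -- names and scheduler are assumptions.
# afr-group1, afr-group2: 3-way cluster/afr volumes over the data bricks
# (their definitions, and the ns1..ns3 protocol/client volumes pointing
# at the namespace directories, are not shown here).

volume afr-ns
  type cluster/afr
  subvolumes ns1 ns2 ns3
end-volume

volume unify0
  type cluster/unify
  option namespace afr-ns
  option scheduler rr      # round-robin; an assumption
  subvolumes afr-group1 afr-group2
end-volume
```

The reported behaviour (writes to the namespace appearing only once the namespace AFR has more than two subvolumes) would point at the afr-ns layer rather than at unify itself.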

-- 
Łukasz Mierzwa

Grono.net S.A.
 ul. Szturmowa 2a, 02-678 Warszawa
 District Court for the Capital City of Warsaw, XIII Commercial Division;
 KRS No. 0000292169, NIP: 929-173-90-15, Regon: 141197097, share capital:
 550,000.00 PLN
 http://grono.net/
 
 The content of this message is confidential and legally protected. It may
 be read only by its addressee; access by third parties is excluded. If
 you are not the addressee of this message, its dissemination, copying,
 distribution or any similar action is legally prohibited and may be
 punishable. If this message reached you by mistake, please return it to
 the sender and delete it.
