Re: Question about unify over afr

On Thu, Aug 28, 2008 at 3:01 PM, Łukasz Mierzwa <l.mierzwa@xxxxxxxxx> wrote:
> On Thursday, 28 August 2008 07:06:30, Krishna Srinivas wrote:
>> On Wed, Aug 27, 2008 at 10:55 PM, Łukasz Mierzwa <l.mierzwa@xxxxxxxxx> wrote:
>> > On Tuesday, 26 August 2008 16:28:41, Łukasz Mierzwa wrote:
>> >> Hi,
>> >>
>> >> I'm testing glusterfs for small-file storage. First I set up a
>> >> single-disk gluster server, connected to it from another machine and
>> >> served those files with nginx. That worked fine and performance was
>> >> good: on average about +20ms slower per request, which is acceptable.
>> >> Now I've set up unify over afr (2 afr groups with 3 servers each,
>> >> unify and afr on the client side, a namespace dir on every server,
>> >> afr'ed on the client side like everything else), and this is mounted
>> >> on one of those 6 servers. After writing ~200GB of files from the
>> >> production server I started doing some tests and noticed that a
>> >> simple ls on that mount point causes as many writes as reads. This
>> >> has something to do with either unify or afr; I suspect the writes go
>> >> to the namespace, but I need to do more debugging. It's very annoying
>> >> that simple reads cause so many writes. All my servers are in sync,
>> >> so there should be no need for self-healing. Before I start debugging
>> >> I wanted to ask: is this normal? Should afr or unify generate so many
>> >> writes to the namespace, or maybe to xattrs, during reads (storage is
>> >> on ext3 with user_xattr on)?
>> >
>> > I tested it a little more today and found that if I have 1 or 2 nodes
>> > in the afr group for the namespace, there are no writes at all while
>> > doing ls; if I add one or more nodes, they start getting writes. WTF?
>>
>> Do you mean that your NS is getting write() calls when you do "ls"?
>
> It seems so. I will split my NS and DATA bricks onto different disks today
> so I can be 100% sure. What I am sure of now is that I get as many writes
> as reads when I do "ls" and have more than 2 NS bricks in AFR.
>
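
For reference, I am assuming your client volfile layers things roughly like
the sketch below (two AFR groups of three data bricks, with the namespace
AFR'd across all six servers). All host, brick and volume names here are
made up; only the translator layering matters:

    # one protocol/client volume per data brick
    # (srv2 .. srv6 follow the same pattern on server2 .. server6)
    volume srv1
      type protocol/client
      option transport-type tcp/client
      option remote-host server1
      option remote-subvolume brick
    end-volume

    # one protocol/client volume per namespace brick
    # (ns2 .. ns6 follow the same pattern)
    volume ns1
      type protocol/client
      option transport-type tcp/client
      option remote-host server1
      option remote-subvolume brick-ns
    end-volume

    # first replica group of data bricks
    volume afr-group1
      type cluster/afr
      subvolumes srv1 srv2 srv3
    end-volume

    # second replica group of data bricks
    volume afr-group2
      type cluster/afr
      subvolumes srv4 srv5 srv6
    end-volume

    # namespace replicated across all six servers
    volume afr-ns
      type cluster/afr
      subvolumes ns1 ns2 ns3 ns4 ns5 ns6
    end-volume

    # unify on top, using the replicated namespace
    volume unify0
      type cluster/unify
      option namespace afr-ns
      option scheduler rr
      subvolumes afr-group1 afr-group2
    end-volume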

Reads/writes should not happen when you do an 'ls'. Where are you seeing the
reads and writes being done? How are you observing them? Are you strace'ing
the glusterfsd?
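
If it helps, one rough way to confirm where the write() calls land is to
attach strace to glusterfsd on one of the namespace servers while you run
"ls" on the client; the PID lookup and output path below are just examples:

    # log only read/write-family syscalls from the running glusterfsd
    # (add pwrite/pread variants too if your posix translator uses them)
    strace -f -e trace=read,write,readv,writev \
        -p "$(pidof glusterfsd)" -o /tmp/glusterfsd.trace

    # afterwards, count the write calls seen during the ls
    grep -c 'write(' /tmp/glusterfsd.trace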

Krishna
