Re: Question about unify over afr

Hi Lukas,

Which version of glusterfs are you using?
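(For example, the output of 'glusterfs --version' on one of the
servers would tell us.)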

Did you restart the glusterfsd server on 192.168.1.40 after
creating the directories? (To rule out the mkdirs contributing
to the write totals.)

So apparently just the find and ls are generating the writes?
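If it helps, a quick sanity check is to count write-family syscalls in
the strace log from a pure ls/find run; a minimal sketch, with the log
path as a placeholder for wherever your strace output landed:

  grep -cE '\b(write|pwrite64|writev)\(' /tmp/glusterfsd.strace

A non-zero count there during a read-only workload would confirm the
server process itself is issuing the writes.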

Thanks
Krishna

On Tue, Sep 16, 2008 at 1:55 AM, Łukasz Mierzwa <l.mierzwa@xxxxxxxxx> wrote:
> 1. I have 6 servers in a glusterfs cluster (192.168.1.40-45), each one running
> glusterfs-server with the config file that is in debug.tar.
> The data and namespace dirs were placed on different RAID arrays.
> 2. On 192.168.1.40 I started glusterfs-client.
> 3. Then I created ~200k dirs (no files); all servers were online while I was
> creating the dirs and there were no network problems.
> 4. I started the gluster client through strace with debugging enabled.
> 5. On 192.168.1.45 I attached strace to the '[glusterfs]' process (the server);
> see the sketch right after this list.
> 6. After that I ran 'ls -lh' (the first 3-4 minutes of the graphs)
> 7. and then 'find > /dev/null' on the client mount point (192.168.1.40) (the
> rest of the graphs).
> 8. After ~15 minutes I terminated the find.
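>
> (For reference, the attach step was roughly the following; the PID and
> the output path here are placeholders, not the exact values I used:
>
>   # attach to the running server process, follow forks, log with timestamps
>   strace -f -tt -p <server-pid> -o /tmp/glusterfsd.strace
> )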
>
> On 192.168.1.40 I checked /proc/diskstats:
> a) before any reads from gluster:
>   9    0 md0 89202 0 713592 0 10590 0 84720 0 0 0 0
>   9    1 md1 90853 0 726812 0 14580 0 116640 0 0 0 0
> b) after all reads:
>   9    0 md0 100430 0 803416 0 19256 0 154048 0 0 0 0
>   9    1 md1 102083 0 816652 0 26604 0 212832 0 0 0 0
>
> (154048 - 84720) * 512B = ~34MB written to the namespace array (vs ~44MB read,
> but some reads were cached from previous tests; I forgot to clear the cache)
> (212832 - 116640) * 512B = ~47MB written to the data array (vs ~44MB read, but
> some reads were cached from previous tests)
> All of this happened while doing a simple ls and find; no other data (logs or
> anything else; strace and gluster logs were written to tmpfs) was written to
> those arrays, they were used only by glusterfs. There were only dirs, so 34MB
> is not a small amount of data.
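>
> (The deltas above come straight from /proc/diskstats: with the standard
> layout, field 6 is sectors read and field 10 is sectors written, in
> 512-byte sectors. A minimal sketch of the same calculation:
>
>   # print total MB read/written so far for md1
>   awk '$3 == "md1" { printf "read %.0f MB, written %.0f MB\n", $6*512/1048576, $10*512/1048576 }' /proc/diskstats
> )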
> All arrays were formatted with ext3 and mounted with
> "rw,noatime,nodiratime,user_xattr,acl,commit=60".
> I checked whether running 'find' directly on my ext3 mount point would make
> any writes to the fs; I stopped the gluster server and client first.
> before:
>   9    1 md1 18714 0 149700 0 23931 0 191448 0 0 0 0
> after find (5 minutes later):
>   9    1 md1 294078 0 2352612 0 23931 0 191448 0 0 0 0
>
> That is ~1GB of reads ((2352612 - 149700) * 512B = ~1.05GB) and no writes at
> all (the write counters did not move), so gluster is the source of the writes
> to those arrays, not the fs itself.
>
> debug.tar with all logs, configs and rrd graphs:
> https://doc.grono.org/debug.tar
>
> --
> Łukasz Mierzwa
>
> Grono.net S.A.
>  ul. Szturmowa 2a, 02-678 Warszawa
>  District Court for the Capital City of Warsaw, XIII Commercial Division;
>  KRS No. 0000292169, NIP: 929-173-90-15, Regon: 141197097
>  Share capital: PLN 550,000.00
>  http://grono.net/
>
>  The content of this message is confidential and legally protected. Only
>  its addressee may read it; access by third parties is excluded. If you
>  are not the addressee of this message, disseminating, copying,
>  distributing it, or any similar action is legally prohibited and may be
>  punishable. If this message has reached you by mistake, please send it
>  back to the sender and delete it.
>
