Zero byte versions in DHT volume

Dan,

volume perseus
   type protocol/client
   option transport-type tcp
   option remote-host perseus
   option remote-port 6996
   option remote-subvolume brick1
end-volume

volume romulus
   type protocol/client
   option transport-type tcp
   option remote-host romulus
   option remote-port 6996
   option remote-subvolume brick1
end-volume

Both of these client volumes point to remote-subvolume brick1, which is why distribute sees the same file twice.
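Those zero-byte ---------T entries are DHT link files: stub files DHT leaves on the subvolume a name hashes to when the data actually lives elsewhere. A minimal sketch for spotting them on a brick (using a throwaway directory with a simulated link file in place of a real brick path like /local):

```shell
# Stand-in for a brick directory; on a real server this would be
# the exported path, e.g. /local or /local2/glusterfs.
BRICK=$(mktemp -d)

# Simulate a DHT link file: zero bytes, mode ---------T
# (sticky bit only, no read/write/execute permissions).
touch "$BRICK/d80bao.daj0710"
chmod 1000 "$BRICK/d80bao.daj0710"

# Link files are zero-length with exactly the sticky bit set,
# so they can be listed directly on each brick:
find "$BRICK" -type f -perm 1000 -size 0
```

On a real brick, `getfattr -n trusted.glusterfs.dht.linkto <file>` should additionally show which subvolume the link file points at (assuming a GlusterFS version that sets that xattr, as 2.x/3.x do).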

On Tue, Jun 8, 2010 at 4:40 AM, Dan Bretherton <dab at mail.nerc-essc.ac.uk> wrote:
> Dear All,
>
> I have found a large number of files like the one below in a DHT volume.
>
> ---------T 1 nobody nobody 0 2010-05-01 18:34 d80bao.daj0710
>
> For those I have checked, a real file (i.e. non-zero size and normal
> permissions and ownership) exists on one brick and its zero byte counterpart
> is on one of the other bricks. I originally found these files by looking in
> the individual server bricks, but I discovered that I can also find them
> using the "find" command in the glusterfs DHT volume itself. However,
> listing a directory containing these files with "ls" does not list the zero
> byte versions, but instead two copies of the normal versions are shown.
> Does anybody have an idea what is going on? This strange behaviour is
> clearly going to confuse some users, and there are sometimes also long
> delays when listing the affected directories. I am using GlusterFS version
> 3.0.3 now but I also noticed this behaviour in 2.0.8. My client and server
> volume files are shown below.
>
> Regards,
> Dan Bretherton.
>
> #
> ##
> ### One of the server vol files ###
> ##
> #
> volume posix1
>    type storage/posix
>    option directory /local
> end-volume
>
> volume posix2
>    type storage/posix
>    option directory /local2/glusterfs
> end-volume
>
> volume posix3
>    type storage/posix
>    option directory /local3/glusterfs
> end-volume
>
> volume locks1
>    type features/locks
>    subvolumes posix1
> end-volume
>
> volume locks2
>    type features/locks
>    subvolumes posix2
> end-volume
>
> volume locks3
>    type features/locks
>    subvolumes posix3
> end-volume
>
> volume io-cache1
>    type performance/io-cache
>    subvolumes locks1
> end-volume
>
> volume io-cache2
>    type performance/io-cache
>    subvolumes locks2
> end-volume
>
> volume io-cache3
>    type performance/io-cache
>    subvolumes locks3
> end-volume
>
> volume writebehind1
>    type performance/write-behind
>    subvolumes io-cache1
> end-volume
>
> volume writebehind2
>    type performance/write-behind
>    subvolumes io-cache2
> end-volume
>
> volume writebehind3
>    type performance/write-behind
>    subvolumes io-cache3
> end-volume
>
> volume brick1
>    type performance/io-threads
>    subvolumes writebehind1
> end-volume
>
> volume brick2
>    type performance/io-threads
>    subvolumes writebehind2
> end-volume
>
> volume brick3
>    type performance/io-threads
>    subvolumes writebehind3
> end-volume
>
> volume server
>    type protocol/server
>    option transport-type tcp
>    option auth.addr.brick1.allow *
>    option auth.addr.brick2.allow *
>    option auth.addr.brick3.allow *
>    option listen-port 6996
>    subvolumes brick1 brick2 brick3
> end-volume
>
> #
> ##
> ### Client vol file ###
> ##
> #
> volume remus
>    type protocol/client
>    option transport-type tcp
>    option remote-host remus
>    option remote-port 6996
>    option remote-subvolume brick3
> end-volume
>
> volume perseus
>    type protocol/client
>    option transport-type tcp
>    option remote-host perseus
>    option remote-port 6996
>    option remote-subvolume brick1
> end-volume
>
> volume romulus
>    type protocol/client
>    option transport-type tcp
>    option remote-host romulus
>    option remote-port 6996
>    option remote-subvolume brick1
> end-volume
>
> volume distribute
>    type cluster/distribute
>    option min-free-disk 20%
>    #option lookup-unhashed yes
>    subvolumes remus perseus romulus
> end-volume
>
> volume writebehind
>    type performance/write-behind
>    subvolumes distribute
> end-volume
>
> volume io-threads
>    type performance/io-threads
>    subvolumes writebehind
> end-volume
>
> volume io-cache
>    type performance/io-cache
>    option cache-size 512MB
>    subvolumes io-threads
> end-volume
>
> volume main
>    type performance/stat-prefetch
>    subvolumes io-cache
> end-volume
>
>
> --
> Mr. D.A. Bretherton
> Reading e-Science Centre
> Environmental Systems Science Centre
> Harry Pitt Building
> 3 Earley Gate
> University of Reading
> Reading, RG6 6AL
> UK
>
> Tel. +44 118 378 7722
> Fax: +44 118 378 6413
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
