Re: Question on cluster/nufa: No space left on device

I think distribute doesn't have any schedulers yet, so file placement
follows a hash function: each file name hashes to exactly one
subvolume, and the file is created there. If the node selected by the
hash doesn't have enough space, the write fails. This is inherent to
using a hash to decide where a file is stored.
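
For illustration, a minimal sketch of that placement scheme in Python
(hypothetical code, not GlusterFS source; the real DHT hashes the name
into a 32-bit range that is split across the subvolumes, but the effect
is the same):

 # Hypothetical sketch of hash-based placement (not GlusterFS code).
 # The target node is a pure function of the file name, so a full
 # node still "wins" the hash and the write fails with ENOSPC.
 import zlib

 nodes = ["sc0", "sc1", "sc2"]  # subvolumes

 def pick_node(filename):
     """Map a file name to exactly one node via a hash."""
     return nodes[zlib.crc32(filename.encode()) % len(nodes)]

 # Every create of "bla1" lands on the same node, no matter how
 # much free space the other nodes have:
 print(pick_node("bla1"))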
You can try unify if you want the kind of behaviour you were expecting.
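
For example, a unify volume with the ALU scheduler can steer new file
creates away from nearly-full subvolumes. A rough sketch from memory of
the old unify syntax (the option names, values, and the namespace
volume "scratch-ns" are assumptions to check against the docs; unify
needs a dedicated namespace subvolume that is not one of the data
subvolumes):

 volume scratch
  type cluster/unify
  option namespace scratch-ns  # dedicated namespace volume (assumed name)
  option scheduler alu         # adaptive least-usage scheduler
  # stop scheduling new files to nodes below this free-space limit
  option alu.limits.min-free-disk 5GB
  option alu.order disk-usage
  subvolumes sc0 sc1 sc2 [...]
 end-volume

As far as I remember, unify keeps the whole directory tree on the
namespace volume, so that export should live on a reliable disk.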

Filipe

On Mon, Feb 16, 2009 at 16:20, Fred Hucht <fred@xxxxxxxxxxxxxx> wrote:
> Hi,
>
> experimenting with the current glusterfs-2.0.0pre18 and cluster/nufa I ran
> into this: when the local filesystem is full, gluster returns
>
> dd: writing `bla1': No space left on device
>
> with log
>
> 2009-02-16 15:44:05 W [posix.c:1688:posix_writev] sc0-posix: writev failed:
> No space left on device
> 2009-02-16 15:44:05 E [fuse-bridge.c:1602:fuse_writev_cbk] glusterfs-fuse:
> 42908: WRITE => -1 (No space left on device)
>
> instead of sending the data to another node that still has plenty of
> space. Is this expected behavior?
>
> This is the local volume after the dd:
> # df -h /export/scratch
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/system-scratch
>                       50G   50G   20K 100% /export/scratch
>
> and this the glusterfs:
>
> # df -h /scratch
> Filesystem            Size  Used Avail Use% Mounted on
> glusterfs              18T  241G   17T   2% /scratch
>
> Note that on the local /export/scratch, 45GB is used outside of the gluster
> directory /export/scratch/DHT.
>
> server config:
> ---------------------------
> volume sc0-posix
>  type storage/posix
>  option directory /export/scratch/DHT
> end-volume
>
> volume sc0-locks
>  type features/posix-locks
>  subvolumes sc0-posix
> end-volume
>
> volume sc0-ioth
>  type performance/io-threads
>  option thread-count 4
>  subvolumes sc0-locks
> end-volume
>
> volume sc0
>  type performance/read-ahead
>  subvolumes sc0-ioth
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp/server
>  subvolumes  sc0
>  option auth.addr.sc0.allow 127.0.0.1,192.168.1.*
> end-volume
> ---------------------------
>
> client config:
> ---------------------------
> volume sc0
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 127.0.0.1
>  option remote-subvolume sc0
> end-volume
>
> volume sc1
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.1.1
>  option remote-subvolume sc1
> end-volume
>
> [...]
>
> volume sc86
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.1.86
>  option remote-subvolume sc86
> end-volume
>
> volume sc87
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.1.87
>  option remote-subvolume sc87
> end-volume
>
> volume scratch
> #  type cluster/distribute
>  type cluster/nufa
>  option local-volume-name sc0
>  subvolumes sc0 sc1 sc2 sc3 sc4 sc5 sc6 sc7 sc8 sc9 sc10 sc11 sc12 sc13 sc14
> sc15 sc16 sc17 sc18 sc19 sc20 sc21 sc22 sc23 sc24 sc25 sc26 sc27 sc28 sc29
> sc30 sc31 sc32 sc33 sc34 sc35 sc36 sc37 sc38 sc39 sc40 sc41 sc42 sc43 sc44
> sc45 sc46 sc47 sc48 sc49 sc50 sc51 sc52 sc53 sc54 sc55 sc56 sc57 sc58 sc59
> sc60 sc61 sc62 sc63 sc64 sc65 sc66 sc67 sc68 sc69 sc70 sc71 sc72 sc73 sc74
> sc75 sc76 sc77 sc78 sc79 sc80 sc81 sc82 sc83 sc84 sc85 sc86 sc87
> end-volume
>
> volume scratch-io-threads
>  type performance/io-threads
>  option thread-count 4
>  subvolumes scratch
> end-volume
>
> volume scratch-write-behind
>  type performance/write-behind
>  option block-size 128kB
>  option flush-behind off
>  subvolumes scratch-io-threads
> end-volume
>
> volume scratch-read-ahead
>  type performance/read-ahead
>  option page-size 128kB # unit in bytes
>  option page-count 2    # cache per file  = (page-count x page-size)
>  subvolumes scratch-write-behind
> end-volume
>
> volume scratch-io-cache
>  type performance/io-cache
>  option cache-size 64MB
>  option page-size 512kB
>  subvolumes scratch-read-ahead
> end-volume
> ---------------------------
>
> Any help is appreciated!
>
> Fred
>
> Dr. Fred Hucht <fred@xxxxxxxxxxxxxx>
> Institute for Theoretical Physics
> University of Duisburg-Essen, 47048 Duisburg, Germany
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



