Distribute translator and differing brick sizes

On Tuesday 27 January 2009 13:38:14 Keith Freedman wrote:
> At 10:34 PM 1/26/2009, Andrew McGill wrote:
> >On Tuesday 27 January 2009 01:27:42 Sean Davis wrote:
> > > If I am putting together several volumes of varying sizes using
> > > distribute, what type of load balancing should I expect?  I understand
> > > hashing and it sounds like if the disk fills, then it is not used, but
> > > can I use ALU scheduler to cut things off before the disk becomes full
> > > to allow for growth of directories and files?
> > >
> > > How are people approaching this?
> >
> >To implement artificial quotas, I've created multiple loopback filesystems
> >with unit sizes, and shared those with AFR.  This is far from optimal, but
> > it does mean that I can be sure that the volumes are all the same size.
> > Conceivably, they can be enlarged if they run out of space.  LVM would be
> > just as good/bad, but I don't want to take the machines down to resize
> > partitions.
>
> aren't quotas enforced on the server side if they're enabled there?
>
> I'm not using quotas so I can't test this for you, but logically this
> seems like it would work, since gluster ultimately is bound by the
> rules of its underlying filesystems.
>
> I'm just not sure how it would behave if someone tries to append or
> write a file that would cause an over-quota problem and, in the case
> of HA/AFR what would happen if quotas were turned on on one server
> and not on another?
>
> hopefully someone will clarify so we'll both know :)
Just to be clear, I don't want per-user quotas, but an artificial limit on the 
amount of disk space that a glusterfs brick will use -- i.e. a false "disk 
free" for the glusterfs files.  

In my case, I can spare 200 MB on a few machines for glusterfs files.  There is 
250 MB of space free.  I don't want to use the last 50 MB for glusterfs but for 
important mail spools.  Since the AFR code only considers the disk free on the 
first brick, I suppose I could implement what I'm doing by having a loopback 
filesystem on the first brick, and using plain files as part of the regular 
filesystem on the others.... (now that's not such a bad idea, you know)....
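For anyone wanting to try the loopback-brick trick, a minimal sketch looks something like the following.  The paths, sizes, and filesystem type are all illustrative, not anything glusterfs-specific -- the point is just that the brick's backing file has a fixed size, so "disk free" inside it can never exceed what you allotted:

```shell
# Sketch: cap a glusterfs brick at 200 MB by backing it with a
# fixed-size loopback image (paths and sizes are examples only).

# Create a 200 MB backing file.
dd if=/dev/zero of=/tmp/brick0.img bs=1M count=200 2>/dev/null

# Put a filesystem on it (-F allows mkfs to run on a regular file,
# -q suppresses the usual chatter).
mkfs.ext3 -F -q /tmp/brick0.img

# Mounting needs root; once mounted, df on the mount point reports
# ~200 MB total no matter how much the host disk has free:
#   mkdir -p /mnt/brick0
#   mount -o loop /tmp/brick0.img /mnt/brick0
#   df -h /mnt/brick0
```

Growing the brick later would mean appending to the image with dd and running resize2fs, which is roughly the "conceivably, they can be enlarged" part above.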

&:-)
