I've been running a test on a 2TB glusterfs volume. I've filled almost
700GB in total, but one brick got full (0 bytes Available). I had the
alu.limits.min-free-disk option set to 6GB, so I expected the volume to
stop filling that brick and leave 6GB *Available*. The drive is a 500GB
drive and, formatted ext3, shows 459GB. df says 440GB is used, but the
remaining 19GB is of no use to me, so I assumed the 6GB option meant I
would have 6GB available for use. Did I have to specify 25GB for
min-free-disk (the 19GB I can't use plus the 6GB I want) to end up with
6GB of Available space?
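For what it's worth, my guess (unconfirmed) is that the 19GB I can't
touch is ext3's root-reserved space, which defaults to 5% of the
filesystem. If that's right, something like this should show it on the
full brick, and tune2fs could shrink the reservation:

tune2fs -l /dev/hdd1 | grep -i 'reserved block'   # reserved block count * block size should be near that 19GB
tune2fs -m 1 /dev/hdd1                            # optionally drop the root reservation to 1%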
I'm not sure whether this is a bug or whether I'm using the feature
incorrectly. As a general rule, I like to keep at least 10% free on each
disk before calling it critical, but there doesn't seem to be a way to
specify min-free-disk as a percentage of the disk's total size.
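What I'd really like to be able to write is something like the line
below; as far as I can tell the ALU scheduler doesn't accept a
percentage here, so treat it as a wish rather than working config:

# hypothetical: keep 10% of each brick's total size free
option alu.limits.min-free-disk 10%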
root@device128:/BRICK# df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/hdd1             459G  440G     0 100% /mnt/hdd1
glusterfs             1.9T  671G  1.1T  38% /glusterfs
volume bricks
type cluster/unify
option namespace glfsd129-500GB-IDEps-ns
subvolumes glfsd49-40GB-118IDEss glfsd48-750GB-101SATAp0 glfsd115-250GB glfsd128-500GB-128IDEss glfsd129-500GB-IDEps
# ALU
option scheduler alu
option alu.limits.min-free-disk 6GB
option alu.limits.max-open-files 10000
option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
option alu.disk-usage.entry-threshold 2GB
option alu.disk-usage.exit-threshold 60MB
option alu.open-files-usage.entry-threshold 1024
option alu.open-files-usage.exit-threshold 32
option alu.stat-refresh.interval 10sec
end-volume