Hi,
I am trying to attach a cache tier to a normal distributed volume, and I am seeing write failures once the cache brick becomes full. The steps are as follows:
>> 1. Create a volume using the HDD brick
root@host:~/gluster/glusterfs# gluster volume create vol host:/data/brick1/hdd/
volume create: vol: success: please start the volume to access data
root@host:~/gluster/glusterfs# gluster volume start vol
volume start: vol: success
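As a sanity check before tiering, the volume layout can be confirmed with gluster volume info vol; at this point it is a plain single-brick Distribute volume in Started state.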
>> 2. Mount the volume and write one 1G file
root@host:~/gluster/glusterfs# mount -t glusterfs host:/vol /mnt
root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file1 bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.50069 s, 715 MB/s
root@host:~/gluster/glusterfs# du -sh /data/brick*
1.1G /data/brick1
60K /data/brick2
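(File placement can also be checked directly on the brick; ls -lh /data/brick1/hdd/ should show file1 there, matching the du output above.)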
>> 3. Attach the SSD brick as the cache tier
root@host:~/gluster/glusterfs# gluster volume attach-tier vol host:/data/brick2/ssd/
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: vol: success: Rebalance on vol has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: dea8d1b7-f0f4-4c17-94f5-ba0e263bc561
root@host:~/gluster/glusterfs# gluster volume rebalance vol tier status
Node         Promoted files    Demoted files    Status
---------    --------------    -------------    -----------
localhost    0                 0                in progress
volume rebalance: vol: success
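Note: all tiering options are at their defaults here. If demotion out of the hot tier is supposed to be watermark-driven, my understanding from the tiering docs is that the relevant knobs would be along these lines (option names are my assumption for this release, please correct me if they don't apply):

gluster volume set vol cluster.tier-mode cache
gluster volume set vol cluster.watermark-hi 90
gluster volume set vol cluster.watermark-low 75
gluster volume set vol cluster.tier-demote-frequency 120

As I understand it, cluster.tier-mode cache makes demotion kick in based on the watermarks rather than only on the demote timer.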
>> 4. Write data to fill up the cache tier
root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file2 bs=1G count=9 oflag=direct
9+0 records in
9+0 records out
9663676416 bytes (9.7 GB) copied, 36.793 s, 263 MB/s
root@host:~/gluster/glusterfs# du -sh /data/brick*
1.1G /data/brick1
9.1G /data/brick2
root@host:~/gluster/glusterfs# gluster volume rebalance vol tier status
Node         Promoted files    Demoted files    Status
---------    --------------    -------------    -----------
localhost    0                 0                in progress
volume rebalance: vol: success
root@host:~/gluster/glusterfs# gluster volume rebalance vol status
Node         Rebalanced-files    size      scanned    failures    skipped    status         run time in secs
---------    ----------------    ------    -------    --------    -------    -----------    ----------------
localhost    0                   0Bytes    0          0           0          in progress    112.00
volume rebalance: vol: success
root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file3 bs=1G count=5 oflag=direct
dd: error writing ‘/mnt/file3’: No space left on device
dd: closing output file ‘/mnt/file3’: No space left on device
root@host:~/gluster/glusterfs# du -sh /data/brick*
1.1G /data/brick1
9.3G /data/brick2
>>>> There is plenty of free space on the cold brick, but writes are failing...
root@host:~/gluster/glusterfs# df -h
<cut>
/dev/sdb3 231G 1.1G 230G 1% /data/brick1
/dev/ssd 9.4G 9.4G 104K 100% /data/brick2
host:/vol 241G 11G 230G 5% /mnt
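I can attach the tier/rebalance logs if that helps; I assume promotion/demotion activity would show up there, e.g. with something like

grep -iE 'promot|demot' /var/log/glusterfs/*rebalance*.log

(the log file naming is a guess on my part).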
Please let me know if I am missing something.
Is this behavior expected? Shouldn't the files be rebalanced down to the cold tier?
Thanks,
Ameet