Re: Question regarding tiering

> Subject:          Question regarding tiering
> Date:         Thu, 1 Oct 2015 17:48:41 +0530
> From:         Ameet Pyati <ameet.pyati@xxxxxxxxxxxx>
> To:         gluster-users@xxxxxxxxxxx
> 
> 
> 
> Hi,
> 
> I am trying to attach a cache tier to a normal distributed volume. I am
> seeing write failures when the cache brick becomes full. The following are
> the steps:
> 
> 
> >> 1. create volume using hdd brick
> 
> root@host:~/gluster/glusterfs# gluster volume create vol host:/data/brick1/hdd/
> volume create: vol: success: please start the volume to access data
> root@host:~/gluster/glusterfs# gluster volume start vol
> volume start: vol: success
> 
> >> 2. mount and write one file of size 1G


The tech preview version of tiering does not gracefully handle a full hot tier. When the feature is out of tech preview (later this fall?), a watermarking feature will exist. It will aggressively move data off the hot tier when its utilization crosses the watermark.

The watermark's value is expressed as a percentage of the hot tier's total storage. So if you set the watermark to 80%, then when the hot tier is 80% full, the system will begin aggressively moving data off the hot tier to the cold tier.
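
For example, once watermarking ships, the setting will most likely be exposed as an ordinary volume option. The option name below is only an assumption about the eventual interface, not something you can set in the tech preview:

    gluster volume set vol cluster.watermark-hi 80

With a setting like that, files would start being demoted from the SSD brick to the HDD brick as soon as the hot tier passed 80% utilization.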

There are some other mechanisms that are being explored to buttress watermarking:

- Take a statfs of the hot tier every X number of I/Os, so we discover the system is "in the red zone" sooner (a rough sketch of this check follows below).

- Check the return value of a file operation for "out of space" (ENOSPC), and redirect that file operation to the cold tier if this happens (ideal, but may be complex).

Together these ideas should eventually provide for a more resilient and responsive system.
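
To make the statfs idea a little more concrete, here is a minimal user-space sketch of the check, using the portable statvfs() wrapper. It is purely illustrative, not GlusterFS translator code; the interval, threshold, and brick path are made-up values:

    #include <stdio.h>
    #include <sys/statvfs.h>

    #define CHECK_EVERY_N_OPS 1000   /* "X": re-check utilization every N file operations */
    #define WATERMARK_PCT       80   /* "red zone" threshold, in percent */

    /* Return 1 if the filesystem holding 'path' is above the watermark. */
    static int in_red_zone(const char *path)
    {
        struct statvfs vfs;

        if (statvfs(path, &vfs) != 0)
            return 0;                /* cannot tell; assume we are fine */

        double used_pct = 100.0 *
            (double)(vfs.f_blocks - vfs.f_bfree) / (double)vfs.f_blocks;

        return used_pct >= WATERMARK_PCT;
    }

    int main(void)
    {
        unsigned long ops;

        /* Simulate a stream of file operations; only every Nth one
           pays the cost of a statvfs() call. */
        for (ops = 1; ops <= 5000; ops++) {
            if (ops % CHECK_EVERY_N_OPS != 0)
                continue;
            if (in_red_zone("/data/brick2/ssd")) {
                printf("hot tier above watermark after %lu ops: "
                       "start aggressive demotion\n", ops);
                return 0;
            }
        }
        return 0;
    }

Run against the SSD brick from the report below, something like this would flag the red zone as soon as the brick crossed 80% used; in the real implementation the same kind of test would live in the tiering code's I/O path and trigger demotion rather than print a message.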

> 
> root@host:~/gluster/glusterfs# mount -t glusterfs host:/vol /mnt
> root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file1 bs=1G count=1
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB) copied, 1.50069 s, 715 MB/s
> 
> root@host:~/gluster/glusterfs# du -sh /data/brick*
> 1.1G    /data/brick1
> 60K     /data/brick2
> 
> 
> >> 3. attach ssd brick as tier
> 
> root@host:~/gluster/glusterfs# gluster volume attach-tier vol host:/data/brick2/ssd/
> Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
> volume attach-tier: success
> volume rebalance: vol: success: Rebalance on vol has been started successfully. Use rebalance status command to check status of the rebalance process.
> ID: dea8d1b7-f0f4-4c17-94f5-ba0e263bc561
> 
> root@host:~/gluster/glusterfs# gluster volume rebalance vol tier status
> Node          Promoted files    Demoted files    Status
> ---------     ---------         ---------        ---------
> localhost     0                 0                in progress
> volume rebalance: vol: success
> 
> 
> >> 4. write data to fill up cache tier
> 
> root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file2 bs=1G count=9 oflag=direct
> 9+0 records in
> 9+0 records out
> 9663676416 bytes (9.7 GB) copied, 36.793 s, 263 MB/s
> root@host:~/gluster/glusterfs# du -sh /data/brick*
> 1.1G    /data/brick1
> 9.1G    /data/brick2
> root@host:~/gluster/glusterfs# gluster volume rebalance vol tier status
> Node          Promoted files    Demoted files    Status
> ---------     ---------         ---------        ---------
> localhost     0                 0                in progress
> volume rebalance: vol: success
> root@host:~/gluster/glusterfs# gluster volume rebalance vol status
> Node          Rebalanced-files    size      scanned    failures    skipped    status         run time in secs
> ---------     -----------         -------   --------   ---------   --------   ------------   ----------------
> localhost     0                   0Bytes    0          0           0          in progress    112.00
> volume rebalance: vol: success
> 
> root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file3 bs=1G count=5 oflag=direct
> dd: error writing '/mnt/file3': No space left on device
> dd: closing output file '/mnt/file3': No space left on device
> 
> root@host:~/gluster/glusterfs# du -sh /data/brick*
> 1.1G    /data/brick1
> 9.3G    /data/brick2
> 
> >>>> There is a lot of space free in the cold brick, but writes are failing...
> 
> root@vsan18:~/gluster/glusterfs# df -h
> <cut>
> /dev/sdb3       231G  1.1G  230G   1% /data/brick1
> /dev/ssd        9.4G  9.4G  104K 100% /data/brick2
> host:/vol       241G   11G  230G   5% /mnt
> 
> Please let me know if I am missing something.
> Is this behavior expected? Shouldn't the files be rebalanced?
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



