Re: Ceph cache tier and rbd volumes/SSD primary, HDD replica crush rule!

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Mihai Gheorghe
> Sent: 12 January 2016 14:25
> To: ceph-users@xxxxxxxxxxxxxx
> Subject:  Ceph cache tier and rbd volumes/SSD primary, HDD
> replica crush rule!
> 
> Hello,
> 
> I have a question about how the cache tier works with RBD volumes.
> 
> So I created a pool of SSDs for cache and a pool on HDDs for cold storage
> that acts as the backend for Cinder volumes. I create a volume in Cinder
> from an image and spawn an instance. The volume is created in the cache pool
> as expected and, as I understand it, it will be flushed to the cold storage
> after a period of inactivity or after the cache pool reaches 40% full.

The cache won't be flushed after inactivity; the cache agent only acts on how full the pool is (either number of objects or bytes).
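
For example, assuming a cache pool named "cache-pool" (the name and the values are only illustrative), the thresholds the agent works against are set per pool along these lines:

    ceph osd pool set cache-pool target_max_bytes 1099511627776    # size ceiling the agent measures against
    ceph osd pool set cache-pool target_max_objects 1000000        # object-count ceiling
    ceph osd pool set cache-pool cache_target_dirty_ratio 0.4      # start flushing dirty objects at 40% of the target
    ceph osd pool set cache-pool cache_target_full_ratio 0.8       # start evicting clean objects at 80% of the target

Note the ratios are relative to target_max_bytes/target_max_objects, so if neither target is set the agent won't flush or evict anything.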

> 
> Now, after the volume is flushed to the HDD and I make a read or write
> request in the guest OS, how does Ceph handle it? Does it promote the whole
> RBD volume from the cold storage to the cache pool, or only the chunk of it
> that the request from the guest OS touches?

The cache works on hot objects, so individual objects of the RBD image (normally 4MB each) will be promoted and demoted over time depending on access patterns.
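
The promotion side is driven by the hit set tracking configured on the cache pool. A minimal sketch, again with an illustrative pool name and values:

    ceph osd pool set cache-pool hit_set_type bloom               # track object hits with bloom filters
    ceph osd pool set cache-pool hit_set_count 1                  # number of hit sets to keep
    ceph osd pool set cache-pool hit_set_period 3600              # each hit set covers one hour
    ceph osd pool set cache-pool min_read_recency_for_promote 1   # how many recent hit sets an object must appear in before a read promotes it

So a read or write of a cold object promotes just that one RADOS object (4MB by default for RBD) into the cache pool, not the whole image.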

> 
> Also, is the replication in Ceph synchronous or asynchronous? If I set a
> CRUSH rule to use the SSD host as the primary and the HDD host for the
> replicas, would the writes and reads on the SSDs be slowed down by the
> replication on the mechanical drives?
> Would this configuration be viable? (I ask because I don't have enough SSDs
> to make a pool of size 3 on them.)

It's synchronous replication. If you have a very read-heavy workload, you can do what you suggest and make the SSD OSDs the primary copy for each PG; writes will still be limited to the speed of the spinning disks, but reads will be serviced from the SSDs. However, there is a risk that in degraded scenarios your performance could drop dramatically if more IO is diverted to the spinning disks.
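
If you do want to try it, the usual approach is a CRUSH rule that picks the first (primary) replica from an SSD root and the remaining replicas from an HDD root. A sketch, assuming your CRUSH map has separate roots named "ssd" and "hdd" (adjust the names and ruleset number to your map):

    rule ssd-primary {
            ruleset 5
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step chooseleaf firstn 1 type host
            step emit
            step take hdd
            step chooseleaf firstn -1 type host
            step emit
    }

With "firstn 1" the first OSD chosen (the primary) comes from the ssd root, and "firstn -1" fills the remaining replicas from the hdd root. Compile the map back with crushtool and point the pool at the rule with the pool's crush_ruleset setting.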

> 
> Thank you!

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


