On 09/03/2015, at 22.44, Nick Fisk <nick@xxxxxxxxxx> wrote:

> Either option #1 or #2, depending on whether your data has hot spots or you
> need to use EC pools. I'm finding that the cache tier can actually slow
> stuff down, depending on how much data is in the cache tier vs on the
> slower tier.
>
> Writes will be about the same speed for both solutions; reads will be a lot
> faster using a cache tier if the data resides in it.

Of course, a large cache tier miss rate would be a 'hit' on perf :)

Assuming that RBD client/OS page caching does help read OPs to some degree,
though memory can't cache as much data as a larger SSD.

/Steffen

>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
>> Steffen Winther
>> Sent: 09 March 2015 20:47
>> To: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re: EC Pool and Cache Tier Tuning
>>
>> Nick Fisk <nick@...> writes:
>>
>>> My Ceph cluster comprises 4 nodes, each with the following:
>>> 10x 3TB WD Red Pro disks (7200rpm) - EC pool k=3 m=3
>>> 2x S3700 100GB SSDs (20k write IOPs) for HDD journals
>>> 1x S3700 400GB SSD (35k write IOPs) for cache tier - 3x replica
>>
>> If I have the following 4x node config:
>>
>> 2x S3700 200GB SSDs
>> 4x 4TB HDDs
>>
>> what config should I aim for to optimise RBD write/read OPs:
>>
>> 1x S3700 200GB SSD for 4x journals
>> 1x S3700 200GB SSD for cache tier
>> 4x 4TB HDD OSD disks
>>
>> or:
>>
>> 2x S3700 200GB SSDs for 2x journals each
>> 4x 4TB HDD OSD disks
>>
>> or:
>>
>> 2x S3700 200GB SSDs for cache tier
>> 4x 4TB HDD OSD disks
>>
>> /Steffen
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
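For anyone following along, the EC-pool-plus-cache-tier layout discussed above is wired up roughly like this. A sketch only: pool names, PG counts, the SSD CRUSH ruleset id, and the target_max_bytes value are all hypothetical and would need adjusting for the actual cluster; the ceph commands themselves are the standard tiering ones from this era.

```shell
# Hypothetical names/sizes throughout; PG counts are placeholders
# for a small 4-node cluster.

# EC profile matching the k=3 m=3 layout mentioned in the thread,
# and an EC pool backed by the HDDs:
ceph osd erasure-code-profile set ec33 k=3 m=3
ceph osd pool create ecpool 256 256 erasure ec33

# Replicated pool on the SSDs to act as the cache (assumes a CRUSH
# rule selecting the S3700 OSDs already exists; ruleset id 4 is made up):
ceph osd pool create cachepool 128 128
ceph osd pool set cachepool crush_ruleset 4

# Attach the cache tier in writeback mode and route client I/O through it:
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool

# Cap the cache so flushing/eviction kicks in before the SSDs fill;
# ~180 GB of a 200 GB S3700, as in option #1/#3 above:
ceph osd pool set cachepool target_max_bytes 180000000000
```

If the working set outgrows target_max_bytes, reads start missing the cache and promoting objects from the EC pool, which is exactly the "cache tier can actually slow stuff down" effect Nick describes.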