Re: EC Pool and Cache Tier Tuning

Either option #1 or #2, depending on whether your data has hot spots or you need to use EC pools. I'm finding that a cache tier can actually slow things down, depending on how much of the working set sits in the cache tier versus on the slower backing tier.

Writes will be about the same speed for both solutions; reads will be a lot faster with a cache tier if the data resides in it.
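For reference, the general shape of an EC pool with a replicated cache tier in front looks roughly like this. It's a sketch, not a tuned config: the pool names (cold-ec, hot-cache), PG counts, and the sizing/ratio values are placeholders you'd adjust for your own cluster.

```shell
# EC profile matching the k=3 m=3 setup discussed above
ceph osd erasure-code-profile set ec33 k=3 m=3
ceph osd pool create cold-ec 256 256 erasure ec33

# Replicated pool on the SSDs to act as the cache tier
ceph osd pool create hot-cache 128 128 replicated

# Attach the cache pool in front of the EC pool in writeback mode
ceph osd tier add cold-ec hot-cache
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay cold-ec hot-cache

# Cache sizing/flush behaviour -- these numbers are examples only
ceph osd pool set hot-cache hit_set_type bloom
ceph osd pool set hot-cache target_max_bytes 300000000000   # ~300 GB
ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
ceph osd pool set hot-cache cache_target_full_ratio 0.8
```

Whether this helps or hurts comes down to the point above: if the hot working set fits under target_max_bytes you get fast reads from the SSD tier, but if reads keep missing and forcing promotions/evictions, the tier can easily make things slower than the bare EC pool.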

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Steffen Winther
> Sent: 09 March 2015 20:47
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  EC Pool and Cache Tier Tuning
> 
> Nick Fisk <nick@...> writes:
> 
> > My Ceph cluster comprises 4 nodes, each with the following:
> >   10x 3TB WD Red Pro (7200rpm) disks - EC pool k=3 m=3
> >   2x S3700 100GB SSDs (20k write IOPS) for HDD journals
> >   1x S3700 400GB SSD (35k write IOPS) for cache tier - 3x replica
> If I have following 4x node config:
> 
>   2x S3700 200GB SSD's
>   4x 4TB HDDs
> 
> What config to aim for to optimize RBD write/read OPs:
> 
>   1x S3700 200GB SSD for 4x journals
>   1x S3700 200GB cache tier
>   4x 4TB HDD OSD disk
> 
> or:
> 
>   2x S3700 200GB SSD for 2x journals
>   4x 4TB HDD OSD disk
> 
> or:
> 
>   2x S3700 200GB cache tier
>   4x 4TB HDD OSD disk
> 
> /Steffen
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



