Re: About Ceph SSD and HDD strategy


I found this without much effort.
http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/
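
The short version of that post, as I read it, is to layer flashcache on top of the RBD device on the client side. A rough sketch only (device names are placeholders; see the post and the flashcache docs for details):

    # create a writeback flashcache device backed by the RBD image,
    # using an SSD partition as the cache (placeholder device names)
    flashcache_create -p back cached_rbd /dev/sdb1 /dev/rbd/rbd/myimage
    # the combined device then appears as /dev/mapper/cached_rbd
    mkfs.xfs /dev/mapper/cached_rbd
    mount /dev/mapper/cached_rbd /mnt/vmstore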


On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta <jason@xxxxxxxxxxxx> wrote:
I also would be interested in how bcache or flashcache would integrate. 


On Mon, Oct 7, 2013 at 11:34 AM, Martin Catudal <mcatudal@xxxxxxxxxx> wrote:
Thanks Mike,
     Kyle Bader also suggested that I use my large SSD (900 GB) as a cache
drive with "bcache" or "flashcache".
Since I already plan to use SSDs for my journals, I would certainly also
use the SSDs as cache drives.

I will have to read the documentation about "bcache" and its integration
with Ceph.
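
From what I have seen so far, the setup looks roughly like this (only a
sketch; device names are placeholders, and the OSD filesystem would then
be created on the resulting bcache device):

    # format the HDD as a backing device and the SSD as a cache device
    make-bcache -B /dev/sdd
    make-bcache -C /dev/sdb1
    # register both devices (recent kernels/udev do this automatically)
    echo /dev/sdd  > /sys/fs/bcache/register
    echo /dev/sdb1 > /sys/fs/bcache/register
    # attach the backing device to the cache set, then put the OSD on /dev/bcache0
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach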

Martin

Martin Catudal
Responsable TIC
Ressources Metanor Inc
Ligne directe: (819) 218-2708

On 2013-10-07 11:25, Mike Lowe wrote:
> Based on my experience, I think you are grossly underestimating the expense and frequency of flushes issued from your VMs.  This will be especially bad if you aren't using the async flush from QEMU >= 1.4.2, as the VM is suspended while QEMU waits for the flush to finish.  I think your best course of action, until the caching pool work is completed (if I remember correctly, this is currently in development), is either to use the SSDs as large caches with bcache or to use them for journal devices.  I'm sure there are other, more informed opinions out there on the best use of SSDs in a Ceph cluster, and hopefully they will chime in.
>
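> A minimal ceph.conf sketch of the journal-on-SSD option (partition paths are placeholders; each OSD points its journal at its own SSD partition):
>
>     [osd.0]
>             osd journal = /dev/disk/by-partlabel/journal-osd0   # SSD partition (placeholder path)
>     [osd.1]
>             osd journal = /dev/disk/by-partlabel/journal-osd1
>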
> On Oct 6, 2013, at 9:23 PM, Martin Catudal <mcatudal@xxxxxxxxxx> wrote:
>
>> Hi Guys,
>>      I have read all of the Ceph documentation more than twice. I'm now
>> very comfortable with every aspect of Ceph except the strategy for
>> using my SSDs and HDDs.
>>
>> Here is my thinking:
>>
>> I see two approaches for using my fast SSDs (900 GB) as primary
>> storage and my large but slower HDDs (4 TB) for replicas.
>>
>> FIRST APPROACH
>> 1. I can use PGs with write caching enabled as my primary storage,
>> which goes on my SSDs, and let the replicas go to my 7200 RPM HDDs.
>>       With write caching enabled, I will gain performance for my VM
>> user machines in a VDI environment, since the Ceph client will not have
>> to wait for the replica write confirmation from the slower HDDs.
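>>
>> A sketch of what enabling the RBD client writeback cache could look
>> like in ceph.conf, assuming that is the write cache meant here (values
>> are placeholders):
>>
>>        [client]
>>                rbd cache = true
>>                rbd cache size = 67108864            # 64 MB per client, in bytes
>>                rbd cache max dirty = 50331648       # dirty bytes allowed before writeback
>>                rbd cache writethrough until flush = true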
>>
>> SECOND APPROACH
>> 2. Use pool hierarchies: have one pool take its primary copy from the
>> SSDs and let the replicas go to a second hierarchy named "platter" for
>> HDD replication.
>>      As explained in the Ceph documentation:
>>      rule ssd-primary {
>>                ruleset 4
>>                type replicated
>>                min_size 5
>>                max_size 10
>>                step take ssd
>>                step chooseleaf firstn 1 type host
>>                step emit
>>                step take platter
>>                step chooseleaf firstn -1 type host
>>                step emit
>>        }
>>
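>> To route a pool through that rule, the pool would then be pointed at
>> ruleset 4; a sketch (the pool name is a placeholder):
>>
>>        ceph osd pool set vm-images crush_ruleset 4
>>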
>> At this point, I cannot figure out which approach would be the most
>> advantageous.
>>
>> Your point of view would definitely help me.
>>
>> Sincerely,
>> Martin
>>
>> --
>> Martin Catudal
>> Responsable TIC
>> Ressources Metanor Inc
>> Ligne directe: (819) 218-2708



-- 
Jason Villalta



-- 
Jason Villalta
Co-founder
800.799.4407x1230 | www.RubixTechnology.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
