Re: Best layout for SSD & SAS OSDs

I wouldn't advise upgrading yet if this cluster is going into production. I think several people got bitten last time round when they upgraded to the pre-Hammer development releases.

Here is a good example of how to create separate roots for SSDs and HDDs:

http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
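
Roughly, what that page describes ends up looking like this in the decompiled crushmap. This is only a sketch to show the shape of it -- the host names, bucket IDs and weights below are placeholders for illustration, not your actual layout:

    # Decompile, edit, recompile and inject the map:
    #   ceph osd getcrushmap -o map.bin
    #   crushtool -d map.bin -o map.txt
    #   (edit map.txt)
    #   crushtool -c map.txt -o map.new && ceph osd setcrushmap -i map.new

    host osd1-ssd {            # "virtual" host holding only osd1's SSD OSDs
            id -6              # pick unused negative bucket IDs
            alg straw
            hash 0             # rjenkins1
            item osd.0 weight 0.720
            item osd.1 weight 0.720
    }
    # ...one such bucket per server, plus matching osdN-sas hosts for the SAS OSDs

    root ssd {
            id -5
            alg straw
            hash 0
            item osd1-ssd weight 1.440
            item osd2-ssd weight 1.440
    }

    rule ssd {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step chooseleaf firstn 0 type host
            step emit
    }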

These rulesets then let you pin pools to specific CRUSH roots.
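
For example, assuming the "ssd" rule above is ruleset 1 (the pool name and PG count here are just examples):

    ceph osd pool create ssd-pool 1024 1024
    ceph osd pool set ssd-pool crush_ruleset 1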

I highly recommend using the "osd crush location hook =" config directive to point at a script that automatically places the OSDs in the right part of the CRUSH map on startup.
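
In ceph.conf that looks like (the path is just an example):

    [osd]
    osd crush location hook = /usr/local/bin/ceph-crush-location

And the hook itself can be something along these lines -- only a rough sketch, assuming the default OSD data path and using the kernel's rotational flag to tell SSD from SAS, so adapt it to your environment. Ceph calls the hook with --cluster/--id/--type and expects a single line of key=value CRUSH location pairs on stdout:

    #!/bin/sh
    # Called as: ceph-crush-location --cluster <name> --id <osd-id> --type osd

    while [ $# -gt 0 ]; do
            case "$1" in
                    --id) shift; ID="$1" ;;
            esac
            shift
    done

    # Block device backing this OSD's data dir (default path assumed)
    DEV=$(df "/var/lib/ceph/osd/ceph-$ID" | awk 'NR==2 {print $1}')
    DISK=$(basename "$DEV" | sed 's/[0-9]*$//')

    # Non-rotational device => SSD root, otherwise the SAS root
    if [ "$(cat /sys/block/$DISK/queue/rotational 2>/dev/null)" = "0" ]; then
            echo "root=ssd host=$(hostname -s)-ssd"
    else
            echo "root=sas host=$(hostname -s)-sas"
    fi

The "-ssd"/"-sas" host names just need to match whatever split-host buckets you define in the crushmap, as in the example above.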

Nick

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> German Anders
> Sent: 04 September 2015 17:18
> To: Nick Fisk <nick@xxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  Best layout for SSD & SAS OSDs
> 
> Thanks a lot Nick. Regarding the power feeds, we only have two circuits for all
> the racks, so I'll create the "rack" bucket in the CRUSH map and split the OSD
> servers across the rack buckets. Regarding the SSD pools, I've installed the
> Hammer version and am wondering whether to upgrade to Infernalis v9.0.3 and
> set up the SSD cache, or stay on Hammer, build the SSD pools and maybe leave
> two 800GB SSDs (1.6TB per OSD server) for later use as a cache. Do you have a
> crushmap example for this type of config?
> Thanks a lot,
> Best regards,
> 
> 
> German
> 
> 2015-09-04 13:10 GMT-03:00 Nick Fisk <nick@xxxxxxxxxx>:
> Hi German,
> 
> Are the power feeds completely separate (i.e. 4 feeds in total), or does each
> rack just have both feeds? If it’s the latter I don’t see any benefit in including
> this in the crushmap and would just create a “rack” bucket. Also, assuming
> your servers have dual PSUs, that changes the power failure scenarios
> quite a bit as well.
> 
> Regarding the pools: unless you know your workload will easily fit into a
> cache pool with room to spare, I would suggest not going down that route
> for now. In many cases performance can actually end up being worse if you
> are doing a lot of promotions.
> 
> *However* I’ve been doing a bit of testing with the current master and
> there are a lot of improvements around cache tiering that are starting to
> make a massive difference to performance. If you can get by with just the
> SAS disks for now and make a more informed decision about cache
> tiering when Infernalis is released, that might be your best bet.
> 
> Otherwise you might be best off just using them as a basic SSD-only pool.
> 
> Nick
> 
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> German Anders
> Sent: 04 September 2015 16:30
> To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject:  Best layout for SSD & SAS OSDs
> 
> Hi cephers,
>    I have the following setup:
> 7x OSD servers with:
>     4x 800GB SSD Intel DC S3510 (OSD-SSD)
>     3x 120GB SSD Intel DC S3500 (journals)
>     5x 3TB SAS disks (OSD-SAS)
> The OSD servers are located in two separate racks with two power circuits
> each.
>    I would like to know the best way to implement this: use the 4x
> 800GB SSDs as an SSD-only pool, or use them as a cache pool? Or any other
> suggestion? Also, any advice on the CRUSH design?
> Thanks in advance,
> 
> 
> German
> 
> 






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



