Re: Best layout for SSD & SAS OSDs

Hi German,

 

Are the power feeds completely separate (i.e. four feeds in total), or does each rack just have both feeds? If it’s the latter, I don’t see any benefit from including this in the crushmap and would just create a “rack” bucket. Also, assuming your servers have dual PSUs, that changes the power failure scenarios quite a bit as well.
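
For illustration, a minimal sketch of how rack buckets could be added with the CLI (the rack and host names here are made up, not from German’s cluster):

    # Create two rack buckets under the default root
    ceph osd crush add-bucket rack1 rack
    ceph osd crush add-bucket rack2 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move rack2 root=default
    # Move each OSD host under its rack, e.g.:
    ceph osd crush move osd-server1 rack=rack1
    ceph osd crush move osd-server5 rack=rack2
    # Rule that spreads replicas across racks
    # (note: with only two racks, a size-3 pool won't fully map with
    # this rule; size 2, or a custom rule picking 2 racks then hosts,
    # would be needed)
    ceph osd crush rule create-simple replicated-rack default rack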

 

In regards to the pools, unless you know your workload will easily fit into a cache pool with room to spare, I would suggest not going down that route for now. In many cases performance can actually end up being worse if the tier has to do a lot of promotions.
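
For context, this is roughly what setting up a writeback cache tier looks like; the pool names and the target_max_bytes value below are made up and would need sizing against the actual working set:

    # Assumed pools: 'sas-pool' (base tier, SAS OSDs) and 'ssd-cache' (SSD OSDs)
    ceph osd tier add sas-pool ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay sas-pool ssd-cache
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache hit_set_count 1
    ceph osd pool set ssd-cache hit_set_period 3600
    # Flush/evict thresholds; if the working set doesn't fit under
    # target_max_bytes, constant promotions and evictions will thrash
    ceph osd pool set ssd-cache target_max_bytes 1099511627776
    ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
    ceph osd pool set ssd-cache cache_target_full_ratio 0.8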

 

*However*, I’ve been doing a bit of testing with the current master, and there are a lot of changes around cache tiering that are starting to have a massive impact on performance. If you can get by with just the SAS disks for now and make a more informed decision about cache tiering when Infernalis is released, that might be your best bet.

 

Otherwise you might just be best off using them as a basic SSD-only pool.
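
As a rough sketch of that route (bucket, rule, and pool names are made up; crushmaps of this era have no notion of device classes, so the SSD OSDs need their own root):

    # Separate CRUSH root for the SSD OSDs; each server appears twice in
    # the map, e.g. 'osd-server1-ssd' holding only its 4x 800GB SSD OSDs
    ceph osd crush add-bucket ssd root
    ceph osd crush move osd-server1-ssd root=ssd
    # ...repeat for the other six servers, then:
    ceph osd crush rule create-simple ssd-rule ssd host
    ceph osd pool create ssd-pool 512 512 replicated ssd-rule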

 

Nick

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of German Anders
Sent: 04 September 2015 16:30
To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Best layout for SSD & SAS OSDs

 

Hi cephers,

   I have the following setup:

7x OSD servers with:

    4x 800GB SSD Intel DC S3510 (OSD-SSD)

    3x 120GB SSD Intel DC S3500 (Journals)

    5x 3TB SAS disks (OSD-SAS)

The OSD servers are located in two separate racks, with two power circuits each.

   I would like to know the best way to implement this: use the 4x 800GB SSDs as an SSD-only pool, or use them as a cache pool? Or is there any other suggestion? Also, any advice on the CRUSH design?

Thanks in advance,   


German


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
