On 03/25/2013 06:07 PM, Peter_Jung@xxxxxxxx wrote:
Hi,

I have a couple of HW provisioning questions regarding SSDs for OSD journals.

I'd like to provision 12 OSDs per node, and there are enough CPU clocks and memory. Each OSD is allocated one 3TB HDD for OSD data; these 12 x 3TB HDDs are non-RAID. To increase access and (sequential) write performance, I'd like to put in 2 SSDs for the OSD journals; these two SSDs are not mirrored. As a rule of thumb, I'd like to mount the OSD journal path below onto the SSD partitions accordingly:

/var/lib/ceph/osd/$cluster-$id/journal

Question 1. Which way is recommended between:

(1) Partitions for OS/Boot and 6 OSD journals on #1 SSD, and partitions for the remaining 6 OSD journals on #2 SSD;
I'd go this way. You might also consider a RAID1 for the OS/Boot LUN. Another thing you might want to consider is how much sequential write throughput your SSDs are capable of. You'll need a really fast enterprise-grade drive to handle sequential writes for 6 hard drives. If your SSDs are slow, you may be better off just relying on a controller with writeback (WB) cache and putting the journals on the OSD disks.
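For a rough sense of the numbers, here is a back-of-the-envelope sketch in Python; the per-drive throughput figures are assumed example values, not measurements of any particular hardware.

# Back-of-the-envelope check: can one SSD absorb the journal writes of 6 HDDs?
# The throughput numbers below are assumptions for illustration only.
HDD_SEQ_WRITE_MBPS = 120      # assumed sequential write rate of one 3TB HDD
JOURNALS_PER_SSD = 6          # journals sharing one SSD in layout (1)
SSD_SEQ_WRITE_MBPS = 450      # assumed sustained write rate of the SSD

needed = HDD_SEQ_WRITE_MBPS * JOURNALS_PER_SSD
print(f"aggregate journal write load: ~{needed} MB/s")
print(f"SSD keeps up: {SSD_SEQ_WRITE_MBPS >= needed}")

With those assumed figures the SSD falls well short of the aggregate load, which is exactly the case where WB cache plus journals on the OSD disks can come out ahead.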
(2) OS/Boot partition on #1 SSD, and separately, 12 OSD journals on #2 SSD?

BTW, for better utilization of the expensive SSDs, I prefer the first way. Would that be okay?

Question 2. I have several capacity options for the SSDs. What is the capacity requirement if there are 6 partitions for 6 OSD journals on an SSD?
Each journal can be fairly small. I use 10GB journals, but that frankly is rather huge; 1-2GB journals are probably fine. The big thing is that if you over-provision the SSD, you'll have more cells available to spread the writes over and keep your SSD alive longer. I'd suggest getting the biggest SSD that is priced reasonably and only partitioning as much space as you need.
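To make the sizing concrete, here is a minimal sketch of the arithmetic; the SSD capacity and OS/Boot size are assumed example values, not a recommendation for a specific drive.

# Partition only what the journals (and OS/Boot, on SSD #1) need and leave the
# rest of the drive unpartitioned so the controller can use it for wear leveling.
# All sizes below are assumptions for illustration.
SSD_CAPACITY_GB = 240          # assumed SSD size
OS_BOOT_GB = 30                # assumed OS/Boot partition (SSD #1 only)
JOURNAL_GB = 10                # generous; 1-2 GB per journal is usually plenty
JOURNALS = 6

partitioned = OS_BOOT_GB + JOURNAL_GB * JOURNALS
spare = SSD_CAPACITY_GB - partitioned
print(f"partitioned: {partitioned} GB")
print(f"left unpartitioned for wear leveling: {spare} GB")

The point of the sketch is just that the journal partitions themselves need very little space; the rest of the capacity is more useful left unpartitioned than carved up.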
If it’s hard to generalize, please provide me with some guidelines. Thanks, Peter _______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com