On Sat, 28 Feb 2015 20:42:35 -0600 Tony Harris wrote:

> Hi all,
>
> I have a small cluster together and it's running fairly well (3 nodes, 21
> osds). I'm looking to improve the write performance a bit though, which
> I was hoping that using SSDs for journals would do. But, I was wondering
> what people had as recommendations for SSDs to act as journal drives.
> If I read the docs on ceph.com correctly, I'll need 2 ssds per node
> (with 7 drives in each node, I think the recommendation was 1 ssd per 4-5
> drives?) so I'm looking for drives that will work well without breaking
> the bank for where I work (I'll probably have to purchase them myself
> and donate, so my budget is somewhat small). Any suggestions? I'd
> prefer one that can finish its write in a power outage case; the only
> one I know of off hand is the Intel DC S3700 I think, but at $300 it's
> WAY above my affordability range.

Firstly, an uneven number of OSDs (HDDs) per node will bite you in the
proverbial behind down the road when combined with journal SSDs: with 7
OSDs and 2 SSDs, one SSD carries 4 journals and the other only 3, so one
of them will wear out faster than the other.

Secondly, how many SSDs you need is basically a trade-off between price,
performance, endurance and limiting the failure impact.

I have a cluster where I used four 100GB DC S3700s with 8 HDD OSDs,
optimizing for write paths, IOPS and failure domain, but not for
sequential speed or cost.

Depending on your write load and the expected lifetime of this cluster,
you might be able to get away with DC S3500s or, even better, the new
DC S3610s. Keep in mind that buying a cheap, low-endurance SSD now might
cost you more down the road if you have to replace it after a year
(TBW/$).

All the cheap alternatives to DC-level SSDs tend to wear out too fast,
have no power-loss protection capacitors, and show unpredictable
(garbage-collection induced) and steadily decreasing performance.

Christian

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
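
To make the uneven-OSD-count point concrete, here is a rough sketch (not from
the original thread; it only assumes the 1-SSD-per-4-OSDs rule of thumb cited
above) of how journals, and therefore write wear, spread across the journal
SSDs in a node:

    # journal_layout.py -- hypothetical illustration, not part of Ceph
    import math

    def journal_layout(osds_per_node: int, osds_per_ssd: int = 4):
        """Return the number of journal SSDs needed and the journals each carries."""
        ssds = math.ceil(osds_per_node / osds_per_ssd)
        base, extra = divmod(osds_per_node, ssds)
        # 'extra' of the SSDs end up carrying one journal more than the rest.
        loads = [base + 1] * extra + [base] * (ssds - extra)
        return ssds, loads

    for osds in (7, 8):
        ssds, loads = journal_layout(osds)
        skew = max(loads) / min(loads)
        print(f"{osds} OSDs -> {ssds} SSDs, journals per SSD: {loads}, "
              f"wear skew: {skew:.2f}x")

With 7 OSDs per node this prints a 4/3 split (a 1.33x wear skew on one SSD);
with 8 OSDs the journals balance evenly, which is the point about odd drive
counts above.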
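
And to illustrate the TBW/$ argument, a back-of-the-envelope comparison of
endurance cost under an assumed journal write load. Except for the $300 S3700
price mentioned in the original mail, every price, TBW figure and write rate
below is a placeholder, not a datasheet value or quote; substitute real
numbers before drawing any conclusions:

    # tbw_per_dollar.py -- illustrative only, all figures are assumptions
    candidates = {
        # name: (assumed price in USD, assumed rated endurance in TBW)
        "consumer SSD":   (100,   70),
        "DC S3500 120GB": (130,   70),
        "DC S3610 200GB": (180, 1100),
        "DC S3700 100GB": (300, 1800),
    }

    daily_writes_tb = 0.05  # assumed journal load per SSD: ~50 GB written/day

    for name, (price, tbw) in candidates.items():
        years = tbw / (daily_writes_tb * 365)
        print(f"{name:15s}  {tbw / price:6.2f} TBW/$   "
              f"~{years:5.1f} years at {daily_writes_tb * 1000:.0f} GB/day")

Run with your own prices and measured write rates; the point is that a drive
that costs twice as much but offers ten times the rated TBW is the cheaper
purchase once you account for replacing the low-endurance one.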