Hello,

As always, there are many similar threads in here; googling and reading
up on them is good for you.

On Thu, 10 Mar 2016 16:55:03 +0200 Yair Magnezi wrote:

> Hello Cephers.
>
> I wonder if anyone has some experience with a full-SSD cluster.
> We're testing Ceph ("firefly") with 4 nodes (Supermicro
> SYS-F628R3-R72BPT) * 1TB SSD, total of 12 OSDs.
> Our network is 10 gig.

Many more relevant details are needed, from SW versions (kernel, OS,
Ceph) and configuration (replica size of your pool) to precise HW info,
in particular your SSDs: exact maker, model and size. Where are your
journals? (The commands appended at the end of this mail will gather
most of this.)

Also, Firefly is EOL; Hammer and even more so the upcoming Jewel have
significant improvements with SSDs.

> We used ceph-deploy for installation with all defaults (followed the
> Ceph documentation for integration with OpenStack).
> As much as we understand there is no need to enable the rbd cache as
> we're running on full SSDs.

RBD cache, as in the client-side librbd cache, is always very helpful,
fast backing storage or not. It can significantly reduce the number of
small writes, something Ceph has to do a lot of heavy lifting for.
(Example [client] settings are appended below.)

> Benchmarking the cluster shows very poor performance, write but mostly
> read (clients are OpenStack but also VMware instances).

Benchmarking how (exact command line, for fio for example; see the
sketch at the end of this mail) and with what results? You say poor,
but that might be "normal" for your situation; we can't really tell w/o
hard data. "Poor" write performance would be indicative of SSDs that
are unsuitable for Ceph.

> Any input is much appreciated (especially want to know which parameter
> is crucial for read performance in a full SSD cluster).

read_ahead in your clients can improve things (example at the end of
this mail), but I guess your cluster has more fundamental problems than
this.

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-April/028552.html

Christian
--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
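
PS: To be concrete, commands like these (run on an OSD node; the pool
name "rbd" and the device "/dev/sda" below are assumptions, substitute
yours) would gather most of the details asked for above:

    ceph --version                 # Ceph release (Firefly is EOL)
    uname -r                       # kernel, on OSD nodes and clients
    ceph osd pool get rbd size     # replica count of your pool
    smartctl -i /dev/sda           # exact SSD maker/model/size
    ceph-disk list                 # OSD data and journal partitions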
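
For the librbd cache, a minimal sketch of the client-side bits in
ceph.conf; the sizes shown are the stock defaults, so on Firefly
(where the cache is off by default) only the first line matters:

    [client]
    rbd cache = true
    rbd cache size = 33554432                  # 32 MB
    rbd cache max dirty = 25165824             # 24 MB
    rbd cache writethrough until flush = true  # safe with old guests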
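
For benchmarking, a sketch of what "exact command line" means; the
pool "rbd" and image "bench-img" are assumptions, and fio needs to be
built with the rbd ioengine:

    fio --name=randwrite --ioengine=rbd --pool=rbd --rbdname=bench-img \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

A simpler baseline from any cluster member:

    rados bench -p rbd 60 write -t 32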
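
For read_ahead on a kernel RBD client (the device name "rbd0" is an
assumption):

    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb

librbd also grew its own readahead knobs ("rbd readahead max bytes"
and friends) around Giant/Hammer, yet another reason to move off
Firefly.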