Re: NVME SSD for journal

Hi David,

Yes, you are right, I should have worded it as "no performance benefit". There are certainly benefits in terms of density.
We could have gone with 1x P3700 per 12 spinners, but went with 1x P3700 per 6 spinners for fear of losing 12 OSDs at once.

Whether losing 12 OSDs at once is an issue for you depends on how large your Ceph install is. Losing 12 OSDs on a cluster with 80 OSDs would be a problem; losing 12 OSDs on a cluster with 1000+, not so much.
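To put rough numbers on that, a quick back-of-the-envelope sketch (Python 3; the cluster sizes are just the illustrative ones above):

    osds_per_journal = 12  # OSDs that share one journal device and die with it

    for cluster_osds in (80, 1000):
        fraction = osds_per_journal / cluster_osds
        print(f"{cluster_osds}-OSD cluster: one journal failure takes out "
              f"{fraction:.1%} of the OSDs")

That is 15% of an 80-OSD cluster re-replicating at once versus 1.2% of a 1000-OSD cluster.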



On Wed, Jul 8, 2015 at 1:12 AM, David Burley <david@xxxxxxxxxxxxxxxxx> wrote:
Further clarification: 12:1 with SATA spinners as the OSD data drives.

On Tue, Jul 7, 2015 at 9:11 AM, David Burley <david@xxxxxxxxxxxxxxxxx> wrote:
There is at least one benefit: you can go more dense. In our testing of real workloads, you can get a 12:1 OSD-to-journal drive ratio (or even higher) using the P3700. This assumes you are willing to accept the impact of losing 12 OSDs when a journal croaks.
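If you do share one device across that many OSDs, here is a minimal sketch of partition sizing using the stock filestore guideline (osd journal size = 2 * expected throughput * filestore max sync interval); the per-OSD throughput below is an assumption, not a measurement:

    # Journal partition sizing on one shared SSD (illustrative numbers).
    expected_throughput_mb_s = 150          # assumed per-spinner throughput
    filestore_max_sync_interval_s = 5       # Ceph default

    journal_mb = 2 * expected_throughput_mb_s * filestore_max_sync_interval_s  # per OSD

    for osds_per_device in (6, 12):
        total_gb = osds_per_device * journal_mb / 1024
        print(f"{osds_per_device} OSDs/device: ~{journal_mb} MB journal each, "
              f"~{total_gb:.1f} GB of the SSD")

In practice people commonly round each partition up well past that floor (5-10 GB partitions are typical), which still leaves most of even a small P3700 free.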

On Tue, Jul 7, 2015 at 8:33 AM, Andrew Thrift <andrew@xxxxxxxxxxxxxxxxx> wrote:
We have been running Intel NVMe P3700s as journals for about 8 months now, 1x P3700 per 6x OSDs.

So far they have been reliable. 

We are using the S3700, S3710, and P3700 as journals, and there is _currently_ no real benefit of the P3700 over the SATA units as journals for Ceph.


Regards,







--
David Burley
NOC Manager, Sr. Systems Programmer/Analyst
Slashdot Media






