RE: All-Flash Ceph cluster and journal

This link points to a presentation we gave a few weeks back in which we used NVMe devices for both the data and the journal.  We partitioned each device multiple times to co-locate multiple OSDs per device.  The cluster configuration details are in the backup slides.

http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds
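If it helps, here is a rough sketch (not taken from the slides, purely illustrative) of how one NVMe device could be carved into per-OSD journal and data partitions.  The device path, partition count, partition sizes, and the ceph-disk invocation printed at the end are all assumptions; adjust them for your hardware and Ceph release.

#!/usr/bin/env python
# Illustrative sketch only: split one NVMe device into per-OSD journal
# and data partitions. All sizes and paths below are assumptions.
import subprocess

DEV = "/dev/nvme0n1"   # hypothetical NVMe device
NUM_OSDS = 4           # number of OSDs co-located on this device
JOURNAL_GB = 10        # per-OSD journal size (assumption)
DATA_GB = 350          # per-OSD data size (assumption)

def run(cmd):
    # Echo and execute an external command, failing loudly on error.
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

pairs = []
for i in range(NUM_OSDS):
    jpart = 2 * i + 1   # odd partition numbers: journals
    dpart = 2 * i + 2   # even partition numbers: data
    run(["sgdisk", "-n", "%d:0:+%dG" % (jpart, JOURNAL_GB),
         "-c", "%d:osd-%d-journal" % (jpart, i), DEV])
    run(["sgdisk", "-n", "%d:0:+%dG" % (dpart, DATA_GB),
         "-c", "%d:osd-%d-data" % (dpart, i), DEV])
    pairs.append(("%sp%d" % (DEV, dpart), "%sp%d" % (DEV, jpart)))

# The provisioning tool (ceph-disk at the time) can then be pointed at
# each data/journal partition pair; exact flags depend on your release.
for data, journal in pairs:
    print("ceph-disk prepare %s %s" % (data, journal))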

Thanks,

Stephen

-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Piotr.Dalek@xxxxxxxxxxxxxx
Sent: Friday, November 20, 2015 3:50 AM
To: Mike Almateia; Ceph Development
Subject: RE: All-Flash Ceph cluster and journal

> -----Original Message-----
> From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel- 
> owner@xxxxxxxxxxxxxxx] On Behalf Of Mike Almateia
> Sent: Friday, November 20, 2015 11:43 AM

> Is it reasonable by now to use NVMe flash for OSDs in Ceph? Is it
> overkill? Is it possible to achieve the full speed of an NVMe flash
> drive under Ceph?

Yes and no. Ceph on any flash drive will perform far better than on regular spinning disks, but it certainly will not utilize the drive's full potential. There is an ongoing effort by developers from multiple companies to fix that, and things are getting better with each release.


With best regards / Pozdrawiam
Piotr Dałek
