Re: Ceph, SSD, and NVMe

Hi James,

----- "James (Fei) Liu-SSI" <james.liu@xxxxxxxxxxxxxxx> wrote:

> Hi David,
>   Generally speaking, it is going to be super difficult to maximize
> the bandwidth of NVMe with the latest Ceph release. In my humble
> opinion, I don't think Ceph is aiming at high-performance storage.

Well, -I'm- certainly aiming at it.  I'm not alone.

Matt

> 
> Here is a link, for your reference, to some good work done by Samsung
> and SanDisk regarding Ceph optimization for SSDs, including NVMe.
> 
> http://www.tomsitpro.com/articles/samsung-jbod-nvme-reference-system,1-2809.html
> 
> Regards,
> James
> 
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> Of J David
> Sent: Wednesday, September 30, 2015 7:35 AM
> To: ceph-users@xxxxxxxxxxxxxx
> Subject:  Ceph, SSD, and NVMe
> 
> Because we have a good thing going, all of our Ceph clusters are still
> running Firefly, including our largest, all-SSD cluster.
> 
> If I understand right, newer versions of Ceph make much better use of
> SSDs and give much higher overall performance on the same equipment.
> However, the impression I get is that newer versions are also not as
> stable as Firefly and should only be used with caution.
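
(For concreteness: the all-SSD tuning threads mostly revolve around a
handful of OSD, filestore, and journal options in ceph.conf.  The option
names below are long-standing ones that exist in Firefly-era releases,
but the values are placeholders chosen only to show where the knobs
live, not recommendations; check them against the documentation for the
release actually being run.

    [osd]
        osd op threads = 8
        filestore op threads = 8
        filestore max sync interval = 10
        filestore queue max ops = 500
        journal max write entries = 1000
        journal max write bytes = 1048576000

Whether any of these helps depends on the drives and the workload, so
benchmark before and after each change.)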
> 
> Given our storage consumers have an effectively unlimited appetite for
> IOPS and throughput, more performance would be very welcome.  But not
> if it leads to cluster crashes and lost data.
> 
> What really prompts this is that we are starting to see large-scale
> NVMe equipment appearing in the channel (e.g.
> http://www.supermicro.com/products/system/1U/1028/SYS-1028U-TN10RT_.cfm
> ).  The cost is significantly higher with commensurately higher
> theoretical performance.  But if we're already not pushing our SSDs to
> the max over SAS, the added benefit of NVMe would largely be lost.
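
(One way to tell whether the SAS-attached SSDs are really the ceiling is
to benchmark through the Ceph stack from a client and compare against
what the raw drives can do.  A minimal sketch, assuming a scratch pool
named "rbd", a throwaway RBD image named "fio-test", and an fio build
with the rbd ioengine compiled in; the names and sizes are illustrative:

    # 4 KB random writes through librbd for 60 seconds
    fio --name=ssd-baseline --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=fio-test --rw=randwrite --bs=4k \
        --iodepth=32 --runtime=60 --time_based

    # or the built-in object-level benchmark, 4 KB writes, 32 in flight
    rados bench -p rbd 60 write -b 4096 -t 32

Watching iostat -x on the OSD nodes while either runs shows whether the
drives ever approach saturation or whether the time is going elsewhere.)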
> 
> On the other hand, if we could safely upgrade to a more recent version
> that is as stable and bulletproof as Firefly has been for us, but has
> better performance with SSDs, that would not only benefit our current
> setup but would also be a necessary first step for moving to NVMe.
> 
> So this raises three questions:
> 
> 1) Have I correctly understood that one or more post-Firefly releases
> exist that, other things being equal, perform significantly better
> with all-SSD setups?
> 
> 2) Is there any such release that (generally) is as rock-solid as
> Firefly?  Of course this is somewhat situationally dependent, so I
> would settle for: is there any such release that doesn't have any
> known minding-my-own-business-suddenly-lost-data bugs in a 100% RBD
> use case?
> 
> 3) Has anyone done anything with NVMe as storage (not just journals)
> who would care to share what kind of performance they experienced?
> 
> (Of course, if we do upgrade, we will do so carefully: set up a test
> cluster first, have backups standing by, etc.  But if it's already
> known that doing so either will not improve anything or is likely to
> blow up in our faces, it would be better to leave well enough alone.
> The current performance is by no means bad; we're just always greedy
> for more. :)
> )
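
(When the upgrade does happen, the usual rolling pattern still applies:
read the release notes for the target version, then upgrade and restart
monitors first, followed by OSDs one node at a time.  A minimal sketch
of the OSD half, assuming a package-based install:

    ceph health              # start from HEALTH_OK
    ceph osd set noout       # avoid rebalancing while daemons restart
    # upgrade packages and restart the OSDs on one node, wait for
    # active+clean, then move on to the next node
    ceph osd unset noout
    ceph health

A test cluster and standby backups, as planned above, remain the real
safety net.)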
> 
> Thanks for any advice/suggestions!
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Matt Benjamin
CohortFS, LLC.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://cohortfs.com

tel.  734-761-4689 
fax.  734-769-8938 
cel.  734-216-5309 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


