RE: All-Flash Ceph cluster and journal

It's an example of what Hammer can do, and we're seeing some improvements already with Infernalis.  I agree regarding the tuning and optimization, and a lot of work is currently underway towards that goal as Piotr pointed out.

For completeness, we did do a bit of OSD tweaking a week or so ago (results are in the mailing list): http://www.spinics.net/lists/ceph-devel/msg27256.html
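
For reference, this kind of tuning is done with ordinary [osd] settings in ceph.conf.  A minimal sketch for a Hammer-era FileStore OSD (the option names are real, but the values below are purely illustrative and not the ones from the linked results):

    [osd]
        # more sharded op-queue worker threads per OSD process
        osd_op_num_shards = 10
        osd_op_num_threads_per_shard = 2
        # deeper FileStore and journal queues so a fast device stays busy
        filestore_op_threads = 4
        filestore_queue_max_ops = 500
        journal_max_write_entries = 1000
        journal_queue_max_ops = 3000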

Thanks,

Stephen

-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Daniel Swarbrick
Sent: Monday, November 23, 2015 11:39 AM
To: ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: All-Flash Ceph cluster and journal

A simple but clever way of ensuring that NVMe's deep queues aren't starved of work. But doesn't this suggest that Ceph needs optimizing or tuning for NVMe? Couldn't you have tweaked OSD parameters to allow more threads / IO operations in parallel and achieved the same effect?
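
For instance, parameters like that can be poked on a running cluster with injectargs.  A sketch, using a queue-depth option purely as an illustration (some options are only read at OSD start-up and do need a restart):

    # show the current value on one OSD (run on that OSD's host)
    ceph daemon osd.0 config show | grep filestore_queue_max_ops

    # push a new value to all OSDs in the cluster
    ceph tell osd.* injectargs '--filestore_queue_max_ops 500'

Anything set this way is lost on restart, so values worth keeping should also go into ceph.conf.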

On 23/11/15 18:37, Blinick, Stephen L wrote:
> This link points to a presentation we did a few weeks back where we used NVMe devices for both the data and the journal.  We partitioned each device so that we could co-locate multiple OSDs per device.  The cluster configuration details are in the backup slides.
> 
> http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds
> 
> Thanks,
> 
> Stephen
> 
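
The multiple-OSDs-per-device layout described above boils down to GPT partitioning plus one OSD per data/journal partition pair.  A minimal sketch, in which the device name, partition count and sizes are assumptions for the example rather than the layout from the slides:

    # carve one NVMe device into two journal and two data partitions
    sgdisk --new=1:0:+10G  --change-name=1:'osd journal 0' /dev/nvme0n1
    sgdisk --new=2:0:+10G  --change-name=2:'osd journal 1' /dev/nvme0n1
    sgdisk --new=3:0:+700G --change-name=3:'osd data 0'    /dev/nvme0n1
    sgdisk --new=4:0:+700G --change-name=4:'osd data 1'    /dev/nvme0n1

    # then prepare one OSD per data/journal pair (e.g. with the Hammer-era ceph-disk,
    # which accepts a directory, disk or partition for the data argument)
    ceph-disk prepare /dev/nvme0n1p3 /dev/nvme0n1p1
    ceph-disk prepare /dev/nvme0n1p4 /dev/nvme0n1p2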

