Re: All-Flash Ceph cluster and journal

A simple but clever way of ensuring that the NVMe devices' deep queues
aren't starved of work. But doesn't this suggest that Ceph needs
optimizing or tuning for NVMe? Could you not have tweaked OSD
parameters to allow more threads / more I/O operations in parallel and
achieved the same effect?
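
For example, on a hammer-era FileStore setup I would have expected
something along these lines to push more parallel work at the device
(the parameter names are standard OSD/FileStore options, but the values
are only illustrative guesses on my part, not tested recommendations):

    [osd]
        # illustrative values only -- benchmark before using
        osd op num shards = 10
        osd op num threads per shard = 2
        filestore op threads = 8
        filestore queue max ops = 5000
        journal max write entries = 5000
        journal max write bytes = 1073741824

Whether that alone keeps the device's queues full, versus splitting it
into several OSDs, is exactly what I'm curious about.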

On 23/11/15 18:37, Blinick, Stephen L wrote:
> This link points to a presentation we did a few weeks back where we
> used NVMe devices for both the data and journal.  We partitioned the
> devices multiple times to co-locate multiple OSDs per device.  The
> configuration details for the cluster are in the backup slides.
> 
> http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds
> 
> Thanks,
> 
> Stephen
> 
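
To make the co-location approach above concrete, my understanding is
that it boils down to something like the following sketch (the device
name, partition count and sizes here are my assumptions, not figures
from the presentation):

    # carve one NVMe device into per-OSD journal and data partitions
    sgdisk --new=1:0:+10G  --change-name=1:osd0-journal /dev/nvme0n1
    sgdisk --new=2:0:+350G --change-name=2:osd0-data    /dev/nvme0n1
    sgdisk --new=3:0:+10G  --change-name=3:osd1-journal /dev/nvme0n1
    sgdisk --new=4:0:+350G --change-name=4:osd1-data    /dev/nvme0n1

    # one OSD per data partition, each journaling to the same device
    ceph-disk prepare /dev/nvme0n1p2 /dev/nvme0n1p1
    ceph-disk prepare /dev/nvme0n1p4 /dev/nvme0n1p3

Each OSD process then only has to keep its own slice of the device
busy, which is presumably where the extra parallelism comes from.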




