Re: Squeezing Performance of CEPH

But then you would have a big performance mismatch across your OSDs, which is never recommended with Ceph.

It’s all about what you can do within your current boxes’ capacity to increase performance across the whole OSD set.

Ashley

Sent from my iPhone

On 23 Jun 2017, at 10:40 PM, Massimiliano Cuttini <max@xxxxxxxxxxxxx> wrote:

Ashley,

but... instead of using the NVMe as a journal, why not add 2 OSDs to the cluster?
That is, increasing the number of OSDs instead of improving the performance of the existing OSDs?



On 23/06/2017 15:40, Ashley Merrick wrote:
Sorry for not replying inline.

You can get 6 OSDs per NVMe; as long as you get a decently rated NVMe, your bottleneck will be the NVMe, but it will still be an improvement over your current bottleneck.

You could add two NVMe OSDs, but their higher performance would be lost among the other 12 OSDs.
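A back-of-the-envelope model of the trade-off described above. The throughput figures are illustrative assumptions, not measurements from this cluster:

```python
# Rough model of the FileStore journal bottleneck. The numbers below
# are assumptions for illustration: a SATA SSD OSD at ~450 MB/s
# sequential write, an NVMe at ~2000 MB/s.

SSD_WRITE_MBS = 450    # assumed per-OSD SATA SSD sequential write
NVME_WRITE_MBS = 2000  # assumed NVMe sequential write

def colocated_throughput(n_osds):
    """Journal on the same SSD as the data: every client write is
    written twice (journal + data), so each OSD delivers roughly
    half its raw speed."""
    return n_osds * SSD_WRITE_MBS / 2

def nvme_journal_throughput(n_osds):
    """Journals moved to one shared NVMe: each SSD takes a single
    write, but all journal writes funnel through the NVMe."""
    return min(n_osds * SSD_WRITE_MBS, NVME_WRITE_MBS)

print(colocated_throughput(6))     # → 1350.0 MB/s across 6 OSDs
print(nvme_journal_throughput(6))  # → 2000 MB/s: the NVMe is now the
                                   #   bottleneck, but a higher one
```

With 6 OSDs behind one NVMe, the NVMe becomes the new ceiling, yet the aggregate is still higher than with co-located journals, which is the point being made above.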

Ashley

Sent from my iPhone

On 23 Jun 2017, at 8:34 PM, Massimiliano Cuttini <max@xxxxxxxxxxxxx> wrote:

Hi Ashley,

You could move your journal to another SSD; this would remove the double write.
If I move the journal to another SSD, I will lose an available OSD, so it's a trade-off: roughly a 2x improvement from removing the double write, then a decrease from running one OSD fewer...
On an all-SSD system this should not improve performance in any case.

Ideally you’d want one or two PCIe NVMe in the servers for the journal.
This seems a really good idea, but consider that I have only 2 PCIe slots and 12 SSD disks.
I imagine it will not be possible to place 12 journals on 2 PCIe NVMe without losing performance... or will it?
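For what it's worth, the 12-journals-on-2-NVMe layout can be sketched with the ceph-disk tool current at the time. Device names are placeholders for the 12-SSD / 2-NVMe box described above; when given a whole device as the journal argument, ceph-disk carves a fresh journal partition on it for each OSD:

```shell
# Sketch only: /dev/sd[b-m], /dev/nvme0n1 and /dev/nvme1n1 are
# placeholder device names for a 12-SSD, 2-NVMe server.

# First 6 SSDs journal on the first NVMe ...
for dev in sdb sdc sdd sde sdf sdg; do
    ceph-disk prepare /dev/$dev /dev/nvme0n1
done

# ... and the other 6 on the second NVMe.
for dev in sdh sdi sdj sdk sdl sdm; do
    ceph-disk prepare /dev/$dev /dev/nvme1n1
done
```

Whether 6 journals per NVMe loses performance depends on whether 6x the SSDs' write rate exceeds the NVMe's, as discussed earlier in the thread.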

Or, if you can hold off a bit, there is BlueStore, which removes the double write; however, it is still handy to move some of the services to a separate disk.
I hear that BlueStore will remove the double write on the journal (I have not investigated yet), but I guess Luminous will not be fully tested before the end of the year.
As for the current system, I really don't know whether moving the journals to separate disks will have any impact, considering that this is an all-SSD system.
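As a rough sketch of what the BlueStore equivalent would look like once Luminous ships (ceph-volume syntax; device names are placeholders): there is no journal at all, data goes straight to the SSD, and only the small RocksDB metadata (block.db) can optionally live on the NVMe.

```shell
# BlueStore sketch, Luminous-era ceph-volume syntax.
# /dev/sdb and /dev/nvme0n1p1 are placeholder devices.
# No journal: writes go once to the data device; block.db holds
# RocksDB metadata and is optional.
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1
```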

Even adding 2 PCIe NVMe... why not use them as OSDs instead of journal-only devices?

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
