Re: [External Email] Re: Re: Failure Domain = NVMe?

> In my current hardware configurations each NVMe supports multiple OSDs. In
> my earlier nodes it is 8 OSDs sharing one NVMe (which is also too small).
> In the near term I will add NVMe to those nodes, but I'll still have 5 OSDs
> sharing an NVMe on some nodes, and 2 or 3 on all the others.  So an NVMe
> failure will take out at least 2 OSDs.  Because of this it seems potentially
> worthwhile to go through the trouble of defining failure domain = nvme to
> ensure maximum resilience.
> 
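For anyone following along, below is a minimal sketch of what failure domain = nvme could look like in a decompiled CRUSH map. The custom type, bucket names, IDs, and weights are illustrative assumptions, not taken from the setup described above; the general workflow is getcrushmap / crushtool -d, edit, crushtool -c, setcrushmap.

  # Extract and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # In crushmap.txt, add a custom bucket type between osd and host,
  # shifting the remaining type numbers up by one:
  #   type 0 osd
  #   type 1 nvme
  #   type 2 host
  #   ...

  # Group the OSDs that share a device under an nvme bucket, and have the
  # host contain the nvme buckets instead of the OSDs directly
  # (names, IDs, and weights here are made up for illustration):
  #   nvme node1-nvme0 {
  #       id -21
  #       alg straw2
  #       hash 0
  #       item osd.0 weight 1.000
  #       item osd.1 weight 1.000
  #   }
  #   host node1 {
  #       id -2
  #       alg straw2
  #       hash 0
  #       item node1-nvme0 weight 2.000
  #   }

  # Add a replicated rule that places each replica on a different NVMe:
  #   rule replicated_nvme {
  #       id 2
  #       type replicated
  #       step take default
  #       step chooseleaf firstn 0 type nvme
  #       step emit
  #   }

  # Recompile, inject the edited map, and point the pool at the new rule
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin
  ceph osd pool set <pool> crush_rule replicated_nvme

Note that injecting a new CRUSH map and rule will trigger data movement, so this is best tried on a test cluster first.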

Do you have any test results showing the increase in write performance from running the OSDs with NVMe? Just wondering what can be expected.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


