Re: Ceph cluster on AMD-based system.

If your crushmap is set to replicate by host, you would only ever have one copy on a single host, no matter how many OSDs you placed on a single NVMe drive.

But yes, you would not want to mix OSD-based rules with multiple OSDs per physical disk.
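
For reference, a host-level replicated rule in the decompiled crushmap looks roughly like this (a sketch, Luminous-era syntax; the rule name, id, and min/max sizes will differ per cluster):

    rule replicated_by_host {
        id 0
        type replicated
        min_size 1
        max_size 10
        # walk the tree from the default root
        step take default
        # pick N distinct *host* buckets, then one OSD under each
        step chooseleaf firstn 0 type host
        step emit
    }

The "chooseleaf firstn 0 type host" step is what forces every replica onto a different host, so two OSDs sharing one NVMe can never hold two copies of the same PG.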

On Tue, 5 Mar 2019 at 7:54 PM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
 
I have indeed seen people lately writing about putting 2 OSDs on an NVMe, but
does this not undermine the idea of having 3 copies on different
OSDs/drives? In theory you could lose 2 copies when one disk fails?
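
(For what it's worth, you can check which failure domain a rule actually uses with something like the following; "replicated_rule" is the default rule name and may differ on your cluster:

    ceph osd crush rule dump replicated_rule

If the chooseleaf step shows "type": "host", losing one disk can only ever cost one copy; "type": "osd" would indeed allow two copies on the same drive.)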




-----Original Message-----
From: Darius Kasparaviius [mailto:daznis@xxxxxxxxx]
Sent: 05 March 2019 10:50
To: ceph-users
Subject: Ceph cluster on AMD-based system.

Hello,


I was thinking of using an AMD-based system for my new NVMe-based cluster.
In particular I'm looking at
https://www.supermicro.com/Aplus/system/1U/1113/AS-1113S-WN10RT.cfm
and https://www.amd.com/en/products/cpu/amd-epyc-7451 CPUs. Has anyone
tried running Ceph on this particular hardware?

The general idea is 6 nodes with 10 NVMe drives each and 2 OSDs per NVMe drive.
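
For what it's worth, splitting each drive into two OSDs is typically done with ceph-volume's batch mode, roughly like this (a sketch; the device names are examples):

    # create 2 LVM-backed OSDs on each listed NVMe device
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1

Each device gets carved into two logical volumes, with one bluestore OSD on each.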
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
