Re: NVMe's

Doesn't that depend on your expected load? I have read here numerous 
times that a single OSD cannot keep up with an NVMe drive, which is why 
people put 2 OSDs on a single NVMe. So on a busy node you would probably 
run out of cores? (But better to verify this with someone who runs an 
NVMe cluster ;))
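
If you do go the two-OSDs-per-NVMe route, ceph-volume can split the 
devices for you. A minimal sketch, assuming LVM-based OSDs and 
placeholder device names:

  # preview the layout first without creating anything
  ceph-volume lvm batch --report --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1

  # then actually create two OSDs on each listed NVMe device
  ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1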



-----Original Message-----
To: 'ceph-users'
Subject:  NVMe's

We currently run an SSD cluster and HDD clusters and are looking at 
possibly creating a cluster for NVMe storage.  For spinners and SSDs, 
the recommended maximum per OSD host seemed to be 16 OSDs (I know it 
depends on the CPUs and RAM, something like 1 CPU core and 2 GB of 
memory per OSD).
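
For a rough RAM sanity check on a 48-OSD box (a back-of-envelope sketch 
assuming BlueStore defaults, not measured numbers): osd_memory_target 
defaults to 4 GiB, so 48 OSDs would want on the order of 192 GiB plus 
OS overhead.  With only 100+ GB of RAM the target would have to come 
down, e.g.:

  # assumption: cap each OSD's memory target at 2 GiB so 48 OSDs fit in ~100 GB
  ceph config set osd osd_memory_target 2147483648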

 

Questions: 
1.  If we do a JBOD setup, the servers can hold 48 NVMes; if the 
servers were bought with 48 cores and 100+ GB of RAM, would this make 
sense?

2.  Should we just RAID 5 groups of NVMe drives instead (and buy less 
CPU/RAM)?  There is a reluctance to waste even a single drive on RAID 
because redundancy is basically Ceph's job.
3.  The plan was to build this with Octopus (hopefully there are no 
issues we should know about).  I did just see one posted today, but 
this build is a few months off.

4.  Any feedback on the maximum number of OSDs per host?

5.  Right now they run 10Gb everywhere with 80Gb uplinks; I was 
thinking this would need at least 40Gb links to every node (the hope 
is to use these to speed up image processing at the application layer 
locally in the DC).
I haven't spoken to the Dell engineers yet, but my concern with NVMe 
is that the RAID controller would end up being the bottleneck (next in 
line after network connectivity).
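
As a rough bandwidth sanity check (spec-sheet ballparks, not 
benchmarks):

  40Gb/s link      ~ 5 GB/s usable
  1 NVMe drive     ~ 2-3 GB/s sequential read
  48 drives/node   ~ 100+ GB/s aggregate

so even a 40Gb link is saturated by a handful of drives, and the 
network would likely be the bottleneck well before the drives 
themselves.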

 

Regards,

-Brent

 

Existing Clusters:

Test: Nautilus 14.2.11 with 3 OSD servers, 1 mon/mgr, 1 gateway, 2 
iSCSI gateways (all virtual on NVMe)

US Production (HDD): Nautilus 14.2.11 with 12 OSD servers, 3 mons, 4 
gateways, 2 iSCSI gateways

UK Production (HDD): Nautilus 14.2.11 with 12 OSD servers, 3 mons, 4 
gateways

US Production (SSD): Nautilus 14.2.11 with 6 OSD servers, 3 mons, 3 
gateways, 2 iSCSI gateways

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


