Re: Bluestore HDD Cluster Advice

Hi Martin,

Hardware has already been acquired and was spec'd to mostly match our current clusters, which perform very well for us. I'm really just hoping to hear from anyone who has experience moving from filestore => bluestore with an HDD cluster. Obviously we'll be doing our own testing, but it's always helpful to hear firsthand experience.
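
As an aside, here's a minimal sketch (assuming the ceph CLI and an admin keyring are available on the host, and a release new enough to have the "ceph osd count-metadata" command) of the progress check we'd use during a rolling conversion:

    import json
    import subprocess

    # Assumes the ceph CLI and an admin keyring are present on this host.
    out = subprocess.run(
        ["ceph", "osd", "count-metadata", "osd_objectstore", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    # Prints the count of OSDs per objectstore backend,
    # e.g. {"filestore": 120, "bluestore": 24} mid-conversion.
    print(json.loads(out))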

That said, there is reasoning behind our choices.

CPU: Buys us some additional horsepower for colocating RGW. We run 12-core CPUs currently and they stay very busy; since we're adding another workload, the extra cores seemed warranted.
Memory: The Intel processors in the R740s are 6-channel instead of 4, so the bump to 192GB was the result of that change; we run 128GB today. (There's a rough per-OSD sizing sketch after this list.)
NICs: A few reasons:
  • Microbursts: our workload seems to generate them pretty regularly and we've had a tough time taming them with buffers. 25G should eliminate that, even though we'll never use the sustained bandwidth.
  • Port waste: We're running large compute nodes, so the choice came down to 4x10G or 2x25G per compute node. The 25G switches are more expensive (though not terribly so), but we get the benefit of using fewer ports.
  • Features: The 25G switches support some features we were looking for, such as EVPN, VXLAN, etc.
Disk: 512e. I'm interested to hear about the performance difference here. Does Ceph not recognize the physical sector size as being 4k?
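
On the 512e point, here's a quick sketch of the check I'm doing (the device name sdb is just an example); a 512e drive reports 512-byte logical / 4096-byte physical sectors here, and my understanding (happy to be corrected) is that bluestore sizes its allocations from its own min_alloc_size settings rather than from the reported physical sector size:

    from pathlib import Path

    dev = "sdb"  # example device name, adjust to the drive being checked
    q = Path(f"/sys/block/{dev}/queue")
    logical = int((q / "logical_block_size").read_text())
    physical = int((q / "physical_block_size").read_text())
    # A 512e drive reports 512/4096; a 4Kn drive reports 4096/4096.
    print(f"{dev}: logical={logical}B physical={physical}B")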
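
And the memory sizing sketch mentioned above: one thing that does change with bluestore is that OSD memory is governed by osd_memory_target (4 GiB by default on recent releases) rather than the page cache filestore leaned on, so the 192GB gets carved up explicitly. The OSD count and RGW/OS headroom below are placeholder numbers, not our actual layout:

    # Back-of-envelope only; these inputs are placeholders, not our real nodes.
    total_gib = 192        # per-node RAM
    reserved_gib = 32      # headroom for colocated RGW, other daemons, and the OS
    osds_per_node = 12     # placeholder drive count
    per_osd_gib = (total_gib - reserved_gib) / osds_per_node
    print(f"osd_memory_target ~= {per_osd_gib:.1f} GiB per OSD")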

Thanks,
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
