New hardware for OSDs

Hello all,
we are currently in the process of buying new hardware to expand an
existing Ceph cluster that already has 1,200 OSDs.
We currently run 24 * 4 TB SAS drives per host (one OSD per drive), with
an SSD journal shared among 4 OSDs. For the upcoming expansion we are
considering switching to either 6 or 8 TB hard drives (9 or 12 per host)
in order to reduce rack space and cost.
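
For context, the back-of-envelope raw capacity per host works out as
sketched below; the size/count pairings are only illustrative assumptions
on our part, not a final design:

    # Back-of-envelope raw capacity per host.
    # The size/count pairings below are illustrative assumptions.
    configs = {
        "current: 24 x 4 TB": 24 * 4,
        "option:  12 x 6 TB": 12 * 6,
        "option:   9 x 8 TB":  9 * 8,
    }
    for name, tb in configs.items():
        print(f"{name} -> {tb} TB raw per host")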

Does anyone have experience with drives of this size in mid- to
large-sized deployments? Our main concern is rebalance time, but we may
be overlooking other aspects.
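
To put a rough number on the rebalance concern, this is the kind of
estimate we have been making. It is a minimal sketch: the fill ratio,
the number of OSDs participating in recovery, and the (throttled)
per-OSD recovery throughput are assumptions for illustration, not
measurements from our cluster:

    # Rough model of backfill time after a single OSD failure:
    # data to re-replicate divided by aggregate recovery bandwidth.
    # All parameter defaults are illustrative assumptions.
    def backfill_hours(drive_tb, fill_ratio=0.7, recovering_osds=50,
                       mb_per_s_per_osd=10):
        data_mb = drive_tb * 1_000_000 * fill_ratio  # TB -> MB (decimal)
        aggregate_mb_per_s = recovering_osds * mb_per_s_per_osd
        return data_mb / aggregate_mb_per_s / 3600   # seconds -> hours

    for tb in (4, 6, 8):
        print(f"{tb} TB OSD at 70% full: ~{backfill_hours(tb):.1f} h")

Under these assumptions recovery time scales linearly with drive size,
so an 8 TB OSD roughly doubles the recovery window of a 4 TB one.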

We currently use the cluster as storage for OpenStack services: Glance,
Cinder, and VMs' ephemeral disks.

Thanks in advance for any advice.

Mattia