Re: New hardware for OSDs

Hello,

On Mon, 27 Mar 2017 12:27:40 +0200 Mattia Belluco wrote:

> Hello all,
> we are currently in the process of buying new hardware to expand an
> existing Ceph cluster that already has 1200 osds.

That's quite sizable. Is the expansion driven by the need for more space
(big data?) or to increase IOPS (or both)?

> We are currently using 24 * 4 TB SAS drives per host with an SSD journal
> shared among 4 OSDs. For the upcoming expansion we were thinking of
> switching to either 6 or 8 TB hard drives (9 or 12 per host) in order to
> drive down space and cost requirements.
> 
> Has anyone any experience in mid-sized/large-sized deployment using such
> hard drives? Our main concern is the rebalance time but we might be
> overlooking some other aspects.
> 

If you have researched the mailing list archives, you will already know to
stay well away from SMR HDDs.

Both HGST and Seagate offer large enterprise HDDs with built-in
journals/caches (MediaCache in HGST speak, IIRC) that drastically improve
write IOPS compared to plain HDDs.
Even with SSD journals you will want to consider those, as these new HDDs
will see at least twice the action of your current ones.

Rebalance time is a concern of course, especially since your cluster, like
most HDD-based ones, presumably has recovery and backfill throttled down so
they do not impede actual client I/O.
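For reference, throttling of that sort is usually done with the recovery
and backfill options; a minimal ceph.conf sketch (the values below are
illustrative, not recommendations, so tune them to your own cluster):

```ini
[osd]
; limit concurrent backfill operations per OSD
osd max backfills = 1
; limit concurrent recovery operations per OSD
osd recovery max active = 1
; prioritize client I/O over recovery I/O
osd recovery op priority = 1
osd client op priority = 63
```

The trade-off is the obvious one: the harder you throttle, the longer a
rebalance takes and the longer you sit at reduced redundancy.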

To get a rough idea, take a look at:
https://www.memset.com/tools/raid-calculator/

For Ceph with replication 3 and the typical PG distribution, assume 100
disks; the RAID6-with-hotspares numbers are the relevant ones.
For rebuild speed, consult your own experience; you must have had a few
failures by now. ^o^

For example, with a recovery speed of 100MB/s, a 1TB disk (data actually
used by Ceph, not raw capacity) looks decent at 1:16000 data-loss odds per
year (DLO/y).
At 5TB, though, it enters scary land.
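To put rough numbers on the recovery window itself, here is a
back-of-the-envelope sketch; the 100MB/s recovery speed matches the figure
assumed above, and the drive fill levels are purely illustrative:

```python
# Rough estimate of how long re-replicating a failed OSD's data takes,
# assuming a fixed aggregate recovery speed. Real clusters vary widely
# with throttling, PG distribution, and concurrent client load.
def rebuild_hours(used_tb, recovery_mb_s=100):
    """Hours to re-replicate `used_tb` TB of data at `recovery_mb_s` MB/s."""
    seconds = used_tb * 1e12 / (recovery_mb_s * 1e6)
    return seconds / 3600

for tb in (1, 4, 6, 8):
    print(f"{tb} TB used -> {rebuild_hours(tb):.1f} h at reduced redundancy")
```

The point being: the bigger the drive (and the fuller it runs), the longer
the window during which a second failure can hurt you.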

Christian

> We currently use the cluster as storage for openstack services: Glance,
> Cinder and VMs' ephemeral disks.
> 
> Thanks in advance for any advice.
> 
> Mattia
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


