Re: Suggestions/experiences with mixed disk sizes and models from 4TB - 14TB

I've had some good experiences with the ST10000NM0156-2AA111.

I'm also running EC, with RBD on top as slow storage for VMs, and performance has been what I'd expect from 10TB drives and EC.
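In case it's useful, the setup is roughly the following (a sketch from memory; pool/profile names, k/m values and PG counts are just examples, not recommendations):

    # EC profile and EC data pool (BlueStore is required for EC overwrites)
    ceph osd erasure-code-profile set ec-profile k=4 m=2
    ceph osd pool create rbd-data 128 128 erasure ec-profile
    ceph osd pool set rbd-data allow_ec_overwrites true

    # RBD keeps its metadata in a replicated pool; image data goes to the EC pool
    ceph osd pool create rbd-meta 64 64 replicated
    rbd pool init rbd-meta
    rbd create rbd-meta/vm-disk1 --size 100G --data-pool rbd-data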

I would definitely say going with helium-filled drives helps vs. standard air-filled ones once you get to 8TB and above.

On Wed, Jan 16, 2019 at 5:34 PM Götz Reinicke <goetz.reinicke@xxxxxxxxxxxxxxx> wrote:
Dear Ceph users,

I’d like to get some feedback on the following thought:

Currently I run some OSD nodes with 24 x 4TB BlueStore OSDs each. The main focus is on storage space over IOPS.

We use erasure coding and CephFS, and things look good right now.

The „but“ is: I need more disk space and don’t have much more rack space available, so I was thinking of adding some 8TB or even 12TB OSDs and/or exchanging the 4TB OSDs for bigger disks over time.
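The exchange I have in mind is the usual drain-and-replace procedure, roughly like this (just a sketch; osd.N and /dev/sdX are placeholders, and please correct me if there is a smarter way):

    # Drain the old 4TB OSD and wait for the cluster to rebalance
    ceph osd out N
    # ... wait until "ceph -s" reports a healthy cluster again ...
    systemctl stop ceph-osd@N
    ceph osd purge N --yes-i-really-mean-it

    # Bring the bigger replacement disk in as a fresh BlueStore OSD
    ceph-volume lvm create --bluestore --data /dev/sdX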

My question is: what are your experiences with the current >=8TB SATA disks? Are there some very bad models out there that I should avoid?

The current OSD nodes are connected via 4 x 10Gb bonds. With replication/recovery speed in mind, is a 24-bay chassis with bigger disks useful, or should I go with smaller chassis? Or does the chassis size not matter that much in my setup?
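I assume that, whatever the chassis size, the recovery impact on clients is mostly governed by the usual throttles, e.g. something like this (values are examples, not recommendations):

    # Loosen/tighten backfill and recovery throttling at runtime
    ceph tell osd.* injectargs '--osd_max_backfills 2'
    ceph tell osd.* injectargs '--osd_recovery_max_active 3'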

I know EC is quite compute intensive, so maybe bigger disks also have an impact there?
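For what it’s worth, I understand the choice of EC plugin also matters for CPU load; e.g. on Intel CPUs a profile could use the ISA-L plugin instead of the default jerasure (just an assumption on my side that this would help here):

    # Example profile using the ISA-L plugin (k/m are placeholders)
    ceph osd erasure-code-profile set ec-isa k=4 m=2 plugin=isa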

Lots of questions; maybe you can help answer some of them.

        Best regards, and thanks a lot for any feedback. Götz


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
