Re: Ceph Journal Disk Size

Ask the other guys on the list, but for me, losing 4TB of data is too much. The cluster will keep running fine, but at some point you need to recover that disk, and if you lose one server with all of its 4TB disks, that will really hurt the cluster. Also take into account that with that kind of disk you will get no more than 100-110 IOPS per disk.
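
As a rough back-of-envelope sketch (the backfill rate and peer count below are illustrative assumptions, not numbers measured on any cluster), you can estimate how long re-replicating a failed 4TB OSD might take:

    # Rough estimate of recovery time after losing one 4TB OSD.
    # All inputs are illustrative assumptions, not measurements.
    failed_osd_tb = 4.0       # data held by the failed OSD
    backfill_mb_s = 50.0      # assumed backfill rate each peer OSD sustains
    recovering_osds = 7       # assumed number of peers sharing the work

    data_mb = failed_osd_tb * 1024 * 1024
    hours = data_mb / (backfill_mb_s * recovering_osds) / 3600
    print("Estimated recovery time: %.1f hours" % hours)  # ~3.3 hours

And that is the optimistic case, since backfill competes with client I/O on those same 100-110 IOPS spindles.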

German Anders
Storage System Engineer Leader
Despegar | IT Team
office +54 11 4894 3500 x3408
mobile +54 911 3493 7262
mail ganders@xxxxxxxxxxxx

2015-07-01 20:54 GMT-03:00 Nate Curry <curry@xxxxxxxxxxxxx>:

4TB is too much to lose?  Why would it matter if you lost one 4TB disk, given the redundancy?  Won't it auto-recover from the disk failure?

Nate Curry

On Jul 1, 2015 6:12 PM, "German Anders" <ganders@xxxxxxxxxxxx> wrote:
I would probably go with smaller OSD disks; 4TB is too much to lose in case of a broken disk, so maybe more OSD daemons with less capacity each, say 1TB or 2TB. A 4:1 OSD-to-journal relationship is good enough, and I also think that 200GB disks for the journals would be OK, so you can save some money there. Configure the OSDs as JBOD, don't use any RAID under them, and use two different networks for the public and cluster traffic.
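
A minimal ceph.conf sketch of that layout (the subnets are placeholders, and the journal size is only an illustrative value, not a recommendation from this thread):

    [global]
    ; client-facing traffic (placeholder subnet)
    public network = 10.0.0.0/24
    ; replication/recovery traffic (placeholder subnet)
    cluster network = 10.0.1.0/24

    [osd]
    ; size in MB; the docs suggest
    ; 2 * (expected throughput * filestore max sync interval)
    osd journal size = 10240

Keeping replication and recovery on their own network means a backfill storm doesn't starve client traffic.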

German

2015-07-01 18:49 GMT-03:00 Nate Curry <curry@xxxxxxxxxxxxx>:
I would like to get some clarification on the size of the journal disks I should get for the new Ceph cluster I am planning.  I read about the journal settings at http://ceph.com/docs/master/rados/configuration/osd-config-ref/#journal-settings but that didn't really clarify it for me, or I just didn't get it.  The Learning Ceph book from Packt states that you should have one journal disk for every 4 OSDs.  Using that as a reference, I was planning on getting multiple systems with 8 x 6TB nearline SAS drives for OSDs, two SSDs for journalling per host, plus 2 hot spares for the 6TB drives and 2 drives for the OS.  I was thinking of 400GB SSD drives but am wondering if that is too much.  Any informed opinions would be appreciated.
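
For reference, the docs page above gives a sizing rule that can be worked through; a quick sketch (the throughput and sync interval are assumed example values, not recommendations):

    # Journal sizing per the rule of thumb in the Ceph docs:
    #   osd journal size >= 2 * (expected throughput * filestore max sync interval)
    # The inputs below are assumed example values.
    throughput_mb_s = 120    # assumed sustained write rate of one 6TB OSD
    sync_interval_s = 5      # assumed filestore max sync interval (seconds)
    osds_per_ssd = 4         # one journal SSD shared by 4 OSDs

    journal_mb_per_osd = 2 * throughput_mb_s * sync_interval_s
    ssd_mb_needed = journal_mb_per_osd * osds_per_ssd
    print("Per OSD: %d MB, per SSD: %d MB" % (journal_mb_per_osd, ssd_mb_needed))
    # -> Per OSD: 1200 MB, per SSD: 4800 MB

On capacity alone that is nowhere near 400GB; larger journal SSDs are usually chosen for endurance and sustained write speed rather than space.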

Thanks,

Nate Curry


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
