Re: Ceph Journal Disk Size

On 07/02/15 18:27, Shane Gibson wrote:

On 7/2/15, 9:21 AM, "Nate Curry" <curry@xxxxxxxxxxxxx> wrote:

Are you using the 4TB disks for the journal?

Nate - yes, at the moment the journal is on 4 TB 7200 rpm disks, the same type as the OSDs.  It's what I've got for hardware ... sitting around in 60 servers that I could grab.  I realize it's less than ideal - but ... beggars ...

I've structured my OSDs with 1 spinning disk as Journal to 5 OSDs,

Ouch. These spinning journal disks are probably a bottleneck: the regular advice on this list is to use one datacenter-grade SSD per 4 OSDs. Failing that, you would probably be better off with a dedicated journal partition at the beginning of each OSD disk (or, worse, a file on the OSD's filesystem) - either should still beat a single spinning disk shared by 5 OSDs.
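For reference, the journal location and size can be set in ceph.conf (a minimal sketch; the partition-label scheme and the 10 GB size here are illustrative assumptions, not recommendations):

```ini
[osd]
# point each OSD at a dedicated journal partition on its own disk
# (journal-$id is a hypothetical GPT partition label; $id expands to the OSD id)
osd journal = /dev/disk/by-partlabel/journal-$id
# journal size in MB - 10 GB here, tune to your write throughput
osd journal size = 10240
```

The key point is that the journal partition lives on the same spindle as (or on an SSD dedicated to) each OSD, rather than one shared spinning disk absorbing the journal writes of 5 OSDs.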

Anyway, given that you get to use 720 disks (12 disks on each of 60 servers), I'd still prefer your setup to mine (24 OSDs): even with what I consider a bottleneck, your setup has probably far more aggregate bandwidth ;-)

A reaction to one of your earlier mails:
You said you are going to 8 TB drives. The problem isn't so much the time needed to create new replicas when an OSD fails, but the time to fill a freshly installed one. Rebalancing is much faster when you add 4 x 2 TB drives than 1 x 8 TB drive, because the backfill writes are spread across four spindles in parallel instead of funnelled into one.
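The arithmetic behind that point is simple (a back-of-the-envelope sketch; the ~100 MB/s sustained backfill rate per spindle is an illustrative assumption, not a measured figure):

```python
def refill_hours(capacity_tb: float, write_mb_s: float) -> float:
    """Time to fill one drive at a sustained write rate, in hours."""
    # capacity in TB -> MB, divided by MB/s, then seconds -> hours
    return capacity_tb * 1e6 / write_mb_s / 3600

# assume ~100 MB/s sustained backfill per spindle (illustrative)
one_8tb = refill_hours(8, 100)   # ~22.2 h to fill a single 8 TB drive
four_2tb = refill_hours(2, 100)  # ~5.6 h: the four 2 TB drives fill in parallel
```

Same total capacity added either way, but the cluster spends roughly a quarter of the time in a degraded/rebalancing state with the four smaller drives.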

Lionel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
