Re: OSD to OSD Communication

On 08/30/2013 01:47 PM, Dimitri Maziuk wrote:
> On 08/30/2013 01:38 PM, Geraint Jones wrote:
>> On 30/08/13 11:33 AM, "Wido den Hollander" <wido@xxxxxxxx> wrote:
>>> On 08/30/2013 08:19 PM, Geraint Jones wrote:
>>>> Hi Guys,
>>>>
>>>> We are using Ceph in production backing an LXC cluster. The setup is: 2
>>>> servers with 24 x 3TB disks each, in groups of 3 as RAID0, SSDs for the
>>>> journals, and bonded 1gbit ethernet (2gbit total).
>>>
>>> I think you sized your machines too big. I'd say go for 6 machines with
>>> 8 disks each, without RAID-0. Let Ceph do its job and avoid RAID.
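
For illustration, a minimal sketch of the "one OSD per raw disk, journal on a
shared SSD" layout being suggested here, assuming the ceph-disk tool that
ships with Ceph; the device names below are placeholders, not anyone's actual
hardware:

#!/usr/bin/env python
# Sketch only: prepare one OSD per raw data disk, with the journal carved
# out of a shared SSD. Device names are hypothetical placeholders.
import subprocess

DATA_DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # one OSD per physical disk, no RAID
JOURNAL_SSD = "/dev/sda"  # ceph-disk allocates one journal partition per OSD here

for disk in DATA_DISKS:
    # Partition and format the data disk; a journal partition is created on
    # the SSD, sized from "osd journal size" in ceph.conf.
    subprocess.check_call(["ceph-disk", "prepare", disk, JOURNAL_SSD])
    # udev normally activates the new OSD automatically; this is the manual fallback.
    subprocess.check_call(["ceph-disk", "activate", disk + "1"])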

>> Typical traffic is fine - it's just been an issue tonight :)

> If you're hosed and have to recover a 9TB filesystem, you'll have problems
> no matter what, Ceph or no Ceph. You *will* have a disk failure every once
> in a while, and there's no "R" in RAID-0, so don't think what happened is
> unusual.
>
> (There's nothing wrong with RAID as long as it's >0.)

One exception: some controllers (looking at you, LSI!) don't expose disks as JBOD, or if they do, don't let you use write-back cache. In those cases we sometimes have people make single-disk RAID0 LUNs. :)
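
Again, only a sketch of that single-disk RAID0 workaround, assuming an LSI
MegaRAID controller driven with MegaCli; the adapter, enclosure, and slot
numbers are placeholders, and the exact flags vary between firmware versions:

#!/usr/bin/env python
# Sketch only: turn each physical disk into its own RAID0 logical drive with
# write-back cache, so the controller presents it to Ceph like a JBOD disk.
import subprocess

ADAPTER = "0"         # first MegaRAID adapter (placeholder)
ENCLOSURE = "32"      # enclosure ID as reported by "MegaCli -PDList -aALL" (placeholder)
SLOTS = range(0, 24)  # one single-disk logical drive per physical slot

for slot in SLOTS:
    # -CfgLdAdd -r0[enc:slot] WB -> one-disk RAID0 with write-back caching
    subprocess.check_call([
        "MegaCli", "-CfgLdAdd",
        "-r0[%s:%d]" % (ENCLOSURE, slot),
        "WB",
        "-a" + ADAPTER,
    ])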




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

