Re: OSD to OSD Communication

Is that not the point of the cluster_network - that it shouldn't be able
to communicate with other networks...
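For reference, the split being discussed is driven by two options in ceph.conf. A minimal sketch (the subnets here are placeholders, not values from this thread):

```ini
[global]
    # client <-> OSD and monitor traffic
    public network = 192.168.1.0/24
    # OSD <-> OSD replication, heartbeat and recovery traffic
    cluster network = 10.0.0.0/24
```

With `cluster network` set, OSDs bind their back-side (replication) sockets to an address in that subnet; whether the two networks can reach each other is purely a question of your routing and firewalling, which is what Greg's caveat below is about.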

On 30/08/13 1:57 PM, "Gregory Farnum" <greg@xxxxxxxxxxx> wrote:

>Assuming the networks can intercommunicate, yes.
>-Greg
>Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
>On Fri, Aug 30, 2013 at 1:09 PM, Geraint Jones <geraint@xxxxxxxxxx> wrote:
>> One Other thing
>>
>> If I set cluster_network on node0 and restart it, then do the same on
>> node1, will I be able to maintain availability while I roll the change
>> out?
>>
>> On 30/08/13 11:47 AM, "Dimitri Maziuk" <dmaziuk@xxxxxxxxxxxxx> wrote:
>>
>>>On 08/30/2013 01:38 PM, Geraint Jones wrote:
>>>>
>>>>
>>>> On 30/08/13 11:33 AM, "Wido den Hollander" <wido@xxxxxxxx> wrote:
>>>>
>>>>> On 08/30/2013 08:19 PM, Geraint Jones wrote:
>>>>>> Hi Guys
>>>>>>
>>>>>> We are using Ceph in production backing an LXC cluster. The setup is:
>>>>>> 2 x servers, 24 x 3TB disks each, in groups of 3 as RAID0. SSD for
>>>>>> journals. Bonded 1gbit ethernet (2gbit total).
>>>>>>
>>>>>
>>>>> I think you sized your machines too big. I'd say go for 6 machines
>>>>> with 8 disks each without RAID-0. Let Ceph do its job and avoid RAID.
>>>>
>>>> Typical traffic is fine - it's just been an issue tonight :)
>>>
>>>If you're hosed and have to recover a 9TB filesystem, you'll have problems
>>>no matter what, ceph or no ceph. You *will* have a disk failure every
>>>once in a while, and there's no "r" in raid-0, so don't think what
>>>happened is not typical.
>>>
>>>(There's nothing wrong with raid as long as it's >0.)
>>>--
>>>Dimitri Maziuk
>>>Programmer/sysadmin
>>>BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
>>>
>>
>>
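The rolling change Geraint asks about is usually done one daemon at a time, so that replicas on the other node keep the data available throughout. A hedged sketch of the procedure (commands assume a sysvinit-era Ceph deployment of that period; the OSD id is a placeholder):

```shell
# On node0: add cluster_network to ceph.conf first, then restart
# its OSDs one at a time so replicas on node1 keep serving I/O.
ceph osd set noout        # keep CRUSH from rebalancing during the restarts
service ceph restart osd.0
ceph health               # wait for HEALTH_OK / active+clean before the next OSD
# ...repeat for each OSD on node0, then do the same on node1...
ceph osd unset noout      # re-enable normal out-marking when done
```

The `noout` flag is what prevents a restarting OSD from being marked out and triggering recovery traffic mid-rollout; whether the restarted OSDs can still peer with the not-yet-migrated ones depends on the networks being routable, per Greg's reply above.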


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



