Re: how to set up disks in the same host

Thank you Uwe. That clarifies it.


On Sat, Dec 7, 2013 at 5:43 PM, Uwe Grohnwaldt <uwe@xxxxxxxxxxxxx> wrote:
> Hi,
>
> You can find the Ceph architecture documented on this page: http://ceph.com/docs/master/architecture/
>
> The interesting picture for you may be this one:
> http://ceph.com/docs/master/_images/ditaa-54719cc959473e68a317f6578f9a2f0f3a8345ee.png
>
> Your client writes the file to one OSD, and before this OSD acknowledges your write request, it ensures that the data is copied to the other OSD(s).
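>
> If you want to check or change how many copies a pool keeps, look at
> the pool's size, and at min_size, the minimum number of replicas that
> must be up before the pool accepts I/O. A minimal example, assuming
> the default "rbd" pool (adjust the pool name for your setup):
>
>     ceph osd pool get rbd size
>     ceph osd pool set rbd size 3
>     ceph osd pool set rbd min_size 2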
>
> Best Regards,
> --
> Uwe Grohnwaldt
>
> ----- Original Message -----
>> From: "Cristian Falcas" <cristi.falcas@xxxxxxxxx>
>> To: "Wido den Hollander" <wido@xxxxxxxx>
>> Cc: ceph-users@xxxxxxxxxxxxxx
>> Sent: Saturday, 7 December 2013 15:44:08
>> Subject: Re:  how to set up disks in the same host
>>
>> Hi,
>>
>> Thank you for the answer.
>>
>> But how fast does Ceph's replication happen? Because if it's not
>> almost instantaneous, you will lose some data if the hard drive
>> fails.
>>
>> Does anybody know how fast the copies are written to the other OSDs?
>>
>>
>>
>> On Fri, Dec 6, 2013 at 12:31 PM, Wido den Hollander <wido@xxxxxxxx>
>> wrote:
>> > On 12/06/2013 11:00 AM, Cristian Falcas wrote:
>> >>
>> >> Hi all,
>> >>
>> >> Which of these two disk setups will be the faster:
>> >> - 1 OSD built from 6 disks in RAID 10 and one SSD for the journal
>> >> - 3 OSDs, each with 2 disks in RAID 1, and a common SSD for all
>> >> journals (or more SSDs if SSD performance becomes an issue)
>> >>
>> >> Mainly, will the single RAID 10 OSD be faster or slower than the
>> >> independent OSDs?
>> >>
>> >
>> > Simply run 6 OSDs without any RAID and one SSD for the journaling.
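>> >
>> > For example with ceph-deploy (the host and device names below are
>> > just placeholders; give each OSD its own journal partition on the
>> > shared SSD):
>> >
>> >     ceph-deploy osd create host1:sdb:/dev/sdg1
>> >     ceph-deploy osd create host1:sdc:/dev/sdg2
>> >     ceph-deploy osd create host1:sdd:/dev/sdg3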
>> >
>> > The danger, though, is that if you lose the journal, you lose all
>> > the OSDs behind it. So better to place two SSDs, with 3 OSDs per
>> > journal, and make sure via CRUSH that object replicas go to OSDs
>> > on different journals.
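>> >
>> > Roughly, that means editing the crushmap so the OSDs sharing a
>> > journal SSD sit in their own bucket, and replicas are spread
>> > across those buckets. A sketch (the bucket layout described in the
>> > comments is made up for illustration):
>> >
>> >     ceph osd getcrushmap -o map.bin
>> >     crushtool -d map.bin -o map.txt
>> >     # in map.txt: put the 3 OSDs behind each journal SSD into
>> >     # their own "host" bucket, and use a rule containing:
>> >     #   step chooseleaf firstn 0 type host
>> >     crushtool -c map.txt -o map.new
>> >     ceph osd setcrushmap -i map.new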
>> >
>> > You shouldn't use RAID underneath an OSD; let the replication
>> > handle all of that.
>> >
>> >> Best regards,
>> >> Cristian Falcas
>> >
>> >
>> > --
>> > Wido den Hollander
>> > 42on B.V.
>> >
>> > Phone: +31 (0)20 700 9902
>> > Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com