Re: Re: Re: thanks gregf,question about ceph


 



Sorry I didn't get to this very quickly -- in the future, if you keep
your questions on the list rather than sending them to a single
person, you will get faster responses.

2011/1/14 yueluck <yueluck@xxxxxxx>:
> Because Ceph is not stable now, and my Xen disks would be stored on it,
> I cannot use Ceph in my product!
> You do not have much experience here? You are a developer, I think?
Yes, I am a developer on Ceph. I have limited experience with other
storage systems, though, especially those targeted at Xen node
storage.

Beyond that, there isn't a single "best" solution to this kind of
problem. It depends a great deal on your individual needs, and I don't
know what they are except that you want to store images. So, I can
recommend that you *look at* RBD, which is a part of Ceph but
considerably less complicated (and more stable) than the Ceph
POSIX-compliant filesystem, and I can recommend that you *look at*
Sheepdog, which is designed precisely and only for storing VM images
(although I think it's focused on KVM). Determining which of these (or
other alternatives that I'm unaware of) is best for you is something
you have to do on your own. It will require a lot of work no matter
what solution you pick -- if it were an easy problem to solve, somebody
would have done so already.
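
If you do evaluate RBD, a first experiment might look like the
following sketch. This assumes a running Ceph cluster and the rbd
kernel module on the Xen host; the image name is made up:

```shell
# Create a 10 GB image for a guest disk (image name is an example).
rbd create --size 10240 vmdisk01

# Map it as a block device on the host (requires the rbd kernel module).
rbd map vmdisk01

# The image then appears as a /dev/rbd* device, which can be handed
# to a Xen guest as its virtual disk.
```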
-Greg

> I am reading the source code, relying on the logs; I know the outline, but not the details.
> Do you read the code?
>
> -----------------------------------
> At 2011-01-15 01:18:56, "Gregory Farnum" <gregf@xxxxxxxxxxxxxxx> wrote:
>
>>I don't have much experience here, but if Ceph's own RBD doesn't
>>satisfy your needs I guess I'd look at Sheepdog.
>>
>>2011/1/13 yueluck <yueluck@xxxxxxx>:
>>> Can you recommend another DFS, like Ceph, that you know or have experience with?
>>> I will store Xen disks on it. It should be stable, support multiple nodes, and have good throughput.
>>>
>>>
>>>
>>> thanks
>>> ------------------------------------------------------
>>>
>>> At 2011-01-10 02:05:52, "Gregory Farnum" <gregf@xxxxxxxxxxxxxxx> wrote:
>>>
>>>>Moving this to the list where it belongs.
>>>>
>>>>2011/1/9 yueluck <yueluck@xxxxxxx>:
>>>>> 1. ceph-client-standalone.git vs. ceph-client.git:
>>>>> what is the difference between these client repositories? I know
>>>>> ceph-client-standalone.git is for ceph.ko;
>>>>> what does ceph-client.git do?
>>>>ceph-client.git is the branch we do all our in-kernel development on.
>>>>ceph-client-standalone is the fs/ceph tree of ceph-client with
>>>>additional #ifdefs for backporting. If possible you should be using
>>>>ceph-client as it's more tested and up-to-date.
>>>>
>>>>> 2. What is the difference between scsi-osd and RADOS?
>>>>> scsi-osd is part of the SCSI Architecture Model; both are object-based storage, right?
>>>>> Does RADOS depend on scsi-osd (osd.ko, libosd.ko, ...)?
>>>>Ceph OSDs are Object Storage Devices/Daemons. They are conceptually
>>>>similar to SCSI OSDs but are not at all the same thing.
>>>>
>>>>> 3. Do you have a Ceph call-flow document (other than the wiki)
>>>>> that would let me learn the Ceph architecture quickly?
>>>>Unfortunately, no. However, Sage's thesis (which is available on the
>>>>Ceph website) has a pretty good description of the protocols, if not
>>>>the module architecture. You can check that out for a lot more
>>>>information than is on the wiki. :)
>>>>-Greg
>>>
>>>
>>>
>
>
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

