Re: Questions about ceph OSD

Originally, the client <-> mds messages contained the full path of the
files being written, similar to cifs. This was a bit racy and
problematic, but with a few modifications to the client code and to the
protocol you should be able to build that full path and send it along
with the osd I/O operations (again, I'm not sure what can of worms the
full-path approach would open, or whether it's actually feasible
nowadays).
The next thing you'd need is to modify the osd operation handling and
mirror each write operation to your ext3 partition. I think that'd be
the easiest path to take, though somewhat hacky.
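
For what it's worth, here is a rough, self-contained sketch of the
mirroring idea. None of the names below are real ceph internals; it
only shows the shape of the hook you'd add in the osd write path,
assuming the full client path is available there and the ext3 disk is
mounted locally on the osd host:

#include <cstdint>
#include <string>
#include <fcntl.h>
#include <unistd.h>

// Hypothetical constants: the client-visible prefix that marks the
// "secret" files and the mount point of the local ext3 disk.
static const std::string SECRET_PREFIX = "/secret/";
static const std::string EXT3_ROOT = "/mnt/ext3";

// Hypothetical hook, called from the modified osd write handling after
// the normal object write has been applied.  It copies the same byte
// range into a file under the ext3 mount; it assumes the directory
// tree under EXT3_ROOT already exists.
bool mirror_write_to_ext3(const std::string &client_path,
                          uint64_t offset, const char *data, size_t len)
{
  if (client_path.compare(0, SECRET_PREFIX.size(), SECRET_PREFIX) != 0)
    return true;  // not under secret/, nothing to mirror

  std::string local = EXT3_ROOT + client_path;
  int fd = ::open(local.c_str(), O_WRONLY | O_CREAT, 0600);
  if (fd < 0)
    return false;
  ssize_t r = ::pwrite(fd, data, len, static_cast<off_t>(offset));
  ::close(fd);
  return r == static_cast<ssize_t>(len);
}

Reads of the secret files would of course need the same kind of
handling in the other direction, and if the data really must not touch
the OSDs you'd have to skip the normal object write as well, not just
mirror it.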

Yehuda

On Tue, Nov 2, 2010 at 7:53 PM, Jeff Wu <cpwu@xxxxxxxxxxxxx> wrote:
> Hi Yehuda,
>
> Thank you for your quick reply; your idea is very useful for us.
> Our design is to use only ceph as our file system, without NFS or
> CIFS. I will continue to study the ceph code. Thanks.
>
> Jeff.wu
>
>
>
>
> On Wed, 2010-11-03 at 10:39 +0800, Yehuda Sadeh Weinraub wrote:
>> Hi,
>>
>>   There are probably a few directions you can go, but I'm not
>> sure which would be the easiest or whether it'd actually make sense.
>> Just to get an idea of what you're really looking for, why
>> wouldn't doing an nfs mount over secret/ work? Is there a real need
>> for that metadata to go through the mds?
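>>
>> Just to illustrate (the addresses and paths here are only an example,
>> not anything from your setup): export the ext3 disk over NFS from
>> whichever host holds it, e.g. with an /etc/exports line like
>>
>>   /mnt/ext3/secret   10.0.0.0/24(rw,sync,no_subtree_check)
>>
>> and then, on each client, mount it on top of the secret/ directory
>> inside the ceph mount:
>>
>>   $ mount.ceph 1.2.3.4:6789:/ /ceph
>>   $ mount -t nfs 10.0.0.5:/mnt/ext3/secret /ceph/secret
>>
>> That keeps the secret data off the OSDs entirely without touching the
>> ceph code.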
>>
>> Yehuda
>>
>> On Tue, Nov 2, 2010 at 7:18 PM, Jeff Wu <cpwu@xxxxxxxxxxxxx> wrote:
>> > Hi,
>> >
>> > Any ideas or suggestions about this? Or, to solve this issue, which
>> > parts of the ceph code (OSD, CRUSH, OSD cluster expansion, etc.) do I
>> > need to look into deeply? Thanks.
>> >
>> >
>> > Jeff.Wu
>> >
>> >
>> > ---------- Forwarded message ----------
>> > From: Jeff Wu <cpwu@xxxxxxxxxxxxx>
>> > To: "sage@xxxxxxxxxxxx" <sage@xxxxxxxxxxxx>, "ceph-devel@xxxxxxxxxxxxxxx" <ceph-devel@xxxxxxxxxxxxxxx>
>> > Date: Tue, 2 Nov 2010 14:49:58 +0800
>> > Subject: Questions about ceph OSD
>> > Hi,
>> >
>> > I have recently been doing preliminary research on whether ceph can
>> > be used in our cloud computing storage system in the future. Ceph is
>> > an excellent file system, but now I have hit a problem that is very
>> > important for us:
>> >
>> > Normally, as ceph is designed, all of the data is saved on OSDs
>> > according to the CRUSH map.
>> >
>> > But my goal is to:
>> > 1. add an ext3 disk to the ceph system, but not set it up as an OSD;
>> > 2. save the normal data on the OSDs of the ceph system;
>> > 3. save the special data to a local ext3 disk in the ceph system.
>> >    The ext3 disk is not set up as a ceph OSD, but its info is added
>> >    to the MDS and MON. When a ceph client writes the special data, it
>> >    is not saved to the ceph server OSDs but to the local ext3 disk.
>> >
>> > Like this:
>> > At a ceph client:
>> > $ mount.ceph 1.2.3.4:6789:/ /ceph
>> > $ cd /ceph
>> > $ ls
>> > cloud user secret
>> >
>> > When writing, data in the "cloud" and "user" folders is saved to the
>> > ceph OSDs, but data in the "secret" folder is saved to the local ext3
>> > disk.
>> >
>> > My questions are:
>> > Q1 - If I add a new OSD to the ceph system but give it a special
>> > CRUSH placement map, could that solve this problem? If so, how?
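>> >
>> > For example, would a rule along these lines work (just my guess at
>> > the syntax, assuming the secret data could be put into its own pool
>> > and the new disk shows up under a dedicated bucket named, say,
>> > "secret-host")?
>> >
>> >   rule secret {
>> >           ruleset 3
>> >           type replicated
>> >           min_size 1
>> >           max_size 1
>> >           step take secret-host
>> >           step choose firstn 1 type osd
>> >           step emit
>> >   }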
>> >
>> > Q2 - If I implement this function as a patch myself, could you give
>> > me any ideas, or tell me what parts of the ceph code (server/client)
>> > I need to look into?
>> >
>> > Thanks for any help and replies.
>> >
>> >
>> > Jeff.Wu
>> > Transoft Inc., Sr. software engineer.
>> >
>> >
>> >
>> >
>> >
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

