Re: Document and example for libceph, librbd and librados

On Thu, 5 May 2011, tsk wrote:
> 2011/5/4 Wido den Hollander <wido@xxxxxxxxx>:
> > Hi,
> >
> > On Wed, 2011-05-04 at 22:22 +0800, tsk wrote:
> >> 2011/5/4 Wido den Hollander <wido@xxxxxxxxx>:
> >> > Hi,
> >> >
> >> > On Wed, 2011-05-04 at 20:36 +0800, tsk wrote:
> >> >> Hi folks,
> >> >>
> >> >>      Is there any documentation for libceph, librbd and librados, and where
> >> >> can I find some examples for them?
> >> >
> >> > What kind of examples are you looking for? In the Ceph git repository
> >> > there is:
> >> >
> >> > * rados.cc (The rados tool)
> >> > * testradospp.cpp
> >> > * testrbdpp.cpp
> >>
> >> Thx!
> >> I have already run through this test code; I think it will be useful for me.
> >>
> >> Hmm, is there any simple application example for these libraries?
> >
> > What kind of example are you looking for? I think everything you are
> > searching for is in the testradospp/testrbdpp files, those really show
> > what you can do.
> >
> > Could you explain what you are trying to achieve/write?
> >
> > Wido
> 
> I have an application that I have decided to build on Ceph, which
> seems the most suitable fit. But there are some requirements I don't
> know how to satisfy with the Ceph libraries:
> 
> There will be almost 10000 big files, each 10G~500G. The kernel
> client will not be used; instead, a user-space process will handle
> read and write requests for each file using the Ceph libraries,
> because this bypasses the kernel VFS and should give better
> performance.
> 
> Problems I want to raise:
> 1.   I need to set the number of replicas for each file, e.g. 1
> replica for file A, 2 for file B, 3 for file C, etc.
> I think I should create several pools, each with a different replica
> size. Files needing a different replica count would be dropped into
> the corresponding pool. Is that workable?

Yep.  There is a per-file ioctl you can use to set the pool after creating 
the file but before writing any data to it, or there is a directory ioctl 
that sets the default layout/pool for new files created in that subtree.  
There should be equivalent libceph calls (if they aren't there yet, they 
are easily added to complete the interface).
 
> 2.   I should be able to create, delete, list, and roll back
> snapshots for each file. Which APIs should I use in libceph, librbd
> and librados?

The file system doesn't have a built-in 'rollback' operation.  You 
currently need to recopy the old data over the new, unfortunately.  It is 
possible to make this efficient (at least for individual files), but we 
haven't done it yet.
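For rbd images (as opposed to files), librbd does expose the full snapshot cycle. A hedged sketch of those C calls, assuming a reachable cluster, a pool named "rbd", and an existing image named "myimage" (both names are placeholders), with error handling abbreviated to keep the flow visible:

```c
/* Sketch of the librbd C snapshot calls: create, list, roll back,
 * remove. Requires a live cluster; pool/image names are placeholders. */
#include <stdio.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rbd_image_t image;

    rados_create(&cluster, NULL);         /* client.admin by default */
    rados_conf_read_file(cluster, NULL);  /* default ceph.conf search path */
    if (rados_connect(cluster) < 0)
        return 1;

    rados_ioctx_create(cluster, "rbd", &io);  /* placeholder pool */
    rbd_open(io, "myimage", &image, NULL);    /* placeholder image */

    rbd_snap_create(image, "snap1");

    /* list existing snapshots */
    rbd_snap_info_t snaps[16];
    int max = 16;
    int n = rbd_snap_list(image, snaps, &max);
    for (int i = 0; i < n; i++)
        printf("snap %s (%llu bytes)\n", snaps[i].name,
               (unsigned long long)snaps[i].size);
    rbd_snap_list_end(snaps);

    /* roll the image content back to the snapshot, then drop it */
    rbd_snap_rollback(image, "snap1");
    rbd_snap_remove(image, "snap1");

    rbd_close(image);
    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}
```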
 
> 3.   The user-space process and the cosd process will run on the same
> host. To lower network utilization, force-feeding will be used. I
> guess read performance may be better when doing this; is that right?
> I need to confirm it. How do I configure the force-feeding option? In
> the libs or in the CRUSH map?

There is a 'preferred' field you can set via the libceph interface.  
There is also a get_local_osd() that tells you an osd on the local node 
(if there is one).
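A short sketch of the local-OSD query through libceph; the header name and exact signatures have moved around between versions, so treat the details below as assumptions to check against your tree:

```c
/* Sketch: mount the filesystem via libceph and ask which OSD, if any,
 * runs on this host, so reads can be steered to the local replica.
 * Requires a live cluster. */
#include <stdio.h>
#include <cephfs/libcephfs.h>   /* "libceph.h" in older trees */

int main(void)
{
    struct ceph_mount_info *cmount;

    ceph_create(&cmount, NULL);          /* NULL = default client id */
    ceph_conf_read_file(cmount, NULL);   /* default ceph.conf search path */
    if (ceph_mount(cmount, "/") < 0)
        return 1;

    int osd = ceph_get_local_osd(cmount);
    if (osd >= 0)
        printf("local osd: %d\n", osd);
    else
        printf("no osd on this host\n");

    ceph_shutdown(cmount);
    return 0;
}
```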

sage
