Re: Difference between CephFS and RBD

CephFS doesn't use RBD; it uses the RADOS protocol directly (the same
protocol that RBD uses behind the scenes). You can set striping
parameters for files in CephFS, though, just as you can for RBD.
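
For example (a rough sketch; the mount point, pool/image names, and
values below are placeholders, so check the docs for your Ceph release),
a CephFS file's layout can be set with extended attributes before any
data is written to it:

    touch /mnt/cephfs/output.dat
    setfattr -n ceph.file.layout.stripe_unit -v 1048576 /mnt/cephfs/output.dat
    setfattr -n ceph.file.layout.stripe_count -v 4 /mnt/cephfs/output.dat

and an RBD image can be given similar striping at creation time:

    rbd create mypool/myimage --size 10240 --stripe-unit 1048576 --stripe-count 4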

The real problem here, as I understand it, is simultaneous access to
the same file. Write locks happen at the file level, so multiple nodes
reading/writing the same file are not going to be fast (or even
possible, depending on how well-behaved the client is).

Multiple writers to the same file are generally something to avoid on
most filesystems. Ideally you would have a single writer that the data
gets streamed to. Or you could have every MPI "instance" (rank) write
to its own file and concatenate the files at the end.

--
Adam


On Mon, Jul 6, 2015 at 3:04 PM, Hadi Montakhabi <hadi@xxxxxxxxx> wrote:
> Great explanation!
>
> Consider this scenario:
> Given an MPI application which runs on a number of clients (more than one),
> and does some I/O operations.
> Based on the explanation above, it looks like it is not possible to have
> multiple clients read or write to the same file at the same time when we
> have a Ceph block device mounted.
> Therefore, the feasible option could be to mount CephFS over RBD for
> different clients to be able to do parallel I/O and take advantage of the
> features Ceph provides using RBD (striping, for instance).
> Does this sound correct?
>
> Thanks,
> Hadi
>
> On Mon, Jul 6, 2015 at 1:28 PM, Scott Laird <scott@xxxxxxxxxxx> wrote:
>>
>> CephFS is a filesystem, RBD is a block device.  CephFS is a lot like NFS;
>> it's a filesystem shared over the network where different machines can
>> access it all at the same time.  RBD is more like a hard disk image, shared
>> over the network.  It's easy to put a normal filesystem (like ext2) on top
>> of it and mount it on a computer, but if you mount the same RBD device on
>> multiple computers at once then Really Bad Things are going to happen to the
>> filesystem.
>>
>> In general, if you want to share a bunch of files between multiple
>> machines, then CephFS is your best bet.  If you want to store a disk image,
>> perhaps for use with virtual machines, then you want RBD.  If you want
>> storage that is mostly compatible with Amazon's S3, then use radosgw.
>>
>> On Mon, Jul 6, 2015 at 8:04 AM Hadi Montakhabi <hadi@xxxxxxxxx> wrote:
>>>
>>> Hello Cephers,
>>>
>>> I can't quite grasp the difference between CephFS and the RADOS Block
>>> Device (RBD).
>>> In both cases we mount the storage on the client, and it uses the
>>> storage from the storage cluster.
>>> Would someone explain it to me like I am five please?
>>>
>>> Thanks,
>>> Hadi
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


