Re: [PATCH] Documentation: modern versions of ceph are not backed by btrfs

On Tue, Mar 5, 2019 at 1:34 PM Jeff Layton <jlayton@xxxxxxxxxx> wrote:
>
> Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
> ---
>  Documentation/filesystems/ceph.txt | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/Documentation/filesystems/ceph.txt b/Documentation/filesystems/ceph.txt
> index 1177052701e1..e5b69bceb033 100644
> --- a/Documentation/filesystems/ceph.txt
> +++ b/Documentation/filesystems/ceph.txt
> @@ -22,9 +22,7 @@ In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
>  on symmetric access by all clients to shared block devices, Ceph
>  separates data and metadata management into independent server
>  clusters, similar to Lustre.  Unlike Lustre, however, metadata and
> -storage nodes run entirely as user space daemons.  Storage nodes
> -utilize btrfs to store data objects, leveraging its advanced features
> -(checksumming, metadata replication, etc.).  File data is striped
> +storage nodes run entirely as user space daemons.  File data is striped
>  across storage nodes in large chunks to distribute workload and
>  facilitate high throughputs.  When storage nodes fail, data is
>  re-replicated in a distributed fashion by the storage nodes themselves

Applied.  I updated the links at the bottom as well.

Thanks,

                Ilya
