Re: xattrs vs omap

It has been replaced with the following config options:

// Use omap for xattrs larger than filestore_max_inline_xattr_size
OPTION(filestore_max_inline_xattr_size, OPT_U32, 0)     //Override
OPTION(filestore_max_inline_xattr_size_xfs, OPT_U32, 65536)
OPTION(filestore_max_inline_xattr_size_btrfs, OPT_U32, 2048)
OPTION(filestore_max_inline_xattr_size_other, OPT_U32, 512)

// Use omap when an object has more than filestore_max_inline_xattrs attrs
OPTION(filestore_max_inline_xattrs, OPT_U32, 0) //Override
OPTION(filestore_max_inline_xattrs_xfs, OPT_U32, 10)
OPTION(filestore_max_inline_xattrs_btrfs, OPT_U32, 10)
OPTION(filestore_max_inline_xattrs_other, OPT_U32, 2)


If these limits are crossed, the xattrs will be stored in omap.

For ext4, you can use either the filestore_max_*_other options or filestore_max_inline_xattrs / filestore_max_inline_xattr_size. In any case, the latter two (when set to a nonzero value) will override everything.
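As a sketch, an ext4-backed OSD could set the global overrides in ceph.conf like this (the values below are illustrative, not tuned recommendations):

    [osd]
    # Nonzero values here take precedence over the
    # *_xfs / *_btrfs / *_other per-filesystem defaults.
    filestore_max_inline_xattr_size = 512
    filestore_max_inline_xattrs = 2

Leaving both at 0 (the default) means the per-filesystem defaults shown above are used instead.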

Thanks & Regards
Somnath

-----Original Message-----
From: Christian Balzer [mailto:chibi@xxxxxxx]
Sent: Wednesday, July 01, 2015 5:26 PM
To: Ceph Users
Cc: Somnath Roy
Subject: Re:  xattrs vs omap


Hello,

On Wed, 1 Jul 2015 15:24:13 +0000 Somnath Roy wrote:

> It doesn't matter, I think filestore_xattr_use_omap is a 'noop'  and
> not used in the Hammer.
>
Then what was this functionality replaced with, esp. considering EXT4 based OSDs?

Chibi
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> Of Adam Tygart Sent: Wednesday, July 01, 2015 8:20 AM
> To: Ceph Users
> Subject:  xattrs vs omap
>
> Hello all,
>
> I've got a coworker who put "filestore_xattr_use_omap = true" in the
> ceph.conf when we first started building the cluster. Now he can't
> remember why. He thinks it may be a holdover from our first Ceph
> cluster (running dumpling on ext4, iirc).
>
> In the newly built cluster, we are using XFS with 2048 byte inodes,
> running Ceph 0.94.2. It currently has production data in it.
>
> From my reading of other threads, it looks like this is probably not
> something you want set to true (at least on XFS), due to performance
> implications. Is this something you can change on a running cluster?
> Is it worth the hassle?
>
> Thanks,
> Adam
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ________________________________
>
> PLEASE NOTE: The information contained in this electronic mail message
> is intended only for the use of the designated recipient(s) named above.
> If the reader of this message is not the intended recipient, you are
> hereby notified that you have received this message in error and that
> any review, dissemination, distribution, or copying of this message is
> strictly prohibited. If you have received this communication in error,
> please notify the sender by telephone or e-mail (as shown above)
> immediately and destroy any and all copies of this message in your
> possession (whether hard copies or electronically stored copies).
>
>


--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/



