setfattr ... does not work anymore for pools

Hmm, you're not allowed to set real xattrs on the CephFS root, and
we've had issues with that and the layout xattrs a few times. There
may have been a bug around this in v0.81 that is fixed in master, but
I don't remember exactly when it last came up.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
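
One thing worth ruling out first: the MDS returns EINVAL for
ceph.dir.layout.pool when the named pool is not in its list of data
pools. A minimal check-and-add sequence (just a sketch, assuming admin
access; 'ceph mds add_data_pool' takes the numeric pool id here, newer
releases also accept the name) would be:

  ceph mds dump | grep data_pools    # pools currently allowed for file data
  ceph mds add_data_pool 3           # 3 = SSD-r2 in the 'ceph osd dump' below
  setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs/ssd-r2

If the pool already shows up in data_pools, the EINVAL is coming from
somewhere else and the client and MDS versions are the next suspects.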


On Mon, Aug 18, 2014 at 1:19 PM, Kasper Dieter
<dieter.kasper at ts.fujitsu.com> wrote:
> Hi Sage,
>
> I know about the setfattr syntax from
>         https://github.com/ceph/ceph/blob/master/qa/workunits/fs/misc/layout_vxattrs.sh
> e.g.
> setfattr -n ceph.dir.layout.pool -v data dir
> setfattr -n ceph.dir.layout.pool -v 2 dir
>
> But in my case it is not working:
>
> [root at rx37-1 ~]# setfattr -n ceph.dir.layout.pool -v 3 /mnt/cephfs/ssd-r2
> setfattr: /mnt/cephfs/ssd-r2: Invalid argument
>
> [root at rx37-1 ~]# setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs/ssd-r2
> setfattr: /mnt/cephfs/ssd-r2: Invalid argument
>
> [root at rx37-1 ~]# strace setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs/ssd-r2
> (...)
> setxattr("/mnt/cephfs/ssd-r2", "ceph.dir.layout.pool", "SSD-r2", 6, 0) = -1 EINVAL (Invalid argument)
>
> Same with ceph-fuse:
> [root at rx37-1 ~]# strace setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/ceph-fuse/ssd-r2
> (...)
> setxattr("/mnt/ceph-fuse/ssd-r2", "ceph.dir.layout.pool", "SSD-r2", 6, 0) = -1 EINVAL (Invalid argument)
>
>
> Setting all layout attributes at once does not work either:
> [root at rx37-1 cephfs]# setfattr -n ceph.dir.layout -v "stripe_unit=2097152 stripe_count=1 object_size=4194304 pool=SSD-r2" /mnt/cephfs/ssd-r2
> setfattr: /mnt/cephfs/ssd-r2: Invalid argument
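
(If that one-shot form still fails after the pool has been added as a
data pool, the individual fields from the same string can be tried one
at a time through the per-field vxattrs; a sketch against the same
directory:

  setfattr -n ceph.dir.layout.stripe_unit -v 2097152 /mnt/cephfs/ssd-r2
  setfattr -n ceph.dir.layout.stripe_count -v 1 /mnt/cephfs/ssd-r2
  setfattr -n ceph.dir.layout.object_size -v 4194304 /mnt/cephfs/ssd-r2
  setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs/ssd-r2

Whichever field is rejected first narrows down what the client or MDS
is objecting to.)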
>
>
> How can I debug this further?
> It seems the directory has no "layout" at all:
>
> # getfattr -d -m - /mnt/cephfs/ssd-r2
> # file: ssd-r2
> ceph.dir.entries="0"
> ceph.dir.files="0"
> ceph.dir.rbytes="0"
> ceph.dir.rctime="0.090"
> ceph.dir.rentries="1"
> ceph.dir.rfiles="0"
> ceph.dir.rsubdirs="1"
> ceph.dir.subdirs="0"
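
(That by itself is expected: a directory that has never had an explicit
layout set does not list ceph.dir.layout in a plain dump; it inherits
from the nearest ancestor that has one. Querying the vxattr directly,
e.g.

  getfattr -n ceph.dir.layout /mnt/cephfs/ssd-r2

shows whether anything explicit is set there, though depending on the
version it may just answer "No such attribute" for an unset directory.)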
>
>
> Kind Regards,
> -Dieter
>
>
>
> On Mon, Aug 18, 2014 at 09:37:39PM +0200, Sage Weil wrote:
>> Hi Dieter,
>>
>> There is a new xattr based interface.  See
>>
>>       https://github.com/ceph/ceph/blob/master/qa/workunits/fs/misc/layout_vxattrs.sh
>>
>> The nice part about this interface is no new tools are necessary (just
>> standard 'attr' or 'setfattr' commands) and it is the same with both
>> ceph-fuse and the kernel client.
>>
>> sage
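
To illustrate Sage's point about the two clients: the same directory
can be changed through one mount and read back through the other with
nothing but the stock commands; a sketch using the two mounts from the
transcript below:

  setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/ceph-fuse/ssd-r2
  getfattr -n ceph.dir.layout /mnt/cephfs/ssd-r2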
>>
>>
>> On Mon, 18 Aug 2014, Kasper Dieter wrote:
>>
>> > Hi Sage,
>> >
>> > a couple of months ago (maybe last year) I was able to change the
>> > assignment of directories and files of CephFS to different pools
>> > back and forth (with cephfs set_layout as well as with setfattr).
>> >
>> > Now (with ceph v0.81 and kernel 3.10 on the client side)
>> > neither 'cephfs set_layout' nor 'setfattr' works anymore:
>> >
>> > # mount | grep ceph
>> > ceph-fuse on /mnt/ceph-fuse type fuse.ceph-fuse (rw,nosuid,nodev,allow_other,default_permissions)
>> > 192.168.113.52:6789:/ on /mnt/cephfs type ceph (name=admin,key=client.admin)
>> >
>> > # ls -l /mnt/cephfs
>> > total 0
>> > -rw-r--r-- 1 root root 0 Aug 18 21:06 file
>> > -rw-r--r-- 1 root root 0 Aug 18 21:10 file2
>> > -rw-r--r-- 1 root root 0 Aug 18 21:11 file3
>> > drwxr-xr-x 1 root root 0 Aug 18 20:54 sas-r2
>> > drwxr-xr-x 1 root root 0 Aug 18 20:54 ssd-r2
>> >
>> > # getfattr -d -m - /mnt/cephfs
>> > getfattr: Removing leading '/' from absolute path names
>> > # file: mnt/cephfs
>> > ceph.dir.entries="5"
>> > ceph.dir.files="3"
>> > ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=SAS-r2"
>> > ceph.dir.rbytes="0"
>> > ceph.dir.rctime="0.090"
>> > ceph.dir.rentries="1"
>> > ceph.dir.rfiles="0"
>> > ceph.dir.rsubdirs="1"
>> > ceph.dir.subdirs="2"
>> >
>> > # setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs
>> > setfattr: /mnt/cephfs: Invalid argument
>> >
>> > # ceph osd dump | grep pool
>> > pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8064 pgp_num 8064 last_change 1 flags hashpspool crash_replay_interval 45 stripe_width 0
>> > pool 1 'metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8064 pgp_num 8064 last_change 1 flags hashpspool stripe_width 0
>> > pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8064 pgp_num 8064 last_change 1 flags hashpspool stripe_width 0
>> > pool 3 'SSD-r2' replicated size 2 min_size 2 crush_ruleset 3 object_hash rjenkins pg_num 6000 pgp_num 6000 last_change 404 flags hashpspool stripe_width 0
>> > pool 4 'SAS-r2' replicated size 2 min_size 2 crush_ruleset 4 object_hash rjenkins pg_num 6000 pgp_num 6000 last_change 408 flags hashpspool stripe_width 0
>> > pool 5 'SSD-r3' replicated size 3 min_size 2 crush_ruleset 3 object_hash rjenkins pg_num 4000 pgp_num 4000 last_change 413 flags hashpspool stripe_width 0
>> > pool 6 'SAS-r3' replicated size 3 min_size 2 crush_ruleset 4 object_hash rjenkins pg_num 4000 pgp_num 4000 last_change 416 flags hashpspool stripe_width 0
>> >
>> > # getfattr -d -m - /mnt/cephfs/ssd-r2
>> > getfattr: Removing leading '/' from absolute path names
>> > # file: mnt/cephfs/ssd-r2
>> > ceph.dir.entries="0"
>> > ceph.dir.files="0"
>> > ceph.dir.rbytes="0"
>> > ceph.dir.rctime="0.090"
>> > ceph.dir.rentries="1"
>> > ceph.dir.rfiles="0"
>> > ceph.dir.rsubdirs="1"
>> >
>> > # setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs/ssd-r2
>> > setfattr: /mnt/cephfs/ssd-r2: Invalid argument
>> >
>> > # cephfs /mnt/cephfs/ssd-r2       set_layout -p 3 -s 4194304 -u 4194304 -c 1
>> > Error setting layout: (22) Invalid argument
>> >
>> >
>> > Any recommendations?
>> > Is this a bug, or a new feature?
>> > Do I have to use a newer kernel?
>> >
>> >
>> > Kind Regards,
>> > -Dieter
>> >
>> >
>> >
>> > On Sat, Aug 31, 2013 at 02:26:48AM +0200, Sage Weil wrote:
>> > > On Fri, 30 Aug 2013, Joao Pedras wrote:
>> > > >
>> > > > Greetings all!
>> > > >
>> > > > I am bumping into a small issue and I am wondering if someone has any
>> > > > insight on it.
>> > > >
>> > > > I am trying to use a pool other than 'data' for cephfs. Said pool has id #3
>> > > > and I have run 'ceph mds add_data_pool 3'.
>> > > >
>> > > > After mounting cephfs seg faults when trying to set the layout:
>> > > >
>> > > > $> cephfs /path set_layout -p 3
>> > > >
>> > > > Segmentation fault
>> > > >
>> > > > Actually plainly running 'cephfs /path set_layout' without more options will
>> > > > seg fault as well.
>> > > >
>> > > > Version is 0.61.8 on ubuntu 12.04.
>> > > >
>> > > > A question that comes to mind here is if there is a way of accomplishing
>> > > > this when using ceph-fuse (<3.x kernels).
>> > >
>> > > You can adjust this more easily using the xattr interface:
>> > >
>> > >  getfattr -n ceph.dir.layout <dir>
>> > >  setfattr -n ceph.dir.layout.pool -v mypool <dir>
>> > >  getfattr -n ceph.dir.layout <dir>
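
(To confirm the change actually affects new data, a file created under
the directory afterwards should report the new pool through the
file-level counterpart of the same vxattr; a sketch:

  touch <dir>/newfile
  getfattr -n ceph.file.layout <dir>/newfile

Existing files keep the layout they were created with.)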
>> > >
>> > > The interface tests are probably a decent reference given this isn't
>> > > explicitly documented anywhere:
>> > >
>> > >  https://github.com/ceph/ceph/blob/master/qa/workunits/misc/layout_vxattrs.sh
>> > >
>> > > sage
>> >
>> >

