cephfs set_layout on filesystem root?

Hi,

I've been experimenting with the cephfs tool to set
the object/stripe size on a Ceph filesystem root,
and the setting does not seem to persist across a
filesystem restart.  Is that expected behavior?

To reproduce on current testing branch (6d0dc4bf6):

---
# create a new filesystem, mount it; then
# from a client:

[root@an1024 ~]# df -h /mnt/ceph
Filesystem            Size  Used Avail Use% Mounted on
172.17.40.34:/         13T  510M   13T   1% /ram/mnt/ceph

[root@an1024 ~]# cephfs /mnt/ceph/ show_layout
layout.data_pool:     0
layout.object_size:   4194304
layout.stripe_unit:   4194304
layout.stripe_count:  1
layout.preferred_osd: -1
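(For anyone less familiar with these fields, here is my understanding of the striping model, sketched in Python — the function name and structure are mine, not part of the cephfs tool: the file is cut into stripe_unit-sized blocks, blocks are dealt round-robin across stripe_count objects, and each object holds object_size bytes before striping moves on to the next object set.)

```python
def map_offset(offset, object_size, stripe_unit, stripe_count):
    """Map a file byte offset to (object number, offset within object).

    Rough sketch of Ceph's file striping rules: stripe_unit-sized
    blocks are distributed round-robin over stripe_count objects,
    and each object grows to object_size before striping moves on
    to the next set of objects.
    """
    stripes_per_object = object_size // stripe_unit

    blockno = offset // stripe_unit        # which stripe-unit block
    stripeno = blockno // stripe_count     # which stripe
    stripepos = blockno % stripe_count     # position within the stripe
    objectsetno = stripeno // stripes_per_object

    objectno = objectsetno * stripe_count + stripepos
    block_in_object = stripeno % stripes_per_object
    obj_offset = block_in_object * stripe_unit + offset % stripe_unit
    return objectno, obj_offset

# With the default layout above (4 MiB objects, stripe_unit equal to
# object_size, stripe_count 1), byte 5,000,000 of a file lands in
# object 1 at offset 805,696:
print(map_offset(5_000_000, 4_194_304, 4_194_304, 1))   # (1, 805696)
```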

[root@an1024 ~]# cephfs /mnt/ceph set_layout -s 262144 -c 1 -u 262144
[root@an1024 ~]# cephfs /mnt/ceph/ show_layout
layout.data_pool:     0
layout.object_size:   262144
layout.stripe_unit:   262144
layout.stripe_count:  1
layout.preferred_osd: -1

[root@an1024 ~]# touch /mnt/ceph/test1
[root@an1024 ~]# cephfs /mnt/ceph/test1 show_layout
layout.data_pool:     0
layout.object_size:   262144
layout.stripe_unit:   262144
layout.stripe_count:  1
layout.preferred_osd: -1

[root@an1024 ~]# umount /mnt/ceph
[root@an1024 ~]# mount.ceph an14-ib0:/ /mnt/ceph
[root@an1024 ~]# df -h /mnt/ceph
Filesystem            Size  Used Avail Use% Mounted on
172.17.40.34:/         13T  178M   13T   1% /ram/mnt/ceph

[root@an1024 ~]# cephfs /mnt/ceph/ show_layout
layout.data_pool:     0
layout.object_size:   262144
layout.stripe_unit:   262144
layout.stripe_count:  1
layout.preferred_osd: -1
---

So far, so good.  After a filesystem unmount/shutdown/restart/mount cycle:

---

[root@an1024 ~]# df -h /mnt/ceph
Filesystem            Size  Used Avail Use% Mounted on
172.17.40.34:/         13T  450M   13T   1% /ram/mnt/ceph

[root@an1024 ~]# cephfs /mnt/ceph/ show_layout
layout not specified

---
Hmmm, not what I was expecting.  Also:
---

[root@an1024 ~]# cephfs /mnt/ceph/test1 show_layout
layout.data_pool:     0
layout.object_size:   262144
layout.stripe_unit:   262144
layout.stripe_count:  1
layout.preferred_osd: -1

[root@an1024 ~]# touch /mnt/ceph/test2
[root@an1024 ~]# cephfs /mnt/ceph/test2 show_layout
layout.data_pool:     0
layout.object_size:   4194304
layout.stripe_unit:   4194304
layout.stripe_count:  1
layout.preferred_osd: -1

---
Also not what I was expecting: I thought the
256 KiB setting from before would still be in
effect.
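The second result would at least be self-consistent if files take a private copy of their parent directory's effective layout at create time — that is my assumption about the semantics, not something I've confirmed in the docs.  Under that rule, test1 keeps the 256 KiB layout it was created with, while test2, created after the root layout was lost, falls back to the default.  A toy Python model of that rule (all names are mine):

```python
# Toy model of directory-layout inheritance at file-create time.
# Assumed semantics: a directory may carry an explicit layout; a new
# file copies the effective layout of its parent (nearest ancestor
# with an explicit layout, else the filesystem default), and existing
# files keep the copy they were created with.

DEFAULT = {"object_size": 4_194_304, "stripe_unit": 4_194_304,
           "stripe_count": 1}

class Dir:
    def __init__(self, parent=None, layout=None):
        self.parent = parent
        self.layout = layout          # explicit layout dict, or None

    def effective_layout(self):
        d = self
        while d is not None:
            if d.layout is not None:
                return d.layout
            d = d.parent
        return DEFAULT

def create_file(parent):
    """A file snapshots its parent's effective layout at creation."""
    return dict(parent.effective_layout())

root = Dir(layout={"object_size": 262_144, "stripe_unit": 262_144,
                   "stripe_count": 1})
test1 = create_file(root)    # created while the root layout is set

root.layout = None           # root layout lost after the restart
test2 = create_file(root)    # falls back to the default

print(test1["object_size"], test2["object_size"])   # 262144 4194304
```

That model reproduces what I see above — but it doesn't explain why the root layout itself vanished across the restart, which is the part that looks like a bug to me.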

Am I missing something?

Thanks -- Jim
