Hi Ilya,

Thanks for your quick response! As mentioned in the ceph-csi ticket [1], the df output stays normal when I run the same steps on a loop device, and also on an RBD image that is created through a k8s PVC but not attached to a pod (a sketch of the loop-device check is included below the quoted message for reference).

[1] https://github.com/ceph/ceph-csi/issues/3424

Best Regards,
Liang Zheng

Ilya Dryomov <idryomov@xxxxxxxxx> wrote on Wednesday, October 12, 2022 at 19:15:

> On Wed, Oct 12, 2022 at 9:37 AM 郑亮 <zhengliang0901@xxxxxxxxx> wrote:
> >
> > Hi all,
> >
> > I have created a pod that uses an RBD image as backing storage, then mapped
> > the RBD image to a local block device and mounted it with an ext4 filesystem.
> > After disabling the ext4 journal, `df` reports a used size far larger than
> > the available space. The steps to reproduce are below; thanks in advance.
> >
> > Environment details
> >
> > - Image/version of Ceph CSI driver: cephcsi:v3.5.1
> > - Kernel version:
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) uname -a
> > Linux k1 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
> >
> > - Mounter used for mounting PVC (for CephFS it is fuse or kernel; for RBD it is krbd or rbd-nbd): krbd
> > - Kubernetes cluster version:
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) kubectl version
> > Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.7", GitCommit:"b56e432f2191419647a6a13b9f5867801850f969", GitTreeState:"clean", BuildDate:"2022-02-16T11:50:27Z", GoVersion:"go1.16.14", Compiler:"gc", Platform:"linux/amd64"}
> > Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.7", GitCommit:"b56e432f2191419647a6a13b9f5867801850f969", GitTreeState:"clean", BuildDate:"2022-02-16T11:43:55Z", GoVersion:"go1.16.14", Compiler:"gc", Platform:"linux/amd64"}
> >
> > - Ceph cluster version:
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) ceph --version
> > ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)
> >
> > Steps to reproduce
> >
> > 1. Create a StorageClass from the example storageclass.yaml
> >    <https://github.com/ceph/ceph-csi/blob/devel/examples/rbd/storageclass.yaml>.
> > 2. Create the PVC and a test pod as below:
> >
> > ➜ /root ☞ cat csi-rbd/examples/pvc.yaml
> > ---
> > apiVersion: v1
> > kind: PersistentVolumeClaim
> > metadata:
> >   name: rbd-pvc
> > spec:
> >   accessModes:
> >     - ReadWriteOnce
> >   resources:
> >     requests:
> >       storage: 50Gi
> >   storageClassName: csi-rbd-sc
> >
> > 🍺 /root ☞ cat pod.yaml
> > apiVersion: v1
> > kind: Pod
> > metadata:
> >   name: csi-rbd-demo-pod
> > spec:
> >   containers:
> >     - name: web-server
> >       image: docker.io/library/nginx:latest
> >       volumeMounts:
> >         - name: mypvc
> >           mountPath: /var/lib/www/html
> >   volumes:
> >     - name: mypvc
> >       persistentVolumeClaim:
> >         claimName: rbd-pvc
> >         readOnly: false
> >
> > 3. The following steps are executed on a node in the Ceph cluster:
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) rbd ls -p pool-51312494-44b2-43bc-8ba1-9c4f5eda3287
> > csi-vol-ad0bba2a-49fc-11ed-8ab9-3a534777138b
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) rbd map pool-51312494-44b2-43bc-8ba1-9c4f5eda3287/csi-vol-ad0bba2a-49fc-11ed-8ab9-3a534777138b
> > /dev/rbd0
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) lsblk -f
> > NAME            FSTYPE      LABEL UUID                                   MOUNTPOINT
> > sr0
> > vda
> > ├─vda1          xfs               a080444c-7927-49f7-b94f-e20f823bbc95  /boot
> > ├─vda2          LVM2_member       jDjk4o-AaZU-He1S-8t56-4YEY-ujTp-ozFrK5
> > │ ├─centos-root xfs               5e322b94-4141-4a15-ae29-4136ae9c2e15  /
> > │ └─centos-swap swap              d59f7992-9027-407a-84b3-ec69c3dadd4e
> > └─vda3          LVM2_member       Qn0c4t-Sf93-oIDr-e57o-XQ73-DsyG-pGI8X0
> >   └─centos-root xfs               5e322b94-4141-4a15-ae29-4136ae9c2e15  /
> > vdb
> > vdc
> > rbd0            ext4              e381fa9f-9f94-43d1-8f3a-c2d90bc8de27
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) mount /dev/rbd0 /mnt/ext4
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) df -hT | egrep 'rbd|Type'
> > Filesystem     Type  Size  Used Avail Use% Mounted on
> > /dev/rbd0      ext4   49G   53M   49G   1% /mnt/ext4
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) umount /mnt/ext4
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) tune2fs -o journal_data_writeback /dev/rbd0
> > tune2fs 1.46.5 (30-Dec-2021)
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) tune2fs -O "^has_journal" /dev/rbd0   <= disable ext4 journal
> > tune2fs 1.46.5 (30-Dec-2021)
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) e2fsck -f /dev/rbd0
> > e2fsck 1.46.5 (30-Dec-2021)
> > Pass 1: Checking inodes, blocks, and sizes
> > Pass 2: Checking directory structure
> > Pass 3: Checking directory connectivity
> > Pass 4: Checking reference counts
> > Pass 5: Checking group summary information
> > /dev/rbd0: 11/3276800 files (0.0% non-contiguous), 219022/13107200 blocks
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) mount /dev/rbd0 /mnt/ext4
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) df -hT | egrep 'rbd|Type'
> > Filesystem     Type  Size  Used Avail Use% Mounted on
> > /dev/rbd0      ext4   64Z   64Z   50G 100% /mnt/ext4
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) mount | grep rbd
> > /dev/rbd0 on /mnt/ext4 type ext4 (rw,relatime,stripe=1024)
> >
> > Actual results
> >
> > After the ext4 journal is disabled, the df command reports used space far larger than the available space:
> >
> > 🍺 /root/go/src/ceph/ceph-csi ☞ git:(devel) df -T | egrep 'rbd|Type'
> > Filesystem     Type            1K-blocks                 Used Available Use% Mounted on
> > /dev/rbd0      ext4 73786976277711028224 73786976277659475512  51536328 100% /mnt/ext4
> >
> > Expected behavior
> >
> > The df command should report the disk usage correctly.
>
> Hi Liang,
>
> As mentioned in the ceph-csi ticket [1], this doesn't seem to have
> anything to do with RBD. Does it reproduce with a loop device?
>
> $ truncate -s 50G backingfile
> $ sudo losetup -f --show backingfile
> $ sudo mkfs.ext4 ...
>
> Note that Ceph CSI adds some hard-coded mkfs options, so the problem
> may also lie there. IIRC you would need the following to match it:
>
> $ sudo mkfs.ext4 -m0 -Enodiscard,lazy_itable_init=1,lazy_journal_init=1 ...
>
> [1] https://github.com/ceph/ceph-csi/issues/3424
>
> Thanks,
>
>                 Ilya
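For reference, here is a rough sketch of the loop-device check along the lines Ilya suggests above, combining his mkfs options with the journal-disable steps from the original report. The backing file name, the /mnt/ext4-loop mount point and the LOOPDEV variable are placeholders for this sketch only; the actual device path is whatever losetup prints.

$ truncate -s 50G backingfile
$ LOOPDEV=$(sudo losetup -f --show backingfile)   # e.g. /dev/loop0
$ sudo mkfs.ext4 -m0 -Enodiscard,lazy_itable_init=1,lazy_journal_init=1 "$LOOPDEV"
$ sudo mkdir -p /mnt/ext4-loop && sudo mount "$LOOPDEV" /mnt/ext4-loop
$ df -hT | egrep 'loop|Type'                      # baseline, should resemble the 49G / 1% case above
$ sudo umount /mnt/ext4-loop
$ sudo tune2fs -o journal_data_writeback "$LOOPDEV"
$ sudo tune2fs -O "^has_journal" "$LOOPDEV"       # disable the ext4 journal
$ sudo e2fsck -f "$LOOPDEV"
$ sudo mount "$LOOPDEV" /mnt/ext4-loop
$ df -hT | egrep 'loop|Type'                      # compare with the 64Z / 100% rbd0 output above
$ sudo umount /mnt/ext4-loop && sudo losetup -d "$LOOPDEV"

Comparing the two df outputs here against the rbd0 numbers above should show whether the bogus usage is tied to RBD at all or purely to toggling the ext4 journal.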
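Since the distinction above is between an image that is merely created through a PVC and one that is actually consumed by a pod, it may also help to record what df reports from inside the running pod. A small sketch, using the pod name and mount path from the manifests above and assuming the nginx image ships the usual df/mount utilities:

$ kubectl get pvc rbd-pvc
$ kubectl exec csi-rbd-demo-pod -- df -hT /var/lib/www/html
$ kubectl exec csi-rbd-demo-pod -- mount | grep /var/lib/www/html

The second and third commands show the size, usage and mount options of the ext4 filesystem as the pod sees it, which can then be compared with the numbers from the manually mapped /dev/rbd0 above.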