On Thu, Aug 25, 2016 at 11:12 AM, Lazuardi Nasution <mrxlazuardin@xxxxxxxxx> wrote:
> Hi Gregory,
>
> Since I have mounted it via /etc/fstab, it is of course the kernel client.
> Which log do you mean? I cannot find anything related in dmesg.

Log in to the node that runs dd, find the process ID of dd, and run
'cat /proc/<PID of dd>/stack'. This should tell us where dd hangs.

Regards,
Yan, Zheng

> Best regards,
>
>
> On Aug 25, 2016 00:46, "Gregory Farnum" <gfarnum@xxxxxxxxxx> wrote:
>>
>> On Wed, Aug 24, 2016 at 10:25 AM, Lazuardi Nasution
>> <mrxlazuardin@xxxxxxxxx> wrote:
>> > Hi,
>> >
>> > I have a problem with CephFS when writing large files. I found that my
>> > OpenStack Nova backup stopped working after I changed the rbd-based
>> > mount of /var/lib/nova/instances/snapshots to a CephFS-based one
>> > (mounted via /etc/fstab on all Nova compute nodes). I couldn't identify
>> > the cause until I ran the tests below.
>> >
>> > [root@compute-aa instances]# sudo -S -u nova dd if=/dev/zero
>> > of=./snapshots/test/test.dat bs=4096 count=1024
>> > 1024+0 records in
>> > 1024+0 records out
>> > 4194304 bytes (4.2 MB) copied, 0.002956 s, 1.4 GB/s
>> >
>> > It seems that creating a 4.2 MB file is OK. The "test" folder had been
>> > created by the nova user too.
>> >
>> > [root@compute-aa instances]# sudo -S -u nova dd if=/dev/zero
>> > of=./snapshots/test/test.dat bs=4096 count=1048576
>> >
>> > This test hung and left the dd process in the "D" state, where it could
>> > not be killed, so I had to reboot the node. This makes me afraid to run
>> > more tests, even with smaller sizes. When I looked at the folder
>> > contents, I found the following.
>> >
>> > [root@compute-aa ~]# ls -lah /var/lib/nova/instances/snapshots/test/
>> > total 0
>> > drwxr-xr-x 1 nova nova 0 Aug 24 22:35 .
>> > drwxr-xr-x 1 nova nova 0 Aug 24 22:33 ..
>> > -rw-r--r-- 1 nova nova 0 Aug 24 22:35 test.dat
>> >
>> > I would prefer to use CephFS, since the snapshots folder is only for
>> > temporary data. Please help me solve this problem.
>>
>> Are you using ceph-fuse or the in-kernel CephFS client? Have you
>> checked for errors in the client's log or dmesg?
>> -Greg
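
For anyone following along, Yan's steps translate to something like the
following on the affected compute node (a sketch only; the grep pattern and
the pidof lookup are illustrative and assume a single hung dd process):

# Confirm dd is in uninterruptible sleep ("D" in the STAT column).
ps -eo pid,stat,cmd | grep '[d]d if=/dev/zero'

# Dump the kernel stack of the hung process to see where it is blocked;
# substitute the PID from the ps output if pidof matches more than one dd.
cat /proc/"$(pidof dd)"/stack

# With the in-kernel client there is no separate client log file:
# CephFS client errors land in the kernel log.
dmesg | grep -i ceph

The stack dump is the useful artifact here: it shows which kernel function
the writer is blocked in, which is what distinguishes a client-side hang
from the client waiting on an unresponsive OSD or MDS.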
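
For reference, a kernel-client CephFS mount in /etc/fstab typically looks
like the line below. The monitor addresses, secret file path, and options
are placeholders for illustration, not values from this thread:

# /etc/fstab entry for a CephFS kernel mount (example values only)
10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789:/  /var/lib/nova/instances/snapshots  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  2

On compute nodes the _netdev option matters, since it makes the mount wait
for networking to come up at boot instead of failing early.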