Re: cephfs kernel client - page cache being invalidated.

>> On Sun, Oct 14, 2018 at 8:21 PM <jesper@xxxxxxxx> wrote:
>> how many cephfs mounts access the file? Is it possible that some
>> program opens that file in RW mode (even if it just reads the file)?
>
>
> The nature of the program is that it is "prepped" by one set of commands
> and queried by another, thus the RW case is extremely unlikely.
> I can change the permission bits to revoke the w-bit for the user; they
> don't need it anyway... it is just the same service users that generate
> the data and query it today.

Just to remove the suspicion of other clients fiddling with the files, I did a
more structured test. I have 4 x 10 GB files from fio benchmarking, 40 GB in
total, hosted on:

1) CephFS /ceph/cluster/home/jk
2) NFS /z/home/jk

First I read them, then sleep 900 seconds, then read them again (just with dd):

jk@sild12:/ceph/cluster/home/jk$ time  for i in $(seq 0 3); do echo "dd
if=test.$i.0 of=/dev/null bs=1M"; done  | parallel -j 4 ; sleep 900; time 
for i in $(seq 0 3); do echo "dd if=test.$i.0 of=/dev/null bs=1M"; done  |
parallel -j 4
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.56413 s, 4.2 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.82234 s, 3.8 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.9361 s, 3.7 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 3.10397 s, 3.5 GB/s

real    0m3.449s
user    0m0.217s
sys     0m11.497s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 315.439 s, 34.0 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 338.661 s, 31.7 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 354.725 s, 30.3 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 356.126 s, 30.2 MB/s

real    5m56.634s
user    0m0.260s
sys     0m16.515s
jk@sild12:/ceph/cluster/home/jk$
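
So on the CephFS mount the second pass goes back to the network (~30 MB/s vs.
multi-GB/s from cache on the first pass). If it helps, I can also check the
page cache directly before and after the sleep; a quick sketch of what I have
in mind, assuming fincore (util-linux) or vmtouch is available on the client:

  # run after the first pass and again after the 900 s sleep
  fincore test.0.0 test.1.0 test.2.0 test.3.0
  # or, if vmtouch is installed:
  vmtouch -v test.?.0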


Then NFS:

jk@sild12:~$ time  for i in $(seq 0 3); do echo "dd if=test.$i.0
of=/dev/null bs=1M"; done  | parallel -j 4 ; sleep 900; time  for i in
$(seq 0 3); do echo "dd if=test.$i.0 of=/dev/null bs=1M"; done  | parallel
-j 4
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 1.60267 s, 6.7 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.18602 s, 4.9 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.47564 s, 4.3 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.54674 s, 4.2 GB/s

real    0m2.855s
user    0m0.185s
sys     0m8.888s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 1.68613 s, 6.4 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 1.6983 s, 6.3 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.20059 s, 4.9 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 2.58077 s, 4.2 GB/s

real    0m2.980s
user    0m0.173s
sys     0m8.239s
jk@sild12:~$
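
For whoever tries to reproduce this, I also plan to watch what the kernel
client itself thinks it is holding across the sleep. A sketch of what I have
in mind (assuming debugfs is mounted on the client, and with <active-mds>
replaced by the actual MDS name):

  # on the client, before and after the 900 s sleep
  sudo cat /sys/kernel/debug/ceph/*/caps
  # and on the MDS host, the sessions/caps as the MDS sees them:
  sudo ceph daemon mds.<active-mds> session ls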


Can I ask one of you to run the same "test" (or something similar) and report
back whether you can reproduce it?

Thoughts/comments/suggestions are highly appreciated. Should I try with
the fuse client?
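
In case it makes reproducing easier, below is roughly the loop I am running,
wrapped up as a script. Paths and file names are from my setup and will need
adjusting (DIR and the test.$i.0 names):

#!/bin/bash
# Rough sketch of the test above: read 4 x 10 GB files in parallel with dd,
# sleep 15 minutes, then read them again and compare the timings.
DIR=${1:-/ceph/cluster/home/jk}     # adjust to the mount you want to test
cd "$DIR" || exit 1

read_all() {
    time for i in $(seq 0 3); do
        echo "dd if=test.$i.0 of=/dev/null bs=1M"
    done | parallel -j 4
}

read_all            # first pass: warms (or hits) the page cache
sleep 900           # wait 15 minutes
read_all            # second pass: on CephFS this drops to ~30 MB/s for me

For the fuse test I would presumably just mount the same filesystem somewhere
else with ceph-fuse (e.g. "ceph-fuse /mnt/cephfs-fuse") and point the script
at that instead.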

-- 
Jesper

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


