Re: cephfs kernel client - page cache being invalidated.

On 10/15/18 12:02 PM, jesper@xxxxxxxx wrote:
>>> On Sun, Oct 14, 2018 at 8:21 PM <jesper@xxxxxxxx> wrote:
>>> How many cephfs mounts access that file? Is it possible that some
>>> program opens the file in RW mode (even if they just read it)?
>>
>>
>> The nature of the program is that it is "prepped" by one set of
>> commands and queried by another, so the RW case is extremely
>> unlikely. I can change the permission bits to revoke the w-bit for
>> the user; they don't need it anyway... it is just the same service
>> users that generate the data and query it today.
> 
> Just to remove the suspicion of other clients fiddling with the files,
> I did a more structured test. I have 4 x 10GB files from fio
> benchmarking, 40GB in total, hosted on
> 
> 1) CephFS /ceph/cluster/home/jk
> 2) NFS /z/home/jk
> 
> First I read them, then sleep 900 seconds, then read them again (just
> with dd):
> 
> jk@sild12:/ceph/cluster/home/jk$ time for i in $(seq 0 3); do echo "dd if=test.$i.0 of=/dev/null bs=1M"; done | parallel -j 4; \
>     sleep 900; \
>     time for i in $(seq 0 3); do echo "dd if=test.$i.0 of=/dev/null bs=1M"; done | parallel -j 4
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 2.56413 s, 4.2 GB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 2.82234 s, 3.8 GB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 2.9361 s, 3.7 GB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 3.10397 s, 3.5 GB/s
> 
> real    0m3.449s
> user    0m0.217s
> sys     0m11.497s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 315.439 s, 34.0 MB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 338.661 s, 31.7 MB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 354.725 s, 30.3 MB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 356.126 s, 30.2 MB/s
> 
> real    5m56.634s
> user    0m0.260s
> sys     0m16.515s
> jk@sild12:/ceph/cluster/home/jk$
> 
> 
> Then NFS:
> 
> jk@sild12:~$ time for i in $(seq 0 3); do echo "dd if=test.$i.0 of=/dev/null bs=1M"; done | parallel -j 4; \
>     sleep 900; \
>     time for i in $(seq 0 3); do echo "dd if=test.$i.0 of=/dev/null bs=1M"; done | parallel -j 4
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 1.60267 s, 6.7 GB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 2.18602 s, 4.9 GB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 2.47564 s, 4.3 GB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 2.54674 s, 4.2 GB/s
> 
> real    0m2.855s
> user    0m0.185s
> sys     0m8.888s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 1.68613 s, 6.4 GB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 1.6983 s, 6.3 GB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 2.20059 s, 4.9 GB/s
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 2.58077 s, 4.2 GB/s
> 
> real    0m2.980s
> user    0m0.173s
> sys     0m8.239s
> jk@sild12:~$
> 
> 
> Can I ask one of you to run the same "test" (or similar) and report
> back if you can reproduce it?
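>
> For reference, here is the whole sequence as a copy-pasteable sketch
> (my paths and file names; it assumes GNU parallel is installed and the
> four fio test files already exist):
>
> #!/bin/bash
> # Read the four 10GB fio files in parallel, twice, with a 900 s pause.
> # Pass 1 populates the page cache; pass 2 is fast (GB/s) only if the
> # cache survived the idle period, and slow (MB/s) if it was invalidated.
> cd /ceph/cluster/home/jk || exit 1
> read_all() {
>     for i in $(seq 0 3); do
>         echo "dd if=test.$i.0 of=/dev/null bs=1M"
>     done | parallel -j 4
> }
> time read_all   # first pass: warm the cache
> sleep 900       # idle period; watch for cap revocation here
> time read_all   # second pass: cached vs. invalidated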

Here is my test on an EC (6+3) pool, using the cephfs kernel client:

7061+1 records in
7061+1 records out
7404496985 bytes (7.4 GB) copied, 3.62754 s, 2.0 GB/s
7450+1 records in
7450+1 records out
7812246720 bytes (7.8 GB) copied, 4.11908 s, 1.9 GB/s
7761+1 records in
7761+1 records out
8138636188 bytes (8.1 GB) copied, 4.34788 s, 1.9 GB/s
8212+1 records in
8212+1 records out
8611295220 bytes (8.6 GB) copied, 4.53371 s, 1.9 GB/s

real    0m4.936s
user    0m0.275s
sys     0m16.828s

7061+1 records in
7061+1 records out
7404496985 bytes (7.4 GB) copied, 3.19726 s, 2.3 GB/s
7761+1 records in
7761+1 records out
8138636188 bytes (8.1 GB) copied, 3.31881 s, 2.5 GB/s
7450+1 records in
7450+1 records out
7812246720 bytes (7.8 GB) copied, 3.36354 s, 2.3 GB/s
8212+1 records in
8212+1 records out
8611295220 bytes (8.6 GB) copied, 3.74418 s, 2.3 GB/s


No big difference here. All on CentOS 7.5, official kernel
3.10.0-862.11.6.el7.x86_64.
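One way to check directly whether the pages were actually dropped
(rather than the second pass being slow for some other reason) is
vmtouch, if you have it installed; /path/to is a placeholder:

vmtouch -v /path/to/test.0.0    # run once before and once after the sleep

A fully cached file shows close to 100% of its pages resident; if the
client invalidated the cache you would expect near 0% on the second run.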

HTH
  Dietmar


