Re: Readahead Issues using cluster-1.01.00


 



Velu Erwan wrote:

1) Why is this volume so big? On my system it reaches ~8192 exabytes! The first time I saw that, I thought it was an error...

[root@max4 ~]# cat /proc/partitions  | grep  -e "major" -e "diapered"
major minor  #blocks  name
252     0 9223372036854775807 diapered_g1v1
[root@max4 ~]#

I don't know if this is normal or not, but gd->capacity is left at zero and then decremented by one.
Since gd->capacity is an unsigned long, it wraps around to the maximum size.
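A minimal standalone sketch (ordinary userland C, not the GFS code) of that wrap-around; note that 9223372036854775807 is exactly ULONG_MAX/2, which is what you would expect if /proc/partitions prints the wrapped sector count in 1 KiB blocks (sectors divided by two):

#include <stdio.h>

int main(void)
{
        unsigned long capacity = 0;     /* gd->capacity never set, so zero */
        capacity--;                     /* the gd->capacity-- in diaper.c */
        /* prints 18446744073709551615 (ULONG_MAX) on a 64-bit box;
         * in 1 KiB blocks that is 9223372036854775807, the value
         * shown by /proc/partitions above */
        printf("%lu sectors -> %lu blocks\n", capacity, capacity / 2);
        return 0;
}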



2) Looking at the source code, the diaper volume never sets gd->queue->backing_dev_info.ra_pages, which is left at zero by
gd->queue = blk_alloc_queue(GFP_KERNEL);
Is that needed to enforce the cache/lock management, or is it just an omission?
This could explain why read performance is low, since gfs_read() goes through generic_file_read(), couldn't it?
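If it is just an omission, here is a minimal sketch of what restoring the stock default inside diaper.c might look like; the constants and the arithmetic mirror the ra_pages computation that blk_queue_make_request() does for ordinary request queues in 2.6, so treat this as an assumption rather than tested code:

#include <linux/mm.h>        /* VM_MAX_READAHEAD */
#include <linux/pagemap.h>   /* PAGE_CACHE_SIZE */

        /* blk_alloc_queue() leaves ra_pages at 0, which disables
         * readahead in the generic_file_read() path; give the
         * queue the usual default of VM_MAX_READAHEAD (128 KiB) */
        gd->queue->backing_dev_info.ra_pages =
                VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE;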

I've made a patch which still uses a hardcoded value, but with it the diapered volume at least has ra_pages set.
Using 2048 gives some excellent results.
This patch certainly makes the previous one obsolete. Please find it attached. But I don't know how it affects GFS's cache/lock management, because having pages in the cache could perhaps create coherency trouble.
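For scale, assuming 4 KiB pages: ra_pages = 2048 is an 8 MiB readahead window, against the usual kernel default of 32 pages (128 KiB), so it is more of a big hammer than a tuned value.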

What do you think about that?

Erwan,

--- gfs-kernel/src/gfs/diaper.c~	2005-11-02 16:09:20.000000000 +0100
+++ gfs-kernel/src/gfs/diaper.c	2005-11-02 16:09:30.000000000 +0100
@@ -356,7 +356,7 @@
 	gd->fops = &diaper_fops;
 	gd->private_data = dh;
 	gd->capacity--;
-
+	gd->queue->backing_dev_info.ra_pages = 2048;
 	add_disk(gd);
 
 	diaper = bdget_disk(gd, 0);
