Hello all,

I have a two-node cluster with exactly the same hardware on both nodes (4GB RAM, AMD X2, 1TB SATAII disk, 320GB PATA disk), running:

  CentOS 5.2
  kernel 2.6.18-92.1.22.el5.centos.plus
  gfs2-utils-0.1.44-1.el5_2.1
  kmod-gfs-0.1.23-5.el5_2.4
I tried to create GFS2 on the 1TB LV (on the SATA disk) with

  mkfs.gfs2 -b 512 -t tweety:gfs2-01 -p lock_dlm -j 10 /dev/mapper/vg1-data1

because I want to use a 512-byte block size. Since the command looked like it would run forever, I used the -D option to see what was going on. This showed that nothing happens after the initial attempt to create Journal 3.
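If it helps with debugging, next time I can also capture the state of the hung process; something along these lines (a sketch, assuming a single mkfs.gfs2 process is running) should show whether it is blocked inside the kernel:

  # show the process state and the kernel function it is sleeping in
  ps -o pid,stat,wchan:32 -p $(pidof mkfs.gfs2)

  # attach to the hung process and watch its system calls (needs strace)
  strace -p $(pidof mkfs.gfs2)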
Because I got these freezes during mkfs.gfs2, I used 'vmstat -p /dev/sda1' on a second terminal to observe disk activity. This confirmed that there is no disk activity after the initial attempt to create Journal 3.
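For reference, this is how I was polling the partition (and, as an alternative I did not use here, iostat from the sysstat package could show per-device throughput):

  # per-partition read/write counters, refreshed every 2 seconds
  vmstat -p /dev/sda1 2

  # alternative: per-device throughput in kB/s, every 2 seconds
  iostat -dk 2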
I thought it might be a problem with how I created the LV, so I erased the VG, LV and PV and tried to create the file system directly on the physical device.
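The teardown was roughly this (a sketch from memory; it assumes the PV was on /dev/sda1 and uses the names from my setup):

  lvremove /dev/vg1/data1   # remove the logical volume
  vgremove vg1              # remove the volume group
  pvremove /dev/sda1        # wipe the PV label from the partition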
The creation of GFS2 on the physical partition also freezes on Journal 3. This is an extract of the output of

  mkfs.gfs2 -D -b 512 -t tweety:gfs2-01 -p lock_dlm -j 10 /dev/sda1

(which is the same as the output I get when using the LV instead of the physical partition):
[...]

ri_addr: 1951423390  ri_length: 269  ri_data0: 1951423659  ri_data: 523884  ri_bitbytes: 130971
ri_addr: 1951947543  ri_length: 269  ri_data0: 1951947812  ri_data: 523884  ri_bitbytes: 130971
ri_addr: 1952471696  ri_length: 269  ri_data0: 1952471965  ri_data: 523884  ri_bitbytes: 130971
ri_addr: 1952995849  ri_length: 269  ri_data0: 1952996118  ri_data: 523884  ri_bitbytes: 130971

Root directory:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400
  no_formal_ino: 1  no_addr: 399
  di_mode: 040755  di_uid: 0  di_gid: 0  di_nlink: 2
  di_size: 280  di_blocks: 1
  di_atime: 1237752111  di_mtime: 1237752111  di_ctime: 1237752111
  di_major: 0  di_minor: 0
  di_goal_meta: 399  di_goal_data: 399
  di_flags: 0x00000001  di_payload_format: 1200
  di_height: 0  di_depth: 0  di_entries: 2  di_eattr: 0

Master dir:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400
  no_formal_ino: 2  no_addr: 400
  di_mode: 040755  di_uid: 0  di_gid: 0  di_nlink: 2
  di_size: 280  di_blocks: 1
  di_atime: 1237752111  di_mtime: 1237752111  di_ctime: 1237752111
  di_major: 0  di_minor: 0
  di_goal_meta: 400  di_goal_data: 400
  di_flags: 0x00000201  di_payload_format: 1200
  di_height: 0  di_depth: 0  di_entries: 2  di_eattr: 0

Super Block:
  mh_magic: 0x01161970  mh_type: 1  mh_format: 100
  sb_fs_format: 1801  sb_multihost_format: 1900
  sb_bsize: 512  sb_bsize_shift: 9
  no_formal_ino: 2  no_addr: 400
  no_formal_ino: 1  no_addr: 399
  sb_lockproto: lock_dlm
  sb_locktable: tweety:gfs2-01

Journal 0:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400
  no_formal_ino: 4  no_addr: 402
  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1
  di_size: 134217728  di_blocks: 266516
  di_atime: 1237752111  di_mtime: 1237752111  di_ctime: 1237752111
  di_major: 0  di_minor: 0
  di_goal_meta: 4773  di_goal_data: 266917
  di_flags: 0x00000200  di_payload_format: 0
  di_height: 4  di_depth: 0  di_entries: 0  di_eattr: 0

Journal 1:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400
  no_formal_ino: 5  no_addr: 266918
  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1
  di_size: 134217728  di_blocks: 266516
  di_atime: 1237752111  di_mtime: 1237752111  di_ctime: 1237752111
  di_major: 0  di_minor: 0
  di_goal_meta: 271289  di_goal_data: 533703
  di_flags: 0x00000200  di_payload_format: 0
  di_height: 4  di_depth: 0  di_entries: 0  di_eattr: 0

Journal 2:
  mh_magic: 0x01161970  mh_type: 4  mh_format: 400
  no_formal_ino: 6  no_addr: 533704
  di_mode: 0100600  di_uid: 0  di_gid: 0  di_nlink: 1
  di_size: 134217728  di_blocks: 266516
  di_atime: 1237752111  di_mtime: 1237752111  di_ctime: 1237752111
  di_major: 0  di_minor: 0
  di_goal_meta: 538075  di_goal_data: 800219
  di_flags: 0x00000200  di_payload_format: 0
  di_height: 4  di_depth: 0  di_entries: 0  di_eattr: 0

Journal 3:
After that I have no disk activity and no logging, and there is no message from the kernel.

Does anyone know the reason for this behavior, and what is the minimum block size I can use? (I have tested 1024 and it works fine.)
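In case it is useful for reproducing this without a 1TB device, a quick loopback test along these lines is what I would try (a sketch; the file path, loop device and sizes are placeholders):

  # create a 512MB scratch file and attach it to a loop device
  dd if=/dev/zero of=/tmp/gfs2-test.img bs=1M count=512
  losetup /dev/loop0 /tmp/gfs2-test.img

  # try each block size with a single journal and no cluster locking;
  # -O skips the confirmation prompt
  for bs in 512 1024 2048 4096; do
      mkfs.gfs2 -O -b $bs -p lock_nolock -j 1 /dev/loop0 && echo "bs=$bs ok"
  done

  losetup -d /dev/loop0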
Thank you all for your time.

Theophanis Kontogiannis