ok, figured that out too.. http://www.redhat.com/archives/linux-cluster/2005-January/msg00032.html is what helped.

One last newbie question (I hope): I had to mount my new GFS filesystem manually with

    mount -t gfs /dev/pool/gfs1 /mnt/gfs/

"service gfs start" did nothing -- it returned a prompt seemingly without doing anything. No errors, nothing in syslog, nothing. Hopefully I'll figure this one out too.

Jason

On Fri, May 12, 2006 at 10:34:16PM -0400, Jason wrote:
> woohoo!
> I got it figured out.
> I've got
>   /dev/sdb1 (10 megs)
>   /dev/sdb2 (rest of disk)
> I made the pools, did the ccs_tool create,
> did service ccsd start,
> did service lock_gulmd start (but had to figure out my DNS issues first ;)
> Now I'm at the point where I do
>   gfs_mkfs -p lock_gulm -t bla bla
>
> and so now I'm doing
>
> [root@tf1 cluster]# gfs_mkfs -p lock_gulm -t progressive:gfs1 -j 8 /dev/pool/pool0
> gfs_mkfs: Partition too small for number/size of journals
> [root@tf1 cluster]# gfs_mkfs -p lock_gulm -t progressive:gfs1 -j 4 /dev/pool/pool0
> gfs_mkfs: Partition too small for number/size of journals
> [root@tf1 cluster]# gfs_mkfs -p lock_gulm -t progressive:gfs1 -j 2 /dev/pool/pool0
> gfs_mkfs: Partition too small for number/size of journals
> [root@tf1 cluster]#
>
> and I can't figure out why it's giving me grief.
>
> Here's my pool config:
>
> poolname   pool0              # name of the pool/volume to create
> subpools   1                  # how many subpools make up this pool
> subpool    0 128 2 gfs_data   # first subpool, zero indexed, 128k stripe, 2 devices
> pooldevice 0 0 /dev/sdb1      # physical device for subpool 0, device 0 (again, zero indexed)
> pooldevice 0 1 /dev/sdb2      # physical device for subpool 0, device 1 (again, zero indexed)
>
> regards,
> Jason
>
> --
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster

--
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
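[Editor's note on the silent "service gfs start": on RHEL-era GFS setups the gfs init script typically only mounts filesystems that are listed in /etc/fstab with type gfs, so with no such entry it exits quietly with nothing to do. A minimal sketch of the assumed fstab entry, using the device and mount point from the mail above (the noatime option is just an illustration):]

```shell
# /etc/fstab entry (assumed layout) so the gfs init script has
# something to mount at "service gfs start" / boot time:
#
# <device>       <mount point>  <type>  <options>          <dump> <pass>
/dev/pool/gfs1   /mnt/gfs       gfs     defaults,noatime   0      0
```

[With that line in place, "service gfs start" (or a plain "mount /mnt/gfs") should pick the filesystem up.]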
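[Editor's note on the journal errors in the quoted mail: gfs_mkfs carves one journal per -j out of the target device, and if memory serves the default journal size is 128 MB (overridable with -J, with a documented minimum of 32 MB), so -j 8 alone wants on the order of 1 GB before any data blocks. A hedged sketch for a small pool, reusing the command line from the quoted mail:]

```shell
# Default journal size is ~128 MB each, so:
#   -j 8  ->  ~1 GB of journal space
#   -j 2  ->  ~256 MB of journal space
# On a small pool, shrink the journals explicitly with -J (size in MB):
gfs_mkfs -p lock_gulm -t progressive:gfs1 -j 2 -J 32 /dev/pool/pool0
```

[It is also worth double-checking that both /dev/sdb1 and /dev/sdb2 actually made it into pool0; if only the 10 MB partition was assembled, no journal count will fit.]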