Find attached the method I used (along with some output) to create the logical volumes. (Note: I think I missed copying some of the screen at some point.)

Unmount /work7 using the command umount -f /work7.

Using fdisk, change the partition's system id to 8e (Linux LVM), as shown in this text dump from the screen:

   p   print the partition table
   t   change a partition's system id
   w   write table to disk and exit

Command (m for help): p

Disk /dev/hdb: 16 heads, 63 sectors, 19841 cylinders
Units = cylinders of 1008 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdb1   *         1      1560    786208+  82  Linux swap
/dev/hdb2          1561     19841   9213624   83  Linux

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 0x8e
Type 0 means free space to many systems (but not to Linux). Having
partitions of type 0 is probably unwise. You can delete a partition
using the `d' command.
Changed system type of partition 2 to 0 (Empty)

Command (m for help): p

Disk /dev/hdb: 16 heads, 63 sectors, 19841 cylinders
Units = cylinders of 1008 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdb1   *         1      1560    786208+  82  Linux swap
/dev/hdb2          1561     19841   9213624    0  Empty

Once this is completed, write the partition table to disk and exit (option w):

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or
resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.

Syncing disks.

Then use pvcreate to initialise the partition as a physical volume; this goes into the volume group that will hold the logical volumes /userg and /work7:

[root@cluster01 root]# pvcreate /dev/hdb2
pvcreate -- physical volume "/dev/hdb2" successfully created

(The vgcreate step that builds the volume group cluster01vg from this physical volume must have come next; I think it is part of the screen output I missed copying. A sketch of what it would have looked like follows the mkfs output below.)

Now use lvcreate to create the logical volumes userg and work7. The -L option gives the volume size (in MB) and -n the name of the logical volume:

[root@cluster01 root]# lvcreate -L 4000 -n userg cluster01vg
lvcreate -- doing automatic backup of "cluster01vg"
lvcreate -- logical volume "/dev/cluster01vg/userg" successfully created

[root@cluster01 root]# lvcreate -L 4000 -n work7 cluster01vg
lvcreate -- doing automatic backup of "cluster01vg"
lvcreate -- logical volume "/dev/cluster01vg/work7" successfully created

Next, a filesystem needs to be created in each logical volume. This is performed by the mkfs command:

[root@cluster01 root]# mkfs -t ext3 /dev/cluster01vg/work7
mke2fs 1.27 (8-Mar-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
512000 inodes, 1024000 blocks
51200 blocks (5.00%) reserved for the super user
First data block=0
32 block groups
32768 blocks per group, 32768 fragments per group
16000 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
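This is where the missing screen output mentioned above belongs: between pvcreate and lvcreate, the volume group itself has to be created with vgcreate. A minimal sketch of that step, reconstructed rather than copied from the screen, assuming the volume group name (cluster01vg) and the single physical volume (/dev/hdb2) that appear in the output above:

# Create the volume group "cluster01vg" on the physical volume
# initialised by pvcreate above (LVM1 defaults, e.g. 4 MB extents):
vgcreate cluster01vg /dev/hdb2

With the defaults, the 4000 MB lvcreate requests divide evenly into extents, which matches the 3.91 GB (4000/1024) that lvscan reports per volume further down.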
[root@cluster01 root]# mkfs -t ext3 /dev/cluster01vg/userg
mke2fs 1.27 (8-Mar-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
512000 inodes, 1024000 blocks
51200 blocks (5.00%) reserved for the super user
First data block=0
32 block groups
32768 blocks per group, 32768 fragments per group
16000 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

The error messages I've been getting are all a variation on the following:

[root@cluster01 root]# lvdisplay /dev/cluster01vg/userg
lvdisplay -- volume group "cluster01vg" of logical volume
"/dev/cluster01vg/userg" is not active
lvdisplay -- try -D, please

[root@cluster01 root]# lvscan
lvscan -- volume group "cluster01vg" is NOT active; try -D
lvscan -- no logical volumes found

Using lvscan -D:

[root@cluster01 root]# lvscan -D
lvscan -- reading all physical volumes (this may take a while...)
lvscan -- inactive "/dev/cluster01vg/userg" [3.91 GB]
lvscan -- inactive "/dev/cluster01vg/work7" [3.91 GB]
lvscan -- 2 logical volumes with 7.81 GB total in 1 volume group
lvscan -- 2 inactive logical volumes

Anyone with any ideas? (One possible fix is sketched at the end of this message, after the quoted thread.)

Andy

-----Original Message-----
From: Ken Rossman [mailto:rossman@xxxxxxxxxxxx]
Sent: Monday, February 09, 2004 2:42 PM
To: redhat-list@xxxxxxxxxx
Cc: Ken Rossman
Subject: Re: lvm on RH8

On Monday, February 9, 2004, at 09:33 AM, Cannon, Andrew wrote:
> I'm trying to get lvm working on one of our systems. I've gone through
> the steps (as given in the man pages) and tried to mount the filesystem.
> I couldn't mount the filesystem.

What error messages do you get? This would be helpful in debugging.

Dumb question, but did you build a file system on the newly created
logical volume?
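For anyone hitting the same symptoms: under LVM1 (as shipped with RH8), lvscan reporting the volume group as "NOT active" while lvscan -D still finds the volumes on disk normally just means the group was never activated after creation. A minimal sketch of the activation and mount steps; the volume group name is taken from the output above, and /userg and /work7 are assumed to be the intended mount points:

# Rebuild /etc/lvmtab from the on-disk LVM metadata:
vgscan

# Activate the volume group and, with it, its logical volumes:
vgchange -a y cluster01vg

# lvscan should now show both volumes as ACTIVE; then mount them:
mount /dev/cluster01vg/work7 /work7
mount /dev/cluster01vg/userg /userg

One other thing worth checking: the fdisk transcript above shows that partition 2 ended up with id 0 (Empty) rather than 8e (Linux LVM), because fdisk expects the bare hex code (8e) and treated the entry 0x8e as 0. Re-running fdisk's t command and entering plain 8e, then rebooting so the kernel re-reads the table (see the "error 16" warning earlier), may also be needed before vgscan behaves reliably.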