Problem using LVM2

Hello all,

I've got an 80GB HD partitioned automatically during Fedora Core 4 setup. The disk carries 8 partitions: the first through fourth (/dev/hda1 to /dev/hda4) are dedicated to the Windows OS, /dev/hda5 is mounted as /boot, /dev/hda6 as /home, /dev/hda7 as swap, and /dev/hda8 as /.

I have lvm2 installed on my system.

Now, can I use LVM2 on top of all this, and if so, how?

When I issued the command:

lvm> pvcreate /dev/hda5

the following message was displayed:

  /var/lock/lvm/P_orphans: open failed: Permission denied
  Can't get lock for orphan PVs
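
That error usually just means lvm was not run as root: it cannot create
its lock file under /var/lock/lvm.  A minimal check, assuming root access
via su:

su -
lvm
lvm> pvcreate /dev/hda5

Beware, though: pvcreate on /dev/hda5 would overwrite the ext3 filesystem
currently mounted as /boot.  An existing partition cannot be turned into
a PV while keeping its contents.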

Please reply.

Thanks & regards
neelima
On 12/12/05, linux-lvm-request@redhat.com <linux-lvm-request@redhat.com> wrote:
Send linux-lvm mailing list submissions to
         linux-lvm@redhat.com

To subscribe or unsubscribe via the World Wide Web, visit
        https://www.redhat.com/mailman/listinfo/linux-lvm
or, via email, send a message with subject or body 'help' to
        linux-lvm-request@redhat.com

You can reach the person managing the list at
        linux-lvm-owner@redhat.com

When replying, please edit your Subject line so it is more specific
than "Re: Contents of linux-lvm digest..."


Today's Topics:

   1. Re: LVM onFly features (Michael Loftis)
   2. Re: LVM onFly features (Nathan Scott)
   3. Converting LVM back to Ext2? (Andrey Subbotin)
   4. Re: Converting LVM back to Ext2? (Anil Kumar Sharma)
   5. Re: Newbie of LVM (Alasdair G Kergon)


----------------------------------------------------------------------

Message: 1
Date: Sun, 11 Dec 2005 18:14:39 -0700
From: Michael Loftis <mloftis@wgops.com>
Subject: Re: LVM onFly features
To: Nathan Scott <nathans@sgi.com>
Cc: linux-xfs@oss.sgi.com, linux-lvm@redhat.com
Message-ID: <9CD94D4B0F3B63057B4C2BC0@dhcp-2-206.wgops.com>
Content-Type: text/plain; charset=us-ascii; format=flowed



--On December 12, 2005 9:15:39 AM +1100 Nathan Scott <nathans@sgi.com>
wrote:


>> XFS has terrible unpredictable performance in production.  Also it has
>> very
>
> What on earth does that mean?  Whatever it means, it doesn't
> sound right - can you back that up with some data please?

The worst problems we had were most likely related to running out
of journal transaction space.  When XFS was under high transaction load,
sometimes it would just hang everything syncing meta-data.  From what I
understand this has supposedly been dealt with, but we were still having
these issues when we decommissioned the last XFS-based server a year ago.
Another data point is the fact that we primarily served via NFS, which XFS
(at least at the time) still didn't behave well with; I never did see any
good answers on that, as I recall.

>
>> bad behavior when recovering from crashes,
>
> Details?  Are you talking about this post of yours:
> http://oss.sgi.com/archives/linux-xfs/2003-06/msg00032.html

That particular behavior happened a lot.  And it wasn't so much annoying
that it happened as that it happened after the system claimed it was
clean.  Further, yes, that hardware has been fully checked out.  There's
nothing wrong with the hardware.  I wish there were; that would honestly
make me feel better.  The only thing I can reason is bugs in the XFS
fsck/repair tools, or *maybe* an interaction between XFS and the DAC960
controller, or NFS.  The fact that XFS has weird interactions with NFS at
all bugs me, but I don't understand the code involved well enough.  There
might be a decent reason.

>
> There have been several fixes in this area since that post.
>
>> oftentimes its tools totally fail to clean the filesystem.
>
> In what way?  Did you open a bug report?
>
>> It also needs larger kernel stacks because
>> of some of the really deep call trees,
>
> Those have been long since fixed as far as we are aware.  Do you
> have an actual example where things can fail?

We pulled it out of production and replaced XFS with Reiser.  At the time,
Reiser was far more mature on Linux.  XFS's Linux implementation (in
combination with other work in the block layer, as you mention later) may
have matured to at least a similar (possibly greater) point now.  I've
just personally lost more data to XFS than to Reiser.  I've also had
problems with ext3 in the (now distant) past while it was still teething.


>> so when you use it with LVM or MD it
>> can oops unless you use the larger kernel stacks.
>
> Anything can oops in combination with enough stacked device drivers
> (although there has been block layer work to resolve this recently,
> so you should try again with a current kernel...).  If you have an
> actual example of this still happening, please open a bug or at least
> let the XFS developers know of your test case.  Thanks.

That was actually part of the problem.  There was no time, and no hardware,
to try to reproduce the problem in the lab.  This isn't an XFS problem
specifically; it's really an open source problem.  If you encounter a bug,
and you're unlucky enough to be a bit of an edge case, you had better be
prepared to pony up hardware and man-hours to diagnose and reproduce it,
or it might not get fixed.  Again, though, this is common to the whole
open source community, and not specific to XFS, Linux, LVM, or any other
project.

Having said that, if you can reproduce it, and get good details, the open
source community has a far better track record of *really* fixing and
addressing bugs than any commercial software.

>
>> We also have had
>> problems with the quota system but the details on that have faded.
>
> Seems like details of all the problems you described have faded.
> Your mail seems to me like a bit of a troll ... I guess you had a
> problem or two a couple of years ago (from searching the lists)
> and are still sore.  Can you point me to mailing list reports of
> the problems you're referring to here or bug reports you've opened
> for these issues?  I'll let you know if any of them are still
> relevant.

No, we had dozens actually.  The only ones that were really crippling were
when XFS would suddenly unmount in the middle of the business day for no
apparent reason.  Without details, bug reports are ignored, and we couldn't
really provide details or filesystem dumps because there was too much data
and we had to get it back online.  We just moved away from XFS as fast as
we could.  It wasn't just a one-day thing, or a week; there was a trail of
crashes with XFS at the time.  Sometimes the machine was so locked up from
XFS pulling the rug out that the console was wedged up pretty badly too.

I wanted to provide the information as a data point from the other side,
as it were, not to get into a pissing match with the XFS developers and
community.  XFS is still young, as is ReiserFS.  And while Reiser is a
completely new FS and XFS has roots in IRIX and other implementations,
their ages are similar, since XFS's Linux implementation is around the
same age.  If the state has changed in the last 6-12 months, then so much
the better.  The facts are that XFS had many problems during operation,
and still had many unresolved problems as we pulled it out and replaced
it with ReiserFS.  And Reiser has been flawless except for one problem,
already mentioned on linux-lvm, very clearly caused by an external
SAN/RAID problem which EMC has corrected.  (Completely as an aside:
anyone running a CX series REALLY needs to be on the latest code rev.
You might never run into the bug, and I'm still not sure exactly which
one we hit; there were at least two that could have caused the data
corruption.  But if you do run into it, it can be ugly.)


The best guess I have as to why we had such a bad time with XFS is that
the XFS+NFS interaction, and possibly an old (unknown to me -- this is
just a guess) bug that created some minor underlying corruption the
repair tools couldn't fully fix or diagnose, may have caused our continual
(seemingly random) problems.  I don't believe in truly random problems,
at least not in computers anyway.

>
> cheers.
>
> --
> Nathan
>



--
"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler



------------------------------

Message: 2
Date: Mon, 12 Dec 2005 13:28:30 +1100
From: Nathan Scott <nathans@sgi.com>
Subject: Re: LVM onFly features
To: Michael Loftis <mloftis@wgops.com>
Cc: linux-xfs@oss.sgi.com, linux-lvm@redhat.com
Message-ID: <20051212132830.A7432365@wobbly.melbourne.sgi.com>
Content-Type: text/plain; charset=us-ascii

On Sun, Dec 11, 2005 at 06:14:39PM -0700, Michael Loftis wrote:
> --On December 12, 2005 9:15:39 AM +1100 Nathan Scott <nathans@sgi.com>
> The worst problems we had were most likely related to running out
> of journal transaction space.  When XFS was under high transaction load

Can you define "high load" for your scenario?

> sometimes it would just hang everything syncing meta-data.  From what I

There is no situation in which XFS will "hang everything".  A process
that is modifying the filesystem may be paused briefly waiting for space
to become available in the log, and that involves flushing the in-core
log buffers.  But only processes that need log space will be paused
waiting for that (relatively small) write to complete.  This is also not
a behaviour peculiar to XFS, and with suitable tuning in terms of mkfs/
mount/sysctl parameters, it can be completely controlled.
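
As a sketch of the kind of tuning meant here (the device name is
hypothetical and the values illustrative, not recommendations):

mkfs.xfs -l size=64m /dev/vg0/data                      # larger on-disk log
mount -o logbufs=8,logbsize=262144 /dev/vg0/data /data  # more/bigger in-core log buffers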

> understand this has supposedly been dealt with, but we were still having
> these issues when we decommissioned the last XFS based server a year ago.

I'd like some more information describing your workload there if
you could provide it.  Thanks.

> Another data point is the fact we primarily served via NFS, which XFS
> (at least at the time) still didn't behave great with, I never did see any
> good answers on that as I recall.

Indeed.  Early 2.6 kernels did have XFS/NFS interaction problems,
with NFS using generation number zero as "magic", and XFS using
that as a valid gen number.  That was fixed a long time ago.

> controller, or NFS.  The fact that XFS has weird interactions with NFS at
> all bugs me, but I don't understand the code involved well enough.  There
> might be a decent reason.

No, there's no reason, and XFS does not have "weird interactions"
with NFS.

> >> It also needs larger kernel stacks because
> >> of some of the really deep call trees,
> >
> > Those have been long since fixed as far as we are aware.  Do you
> > have an actual example where things can fail?
>
> We pulled it out of production and replaced XFS with Reiser.  At the time
> Reiser was far more mature on Linux.  XFS Linux implementation (in

Not because of 4K stacks though, surely?  That kernel option wasn't around
then, I think, and the reiserfs folks have had a bunch of work to do in
that area too.

> > Seems like details of all the problems you described have faded.
> > Your mail seems to me like a bit of a troll ... I guess you had a
> > problem or two a couple of years ago (from searching the lists)
> > and are still sore.  Can you point me to mailing list reports of
> > the problems you're referring to here or bug reports you've opened
> > for these issues?  I'll let you know if any of them are still
> > relevant.
>
> No, we had dozens actually.  The only ones that were really crippling were
> when XFS would suddenly unmount in the middle of the business day for no
> apparent reason.  Without details bug reports are ignored, and we couldn't

The NFS issue had the unfortunate side effect of causing filesystem
corruption and hence forced filesystem shutdowns would result.  There
were also bugs on that error handling path, so probably you hit two
independent XFS bugs on a pretty old kernel version.

> I wanted to provide the information as a data point from the other side,
> as it were, not to get into a pissing match with the XFS developers and
> community.

You were claiming long-resolved issues that existed in an XFS version
from an early 2.6 kernel as still relevant.  That is quite misleading,
and doesn't provide useful information to anyone.

cheers.

--
Nathan



------------------------------

Message: 3
Date: Mon, 12 Dec 2005 15:25:23 +0700
From: Andrey Subbotin <eploko@gmail.com>
Subject: [linux-lvm] Converting LVM back to Ext2?
To: linux-lvm@redhat.com
Message-ID: <45980936.20051212152523@gmail.com>
Content-Type: text/plain; charset=us-ascii

Hello all.

I've got a 200GB HD partitioned automatically during Fedora Core 4 setup. That is, the disk carries 2 partitions: the first one (/dev/sda1) is ext3, mounted as /boot, and the second one (/dev/sda2) is an LVM physical volume.

That is all clear and fancy, but the problem is that I need to migrate the HD to an Ext2 FS, so that I can later convert it to FAT32 and access it from a copy of the Windows OS I have recently had to boot to do some work. The LVM on /dev/sda2 is full of data I need to save, and the problem is I don't have a spare HD to temporarily copy all those 200GB to.

If I had a spare HD, I would simply mount it, make a new Ext2 partition on it, and then copy all the data from the LogicalVolume to the new partition. Then I would fire up fdisk and kill the LVM, thus freeing the space on the drive. Then moving the data back to the first HD would be a snap. But without a spare disk, I face a real challenge.

My initial idea was to reduce the FS inside the LogicalVolume (it has ~40GB of free space), then reduce the size of the LogicalVolume, and then reduce the PhysicalVolume /dev/sda2 by the freed number of cylinders. Then I would create an ext2 partition over the freed cylinders and move some files from the LogicalVolume onto it. I thought I would repeat the process several times, effectively migrating my data from the ever-shrinking LVM to the ever-growing plain Ext2 FS.
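
The first two steps of that plan would look roughly like this; a minimal
sketch, assuming the FC4 default names VolGroup00/LogVol00 (yours may
differ) and that the filesystem is unmounted first, e.g. from a rescue CD:

umount /dev/VolGroup00/LogVol00
e2fsck -f /dev/VolGroup00/LogVol00            # always fsck before a shrink
resize2fs /dev/VolGroup00/LogVol00 35700000   # shrink the FS (4k blocks)
lvreduce -L 140G /dev/VolGroup00/LogVol00     # then the LV, never below the FS size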

The problem is that I have little idea how to shrink the LVM partition on /dev/sda2, and there seems to be very little information about this on the net.

So far, I have lvreduce'd the FS inside the LogicalVolume and the LogicalVolume itself to 35700000 4k blocks. Now, how do I reduce the number of cylinders occupied by the LVM on /dev/sda?

I would really appreciate any help or ideas.
Thanks a lot in advance.

--
See you,
Andrey

ICQ UIN: 114087545
Journal: http://www.livejournal.com/users/e_ploko/



------------------------------

Message: 4
Date: Mon, 12 Dec 2005 19:50:30 +0530
From: Anil Kumar Sharma <xplusaks@gmail.com>
Subject: Re: Converting LVM back to Ext2?
To: Andrey Subbotin <eploko@gmail.com>, LVM general discussion and
        development <linux-lvm@redhat.com>
Message-ID:
        <52fe6b680512120620m2d9d462erdc37b7f3d79183de@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

You may reduce the LV size and get some free space, but that space will
still be lying on the PV, and there is no pvreduce (AFAIK).

So I think (I may be wrong) you are out of luck.  To get your luck back,
you need some temporary storage for your data while you change the size
of, or convert, the PV.

You see, LVM is good for multiple partitions and multiple disks;
everything in multiples.  That's the playground for LVM.
When you (re)install FC4 or FC5, put your Linux space on multiple PVs.
I would suggest utilizing all 4 primary partitions:
1. /boot, 2. dual boot (if required), else a PV, and 3. and 4. also PVs.
Swap goes in LVM.
LVM can make them look like one partition, or like whatever partitions
(LVs) you desire, which you can change as per your requirements, even
for dual boot, as sketched below.
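
A minimal sketch of that layout, with hypothetical device names and
illustrative sizes:

pvcreate /dev/sda3 /dev/sda4       # primary partitions 3 and 4 become PVs
vgcreate vg0 /dev/sda3 /dev/sda4   # one volume group spanning both
lvcreate -L 1G -n swap vg0         # swap lives inside LVM
mkswap /dev/vg0/swap
lvcreate -L 20G -n root vg0        # carve out further LVs as desired
mkfs.ext3 /dev/vg0/root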

Hard luck with "auto partition" - it is good for itself!  A smart fellow,
not caring for our moods.




On 12/12/05, Andrey Subbotin <eploko@gmail.com> wrote:
>
> Hello all.
>
> I've got a 200GB HD partitioned automatically during Fedora Core 4 setup.
> That is the disk carries 2 partitions where the first one (/dev/sda1) is
> ext3 mounted as /boot and the second one (/dev/sda2) is an LVM.
>
> That is all clear and fancy but the problem is I'm faced with the fact I
> need to migrate the HD to Ext2 FS, so I could convert it to FAT32 later, so
> I could access it from a copy of the Windows OS I happen to boot recently to
> do some work. The LVM on /dev/sda2 is full of data I need to save and the
> problem is I don't have a spare HD to temporarily copy all those 200GB to.
>
> If I had a spare HD I would eventually mount it, make a new Ext2 partition
> on it and then copy all the data from the LogicalVolume to the new
> partition. Then I would fire up fdisk and kill the LVM, thus freeing the
> space on the drive. Then, moving the data back to the first HD would be a
> snap. But without a spare disk I face a real challenge.
>
> My initial idea was to reduce the FS inside the LogicalVolume (it has
> ~40GB free of space) and then reduce the size of the LogicalVolume and then
> reduce the PhysicalVolume /dev/sda2 by the freed number of cylinders. Then,
> I would create an ext2 partition over the freed cylinders and move some
> files from the LogicalVolume onto it. Then I thought I would repeat the
> process several times effectively migrating my data from the ever-shrinking
> LVM to the ever-growing plain Ext2 FS.
>
> The problem is I have little idea how I can shrink an LVM partition on
> /dev/sda2. And there seem to be very little information on this on the net.
>
> So far, I have lvreduce'd the FS inside the LogicalVolume and the
> LogicalVolume itself to 35700000 4k blocks. Now, how do I reduce the
> number of cylinders occupied by the LVM on /dev/sda?
>
> I would really appreciate any help or ideas.
> Thanks a lot in advance.
>
> --
> See you,
> Andrey
>
> ICQ UIN: 114087545
> Journal: http://www.livejournal.com/users/e_ploko/
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>



--
Anil Kumar Sharma

------------------------------

Message: 5
Date: Mon, 12 Dec 2005 15:21:26 +0000
From: Alasdair G Kergon <agk@redhat.com>
Subject: Re: Newbie of LVM
To: LVM general discussion and development <linux-lvm@redhat.com>
Message-ID: <20051212152126.GA25866@agk.surrey.redhat.com>
Content-Type: text/plain; charset=us-ascii

On Fri, Dec 09, 2005 at 02:12:43PM -0500, Matthew Gillen wrote:
> Way Loss wrote:

> > /dev/md5              153G  119G   27G  82% /www

> >     My md5 is almost full and I wanna use LVM to merge
> > my md5 with a new partition from a new hdd. I wanna
> > ask if it is possible for LVM to merge 2 partitions
> > together while one of them has data on it? I can't
> > suffer any data loss and want to make sure that LVM
> > works perfectly for what I want.

> You're out of luck.  You can't take an existing partition and keep the
> data yet switch it over to LVM.

See also:
  https://www.redhat.com/archives/linux-lvm/2005-October/msg00110.html

Alasdair
--
agk@redhat.com



------------------------------

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm

End of linux-lvm Digest, Vol 22, Issue 12
*****************************************

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
