Re: [IMPORTANT]LVM+iSCSI issue..Local Disk disappeared..


 



We configured it as follows:

Case 1

pv1 = localdisk1, pv2 = localdisk2

vg1 = pv1, pv2

Created lv1 of size = pv1 size + pv2 size.

Deleted lv1.

Removed pv2.

The VG is displayed properly by the vgs command.
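In commands, that was roughly the following (device names are hypothetical, and I am assuming pv2 was removed with vgreduce):

    pvcreate /dev/sdb /dev/sdc        # pv1, pv2 on the two local disks
    vgcreate vg1 /dev/sdb /dev/sdc    # vg1 = pv1 + pv2
    lvcreate -l 100%FREE -n lv1 vg1   # lv1 spans both PVs
    lvremove vg1/lv1                  # delete lv1
    vgreduce vg1 /dev/sdc             # take pv2 out of the VG first
    pvremove /dev/sdc                 # then wipe its PV label
    vgs                               # vg1 still displays cleanly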


Case 2

pv1 = localdisk1, pv2 = remotedisk2 (iSCSI disk)

vg1 = pv1, pv2

Created lv1 of size = pv1 size + pv2 size.

Deleted lv1.

Logged out from the iSCSI target.

The VG is NOT displayed properly by the vgs command.
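Case 2 in commands was roughly this (target IQN, portal and device names are hypothetical):

    iscsiadm -m node -T iqn.2008-10.com.example:disk2 -p 192.168.1.10 --login
    pvcreate /dev/sdb /dev/sdc        # pv1 = local disk, pv2 = the iSCSI disk
    vgcreate vg1 /dev/sdb /dev/sdc
    lvcreate -l 100%FREE -n lv1 vg1
    lvremove vg1/lv1
    iscsiadm -m node -T iqn.2008-10.com.example:disk2 -p 192.168.1.10 --logout
    vgs                               # vg1 now complains about a missing PV

Note that, unlike case 1, there is no vgreduce before the disk goes away - the PV simply disappears underneath the VG.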


On Wed, Oct 15, 2008 at 9:36 PM, Peter Larsen <plarsen@ciber.com> wrote:
On Wed, 2008-10-15 at 18:50 +0530, himanshu padmanabhi wrote:
>
> I am using "iscsi-initiator-utils-6.2.0.865-0.2.fc7" as the initiator and
> "iscsitarget-0.4.15-1" as the target.
>
>
> Following is the scenario
>
>
> PV  =  local_disk1  remote_disk1  (i.e. the PV is formed using 2 disks,
> 1 local and 1 from the iscsi target)

Strange construction?
Why not simply format each target as a separate PV and then join them in
a single VG? Much easier to manage.
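
A minimal sketch of what I mean, with hypothetical device names (/dev/sdc being the iSCSI disk):

    pvcreate /dev/sdb                 # the local disk becomes its own PV
    pvcreate /dev/sdc                 # the iSCSI disk becomes its own PV
    vgcreate vg1 /dev/sdb /dev/sdc    # join the separate PVs in one VG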

That aside - I wouldn't mix and match that way. You're getting very
different response times and security issues on each device. I would
treat them very differently.

> VG  =  Vgname

You need to assign the PVs to your VG.
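
For instance, assuming the VG already exists and /dev/sdd is a new disk (both names hypothetical):

    pvcreate /dev/sdd                 # label the disk as a PV
    vgextend vg1 /dev/sdd             # assign the PV to the existing VG
    vgs -o +pv_count vg1              # confirm the VG now counts one more PV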

> LV =  lv_localdisk1  lv_localremotedisk1  lv_remotedisk1
> (i.e. 1 LV only from the local disk, 1 from the remote iscsi target,
> and one from a combination of both)

That makes no sense. You don't use physical volumes when you create
logical ones. You use groups. You don't assign physical devices like
that when you create LVs.
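
To be concrete: an LV is allocated from the VG's pool of extents, not from a named disk. Roughly (name and size hypothetical):

    lvcreate -L 10G -n lv_data vg1    # 10G from vg1, wherever the extents live

If you really must pin an LV to one device, lvcreate does accept PVs as trailing arguments (lvcreate -L 10G -n lv_data vg1 /dev/sdb), but even then you address the VG, not the PV.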

> I performed a "logout" operation on "remote_disk1" after deactivating
> "lv_localremotedisk1" and "lv_remotedisk1" on it using the
> "lvchange" command.
>
>
> Then the result I should at least get is:
>
>
> PV  =  local_disk1    (remote_disk1 is removed now)
>
>
> VG = Vgname
>
>
> LV = lv_localdisk1     (so LVs that use "remote_disk1" as a PV are
> deactivated, whether alone or in combination with the local disk)

No - things don't work that way. If you damage or remove a PV from a VG,
everything in that VG gets disabled until the whole VG is operational
again. It doesn't matter whether your LV is in the damaged area or not.
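
If the missing PV is truly gone for good, the usual (destructive) way back to a working VG is something like:

    vgreduce --removemissing vg1      # drop the missing PV from vg1's metadata
                                      # (refuses if LVs still have extents on it)
    vgchange -ay vg1                  # the surviving LVs can then activate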

> i.e. they were all lost temporarily.

Not temporarily. As long as the VG is bad, nothing is there. You should
still see all available PVs, but the way you set it up, you didn't put a
PV on each disk, so when the disk group sees one disk missing the whole
disk group goes offline: you lose your PV, your VG gets deactivated -
exactly the result you see.
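
The one escape hatch is partial activation - something like

    vgchange -ay --partial vg1        # activate what it can; missing areas
                                      # are replaced by an error target

which is useful for salvage, not for normal operation.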

> When I logged in to the same target and activated the LVs on
> "remote_disk1", I got my original configuration back, i.e.

Because your PV is now present, the VG can activate etc.
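
I.e., once you log back in, something like this (IQN and portal are hypothetical) brings everything back:

    iscsiadm -m node -T iqn.2008-10.com.example:disk2 -p 192.168.1.10 --login
    pvscan                            # let LVM re-detect the returned PV
    vgchange -ay vg1                  # the VG and all its LVs activate again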


---
Regards
   Peter Larsen

We have met the enemy, and he is us.
               -- Walt Kelly




--
Regards,
Himanshu Padmanabhi

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
