Heinz,
I thought I would forward this one to you just in case there is a bug in the
delete handling. It is probably just my limited understanding of dmraid, but I
thought that after I deleted the partition in cfdisk and then deactivated the
device (nvidia_ecaejfdi), the node nvidia_ecaejfdip9 would disappear. It is
still there. How do I get rid of it? This is on Arch Linux. Here is a bit more
info about my setup:
[14:17 archangel:/home/david/archlinux/dmraid] # l /dev/mapper
total 0
drwxr-xr-x 2 root root 0 2009-06-22 13:44 .
drwxr-xr-x 23 root root 0 2009-06-22 13:44 ..
crw-rw---- 1 root root 10, 60 2009-06-22 01:40 control
brw------- 1 root disk 254, 0 2009-06-22 13:48 nvidia_ecaejfdi
brw------- 1 root disk 254, 2 2009-06-22 01:40 nvidia_ecaejfdip5
brw------- 1 root disk 254, 3 2009-06-22 01:40 nvidia_ecaejfdip6
brw------- 1 root disk 254, 4 2009-06-22 01:40 nvidia_ecaejfdip7
brw------- 1 root disk 254, 5 2009-06-22 01:40 nvidia_ecaejfdip8
brw------- 1 root disk 254, 10 2009-06-22 13:44 nvidia_ecaejfdip9
(I have deleted this in cfdisk) ^^^^^^^^^^^^^^^^^^
brw------- 1 root disk 254, 1 2009-06-22 01:40 nvidia_fdaacfde
brw------- 1 root disk 254, 6 2009-06-22 01:40 nvidia_fdaacfdep5
brw------- 1 root disk 254, 7 2009-06-22 01:40 nvidia_fdaacfdep6
brw------- 1 root disk 254, 8 2009-06-22 01:40 nvidia_fdaacfdep7
brw------- 1 root disk 254, 9 2009-06-22 01:40 nvidia_fdaacfdep8
[14:13 archangel:/home/david/archlinux/dmraid] # dmraid -rd -v -v
NOTICE: /dev/sdd: asr discovering
NOTICE: /dev/sdd: ddf1 discovering
NOTICE: /dev/sdd: hpt37x discovering
NOTICE: /dev/sdd: hpt45x discovering
NOTICE: /dev/sdd: isw discovering
NOTICE: /dev/sdd: jmicron discovering
NOTICE: /dev/sdd: lsi discovering
NOTICE: /dev/sdd: nvidia discovering
NOTICE: /dev/sdd: nvidia metadata discovered
NOTICE: /dev/sdd: pdc discovering
NOTICE: /dev/sdd: sil discovering
NOTICE: /dev/sdd: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: nvidia metadata discovered
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: nvidia metadata discovered
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: nvidia metadata discovered
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
INFO: RAID devices discovered:
/dev/sdd: nvidia, "nvidia_ecaejfdi", mirror, ok, 1465149166 sectors, data@ 0
/dev/sdc: nvidia, "nvidia_fdaacfde", mirror, ok, 976773166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_ecaejfdi", mirror, ok, 1465149166 sectors, data@ 0
/dev/sda: nvidia, "nvidia_fdaacfde", mirror, ok, 976773166 sectors, data@ 0
How do I remove the nvidia_ecaejfdip9 entry?
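The only workaround I can think of is dropping the stale mapping with dmsetup
directly, since a dmraid partition node is just a device-mapper mapping named
"<set>p<N>". A sketch, untested on my side (the part_node helper is mine, not
part of dmraid; verify with dmsetup info before removing anything):

```shell
# Sketch of a possible workaround: once the partition is gone from the
# table, dmraid -an no longer sees it, so the leftover mapping would
# have to be removed with dmsetup directly.

# Build the /dev/mapper node name dmraid uses for partition N of a set
# (dmraid appends "p<N>" to the set name, e.g. nvidia_ecaejfdip9).
part_node() {
    printf '%sp%s\n' "$1" "$2"
}

# As root, after checking the open count is 0:
#   dmsetup info   "$(part_node nvidia_ecaejfdi 9)"
#   dmsetup remove "$(part_node nvidia_ecaejfdi 9)"
```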
--
David C. Rankin, J.D.,P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com
On Monday 22 June 2009 03:13:41 am Tobias Powalowski wrote:
> I need to explain this a bit more:
>
<snip>
>
> Now I cfdisk this device; are the nodes updated then?
> - I mean, do I get /dev/mapper/vidia_fffadgicp1, p2, p3 automatically, or
> do I need to run dmraid -ay for every partition I created?
> - Also what happens if partitions are deleted?
>
ADDING NEW PARTITION (PART9) - 10G IN SIZE:
cfdisk (util-linux-ng 2.14.2)
Disk Drive: /dev/mapper/nvidia_ecaejfdi
Size: 750156372992 bytes, 750.1 GB
Heads: 255   Sectors per Track: 63   Cylinders: 91201

    Name              Flags  Part Type  FS Type               [Label]  Size (MB)
    ------------------------------------------------------------------------------
    nvidia_ecaejfdi5  Boot   Logical    Linux ext3                      20003.85 *
    nvidia_ecaejfdi6         Logical    Linux ext3                        123.38
    nvidia_ecaejfdi7         Logical    Linux ext3                      39999.54
    nvidia_ecaejfdi8         Logical    Linux swap / Solaris             1998.75
    nvidia_ecaejfdi9         Logical    Linux                           10001.95
                             Pri/Log    Free Space                     678026.29
(write & quit)
New node NOT created:
[13:39 archangel:~] # l /dev/mapper
total 0
drwxr-xr-x 2 root root 0 2009-06-22 01:40 .
drwxr-xr-x 23 root root 0 2009-06-22 01:41 ..
crw-rw---- 1 root root 10, 60 2009-06-22 01:40 control
brw------- 1 root disk 254, 0 2009-06-22 13:38 nvidia_ecaejfdi
brw------- 1 root disk 254, 2 2009-06-22 01:40 nvidia_ecaejfdip5
brw------- 1 root disk 254, 3 2009-06-22 01:40 nvidia_ecaejfdip6
brw------- 1 root disk 254, 4 2009-06-22 01:40 nvidia_ecaejfdip7
brw------- 1 root disk 254, 5 2009-06-22 01:40 nvidia_ecaejfdip8
<snip>
[13:44 archangel:~] # dmraid -ay nvidia_ecaejfdi
RAID set "nvidia_ecaejfdi" already active
RAID set "nvidia_ecaejfdip5" already active
RAID set "nvidia_ecaejfdip6" already active
RAID set "nvidia_ecaejfdip7" already active
RAID set "nvidia_ecaejfdip8" already active
RAID set "nvidia_ecaejfdip9" was activated
[13:44 archangel:~] # l /dev/mapper
total 0
drwxr-xr-x 2 root root 0 2009-06-22 13:44 .
drwxr-xr-x 23 root root 0 2009-06-22 13:44 ..
crw-rw---- 1 root root 10, 60 2009-06-22 01:40 control
brw------- 1 root disk 254, 0 2009-06-22 13:38 nvidia_ecaejfdi
brw------- 1 root disk 254, 2 2009-06-22 01:40 nvidia_ecaejfdip5
brw------- 1 root disk 254, 3 2009-06-22 01:40 nvidia_ecaejfdip6
brw------- 1 root disk 254, 4 2009-06-22 01:40 nvidia_ecaejfdip7
brw------- 1 root disk 254, 5 2009-06-22 01:40 nvidia_ecaejfdip8
brw------- 1 root disk 254, 10 2009-06-22 13:44 nvidia_ecaejfdip9
<snip>
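So for adding, the pattern seems to be: write the table in cfdisk, then re-run
dmraid -ay to create the new node. A throwaway wrapper sketch of that step
(the refresh_set name is mine):

```shell
# Observed behavior above: cfdisk writes the new partition table, but
# the /dev/mapper node only appears after re-running activation.
# Hypothetical wrapper for that step:
refresh_set() {
    # dmraid -ay re-activates the set and creates mappings for any
    # partitions found in the freshly written table; already-active
    # mappings are simply reported as such.
    dmraid -ay "$1"
}

# usage (as root): refresh_set nvidia_ecaejfdi
```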
DELETING PARTITION (PART 9) - 10G IN SIZE
cfdisk (util-linux-ng 2.14.2)
Disk Drive: /dev/mapper/nvidia_ecaejfdi
Size: 750156372992 bytes, 750.1 GB
Heads: 255   Sectors per Track: 63   Cylinders: 91201

    Name              Flags  Part Type  FS Type               [Label]  Size (MB)
    ------------------------------------------------------------------------------
    nvidia_ecaejfdi5  Boot   Logical    Linux ext3                      20003.85 *
    nvidia_ecaejfdi6         Logical    Linux ext3                        123.38
    nvidia_ecaejfdi7         Logical    Linux ext3                      39999.54
    nvidia_ecaejfdi8         Logical    Linux swap / Solaris             1998.75
                             Pri/Log    Free Space                     688028.23
(write & quit)
Partition NOT removed from /dev/mapper:
[13:48 archangel:~] # l /dev/mapper
total 0
drwxr-xr-x 2 root root 0 2009-06-22 13:44 .
drwxr-xr-x 23 root root 0 2009-06-22 13:44 ..
crw-rw---- 1 root root 10, 60 2009-06-22 01:40 control
brw------- 1 root disk 254, 0 2009-06-22 13:48 nvidia_ecaejfdi
brw------- 1 root disk 254, 2 2009-06-22 01:40 nvidia_ecaejfdip5
brw------- 1 root disk 254, 3 2009-06-22 01:40 nvidia_ecaejfdip6
brw------- 1 root disk 254, 4 2009-06-22 01:40 nvidia_ecaejfdip7
brw------- 1 root disk 254, 5 2009-06-22 01:40 nvidia_ecaejfdip8
brw------- 1 root disk 254, 10 2009-06-22 13:44 nvidia_ecaejfdip9
<snip>
[13:59 archangel:~] # dmraid -an nvidia_ecaejfdi
[13:59 archangel:~] # l /dev/mapper
total 0
drwxr-xr-x 2 root root 0 2009-06-22 13:44 .
drwxr-xr-x 23 root root 0 2009-06-22 13:44 ..
crw-rw---- 1 root root 10, 60 2009-06-22 01:40 control
brw------- 1 root disk 254, 0 2009-06-22 13:48 nvidia_ecaejfdi
brw------- 1 root disk 254, 2 2009-06-22 01:40 nvidia_ecaejfdip5
brw------- 1 root disk 254, 3 2009-06-22 01:40 nvidia_ecaejfdip6
brw------- 1 root disk 254, 4 2009-06-22 01:40 nvidia_ecaejfdip7
brw------- 1 root disk 254, 5 2009-06-22 01:40 nvidia_ecaejfdip8
brw------- 1 root disk 254, 10 2009-06-22 13:44 nvidia_ecaejfdip9
<snip>
Huh??? Why wasn't nvidia_ecaejfdip9 deactivated? I have deleted the partition
in cfdisk, tried to activate (-ay) and deactivate (-an) the set, and it is
still there. Is this a bug, or do I need to erase the metadata in some other
way?
cc: Tobias
--
David C. Rankin, J.D.,P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com