Re: Block device naming

I understand that a block device name changing between reboots does not
matter by itself.
But as I understand it, in these cases the HCTL (Host:Channel:Target:Lun)
order of the devices also changed. Unfortunately, I did not capture the
HCTL order before the failure, so I cannot provide evidence; relying on
memory, though, the HCTL order before the failure was different in all
the cases presented.
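One way to capture the HCTL-to-device mapping ahead of the next
occurrence (a sketch, assuming lsscsi is installed; otherwise sysfs can
be walked directly) would be:
---
# record the current HCTL -> sdX mapping for later comparison
lsscsi > /root/hctl-$(date +%F).txt

# without lsscsi: /sys/block/sdX/device resolves to the H:C:T:L directory
for dev in /sys/block/sd*; do
    echo "$(basename "$dev")  $(basename "$(readlink -f "$dev/device")")"
done
---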
This is indirectly confirmed by how ZFS reports the state of the pool,
and it seems to depend on how the device was added (by scsi-id or by
wwn-id).
With scsi-id (when there were messages in dmesg about device changes),
the failure was shown as follows:
---
# zpool status
  pool: pool
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://zfsonlinux.org/msg/ZFS-8000-HC
  scan: scrub repaired 0 in 1h39m with 0 errors on Sun Oct  8 02:03:34 2017
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool                                      UNAVAIL      0     0     0  insufficient replicas
      scsi-3600144f0c7a5bc61000058d3b96d001d  FAULTED      3     0     0  too many errors

errors: 51 data errors, use '-v' for a list
---
Then, in the normal state, zpool status shows:
---
# zpool status
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 1h39m with 0 errors on Sun Oct  8 02:03:34 2017
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool                                      ONLINE       0     0     0
      scsi-3600144f0c7a5bc61000058d3b96d001d  ONLINE       0     0     0

errors: No known data errors
---
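To correlate the by-id name shown above with the kernel device currently
behind it, the symlinks under /dev/disk/by-id can be resolved (a sketch;
/dev/sdc below is just a placeholder for whichever sdX is of interest):
---
# which sdX currently sits behind this scsi-id?
readlink -f /dev/disk/by-id/scsi-3600144f0c7a5bc61000058d3b96d001d

# all udev-created aliases (scsi-, wwn-, by-path, ...) for a given device
udevadm info --query=symlink --name=/dev/sdc
---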

And in the other case, when the LUN was imported by wwn-id (and this
time without any errors in dmesg), zpool status in the error state was:
---
# zpool status
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 17h30m with 0 errors on Sun Apr 14 17:54:55 2019
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool1                                     ONLINE       0     0     0
      sdc                                     ONLINE       0     0     0

errors: No known data errors
---
In this status there are no errors, but the vdev is shown by its block
device name from /dev. Then, in the normal state, zpool status shows the
wwn-id from /dev/disk/by-id instead of the device name from /dev:
---
root@lpr11a:~# zpool status
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 17h30m with 0 errors on Sun Apr 14 17:54:55 2019
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool1                                     ONLINE       0     0     0
      wwn-0x600144f0b49c14d100005b7af8ee001c  ONLINE       0     0     0

errors: No known data errors
---
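If the pool should always report the stable wwn- name rather than an sdX
that can change across reboots, one possible approach (a sketch only; the
pool is briefly unavailable during the export/import) is to re-import it
from the by-id directory:
---
# re-import so the vdev paths are recorded via /dev/disk/by-id
zpool export pool1
zpool import -d /dev/disk/by-id pool1
---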

P.S. I would also like to note that /dev/disk does not reflect reality:
SSDs are not disks.

On Thu, May 16, 2019 at 5:07 PM Hannes Reinecke <hare@xxxxxxx> wrote:
>
> On 5/16/19 3:49 PM, Alibek Amaev wrote:
> > I have another example from real life:
> > In Aug 2018 I started a server with storage attached over FC from a ZS3
> > and a ZS5 (Oracle ZFS Storage Appliance, not NetApp, which also exports
> > space as LUNs); the server uses one LUN from the ZS5. Recently all IO on
> > this exported LUN stopped and io-wait grew, with no errors or FC changes
> > in dmesg, no errors in /var/log/kern.log* or /var/log/syslog.log*, no
> > throttling, no EDAC errors or anything else.
> > But before the reboot I saw:
> > wwn-0x600144f0b49c14d100005b7af8ee001c -> ../../sdc
> > I tried to run partprobe and to copy some data from this block device to
> > /dev/null with dd; the operations never finished, IO was blocked.
> > And after the reboot I saw:
> > wwn-0x600144f0b49c14d100005b7af8ee001c -> ../../sdd
> > And the server ran fine.
> >
> > I also have a LUN exported from the storage in shared mode, accessible
> > to all servers over FC. This LUN is not needed at the moment, but now I
> > doubt it is possible to safely remove it...
> >
> It's all a bit conjecture at this point.
> 'sdc' could show up as 'sdd' after the next reboot, with no
> side-effects whatsoever.
> At the same time, 'sdc' could have been blocked by a host of reasons,
> none of which are related to the additional device being exported.
>
> It doesn't really look like an issue with device naming; you would need
> to do proper investigation on your server to figure out why I/O stopped.
> Device renaming is typically the least likely cause here.
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke                Teamlead Storage & Networking
> hare@xxxxxxx                                   +49 911 74053 688
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
> HRB 21284 (AG Nürnberg)



