RE: Single-drive RAID0


 




> -----Original Message-----
> From: NeilBrown [mailto:neilb@xxxxxxx]
> Sent: Tuesday, February 15, 2011 1:02 AM
> To: Wojcik, Krzysztof
> Cc: linux-raid@xxxxxxxxxxxxxxx
> Subject: Re: Single-drive RAID0
> 
> On Mon, 14 Feb 2011 17:04:38 +0000 "Wojcik, Krzysztof"
> <krzysztof.wojcik@xxxxxxxxx> wrote:
> > > It is possible that there is something subtle about the precise
> device
> > > geometry.
> > > Could you send me
> > >    sfdisk -l /dev/sda
> > > (or whatever device you are using)
> 
> You didn't include this information...

Disk /dev/sdc: 30515 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdc1          0+   6374    6375-  51207187   83  Linux
/dev/sdc2       6375   12749    6375   51207187+  83  Linux
/dev/sdc3          0       -       0          0    0  Empty
/dev/sdc4          0       -       0          0    0  Empty

> 
> > > This patch should fix that particular problem.  Let me know if you
> > > can still produce any of these errors with the patch applied.
> >
> >
> > Unfortunately the issue is still reproducible with the patch applied.
> > I tried to reproduce it on another setup (different PC and disks) as
> > well; the issue exists there too :(
> >
> > An interesting observation: when I stop the array just after creation
> > and then reassemble it, everything works fine.
> > On an older kernel version (I tried 2.6.34) the issue is NOT
> > reproducible either.
> >
> 
> All very strange.  And as I cannot reproduce it, it is very hard to
> debug.
> 
> Maybe some daemon is accessing the array at some awkward time and
> causing different behaviour for you...
> 
> If it is repeatable enough that you could try 'git bisect' to find
> which commit introduced the problem, that would be helpful but I
> suspect it would be very time-consuming.
> 
> It might help to put a "WARN_ON(1)" in the place where it prints
> "detected capacity change ..." so we get a stack trace and can see how
> it got there.
> That might give a hint to what is looping.
> Also a printk in md_open if it returns ERESTARTSYS would be
> interesting.
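[The instrumentation Neil suggests would look roughly like the fragment below. This is an editorial sketch, not part of the original mail: the exact file, function, and variable names (`bdev_size`, `disk_size`, `err`) are placeholders, and only the "detected capacity change" message text comes from the thread.]

```c
/* Wherever the kernel prints "detected capacity change ...": */
printk(KERN_INFO "detected capacity change from %lld to %lld\n",
       (long long)bdev_size, (long long)disk_size);
WARN_ON(1);   /* dump a stack trace so we can see how we got here */

/* And in md_open(), log the interesting return value: */
if (err == -ERESTARTSYS)
        printk(KERN_INFO "md_open: returning ERESTARTSYS\n");
```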

Attached is part of the logs from a kernel with WARN_ON(1) and the value returned by md_open()
(lines prefixed with "##### KW: err= x").

I am also trying to look in new areas. I've run:
udevd --debug --debug-trace

Logs from udev and the kernel are attached.
Maybe they will help to find a solution...
It seems that udev adds and removes the device in a loop...
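[One way to confirm such an add/remove loop is to watch kernel and udev events directly. This is an editorial sketch, not part of the original mail; `udevadm monitor` is the current command, and older udev releases shipped the same functionality as a separate `udevmonitor` binary.]

```shell
# Print kernel uevents and udev events as they happen;
# a repeating add/remove cycle for the md device will show up immediately.
udevadm monitor --kernel --udev | grep --line-buffered md
```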

Regards
Krzysztof

> 
> Thanks,
> NeilBrown

Attachment: kernel_log.tgz
Description: kernel_log.tgz

Attachment: kernel_with_udev.tgz
Description: kernel_with_udev.tgz

Attachment: udev_logs.tgz
Description: udev_logs.tgz

