RE: Single-drive RAID0

> -----Original Message-----
> From: NeilBrown [mailto:neilb@xxxxxxx]
> Sent: Monday, February 21, 2011 3:15 AM
> To: Wojcik, Krzysztof
> Cc: linux-raid@xxxxxxxxxxxxxxx
> Subject: Re: Single-drive RAID0
> 
> On Wed, 16 Feb 2011 16:57:06 +0000 "Wojcik, Krzysztof"
> <krzysztof.wojcik@xxxxxxxxx> wrote:
> 
> > >
> > > It looks like the 'bdev' passed to md_open keeps changing, which it
> > > shouldn't.
> > >
> > > If the above doesn't help, please add:
> > >
> > > 	printk("bdev=%p, mddev=%p, disk=%p dev=%x\n", bdev, mddev, mddev->gendisk, bdev->bd_dev);
> > >
> > > at the top of 'md_open', and see what it produces.
> >
> > I also tried this, but I can't see any useful information that would
> > give me an idea of what is happening.
> > Could you look at logs in attachment?
> >
> 
> 
> I've made some progress.
> 
> 
> The struct block_device is definitely getting freed and re-allocated.
> 
> This can happen if you
> 
>   mknod /dev/.tmp b 9 0
>   open /dev/.tmp and do stuff, then close
>   rm /dev/.tmp
> 
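Just to check that I understand the scenario above: the same lifecycle should be reproducible from userspace with something like the sketch below (the path and the 9:0 device numbers are simply the example from your mail; it needs root):

/* tmpnode.c - rough sketch of the /dev/.tmp pattern described above.
 * Create a transient node for md0 (major 9, minor 0), open and close it,
 * then unlink it; removing the node lets its inode be flushed from the
 * cache, dropping the reference it held on the struct block_device
 * (per the explanation above).
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(void)
{
	const char *path = "/dev/.tmp";
	int fd;

	if (mknod(path, S_IFBLK | 0600, makedev(9, 0)) < 0) {
		perror("mknod");
		return 1;
	}
	fd = open(path, O_RDONLY);
	if (fd < 0)
		perror("open");
	else
		close(fd);		/* "do stuff, then close" */
	if (unlink(path) < 0)		/* rm /dev/.tmp */
		perror("unlink");
	return 0;
}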
> 
> If you open something in /dev which stays there, e.g. /dev/md0, then as
> long as the inode for /dev/md0 remains in the inode cache it will hold
> a reference to the block_device, and so the same one will get reused.
> But when you 'rm /dev/.tmp', the inode gets flushed from the cache and
> the reference to the block_device is dropped.  If that was the only
> reference, then the block_device is discarded.
> 
> Each time a new block_device is created for a given block device, it
> will re-do the partition scan, which will mean that the partition
> devices get deleted and recreated.
> 
> Maybe you are racing with this somehow, and that is why the partition
> devices appear not to be there - udev is removing and recreating them.
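
One way I could try to confirm that the partition nodes really are being removed and recreated would be a small inotify watcher on /dev, roughly like this (the "md" name filter is just an example):

/* watch_dev.c - rough sketch: log create/delete events under /dev,
 * to see whether udev is removing and recreating the md partition nodes.
 * Illustrative only.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
	char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
	int fd = inotify_init();

	if (fd < 0) {
		perror("inotify_init");
		return 1;
	}
	if (inotify_add_watch(fd, "/dev", IN_CREATE | IN_DELETE) < 0) {
		perror("inotify_add_watch");
		return 1;
	}
	for (;;) {
		ssize_t len = read(fd, buf, sizeof(buf));
		ssize_t off = 0;

		if (len <= 0)
			break;
		while (off < len) {
			struct inotify_event *ev = (struct inotify_event *)(buf + off);

			if (ev->len && strncmp(ev->name, "md", 2) == 0)
				printf("%s %s\n",
				       (ev->mask & IN_CREATE) ? "created" : "deleted",
				       ev->name);
			off += sizeof(*ev) + ev->len;
		}
	}
	return 0;
}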
> 
> Something you could try would be to change fs/block_dev.c.
> At about line 470 you should find:
> 
> static const struct super_operations bdev_sops = {
> 	.statfs = simple_statfs,
> 	.alloc_inode = bdev_alloc_inode,
> 	.destroy_inode = bdev_destroy_inode,
> 	.drop_inode = generic_delete_inode,
> 	.evict_inode = bdev_evict_inode,
> };
> 
> If you change the 'drop_inode' line to
>         .drop_inode = do_not_drop_inode,
> 
> and define a function:
> 
> 
> static int do_not_drop_inode(struct inode *inode)
> {
> 	return 0;
> }
> 
> That will cause the block_device to remain in a cache for a while
> after the last reference is dropped.
> That should make your symptoms go away.
> 
> I'm not sure that is the "right" fix, but it should confirm what I
> suspect is happening.

Hi,

I tested your proposal and it resolves the problem.
What are your plans from here?
Do you need to discuss the patch with someone involved in the block device code?
When can I expect the final patch?
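
For reference, the change I tested was essentially the two pieces above combined in fs/block_dev.c (helper placed just above bdev_sops); as I read it, returning 0 from drop_inode keeps the bdev inode cached instead of deleting it on the last iput, so the struct block_device survives the removal of the temporary node:

/* Keep the bdev inode (and with it the struct block_device) cached after
 * the last reference is dropped, instead of deleting it immediately as
 * generic_delete_inode would. */
static int do_not_drop_inode(struct inode *inode)
{
	return 0;
}

static const struct super_operations bdev_sops = {
	.statfs = simple_statfs,
	.alloc_inode = bdev_alloc_inode,
	.destroy_inode = bdev_destroy_inode,
	.drop_inode = do_not_drop_inode,
	.evict_inode = bdev_evict_inode,
};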

> 
> If it does, then we can explore more...
> Did you say that this works better on earlier kernels?  If so, what is
> the latest kernel you have tried that does not have this problem?

I've tested on kernel 2.6.32 from RHEL 6.0 and, if I remember correctly, on 2.6.34 - both work properly.

Regards
Krzysztof

> 
> Thanks,
> NeilBrown


--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

