Re: [PATCH v2 1/3] badblocks: Add core badblock management code

On Fri, 2015-12-04 at 15:30 -0800, James Bottomley wrote:
[...]
> > + * We return
> > + *  0 if there are no known bad blocks in the range
> > + *  1 if there are known bad blocks which are all acknowledged
> > + * -1 if there are bad blocks which have not yet been acknowledged
> > + *    in metadata,
> > + * plus the start/length of the first bad section we overlap.
> > + */
> 
> This comment should be docbook.

Applicable to all your comments (and they are all valid): I simply
copied all of this over from md. I'm happy to make the changes to the
comments, and the other two things (see below), if that's the right
thing to do -- I just tried to keep my own changes to the original md
badblocks code minimal.
Would it be better (for reviewability) if I made these changes in a new
patch on top of this, or should I just squash them into this one?
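
For reference, here is roughly what the badblocks_check comment would
look like in kernel-doc form (just a sketch of the formatting; the
parameter descriptions are my reading of the code, not final wording):

/**
 * badblocks_check() - check a given range for bad sectors
 * @bb:		the badblocks structure holding all bad blocks
 * @s:		sector (start) at which to check for badblocks
 * @sectors:	number of sectors to check for badblocks
 * @first_bad:	pointer to store the location of the first badblock
 * @bad_sectors: pointer to store the number of badblocks after @first_bad
 *
 * Return:
 *  0	if there are no known bad blocks in the range
 *  1	if there are known bad blocks which are all acknowledged
 * -1	if there are bad blocks which have not yet been acknowledged
 *	in metadata (the start/length of the first bad section we
 *	overlap is stored in @first_bad and @bad_sectors)
 */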

> 
> > +int badblocks_check(struct badblocks *bb, sector_t s, int sectors,
> > +			sector_t *first_bad, int *bad_sectors)
> [...]
> > +
> > +/*
> > + * Add a range of bad blocks to the table.
> > + * This might extend the table, or might contract it
> > + * if two adjacent ranges can be merged.
> > + * We binary-search to find the 'insertion' point, then
> > + * decide how best to handle it.
> > + */
> 
> And this one, plus you don't document returns.  It looks like this
> function returns 1 on success and zero on failure, which is really
> counterintuitive for the kernel: zero is usually returned on success
> and negative error on failure.
> 
> > +int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
> > +			int acknowledged)
> [...]
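
Agreed that the 1-on-success return is confusing. Converting to the
usual style would turn caller code like:

	/* current convention: 1 == success, 0 == failure */
	if (!badblocks_set(bb, s, sectors, acknowledged))
		return -ENOSPC;		/* table full, range not recorded */

into:

	/* kernel convention: 0 == success, negative errno == failure */
	rc = badblocks_set(bb, s, sectors, acknowledged);
	if (rc)
		return rc;

(-ENOSPC above is just my guess at a reasonable errno for the
table-full case.)
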
> > +
> > +/*
> > + * Remove a range of bad blocks from the table.
> > + * This may involve extending the table if we split a region,
> > + * but it must not fail.  So if the table becomes full, we just
> > + * drop the remove request.
> > + */
> 
> Docbook and document returns.  This time they're the kernel standard
> of
> 0 on success and negative error on failure, making the convention for
> badblocks_set even more counterintuitive.
> 
> > +int badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
> > +{
> [...]
> > +#define DO_DEBUG 1
> 
> Why have this at all if it's unconditionally defined and always set?

Neil - any reason, or anything you had in mind, for this? Or is it just
an artifact that can be removed?

> 
> > +ssize_t badblocks_store(struct badblocks *bb, const char *page,
> > size_t len,
> > +			int unack)
> [...]
> > +int badblocks_init(struct badblocks *bb, int enable)
> > +{
> > +	bb->count = 0;
> > +	if (enable)
> > +		bb->shift = 0;
> > +	else
> > +		bb->shift = -1;
> > +	bb->page = kmalloc(PAGE_SIZE, GFP_KERNEL);
> 
> Why not __get_free_page(GFP_KERNEL)?  The problem with a kmalloc of an
> exactly page-sized quantity is that the slab tracker for it requires
> two contiguous pages for each page because of the overhead.

Cool, I didn't know about __get_free_page - I can fix this up too.
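
i.e. just (untested, and assuming bb->page stays a u64 *):

	bb->page = (u64 *)__get_free_page(GFP_KERNEL);

with the matching kfree(bb->page) in the teardown path becoming
free_page((unsigned long)bb->page).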

> 
> James
> 