Re: After memory pressure: can't read from tape anymore

On Fri, 2010-12-03 at 16:59 +0200, Kai Mäkisara wrote:
> On 12/03/2010 02:27 PM, FUJITA Tomonori wrote:
> > On Mon, 29 Nov 2010 19:09:46 +0200 (EET)
> > Kai Makisara<Kai.Makisara@xxxxxxxxxxx>  wrote:
> >
> >>> This same behaviour appears when we're doing a few incremental backups;
> >>> after a while, it just isn't possible to use the tape drives anymore -
> >>> every I/O operation gives an I/O Error, even a simple dd bs=64k
> >>> count=10. After a restart, the system behaves correctly until
> >>> -seemingly- another memory pressure situation occurred.
> >>>
> >> This is predictable. The maximum number of scatter/gather segments seems
> >> to be 128. The st driver first tries to set up transfer directly from the
> >> user buffer to the HBA. The user buffer is usually fragmented so that one
> >> scatter/gather segment is used for each page. Assuming 4 kB page size, the
> >> maximum size of the direct transfer is 128 x 4 = 512 kB.
> >
> > Can we make enlarge_buffer friendly to the memory allocator a bit?
> >
> > His problem is that the driver can't allocate 2 MB with the hardware
> > limit of 128 segments.
> >
> > enlarge_buffer tries to use ST_MAX_ORDER and if the allocation (256 kB
> > page) fails, enlarge_buffer fails. It could try smaller order instead?
> >
> > Not tested at all.
> >
> >
> > diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
> > index 5b7388f..119544b 100644
> > --- a/drivers/scsi/st.c
> > +++ b/drivers/scsi/st.c
> > @@ -3729,7 +3729,8 @@ static int enlarge_buffer(struct st_buffer * STbuffer, int new_size, int need_dm
> >   		b_size = PAGE_SIZE << order;
> >   	} else {
> >   		for (b_size = PAGE_SIZE, order = 0;
> > -		     order < ST_MAX_ORDER && b_size < new_size;
> > +		     order < ST_MAX_ORDER &&
> > +			     max_segs * (PAGE_SIZE << order) < new_size;
> >   		     order++, b_size *= 2)
> >   			;  /* empty */
> >   	}
> 
> You are correct. The loop does not work at all as it should. Years ago,
> the strategy was to start with blocks as big as possible to minimize the
> number of s/g segments. Nowadays the segments must be of the same size
> and the old logic is not applicable.
> 
> I have not tested the patch either but it looks correct.
> 
> Thanks for noticing this bug. I hope this helps the users. The question
> about the number of s/g segments is still valid for the direct i/o case,
> but that is an optimization issue, not a matter of whether one can
> read/write at all.

Realistically, though, this will only increase the probability of an
allocation succeeding; we can't make it a certainty.

Since we fixed up the infrastructure to allow arbitrary length sg lists,
perhaps we should document what cards can actually take advantage of
this (and how to do so, since it's not set automatically on boot).  That
way users wanting tapes at least know what the problems are likely to be
and how to avoid them in their hardware purchasing decisions.  The
corollary is that we should likely have a list of not recommended cards:
if they can't go over 128 SG elements, then they're pretty much
unsuitable for modern tapes.

James


--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

