Stefan Richter wrote:
> Martin Peschke wrote:
>> It seems to be safe to replace all 4 occurrences of GFP_ATOMIC in
>> scsi_scan.c by GFP_KERNEL. I found that the calling code always holds a
>> mutex (indicating process context) and does not acquire a spinlock or
>> similar inside the mutex sections that use GFP_ATOMIC (see details below).
> Please use diff's -p option for postings like this.
Okay.
> Did you check Documentation/scsi/scsi_mid_low_api.txt with respect to
> the detailed description of all exported functions which you modify? All
> of them should contain a remark like "Might block: yes" or something
> else along the lines of "do not call in atomic context". Although I
> suppose that all or most of them do so already.
> If scsi_mid_low_api.txt does not fully reflect what your patch imposes,
> please modify scsi_mid_low_api.txt in the same patch.
Thanks for the hint.
My changes conform to this description, as far as scsi_mid_low_api.txt
covers the interfaces touched by my patch.
Looks like a documentation update is needed regardless of my patch:
scsi_get_host_dev(), scsi_scan_target(), and __scsi_add_device() are not
documented, though they are exported. These are the ones affected by my
patch. I didn't check for other omissions.
scsi_mid_low_api.txt says scsi_add_host() never blocks. In fact, this
function calls sysfs routines which might block (on a mutex), and it uses
GFP_KERNEL - the latter not being my fault :)
scsi_host_get() is documented as "currently may block but may be changed
to not block" - the current code won't block.
(My observations come from 2.6.18-rc4-mm2.)
> You need to make sure that it does not break _any_ caller. (SCSI is more
> than the bundle of interconnect drivers for SPI hardware.) Also take
> precautions for future callers or future changes to current callers.
Sure. I found that all of these interface functions acquire a mutex. That
is, any caller which doesn't guarantee process context would be broken
anyway, even without my changing the GFP_ATOMICs.
Martin
-
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html