On 10/22/2015 12:16 AM, Benjamin Marzinski wrote:
> On Mon, Oct 12, 2015 at 08:35:22AM +0200, Hannes Reinecke wrote:
>> On 10/08/2015 09:44 PM, Benjamin Marzinski wrote:
>>> Currently, running the alua prioritizer on a path causes 5 ioctls on many
>>> devices. get_target_port_group_support() returns whether alua is
>>> supported. get_target_port_group() gets the TPG id. This often takes two
>>> ioctls because 128 bytes is not a large enough buffer size on many
>>> devices. Finally, get_asymmetric_access_state() also often takes two
>>> ioctls because of the buffer size. This can get to be problematic when
>>> there are thousands of paths. The goal of this patch is to cut this down
>>> to one ioctl in the usual case.
>>>
>>> In order to do this, I've added a context pointer to the prio structure,
>>> similar to what exists for the checker structure, and initprio() and
>>> freeprio() functions to the prioritizers. The only one that currently uses
>>> these is the alua prioritizer. It caches the type of alua support, the TPG
>>> id, and the necessary buffer size. The only thing I'm worried about with
>>> this patch is whether the first two values could change. In order to deal
>>> with that possibility, whenever a path gets a change event, or becomes
>>> valid again after a failure, it resets the context structure values, which
>>> forces all of them to get rechecked the next time the prioritizer is
>>> called.
>>>
>> Hmm. What about reading /sys/block/sdX/device/vpd_pg83?
>> That carries the same information, and you would need to call the
>> ioctl only once ...
>
> Sure. If you want to write a patch, that would be fine by me. But I
> still think caching the result, so you don't need to rerun this, makes
> sense.
>
Well ... the information is already cached in sysfs; I doubt read()
is that much of an overhead.

Cheers,

Hannes
--
Dr. Hannes Reinecke                   zSeries & Storage
hare@xxxxxxx                          +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)
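
[Editor's sketch of the caching scheme Ben describes above, assuming a
per-path context hung off the prio structure. The struct, field names,
and reset default are illustrative assumptions, not the actual patch
code:]

    /* Illustrative per-path cache for the alua prioritizer: remembers
     * the ALUA support mode, the TPG id, and the response buffer size
     * that last worked, so the common case needs only a single
     * REPORT TARGET PORT GROUPS ioctl. */
    struct alua_context {
            int tpgs;       /* cached ALUA support mode, -1 = unknown */
            int tpg_id;     /* cached target port group id, -1 = unknown */
            int buflen;     /* buffer size that last sufficed */
    };

    /* Called on a change uevent, or when the path becomes valid again
     * after a failure: invalidate the cache so every value gets
     * re-probed the next time the prioritizer runs. */
    static void alua_reset_context(struct alua_context *ct)
    {
            ct->tpgs = -1;
            ct->tpg_id = -1;
            ct->buflen = 512;       /* assumed conservative default */
    }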
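[Hannes' sysfs alternative would look roughly like this: read the
kernel-cached VPD page 0x83 and walk its designation descriptors for
the target port group designator (association 1, designator type 5 per
SPC). The helper name and buffer size are assumptions:]

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Return the target port group id for a device (e.g. "sda") by
     * parsing /sys/block/<dev>/device/vpd_pg83, or -1 on failure.
     * No SG_IO ioctl needed; the kernel caches the page at scan time. */
    static int get_tpg_from_sysfs(const char *dev)
    {
            char path[128];
            unsigned char buf[4096];
            ssize_t len;
            int fd, off;

            snprintf(path, sizeof(path),
                     "/sys/block/%s/device/vpd_pg83", dev);
            fd = open(path, O_RDONLY);
            if (fd < 0)
                    return -1;
            len = read(fd, buf, sizeof(buf));
            close(fd);

            /* sanity check: 4-byte header, byte 1 is the page code */
            if (len < 4 || buf[1] != 0x83)
                    return -1;

            /* walk the designation descriptors after the header */
            for (off = 4; off + 4 <= len; off += buf[off + 3] + 4) {
                    int assoc = (buf[off + 1] >> 4) & 0x3;
                    int type = buf[off + 1] & 0xf;

                    /* target port group designator: 4-byte designator,
                     * id in the last two bytes (big-endian) */
                    if (assoc == 1 && type == 5 &&
                        buf[off + 3] == 4 && off + 8 <= len)
                            return (buf[off + 6] << 8) | buf[off + 7];
            }
            return -1;      /* no TPG descriptor found */
    }

[Note this only replaces the TPG id lookup; the RTPG ioctl is still
needed to fetch the current asymmetric access state, which is why one
ioctl per priority call remains in either approach.]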