On Tue, Jun 13, 2017 at 14:45:09, Harris, James R <james.r.harris@xxxxxxxxx> wrote:
>
>> On Jun 13, 2017, at 12:45 PM, Harris, James R <james.r.harris@xxxxxxxxx> wrote:
>>
>> If using ioengine=/path/to/ioengine to dynamically load
>> an ioengine, there is a race between free_ioengine()'s
>> dlclose() and the td->io_ops dereference in reap_threads().
>>
>> To resolve, td->io_ops must be set to NULL before calling
>> dlclose() - this ensures the name check in reap_threads()
>> does not try to reference memory that is no longer accessible
>> once the ioengine has been closed.
>>
>> Signed-off-by: Jim Harris <james.r.harris@xxxxxxxxx>
>> ---
>>  ioengines.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/ioengines.c b/ioengines.c
>> index c90a2ca5..6caf73da 100644
>> --- a/ioengines.c
>> +++ b/ioengines.c
>> @@ -170,10 +170,10 @@ void free_ioengine(struct thread_data *td)
>>  		td->eo = NULL;
>>  	}
>>
>> +	td->io_ops = NULL;
>> +
>>  	if (td->io_ops_dlhandle)
>>  		dlclose(td->io_ops_dlhandle);
>> -
>> -	td->io_ops = NULL;
>>  }
>>
>>  void close_ioengine(struct thread_data *td)
>> --
>> 2.12.2
>>
>
> Please ignore. This only narrows the race window - it does not close it.
>
> --

I also have a system that is prone to this segfault. Without some form of
synchronization, td->io_ops cannot be dereferenced safely in reap_threads().

This alternative patch works for me:

Signed-off-by: Justin Eno <jeno@xxxxxxxxxx>
---
 backend.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/backend.c b/backend.c
index ba6f585..17c0d5b 100644
--- a/backend.c
+++ b/backend.c
@@ -1929,11 +1929,7 @@ static void reap_threads(unsigned int *nr_running, uint64_t *t_rate,
 	for_each_td(td, i) {
 		int flags = 0;

-		/*
-		 * ->io_ops is NULL for a thread that has closed its
-		 * io engine
-		 */
-		if (td->io_ops && !strcmp(td->io_ops->name, "cpuio"))
+		if (!strcmp(td->o.ioengine, "cpuio"))
 			cputhreads++;
 		else
 			realthreads++;
--
1.8.3.1
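
To make the remaining window concrete, here is a minimal sketch of the
interleaving that can still crash after the first patch. The structures
below are hypothetical simplified stand-ins for fio's types, not the real
definitions; build with -ldl.

#include <dlfcn.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical simplified stand-ins for fio's structures. */
struct ioengine_ops {
	const char *name;	/* string lives inside the dlopen()ed object */
};

struct thread_data {
	struct ioengine_ops *io_ops;
	void *io_ops_dlhandle;
};

/* Worker thread tearing down its engine (the pattern from the first patch): */
static void worker_free_ioengine(struct thread_data *td)
{
	td->io_ops = NULL;			/* (1) clear the pointer...   */
	if (td->io_ops_dlhandle)
		dlclose(td->io_ops_dlhandle);	/* (2) ...then unmap the code */
}

/* Reaper thread running concurrently: */
static void reaper_check(struct thread_data *td)
{
	struct ioengine_ops *ops = td->io_ops;	/* may load the pre-(1) value */

	/*
	 * If the load above happened before (1) but this dereference runs
	 * after (2), ops->name points into unmapped memory: SIGSEGV.
	 */
	if (ops && !strcmp(ops->name, "cpuio"))
		puts("cpuio thread");
}

int main(void)
{
	struct thread_data td = { 0 };

	/* Single-threaded here; the crash needs these two calls interleaved
	 * across threads as described above. */
	worker_free_ioengine(&td);
	reaper_check(&td);
	return 0;
}

Moving the NULL store ahead of dlclose() shrinks the gap between the
reaper's load and its dereference, but without a lock or reference count
the load can still happen before the store and the dereference after the
unmap - hence "narrows the window, does not close it".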
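
The alternative patch sidesteps the ordering problem entirely: the check
reads td->o.ioengine, a string owned by thread_data itself, so it never
touches memory belonging to the dlopen()ed object. A rough sketch of the
layout this relies on (again hypothetical simplified fields, not fio's
real thread_options):

#include <stdio.h>
#include <string.h>

struct ioengine_ops;	/* opaque here; defined in the shared object */

/* Hypothetical simplified layout, for illustration only. */
struct thread_options {
	char ioengine[128];	/* engine name copied from the job file */
};

struct thread_data {
	struct thread_options o;	/* embedded: lives as long as td,
					   unaffected by dlclose()       */
	struct ioengine_ops *io_ops;	/* points into the shared object  */
};

/* Safe variant of the reaper's check: only td-owned memory is read. */
static int is_cpuio(const struct thread_data *td)
{
	return !strcmp(td->o.ioengine, "cpuio");
}

int main(void)
{
	struct thread_data td = { .o = { .ioengine = "cpuio" } };

	printf("cpuio? %d\n", is_cpuio(&td));
	return 0;
}

For a dynamically loaded engine, td->o.ioengine holds whatever string the
job file gave (e.g. the engine path), which can never match "cpuio" - and
counting such a thread as a real I/O thread is the correct outcome anyway.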