Re: [PATCH 4/9] firewire: don't use PREPARE_DELAYED_WORK

On 02/20/2014 09:13 PM, Tejun Heo wrote:
> On Thu, Feb 20, 2014 at 09:07:27PM -0500, Peter Hurley wrote:
>> On 02/20/2014 08:59 PM, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Thu, Feb 20, 2014 at 08:44:46PM -0500, Peter Hurley wrote:
>>>>> +static void fw_device_workfn(struct work_struct *work)
>>>>> +{
>>>>> +	struct fw_device *device = container_of(to_delayed_work(work),
>>>>> +						struct fw_device, work);
>>>>
>>>> I think this needs an smp_rmb() here.
>>>
>>> The patch is an equivalent transformation and the whole thing is
>>> guaranteed to have gone through pool->lock.  No explicit rmb
>>> necessary.
>>
>> The spin_unlock_irq(&pool->lock) only guarantees completion of
>> memory operations _before_ the unlock; memory operations which occur
>> _after_ the unlock may be speculated before the unlock.
>>
>> IOW, unlock is not a memory barrier for operations that occur after.
>
> It's not just unlock.  It's lock / unlock pair on the same lock from
> both sides.  Nothing can slip through that.

CPU 0                            | CPU 1
                                 |
 INIT_WORK(fw_device_workfn)     |
                                 |
 workfn = funcA                  |
 queue_work_on()                 |
 .                               | process_one_work()
 .                               |   ..
 .                               |   worker->current_func = work->func
 .                               |
 .                               |   speculative load of workfn = funcA
 .                               |   .
 workfn = funcB                  |   .
 queue_work_on()                 |   .
   local_irq_save()              |   .
   test_and_set_bit() == 1       |   .
                                 |   set_work_pool_and_clear_pending()
   work is not queued            |     smp_wmb
    funcB never runs             |     set_work_data()
                                 |       atomic_set()
                                 |   spin_unlock_irq()
                                 |
                                 |   worker->current_func(work)  @ fw_device_workfn
                                 |      workfn()  @ funcA


The speculative load of workfn on CPU 1 is allowed because no rmb intervenes
between that load and the call through workfn() on CPU 1.

Thus funcB will never execute because, in this circumstance, the work is not
queued a second time (since PENDING has not yet been cleared).
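
For concreteness, the change I suggested earlier would sit roughly like the
sketch below.  This is only a sketch: I'm assuming the remainder of
fw_device_workfn simply dispatches through a device->workfn function pointer,
which isn't shown in the hunk quoted above.

	static void fw_device_workfn(struct work_struct *work)
	{
		struct fw_device *device = container_of(to_delayed_work(work),
							struct fw_device, work);

		/*
		 * Suggested barrier: order the load of device->workfn after
		 * the reads that preceded this point, rather than letting it
		 * be satisfied by an earlier speculative load (the concern in
		 * the CPU 0 / CPU 1 scenario above).
		 */
		smp_rmb();

		/* Assumed body: dispatch through the stored function pointer. */
		device->workfn(work);
	}

Whether smp_rmb() is the right strength of barrier here, or whether the
pool->lock acquire/release pairing already provides the required ordering,
is exactly what's in question in this thread.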

Regards,
Peter Hurley

