Jamie Lokier wrote:
Artem Bityutskiy wrote:
Jens Axboe wrote:
+static void bdi_queue_work(struct backing_dev_info *bdi, struct bdi_work *work)
+{
+	if (work) {
+		work->seen = bdi->wb_mask;
+		BUG_ON(!work->seen);
+		atomic_set(&work->pending, bdi->wb_cnt);
+		BUG_ON(!bdi->wb_cnt);
+
+		/*
+		 * Make sure stores are seen before it appears on the list
+		 */
+		smp_mb();
+
+		spin_lock(&bdi->wb_lock);
+		list_add_tail_rcu(&work->list, &bdi->work_list);
+		spin_unlock(&bdi->wb_lock);
+	}
Doesn't spin_lock() include an implicit memory barrier? After
&bdi->wb_lock is acquired, it is guaranteed that all preceding
memory operations have completed.
I'm pretty sure spin_lock() is an "acquire" barrier, which only guarantees
that loads/stores issued after the spin_lock() are performed after the lock
is taken. It doesn't guarantee anything about loads/stores issued before
the spin_lock().
Right, but the comment says the memory operations should be flushed before
the list is changed.
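
To make the point concrete, here is a minimal sketch of the ordering concern
(illustrative only, not taken from the patch; the consumer side and the
do_something() helper are assumptions about how a lockless flusher thread
might walk the list under RCU). spin_lock() only orders accesses that follow
it, so without the smp_mb() the stores to work->seen and work->pending could
become visible after the list insertion from a lockless reader's point of
view:

	/* producer: bdi_queue_work() */
	work->seen = bdi->wb_mask;		/* A: initialise fields */
	atomic_set(&work->pending, bdi->wb_cnt);

	smp_mb();				/* full barrier: A visible before B */

	spin_lock(&bdi->wb_lock);		/* acquire: orders later accesses only */
	list_add_tail_rcu(&work->list, &bdi->work_list); /* B: publish entry */
	spin_unlock(&bdi->wb_lock);

	/* consumer: hypothetical lockless walker */
	rcu_read_lock();
	list_for_each_entry_rcu(work, &bdi->work_list, list) {
		/* once the entry is reachable, the stores at A must be visible */
		do_something(work->seen, atomic_read(&work->pending));
	}
	rcu_read_unlock();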
--
Best Regards,
Artem Bityutskiy (Артём Битюцкий)