Re: [PATCH 0/2] sg: fix races during device removal (v2)

Stefan Richter wrote:
> What are locks for?
>
> They serialize accesses to data, in order to protect the data from
> invalid modifications due to concurrent accesses.
>   
...
> What are reference counters for?
>
> Well, the name says it.  I'd better ask:  What is reference counting
> for?  Of course to prevent premature destruction of objects; IOW to
> implement lifetime rules.
>
> Locks don't implement lifetime rules.  They only implement serialization
> of accesses.
>
>   
...
> In your case, you want to extend sg_index_lock's purposes to also
> serialize access to a state variable of the sg device objects.  Nothing
> wrong with that, as long as you and, later on, readers of your code are
> aware what is going on.
>
>   

OK, so your argument now seems to be that the code might well work
correctly in practice, but in theory it is not philosophically right,
semantically pure, or politically correct, because it uses fundamental
programming constructs in odd ways that are not the "right" ways taught
in CS courses that everyone understands and loves.  That is a valid
viewpoint, I suppose, but I tend to be much more pragmatic.  BTW, you
might want to do a quick grep of the kernel source for all users of
atomic_inc_not_zero() and check whether they all follow the established
CS curricula:

net/netfilter/nf_conntrack_expect.c:

struct nf_conntrack_expect *
nf_ct_expect_find_get(struct net *net, const struct nf_conntrack_tuple *tuple)
{
	struct nf_conntrack_expect *i;

	rcu_read_lock();
	i = __nf_ct_expect_find(net, tuple);
	if (i && !atomic_inc_not_zero(&i->use))
		i = NULL;
	rcu_read_unlock();

	return i;
}

void nf_ct_expect_put(struct nf_conntrack_expect *exp)
{
	if (atomic_dec_and_test(&exp->use))
		call_rcu(&exp->rcu, nf_ct_expect_free_rcu);
}


fs/nfs/nfs4state.c:

static struct nfs4_state *
__nfs4_find_state_byowner(struct inode *inode, struct nfs4_state_owner *owner)
{
	struct nfs_inode *nfsi = NFS_I(inode);
	struct nfs4_state *state;

	list_for_each_entry(state, &nfsi->open_states, inode_states) {
		if (state->owner != owner)
			continue;
		if (atomic_inc_not_zero(&state->count))
			return state;
	}
	return NULL;
}

struct nfs4_state *
nfs4_get_open_state(struct inode *inode, struct nfs4_state_owner *owner)
{
	struct nfs4_state *state, *new;
	struct nfs_inode *nfsi = NFS_I(inode);

	spin_lock(&inode->i_lock);
	state = __nfs4_find_state_byowner(inode, owner);
	spin_unlock(&inode->i_lock);
	...
}

void nfs4_put_open_state(struct nfs4_state *state)
{
	struct inode *inode = state->inode;
	struct nfs4_state_owner *owner = state->owner;

	if (!atomic_dec_and_lock(&state->count, &owner->so_lock))
		return;
	spin_lock(&inode->i_lock);
	list_del(&state->inode_states);
	list_del(&state->open_states);
	spin_unlock(&inode->i_lock);
	spin_unlock(&owner->so_lock);
	iput(inode);
	nfs4_free_open_state(state);
	nfs4_put_state_owner(owner);
}


... etc.  But I'm sure all those silly network and RCU developers didn't
understand what they were doing.

I would still like some other people to weigh in with their opinions
before I spend more effort on another patch or push forward with
kref_get_not_zero().

Tony
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
