On Mon, Jan 31, 2022 at 2:21 PM Jeff Layton <jlayton@xxxxxxxxxx> wrote:
>
> On Mon, 2022-01-31 at 12:37 +0300, Ivan Zuboff wrote:
> > Hello, Jeff!
> >
> > Several weeks ago I mailed linux-fsdevel about some weird behavior
> > I've found. To me, it looks like a bug. Unfortunately, I got no
> > response, so I decided to forward this message to you directly.
> >
> > Sorry for the interruption and for my bad English -- it's not my
> > native language.
> >
> > Hope to hear your opinion on this!
> >
> > Best regards,
> > Ivan
> >
>
> Sorry I missed your message. Re-cc'ing linux-fsdevel so others can
> join in on the discussion:
>
> > ---------- Forwarded message ---------
> > From: Ivan Zuboff <anotherdiskmag@xxxxxxxxx>
> > Date: Mon, Jan 10, 2022 at 1:46 PM
> > Subject: Bug: lockf returns false-positive EDEADLK in multiprocess
> > multithreaded environment
> > To: <linux-fsdevel@xxxxxxxxxxxxxxx>
> >
> >
> > As an application-level developer, I found a counter-intuitive
> > behavior in the lockf function provided by glibc and the Linux
> > kernel that is likely a bug.
> >
> > In glibc, the lockf function is implemented on top of the fcntl
> > system call:
> > https://github.com/lattera/glibc/blob/master/io/lockf.c
> > The man page says that lockf can sometimes detect deadlock:
> > http://manpages.ubuntu.com/manpages/xenial/man3/lockf.3.html
> > The same goes for fcntl(F_SETLKW), on top of which lockf is
> > implemented:
> > http://manpages.ubuntu.com/manpages/hirsute/en/man3/fcntl.3posix.html
> >
> > The deadlock detection algorithm in the Linux kernel
> > (https://github.com/torvalds/linux/blob/master/fs/locks.c) seems
> > buggy because it can easily give false positives. Suppose we have
> > two processes A and B; process A has threads 1 and 2, process B has
> > threads 3 and 4. When these processes execute concurrently, the
> > following sequence of actions is possible:
> > 1. Process A, thread 1 gets lock I.
> > 2. Process B, thread 3 gets lock II.
> > 3. Process A, thread 2 tries to get lock II and starts to wait.
> > 4. Process B, thread 4 tries to get lock I; the kernel detects a
> >    deadlock, and EDEADLK is returned from the lockf function.
> >
> > Steps to reproduce this scenario (see attached file):
> > 1. gcc -o edeadlk ./edeadlk.c -lpthread
> > 2. Launch "./edeadlk a b" in the first terminal window.
> > 3. Launch "./edeadlk a b" in the second terminal window.
> >
> > What I expected to happen: two instances of the program keep
> > running steadily.
> >
> > What happened instead:
> > Assertion failed: (lockf(fd, 1, 1)) != -1 file: ./edeadlk.c,
> > line:25, errno:35 . Error:: Resource deadlock avoided
> > Aborted (core dumped)
> >
> > Surely, this behavior is in a sense "right". lockf file locks
> > belong to a process, so at the process level it looks as if a
> > deadlock is just about to happen: process A holds lock I and waits
> > for lock II, while process B holds lock II and is about to wait for
> > lock I. However, the algorithm in the kernel doesn't take threads
> > into account. In fact, a deadlock is not going to happen here, as
> > long as the thread scheduler gives control to the threads that are
> > actually holding the locks.
> >
> > I think there's a problem with the deadlock detection algorithm:
> > it's overly pessimistic, which in turn creates problems -- lockf
> > errors in applications. I had to patch my application to use flock
> > instead, because flock doesn't have this overly pessimistic
> > behavior.
> >
>
> The POSIX locking API predates the concept of threading, and so it
> was written with some unfortunate concepts around processes. Because
> you're doing all of your lock acquisition from different threads,
> obviously nothing should deadlock, but all of the locks are owned by
> the process, so the deadlock detection algorithm can't tell that.
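
[A note for readers of the archive: the scenario boils down to
something like the sketch below. This is a from-scratch illustration,
not the edeadlk.c attached to the original report -- each of the two
threads repeatedly locks and unlocks one of the two files named on the
command line, so the two processes can never truly deadlock, yet the
per-process lock ownership makes the kernel think they can.]

/* edeadlk-sketch.c -- illustration only, not the original attachment.
 * Build:  gcc -o edeadlk-sketch edeadlk-sketch.c -lpthread
 * Run two copies:  ./edeadlk-sketch a b   (in two terminals)
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *worker(void *arg)
{
        const char *path = arg;
        int fd = open(path, O_RDWR | O_CREAT, 0644);

        if (fd < 0) {
                perror("open");
                exit(1);
        }
        for (;;) {
                /* Classic POSIX lock: owned by the whole process, so
                 * the kernel sees "this process holds one file and is
                 * waiting for the other", regardless of which thread
                 * actually holds or waits. */
                if (lockf(fd, F_LOCK, 0) == -1) {
                        perror("lockf");     /* EDEADLK shows up here */
                        exit(1);
                }
                usleep(1000);                /* hold the lock briefly */
                lockf(fd, F_ULOCK, 0);
        }
        return NULL;
}

int main(int argc, char **argv)
{
        pthread_t t1, t2;

        if (argc < 3) {
                fprintf(stderr, "usage: %s <file1> <file2>\n", argv[0]);
                return 1;
        }
        pthread_create(&t1, NULL, worker, argv[1]);
        pthread_create(&t2, NULL, worker, argv[2]);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}

Run two copies against the same two files and one of them should fail
with EDEADLK fairly quickly, even though no real deadlock is possible.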
>
> If you need to do something like this, then you may want to consider
> using OFD locks, which were designed to allow proper file locking in
> threaded programs. Here's an older article that predates the name,
> but it gives a good overview:
>
> https://lwn.net/Articles/586904/
>
> --
> Jeff Layton <jlayton@xxxxxxxxxx>

Thank you very much for your reply.

Yes, I've considered OFD locks and flock for my specific task, and
flock seemed the more reasonable solution because of its portability
(which is valuable for my task). So my specific problem is indeed
solved; I just wanted to warn kernel developers about this kind of
unexpected behavior deep under the hood.

I thought that maybe, if the algorithm in locks.c can't detect
deadlocks without such false positives, it shouldn't try to detect
them at all? I have no firm stance on this question; I just wanted to
inform the people who may care about it and might want to do something
about it. At least there will now be messages in the mailing list
archives explaining the situation for people who hit the same problem
-- not bad in itself!

Best regards,
Ivan
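
P.S. For anyone who finds this thread later: on Linux, switching to
the OFD locks Jeff mentions is mostly a matter of replacing the
lockf()/F_SETLKW calls with fcntl(F_OFD_SETLKW). A rough, untested
sketch (Linux 3.15+ and glibc with _GNU_SOURCE; the ofd_lock() helper
name is only for illustration):

#define _GNU_SOURCE          /* needed for the F_OFD_* constants */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Take (type == F_WRLCK) or drop (type == F_UNLCK) a whole-file OFD
 * lock on fd; F_OFD_SETLKW blocks just like F_SETLKW does. */
int ofd_lock(int fd, short type)
{
        struct flock fl;

        memset(&fl, 0, sizeof(fl));
        fl.l_type   = type;
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;
        fl.l_len    = 0;        /* 0 means "to end of file" */
        /* l_pid must stay 0 for OFD locks */

        return fcntl(fd, F_OFD_SETLKW, &fl);
}

Because OFD locks are owned by the open file description rather than
by the process, each thread that does its own open() of a file is its
own lock owner, so the spurious EDEADLK above does not occur.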