On Wed, Apr 01, 2015 at 08:19:20PM +0300, Ilya Dryomov wrote:
> Following nbd and iscsi, commit 89baaa570ab0 ("libceph: use memalloc
> flags for net IO") set SOCK_MEMALLOC and PF_MEMALLOC flags for rbd and
> cephfs. However it turned out to not play nice with loopback scenario,
> leading to lockups with a full socket send-q and empty recv-q.
>
> While we always advised against colocating kernel client and ceph
> servers on the same box, a few people are doing it and it's also useful
> for light development testing, so rather than reverting make sure to
> not set those flags in the loopback case.
>

This does not clarify why the non-loopback case needs access to the
pfmemalloc reserves. Granted, I've spent zero time on this, but it's
really unclear what problem the original commit was trying to solve and
why dirty page limiting was insufficient. Swap over NFS was always a
very special case, not least because it is immune to dirty page
throttling.
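For reference, a minimal sketch of the loopback check the quoted commit
message describes: skip SOCK_MEMALLOC when the peer address is loopback,
so a co-located client and server cannot wedge each other on the
pfmemalloc reserves. The helper name and call site below are assumptions
for illustration, not the actual patch:

    #include <linux/in.h>
    #include <net/ipv6.h>
    #include <net/sock.h>

    /*
     * Hypothetical helper (not from the actual patch): report whether
     * the peer of this connection is a loopback address.
     */
    static bool con_peer_is_loopback(const struct sockaddr_storage *ss)
    {
            switch (ss->ss_family) {
            case AF_INET:
                    return ipv4_is_loopback(
                        ((const struct sockaddr_in *)ss)->sin_addr.s_addr);
            case AF_INET6:
                    return ipv6_addr_loopback(
                        &((const struct sockaddr_in6 *)ss)->sin6_addr);
            default:
                    return false;
            }
    }

    /* Then, somewhere in the connect path (e.g. ceph_tcp_connect()),
     * only mark the socket as memalloc-capable for remote peers: */
            if (!con_peer_is_loopback(&con->peer_addr.in_addr))
                    sk_set_memalloc(sock->sk);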