On Fri, Oct 27, 2017 at 05:33:20PM +0800, Yubin Ruan wrote:
> And here are some more modifications to some wording in chapter 7, but I am
> not sure whether you like it or not.
>
> Anyway, chapter 7 makes me feel good ;-) It makes me know that home-brewing
> lock primitives with atomic instructions (which is what I was doing) is
> something that is possible and used in production.

It most certainly is possible, but it is wise less often than it might
seem.  ;-)

> Thanks,
> Yubin

BTW, I cannot accept patches without a valid Signed-off-by.  As Akira
suggested, please see the FAQ for more details.

> diff --git a/locking/locking.tex b/locking/locking.tex
> index 14db27d..a9f46f1 100644
> --- a/locking/locking.tex
> +++ b/locking/locking.tex
> @@ -2166,8 +2166,8 @@ Signal-handler deadlocks can be explicitly avoided as follows:
>  	of a signal handler.
>  \item	If the application invokes the library function
>  	while holding a lock acquired within a given signal
> -	handler, then that signal must be blocked every time that the
> -	library function is called outside of a signal handler.
> +	handler, then that signal must be blocked every time that lock
> +	is to be acquired outside of a signal handler.
>  \end{enumerate}

I covered this in my earlier email.

>  These rules can be enforced by using tools similar to
> @@ -2329,7 +2329,7 @@ locking's villainy.
>  If there are a very large number of uses of a callback-heavy library,
>  it may be wise to again add a parallel-friendly API to the library in
>  order to allow existing users to convert their code incrementally.
> -Alternatively, some advocate use of transactional memory in these cases.
> +Alternatively, some advocate using transactional memory in these cases.
>  While the jury is still out on transactional memory,
>  Section~\ref{sec:future:Transactional Memory} discusses its strengths and
>  weaknesses.

And for this one, I am not seeing the problem with the original wording.
							Thanx, Paul