On 2011-12-06 13:56, J. Bruce Fields wrote:
> On Tue, Dec 06, 2011 at 01:27:53PM +0200, Benny Halevy wrote:
>> On 2011-12-06 04:10, J. Bruce Fields wrote:
>>> On Sun, Dec 04, 2011 at 03:48:16PM +0200, Benny Halevy wrote:
>>>> From: Benny Halevy <bhalevy@xxxxxxxxxx>
>>>>
>>>> Signed-off-by: Benny Halevy <bhalevy@xxxxxxxxxx>
>>>> ---
>>>>  fs/nfsd/bl_ops.c |    2 +-
>>>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>>>
>>>> diff --git a/fs/nfsd/bl_ops.c b/fs/nfsd/bl_ops.c
>>>> index 89249c4..4d2939e 100644
>>>> --- a/fs/nfsd/bl_ops.c
>>>> +++ b/fs/nfsd/bl_ops.c
>>>> @@ -57,7 +57,7 @@
>>>>  #endif
>>>>
>>>>
>>>> -typedef enum {True, False} boolean_t;
>>>> +typedef enum {False = 0, True = !False} boolean_t;
>>>
>>> Shouldn't we just use "bool"?
>>
>> Yes, in some cases.  In others, the boolean status doesn't make sense
>> and I'd like to replace it with an integer.
>
> I believe casts from bools to integers are defined to convert false and
> true to 0 and 1 respectively, so you should be fine.

True, but in several cases, like layout_cache_fill_from* or extents_get,
a single status bit hides errors that I'd rather percolate up the stack.

I'm not diving into this right now because this code needs an overhaul
to allow memory allocation outside the lock.

I'm thinking of a dual-pass implementation: a first pass under the lock
to calculate how many items to allocate, the allocation itself after
releasing the lock (currently a mutex, but it had better become a
spinlock), and then a second pass over the list, again under the lock,
using the preallocated memory.  The hard part in this scheme is the case
where the layout state changes and we need more memory for the second
pass; then we need yet another iteration for the remainder: rinse, wash,
repeat.  (A rough userspace sketch of the idea is appended below.)

Benny

>
> --b.
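
Here is the rough sketch I mean, just to make the idea concrete.  It's a
userspace mock-up, not the nfsd code: struct item, item_list, list_lock,
and snapshot_items() are made-up names, and a pthread mutex stands in for
whatever lock we end up with.  Count under the lock, allocate with the
lock dropped, fill under the lock, and loop if the list grew in the
meantime:

/*
 * Userspace sketch (not the actual nfsd code) of the two-pass idea:
 * count under the lock, allocate with the lock dropped, then fill
 * under the lock again, retrying if the list grew in between.
 * All names here (struct item, item_list, list_lock, snapshot_items)
 * are made up for illustration.
 */
#include <pthread.h>
#include <stdlib.h>

struct item {
	struct item *next;
	int payload;
};

static struct item *item_list;		/* protected by list_lock */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static size_t count_items_locked(void)
{
	size_t n = 0;
	struct item *it;

	for (it = item_list; it; it = it->next)
		n++;
	return n;
}

/* Snapshot the payloads without allocating while the lock is held. */
int snapshot_items(int **out, size_t *out_n)
{
	int *buf = NULL;
	size_t have = 0;

	for (;;) {
		size_t need, i;
		struct item *it;
		int *tmp;

		pthread_mutex_lock(&list_lock);
		need = count_items_locked();
		if (need <= have) {
			/* Second pass: fill the preallocated buffer. */
			for (i = 0, it = item_list; it; it = it->next)
				buf[i++] = it->payload;
			pthread_mutex_unlock(&list_lock);
			*out = buf;
			*out_n = i;
			return 0;
		}
		pthread_mutex_unlock(&list_lock);

		/* Allocate (or grow) with the lock dropped, then retry. */
		tmp = realloc(buf, need * sizeof(*buf));
		if (!tmp) {
			free(buf);
			return -1;
		}
		buf = tmp;
		have = need;
	}
}

The retry loop is what covers the "state changed, need more memory" case;
in the common case the list is walked exactly twice, once to count and
once to fill.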
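
P.S. Regarding the bool question, and purely for illustration (not part
of the patch): C99's bool already gives the 0/1 conversion Bruce
mentions, so wherever a plain true/false status is enough, the local
boolean_t typedef can simply go away:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	bool ok = true;

	/* A bool only ever holds 0 or 1, so it converts to int as 0 or 1. */
	printf("%d %d\n", (int)ok, (int)!ok);	/* prints "1 0" */
	return 0;
}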