Re: [PATCH 1/3] add QSORT

On 04.10.2016 at 07:28, Kevin Bracey wrote:
On 04/10/2016 01:00, René Scharfe wrote:
On 03.10.2016 at 19:09, Kevin Bracey wrote:
As such, NULL checks can still be elided even with your change. If you
effectively change your example to:

    if (nmemb > 1)
        qsort(array, nmemb, size, cmp);
    if (!array)
        printf("array is NULL\n");

the NULL check on array may only be executed if nmemb <= 1. You can see
GCC doing that in the compiler explorer - it effectively turns the second
if into an "else if".

We don't support array == NULL together with nmemb > 1, so a segfault
is to be expected in such cases, and thus NULL checks can be removed
safely.

Possibly true in practice.

But technically wrong by the C standard - behaviour is undefined if the
qsort pointer is invalid. You can't formally expect the defined
behaviour of a segfault when sending NULL into qsort. (Hell, maybe the
qsort has its own NULL check and silently returns!)

A qsort(3) implementation that doesn't segfault is inconvenient, but still safe. I'm more concerned about NULL checks being removed from our code.

So if it's not a program error for array to be NULL and nmemb to be zero
in your code, and you want a diagnostic for array=NULL, nmemb non-zero,
I think you should put that diagnostic into sane_qsort as an assert or
something, not rely on qsort's undefined behaviour being a segfault.

    static void sane_qsort(void *array, size_t nmemb, size_t size,
                           int (*cmp)(const void *, const void *))
    {
         if (nmemb >= 1) {
             assert(array);
             qsort(array, nmemb, size, cmp);
         }
    }

Can't invoke undefined behaviour from NULL without triggering the
assert. (Could still have other invalid pointers, of course).

We could do that, but I think it's not necessary. We'd get a segfault when accessing the sorted array anyway. (If we don't look at the data after sorting then we can get rid of the sorting step altogether.)

Usually I am on the side of "no NULL checks", as I make the assumption
that we will get a segfault as soon as NULL pointers are used, and those
are generally easy to diagnose. But seeing a compiler employ this sort
of new trickery when undefined behaviour is invoked is making me more
nervous about doing so...

I was shocked a bit myself when I learned about this, but let's not panic. :)

To make that check really work, you have to do:

    if (array)
        qsort(array, nmemb, size, cmp);
    else
        printf("array is NULL\n");

So maybe your "sane_qsort" should be checking array, not nmemb.

It would be safe, but arguably too much so, because non-empty arrays
with NULL wouldn't segfault anymore, and thus become harder to
identify as the programming errors they are.
Well, you get the print. Although I guess you're worrying about the
second if being real code, not a debugging check.

Yes, but the optimization is valid: If nmemb > 0 then array can only be NULL if we have a bug, and then we'd get a segfault eventually. So such checks can be removed safely.

I must say, this is quite a courageous new optimisation from GCC. It
strikes me as finding a language lawyer loophole that seems to have been
intended for something else (mapping library functions directly onto
CISCy CPU intrinsics), and using it to invent a whole new optimisation
that seems more likely to trigger bugs than optimise any significant
amount of code in a desirable way.

Yeah, and the bugs triggered are quite obscure in this case. But having richer type information and thus restricting the range of possible values for variables *can* enable useful optimizations.

Doubly weird as there's no (standard) language support for this. I don't
know how you'd define "my_qsort" that triggered the same optimisations.

The nonnull attribute is a GCC extension, but it's also supported by clang:

  http://clang.llvm.org/docs/AttributeReference.html#nonnull-gnu-nonnull

I don't know if other compilers support it as well, or if there are efforts underway to standardize it.

I've seen similar
library-knowledge-without-any-way-to-reproduce-in-user-code
optimisations like "malloc returns a new pointer that doesn't alias with
anything existing" (and no way to reproduce the optimisation with
my_malloc_wrapper). But those seemed to have a clear performance
benefit, without any obvious traps. Doubtful about this one.

Still, we have to deal with it.

So let's summarize; here's the effect of a raw qsort(3) call:

array == NULL  nmemb  bug  QSORT  following NULL check
-------------  -----  ---  -----  --------------------
            0      0  no   qsort  is skipped
            0     >0  no   qsort  is skipped
            1      0  no   qsort  is skipped (bad!)
            1     >0  yes  qsort  is skipped

Here's what the current implementation (nmemb > 1) does:

array == NULL  nmemb  bug  QSORT  following NULL check
-------------  -----  ---  -----  --------------------
            0      0  no   noop   is executed
            0      1  no   noop   is executed
            0     >1  no   qsort  is skipped
            1      0  no   noop   is executed
            1      1  yes  noop   is executed
            1     >1  yes  qsort  is skipped

With the micro-optimization removed (nmemb > 0) the matrix gets simpler:

array == NULL  nmemb  bug  QSORT  following NULL check
-------------  -----  ---  -----  --------------------
            0      0  no   noop   is executed
            0     >0  no   qsort  is skipped
            1      0  no   noop   is executed
            1     >0  yes  qsort  is skipped

And with your NULL check (array != NULL) we'd get:

array == NULL  nmemb  bug  QSORT  following NULL check
-------------  -----  ---  -----  --------------------
            0      0  no   qsort  reuses check result
            0     >0  no   qsort  reuses check result
            1      0  no   noop   reuses check result
            1     >0  yes  noop   reuses check result

Did I get it right? AFAICS all variants (except raw qsort) are safe -- no useful NULL checks are removed, and buggy code should be noticed by segfaults in code accessing the sorted array. So the advantage of the current code is that it won't call qsort for nmemb <= 1. And the advantage of checking the pointer is that the result of that check can be reused by later checks. I think the former is more useful, but only slightly.

René


