Re: [RFC] Convert builtin-mailinfo.c to use The Better String Library.

Pierre Habouzit wrote:
On Sat, Sep 08, 2007 at 11:50:34PM +0000, Andreas Ericsson wrote:

You can tell C compilers to
check all array accesses, but that is a performance issue.
Runtime checking of arrays in D is a performance issue too, so it is selectable via a command line switch.
Same as in C then.

  HAHAHAHAHAHA. Please, who are you trying to convince here? Except in the
local scope, there are few differences between a foo* and a foo[] in C.


"Runtime checking of arrays is a performance issue." It's true whether it's
done manually by the coder or by the compiler. The difference is that in C,
you get to choose where it should be done.
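
To make that concrete, here's a minimal sketch (made-up code, not from
git): the check lives at the boundary where an untrusted index comes in,
and the inner loop, whose bound is already established, pays nothing per
access.

  #include <stdio.h>
  #include <stdlib.h>

  /* Check once, at the boundary where an untrusted index arrives. */
  static int get_checked(const int *arr, size_t len, size_t idx)
  {
      if (idx >= len) {
          fprintf(stderr, "index %zu out of range (len %zu)\n", idx, len);
          exit(1);
      }
      return arr[idx];
  }

  int main(void)
  {
      int data[] = { 1, 2, 3, 4 };
      size_t len = sizeof(data) / sizeof(data[0]);
      long sum = 0;
      size_t i;

      /* Hot loop: the bound is known, so no per-access check is paid. */
      for (i = 0; i < len; i++)
          sum += data[i];

      printf("sum = %ld, data[0] = %d\n", sum, get_checked(data, len, 0));
      return 0;
  }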


But more importantly,
2) For dynamically sized arrays, the dimension of the array is carried
with the array, so loops automatically loop the correct number of times.
No runtime check is necessary, and it's easier for the code reviewer to
visually check the code for correctness.
But this introduces handy but, strictly speaking, unnecessary overhead
as well, meaning, in short: 'D is slower than C, but easier to write
code in'.

  That's BS. See the strbuf API I've been pushing recently? It has
simplified git's code a lot, because each time git had to deal with a
growing string, it had to deal with at least three variables: the buffer
pointer, the current occupied length, and its allocated size. That was
three things to have variable names for, and to pass to functions.


Yup. I applaud your efforts, but it does come with a slight overhead,
except where it replaces faulty code. In practice, it's probably better
to use the API for all the string handling, as none of it is
performance-critical.
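
For the archives, roughly what that consolidation looks like, as a
simplified sketch in the spirit of what you describe (growstr and
growstr_add are made-up names here, not the actual strbuf API):

  #include <stdlib.h>
  #include <string.h>

  /* One struct instead of three loose variables per growing string. */
  struct growstr {
      char   *buf;    /* the buffer pointer */
      size_t  len;    /* occupied length    */
      size_t  alloc;  /* allocated size     */
  };

  /* Append a NUL-terminated string, growing the buffer as needed.
   * Handling of realloc failure is omitted for brevity. */
  static void growstr_add(struct growstr *s, const char *str)
  {
      size_t extra = strlen(str);

      if (s->len + extra + 1 > s->alloc) {
          s->alloc = (s->len + extra + 1) * 2;
          s->buf = realloc(s->buf, s->alloc);
      }
      memcpy(s->buf + s->len, str, extra + 1);
      s->len += extra;
  }

Every function that works on a growing string then takes one pointer
instead of three separate arguments.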


  Now instead, it's just one struct. D gives that gratis. There is no
performance loss because you _need_ to do the same. How do you deal with
dynamic arrays if you don't store their length and size somewhere? Or
are you the kind of programmer who writes:

  /* 640kb should be enough for everyone… */
  some_type *array = malloc(640 << 10);


No, but it would depend on what I am to do with it.
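
To answer the question seriously, though: the usual C way is the same
trick the strbuf plays, i.e. carry the length and allocation with the
array. A throwaway sketch (some_type_list and its push function are
made-up names, with some_type standing in for whatever the element
type is):

  #include <stdlib.h>

  typedef int some_type;  /* stand-in element type for the sketch */

  struct some_type_list {
      some_type *items;
      size_t     len;    /* elements in use    */
      size_t     alloc;  /* elements allocated */
  };

  /* Append one element, doubling the allocation when it runs out.
   * Again, handling of realloc failure is omitted for brevity. */
  static void some_type_list_push(struct some_type_list *l, some_type v)
  {
      if (l->len == l->alloc) {
          l->alloc = l->alloc ? l->alloc * 2 : 16;
          l->items = realloc(l->items, l->alloc * sizeof(*l->items));
      }
      l->items[l->len++] = v;
  }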


So in essence, it's a bit like Python, but a teensy bit faster and a
lot easier to shoot yourself in the foot with.

What was the niche you were going for when you thought up D? It can't
have been systems programming, because *any* extra baggage is baggage
one would like to get rid of. If it was application programming, I fail
to see how one more language would help, as there will be portability
problems galore and it's still considerably slower to develop in than
e.g. Python, while at the same time being considerably easier to mess up
in.

  Right now I'm just laughing. There are, for sure, overheads in some
places of D, but the example you take, and what you try to attack in D,
is definitely not where you lose any kind of performance. You could have
attacked the GC instead (which is, after all, an easy classical target).


I was asking what role D was designed to fill. I didn't mean it as an
attack, but re-reading what I wrote earlier I see it came off a bit harsh.


  Just to evaluate the silliness of your arguments:
  * http://www.digitalmars.com/d/comparison.html so that you can see
    what D's features really are,

You may notice that the feature list is provided by the creators and
marketers of the D language. Walter Bright certainly seems like a nice
enough person, but it's possible the list is a tad biased.


  * http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all
    so that you can see what D's performance is really about. Of
    course those are only micro-benchmarks, but well, Python is "just"
    15 times slower than D, and D seems to be about 10% slower than C.


I get it to 7.7x C for Python and 1.2x C for D, but whatever. It still
means performance-critical apps will be written in C, while
insert-script-language-of-choice will still be used for prototyping and
not-so-performance-critical apps.


    Well, then I'm okay with D: I'm ready to buy 10% faster CPUs and
    avoid a lot of painful debugging time. In my world, 10% faster
    hardware is cheaper by many orders of magnitude than skilled
    programmers, but YMMV.


I'm curious as to how many fewer bugs D developers write compared to C
programmers. I guess it's hard to do a fair test given the comparatively
shallow pool of D gurus around, but it'd still be interesting to see a
practical test. A 20% increase in runtime is certainly acceptable for
never having to see a bug again, but is it acceptable for 10% fewer bugs?
Or 20% fewer?

--
Andreas Ericsson                   andreas.ericsson@xxxxxx
OP5 AB                             www.op5.se
Tel: +46 8-230225                  Fax: +46 8-230231
