Re: [PATCH v2] CONTRIBUTING: Please sign your emails with PGP

Hi Alex,

At 2023-11-22T14:47:58+0100, Alejandro Colomar wrote:
> +   Sign your emails with PGP
> +        It is strongly encouraged that you sign all of your emails sent
> +        to the mailing list, (especially) including the ones containing
> +        patches, with your PGP key.  This helps establish trust between
> +        you and other contributors of this project, and prevent others
> +        impersonating you.  If you don't have a key, it's not mandatory
> +        to sign your email, but you're encouraged to create and start
> +        using a PGP key.

I think you should alter this advice to employ the active voice, not the
passive.  When an authority is dispensing advice or direction, people
need to know who that authority is.  In this case, it would appear to be
the Linux man-pages project maintainers.  If there is an external
authority whose advice you are transmitting, then that authority should
likewise be cited by name.

Such a practice is important for long-term project governance because
that way your successors know at whose discretion the advice can/should
be updated.  While it does sometimes happen that a project changes
ownership into hands that are reckless and produce senseless churn such
that careless retention of old advice is actually preferable, in my
experience, it is at least as common for ownership to pass to people who are
uncertain of the motivations behind certain decisions, or cannot tell
which decisions were made with deliberation (as opposed to "going along
to get along") or following a recommended best practice that has become
invalidated by the passing decades.

The recent conversations about string copying on this list reflect just
how complex and frustrating such matters can be in another domain.
"Everybody" assumed for decades that copying strings in C was a
trivial matter.[1]  Now, we look back over three decades of our brethren
crucified upon CVE crosses along the Appian Way to a better C standard
library, and realize that Seventh Edition Unix probably should have
offered something like a string_copying(7) document.
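
To make the hazard concrete for anyone who hasn't been following that
thread: here is a minimal sketch (my own illustration, not anything
drawn from the list archives or the standard) of the best-known
strncpy(3) trap, the silently unterminated buffer.

    /* Illustration only: strncpy(3) does not NUL-terminate the
     * destination when the source doesn't fit.
     */
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        char        buf[8];
        const char  *src = "a string that will not fit in eight bytes";

        strncpy(buf, src, sizeof(buf));  /* copies 8 bytes, no '\0' */
        /* printf("%s\n", buf);             would read past buf     */

        buf[sizeof(buf) - 1] = '\0';     /* the step callers forget */
        printf("%s\n", buf);             /* prints "a strin"        */
        return 0;
    }

One forgotten line; three decades of advisories.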

> +        There are many ways you can sign your patches, and it depends on
> +        your preferred tools.  You can use git-send-email(1) in
> +        combination with mutt(1).  For that, do the following.
> +
> +        In <~/.gitconfig>, add the following section:
> +
> +            [sendemail]
> +                sendmailcmd = mutt -H - && true
> +
> +        And then, patch mutt(1) to enable encryption in batch and mailx
> +        modes, which is disabled in upstream mutt(1).  You can find a
> +        patch here:
> +        <https://gitlab.com/muttmua/mutt/-/merge_requests/173>.

I find it awkward to "strongly recommend" a best practice that isn't
easily facilitated by _any_ readily available tool without further
hacking.

That you have to dispense this advice suggests to me that the status quo
has not yet caught up with your ambitions.  I would soften the strength
of your recommendation and explicitly concede that better tooling
support is necessary to advance the state of the art.

I "manually" sign my messages to this list (that is, via keyboard-driven
menu selections in neomutt(1)).  But I don't produce patches in
sufficient volume that this tedium rises to a serious annoyance.  So
what you might do for the time being is to focus on advice to similarly
situated users, and concede that, for people who are high frequency
patch generators, technology is lacking at present.

Regards,
Branden

[1] I encourage anyone with either a reverential or heretical turn of
    mind to review §5.5 of the 2nd edition of _The C Programming
    Language_ and consider it in light of our string_copying(7)
    discussions.  I would attend particularly to what is implied by the
    recommendation of Exercise 5-5 to implement strncat(3), strncmp(3),
    and strncpy(3) from scratch.  (A Kernighan & Ritchie idolator might
    claim that they perceived all of the conceivable problems in 1988,
    and offered the exercise as an elliptical means of warning the
    sufficiently savvy reader that the standard library had gone astray.
    Personally, I think such an inference is inconsistent with Ritchie's
    own expressed opinions about obscurantism.[2]  But if there's one
    thing brogrammers are free with, it is negative assessments of
    others' intellects.)
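
    For the record, the exercise itself is short.  A from-scratch
    strncpy(3) -- my sketch, following the C standard's semantics
    rather than K&R's exact wording, and using a hypothetical name --
    makes the trap visible in half a dozen lines:

        #include <stddef.h>

        /* Copy at most n bytes of t into s, padding with '\0' bytes
         * if t is shorter than n.  Nothing here guarantees that s is
         * terminated when strlen(t) >= n; that is the caller's trap.
         */
        char *
        my_strncpy(char *s, const char *t, size_t n)
        {
            size_t  i;

            for (i = 0; i < n && t[i] != '\0'; i++)
                s[i] = t[i];
            for (; i < n; i++)
                s[i] = '\0';
            return s;
        }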

    I recently read an ACM oral history interview with the designer of
    the Pentium Pro.[3][4][5]  He passed along some excellent advice to
    anyone who has to endure a toxic working environment.

    "...if some human mind created something then your human mind can
    understand it.  You should always assume that, because if you assume
    it, it throws away all the doubts that are otherwise going to bother
    you, and it's going to free you to just concentrate on 'what am I
    seeing, how is it working, what should I do about it, what am I
    trying to learn from this'.  Never, ever think that you're not smart
    enough; that's all nonsense." -- Robert P. Colwell

[2] https://web.archive.org/web/20150218135530/http://cm.bell-labs.com/cm/cs/who/dmr/odd.html
[3] https://www.sigmicro.org/media/oralhistories/colwell.pdf

[4] Only one point concerns me about it.  There's a clash between
    Colwell's assertion that Intel had employed formal methods to
    validate the original Pentium's floating point unit, and accounts
    I've received from other sources (independently of but consistently
    with the Wikipedia article about the processor) that Intel had
    consciously _forgone_ such methods for that chip (presumably due to
    expense or deadline pressure).  The result was the infamous FDIV bug
    and, so I hear, a resolution to never again skip formal verification
    of the FPU.  I wonder if Colwell was mistaken here.

[5] Colwell's assessment of and stories about the ill-fated i432
    architecture are also worth reading.  The popular conception of that
    CPU constitutes a negative lesson like the one commonly expressed by
    Linux hackers who thoughtlessly traffic in Torvalds quotations about
    microkernels (as shibboleths of their clubhouse memberships?), in
    which we can observe the imprudence of parroting claims about why a
    technology failed.  If we accept Colwell's account, the compiler
    group tasked with supporting i432 was effectively a resistance
    movement against the chip architecture, and deliberately sandbagged
    the machine's performance.  That it was "stupid" to try to support
    the Ada programming language in a CPU, and to design it such that
    access checks were supported in a robust and hierarchical way, is
    the _wrong_ lesson to draw from i432's failure--just as
    "microkernels are inherently inefficient, hurr hurr" is precisely
    the wrong one to take from a single example, Mach, not being
    performant in the 1990s.

    But don't despair.  We can combat the claims of the ignorant by
    gathering and evaluating objective empirical measurements.  And then
    your management will select the wrong ones, to the exclusion of all
    others.

    Masaki Kobayashi warned us--there's no escaping the human condition.

Attachment: signature.asc
Description: PGP signature

