Re: Merging Core and Extras affecting security updates

On Mon, 22 Jan 2007, Mark J Cox wrote:

> That's a good summary of the process the Red Hat security response team 
> currently follow --

It was not me who broke into your offices and stole your papers. ;)

> with the addition that when we discover something by tracking that
> doesn't have a CVE we ensure that it gets a CVE name assigned to it.  
> Therefore the CVE list can be used as a definitive list for anything
> that may affect Red Hat or Fedora (indeed for Fedora tracking right now
> we base it off the CVENEW mails)

How much time does it take to get a new CVE number? Hours? Days?
How do you handle duplicate CVEs? (I don't know how often this happens
nowadays, but the CVE list had some duplicate entries in the past.)

> (OVAL just provides a way of expressing how to detect the presence of some 
> particular CVE named flaw)

I know. It occurred to me that a corpus of OVAL files (or anything
similar in a machine-readable form) might be used to check automatically
whether the corresponding issues are relevant and (after the fact)
whether they have been handled properly.
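
For instance (a rough sketch in Python; I am assuming the OVAL 5.x
definition layout here, where each <definition> carries <reference
source="CVE"> elements -- the namespace and structure should be checked
against the actual feed):

  import sys
  import xml.etree.ElementTree as ET

  # Namespace of OVAL 5.x definition documents (an assumption; adjust
  # to whatever the actual feed uses).
  NS = {"oval": "http://oval.mitre.org/XMLSchema/oval-definitions-5"}

  def cves_in(path):
      """Yield (definition id, CVE name) for every CVE reference."""
      root = ET.parse(path).getroot()
      for d in root.findall(".//oval:definition", NS):
          for r in d.findall(".//oval:reference[@source='CVE']", NS):
              yield d.get("id"), r.get("ref_id")

  if __name__ == "__main__":
      for def_id, cve in cves_in(sys.argv[1]):
          print(def_id, cve)

Cross-checking such a list against the CVEs we have triaged would catch
both issues we missed and issues we handled but never closed.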

> > This sounds like a task for CVSS:
> I don't believe CVSS really works for open source software (or indeed any 
> software that is shipped by multiple vendors).

It is impossible to assign a fixed universal severity rating to a
vulnerability in a software package used in more than one environment
and configuration (there are many examples of software being secure in
one configuration and providing "instant remote root" in another).

> Even assigning a Base score is tricky to do.  Is that flaw in
> ImageMagick where a buffer overflow could be triggered if you open a
> malicious file you were given "remote" or "local"?  The attacker is
> remote.  If you argue it's "local"  then how about a flaw in something
> like xpdf?  is that also local?

Most vulnerabilities exploitable by evil files are "remote" because you
do not need a local account to exploit them. But the attacker cannot
trigger the execution of the vulnerable code at will, and this should be
expressed as "high access complexity" in CVSS terminology.

> NVD say these are "user complicit" and marked as local.

I think they got it wrong. See above.

> But doesn't it depend on if xpdf is associated with pdf files in your
> web browser?  It's not user complicit if all you need to do is visit
> some malicious website.

Well, it is "user complicit" as long as the attacker needs to convince a
local user to do something (to visit a website, to download and view a
file, to view an email attachment...).

On the other hand, I find the choice of the word "complicit" misleading
when it is used to describe attacks where the user expects the operation
(visiting a website, etc.) not to cause any harm.

> How about if the flaw is in some widely used library like libpng; won't 
> the local/remote rating depend on exactly what software is using that 
> library in what ways?

Sure. In fact, everything can depend on the way the library is used: the
same vulnerability can be irrelevant in the context of one program
(which uses the library to process only good, trusted data, e.g. the
files shipped with the program), a minor problem in the context of
another program (the library is used to process trusted data only, but
some inputs, e.g. outputs from some other program, can trigger the bug
and make the program unreliable), or a major hole in the context of yet
another program (which accepts untrusted data from anyone on the
Internet, e.g. files uploaded via a publicly accessible web form, and
feeds them to the library).

I myself would compute some kind of aggregate score over all reasonable
uses of the library. The exact meaning of "reasonable" and "not
reasonable" is fuzzy but I hope a little bit of common sense will help in
most cases. E.g. feeding an untrusted image to libpng is reasonable,
feeding an untrusted .gtkrc to libgtk is not reasonable (and it is the
program using the library that needs fixing in the first place). An
important reality check (in both directions): all existing uses in the
distro should be reasonable.
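
Something like this (the contexts and numbers are completely made up,
just to show the shape of the computation):

  # Per-context scores for one hypothetical libpng flaw; the values
  # are illustrative, not real ratings.
  reasonable_uses = {
      "web app decoding uploaded images": 9.0,    # major hole
      "viewer decoding a user's own files": 6.8,  # evil-file attack
      "tool reading only bundled images": 0.0,    # irrelevant
  }

  worst = max(reasonable_uses.values())
  mean = sum(reasonable_uses.values()) / len(reasonable_uses)
  print("worst reasonable use: %.1f, mean: %.1f" % (worst, mean))

Whether the aggregate should be the worst case, the mean, or something
weighted by how common each use is, is a policy question.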

> Then you have to take into account the version of Fedora Core or RHEL as 
> each has different security technologies.  Some double-free flaws can lead 
> to a complete loss of integrity on some systems, but no loss of integrity 
> but partial availability on others.

A vulnerable program running on a system with double-free protection and
the same program running on a system without it are two different pieces
of software from this point of view. We are dealing with a certain kind
of staged attack here: the first stage is to exploit a vulnerability in
the program itself to violate the integrity of its communication with
the memory allocator, and the second stage is to subvert the allocator
through this channel.

It might be possible to rate such a program with an expression depending
on variables describing the rest of the system (e.g. its sensitivity to
double-frees), and to compute a separate final result for every
environment taken into account.
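
In other words, something like this (the names and numbers are invented
for the example):

  # A parametric rating: the flaw's impact is a function of the
  # environment, evaluated once per environment we care about.
  def double_free_impact(env):
      if env["allocator_detects_double_free"]:
          # The allocator aborts: a crash, i.e. partial availability
          # loss, no integrity loss.
          return {"integrity": "none", "availability": "partial"}
      # No protection: the channel can be used to corrupt the heap.
      return {"integrity": "complete", "availability": "complete"}

  environments = {
      "release with glibc double-free checks":
          {"allocator_detects_double_free": True},
      "older release without the checks":
          {"allocator_detects_double_free": False},
  }

  for name, env in environments.items():
      print(name, "->", double_free_impact(env))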


--Pavel Kankovsky aka Peak  [ Boycott Microsoft--http://www.vcnet.com/bms ]
"Resistance is futile. Open your source code and prepare for assimilation."
