Re: covert channel and noise -- was Re: proposal ...

On Sun, 15 Feb 2004, Ed Gerck wrote:

> Dean Anderson wrote:
> > 
> > It isn't the case that the spammer intended to send a message about
> > the superbowl, but somehow "noise" altered the message to a
> > solicitation on viagra. Rather, they intended to send a message on
> > viagra, and you received their message, noise free. But seeing the
> > solicitation for viagra, you became upset, and reported a complaint
> > about the inappropriate use of the channel. In
> > information-theory-speak, you report a "communication in violation of
> > the security model"; a covert or sneaky channel.
> 
> I guess we agree that if the message can be read by the intended recipient 
> then it's not in a covert channel. A covert channel is one that can't even 
> be detected by the intended recipient. But, you may ask, "who" is the 
> intended recipient? 

Err, we do not agree on that. You still misunderstand the nature of a
covert channel.  I suggest you re-read the definitions and references in
the reposted messages from Ellis Cohen and Stavros Macrackis.

A covert or sneaky channel is merely one in which the communication is
//not authorized by the security model//.  It has nothing to do with
readability or detectability.  There is no theorem that says covert
channels cannot be detected. Quite obviously, they are detected, and some
action might be taken.  Theory just says that you cannot prove they aren't
there.  Let me put it another way:  A "null" reading on your "covert
channel detector" does not mean there aren't any covert channels--Just
that you haven't detected any.  This is a subtle, yet significant
difference.

In yet other words: you whack-a-mole when you find one, but you can't say
that there aren't any moles. It is not a game you can win.  You have to
keep looking and keep whacking, so long as there are moles to pop up. The
arcade whack-a-mole game stops when you run out of time, but our game only
stops when no one wants to spam or conduct abuse, or when government
intervention prevents them from playing.

We now have a legal process to use against abusers who are not
commercial--most, if not all, genuine commercial spammers will simply
comply with the law.  That leaves those who aren't genuinely commercial,
and whose intent is to annoy people.

> An example of such a covert channel is if a spammer hides information 
> in the subject line by using wrongly spelled forms of "viagra", 

This could be a covert channel. But it's also possible that the whole spam
message, even with a correctly spelled "viagra", would be a covert or
sneaky channel if it is not //authorized by the security model//, where
the "security model" is simply an Acceptable Use Policy statement not to
do that.  In this case, you may have a very weak "covert channel
detection" process. But extreme weakness in the process is irrelevant to
the theory.
What, if anything, has to be done to get past the detection process and
the security model and into your mailbox is irrelevant.  The security
model can be made quite extensive, such as with Mandatory Access Controls
as implemented in secure operating systems.  Even so, one cannot say that
there aren't covert channels.  One can say other things about them, but
not that thing.
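
To make the point concrete, here is a minimal sketch (Python, with a
made-up substitution table and word list -- not any real filter) of the
kind of weak detection process I mean.  A negative result from something
this crude only means "nothing detected", never "no covert channel":

    # Minimal sketch of a weak "covert channel detector" for subject lines.
    # The substitution table and blocked-word list are illustrative only.
    import re

    SUBSTITUTIONS = str.maketrans(
        {"1": "i", "!": "i", "0": "o", "@": "a", "3": "e", "$": "s"})
    BLOCKED_WORDS = {"viagra"}

    def normalize(subject):
        """Lowercase, map common look-alike characters, drop separators."""
        cleaned = subject.lower().translate(SUBSTITUTIONS)
        return re.sub(r"[^a-z]", "", cleaned)

    def detect(subject):
        """True only when a blocked word appears after normalization.
        False means "nothing detected", not "no covert channel"."""
        text = normalize(subject)
        return any(word in text for word in BLOCKED_WORDS)

    print(detect("V1agra at low prices"))        # True  -- caught
    print(detect("Cheap v-i-a-g-r-a"))           # True  -- separators stripped
    print(detect("Ask me about the blue pill"))  # False -- missed entirely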

Information theory has proved itself useful to the analysis of anti-spam
schemes. Besides being able to rule out a number of schemes which promise
to be complete technical solutions to spam, we can also see what range of
solutions can be implemented against spam and what results we can expect
from that range.

Consider the case of Mandatory Access Controls (MAC) for operating
systems.  We see that if we tighten up the controls on the flow of
information, we improve our chances of detection. In particular, we
hope to detect people trying to find covert channels before they succeed
in finding one.  This is very expensive, both in supervision costs, and in
the difficulty of training workers to use such systems.  In the case of
classified government information, the cost is justified.  Having a
potential spy create a denied MAC security event while probing for a
covert channel is well worth the effort, even if you can't be sure that
they will be caught.  The spy, or even potential spy or stupid user, is
then removed from the secured areas. Even honest but stupid users are
removed, and this can be rationalized because they don't have the capacity
to be trusted with sensitive information.  It works because the same
'mole' doesn't get repeated chances to find the channel that will be
successful, and there are legal processes to make that happen. You cannot
lie on a security clearance application when it asks for your identity or
whether you've ever had a clearance revoked or denied.  Expensive checks are made
to ensure that your answers are truthful.
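
To illustrate (in Python, with invented clearance levels, users, and
objects -- not how any particular secure OS spells it), a mandatory check
looks roughly like this: the decision comes from a fixed policy, and a
denial is itself an auditable event that can point at someone probing for
a channel:

    # Rough sketch of a Bell-LaPadula-style mandatory read check ("no read up").
    # Clearance levels, users, objects, and the audit list are illustrative only.
    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}
    audit_log = []

    def mac_read_allowed(clearance, label):
        """Allow a read only when the subject's clearance dominates the label."""
        return LEVELS[clearance] >= LEVELS[label]

    def read_object(user, clearance, obj, label):
        """Attempt a read; every denial is recorded as a security event."""
        if mac_read_allowed(clearance, label):
            return True
        audit_log.append((user, obj, label))  # the denied MAC event a reviewer sees
        return False

    read_object("alice", "secret", "payroll", "confidential")         # allowed
    read_object("mallory", "confidential", "war_plan", "top_secret")  # denied, audited
    print(audit_log)  # [('mallory', 'war_plan', 'top_secret')]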

But the same can't be said about spammers/abusers.  We can't escort them
out, and we can't even be sure of their identity in a civil context.  
Frequently, they are using someone else's identity or computer.  We
already detect them quite easily. But there is little that can be done.
Once positively detected, we can block some spam for a while.  We improve
filters, and spammers/abusers eventually adapt.  This basic process cannot
be stopped, unless the spammers/abusers stop of their own accord or some
legal process is brought to bear to force them to stop.  We can probably
apply Bayesian filters, and I think AI, text summarization, etc., to the
problem to help automate the task. Of course, spammers can use automatic
inverse methods to automate the task of finding channels that aren't
blocked.  However automated, the fundamental governing process described
by information theory isn't altered.
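
For what it's worth, the Bayesian step can be sketched in a few lines
(Python; the training counts and threshold are invented, and a real filter
handles tokenization, smoothing, and unseen words far more carefully):

    # Toy naive-Bayes spam score over word tokens, with add-one smoothing.
    # Training counts are illustrative only.
    import math

    spam_counts = {"viagra": 50, "cheap": 30, "meeting": 2}
    ham_counts  = {"viagra": 1,  "cheap": 5,  "meeting": 40}
    spam_total, ham_total = 1000, 1000   # tokens seen in each training corpus

    def log_likelihood_ratio(tokens):
        """Sum of log P(token|spam)/P(token|ham) over the message's tokens."""
        score = 0.0
        for t in tokens:
            p_spam = (spam_counts.get(t, 0) + 1) / (spam_total + 2)
            p_ham  = (ham_counts.get(t, 0) + 1) / (ham_total + 2)
            score += math.log(p_spam / p_ham)
        return score

    def is_spam(text, threshold=0.0):
        return log_likelihood_ratio(text.lower().split()) > threshold

    print(is_spam("cheap viagra"))             # True  -- scores toward spam
    print(is_spam("meeting moved to monday"))  # False -- scores toward ham

The same arithmetic, run in reverse by a spammer against his own drafts,
is one form of the "automatic inverse methods" mentioned above for finding
tokens that slip past the filter.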


		--Dean



