On 8/2/23 22:47, John Curran wrote:
>> I think that's entirely possible, even likely. Especially when
>> combined with AI.
>>
>> The specific point I made is that if there's some oracle used to
>> decide whether a message is CSAM, that is accessible to an app, then
>> the same oracle can be used to test whether some altered version of a
>> CSAM image, or for that matter a synthetic image, passes the oracle.
>> This by itself helps CSAM producers generate images that will evade
>> CSAM detectors.
>
> Keith - Sure - but that doesn’t mean that deployment of such solutions
> will likely result in an _increase_ in CSAM production (not unless you
> assume that existing enforcement efforts will all automatically become
> dependent on such solutions and/or otherwise become moribund.)
>> I said "a likely effect", not "the most likely effect". Perhaps I
>> should have instead said "a highly probable effect".
>>
>> (The analogy to spam filters is: if the spammers can test their
>> messages against spam filters that are in use, they can easily
>> generate spam that reliably evades such filters. Prosecution has
>> nothing to do with it.)
>>
>> The general point is simply this: it's not unusual for a naive
>> solution to make a problem worse. It's easy to have misplaced faith
>> in a newly proposed solution. It's not hard to find examples of this
>> in past IETF work.
>
> Certainly a possibility, but by no means assured, and thus hard to
> support a “likely” conclusion that the “effect of any CSAM
> countermeasure is to increase the distribution and production of
> CSAM, and with it the number of victims.”
>> I consider DRM a disaster, a complete ripoff of the public's fair use
>> rights, an inexcusable violation of the copyright clause of the
>> Constitution. DRM should be terminated with extreme prejudice.
>
> DRM is a fine example - it’s imperfect, there’s a continual escalation
> of technologies in both the defense and attack spheres, and yet it’s
> deemed sufficiently effective by many parties and in many contexts
> (despite known imperfections) to enjoy widespread use.
> It may not be what technologists want, but technologists aren’t the
> only party at the table. When new solutions to such problems are
> considered, the IETF needs to decide whether it wishes to participate
> and architect the most technically sound, effective, and least
> intrusive solution possible, or to proclaim such work outside its
> scope and/or existing dogma. There are tradeoffs with either route
> (and for the resulting role/direction of the IETF), so the process by
> which the IETF considers such questions may be fairly important.
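The oracle-evasion loop described in the quoted text can be sketched
abstractly. Everything below is a hypothetical stand-in (a toy
`oracle`, a toy `similarity_score`, random bit-flips as the
perturbation), not any real detector; the point is only the query
pattern: black-box access plus trial perturbation suffices to find a
variant the detector no longer flags.

```python
# Hypothetical sketch of oracle-evasion: repeatedly perturb an input and
# query a black-box detector until the detector stops flagging it.
import random

TARGET = [1] * 100  # toy "known" image, represented as 100 bits


def similarity_score(image):
    """Toy perceptual-hash-like score: fraction of bits matching TARGET."""
    return sum(1 for a, b in zip(image, TARGET) if a == b) / len(TARGET)


def oracle(image):
    """Stand-in detector: flags images too similar to the known image."""
    return similarity_score(image) > 0.8


def evade(image, max_queries=10_000, rng=random.Random(0)):
    """Flip random bits until the oracle no longer flags the image.

    Requires only query access to the oracle, no knowledge of its
    internals.
    """
    img = list(image)
    for _ in range(max_queries):
        if not oracle(img):
            return img  # found a variant that evades the detector
        img[rng.randrange(len(img))] ^= 1  # perturb one toy "pixel"
    return None  # gave up within the query budget


variant = evade(TARGET)
# variant is a lightly perturbed copy that the toy oracle no longer flags
```

Note that nothing in the loop depends on how the oracle works
internally; making the oracle queryable by an app is itself what
enables this search.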
No, I don't buy that IETF has to make either choice. Some
technologies are simply harmful, and IETF doesn't have to either
be complicit in their development OR declare them outside of its
scope. IETF should vigorously oppose development of technologies
known to cause harm, which both censorship and DRM do.
Keith