On 5/25/21, Antony Stone <Antony.Stone@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Tuesday 25 May 2021 at 14:36:09, Albretch Mueller wrote:
>
>> On 5/25/21, Antony Stone <Antony.Stone@xxxxxxxxxxxxxxxxxxxx> wrote:
>> > On Tuesday 25 May 2021 at 07:51:21, Albretch Mueller wrote:
>> >> As part of a teaching and learning (TaL)/school software, I need
>> >> squid:
>> >>
>> >> a) to detect one of the connected computers in an internal network
>> >> comprising wirelessly connected and wired computers as the "master"
>> >> (operated by the teacher);
>> >
>> > What information is available to Squid in order to "detect" that this
>> > is the "master" machine?
>>
>> I think a combination of cookies,
>
> What system generates / checks the cookies?

OK, I am just guessing here in the direction I'd wish that implementation
would go, but probably an intermediate proxy, or another instance running
internally as an ICAP server, should be used?

> What URL are the cookies associated with in the browser?

OK, again ;-), here I clearly see your point, but the communication
associated with certain computers could be handled differently based on
other authenticating aspects. Also, say, teachers have their own dedicated
tablets, and we know the time frames in which classes run in each
classroom...

>> its MAC address and, when both fail, authentication.
>
> Ah, some form of authentication, where the master user has to log in to
> something, would certainly be effective. It was just the way you used the
> word "detect" that made me think this should be some action on the part
> of Squid independently of what the master machine user was doing.

So, that issue we could consider checked.

>> Wouldn't that be enough, perhaps with an extra proxy server?
>
> Perhaps with an extra *web* server (for authentication), yes, but where
> would an extra proxy server point to?

 [teacher]  |\                                                     /|
 ...........| .>.[extra proxy server*2].<.>.[squid server + ICAP].<.| Internet
 [students] |/                                                     \|

*2: extra proxy server discriminating between communications started by
    the teacher or by a student

*:  extra ICAP server used to do some page content marshalling, e.g.: if a
    YouTube video is accessed, students would see only a page with all the
    JS crap removed and a link pointing to a local file with the video
    (which, of course, could have been pre-downloaded)

>> My main problem is that I don't want the student boxes to be prompted
>> for, or trying to initiate, an authentication, and I don't know of a
>> foolproof way of achieving that.
>
> How about the teacher accesses a URL that the students don't know, or at
> least are not supposed to access, and it is that URL which prompts the
> teacher to authenticate?

You are not a teacher, right? ;-) or at least not a teacher in the 40's
section of the South Bronx in NYC. I think it would be much better if the
teacher carries around something (probably just her face and/or
fingerprints, which could be checked via biometrics through the webcam on
her tablet). In a school (kind of like in a prison), keeping protected
functional spaces vertically disentangled is very important.

>> If possible, all the students' business should be let through, with
>> squid serving as a transparent proxy.
>
> That's just down to your networking configuration.

Great!

>> Probably squid could cache that request as local files to the extent
>> that it can, and just redirect the requests of students' clients as
>> references to that file using an ICAP server somehow?
>
> So, the student asks for the Wikipedia article on Amethyst and finds
> that their browser shows them the web page the teacher is looking at
> instead?
>
> I think there's no way you're going to achieve this sort of thing with
> the current popularity of SSL/TLS.

No, I don't see things happening in that way. There are certain moments
during the class in which students can "freely" access the Internet, but
the teacher should be able to direct the class.
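Going back to the "detect the master" point above, just to make it
concrete: a minimal squid.conf sketch of the IP-plus-authentication idea
could look like the following. The addresses and the password file path
are made up for the sketch, and note the caveat that Squid cannot do proxy
authentication on intercepted/transparent traffic, so this only works
where the teacher's box is explicitly configured to use the proxy:

```
# hypothetical classroom addressing -- adjust to the real network
acl teacher_box src 192.168.1.10/32
acl students    src 192.168.1.0/24

# only the teacher's box is ever challenged for credentials
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/teachers.passwd
acl teacher_auth proxy_auth REQUIRED

http_access allow teacher_box teacher_auth
http_access allow students
http_access deny all
```

The MAC-address idea would use the `arp` ACL type instead of `src`, with
the usual caveat that MAC addresses are only visible to Squid on the same
subnet.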
So her inet request trumps whatever the students may be attempting to do.
Yes, it seems to be a bit fascist, but I do believe we should offer kids a
safe learning environment while in school. They will have enough time to
watch all that crappy nonsense floating around outside of school for the
rest of their lives.

>> >> b) when that master reaches out to an outside URL, the response
>> >> should be replicated on that master's and all other internal
>> >> computers; but
>> >
>> > What do you mean by "the response should be replicated in ... all
>> > other internal computers"?
>>
>> that the initial request by the teacher should be received as a
>> response by all students
>
> Response to what?

a response to the initial teacher request

>> > Are you assuming that these computers are already running a browser,
>>
>> Well, technically, I think we could assume that; why would that be
>> problematic? How bad would it be if they are not running a browser? You
>> could interrupt an initiated request; you could even shut down your
>> computer in the middle of a download or transaction without a problem.
>> Why would that be so difficult? Or, what is it exactly that I am not
>> getting right?
>
> I'm asking "what application is going to receive this "response" sent
> by Squid, and be expecting it so that it can process it and display it
> to the user?"
>
> You can't just send a chunk of HTML to a computer over the network and
> expect a browser window to suddenly appear and display it.

It would not be a big deal for me, or anyone with some moderate
programming experience, to code in minutes an application that:

a) fields/intercepts a response to an initial request
b) saves it to a local file (if it is a media file), and
c) parses and marshals the response page to include a link to that (b) file

> Aside from anything else, you have to get a TCP session going in the
> first place.
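Just to show what I mean, the a)-c) steps above could be sketched in a few
lines of Python (everything here is hypothetical glue code, not anything
Squid itself provides; a real version would sit behind an ICAP service and
feed each response body through it):

```python
import re

def marshall_response(content_type: str, body: bytes, save_path: str) -> bytes:
    """Steps a)-c): field a response, save media to a local file, and
    rewrite HTML so students get a stripped page with a local link."""
    link = '<a href="file://%s">local copy</a>' % save_path
    if not content_type.startswith("text/html"):
        # b) a media file: save it locally and hand back a stub page
        with open(save_path, "wb") as f:
            f.write(body)
        return ("<html><body>%s</body></html>" % link).encode("utf-8")
    # c) an HTML page: strip <script> blocks and append the local link
    text = body.decode("utf-8", errors="replace")
    text = re.sub(r"(?is)<script.*?</script>", "", text)
    return text.replace("</body>", link + "</body>").encode("utf-8")
```

Crude, of course (a real page rewriter would use an HTML parser, not a
regex), but it is the shape of the thing.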
Well, yes, and that, like making sure or checking whether a browser window
is open, would be an aspect that squid can't do for us, but it is doable
in some other ways.

>> > that they should suddenly get some (apparently) web server response
>> > via Squid and display it, even though they did not make any request?
>> >
>> > If so, I would say this is impossible - you can't get a computer to
>> > show a response to a request it did not make.
>>
>> Yes, this is what I meant; why is that so hard?
>
> a) the client (user's computer) did not open a TCP session to anything
> (either Squid, or a web server), so it's not going to accept TCP
> "replies"
>
> b) the client did not send an HTTP request to anything, so it's not
> going to accept some HTML which simply turns up on its network port
>
>> Again, my forte is not networking, but I could see how the requested
>> file could be cached and forwarded to all student boxes. Perhaps using
>> an ICAP server.
>
> You can modify a request sent from the client, or you can modify a
> response sent back from a server, but you cannot simply send a response
> to a machine which did not make a request.

As explained above, you could channel some data to any computer with a
physical and addressable connection to a network; web/application servers
do that all the time. Or probably the idea of "beating"/"polling" requests
from a browser should be exploited?

>> >> c) responses to requests originating in the non-master ("slave"?)
>> >> ends, return to their corresponding ends;
>> >
>> > So, any computer other than the "master" simply makes requests and
>> > gets standard responses as usual. Fine.
>>
>> Yes, once you know the request originated in the non-master machine,
>> it would go back to the initiating client. Again, why would that be
>> that problematic?
>
> That is not problematic - it's fine.

Great!

>> >> d) at times the master should be able to switch off that replicating
>> >> feature;
>> >
>> > What times?
>> Teacher may decide to "privately" check out some information by
>> herself, without it being displayed on all students' ends, or even
>> concurrently open another "private" browser window.
>
> So, the Squid proxy needs somehow to be able to identify which session
> or window the teacher is using, and react differently. I suspect this
> is probably doable, but far from simple.

Great! You see, it is all falling into place little by little. All that
needs to be done then is qualifying/dealing with the "far from simple"
aspects relating to it.

>> > How?
>>
>> This is what I don't know, but I think (probably somewhat naively) it
>> shouldn't be that hard. Again, session tracking via cookies or URL
>> rewriting, maybe?
>
> I think you are completely overlooking modern security practices here.
>
> a) many websites use SSL/TLS - you can't just intercept the requests or
> responses and replace them with whatever you want
>
> b) most browsers and related applications will either not support, or
> definitely warn about, cross-site scripting and foreign cookies

The computers and browsers in the classrooms are school-owned, and it is
OK if students see all pages as plain-text, non-SSL/TLS ones. I may be
missing something here, but in principle I think it is doable somehow.

>> > I really think you need to explain this "replicating feature" in more
>> > detail (and preferably in network terms, from the point of view of
>> > the software running on the master, and the software running on a
>> > non-master).
>>
>> I am not a networking guy, but probably you could point me to some
>> related documents explaining specifically the kinds of problems that
>> arise around these kinds of issues.
>
> Just look up "TCP 3-way handshake" and read any basic introduction to
> "client-server connectivity" for an understanding of how an HTTP request
> gets made and the response received.
>
> Without a request, the response will be ignored.
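Putting your last point together with the "polling" idea I floated above:
because it is always the *student's browser* that makes the request, every
response has a TCP session and an HTTP request to ride on. A toy sketch of
the server side (the endpoint logic, cookie name and state are all
invented here), which also shows how a "private" teacher window could be
told apart by a cookie:

```python
from http.cookies import SimpleCookie

# what the teacher wants everyone to see; bumped on each new page she loads
teacher_state = {"version": 0, "url": None}

def push_url(url: str, cookie_header: str = "") -> bool:
    """Teacher side: publish a new URL to the classroom, unless this
    request came from a window carrying the made-up 'tal_mode=private'
    cookie (the 'switch off replication' feature)."""
    cookies = SimpleCookie()
    cookies.load(cookie_header)
    mode = cookies.get("tal_mode")
    if mode is not None and mode.value == "private":
        return False  # private window: do not broadcast
    teacher_state["version"] += 1
    teacher_state["url"] = url
    return True

def poll(client_version: int):
    """Student side: called by a JS timer in the student's page; returns
    (new_version, url_to_load or None)."""
    if teacher_state["version"] > client_version:
        return teacher_state["version"], teacher_state["url"]
    return client_version, None
```

The student page would call poll() every few seconds and load whatever URL
comes back, so no response ever has to arrive unrequested.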
Thank you, I will look up "TCP 3-way handshake" and then see what is
doable. I still think that, if browsers create so many problems, you
could:

a) reconfigure, or remove/recompile, the part of the browsers which
creates such "problems", or
b) code and use, for example, JavaFX-based rich clients

>> > I think this request is (a) a *lot* more complicated than this, and
>> > probably a lot more complicated than you think it is, and (b) in
>> > parts, impossible.
>>
>> I am squarely OK with "lots of complication" and, as I said, you might
>> not be able to completely and directly implement all aspects using
>> squid, but what aspects of that integrated whole do you think are
>> impossible?
>
> Basic networking protocols, from what I understand of how you expect
> this to work.

Thank you very much, Antony/[squid-users]. As I squeeze out some time to
do it, I will start working little by little on the implementation of such
a stack. I thought something like that, or pretty close to it, would
already be available out there. There definitely is a need for such a
thing. I will definitely keep bothering you as I stumble on hurdles along
the way ;-)

The first hurdle I have in sight is how to locally run squid with more
than one browser open on my own computer as a test environment. This is
something I think (again, probably I am being a bit too imaginative here)
shouldn't be a problem. How do people who design and troubleshoot browsers
do such things?

Thanks to all,
lbrtchx
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users