Caching netflix by Mime headers

Maybe OT, maybe a dream, but I need to ask


Turning on mime headers in the log, you will see entries like this:

1361122064.970   5480 192.168.7.137 TCP_MISS/206 981083 GET
http://108.175.38.89/12348119.ismv? - HIER_DIRECT/108.175.38.89
application/octet-stream [Accept: */*\r\nHost: 108.175.38.89\r\nRange:
bytes=3498123192-3499103587\r\nX-Device: 2012.4 NFPS3-001\r\n]
[HTTP/1.1 206 Partial Content\r\nServer: nginx/1.2.4\r\nDate: Sun, 17
Feb 2013 17:31:07 GMT\r\nContent-Type:
application/octet-stream\r\nContent-Length: 980396\r\nLast-Modified:
Mon, 03 Dec 2012 15:00:39 GMT\r\nConnection:
keep-alive\r\nCache-Control: no-store\r\nPragma:
no-cache\r\nAccess-Control-Allow-Origin: *\r\nX-TCP-Info:
snd_wscale=7;rcv_wscale=9;snd_mss=524;rcv_mss=524;last_data_recv=1000;rtt=46187;rttvar=19875;snd_ssthresh=3668;snd_cwnd=60784;snd_wnd=789248;rcv_wnd=1049048;snd_rexmitpack=186;rcv_ooopack=0;snd_zerowin=0;\r\nContent-Range:
bytes 3498123192-3499103587/3986579703\r\n\r]
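
For anyone who wants to reproduce that log format, the bracketed
request/reply header fields come from this squid.conf directive:

    log_mime_hdrs on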


The interesting part is the Range header:


I know it is quite a bad idea to download the whole movie first (which
is possible, as with YouTube), but Netflix movies are so big that
people would give up in desperation before the download finished.  So
my question is whether it is possible to cache 206 (Partial Content)
responses, keeping the mime headers in mind.
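
To make the question concrete, this is roughly the configuration I was
imagining (completely untested, just a sketch; the .ismv pattern and
the sizes are my own guesses based on the log entry above):

    # fetch the whole object even when the client only asks for a range
    range_offset_limit -1
    # keep downloading even if the client aborts
    quick_abort_min -1 KB
    # the movie in the log above is ~4 GB, so allow objects that big in the store
    maximum_object_size 4 GB
    # try to override the no-store/no-cache the CDN sends on the .ismv fragments
    refresh_pattern -i \.ismv 10080 90% 43200 override-expire ignore-no-store ignore-private

But as far as I understand, Squid will not store a 206 reply as such;
it can only cache the object once it has fetched it completely, which
brings me back to the download-it-all-first problem.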

LD

