On Sun, 6 Nov 2011 08:47:04 +0000 (UTC), Adam Gaskins wrote:
> Amos Jeffries,
>
> While I truly can appreciate your border-line pathological idealism,
> I think it's reasonable to want the browser to behave as designed
> when behind a proxy. For better or worse, people have grown used to
> certain tools and features within their browser of choice. I do work
> in the IT field, but imagine how much benefit the average Jane/Joe
> can get from something like TLD guessing! Sorry to rant, but I am
> also seeking an answer to this. Do you have any information on
> actually making this work? There is almost certainly a work-around
> for this issue; I just haven't found it yet either.
>
> -Adam
 I don't quite follow your argument. By that analogy, people who have 
 grown used to forks should be encouraged to keep using forks even when 
 eating soup. Different jobs == different "best" tool, and being 
 accustomed to something is no measure of "best".
 As for this feature proposal and Squid, follow my logic...
 So, (a) should Squid do many (a few million) DNS lookups for potential 
 variations of the name?
  That means up to 1.6 days of waiting for a single page to load while 
 Squid scans for candidate URLs; the why of it is outlined in my earlier 
 post. I suspect this is the same reason browsers don't do it this way. 
 Yes, the timings will usually be shorter, but even the sub-60-second 
 delay with Squid today trying to locate IPv6 domain access is raising 
 a lot of complaints.
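 For a rough sense of scale (my own illustrative figures, not 
 measurements): two million candidate names at ~70ms per serial DNS 
 lookup comes to 2,000,000 x 0.07s = 140,000s, a little over 1.6 days 
 for that one page.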
 Browsers avoid that major problem by using a search engine to do the 
 domain lookup instead of DNS.
 So, that brings us to (b): which corporate search engine do we force 
 everybody to use to follow the IE behaviour?
  Serious question. None of the Squid dev team believe we have the right 
 to decide that on behalf of the whole Internet community.
 Browsers get away with it because they can (and do) offer users a 
 selection of engines to use, with the user always free to change that 
 choice. We could build the configuration in and hand it to the admin, 
 but that still forces a whole network of users onto one search engine. 
 Not much better.
 Squid does provide plugin interfaces for third-party scripts to do 
 anything they like to fulfil certain operations (authentication, URL 
 rewriting, ACL tests, file erasure, SSL certificate generation).
 One option is (c): use the redirector interface with a helper that 
 does all the lookups and searching.
  As Kinkie posted earlier in the thread, that could be done with a 
 local database of who the users are and what their preferences are, 
 combined with a URL redirector that does all the preference loading 
 and domain searching. Or not, if you want to be mean and give the 
 users no choice.
  Either way, all that matters to Squid is the reply: "no change" or 
 the "proper" URL to redirect the user to. A rough sketch of such a 
 helper follows.
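 To make (c) concrete, here is a minimal sketch in Python. Treat it as 
 an illustration only: the suffix list, the use of a 301 redirect and 
 the overall structure are my own assumptions. It speaks the classic 
 one-line-per-request rewriter protocol: Squid sends 
 "URL client_ip/fqdn user method ..." and reads back one line, where an 
 empty line means "no change" and "301:URL" tells Squid to send the 
 client a redirect.

   #!/usr/bin/env python3
   # Illustrative Squid url_rewrite helper for option (c).
   # One request per input line, one reply line per request.
   import socket
   import sys
   from urllib.parse import urlsplit, urlunsplit

   SUFFIXES = [".com", ".org", ".net"]      # assumed guessing order

   def guess(raw_url):
       parts = urlsplit(raw_url)
       host = parts.hostname or ""
       if not host or "." in host:          # already qualified: leave it
           return ""
       for suffix in SUFFIXES:
           candidate = host + suffix
           try:
               socket.gethostbyname(candidate)   # one DNS lookup per guess
           except socket.gaierror:
               continue                          # did not resolve; next
           netloc = parts.netloc.replace(host, candidate, 1)
           new_url = urlunsplit((parts.scheme, netloc, parts.path,
                                 parts.query, parts.fragment))
           return "301:" + new_url          # visible redirect, not rewrite
       return ""                            # nothing resolved: no change

   for line in sys.stdin:
       fields = line.split()
       sys.stdout.write((guess(fields[0]) if fields else "") + "\n")
       sys.stdout.flush()                   # Squid waits for each reply

 Wire it in with the url_rewrite_program directive (plus enough 
 url_rewrite_children for your load). The per-user preference database 
 Kinkie described would slot into guess() in place of the fixed 
 SUFFIXES list.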
 Or, (d) you could try to convince the browser people to do this name 
 search before contacting the proxy, just as they do before making a 
 direct website connection. The PAC approach Henrik favoured earlier in 
 the thread goes some way towards that without waiting on the browser 
 people to add a new feature.
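 For a flavour of the PAC trick (my reading of it, not necessarily 
 exactly what Henrik proposed): a PAC file is a JavaScript function, 
 FindProxyForURL(url, host), which the browser evaluates before it 
 contacts any proxy. One that returns "DIRECT" whenever 
 isPlainHostName(host) is true (both are standard PAC built-ins) leaves 
 dotless names to the browser's own guessing, and returns 
 "PROXY yourproxy:3128" (substitute your own proxy name) for everything 
 else. Details vary a little between browsers, so treat that as a 
 sketch rather than a recipe.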
 In summary, we have a choice between (a) very annoying behaviour, (b) 
 very nasty behaviour, or leaving Squid unchanged (c and d) and letting 
 a third-party script do all the tricky work of making everybody happy. 
 We opt to keep Squid simple whenever possible.
  You still get to choose between (c) and (d), and best of all, through 
 them you get to decide how much user choice is honoured and which name 
 variations you want to allow.
 Amos