FTP problems

From: Brent Foster <B.R.Foster@dont-contact.us>
Date: Mon, 19 May 1997 22:49:09 +1200

We're currently in the process of forcing people through our cache by
blocking HTTP traffic from everywhere but the cache machine.

People are now reporting FTP problems. We've always had the problem, but
it's become more annoying now that everyone is using the cache (Squid
1.1.10).

We are ending up with partial FTPed files in the cache, presumably from
broken transfers. Once an incomplete file is in the cache there isn't any
obvious way for a user to force a reload (is there?), so every time they
FTP the file they get the incomplete copy from the cache.
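
As an aside, the one escape hatch I know of is the client-side reload: a
shift-reload in the browser sends a "Pragma: no-cache" header, which Squid
should treat as a request to refetch the object. A rough sketch of making
the same request by hand (the cache address and URL are made up):

    import urllib.request

    PROXY = "http://cache.example.ac.nz:3128"          # hypothetical cache
    URL = "ftp://ftp.example.org/pub/somefile.tar.gz"  # hypothetical object

    # Route both ftp:// and http:// URLs through the cache.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"ftp": PROXY, "http": PROXY}))

    # "Pragma: no-cache" is what a browser shift-reload sends; the cache
    # should refetch from the origin instead of serving the (possibly
    # truncated) cached copy.
    req = urllib.request.Request(URL, headers={"Pragma": "no-cache"})
    with opener.open(req) as resp:
        data = resp.read()
    print("fetched", len(data), "bytes")

That only helps a user who knows to do it, though; it doesn't stop the
truncated object being cached in the first place.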

I'm not familiar with the FTP protocol, but I presume it's happening
because the protocol doesn't (always) give the client a way to determine
the file size. Is there any way for Squid to check the sizes of FTPed
files to ensure that the complete file is cached? (e.g. could it use the
LIST command (what a client's "dir" actually sends; does that always
work?) to determine the file size, then ensure that the whole file ends
up in the cache?).
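
To make concrete the kind of check I mean, here's a rough sketch, assuming
the server supports the optional SIZE command (the server name and path
are made up):

    from ftplib import FTP, error_perm

    HOST = "ftp.example.org"        # hypothetical server
    PATH = "pub/somefile.tar.gz"    # hypothetical file

    ftp = FTP(HOST)
    ftp.login()                     # anonymous login
    ftp.voidcmd("TYPE I")           # binary mode, so SIZE reports bytes

    try:
        expected = ftp.size(PATH)   # sends SIZE; not all servers have it
    except error_perm:
        expected = None

    received = 0
    def count(chunk):
        global received
        received += len(chunk)

    ftp.retrbinary("RETR " + PATH, count)
    ftp.quit()

    if expected is not None and received != expected:
        print("short transfer: got", received, "of", expected, "bytes;")
        print("a cache could discard the object rather than store it")

(LIST output would also show a size, but its format is free-form and varies
between servers, so SIZE, where available, seems the more dependable check.)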

This is a major problem for us, because our international link is
currently badly saturated, so we are very likely dropping packets
everywhere, and it is very difficult to get a complete FTP transfer. Is
anyone else seeing the same thing?

Brent Foster
Systems Programmer, Massey University, Palmerston North, New Zealand
Received on Mon May 19 1997 - 22:39:31 MDT
