[Fwd: 1.2b20-1: Async IO fixes & optimizations]

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Tue, 05 May 1998 19:18:02 +0000

This is a multi-part message in MIME format.

--------------7B8EC2AA17A7F80D27409F14
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi Stewart,

Here are all the changes I made to the thread handling:

The 1 second timeout was fixed in a slightly different way:
* Threads send a SIGCONT signal to the main thread, which breaks the
select() call (EINTR).
* A slightly lowered timeout (1/4 second), in case we miss the signal.
* Two "hints": one telling the threads that the main loop is blocked on
select() and one telling the main loop that a thread has finished. The
hints are used to avoid unnecessary signals/blocking selects (see the
sketch below).
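
To make the interaction concrete, here is a minimal standalone sketch of
the signalling scheme (my illustration, not Squid code: worker() stands
in for an I/O thread, and the squid_in_select/aio_done flags mirror the
hints added by the patch below). Compile with cc -pthread:

#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/time.h>

static pthread_t main_thread;
static volatile sig_atomic_t squid_in_select = 0; /* main loop blocked? */
static volatile sig_atomic_t aio_done = 0;        /* a thread finished? */

static void
dummy_handler(int sig)
{
    (void) sig; /* exists only so SIGCONT makes select() fail with EINTR */
}

static void *
worker(void *arg)
{
    (void) arg;
    sleep(1);                   /* stands in for a disk I/O request */
    aio_done = 1;               /* hint to the main loop */
    if (squid_in_select)
        pthread_kill(main_thread, SIGCONT);     /* break select() */
    return NULL;
}

int
main(void)
{
    pthread_t tid;
    struct timeval tv;
    struct sigaction sa;

    main_thread = pthread_self();
    sa.sa_handler = dummy_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;            /* select() returns EINTR on most systems
                                 * even with SA_RESTART, which the patch uses */
    sigaction(SIGCONT, &sa, NULL);

    pthread_create(&tid, NULL, worker, NULL);

    tv.tv_sec = 10;             /* the patch lowers this to 1/4 s instead */
    tv.tv_usec = 0;
    squid_in_select = 1;        /* hint to the worker threads */
    if (select(0, NULL, NULL, NULL, &tv) < 0 && errno == EINTR)
        printf("select() broken by SIGCONT after ~1 s, not 10 s\n");
    squid_in_select = 0;

    pthread_join(tid, NULL);
    return 0;
}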

The "spin"-situation was fixed by properly protecting the condition
variable with a mutex (the documented way to use pthread_cond_wait).
Non-blocking main thread is achived by using the non-blocking
pthread_mutex_trylock instead of _lock.

If some platform does not support _mutex_trylock, it could be replaced
with a blocking _mutex_lock. The penalty is blocking for two context
switches while the thread enters pthread_cond_wait.
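
The substitution could be hidden behind a macro; a rough sketch, where
AIO_HAVE_TRYLOCK is a hypothetical feature-test define (my assumption,
not something the patch introduces):

#include <pthread.h>

#if AIO_HAVE_TRYLOCK
#define AIO_TRYLOCK(m) pthread_mutex_trylock(m) /* never blocks */
#else
#define AIO_TRYLOCK(m) pthread_mutex_lock(m)    /* may block briefly */
#endif

Both calls return 0 once the mutex is held, so an AIO_TRYLOCK(&m) == 0
test works unchanged in either configuration.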

The I/O thread holds the mutex permanently, except during
pthread_cond_wait. The main thread holds the mutex when it knows that
the I/O thread is idle.

The "Solaris hack" is completely removed. Instead the thread side code
simply reads

thread_loop()
{
  pthread_mutex_lock(&threadp->mutex);
  for (;;) {
    while (threadp->state != _THREAD_BUSY) {
      threadp->state = _THREAD_WAITING;
      /* cond_wait atomically releases the mutex and reacquires it on wakeup */
      pthread_cond_wait(&threadp->cond, &threadp->mutex);
    }
    request = threadp->req;
    [process request]
  }
  pthread_mutex_unlock(&threadp->mutex); /* not reached */
}

Remember that the mutex is held at all times, properly protecting any
updates/reads of the threadp structure.

The main thread code is a little bit more complex:

/* Detect a completed thread */
   loop through all busy threads {
      if (threadp->state != _THREAD_BUSY)
         if (pthread_mutex_trylock(&threadp->mutex) == 0)
            break; /* A finished thread found */
   }

/* Send a request */
   threadp->req = request;
   threadp->state = _THREAD_BUSY;
   pthread_cond_signal(&threadp->cond);
   pthread_mutex_unlock(&threadp->mutex);

The threadp->state check is MT safe, despite the fact that it is not
protected. It is used to avoid unnecessary calls to
pthread_mutex_trylock(). _trylock is used instead of _lock as there is a
slight chance that a context switch occurred in the I/O thread between
setting threadp->state and the pthread_cond_wait() call.
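
For reference, the two fragments combine into the following compilable
sketch (my condensation, not the Squid code; the request is reduced to a
plain int and only one thread is shown):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

enum { _THREAD_WAITING, _THREAD_BUSY, _THREAD_DONE };

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    int state;
    int req;                    /* stands in for the request structure */
} demo_thread_t;

static void *
thread_loop(void *arg)
{
    demo_thread_t *threadp = arg;
    pthread_mutex_lock(&threadp->mutex);        /* held permanently ... */
    for (;;) {
        while (threadp->state != _THREAD_BUSY) {
            threadp->state = _THREAD_WAITING;
            /* ... except while sleeping in pthread_cond_wait */
            pthread_cond_wait(&threadp->cond, &threadp->mutex);
        }
        printf("processing request %d\n", threadp->req);
        threadp->state = _THREAD_DONE;
    }
    return NULL;
}

int
main(void)
{
    static demo_thread_t t = { PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_COND_INITIALIZER, _THREAD_WAITING, 0 };
    pthread_t tid;

    pthread_create(&tid, NULL, thread_loop, &t);

    /* Send a request: touch the structure only with the mutex held */
    while (pthread_mutex_trylock(&t.mutex) != 0)
        sched_yield();
    t.req = 42;
    t.state = _THREAD_BUSY;
    pthread_cond_signal(&t.cond);
    pthread_mutex_unlock(&t.mutex);

    /* Detect completion: the unprotected state check only filters,
     * the trylock gives the authoritative answer */
    for (;;) {
        if (t.state != _THREAD_BUSY && pthread_mutex_trylock(&t.mutex) == 0)
            break;
        sched_yield();
    }
    printf("request done, thread is idle again\n");
    return 0;
}

Note that the while loop around the state check in thread_loop is what
makes a lost signal harmless: even if pthread_cond_signal fires before
the thread reaches pthread_cond_wait, the thread sees _THREAD_BUSY on
its next check and never sleeps.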

/Henrik

--------------7B8EC2AA17A7F80D27409F14
Content-Type: message/rfc822
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

Message-ID: <354E2EFC.18295EE9@hem.passagen.se>
Date: Mon, 04 May 1998 21:11:24 +0000
From: Henrik Nordstrom <hno@hem.passagen.se>
X-Mailer: Mozilla 3.01Gold (X11; I; Linux 2.0.32 i586)
MIME-Version: 1.0
To: squid-bugs@nlanr.net
Subject: 1.2b20-1: Async IO fixes & optimizations
Content-Type: multipart/mixed; boundary="------------3470E9FF3CD6469F3F834C6A"

This is a multi-part message in MIME format.

--------------3470E9FF3CD6469F3F834C6A
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

* Fixes a bug where the file offset was not preserved on reads. This was
crashing pipelined connections (deferred replies). (Write file offsets
are still not implemented, async or not.)

* Added a queue of completed requests, to allow quicker reuse of
finished threads (see the sketch after this list).

* Improved responsiveness on lightly loaded servers:
1. Threads interrupt select() by sending SIGCONT to the main thread when
finished.
2. Shorter select() timeout, in case we missed the signal.
3. comm.c and aiops.c hint each other when blocking on select() or when
a thread is finished, to avoid unnecessary signals or blocking selects.
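
A minimal sketch of such a done-queue (my illustration; the struct is
reduced, but the head/tail handling matches request_done_head and
request_done_tail in the patch below):

#include <stdio.h>
#include <stdlib.h>

typedef struct request {
    struct request *next;
    int id;                     /* stands in for the real request fields */
} request_t;

static request_t *request_done_head = NULL;
static request_t *request_done_tail = NULL;

/* Append at the tail, as aio_poll_threads() does for finished requests */
static void
done_enqueue(request_t * requestp)
{
    requestp->next = NULL;
    if (request_done_tail != NULL)
        request_done_tail->next = requestp;
    else
        request_done_head = requestp;
    request_done_tail = requestp;
}

int
main(void)
{
    int i;
    request_t *r;
    for (i = 0; i < 3; i++) {
        r = malloc(sizeof(*r));
        r->id = i;
        done_enqueue(r);
    }
    for (r = request_done_head; r != NULL; r = r->next)
        printf("done: %d\n", r->id);
    return 0;
}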

/Henrik

--------------3470E9FF3CD6469F3F834C6A
Content-Type: text/plain; charset=us-ascii; name="squid-1.2.beta20-1.asyncio.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="squid-1.2.beta20-1.asyncio.patch"

Index: squid/src/protos.h
diff -u squid/src/protos.h:1.1.1.19 squid/src/protos.h:1.1.1.19.2.3
--- squid/src/protos.h:1.1.1.19 Sat Apr 25 14:47:53 1998
+++ squid/src/protos.h Mon May 4 22:42:54 1998
@@ -48,8 +48,8 @@
 extern void aioCancel(int, void *);
 extern void aioOpen(const char *, int, mode_t, AIOCB *, void *, void *);
 extern void aioClose(int);
-extern void aioWrite(int, char *, int, AIOCB *, void *);
-extern void aioRead(int, char *, int, AIOCB *, void *);
+extern void aioWrite(int, int offset, char *, int size, AIOCB *, void *);
+extern void aioRead(int, int offset, char *, int size, AIOCB *, void *);
 extern void aioStat(char *, struct stat *, AIOCB *, void *, void *);
 extern void aioUnlink(const char *, AIOCB *, void *);
 extern void aioCheckCallbacks(void);
Index: squid/src/comm.c
diff -u squid/src/comm.c:1.1.1.19 squid/src/comm.c:1.1.1.19.2.1
--- squid/src/comm.c:1.1.1.19 Sat Apr 25 14:47:46 1998
+++ squid/src/comm.c Mon May 4 22:43:31 1998
@@ -919,17 +919,14 @@
 #else
         poll_time = sec > 0 ? 1000 : 0;
 #endif
-        for (;;) {
-            num = poll(pfds, nfds, poll_time);
-            Counter.select_loops++;
-            if (num >= 0)
-                break;
+        num = poll(pfds, nfds, poll_time);
+        Counter.select_loops++;
+        if (num < 0) {
             if (ignoreErrno(errno))
                 continue;
             debug(5, 0) ("comm_poll: poll failure: %s\n", xstrerror());
             assert(errno != EINVAL);
             return COMM_ERROR;
-            /* NOTREACHED */
         }
         debug(5, num ? 5 : 8) ("comm_poll: %d sockets ready\n", num);
         /* Check timeout handlers ONCE each second. */
@@ -1065,23 +1062,26 @@
             debug(5, 2) ("comm_select: Still waiting on %d FDs\n", nfds);
         if (nfds == 0)
             return COMM_SHUTDOWN;
-        for (;;) {
-            poll_time.tv_sec = sec > 0 ? 1 : 0;
-            poll_time.tv_usec = 0;
-            num = select(maxfd, &readfds, &writefds, NULL, &poll_time);
-            Counter.select_loops++;
-            if (num >= 0)
-                break;
+
+#if USE_ASYNC_IO
+        poll_time.tv_sec = 0;
+        poll_time.tv_usec = sec > 0 && !aio_done ? 250 : 0;
+#else
+        poll_time.tv_sec = sec > 0 ? 1 : 0;
+        poll_time.tv_usec = 0;
+#endif
+        squid_in_select=1; /* A hint to aio threads */
+        num = select(maxfd, &readfds, &writefds, NULL, &poll_time);
+        squid_in_select=0;
+        Counter.select_loops++;
+        if (num < 0) {
             if (ignoreErrno(errno))
-                break;
+                continue;
             debug(50, 0) ("comm_select: select failure: %s\n",
                 xstrerror());
             examine_select(&readfds, &writefds);
             return COMM_ERROR;
-            /* NOTREACHED */
         }
-        if (num < 0)
-            continue;
         debug(5, num ? 5 : 8) ("comm_select: %d sockets ready at %d\n",
             num, (int) squid_curtime);
 
Index: squid/src/globals.h
diff -u squid/src/globals.h:1.1.1.15 squid/src/globals.h:1.1.1.15.2.1
--- squid/src/globals.h:1.1.1.15 Sat Apr 25 14:47:48 1998
+++ squid/src/globals.h Mon May 4 22:44:12 1998
@@ -103,3 +103,5 @@
 extern const char *StoreDigestUrlPath; /* "store_digest" */
 extern const char *StoreDigestMimeStr; /* "application/cache-digest" */
 extern const Version CacheDigestVer; /* { 2, 2 } */
+extern int squid_in_select; /* 0 */
+extern int aio_done; /* 0 */
Index: squid/src/main.c
diff -u squid/src/main.c:1.1.1.18 squid/src/main.c:1.1.1.18.2.1
--- squid/src/main.c:1.1.1.18 Sat Apr 25 14:47:51 1998
+++ squid/src/main.c Mon May 4 22:44:35 1998
@@ -287,6 +287,16 @@
 #endif
 }
 
+#if USE_ASYNC_IO
+static void
+dummy_handler(int sig)
+{
+#if !HAVE_SIGACTION
+    signal(sig, dummy_handler);
+#endif
+}
+#endif
+
 #if ALARM_UPDATES_TIME
 static void
 time_tick(int sig)
@@ -512,6 +522,9 @@
 #if ALARM_UPDATES_TIME
     squid_signal(SIGALRM, time_tick, SA_RESTART);
     alarm(1);
+#endif
+#if USE_ASYNC_IO
+    squid_signal(SIGCONT, dummy_handler, SA_RESTART);
 #endif
     debug(1, 0) ("Ready to serve requests.\n");
 
Index: squid/src/aiops.c
diff -u squid/src/aiops.c:1.1.1.7 squid/src/aiops.c:1.1.1.7.8.1
--- squid/src/aiops.c:1.1.1.7 Tue Mar 24 20:00:11 1998
+++ squid/src/aiops.c Mon May 4 22:45:23 1998
@@ -1,4 +1,3 @@
-
 /*
  * $Id$
  *
@@ -43,22 +42,27 @@
 #define NUMTHREADS 16
 #define RIDICULOUS_LENGTH 4096
 
-#define _THREAD_STARTING 0
-#define _THREAD_WAITING 1
-#define _THREAD_BUSY 2
-#define _THREAD_FAILED 3
-
-
-#define _AIO_OP_OPEN 0
-#define _AIO_OP_READ 1
-#define _AIO_OP_WRITE 2
-#define _AIO_OP_CLOSE 3
-#define _AIO_OP_UNLINK 4
-#define _AIO_OP_OPENDIR 5
-#define _AIO_OP_STAT 6
+enum _aio_thread_status {
+    _THREAD_STARTING=0,
+    _THREAD_WAITING,
+    _THREAD_BUSY,
+    _THREAD_FAILED,
+    _THREAD_DONE,
+};
+
+enum _aio_request_type {
+    _AIO_OP_NONE=0,
+    _AIO_OP_OPEN,
+    _AIO_OP_READ,
+    _AIO_OP_WRITE,
+    _AIO_OP_CLOSE,
+    _AIO_OP_UNLINK,
+    _AIO_OP_OPENDIR,
+    _AIO_OP_STAT,
+};
 
 typedef struct aio_request_t {
-    int request_type;
+    enum _aio_request_type request_type;
     int cancelled;
     char *path;
     int oflag;
@@ -80,11 +84,10 @@
 
 typedef struct aio_thread_t {
     pthread_t thread;
-    int status;
+    enum _aio_thread_status status;
     pthread_mutex_t mutex; /* Mutex for testing condition variable */
     pthread_cond_t cond; /* Condition variable */
     struct aio_request_t *req;
-    struct aio_request_t *donereq;
     struct aio_thread_t *next;
 } aio_thread_t;
 
@@ -99,8 +102,6 @@
 aio_result_t *aio_poll_done();
 
 static void aio_init(void);
-static void aio_free_thread(aio_thread_t *);
-static void aio_cleanup_and_free(aio_thread_t *);
 static void aio_queue_request(aio_request_t *);
 static void aio_process_request_queue(void);
 static void aio_cleanup_request(aio_request_t *);
@@ -115,6 +116,7 @@
 static void *aio_thread_opendir(void *);
 #endif
 static void aio_debug(aio_request_t *);
+static void aio_poll_threads(void);
 
 static aio_thread_t thread[NUMTHREADS];
 static int aio_initialised = 0;
@@ -124,17 +126,19 @@
 static int num_free_requests = 0;
 static aio_request_t *request_queue_head = NULL;
 static aio_request_t *request_queue_tail = NULL;
+static aio_request_t *request_done_head = NULL;
+static aio_request_t *request_done_tail = NULL;
 static aio_thread_t *wait_threads = NULL;
 static aio_thread_t *busy_threads_head = NULL;
 static aio_thread_t *busy_threads_tail = NULL;
 static pthread_attr_t globattr;
 static struct sched_param globsched;
+static pthread_t main_thread;
 
 static void
 aio_init(void)
 {
     int i;
-    pthread_t self;
     aio_thread_t *threadp;
 
     if (aio_initialised)
@@ -142,9 +146,9 @@
 
     pthread_attr_init(&globattr);
     pthread_attr_setscope(&globattr, PTHREAD_SCOPE_SYSTEM);
     globsched.sched_priority = 1;
-    self = pthread_self();
-    pthread_setschedparam(self, SCHED_OTHER, &globsched);
+    main_thread = pthread_self();
+    pthread_setschedparam(main_thread, SCHED_OTHER, &globsched);
     globsched.sched_priority = 2;
     pthread_attr_setschedparam(&globattr, &globsched);
 
@@ -162,7 +167,6 @@
             continue;
         }
         threadp->req = NULL;
-        threadp->donereq = NULL;
         if (pthread_create(&(threadp->thread), &globattr, aio_thread_loop, threadp)) {
             fprintf(stderr, "Thread creation failed\n");
             threadp->status = _THREAD_FAILED;
@@ -170,6 +174,7 @@
         }
         threadp->next = wait_threads;
         wait_threads = threadp;
+        pthread_mutex_lock(&threadp->mutex);
     }
 
     aio_initialised = 1;
@@ -181,8 +186,6 @@
 {
     aio_thread_t *threadp = (aio_thread_t *) ptr;
     aio_request_t *request;
-    struct timespec abstime;
-    int ret;
     sigset_t new;
 
     /* Make sure to ignore signals which may possibly get sent to the parent */
@@ -199,54 +202,52 @@
     sigaddset(&new, SIGALRM);
     pthread_sigmask(SIG_BLOCK, &new, NULL);
 
+    pthread_mutex_lock(&threadp->mutex);
     while (1) {
-        /* BELOW is done because Solaris 2.5.1 doesn't support semaphores!!! */
-        /* Use timed wait to avoid race where thread context switches after */
-        /* threadp->status gets set but before the condition wait happens. */
-        /* In that case, a race occurs when the parent signals the condition */
-        /* but this thread will never receive it. Recheck every 2-3 secs. */
-        /* Also provides bonus of keeping thread contexts hot in CPU cache */
-        /* (ie. faster thread reactions) at slight expense of CPU time. */
-        while (threadp->req == NULL) {
-            abstime.tv_sec = squid_curtime + 3;
-            abstime.tv_nsec = 0;
+        while (threadp->status != _THREAD_BUSY) {
             threadp->status = _THREAD_WAITING;
-            ret = pthread_cond_timedwait(&(threadp->cond),
-                &(threadp->mutex),
-                &abstime);
+            if (squid_in_select)
+                pthread_kill(main_thread,SIGCONT); /* Break select() */
+            pthread_cond_wait(&(threadp->cond), &(threadp->mutex));
         }
         request = threadp->req;
-        switch (request->request_type) {
-        case _AIO_OP_OPEN:
-            aio_thread_open(threadp);
-            break;
-        case _AIO_OP_READ:
-            aio_thread_read(threadp);
-            break;
-        case _AIO_OP_WRITE:
-            aio_thread_write(threadp);
-            break;
-        case _AIO_OP_CLOSE:
-            aio_thread_close(threadp);
-            break;
-        case _AIO_OP_UNLINK:
-            aio_thread_unlink(threadp);
-            break;
-#if AIO_OPENDIR
-        /* Opendir not implemented yet */
-        case _AIO_OP_OPENDIR:
-            aio_thread_opendir(threadp);
-            break;
+        errno = 0;
+        if (!request->cancelled) {
+            switch (request->request_type) {
+            case _AIO_OP_OPEN:
+                aio_thread_open(threadp);
+                break;
+            case _AIO_OP_READ:
+                aio_thread_read(threadp);
+                break;
+            case _AIO_OP_WRITE:
+                aio_thread_write(threadp);
+                break;
+            case _AIO_OP_CLOSE:
+                aio_thread_close(threadp);
+                break;
+            case _AIO_OP_UNLINK:
+                aio_thread_unlink(threadp);
+                break;
+#if AIO_OPENDIR /* Opendir not implemented yet */
+            case _AIO_OP_OPENDIR:
+                aio_thread_opendir(threadp);
+                break;
 #endif
-        case _AIO_OP_STAT:
-            aio_thread_stat(threadp);
-            break;
-        default:
-            threadp->donereq->ret = -1;
-            threadp->donereq->err = EINVAL;
-            break;
+            case _AIO_OP_STAT:
+                aio_thread_stat(threadp);
+                break;
+            default:
+                threadp->req->ret = -1;
+                threadp->req->err = EINVAL;
+                break;
+            }
+        } else { /* cancelled */
+            threadp->req->ret = -1;
+            threadp->req->err = EINTR;
         }
-        threadp->req = NULL;
+        threadp->status = _THREAD_DONE;
+        aio_done = 1; /* Hint to comm_select() */
     } /* while */
 } /* aio_thread_loop */
 
@@ -259,6 +260,7 @@
     if ((req = free_requests) != NULL) {
         free_requests = req->next;
         num_free_requests--;
+        req->next = NULL;
         return req;
     }
     return (aio_request_t *) xmalloc(sizeof(aio_request_t));
@@ -272,11 +274,13 @@
     /* it reflects the sort of load the squid server will experience. A */
     /* higher load will mean a need for more threads, which will in turn mean */
     /* a need for a bigger free request pool. */
+    /* Threads <-> requests are now partially asynchronous, use NUMTHREADS * 2 */
 
-    if (num_free_requests >= NUMTHREADS) {
+    if (num_free_requests >= NUMTHREADS * 2) {
         xfree(req);
         return;
     }
+    memset(req,0,sizeof(*req));
     req->next = free_requests;
     free_requests = req;
     num_free_requests++;
@@ -286,8 +290,6 @@
 static void
 aio_do_request(aio_request_t * requestp)
 {
-    aio_thread_t *threadp;
-
     if (wait_threads == NULL && busy_threads_head == NULL) {
         fprintf(stderr, "PANIC: No threads to service requests with!\n");
         exit(-1);
@@ -312,7 +314,9 @@
         request_queue_tail = requestp;
     }
     requestp->next = NULL;
-    if (++request_queue_len > NUMTHREADS) {
+    if (++request_queue_len > NUMTHREADS / 2)
+        aio_poll_threads();
+    if (request_queue_len > NUMTHREADS) {
         if (squid_curtime > (last_warn + 15)) {
             debug(43, 1) ("aio_queue_request: WARNING - Async request queue growing: Length = %d\n", request_queue_len);
             debug(43, 1) ("aio_queue_request: Perhaps you should increase NUMTHREADS in aiops.c\n");
@@ -379,7 +383,6 @@
         wait_threads = wait_threads->next;
 
         threadp->req = requestp;
-        threadp->donereq = requestp;
         if (busy_threads_head != NULL)
             busy_threads_tail->next = threadp;
         else
@@ -389,6 +392,7 @@
 
         threadp->status = _THREAD_BUSY;
         pthread_cond_signal(&(threadp->cond));
+        pthread_mutex_unlock(&threadp->mutex);
     }
 } /* aio_process_request_queue */
 
@@ -420,7 +424,7 @@
     default:
         break;
     }
-    if (!cancelled) {
+    if (resultp != NULL && !cancelled) {
         resultp->aio_return = requestp->ret;
         resultp->aio_errno = requestp->err;
     }
@@ -433,15 +437,26 @@
 {
     aio_thread_t *threadp;
     aio_request_t *requestp;
-    int ret;
 
     for (threadp = busy_threads_head; threadp != NULL; threadp = threadp->next)
-        if (threadp->donereq->resultp == resultp)
-            threadp->donereq->cancelled = 1;
+        if (threadp->req->resultp == resultp) {
+            threadp->req->cancelled = 1;
+            threadp->req->resultp = NULL;
+            return 0;
+        }
     for (requestp = request_queue_head; requestp != NULL; requestp = requestp->next)
-        if (requestp->resultp == resultp)
+        if (requestp->resultp == resultp) {
             requestp->cancelled = 1;
-    return 0;
+            requestp->resultp = NULL;
+            return 0;
+        }
+    for (requestp = request_done_head; requestp != NULL; requestp = requestp->next)
+        if (requestp->resultp == resultp) {
+            requestp->cancelled = 1;
+            requestp->resultp = NULL;
+            return 0;
+        }
+    return 1;
 } /* aio_cancel */
 
 
@@ -709,49 +724,84 @@
 #endif
 
 
-aio_result_t *
-aio_poll_done()
+void
+aio_poll_threads(void)
 {
     aio_thread_t *prev;
     aio_thread_t *threadp;
     aio_request_t *requestp;
+
+    do { /* found thread */
+        prev = NULL;
+        threadp = busy_threads_head;
+        while (threadp) {
+            debug(43, 3) ("%d: %d -> %d\n",
+                threadp->thread,
+                threadp->req->request_type,
+                threadp->status);
+            if (threadp->status != _THREAD_BUSY)
+                if (pthread_mutex_trylock(&threadp->mutex) == 0)
+                    break;
+            prev = threadp;
+            threadp = threadp->next;
+        }
+        if (threadp == NULL)
+            return;
+
+        if (prev == NULL)
+            busy_threads_head = busy_threads_head->next;
+        else
+            prev->next = threadp->next;
+
+        if (busy_threads_tail == threadp)
+            busy_threads_tail = prev;
+
+        requestp = threadp->req;
+        threadp->req = NULL;
+
+        threadp->next = wait_threads;
+        wait_threads = threadp;
+
+        if (request_done_tail != NULL)
+            request_done_tail->next = requestp;
+        else
+            request_done_head = requestp;
+        request_done_tail = requestp;
+    } while(threadp);
+
+    aio_process_request_queue();
+} /* aio_poll_threads */
+
+aio_result_t *
+aio_poll_done()
+{
+    aio_request_t *requestp, *prev;
     aio_result_t *resultp;
     int cancelled;
 
   AIO_REPOLL:
+    aio_poll_threads();
+    if (request_done_head == NULL) {
+        aio_done=0;
+        return NULL;
+    }
     prev = NULL;
-    threadp = busy_threads_head;
-    while (threadp) {
-        debug(43, 3) ("%d: %d -> %d\n",
-            threadp->thread,
-            threadp->donereq->request_type,
-            threadp->status);
-        if (!threadp->req)
-            break;
-        prev = threadp;
-        threadp = threadp->next;
+    requestp = request_done_head;
+    while (requestp->next) {
+        prev = requestp;
+        requestp = requestp->next;
     }
-    if (threadp == NULL)
-        return NULL;
-
     if (prev == NULL)
-        busy_threads_head = busy_threads_head->next;
+        request_done_head = requestp->next;
     else
-        prev->next = threadp->next;
-
-    if (busy_threads_tail == threadp)
-        busy_threads_tail = prev;
+        prev->next = requestp->next;
+    request_done_tail = prev;
 
-    requestp = threadp->donereq;
-    threadp->donereq = NULL;
     resultp = requestp->resultp;
+    cancelled = requestp->cancelled;
     aio_debug(requestp);
     debug(43, 3) ("DONE: %d -> %d\n", requestp->ret, requestp->err);
-    threadp->next = wait_threads;
-    wait_threads = threadp;
-    cancelled = requestp->cancelled;
     aio_cleanup_request(requestp);
-    aio_process_request_queue();
     if (cancelled)
         goto AIO_REPOLL;
     return resultp;
Index: squid/src/async_io.c
diff -u squid/src/async_io.c:1.1.1.4 squid/src/async_io.c:1.1.1.4.16.1
--- squid/src/async_io.c:1.1.1.4 Thu Feb 5 22:44:58 1998
+++ squid/src/async_io.c Mon May 4 22:45:23 1998
@@ -189,9 +189,10 @@
 
 
 void
-aioWrite(int fd, char *bufp, int len, AIOCB * callback, void *callback_data)
+aioWrite(int fd, int offset, char *bufp, int len, AIOCB * callback, void *callback_data)
 {
     aio_ctrl_t *ctrlp;
+    int seekmode;
 
     if (!initialised)
         aioInit();
@@ -216,7 +217,13 @@
     ctrlp->done_handler = callback;
     ctrlp->done_handler_data = callback_data;
     ctrlp->operation = _AIO_WRITE;
-    if (aio_write(fd, bufp, len, 0, SEEK_END, &(ctrlp->result)) < 0) {
+    if (offset >= 0)
+        seekmode = SEEK_SET;
+    else {
+        seekmode = SEEK_END;
+        offset = 0;
+    }
+    if (aio_write(fd, bufp, len, offset, seekmode, &(ctrlp->result)) < 0) {
         if (errno == ENOMEM || errno == EAGAIN || errno == EINVAL)
             errno = EWOULDBLOCK;
         if (callback)
@@ -231,9 +238,10 @@
 
 
 void
-aioRead(int fd, char *bufp, int len, AIOCB * callback, void *callback_data)
+aioRead(int fd, int offset, char *bufp, int len, AIOCB * callback, void *callback_data)
 {
     aio_ctrl_t *ctrlp;
+    int seekmode;
 
     if (!initialised)
         aioInit();
@@ -258,7 +266,13 @@
     ctrlp->done_handler = callback;
     ctrlp->done_handler_data = callback_data;
     ctrlp->operation = _AIO_READ;
-    if (aio_read(fd, bufp, len, 0, SEEK_CUR, &(ctrlp->result)) < 0) {
+    if (offset >= 0)
+        seekmode = SEEK_SET;
+    else {
+        seekmode = SEEK_CUR;
+        offset = 0;
+    }
+    if (aio_read(fd, bufp, len, offset, seekmode, &(ctrlp->result)) < 0) {
         if (errno == ENOMEM || errno == EAGAIN || errno == EINVAL)
             errno = EWOULDBLOCK;
         if (callback)
Index: squid/src/disk.c
diff -u squid/src/disk.c:1.1.1.10 squid/src/disk.c:1.1.1.10.6.1
--- squid/src/disk.c:1.1.1.10 Wed Apr 8 22:07:11 1998
+++ squid/src/disk.c Mon May 4 22:46:46 1998
@@ -241,6 +241,7 @@
     debug(6, 3) ("diskHandleWrite: FD %d\n", fd);
     /* We need to combine subsequent write requests after the first */
     /* But only if we don't need to seek() in betwen them, ugh! */
+    /* XXX This currently ignores any seeks (file_offset) */
     if (fdd->write_q->next != NULL && fdd->write_q->next->next != NULL) {
         len = 0;
         for (q = fdd->write_q->next; q != NULL; q = q->next)
@@ -273,6 +274,7 @@
     assert(fdd->write_q->len > fdd->write_q->buf_offset);
 #if USE_ASYNC_IO
     aioWrite(fd,
+        -1, /* seek offset, -1 == append */
         fdd->write_q->buf + fdd->write_q->buf_offset,
         fdd->write_q->len - fdd->write_q->buf_offset,
         diskHandleWriteComplete,
@@ -480,6 +482,7 @@
     ctrlp->data = ctrl_dat;
 #if USE_ASYNC_IO
     aioRead(fd,
+        ctrl_dat->offset,
         ctrl_dat->buf,
         ctrl_dat->req_len,
         diskHandleReadComplete,

--------------3470E9FF3CD6469F3F834C6A--

--------------7B8EC2AA17A7F80D27409F14--