comm_select.c bug

From: Andres Kroonmaa <andre@dont-contact.us>
Date: Fri, 8 Sep 2000 15:49:59 +0200

 Well, I think we've got a small non-fatal bug.
 Whoever added comm_poll_dns_incoming and commCheckDNSIncoming
 forgot to patch cf.data.pre. As a result, two timing variables
 were left uninitialised: incoming_dns_average and dns_min_poll.

 The result was that the DNS socket was polled like nuts: strictly
 after each and every socket operation. With this patch things
 get back to normal.
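
 For reference, the sketch below shows roughly the kind of gating these
 two values feed. The names and arithmetic are illustrative only, not the
 actual comm_select.c code, but they show why zero-initialised values make
 the incoming-DNS check fire on every single iteration:

/* Illustrative sketch only -- not Squid's actual comm_select.c code.
 * dns_average, dns_min_poll and io_events_since_dns_check are
 * hypothetical stand-ins for the real Config.comm_incoming fields. */
#include <stdio.h>

static int dns_average = 0;   /* would be 4 once cf.data.pre defines it */
static int dns_min_poll = 0;  /* would be 8 once cf.data.pre defines it */

static int io_events_since_dns_check = 0;

static void comm_check_dns_incoming_sketch(void)
{
    /* Poll the DNS socket only after "enough" ordinary socket events;
     * the threshold never drops below the configured minimum. */
    int threshold = dns_average > dns_min_poll ? dns_average : dns_min_poll;

    if (++io_events_since_dns_check >= threshold) {
        io_events_since_dns_check = 0;
        puts("poll() DNS socket");  /* stands in for comm_poll_dns_incoming() */
    }
}

int main(void)
{
    /* With both values stuck at 0 (the bug), the threshold is 0 and the
     * DNS socket is polled on every iteration of the loop. */
    for (int i = 0; i < 5; i++)
        comm_check_dns_incoming_sketch();
    return 0;
}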
 
 PS. I didn't notice any CPU usage drop (still 100%) or performance
 increase, but seeing far fewer useless poll()s in truss output
 gives me a much warmer feeling. ;)

--- cf.data.pre.orig Sat Jul 1 14:11:56 2000
+++ cf.data.pre Fri Sep 8 15:03:35 2000
@@ -2964,12 +2964,24 @@
 LOC: Config.comm_incoming.http_average
 DOC_NONE
 
+NAME: incoming_dns_average
+TYPE: int
+DEFAULT: 4
+LOC: Config.comm_incoming.dns_average
+DOC_NONE
+
 NAME: min_icp_poll_cnt
 TYPE: int
 DEFAULT: 8
 LOC: Config.comm_incoming.icp_min_poll
 DOC_NONE
 
+NAME: min_dns_poll_cnt
+TYPE: int
+DEFAULT: 8
+LOC: Config.comm_incoming.dns_min_poll
+DOC_NONE
+
 NAME: min_http_poll_cnt
 TYPE: int
 DEFAULT: 8
@@ -2981,8 +2993,10 @@
 
 incoming_icp_average 6
 incoming_http_average 4
+incoming_dns_average 4
 min_icp_poll_cnt 8
 min_http_poll_cnt 8
+min_dns_poll_cnt 8
 DOC_END
 
 NAME: max_open_disk_fds

------------------------------------
 Andres Kroonmaa <andre@online.ee>
 Delfi Online
 Tel: 6501 731, Fax: 6501 708
 Pärnu mnt. 158, Tallinn,
 11317 Estonia