Whoo-hoo! This is getting lively. Where have we all been lurking? :-)

>>>>> On Thu, 05 Dec 2002 16:48:33 -0500, "Marcus J. Ranum" <mjrat_private> said:

MJR> Remember LAT? It was a DECNet terminal multiplexing thingie used by
MJR> DEC Terminal servers in the 70's and 80's. It sucked. Why? Because
MJR> it's _REALLY_ hard to apply security filtering to multiplexing protocols -
MJR> the protocols get extended and the next thing you know you've got file
MJR> transfer and sharing and IP-over-multiplexed-channels and everything
MJR> gets very grotty very fast.

Agreed. Like HTTP as the "universal firewall traversal protocol", with
WebDAV, WebNFS, and the like. One hammer, many bent carriage bolts.

LAT also died because it was non-routable, using MAC addresses for
everything. No routers allowed. Not to mention that it was a DECnet
protocol. But, yeah, I agree. I mean, look at IRC+DCC+Ghu-knows-what. And
instant messaging that also does streaming media and file sharing, etc.

Truly, about the only use I really have for multiplexed channels would be
to make it easier to do "urgent" and out-of-band stuff. You know, the
stuff that caused so many early TCP stacks and TELNET clients and servers
(and their developers) to go crazy. Separating control and data channels,
like FTP. Oh, *there's* a great example. Never mind. You win. I still
think that TCP URGENT is an "interesting hack" :-)

MJR> The ostensible reason for having those kind of multiplexed protocols
MJR> is usually something like "that way we don't have to have as many
MJR> sockets open" which is a silly argument when you consider that Web
MJR> Services have caused a lot of changes to Unix (and Windows) kernel
MJR> internals to make lookups on sockets and management of large
MJR> numbers of sockets more efficient. It's probably easier for a modern
MJR> BSD system to handle 10,000 live sockets today than it was for an 80's
MJR> BSD kernel to handle 1,000.

Absolutely. The "too many sockets" argument just doesn't hold up for me.

That said, one of the problems we think we have solved is how to handle
thousands (tens of thousands?) of long-lived TCP sessions. Our system adds
new input processes to handle more incoming connections as needed. And no,
not one proc per connection :-) And procs that have no open connections do
go away...

I was thinking in terms of 10K hosts logging to a single log host. Yes,
you can do things with trees of relays in front of a single log host, but
that gets ugly, fast.

MJR> I suspect it was similar concerns that led the "designers" of syslog-1
MJR> to use UDP - no need to have a daemon that could effectively cope with
MJR> large numbers of connected-state sockets. So, the braindamage began -
MJR> I guess it was too hard to look up the man pages for shared memory
MJR> system calls, or to just put it all in the /dev/log driver (and had a
MJR> tamper-resistant kernel-mode system logging facility like most REAL
MJR> operating systems...)

I guess we could ask Eric, right? :-) (I seem to recall, probably
incorrectly, that Eric wrote the first syslog daemon?)
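To put some meat on the "10,000 live sockets" point above - and on the
point Marcus makes below, that the _receive_ side is the straightforward
half - the naive version of a TCP log sink really is small. A purely
illustrative sketch (none of this is our daemon; the port number is
invented, it's one process with a flat poll(2) fd table, and "storage" is
just stdout, with no framing, access control, or real error handling):

/*
 * Toy single-process TCP log sink, for illustration only.
 * Assumptions: port 5514 is arbitrary, the fd table is a flat array,
 * and "storing" a record just means writing it to stdout.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAX_FDS 10240   /* roughly the "10,000 live sockets" in question */

int main(void)
{
    struct pollfd fds[MAX_FDS];
    int nfds = 1;
    char buf[8192];

    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    if (lsock < 0) { perror("socket"); exit(1); }

    int one = 1;
    setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(5514);          /* invented port number */

    if (bind(lsock, (struct sockaddr *)&sin, sizeof(sin)) < 0 ||
        listen(lsock, 128) < 0) { perror("bind/listen"); exit(1); }

    fds[0].fd = lsock;                   /* slot 0 is the listen socket */
    fds[0].events = POLLIN;

    for (;;) {
        if (poll(fds, nfds, -1) < 0) { perror("poll"); continue; }

        /* new client connecting? */
        if ((fds[0].revents & POLLIN) && nfds < MAX_FDS) {
            int c = accept(lsock, NULL, NULL);
            if (c >= 0) {
                fds[nfds].fd = c;
                fds[nfds].events = POLLIN;
                fds[nfds].revents = 0;
                nfds++;
            }
        }

        /* drain whatever the connected clients sent */
        for (int i = 1; i < nfds; i++) {
            if (!(fds[i].revents & (POLLIN | POLLHUP | POLLERR)))
                continue;
            ssize_t n = read(fds[i].fd, buf, sizeof(buf));
            if (n <= 0) {                /* EOF or error: drop the client */
                close(fds[i].fd);
                fds[i] = fds[--nfds];    /* compact the table */
                i--;                     /* recheck the slot we just filled */
            } else {
                fwrite(buf, 1, (size_t)n, stdout);  /* "final resting place" */
            }
        }
    }
}

Our real daemon obviously does a lot more than this, spread across input
processes that come and go with the connection count; the point is just
that holding lots of live sockets is the cheap part.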
MJR> I'm consistently amazed that people worry about syslog performance. ;)
MJR> How can you, when 99% of syslog implementations share the common
MJR> performance improvement mechanism of dropping the UDP packets
MJR> when the transmit queue on the client overflows. (See earlier posts by
MJR> me in this list regarding how bad it can get under loads...) Making
MJR> something that can _receive_ all those packets is straightforward -
MJR> fixing all the broken syslog clients is a much more serious problem.
MJR> If the standards process actually worked, the entire syslog protocol
MJR> (especially the UDP client side) would be deprecated.

*sigh* Well, our daemon can act as a source (reading /dev/socket), a relay,
or a final data sink. I'm not as worried about a single source client
maxing out as I am about 20K (yes, twenty thousand) hosts, all part of a
teraflop or petaflop cluster, trying to get security-related audit data
(authentication, perhaps, or maybe host-based IDS alerts) to a single
final resting place.

Yup, that's exotic, but that's the kind of stuff people in this building
are planning. We'll be running clusters with 1K-2K hosts at the low end
within a year or so. And we'll also be running clusters whose parts are
scattered across the US - here, NCSA, Pittsburgh, Argonne, etc. - while
trying to present some form of "single system" appearance to the users.

MJR> Re-reading this before I hit "send" I muse briefly about the possibility
MJR> of becoming a "software historian" - I don't think anyone's done that, yet.
MJR> Hmmm... "Social and economic factors and their influence on the
MJR> development of logging protocols in California during the early 1980's..."
MJR> I can see it already...

See _A Fire Upon the Deep_, Vernor Vinge. Software archaeologists who look
through millennia of archived software, figuring out how to glue today's
stuff to 45K-year-old "legacy systems" :-) You'd better get started :-)

--tep

_______________________________________________
LogAnalysis mailing list
LogAnalysisat_private
http://lists.shmoo.com/mailman/listinfo/loganalysis