Pavel Kankovsky wrote:

> > On Thu, 17 May 2001, Klaus Frank wrote:
> > > Is it possible to average the measured durations of a huge amount of
> > > connection attempts until the differences from memcmp() add to a peak
> > > that exceeds the added random part of the additional delay?
> >
> > In theory: yes. In practice: it may be possible but it is probably not as
> > feasible as it might look at the first glance. (*) Even if the probes are
> > sent by a local user, they still have to pass through too much other
> > software. The signal--the differences in memcmp() timings--is measured in
> > few CPU clock ticks but the noise is much higher--tens, hundreds, maybe
> > even thousands of clock ticks (or more if no ultra-high precision clock is
> > available). I myself have done some experiments (using Pentium built in
> > tick counter) and the measurements appear to be too perturbed to provide
> > any clue about the number of correct bytes. (**) Perhaps some smart
> > noise-filtering techniques might make the results look better?

Why noise-filtering? Since there seem to be no invalid low numbers, just take
the minimum of a certain amount of tries (1000, 10000) and check whether those
give you a clue -- i.e. try to find the ones with the lowest noise and compare
them.
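As a rough sketch of the "take the minimum over many tries" idea (assuming gcc
on x86 with the rdtsc tick counter; the helper name min_memcmp_cycles and the
16-byte test buffers are made up for illustration, not from the original thread):

    /*
     * Illustrative sketch: time memcmp() with the CPU tick counter and keep
     * only the minimum over many repetitions, on the assumption that noise
     * can only make a measurement slower, never faster.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Read the Pentium time-stamp counter (gcc inline asm, x86). */
    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Smallest observed cycle count for memcmp(a, b, len) over `tries` runs. */
    static uint64_t min_memcmp_cycles(const unsigned char *a,
                                      const unsigned char *b,
                                      size_t len, int tries)
    {
        uint64_t best = (uint64_t)-1;
        volatile int sink;          /* keep the call from being optimized away */
        int i;

        for (i = 0; i < tries; i++) {
            uint64_t t0 = rdtsc();
            sink = memcmp(a, b, len);
            uint64_t t1 = rdtsc();
            if (t1 - t0 < best)
                best = t1 - t0;
        }
        (void)sink;
        return best;
    }

    int main(void)
    {
        unsigned char secret[] = "0123456789abcdef";
        unsigned char guess[]  = "0123456789abcdeX";   /* last byte differs */

        printf("min cycles: %llu\n",
               (unsigned long long)min_memcmp_cycles(secret, guess, 16, 10000));
        return 0;
    }

Comparing the minima obtained for different guess prefixes (rather than the raw
averages) is what the suggestion above amounts to: the lowest values are the
least perturbed measurements.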