Pavel Kankovsky wrote:

>On Thu, 17 May 2001, Klaus Frank wrote:
>> Is it possible to average the measured durations of a huge amount of
>> connection attempts until the differences from memcmp() add to a peak
>> that exceeds the added random part of the additional delay?
>
>In theory: yes. In practice: it may be possible but it is probably not as
>feasible as it might look at the first glance. (*) Even if the probes are
>sent by a local user, they still have to pass through too much other
>software. The signal--the differences in memcmp() timings--is measured in
>few CPU clock ticks but the noise is much higher--tens, hundreds, maybe
>even thousands of clock ticks (or more if no ultra-high precision clock is
>available).

If the noise were tens or hundreds of clocks, it might still be barely
possible---maybe---to mount a practical attack using averaging. (This is
one technique used in Kocher's timing attacks.)

Suppose the noise's contribution to the timing has standard deviation D.
Then, if we can obtain N independent samples and average the resulting
timings, the contribution of the noise to the average will diminish by a
factor of sqrt(N): its standard deviation will be D/sqrt(N). Suppose the
difference between a correct and an incorrect guess is C clock ticks.
Then we need C >> D/sqrt(N), i.e., N >> D^2/C^2.

With D/C = 10 (e.g., tens of clock ticks of noise, a few clock ticks of
signal), we can expect to need about N = 1000 samples per trial. With 100
possible characters in each position of the password, and an 8-character
password, this would take about 800,000 requests; if each one takes a few
milliseconds, that's on the order of tens of minutes. With D/C = 100, we'd
need about 80,000,000 requests, or a few days, which might still be barely
in the realm of possibility.
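For anyone who wants to play with the numbers, here is a minimal simulation
sketch of the averaging argument above. It is not an attack tool; it just
assumes Gaussian noise with standard deviation D clock ticks, a fixed signal
of C ticks when the guess matches one more byte, and independent samples.
The helper names (required_samples, simulate_trial) and the safety margin of
10 are my own illustration, not anything from the original post.

import random

# Rough sample count from the post's rule of thumb: averaging N samples
# shrinks the noise's standard deviation to D/sqrt(N), and we want the
# signal C to exceed it comfortably, so N is on the order of (D/C)^2
# times a safety factor.
def required_samples(D, C, margin=10):
    return int(margin * (D / C) ** 2)

# Average N noisy timing samples around a fixed signal; as N grows the
# noise term averages toward zero and the estimate approaches `signal`.
def simulate_trial(signal, D, N):
    total = 0.0
    for _ in range(N):
        total += signal + random.gauss(0.0, D)
    return total / N

if __name__ == "__main__":
    D, C = 30.0, 3.0                     # tens of ticks of noise, a few of signal
    N = required_samples(D, C)           # about 1000 for D/C = 10
    avg_hit  = simulate_trial(C, D, N)   # guess matches one more byte
    avg_miss = simulate_trial(0.0, D, N) # guess does not
    print("N = %d, avg(hit) = %.2f, avg(miss) = %.2f" % (N, avg_hit, avg_miss))

Running it a few times should show avg(hit) separating from avg(miss) by
roughly C. To get the post's total request counts, multiply N by the 100
candidate characters per position and the 8 positions (100 * 8 * 1000 =
800,000 for D/C = 10).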