This document explores key performance metrics in distributed computing systems, focusing on historical laws such as Grosch’s Law, Amdahl’s Law, and Gustafson-Barsis Law. Grosch’s Law proposed that to sell a computer for double the price, it needed to be four times faster, a notion that became obsolete post-1970 as inexpensive, faster computers emerged. Amdahl’s Law established limits on speedup based on serial and parallel fractions, while Gustafson-Barsis Law demonstrated that speedup could be dramatically improved by scaling problems to fit parallel architectures, defying earlier pessimistic predictions.
CENG 532- Distributed Computing Systems Measures of Performance
Grosch’s Law; 1960s • “To sell a computer for twice as much, it must be four times as fast” • This held at the time, but soon became meaningless • After 1970, it was possible to make faster computers and sell them even cheaper…. • Ultimately, switching speeds reach a physical limit: the speed of light on an integrated circuit…
Von Neumann’s Bottleneck • Serial single-processor computer architectures followed John von Neumann’s architecture of the 1940s and 1950s. • One processor, a single control unit, a single memory • This is no longer valid: low-cost parallel computers can easily deliver the performance of the fastest single-processor computer…
Amdahl’s Law; 1967 • Let speedup (S) be the ratio of serial time (one processor) to parallel time (N processors): S = T1/TN. Where f is the serial fraction of the problem and 1-f is the parallel fraction, TN = T1*f + T1*(1-f)/N • Therefore S = 1/(f + (1-f)/N), and thus S < 1/f
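The formula above can be sketched as a small helper function; the name `amdahl_speedup` is my own choice for illustration, not from the slides:

```python
def amdahl_speedup(f, n):
    """Amdahl's Law: speedup on n processors when a fraction f
    of the work is strictly serial.  S = 1 / (f + (1 - f) / n)."""
    return 1.0 / (f + (1.0 - f) / n)
```

With f = 0 the speedup is ideal (equal to N); for any f > 0, increasing N only pushes the speedup toward the ceiling 1/f, never past it.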
Amdahl’s Law; 1967 • At f = 0.10, Amdahl’s Law predicts at best a tenfold speedup, which is very pessimistic • This limit was soon broken, encouraged by the Gordon Bell Prize!
Gustafson-Barsis Law; 1988 • A team of researchers at Sandia Labs (John Gustafson and Ed Barsis), using a 1024-processor nCube/10, overthrew Amdahl’s Law by achieving roughly 1000-fold speedup with f = 0.004 to 0.008. • According to Amdahl’s Law, the speedup would have been from 125 to 250. • The key point was that 1-f was not independent of N.
Gustafson-Barsis Law; 1988 • They reinterpreted the speedup formula by scaling up the problem to fit the parallel machine: T1 = f + (1-f)N and TN = f + (1-f) = 1; then the speedup can be computed as S = T1/TN = N - (N-1)f
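The scaled-speedup formula can be sketched the same way; `gustafson_speedup` is a hypothetical name, and the f = 0.004 check below simply plugs in the Sandia figures quoted on the previous slide:

```python
def gustafson_speedup(f, n):
    """Gustafson-Barsis scaled speedup: S = n - (n - 1) * f,
    where f is the serial fraction of the problem scaled to n processors."""
    return n - (n - 1) * f
```

At f = 0.004 and N = 1024 this gives about 1020, far above the 250 that Amdahl's 1/f ceiling would allow for the same serial fraction.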
Extreme case analysis • Assuming Amdahl’s Law, an upper and lower bound can be given for the speedup, under unrealistic assumptions: N/log2N <= S <= N, where the upper bound N corresponds to perfectly parallel execution and the lower bound N/log2N to a divide-and-conquer structure
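Those two bounds can be computed directly; `speedup_bounds` is an illustrative helper, not a function from the slides:

```python
import math

def speedup_bounds(n):
    """Extreme-case bounds on speedup for n processors:
    n / log2(n)  <=  S  <=  n."""
    return n / math.log2(n), float(n)
```

For N = 1024 the bounds are 102.4 and 1024, so even the pessimistic divide-and-conquer bound grows with N.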
Inclusion of the communication time • Some researchers (Gelenbe) suggest that speedup be approximated by S = 1/C(N), where C(N) is some function of N • For example, C(N) can be estimated as C(N) = A + B*log2N, where A and B are constants determined by the communication mechanisms
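A minimal sketch of this model, taking the slide's formula S = 1/C(N) at face value; the function name and the constants A and B used below are illustrative assumptions, not measured values:

```python
import math

def comm_limited_speedup(n, a, b):
    """Slide's communication model: S = 1 / C(N),
    with C(N) = A + B * log2(N).  A and B depend on the
    communication mechanism and are assumed, not measured."""
    return 1.0 / (a + b * math.log2(n))
```

Note that in this form the predicted speedup shrinks as N grows, since C(N) grows logarithmically with the number of processors: the model captures how communication overhead eats into the gains of adding processors.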
Benchmark Performance • A benchmark is a program whose purpose is to measure a performance characteristic of a computer system, such as floating-point speed or I/O speed, or performance on a restricted class of problems • Benchmarks are arranged to be either • Kernels of real applications, such as Linpack or the Livermore Loops, or • Synthetic, approximating the behavior of real problems, such as Whetstone and Wichmann…