The speed of a program refers to the time it takes the program to execute, which can be measured in any unit of time. Speedup is defined as the time a program takes to execute sequentially (with one processor) divided by the time it takes to execute in parallel (with many processors). The formula for speedup is:
$$ S_p = \frac{T_1}{T_p} $$
where:
\( S_p \) is the speedup obtained from using \( p \) processors.
\( T_1 \) is the time it takes the program to be executed sequentially.
\( T_p \) is the time it takes the program to be executed in parallel using \( p \) processors.
Linear speedup, or ideal speedup, is obtained when \( S_p = p \). When running an algorithm with linear speedup, doubling the number of processors doubles the speedup. Because this is the ideal case, it is considered very good scalability.
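The definition above can be sketched directly in code. This is a minimal illustration with hypothetical timings (the function name and the numbers are assumptions, not from the source):

```python
def speedup(t_sequential, t_parallel):
    """Speedup S_p = T_1 / T_p: sequential time divided by parallel time."""
    return t_sequential / t_parallel

# Hypothetical measurements: 120 s on one processor, 30 s on four processors.
s = speedup(120.0, 30.0)
print(s)  # 4.0 with p = 4 processors: linear (ideal) speedup
```

Here \( S_p = p \), so this hypothetical program scales linearly up to four processors.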
Amdahl's law states that the speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program.
As a formula:
$$ S_p = \frac{1}{F+\frac{1 - F}{p}} $$
where:
\( p \) is the number of processors.
\( S_p \) is the speedup obtained from using \( p \) processors.
\( F \) is the fraction of the program that must be executed sequentially. \( 0 \le F \le 1 \).
As \( p \) tends to \( \infty \), the maximum speedup tends to \( \frac{1}{F} \). For example, if 10% of a program must run sequentially (\( F = 0.1 \)), no number of processors can make it more than 10 times faster.
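A short sketch of Amdahl's law makes the asymptote visible; the function name and the chosen \( F \) are illustrative assumptions:

```python
def amdahl_speedup(f, p):
    """Amdahl's law: S_p = 1 / (F + (1 - F)/p).

    f: fraction of the program that must run sequentially (0 <= f <= 1)
    p: number of processors
    """
    return 1.0 / (f + (1.0 - f) / p)

# With a sequential fraction F = 0.1, speedup approaches 1/F = 10
# no matter how many processors are added.
for p in (1, 2, 8, 64, 1_000_000):
    print(p, round(amdahl_speedup(0.1, p), 3))
```

Note that with \( F = 0 \) (a perfectly parallelizable program) the formula reduces to \( S_p = p \), i.e. linear speedup.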
[AKHTER] pp. 13-15.