For a given time lag T and a signal f(t), define the autocorrelation as the time average
B_T[f] = <f(t) f(t - T)>.
Setting T = 0 then gives the dispersion <f(t)^2> (we assume <f> = 0), while the behaviour of B_T at other values of T shows how quickly the signal “forgets” its previous values.
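As a quick numerical illustration, here is a minimal sketch assuming a discrete-time, zero-mean signal; the AR(1) process and the helper name autocorr are illustrative choices, not from the text. It estimates B_T by averaging f(t) f(t - T) over t, so B_0 approximates the dispersion and B_T decays as the lag grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponentially correlated test signal: an AR(1) process, for which
# the autocorrelation B_T should decay roughly like phi**T.
n = 50_000
phi = 0.95                       # correlation between consecutive samples
f = np.zeros(n)
for t in range(1, n):
    f[t] = phi * f[t - 1] + rng.normal()
f -= f.mean()                    # enforce <f> = 0

def autocorr(f, T):
    """Time average <f(t) f(t - T)> over all valid t."""
    if T == 0:
        return np.mean(f * f)
    return np.mean(f[T:] * f[:-T])

print("B_0 (dispersion):", autocorr(f, 0))   # ~ variance of f
for T in (1, 10, 50, 200):
    print(f"B_{T}:", autocorr(f, T))         # shrinks as T grows
```

Running this shows B_0 close to the variance of f, with B_T falling toward zero as T increases, i.e. the signal gradually “forgetting” its earlier values.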