The Problem
Pearson's r is quietly optimistic.
When your measurements are noisy (they almost always are), the observed correlation between two variables underestimates the true underlying relationship. The more noise, the more the coefficient shrinks toward zero. If you're comparing correlations across conditions or datasets with different noise floors, you're not comparing what you think you're comparing.
Attenuation correction fixes this by asking: what would the correlation be if we had infinitely many samples, so the measurement noise averaged out? The answer is r_ac, the limit of Pearson's r as n -> ∞.
How It Works
Reliability here is defined as the mean pairwise Pearson r across all sample pairs within or between classes, computed in Fisher z-space to handle distributional skew, then back-transformed. The corrected coefficient is:
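A minimal sketch of this reliability estimate, assuming rows are samples and columns are features (the function name and clipping threshold are illustrative, not the author's code):

```python
import numpy as np
from itertools import combinations, product

def reliability(X, Y=None):
    """Mean pairwise Pearson r, averaged in Fisher z-space.

    Y=None -> within-class: all distinct row pairs of X.
    Y given -> cross-class: all row pairs between X and Y.
    """
    if Y is None:
        rs = [np.corrcoef(X[i], X[j])[0, 1]
              for i, j in combinations(range(len(X)), 2)]
    else:
        rs = [np.corrcoef(x, y)[0, 1] for x, y in product(X, Y)]
    # Clip so arctanh stays finite when a pair correlates perfectly
    z = np.arctanh(np.clip(rs, -0.999999, 0.999999))  # Fisher z
    return np.tanh(z.mean())                          # back-transform
```

Averaging in z-space before back-transforming avoids the downward bias you get from averaging skewed raw r values directly.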
r_ac = rel(A,B) / sqrt(rel(A,A) * rel(B,B))
The cross-class reliability is normalized by the geometric mean of the two within-class reliabilities. If either within-class reliability isn't significantly greater than zero (one-sided t-test, configurable alpha), the result is NaN rather than silent garbage.
Implementation
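The formula plus the NaN guard can be sketched as follows, assuming SciPy for the one-sided t-test (function and helper names are illustrative, not the actual implementation):

```python
import numpy as np
from itertools import combinations, product
from scipy import stats

def _pairwise_z(X, Y=None):
    """Fisher z of pairwise Pearson r (within X if Y is None, else X vs Y)."""
    if Y is None:
        rs = [np.corrcoef(X[i], X[j])[0, 1]
              for i, j in combinations(range(len(X)), 2)]
    else:
        rs = [np.corrcoef(x, y)[0, 1] for x, y in product(X, Y)]
    return np.arctanh(np.clip(rs, -0.999999, 0.999999))

def attenuation_corrected(A, B, alpha=0.05):
    """r_ac = rel(A,B) / sqrt(rel(A,A) * rel(B,B)), with a NaN guard."""
    zAA, zBB, zAB = _pairwise_z(A), _pairwise_z(B), _pairwise_z(A, B)
    # Guard: each within-class reliability must be significantly > 0,
    # tested on the per-pair z-values with a one-sided one-sample t-test.
    for z in (zAA, zBB):
        if stats.ttest_1samp(z, 0, alternative='greater').pvalue >= alpha:
            return float('nan')
    rel = lambda z: np.tanh(z.mean())  # back-transform to r
    return rel(zAB) / np.sqrt(rel(zAA) * rel(zBB))
```

Note that r_ac can exceed 1 slightly when sampling noise makes the cross-class term larger than the within-class terms; that is expected behavior of the estimator, not a bug.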
Arrays A and B can have different numbers of samples (rows) but must share feature dimensionality (columns).
Repeated measures are handled via feature labels: rows from the same subject are averaged before reliability estimation, avoiding inflated degrees of freedom.
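The repeated-measures averaging step might look like this (a hypothetical helper; the real code's label handling may differ):

```python
import numpy as np

def average_repeats(X, labels):
    """Average rows of X that share a subject label.

    Collapsing repeats to one row per subject before reliability
    estimation keeps repeated measures from inflating the effective
    degrees of freedom.
    """
    labels = np.asarray(labels)
    uniq = np.unique(labels)
    return np.stack([X[labels == u].mean(axis=0) for u in uniq]), uniq
```

Apply this to each class before computing the within- and cross-class reliabilities, so every row entering the pairwise correlations represents an independent subject.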