Agreement and correlation are two widely used approaches to assessing the association between variables. Although they are similar and related, they represent fundamentally different concepts of association. Agreement between variables assumes that the variables measure the same construct, whereas correlation can be assessed even between variables that measure entirely different constructs. This conceptual difference calls for different statistical methods, which may vary depending on the distribution of the data and on whether researchers are interested in agreement or in correlation. For example, the Pearson correlation coefficient is a common method for evaluating the correlation between continuous variables, but it provides useful information only when the relationship between the variables is linear. This second approach should therefore yield a less conservative estimate of population reliability. In this report, we show how the interpretation of agreement can differ depending on whether reliability estimates are taken from a standardization sample (here, test-retest reliability) or from the study population itself (here, the intra-class correlation coefficient).

Figure 2. Comparison of inter-rater reliability. Intra-class correlation coefficients (ICCs, shown as points) and the corresponding α = 0.05 confidence intervals (CIs, shown as error bars) for parent-teacher ratings, mother-father ratings, and for all rater pairs across rater subgroups.
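The distinction between correlation and agreement can be made concrete with a small numerical sketch. The example below is illustrative only (it is not the analysis used in this report): it contrasts Pearson's r with Lin's concordance correlation coefficient, a simple agreement index related in spirit to the ICC, for two hypothetical raters whose scores are perfectly correlated but systematically offset.

```python
# Illustrative sketch: perfect correlation does not imply agreement.
# Two hypothetical raters score the same subjects; rater B is always
# 5 points higher. Pearson's r ignores this constant shift, while an
# agreement index (here Lin's concordance correlation coefficient,
# CCC) penalises it.

def mean(xs):
    return sum(xs) / len(xs)

def pearson_r(x, y):
    # Standard product-moment correlation.
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def lin_ccc(x, y):
    # Lin's CCC: 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2).
    # The squared mean difference in the denominator is what makes
    # this an agreement measure rather than a pure correlation.
    n = len(x)
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

rater_a = [10, 12, 14, 16, 18]
rater_b = [a + 5 for a in rater_a]  # constant offset of 5 points

print(pearson_r(rater_a, rater_b))  # → 1.0 (perfect correlation)
print(lin_ccc(rater_a, rater_b))    # → ~0.39 (poor agreement)
```

The offset leaves the linear relationship intact (r = 1) but makes the raters disagree on every subject, which the agreement index reflects.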