Methodology and Technical Documentation

2.0 Data analysis

2.1 Calculating indicators

The methodology for calculating indicators within the European Higher Education Sector Observatory (EHESO) is designed to produce robust, comparable, and policy-relevant metrics. These indicators reflect the EU policy priorities set out in the European Education Area and the European strategy for universities.

The calculation process begins with the integration of multiple data sources, including ETER (European Tertiary Education Register), U-Multirank surveys, Eurostat, EUROSTUDENT, and the institutional and student surveys conducted by the Observatory. To ensure coherence, data from these sources are harmonised by aligning definitions, formats, and units of measurement, and stable identifiers are used so that institution-level data can be accurately matched across datasets.
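
As an illustration of this matching step, the sketch below joins two small example datasets on a shared institution identifier and converts one field to a common unit before deriving a value. The identifier name (eter_id), the column names, and the figures are illustrative assumptions, not the Observatory's actual schema.

```python
# A minimal sketch of cross-source record matching on a stable institution
# identifier. All names and values below are illustrative assumptions.
import pandas as pd

# Example register data (e.g. from ETER) keyed by a stable identifier.
register = pd.DataFrame({
    "eter_id": ["AT0001", "DE0042", "FR0007"],
    "institution_name": ["Univ A", "Univ B", "Univ C"],
    "total_students": [12000, 30500, 18200],
})

# Example survey data reported in a different unit (students in thousands).
survey = pd.DataFrame({
    "eter_id": ["AT0001", "DE0042", "FR0007"],
    "incoming_mobile_students_thousands": [0.9, 2.4, 1.1],
})

# Harmonise units before joining: convert thousands to head counts.
survey["incoming_mobile_students"] = (
    survey["incoming_mobile_students_thousands"] * 1000
).astype(int)

# Match institution-level records across datasets via the stable identifier.
merged = register.merge(
    survey[["eter_id", "incoming_mobile_students"]],
    on="eter_id",
    how="left",
    validate="one_to_one",  # guard against duplicate identifiers
)

# Derived value: share of incoming mobile students per institution.
merged["mobility_share"] = merged["incoming_mobile_students"] / merged["total_students"]
print(merged)
```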

Indicators are selected on the basis of their relevance to EU policy objectives, such as measuring mobility, inclusion, and the digital and green transitions. The selection process involves consultations with stakeholders to ensure alignment with both policy needs and institutional benchmarking requirements. Once data are harmonised, the calculation process applies normalisation and standardisation techniques to account for variations in reporting practices across countries and institutions. For instance, monetary data are adjusted for purchasing power parity and expressed in a common currency, ensuring comparability across different economic contexts.
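
The following sketch illustrates the purchasing power parity adjustment described above. The conversion factors and expenditure figures are illustrative placeholders, not official Eurostat values.

```python
# A minimal sketch of adjusting monetary data for purchasing power parity
# (PPP). All figures below are illustrative placeholders.
expenditure_national_currency = {  # e.g. total expenditure per institution
    "AT0001": 95_000_000,   # euro
    "PL0015": 210_000_000,  # zloty
}

# Hypothetical PPP conversion factors: national currency units per
# purchasing power standard (PPS).
ppp_factor = {
    "AT0001": 1.08,
    "PL0015": 2.95,
}

# Express every value in a common, PPP-adjusted unit so amounts are
# comparable across different economic contexts.
expenditure_pps = {
    inst: round(value / ppp_factor[inst])
    for inst, value in expenditure_national_currency.items()
}
print(expenditure_pps)
```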

The calculation of composite indicators, such as innovation capacity or inclusion indices, involves applying statistical methods such as weighted averages. Derived metrics often incorporate secondary data sources, such as bibliometric datasets, to provide a richer analysis. Indicators are then cross-referenced with country-level data from Eurostat and similar sources to validate consistency. Longitudinal analyses help to identify trends over time, while year-on-year comparisons help to verify the reliability of results.
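
The sketch below shows one common way to build such a composite: min-max normalisation of each component followed by a weighted average. The component names and weights are illustrative assumptions, not the published EHESO weighting scheme.

```python
# A minimal sketch of a composite indicator built as a weighted average of
# min-max normalised components. Names, values, and weights are illustrative.
def min_max(values):
    """Rescale a list of values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

# Example component scores for three institutions on two dimensions.
institutions = ["Univ A", "Univ B", "Univ C"]
components = {
    "research_output": [120, 340, 95],        # e.g. a bibliometric count
    "industry_income_share": [0.08, 0.15, 0.22],
}
weights = {"research_output": 0.6, "industry_income_share": 0.4}

# Normalise each component, then combine with the chosen weights.
normalised = {name: min_max(vals) for name, vals in components.items()}
composite = [
    sum(weights[name] * normalised[name][i] for name in components)
    for i in range(len(institutions))
]

for name, score in zip(institutions, composite):
    print(f"{name}: {score:.2f}")
```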

Quality assurance plays a critical role in the methodology. Automated plausibility checks and manual validations are applied throughout the process. Longitudinal data analysis further enhances reliability by examining historical trends and flagging anomalies. The validation process also includes feedback from stakeholders, such as National Statistical Authorities (NSAs) and institutional representatives, allowing for corrections before indicators are finalised.
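
A simple automated plausibility check of the kind described above might flag implausibly large year-on-year changes for manual review, as in the sketch below. The 30 % threshold and the time series are illustrative assumptions, not EHESO's actual validation rules.

```python
# A minimal sketch of a plausibility check that flags large year-on-year
# changes in a reported time series. Threshold and data are illustrative.
from typing import Dict, List, Tuple

def flag_anomalies(series: Dict[int, float],
                   max_rel_change: float = 0.30) -> List[Tuple[int, float]]:
    """Return (year, relative_change) pairs exceeding the plausibility threshold."""
    flags = []
    years = sorted(series)
    for prev, curr in zip(years, years[1:]):
        if series[prev] == 0:
            continue  # cannot compute a relative change against zero
        rel_change = (series[curr] - series[prev]) / series[prev]
        if abs(rel_change) > max_rel_change:
            flags.append((curr, rel_change))
    return flags

# Example: reported student numbers for one institution over five years.
student_numbers = {2019: 10200, 2020: 10450, 2021: 16800, 2022: 10900, 2023: 11050}
for year, change in flag_anomalies(student_numbers):
    print(f"{year}: {change:+.0%} change flagged for manual validation")
```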

The finalised indicators are presented through EHESO’s visualisation tools, including the Scoreboard and benchmarking platforms. These tools enable users to explore indicators across various dimensions, such as institution type, country, and thematic focus. To ensure transparency, metadata accompany each indicator, detailing data sources, calculation methods, and any limitations. This rigorous methodology ensures that EHESO indicators are not only reliable and comparable but also directly aligned with the strategic goals of the Observatory. By providing actionable and high-quality data, EHESO empowers evidence-based decision-making for policymakers, institutions, and other stakeholders.
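
As an illustration of how such metadata might accompany a published indicator, the sketch below bundles sources, calculation method, and known limitations with the indicator definition. The field names and example content are illustrative assumptions, not the Observatory's actual metadata schema.

```python
# A minimal sketch of attaching metadata to a published indicator so that
# sources, method, and limitations travel with the value. All fields are
# illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IndicatorMetadata:
    name: str
    sources: List[str]
    calculation_method: str
    limitations: List[str] = field(default_factory=list)

mobility_share_meta = IndicatorMetadata(
    name="Incoming mobile students (share of total enrolment)",
    sources=["ETER", "Observatory student survey"],
    calculation_method=(
        "Incoming mobile students divided by total enrolment, "
        "matched on a stable institution identifier."
    ),
    limitations=["Survey coverage varies by country and reference year."],
)
print(mobility_share_meta)
```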