Data quality and comparability

Most sources of macro data present their data in a common framework and in standardised tables. But this does not necessarily imply that the data are truly comparable – in the sense that they measure the same overarching concept – nor does it mean that the data are of high quality. In fact, several scholars have argued that many of the most commonly used datasets of contextual statistics are unreliable, suffering from inaccuracy and a lack of comparability.

Which data are potentially problematic?

Problems of comparability and data quality affect most kinds of data, though some statistics are more susceptible than others. In some areas, such as national accounts and labour force statistics, international guidelines specify how central concepts are defined and how data should be collected. As a result, the definitions of variables such as gross domestic product (GDP) and the unemployment rate are now broadly similar across countries. Nonetheless, even for these highly standardised variables there are some differences in definition and measurement between countries. Hence, even where international organisations have spent decades coordinating how data should be defined and collected, problems of comparability remain. Similar and sometimes even more serious problems affect the quality of other socio-economic statistics in areas such as income inequality (Atkinson and Brandolini 2001), social indicators (Strauss and Thomas 1996), education (de la Fuente and Doménech 2006), demography (Chamie 1994), poverty and inequality (Fields 1994), and social welfare spending (De Deken and Kittel 2007; Kühner 2007; Siegel 2007).

Datasets trying to measure political issues such as human-rights violations, corruption, political institutions and political regimes are perhaps even more prone to both conceptual and measurement problems. Such datasets are usually produced by individual researchers or non-governmental organisations, who tend to have limited resources for data collection. In some areas, such as human-rights conditions and corruption, it is nearly impossible to get reliable data; other topics are difficult to measure objectively because of a lack of consensus on the definition of basic political concepts.

The quality of quantitative data is generally better for developed countries than for developing countries. In general, according to Dogan (1994), “the lower the level of development, the lower is also the validity of quantitative data”. Yet low data quality is not exclusive to poor countries (Harkness 2004; Herrera and Kapur 2007).

How important are these problems?

Although methodological advances and improved statistical packages can remedy some of the problems of low data quality, such problems still affect the quality of social science research. Yet some scholars argue that researchers generally pay little attention to the quality of the data they use. Herrera and Kapur (2007), for example, claim that “inattentiveness to data quality is, unfortunately, business as usual in political science.” Ignoring deficiencies in the data is clearly not good scientific practice.

In general, whether the existing data sources are satisfactory in terms of quality and comparability depends on the research question at hand. Some scholars argue that reliability problems should not discourage researchers from doing quantitative analyses, and in some respects it seems reasonable to claim that the available contextual data are indeed good enough for the Western democracies. If a researcher is simply interested in some background information in order to interpret survey data within a broader social context, it matters little that the aggregate data do not measure the variable of interest with absolute precision. For example, whether the unemployment rate in the Netherlands in 2005 was 5.2 per cent, as the ILO reports (ILO 2007), or 4.6 per cent, as the OECD (2007) says, matters little for a researcher comparing people’s attitudes in countries with different levels of unemployment; the Netherlands’ unemployment rate in 2005 was significantly lower than Poland’s whichever source one uses (the corresponding figures for Poland were 17.7 and 17.8 per cent).
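To make the point concrete, the toy comparison below contrasts the two reported rates. The figures are those cited above (ILO 2007; OECD 2007), while the data structure and code are purely illustrative:

```python
# Illustrative sketch: the figures are taken from the example in the text;
# the dictionary layout is a hypothetical way of organising them.
rates = {
    "Netherlands": {"ILO": 5.2, "OECD": 4.6},
    "Poland":      {"ILO": 17.7, "OECD": 17.8},
}

for source in ("ILO", "OECD"):
    nl = rates["Netherlands"][source]
    pl = rates["Poland"][source]
    print(f"{source}: Netherlands {nl}%, Poland {pl}% -> lower: "
          f"{'Netherlands' if nl < pl else 'Poland'}")
# The sources disagree on the exact levels, but the cross-country
# ranking -- the quantity of interest here -- is the same either way.
```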

Another reason for treating the existing, deficient macro data as satisfactory is that it is sometimes unrealistic to expect that better data can be collected. Many of the most significant aspects of political and social life cannot be measured with precision, but this does not mean that such measures are worthless (Dogan 1994). Issues such as human-rights violations and corruption, for example, can never be measured with complete accuracy, but it is possible to obtain rough estimates that can still be useful.

This is not to say that scholars should be inattentive to problems of data quality and comparability. The validity and reliability of macro variables are important even when they are only used to put survey data in a broader social context. And in cases where scholars are primarily interested in properties of the contextual data themselves, such as in pooled time-series cross-section analysis, the quality of the data becomes all the more important. Differences between datasets purporting to measure the same thing may sometimes lead to significantly different results. Various ways of estimating GDP at purchasing power parity, for example, may lead to different estimates of the GDP of poor countries relative to rich ones (Hill 2000), and the impact of different variables on regime type varies depending on which democracy index you use (Casper and Tufis 2003).
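The sensitivity of results to the choice of dataset can be illustrated with a small simulation. The sketch below is a hypothetical example, not a re-analysis of Casper and Tufis (2003): two correlated indices measure the same latent concept with different amounts of noise, and the estimated effect differs depending on which index is used:

```python
# Hypothetical simulation: two strongly correlated "democracy indices"
# measure the same latent concept with different error, and the
# estimated effect shifts accordingly.
import numpy as np

rng = np.random.default_rng(0)
n = 50
latent = rng.normal(size=n)                        # "true" regime characteristic
index_a = latent + rng.normal(scale=0.3, size=n)   # dataset A's measure
index_b = latent + rng.normal(scale=0.8, size=n)   # noisier dataset B
outcome = 2.0 * latent + rng.normal(size=n)        # true effect is 2.0

for name, x in (("index A", index_a), ("index B", index_b)):
    slope = np.polyfit(x, outcome, 1)[0]
    print(f"{name}: estimated effect = {slope:.2f}  "
          f"(correlation with index A: {np.corrcoef(x, index_a)[0, 1]:.2f})")
# Classical measurement error attenuates the slope, so the noisier
# index B yields a visibly smaller estimate of the same "true" effect.
```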

How to avoid these problems?

All researchers who incorporate macro data in their analyses should, therefore, take the question of data quality seriously. First, they should take the uncertainty of the data into account when interpreting their results, for example by factoring measurement error into an estimate of the degree of confidence attached to the data. Second, they should try to find data of as high a quality as possible, and if existing data are not good enough, they should try to collect primary data themselves. And third, they should engage the data critically to examine whether supposedly comparable data really are comparable. However, the effort required for the last two tasks is substantial (Cheibub 1999; Widner 1999), and it is therefore not surprising that many researchers choose the simple option of downloading easily accessible, ready-made datasets without paying much attention to the quality of the data.
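As a minimal sketch of the first recommendation, the snippet below widens a confidence interval around a reported macro estimate to reflect an assumed amount of measurement error. All the numbers, including the 0.5-point measurement-error standard deviation, are illustrative assumptions rather than documented properties of any source:

```python
# Minimal sketch: widen a confidence interval around a macro estimate
# to reflect assumed measurement error. The figures are illustrative.
import math

estimate = 5.2          # reported unemployment rate, in per cent
sampling_se = 0.2       # standard error reported by the source (assumed)
measurement_sd = 0.5    # analyst's guess at cross-source measurement error

# Treat the two error sources as independent and combine in quadrature.
total_se = math.sqrt(sampling_se**2 + measurement_sd**2)
naive_low, naive_high = estimate - 1.96 * sampling_se, estimate + 1.96 * sampling_se
adj_low, adj_high = estimate - 1.96 * total_se, estimate + 1.96 * total_se
print(f"Naive 95% CI:    {naive_low:.2f} - {naive_high:.2f}")
print(f"Adjusted 95% CI: {adj_low:.2f} - {adj_high:.2f}")
```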

How, then, do you recognize problems of data quality? Looking for discrepancies among sources, or within a publication series, is a good start; a simple version of such a check is sketched after this paragraph. Many widely used datasets will display such discrepancies when scrutinized (Herrera and Kapur 2007). Consulting independent reviews, such as those provided by the MacroDataGuide, can also be helpful. As attention to issues of data quality increases, reviews and analyses of existing datasets are becoming more common and more easily accessible. There are also more systematic procedures for gaining insight into the quality of datasets. Herrera and Kapur (2007) suggest that three aspects of a dataset should be critically evaluated: the relationship between theoretical concepts and collected information (validity); the completeness of the dataset (coverage); and the avoidance of error (accuracy). Munck and Verkuilen (2002) stress the importance of evaluating conceptualizations, as well as the measurement and aggregation procedures used to construct datasets. Herrera and Kapur (2007) furthermore point to the need for researchers to perform basic background checks of datasets. Looking into who created the data, what incentives and capabilities the producers had, and whether the producers were governed by an external actor with a stake in the data can function as a simple "smell test" and make users more aware of potential quality problems affecting the data.
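The discrepancy check mentioned above is easy to automate. The sketch below compares two sources that purport to measure the same variable and flags countries where they diverge by more than a chosen tolerance. The Netherlands and Poland figures come from the example earlier in this section, while the Spain figures and the one-point threshold are invented for illustration:

```python
# Hedged sketch of a discrepancy check between two sources. The Spain
# figures and the tolerance are illustrative assumptions, not real data.
source_a = {"Netherlands": 5.2, "Poland": 17.7, "Spain": 9.2}
source_b = {"Netherlands": 4.6, "Poland": 17.8, "Spain": 11.0}
tolerance = 1.0  # maximum acceptable gap, in percentage points

for country in sorted(set(source_a) & set(source_b)):
    gap = abs(source_a[country] - source_b[country])
    flag = "CHECK" if gap > tolerance else "ok"
    print(f"{country:12s} A={source_a[country]:5.1f} "
          f"B={source_b[country]:5.1f} gap={gap:4.1f}  {flag}")
```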

The MacroDataGuide is not intended to replace individual data scrutiny; rather, it is a starting point and a resource that can assist researchers and students in acquiring and evaluating data material for their research. The guide therefore offers a specific evaluation of each data resource, under the “Comparability and data quality” section in the description of each dataset, as well as links to other evaluations and relevant literature.

References

Atkinson A.B., Brandolini A., 2001. Promise and pitfalls in the use of “secondary” data-sets: income inequality in OECD countries as a case study. Journal of Economic Literature 39 (September), 771-799.

Casper G., Tufis C., 2003. Correlation versus interchangeability: the limited robustness of empirical findings on democracy using highly correlated datasets. Political Analysis 11 (Spring), 1-11.

Chamie J., 1994. Population databases in development analysis. Journal of Development Economics 44 (June), 131-146.

Cheibub J.A., 1999. Data optimism in comparative politics: the importance of being earnest. APSA-CP Newsletter 10 (Summer), 21-24.

De Deken J., Kittel B., 2007. Social expenditure under scrutiny: the problems of using aggregate spending data for assessing welfare state dynamics. In Clasen J., Siegel N.A., eds., Investigating Welfare State Change: The “Dependent Variable Problem” in Comparative Analysis. Cheltenham: Edward Elgar.

de la Fuente A., Doménech R., 2006. Human capital in growth regressions: how much difference does data quality make? Journal of the European Economic Association 4 (March), 1-36.

Dogan M., 1994. Use and misuse of statistics in comparative research. Limits to quantification in comparative politics: the gap between substance and method. In Dogan M., Kazancigil A., eds., Comparing Nations. Oxford: Blackwell.

Fields G., 1994. Data for measuring poverty and inequality changes in the developing countries. Journal of Development Economics 44 (June), 87-102.

Harkness S., 2004. Social and political indicators of human well-being. UNU-WIDER Research Paper no. 2004/33 (May).

Herrera Y.M., Kapur D., 2007. Improving data quality: actors, incentives, and capabilities. Political Analysis 15 (Autumn), 365-386.

Hill R.J., 2000. Measuring substitution bias in international comparisons based on additive purchasing power parity methods. European Economic Review 44 (January), 145-162.

ILO, 2007. ILO Comparable Estimates. Online database.

Kühner S., 2007. Country-level comparisons of welfare state change measures: another facet of the dependent variable problem within the comparative analysis of the welfare state? Journal of European Social Policy 17 (1), 5-18.

Munck G.L., Verkuilen J., 2002. Conceptualizing and measuring democracy: evaluating alternative indices. Comparative Political Studies 35 (5), 5-34.

OECD, 2007. OECD Employment Outlook 2007. Paris: OECD.

Rydland L.T., Arnesen S., Østensen Å.G., 2007. Contextual data for the European Social Survey: an overview and assessment of extant sources. Report no. 124. Bergen: Norwegian Social Science Data Services.

Siegel N.A., 2007. When (only) money matters: the pros and cons of expenditure analysis. In Clasen J., Siegel N.A., eds., Investigating Welfare State Change: The “Dependent Variable Problem” in Comparative Analysis. Cheltenham: Edward Elgar.

Strauss J., Thomas D., 1996. Measurement and mismeasurement of social indicators. American Economic Review 86 (May), 30-34.

Widner J., 1999. Maintaining our knowledge base. APSA-CP Newsletter 10 (Summer), 17-21.
