ON A nippy January evening, Clare walks the streets of north London, armed with a file of addresses and maps. She wants to interview people for Britain’s Labour Force Survey (LFS), which is the basis for a host of important economic statistics including the unemployment rate. Her job, like that of many surveyors across the rich world, has been getting harder.

Corralling interviewees has always been tough, particularly in London. Clare sometimes feels like a private detective as she befriends porters to enter gated communities. “It was the rule to be welcomed in, whereas now you can’t count on it,” she says. Of the five doorbells she rings, the most positive answer is that now is “not a good time”. Clare is hopeful about the phone call arranged for the following day.

Response rates to surveys are plummeting all across the rich world. Last year only around 43% of households contacted by the British government responded to the LFS, down from 70% in 2001. In America the share of households responding to the Current Population Survey (CPS) has fallen from 94% to 85% over the same period. The rest of Europe and Canada have seen similar trends.

Poor response rates drain budgets, as it takes surveyors more effort to hunt down interviewees. And a growing reluctance to give interviewers information threatens the quality of the data. Politicians often complain about inaccurate election polls. Increasingly misleading economic surveys would be even more disconcerting.

Household surveys derive their power from randomness. Since it is impractical to get every citizen to complete a long questionnaire regularly, statisticians interview what they hope is a representative sample instead. But some types are less likely to respond than others—people who live in flats rather than houses, for example. A study by Christopher Bollinger of the University of Kentucky and three others matched data from the CPS with social-security records and found that poorer and very rich households were more likely to ignore surveyors than middle-income ones. Survey results will be skewed if the types who do not answer are different from those who do, or if certain types of people are more loth to answer some questions, or more likely to fib.
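To see how this plays out, consider a minimal simulation in Python. The group sizes, incomes and response rates below are invented for illustration, not drawn from the studies above; the point is only that when middle-income households answer more readily than poorer and richer ones, the raw survey average drifts away from the true one.

```python
import random

random.seed(42)

# Hypothetical population: incomes drawn from three broad groups.
population = ([random.gauss(15_000, 3_000) for _ in range(40_000)]       # poorer
              + [random.gauss(45_000, 8_000) for _ in range(50_000)]     # middle
              + [random.gauss(150_000, 40_000) for _ in range(10_000)])  # richer

# Assumed response probabilities: middle-income households answer most often.
def response_prob(income):
    if income < 25_000:
        return 0.4
    if income < 90_000:
        return 0.7
    return 0.3

respondents = [y for y in population if random.random() < response_prob(y)]

true_mean = sum(population) / len(population)
survey_mean = sum(respondents) / len(respondents)
print(f"true mean income: {true_mean:,.0f}")
print(f"raw survey mean:  {survey_mean:,.0f}")  # pulled toward the middle
```

With these made-up numbers the raw survey mean comes out below the true mean, because the well-off are the least likely to answer and their incomes pull the average up the most.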

Statisticians try to correct for these problems. They can bump up the weights attached to answers from underrepresented groups, or fill in blanks with imputed answers based on those from similar people. To check, they can compare results from household surveys with official administrative data, such as tax records.
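A minimal sketch of the reweighting step, again with hypothetical shares and answers: each respondent's weight is the ratio of their group's share of the population to its share of the sample, so answers from under-represented groups count for more.

```python
# Each respondent: (stratum, answer to a yes/no question as 1/0).
respondents = [("flat", 1), ("house", 0), ("house", 0), ("house", 1)]

population_share = {"flat": 0.4, "house": 0.6}  # known from a census, say
sample_share = {"flat": 0.25, "house": 0.75}    # flats under-represented

# Weight = population share / sample share for the respondent's stratum.
weights = [population_share[s] / sample_share[s] for s, _ in respondents]

raw_mean = sum(a for _, a in respondents) / len(respondents)
weighted_mean = (sum(w * a for w, (_, a) in zip(weights, respondents))
                 / sum(weights))
print(raw_mean, weighted_mean)  # 0.5 vs 0.6: flats' answers now count for more
```

Imputation tackles the missing answers instead: blanks are filled in with values modelled on respondents who look similar on the characteristics that are observed.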

Worryingly, mounting evidence suggests that some of these corrections are failing. A study by Bruce Meyer of the University of Chicago, Wallace Mok of the Chinese University of Hong Kong and James Sullivan of the University of Notre Dame found a widening gulf between the income people declare in surveys and what administrative records suggest.

Research by the Behavioural Insights Team, a British research group, found that the gap between the number of calories Britons consume and the number they report in household surveys widened between 1974 and 2008. Another study, by Garry Barrett of the University of Sydney, Peter Levell of the Institute for Fiscal Studies and Kevin Milligan of the University of British Columbia, compared household data with national-accounts data between 1969 and 2010 in America, Britain, Canada and Australia. It found that for every percentage-point decline in the response rate, the share of spending captured by household surveys fell by 0.8 percentage points.
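On that estimate, a ten-percentage-point fall in the response rate would leave household surveys capturing eight percentage points less of total spending.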

For decades, governments have relied on household surveys to set policy. Besides using them to gauge economic indicators, many governments also rely on them for their censuses. In America, the allocation of over $600bn of federal spending is based on the Census Bureau’s estimates of the population. Undercounting even a single person can cost a government programme—in health care, say—thousands of dollars.

Understanding why people shun surveys might help boost response rates. The most common reasons people give for refusing are that they do not care, that they worry about privacy or that they do not have the time. (Clare reports that some non-respondents spend 20 minutes explaining how busy they are.) Another factor could be a weakening sense of civic duty—voter participation has also been falling. Over-surveying may also be to blame: the share of Americans reporting that they had been surveyed in the past year more than quadrupled between 1978 and 2003. Messrs Meyer, Mok and Sullivan speculate that what once “was a rare chance to tell someone about your life, is now crowded out by an annoying press of telemarketers and commercial surveyors.”

Statisticians have been experimenting with methods of improving response rates: new ways to ask questions, or shorter questionnaires, for example. Payment raises response rates, and some surveys offer more money for the most reluctant interviewees. But such persistence can have drawbacks. One study found that more frequent attempts to contact interviewees raised the average response rate, but lowered the average quality of answers.

Statisticians have also been exploring supplementary data sources, including administrative data. Such data come with two big advantages. One is that administrative data sets can include many more people and observations than is practical in a household survey, giving researchers the statistical power to run more detailed studies. Another is that governments already collect them, so they can offer huge cost savings over household surveys. For instance, Finland’s 2010 census, which was based on administrative records rather than surveys, cost its government just €850,000 ($1.1m) to produce. In contrast, America’s government spent $12.3bn on its 2010 census, roughly 200 times as much on a per-person basis.
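(With roughly 5.4m people in Finland and 309m in America at the time, that works out at about $0.20 a head against about $40.)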

Recent advances in computing mean that vast data sets are no longer too unwieldy for researchers to use. However, in many rich countries (those in Scandinavia are exceptions), socioeconomic statistics are collected by several agencies, meaning that researchers who want to combine, say, health records with tax data face formidable bureaucratic and legal challenges.

Governments in English-speaking countries are especially keen to experiment. In January HMRC, the British tax authority, started publishing real-time tax data as an “experimental statistic” to be compared with labour-market data from household surveys. Two-fifths of the programmes at Statistics Canada, the country’s main statistical agency, are based at least in part on administrative records. Last year Britain passed the Digital Economy Act, which will give its Office for National Statistics (ONS) the right to requisition data from other departments and from private sources for statistical and research purposes. America is exploring the use of such data in its 2020 census.

Administrative data also have their limitations. They are generally not designed to be used in statistical analyses. A data set on income taxes might be representative of the population receiving benefits or earning wages, but not of the population as a whole. Most important, some things, such as well-being, informal employment and religious affiliation, are simply not captured in administrative records.

Where administrative data offer no alternative, household surveys, warts and all, will have to suffice. Statisticians can correct a biased survey only with the help of other data. And in some cases the only other source available is another survey.