
# Analysis of surveys

## Survey weighting

In the olden days, surveys were relatively simple to analyse. If 1,000 people were asked whether or not they thought that poverty was a major issue in the UK and 650 said "yes" and 350 said "no" then it was concluded that 65% thought that poverty was a major issue.

Nowadays, just about all surveys recognise that this is overly simplistic because the set of people surveyed is unlikely to have been completely representative of the population as a whole. To cope with this, most surveys now use a process called weighting whereby each survey respondent is given a weight which reflects the extent to which respondents with their characteristics were under- or over-sampled in the survey. Those who were under-sampled are given a higher than average weight whilst those who were over-sampled are given a lower than average weight. The results are then derived by multiplying the results for each respondent by that respondent's weight. Taking the example above:

- Assume that twice as many women were interviewed as men, then - assuming there are roughly the same number of women as men in the population - the weight for each man's answer will be roughly twice that for each woman's.
- Assume that 74% of women said "yes" but only 56% of men. Then, whilst the unweighted answer would be 68% ((2*74% + 1*56%)/3, reflecting the over-representation of women), the weighted answer is 65% ((2*56% + 2*74%)/4): although the men's answers are half as numerous, each carries twice the weight, so the two sexes end up counting equally.

In practice, the weights are calculated using a whole variety of characteristics rather than just one and, as a result, respondents will each have their own unique weight which reflects this combination of characteristics. But this does not change how one calculates the results: simply multiply the answers for each respondent by that respondent's weight to derive the estimate for the proportion of people giving each answer.
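This per-respondent calculation can be sketched in a few lines of Python (the answers and weights below are purely illustrative, with each respondent carrying their own weight):

```python
def weighted_proportion(answers, weights):
    """Proportion answering "yes": the sum of the weights of the "yes"
    respondents divided by the sum of all the weights."""
    yes_weight = sum(w for a, w in zip(answers, weights) if a == "yes")
    return yes_weight / sum(weights)

# A tiny hypothetical sample: each respondent's weight reflects how
# under- or over-sampled people with their characteristics were.
answers = ["yes", "yes", "no", "yes", "no"]
weights = [1.5, 0.8, 1.2, 2.0, 0.5]

print(round(weighted_proportion(answers, weights), 3))  # 0.717
```

The same function handles the single-characteristic example above and the multi-characteristic case equally, since each respondent's weight already encodes their full combination of characteristics.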

In many cases, there will be a single weight for each respondent, but in some of the more sophisticated surveys there will be two weights, an individual weight and a household weight. In these cases, the individual weight is the one to be used when analysing questions relating to the individual (e.g. are you in paid work?) and the household weight is the one to be used when analysing questions relating to the household (e.g. is anyone in your household in paid work?). One reason why these weights differ is to allow the results to be interpreted in terms of absolute numbers, given that the total number of households is much less than the total number of people. But another, more subtle, reason is that a respondent might have been under-sampled with respect to their characteristics as an individual but over-sampled with respect to the characteristics of the household in which they live.

Finally, in a few isolated cases, the survey might have a whole variety of weights for use in different circumstances. The British Household Panel Survey is the most notable example of this.

## Numbers versus proportions

#### Surveys

The basic output from any survey is a proportion rather than a number. However, in most cases the weights that a survey uses are set so that they sum to the total population, as estimated from Census data; in such cases, the results can be interpreted in terms of absolute numbers as well as proportions. Note that population estimates are sometimes revised retrospectively, particularly in the years following a Census. When this happens, the survey data is sometimes re-published with slightly different weights, a process known as 're-grossing', which requires all the results for all the years to be re-calculated.

In a few surveys, the sum of the weights is (for some reason) not set to the total size of the population. To turn the proportions into numbers in these cases, they need to be multiplied by the relevant population.

Finally, there are some surveys where it is not clear whether the sum of the weights is meant to be equal to the size of the population as the two numbers are close but not the same. In these cases, the safest assumption is that they are not equal and, again, to turn the proportions into numbers by multiplying by the relevant population.
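A minimal sketch of this decision in Python (all the figures below are hypothetical):

```python
def estimate_number(yes_weight_sum, total_weight_sum, population):
    """Turn a weighted survey result into an absolute number.

    If the weights gross up to the population, the weighted count can be
    read directly as an absolute number; otherwise (the safer assumption
    when it is unclear) the proportion is multiplied by an external
    population estimate.
    """
    if abs(total_weight_sum - population) < 0.5:
        return yes_weight_sum  # weights already sum to the population
    proportion = yes_weight_sum / total_weight_sum
    return proportion * population

# Hypothetical survey: the "yes" weights sum to 300,000 out of a total
# weight of 1.2 million, but the population is 60 million, so the 25%
# proportion is scaled up by the external population estimate.
print(estimate_number(300_000, 1_200_000, 60_000_000))  # 15000000.0
```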

Placing the major surveys into each of these categories:

- *Weights can be used as absolute numbers*: Annual Population Survey, English Housing Survey - stock data, Family Resources Survey, Households Below Average Income, Labour Force Survey and Labour Force Survey (Household).
- *Weights cannot be used as absolute numbers*: British Crime Survey and British Household Panel Survey.
- *Situation unclear*: English Housing Survey - household data, General Lifestyle Survey, Health Survey for England and Living Costs and Food Survey.

#### Administrative counts

The basic output from any administrative count is a number rather than a proportion. So, for example, the data on out-of-work benefits tells you how many people are receiving these benefits but not what proportion of the population they represent.

In order to turn these numbers into proportions, they simply need to be divided by the relevant population.

#### Sources of population data

Each August, the Office for National Statistics publishes population estimates, by single year of age for each country and by five-year age bands for each local authority. These estimates can be downloaded as spreadsheets from the ONS website.

Overall population estimates at a small area level are also published, and can be downloaded from the websites of the relevant national statistical agencies.

There are no equivalent annual estimates for the number of households (as opposed to individuals) and thus the only practical option is to use the household estimates from the 2001 Census. This is unfortunate because the number of households is actually increasing much faster over time than the number of individuals as the average household size becomes progressively smaller.

## Issues of sample size

By definition, surveys provide estimates rather than full counts and, as such, their results are subject to some uncertainty. As is well known, the smaller the survey, the greater the uncertainty. For the same reason, within a given survey, there will be more uncertainty about the results at a regional level than at the UK-wide level. The issue that arises is how much confidence one should have in the results of any particular analysis.

In terms of the survey analyses on this website, this issue manifests itself in two different ways, namely:

- Interpretation of results.
- Presentation of results.

#### Interpretation of results

The main issue here is when a time trend should be interpreted as rising or falling rather than flat.

Classical statistics addresses this issue using 'confidence intervals': any estimated proportion from a survey is deemed to have an uncertainty of +/- X%, where X diminishes as the survey size increases. The most commonly used 'confidence interval' is the '95% confidence interval', which effectively means that if the survey were repeated many times, then the results would fall within the +/- X% interval 95% of the time. For an analysis which separates a population into two groups (i.e. most analyses), this is calculated as 1.96 times the square root of p(1-p)/n, where n is the sample size, p is the estimated proportion who are in the first group and (1-p) is the estimated proportion who are in the second group. Search Google for 'binomial distribution' and 'standard deviation' if you want to know more! The problems with this are twofold. First, classical statistics is effectively 'playing safe', only 'allowing' you to conclude that two proportions are different when this is 'beyond reasonable doubt' rather than the weaker criterion of 'probably'. Second, it effectively assumes that you have no relevant information other than the two proportions themselves.
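As a minimal sketch, the 95% confidence interval described above can be computed as follows (the 65% estimate and sample of 1,000 are illustrative figures):

```python
import math

def ci95(p, n):
    """95% confidence interval for an estimated proportion p from a
    sample of size n: half-width = 1.96 * sqrt(p * (1 - p) / n)."""
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Hypothetical: 65% said "yes" in a sample of 1,000 respondents.
low, high = ci95(0.65, 1000)
print(round(low, 3), round(high, 3))  # 0.62 0.68
```

Note how the interval of roughly +/- 3 percentage points shrinks only with the square root of the sample size: quadrupling the sample merely halves the uncertainty.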

Rather than use classical statistical methods to drive views about trends, the approach used on this website is more informal. Although informal, it is based on an understanding of the formal statistical methods - collectively known as 'Bayesian statistics' - for reaching judgements in the light of multiple pieces of evidence. It rests on the following rules of thumb:

- If a statistic jumps around from year to year, then any judgements about trends are problematic. If, however, it follows smooth lines over time, then it is much more reasonable to make such judgements.
- If the change in a statistic in the latest year is in the same direction as the change in the years that preceded it then it is reasonable to conclude that it is part of a continuing trend. If, however, the change is in the opposite direction to that in the preceding years then it is much wiser to wait for more data points before concluding that the trend has changed direction.

To illustrate, assume that a survey suggests that the proportion of people who are in income poverty was 20% in one year and 19% in the next. A simplistic application of classical statistics might well suggest that the one percentage point difference is not statistically significant. But if the proportions in the three previous years had been 23%-22%-21%, then it seems reasonably safe to conclude that the 20% and 19% are part of a continuing downward trend. If, on the other hand, the proportions in the three previous years had been 18%-19%-20%, then it is much less safe to conclude that the current trend is downward. And if the previous proportions were 19%-21%-18% then it is not really wise to conclude anything.
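The rules of thumb above can be sketched as a small function (the judgement labels are our own shorthand, and the series reuse the illustrative percentages from the example):

```python
def trend_judgement(history, latest):
    """Informal rule of thumb: only read the latest change as part of a
    trend if it continues the direction of the preceding changes.
    `history` is a list of earlier annual values, `latest` the newest."""
    changes = [b - a for a, b in zip(history, history[1:])]
    changes.append(latest - history[-1])
    if all(c < 0 for c in changes):
        return "continuing downward trend"
    if all(c > 0 for c in changes):
        return "continuing upward trend"
    return "wait for more data points"

# Illustrative series: 23%, 22%, 21%, 20%, then 19% - a smooth decline.
print(trend_judgement([23, 22, 21, 20], 19))  # continuing downward trend
# Illustrative series: 18%, 19%, 20%, 20%, then 19% - direction unclear.
print(trend_judgement([18, 19, 20, 20], 19))  # wait for more data points
```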

In this context, it is noteworthy that most of the UK-wide time trends on this website follow reasonably smooth lines. By contrast, many of their equivalents for Scotland, Wales and Northern Ireland jump around a lot more. Care should therefore be taken to avoid over-interpreting time trends for these countries. Similar care should also be taken to avoid selecting single years as the baseline against which progress is judged.

Issues of interpretation also arise when comparing a statistic for different groups within the population, particularly as these comparisons are (by definition) based on smaller sample sizes. The rules of thumb here are:

- If the differences are large then it is safe to conclude that they are both real and large. If, however, the differences are small then they may not be real at all.
- In either case, the issue is more whether or not there are clear differences rather than the *precise* scale of these differences.

#### Presentation of results

The issue here is how to present results in graphics such that they are least likely to mislead.

For time trends, the general approach taken on this website is to show the statistics for each of the years rather than, for example, just for the start and end years of the time period. Providing the full information allows users to form their own views about trends as well as reading our judgements. Where, for reasons of graphical complexity, only selected years can be shown, the general approach is to compare the three-year average at the start of the time period with the three-year average at the end. Whilst not perfect, this at least lessens the impact of 'aberrant years'.

Where groups of the population are being compared in terms of their current statistics, the usual standard is again to average the results across the latest three years, the main exception being when the data is from the Labour Force Survey, which is a much bigger survey than any of the others. Averaging in this way can be thought of as increasing the sample size in order to reduce uncertainty. It is particularly important when the differences between the groups are relatively small, as is often the case when comparing regions. It is noteworthy that, following discussions with ourselves, the Department for Work and Pensions now presents all its regional income poverty statistics as three-year averages rather than as single years.
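A minimal sketch of the three-year averaging (the annual rates below are hypothetical):

```python
def three_year_average(values):
    """Average of the latest three annual values, used to lessen the
    impact of 'aberrant years' in small-sample comparisons."""
    return sum(values[-3:]) / 3

# Hypothetical regional poverty rates (%) for the last five years.
rates = [24.0, 21.0, 22.0, 23.0, 21.0]
print(three_year_average(rates))  # 22.0
```

A single-year comparison would pick up the dip to 21% in the latest year; the three-year average of 22% gives a steadier basis for comparing regions.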

Such averaging cannot, however, compensate for sample sizes that are really too small to analyse variations by region. In this context, we would place the major surveys into the following broad categories:

- *Analysis by region reliable:* Annual Population Survey, Labour Force Survey and Labour Force Survey (Household).
- *Analysis by region reasonably reliable:* British Crime Survey (non-victims), English Housing Survey - household data, English Housing Survey - stock data, Family Resources Survey, Households Below Average Income and Health Survey for England.
- *Analysis by region not reliable at all:* British Crime Survey (victims), British Household Panel Survey (except for Scotland, Wales and Northern Ireland, which have boosters), General Lifestyle Survey and Living Costs and Food Survey.