
Putting the Life Table in Context


Presentation Transcript


  1. Putting the Life Table in Context (Session 15)

  2. Learning Objectives – this session At the end of this session, you will be able to:
  • explain the differences between cohort and current Life Tables
  • discuss the background to Life Table calculations
  • understand the data requirements for Life Table production
  • appreciate the broad idea of Model Life Tables

  3. Introduction As shown in previous sessions, the arithmetic of Life Tables provides numerous different summaries of a given set of probabilities $q_x$ or ${}_nq_x$. The arithmetic follows a clear logical pattern, but in reality we need to look critically first at the true meaning of Life Table numbers, and then at the information source(s) for the “given” data input.

  4. Cohort LTs: 1 If we were given 1. a real cohort of people – say all men born in 1900 – and 2. their births and deaths were fully recorded, collected and processed, then we would have genuine cohort mortality rates by year, and a genuine cohort or “generation” LT. [Pause: think back to the mention of age–period–cohort issues in Intermediate.]

  5. Cohort LTs: 2 … but true cohort LTs are rare because a. the record is of historic value, relating mainly to long-deceased individuals, and b. it reflects innumerable period effects. E.g. health: no antibiotics were in use before the cohort was about 50, and HIV was not known while they were sexually active. E.g. food: preservatives, packaging, factory processing and refrigeration were much more limited in their youth.

  6. Cross-sectional LTs: 1 Usually the set {${}_nq_x$} used all relates to current data, e.g. death registration in the UK population from 2003 to 2005. $q_0$ refers to babies born into current conditions and reflects present-day maternal health and behaviour, as well as public health provision etc. … but $q_{80}$ reflects the current mortality of those born 80 years ago, who have lived through periods with many differences from now.

  7. Cross-sectional LTs: 2 So most LTs are “synthetic” and “current”: they use a “cross-sectional” or “period” dataset, and reflect what might happen to an artificial population – the same form of summary as in the standardised death rates discussed before. A 50-year-old British male sees $e_{50} = 29$ in the UK Interim Life Table ~ maybe a rough indication of his residual expectation of life, but it does not predict, or take account of, future health/social/climate conditions. Therefore INTERPRET WITH CARE!

  8. Stationary Population Demographers stress the hypothetical nature of a “stationary population”, usually used as a theoretical model. It has:
  • no migration
  • death rate = birth rate (growth rate zero)
  • constant age-specific mortality and fertility rates, and a constant age structure
The stationary population’s life table would reflect any cohort’s mortality experience. See applications below.
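
To make the idea concrete, here is a minimal sketch (not from the session materials) of the stationary population implied by a life table: with a constant number of births each year and a constant set {$q_x$}, the numbers alive in each one-year age group are proportional to the life-table person-years $L_x$, and total deaths per year equal total births.

```python
def stationary_population(qx, births=1000.0):
    """Stationary population implied by a life table: with `births` babies
    born each year and constant mortality {qx} (a list of single-year
    probabilities ending with q = 1), the number alive aged x to x+1 is
    births * Lx, and annual deaths equal annual births."""
    lx, pop = 1.0, []
    for q in qx:
        Lx = lx * (1 - 0.5 * q)   # person-years lived between ages x and x+1
        pop.append(births * Lx)   # stationary numbers in that age group
        lx *= 1 - q               # survivors to the next exact age
    return pop
```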

  9. Data for LT mortality rates The computations generally build upon population death rates. If these are to be calculated for single years of age, a very large population base is required, in which the population is enumerated:
  • so as to be divided into ≈ 100 age categories;
  • so as to estimate, in every category, the proportions dying, which are generally small.

  10. Data for LT mortality rates: 1 Consider developing the set {$q_x$} of single-year-of-age mortality rates. These depend on populations or large samples where (i) each person’s presence and accurate age are recorded, and (ii) death registration is complete and generates accurate age-at-death data, so we can calculate each underlying age-specific death rate $m_x = D_x / P_x$ (reiterated from Intermediate level below).

  11. Data for LT mortality rates: 2 For most ages $m_x$ is a small proportion. In the UK LT used earlier it is less than 1% for all male ages below 60, and less than 10% for all male ages below 82. The sample size needed to get an accurate estimate of a small proportion is always very big ~ see the Basic Statistics module. Of course, a very large sample size requires a widespread and accurate death registration system.
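
As an illustration of why the samples must be big (our own sketch, not part of the original slides), the usual normal-approximation formula $n \approx z^2 p(1-p)/d^2$ gives the person-years needed to estimate a small death rate to a given precision:

```python
from math import ceil

def sample_size_for_proportion(p, rel_error, z=1.96):
    """Normal-approximation sample size to estimate a proportion p
    to within +/- rel_error * p, with confidence given by z (1.96 ~ 95%)."""
    d = rel_error * p   # absolute half-width of the confidence interval
    return ceil(z**2 * p * (1 - p) / d**2)

# e.g. a death rate of 0.5% estimated to within +/-10% of its own value
print(sample_size_for_proportion(0.005, 0.10))  # ~76,000 person-years at one age
```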

  12. Data for LT mortality rates: 3 If we collect data to generate a set of {${}_5q_x$} values, there are far fewer age groups to estimate, and age-at-death data need not be so accurate: this is a bit more likely to be feasible in smaller, poorer countries. Recall from Intermediate sessions that we measure {$m_x$} and derive {$q_x$} estimates. To measure {${}_5m_x$} and derive {${}_5q_x$} needs a little extra thought. See the next 3 slides.

  13. Deriving probabilities from data: 1 For a single year of age $x$, recall that $m_x = D_x/P_x$, while approximately
$$q_x = \frac{D_x}{P_x + \tfrac{1}{2}D_x} = \frac{D_x/P_x}{1 + \tfrac{1}{2}(D_x/P_x)}$$
on dividing top and bottom by $P_x$, so
$$q_x = \frac{m_x}{1 + \tfrac{1}{2}m_x} = \frac{2m_x}{2 + m_x}.$$
So the data-derived death rate $m_x$ feeds into the last formula to give the estimated probability of dying (or mortality rate), $q_x$.
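
A minimal Python sketch of this conversion (the function name is ours, not from the session):

```python
def qx_from_mx(mx):
    """Single-year probability of dying, q_x, from the age-specific death
    rate m_x, assuming deaths are spread evenly over the year of age."""
    return mx / (1 + 0.5 * mx)   # equivalently 2*mx / (2 + mx)

print(qx_from_mx(0.01))  # 0.00995..., slightly below the rate itself
```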

  14. Deriving probabilities from data: 2 If data are collected in $n$-year age bands, it is important to distinguish two different concepts. First is the one-year death rate for an $n$-year age band, e.g. deaths aged 65-69 in 2008: ${}_nm_x = {}_nD_x/{}_nP_x$, the number of deaths in the age band $x$ to $x+n$ over a period of 1 year, divided by the mid-period population in the age band. On the other hand, ${}_nq_x$ is the probability of dying during the $n$-year period from exact age $x$.

  15. Deriving probabilities from data: 3 Approximately, ${}_nq_x$ is the number of deaths in the age group over $n$ years, divided by the mid-period population “projected back”* to the start of the $n$-year period (* i.e. augmented by ½ the deaths to the age group over the $n$-year period). So
$${}_nq_x = \frac{n\,{}_nD_x}{{}_nP_x + \tfrac{1}{2}n\,{}_nD_x} = \frac{n\,({}_nD_x/{}_nP_x)}{1 + \tfrac{1}{2}n\,({}_nD_x/{}_nP_x)}$$
on dividing top and bottom by ${}_nP_x$, so
$${}_nq_x = \frac{n\,{}_nm_x}{1 + \tfrac{1}{2}n\,{}_nm_x} = \frac{2n\,{}_nm_x}{2 + n\,{}_nm_x}.$$
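
The same conversion for an $n$-year band, as a short sketch (names are ours):

```python
def nqx_from_nmx(nmx, n):
    """Probability of dying within n years of exact age x, from the
    n-year-band death rate nmx (uniform-deaths approximation)."""
    return n * nmx / (1 + 0.5 * n * nmx)

# e.g. a 5-year band with an annual death rate of 1% per year
print(nqx_from_nmx(0.01, 5))  # ~0.0488, a little under 5 * 0.01
```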

  16. Models: 1 Inspection of UK Interim LT data for ages 0-40 (seen in Practical 12) shows fluctuations in {$q_x$} due to statistical sampling variability, not to any underlying reality. These are generally removed by statistical smoothing procedures in definitive LTs. Another approach, when data are incomplete or unreliable, is to use so-called “Model Life Tables”. These are beyond the scope of this course, but are briefly outlined below.
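
Official tables use formal graduation methods; purely to illustrate the idea of smoothing out sampling noise, a crude centred moving average might look like this (our own sketch, not the procedure used for the UK tables):

```python
def smooth_qx(qx, window=5):
    """Crude centred moving average over a qx series; definitive life
    tables use more sophisticated graduation, so this is only a sketch."""
    half = window // 2
    smoothed = []
    for i in range(len(qx)):
        lo, hi = max(0, i - half), min(len(qx), i + half + 1)
        smoothed.append(sum(qx[lo:hi]) / (hi - lo))
    return smoothed
```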

  17. Models: 2 An established LT – the “Model” – say from a regional demographic surveillance centre, may have a similar general “shape” to that for your region or population. It can be “scaled” up or down, e.g. multiplying all {$q_x$} by 0.98 to reflect slightly lower mortality rates at all ages. If your region has some data, the Model can be scaled to match those figures as nearly as possible, then adopted as a good substitute for a wholly locally-sourced LT.
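
A minimal sketch of that scaling step, assuming you have a model {$q_x$} and a handful of locally estimated values (all names hypothetical):

```python
def fit_scale(model_qx, local_qx):
    """Least-squares scale factor k so that k * model_qx best matches
    locally estimated values; local_qx maps age -> estimated qx."""
    num = sum(model_qx[age] * q for age, q in local_qx.items())
    den = sum(model_qx[age] ** 2 for age in local_qx)
    return num / den

# apply the fitted factor to every age of the model table:
# scaled_qx = [fit_scale(model_qx, local_qx) * q for q in model_qx]
```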

  18. Models: 3 Note that the UK Interim Life Tables could not be adapted for South African use: the pattern of high death rates in young adults is not present in the UK figures. Use of Model LTs is a high-level technical skill for those with quite extensive demographic training, but in settings where accurate data are unavailable they are invaluable methods for assessing mortality rates. INDEPTH Model Life Tables for Sub-Saharan Africa (INDEPTH Network, Ashgate, 2004) is a good resource for those doing so in SADC countries.

  19. The need for large numbers: 1 Even using all modelling aids, the fundamental issue remains: to get estimates of even a few proportions, with which to calibrate a Model LT, requires quite large samples from the population, and high-quality ascertainment of their ages at death.

  20. The need for large numbers: 2 A further consideration is that our life table summaries are “crude” in that generally a whole population is sub-divided only by sex and age. Yet we know that survival is adversely affected by risky behaviours such as smoking or drunk-driving. Actuaries and insurers are interested in the mortality patterns of those who take out life insurance, and often pool claims data from many companies to get adequate sample sizes.

  21. The need for more than numbers The last slide correctly hints that, in using LT methods, the first responsibility of the statistical contributor is to generate and properly interpret a good set of LT data. A later-stage responsibility is to work alongside social, medical and other professionals in moving beyond simply describing mortality towards understanding and explaining it.

  22. Some practical work follows …
