Fairy Tales and Labour Force Surveys
This is a true story.
I made porridge for my children this morning, as I usually do. When my son first tried it, it was too hot. When I finally got around to eating it, after making the kids’ lunches, it was too cold. But when my daughter tried it, it was just right (and no, she doesn’t have golden locks, but a recipe tip is that peanut butter tastes great mixed in with porridge).
At about the same time, the media report on the labour force survey came on the 7AM news. And it was just about the same story.
In June, the increase in employment had been too hot (rising by 93,000). In July, it was too cold (falling by 9,000). But the August figures were just about right, rising by 36,000. This was almost exactly in the middle of the range I predicted after last month’s report.
Just as with fairy tales, I find this story comes around again and again with Statscan’s Labour Force Survey. I haven’t done any back analysis, but from my casual observation there often seems to be a three-month cycle with these labour force figures (too hot, too cold, then about right) that repeats itself.
What I find odd is that so much media attention is paid, comments made and probably markets moved over figures that clearly have a lot of noise in them. There’s way more hyperventilating expended over this than exerted by my children over the temperature of their porridge.
What is almost always completely ignored in the media reports is how statistically unreliable the monthly labour force figures are.  For instance, the standard error (S.E. in the third column of major tables in the detailed report) for employment at a national level is 28,000.  This means that there is only 68% confidence that the “real value” of an increase of 36,000 is in the range of +8,000 to  +64,000.  At a more reliable confidence interval of 95% (the familiar “19 times out of 20” reported for public opinion polls), the range is -20,000 to +92,000, as is explained in the data quality section of the report and in a bit more detail in the Guide.  There’s still a 5% chance that the real value is outside this wide range.
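To put that in plain arithmetic, here is a minimal sketch of the interval calculation, assuming the published standard error of 28,000 and treating the 95% interval as roughly plus or minus two standard errors:

```python
# Rough confidence intervals for the monthly change in employment,
# using the point estimate and standard error quoted above.

point_estimate = 36_000    # reported August increase in employment
standard_error = 28_000    # national-level S.E. from the LFS tables

# 68% interval: roughly plus or minus one standard error
low_68 = point_estimate - standard_error
high_68 = point_estimate + standard_error

# 95% interval: roughly plus or minus two standard errors ("19 times out of 20")
low_95 = point_estimate - 2 * standard_error
high_95 = point_estimate + 2 * standard_error

print(f"68% interval: {low_68:+,} to {high_68:+,}")  # +8,000 to +64,000
print(f"95% interval: {low_95:+,} to {high_95:+,}")  # -20,000 to +92,000
```

In other words, the headline increase of 36,000 is statistically indistinguishable from a small decline.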
And those are the confidence intervals for the unadjusted figures. I’m sure that the seasonal adjustment process adds more variability. And I expect that the unique sampling method of the LFS, with its panel data and possible reporting errors, may add even more noise and variability to the data.
After obliging with some quotes, I explained these reliability problems to a well-known and intelligent CBC reporter when he called for some commentary, but after expressing some surprise about this he didn’t show any interest. Of course, no news is bad news for the media.
I’m not exactly sure when it happened, but I stopped paying any serious attention to the monthly labour force figures about ten years ago, when there seemed to be far too much back-and-forth variability in the monthly numbers they reported. I’ve never been part of the labour force survey sample, but an economist friend of mine was when I lived up in the Yukon (where it is hard to escape being sampled), and he came out of the experience quite concerned about the reliability of the data.
Households are kept in the sample for six months, only the first interview is detailed, and the information for all members of the household is subsequently reported by one member of the household by telephone. This six-month rotation doesn’t necessarily explain what appears to be a three-month hot, cold, just right cycle of accuracy, but perhaps there is some other adjustment they make that accounts for that.
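As a rough illustration of why adjacent months are not independent readings, here is a toy sketch of a six-month rotating panel. I am assuming the usual design in which the sample is split into six rotation groups and the oldest group is replaced each month; treat the details as illustrative rather than as a description of Statscan’s actual procedures:

```python
# Toy illustration of a six-month rotating panel.
# Assumption (not from the report): the sample is split into six rotation
# groups and the oldest group is replaced by a fresh one each month.

from collections import deque

PANEL_MONTHS = 6

def rotation_samples(months):
    """Yield the set of rotation-group labels in the sample for each month."""
    groups = deque(f"G{i}" for i in range(PANEL_MONTHS))
    next_id = PANEL_MONTHS
    for month in range(months):
        yield month, set(groups)
        groups.popleft()              # oldest group finishes its sixth month
        groups.append(f"G{next_id}")  # a fresh group enters the sample
        next_id += 1

previous = None
for month, sample in rotation_samples(4):
    if previous is not None:
        shared = len(sample & previous) / PANEL_MONTHS
        print(f"Month {month} shares {shared:.0%} of its sample with month {month - 1}")
    previous = sample
```

With five-sixths of the households shared between adjacent months, consecutive estimates are highly correlated, which is part of why I doubt the rotation alone produces the apparent three-month pattern.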
Don’t get me wrong. I still use the annual labour force figures and I think the quarterly or three-month moving average figures are still fairly reliable. But I wish Statscan would fix the problems that seem to exist with the monthly labour force figures. They report that they use some of these modified sample methods in order to reduce response burden, but I suspect that a lot of this was really done to reduce costs. We now know that similar cost cutting in the 1990s to the sample size of the survey of employment, earnings and hours resulted in major problems for the reliability of that survey at the provincial level, and they’ve made some recent changes to improve those figures.
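For what it’s worth, the three-month averaging I rely on is nothing fancier than a trailing mean of the monthly changes. Here is a minimal sketch applied to the June-to-August figures quoted above:

```python
# Three-month moving average of the monthly employment changes quoted above.

monthly_changes = [93_000, -9_000, 36_000]   # June, July, August

def moving_average(values, window=3):
    """Trailing moving averages, starting once a full window is available."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

print(moving_average(monthly_changes))   # [40000.0] -> about +40,000 a month
```

An average gain of roughly 40,000 a month over the quarter tells a much calmer story than the hot, cold, just right sequence in the monthly headlines.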
Is it expecting far too much to hope (especially in light of looming federal budget cuts and this government’s attitude towards accurate information) that Statistics Canada will try to fix some of the problems with the Labour Force Survey, considering how much attention is paid to it?
I’m sure there are many others much more informed about the internal workings of this survey who could comment more knowledgeably about it. Is there anybody at the Stats barn (or elsewhere) with something to add about this?
This is a true story: as I was reading Toby’s post, I was listening to Nick Cave’s song, We Call Upon the Author.
I am not joking.
We call Upon the Author to explain,
mainly because we are tired of getting lost in the variance games. (my line)
Toby, you get most of it right: lots of problems with the LFS, and with the Census about to take a hit, we are only bound for even more variance, and undoubtedly a lot more bias.
Soon the LFS will only measure employment within the walls of the castle; the sea of poverty surrounding it is not worthy of being measured.
At a minimum, they are going to have to think very long about how to bring the reliability of the estimates up to a reasonable level. We are moving in the wrong direction, which of course is undoubtedly what the Tories want: no legitimacy in the information makes it a little harder to pinpoint the problem.
Like I said yesterday, have a look at consumer bankruptcies in some of the CMAs hit hard by the recession, and you will get a lot clearer picture of main street. On top of all the quantitative problems, the extension of precarious work into the inner reaches of the middle class is not showing up in the LFS, so again the qualitative is discounted.
I will end this comment with some Nick Cave lines from the song, as I think there is a reason it was playing when I read this post.
“Who is this great burdensome slavering dog-thing that mediocres my every thought?
I feel like a vacuum cleaner, a complete sucker, it’s fucked up and he is a fucker
But what an enormous and encyclopedic brain
I call upon the author to explain
Oh rampant discrimination, mass poverty, third world debt, infectious disease
Global inequality and deepening socio-economic divisions
Well, it does in your brain
And we call upon the author to explain”