
I agree with your original point, even if you no longer stand by it.

>Historically, Kinsa’s methods have been able to account for a sharp rise in the number of people taking tests

What historical precedent is Kinsa referring to? There has never before been a pandemic of this magnitude, in a time and place where many people owned internet-connected thermometers.

Perhaps I am being uncharitable, but it seems that they are leaping to conclusions here. I'm sure that they have done their best to account for this, but the claim that "Dalziel has ensured that a spike in testing didn’t lead to inaccurate predictions" is one I find difficult to take at face value. Ensured?

You said that "there could be some selection bias here." Could. Sorry to cause you to cringe again, but +1.



I don't think this is uncharitable at all. I'm sure Kinsa has made a good effort at controlling for testing frequency, and I'm sure it's helped. But there's no reason to think the dynamics of COVID-motivated testing are the same as for flu-season, or new-buyer novelty, or anything else.

And more importantly, how could we know if they are? That's not just a Kinsa problem; we see it over and over again with peer-reviewed studies that "control for" certain factors like socioeconomics or health history. They're inherently limited to controlling for what they know about, and it's never perfect. Often, the entire effect comes from an undiscovered variable. Take, say, the widely promoted study finding that visiting a museum, opera, or concert just once a year is tied to a 14% decline in early death risk. The researchers tried to control for health and economic status, then concluded that "over half the association is independent of all the factors we identified that could explain the link." [1]

Now, which seems more likely: that the unexplained half comes from the profound, persistent social impact of dropping by a museum or concert once a year? Or that some of the identified factors, like "civic engagement," can't be defined clearly, others are undercounted (e.g. mental health issues), and some were missed entirely?

I suspect Kinsa did much better than that, because they're not trying to control for such vague terms. But I think "even after controlling for" should basically never rule out asking "what if it's a confounder?"
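To make the confounder worry concrete, here's a toy simulation (hypothetical numbers, Python standard library only, not Kinsa's or the study's actual model): museum visits have zero causal effect on mortality here, yet "controlling for" a noisy proxy of health still leaves visitors looking safer in every stratum.

```python
import random

random.seed(0)

# Toy model: a latent health variable drives BOTH museum visits and
# mortality; visits have no causal effect on death at all.
N = 100_000
rows = []
for _ in range(N):
    health = random.random()                     # latent confounder, 0..1
    visits = random.random() < health            # healthier -> more likely to visit
    dies = random.random() < 0.2 * (1 - health)  # worse health -> higher mortality
    # What a study can actually "control for": a noisy, binarized proxy.
    reported_healthy = (health + random.gauss(0, 0.3)) > 0.5
    rows.append((visits, dies, reported_healthy))

# Stratify on the proxy, which is what "controlling for health"
# effectively does when health is measured imperfectly.
rates = {}
for stratum in (False, True):
    for v in (False, True):
        deaths = [d for vv, d, r in rows if r == stratum and vv == v]
        rates[(stratum, v)] = sum(deaths) / len(deaths)
        print(f"reported_healthy={stratum} visitor={v} "
              f"mortality={rates[(stratum, v)]:.3f}")
```

In both strata, visitors still die less often, purely because residual variation in true health survives the coarse proxy; a reader of the stratified table would wrongly conclude the association is "independent of health."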

[1] https://www.cnn.com/style/article/art-longevity-wellness/ind...


Yeah, I wish that science reporting would either a) mention specifics on methodology or b) link directly to papers.

Without this, plus a stronger push for open-access publishing, readers often have absolutely no way to verify claims like this - and, moreover, no way to learn how to apply similar methodologies to their own work. There are a lot of people in data science / analytics positions right now who could benefit from shared knowledge about statistical tools for correcting for highly unusual situations.



