• mozz@mbin.grits.dev
    7 months ago

    My personal opinion is that polling methodology may have overcorrected for 2020, and we’re getting a picture now that’s skewed right, versus left from beforehand.

    I won’t say that you’re wrong about what the pollsters are doing – but this strikes me as very obviously the wrong way to do it.

    If you find out your polls were wrong, the fix is to dig into the details of what exactly went wrong and repair the methodology going forward: using non-phone polls, doing a more accurate calculation so you’re weighting the people who are actually going to vote rather than the people who aren’t, things like that. Otherwise, I think you can expect to keep getting wrong answers out. Making up a fudge factor for how wrong the polls were last time, and assuming that if you just add that fudge factor in you don’t have to fix the things that went wrong at a more fundamental level, seems wrong.
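    To make the contrast concrete, here's a minimal sketch of the two approaches being described. Everything in it is hypothetical for illustration: the function names, the numbers, and the idea of representing each answer as +1/-1 with a made-up turnout probability are all assumptions, not anything a real pollster's model actually looks like.

```python
# Hypothetical contrast between a flat "fudge factor" correction and
# reweighting respondents by estimated turnout likelihood.
# All names and numbers are invented for illustration.

def fudge_factor_adjust(raw_margin, last_cycle_error):
    """Shift this cycle's raw margin by however wrong we were last time."""
    return raw_margin + last_cycle_error

def turnout_weighted_margin(responses):
    """Weight each respondent's answer (+1 or -1) by their estimated
    probability of actually voting, then normalize by total weight."""
    total_weight = sum(p for _, p in responses)
    return sum(answer * p for answer, p in responses) / total_weight

# Three made-up respondents: (answer, estimated turnout probability).
responses = [(+1, 0.9), (-1, 0.4), (+1, 0.7)]
print(turnout_weighted_margin(responses))   # likely voters count for more
print(fudge_factor_adjust(0.02, -0.03))     # same constant shift every time
```

    The point of the sketch is only that the second function applies one constant no matter who answered, while the first at least tries to model *why* the sample might be off — which is the distinction the comment is drawing.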

    Again, I won’t say you’re wrong about how they’re going about it. (And I’m not saying it’s necessarily easy to do.) But I think you’ve accurately captured the flaw in just adding a fudge factor and then assuming you’ll be able to learn anything from the now-corrected-for-sure-until-next-time-when-we-add-in-how-wrong-we-were-this-time answers.

    • assassin_aragorn@lemmy.world
      7 months ago

      That’s the thing: we don’t know how they’re correcting for it, or whether it is just a fudge factor. The issue is that there are more confounding factors than anyone could list, any of which could be the culprit here.

      A fudge factor is easy, but it’s the wrong solution here, and the right solution is incredibly complex and difficult even to identify. In my field we can sometimes get away with using a timer instead of a precise calculation; that really isn’t an option for polls. I don’t envy the people trying to fix the models.