Note to self: Never give interview to BuzzFeed about job I just got.
Gelman's Monkey Cage Post
-
Re high over-time correlations among the feeling thermometers in Study 1, this from the MPSA version of their paper (p.22):
"The sole change between Studies 1 and 2 had to do with the way in which feeling thermometers were presented
to subjects. In Study 1, subjects were presented a virtual slider button that was pre-set to their rating in the previous wave. Concerned that this method may distort our assessment of how effects persist over time, we presented no slider button or pre-set rating in Study 2."
Note that the Science version did not present the early waves of the thermometer from Study 1, so the point is moot; they present only an end-line wave that did not use the pre-sets.
-
Just took a look at the data. Kind of weird.... Check out this crosstab.
. insheet using science-data.txt, comma clear
. reshape wide therm_level ssm_level, i(panelist_id) j(wave)
. tab ssm_level1 ssm_level2
           |                  ssm_level2
ssm_level1 |        1        2        3        4        5 |    Total
-----------+----------------------------------------------+---------
         1 |    2,754      623       25        2        3 |    3,407
         2 |      182      779      208       11        1 |    1,181
         3 |        4      148      657      185       11 |    1,005
         4 |        0       13      226    1,044      307 |    1,590
         5 |        0        0       24      526    2,864 |    3,414
-----------+----------------------------------------------+---------
     Total |    2,940    1,563    1,140    1,768    3,186 |   10,597
. corr ssm_level1 ssm_level2
(obs=10597)
             | ssm_le~1 ssm_le~2
-------------+------------------
  ssm_level1 |   1.0000
  ssm_level2 |   0.9519   1.0000
That amount of intertemporal stability is not believable to me. Not one of the 2,940 people who answered 1 at time 2 answered 4 or 5 at time 1? Is that consistent with other online panels?
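If anyone wants to check that cell directly on their own machine (using the wide variables created by the reshape above), something like this should do it:
. * people at 1 in wave 2 who were at 4 or 5 in wave 1 -- the crosstab says zero
. count if ssm_level2 == 1 & inlist(ssm_level1, 4, 5)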
Same thing in the feeling thermometer...
. destring therm_level*, replace force
. corr therm_level1 therm_level2
(obs=10597)
             | therm_~1 therm_~2
-------------+------------------
therm_level1 |   1.0000
therm_level2 |   0.9882   1.0000
.988? That's nuts. Even if the measurements were a week apart.
Don't know how to reproduce this on PSR but check out this plot too on your own computers...
. scatter therm_level1 therm_level2
The noise is weirdly normally distributed, with *NOT ONE* respondent out of 10,597 deviating by more than 25 in a negative direction or 38 in a positive direction. That complete lack of survey error in a sample so large is bizarre. All it takes is one person to misinterpret the feeling thermometer, think it's for a different question, etc.
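To put a number on that rather than eyeballing the scatter, something along these lines should reproduce the check (the cutoffs are just the ones in the claim above):
. * wave-to-wave change in the thermometer
. gen therm_diff = therm_level2 - therm_level1
. summarize therm_diff, detail
. * anyone moving down by more than 25 points or up by more than 38, excluding missings
. count if (therm_diff < -25 | therm_diff > 38) & !missing(therm_diff)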
Here's ANES 2012 vs. 2013 just as a rough benchmark, realizing that the time gap is way longer and the mode is different, etc.:
. corr gayft2012 gayft2013
(obs=1492)
             | gay~2012 gay~2013
-------------+------------------
   gayft2012 |   1.0000
   gayft2013 |   0.7106   1.0000
The amount of instability (1 minus the correlation) is roughly *24x* lower in this study than in the ANES. Not saying that couldn't happen, but it is weird. Anecdotally, I've seen party ID within a single survey have a much lower correlation.
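To spell out the arithmetic:
. display (1 - 0.7106) / (1 - 0.9882)
which comes out to about 24.5.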
All of this stability is consistent with ML having a fixed sample size and a fixed believable effect size and having to work backward to what would be statistically significant (if there were less stability, the study would be less well-powered).
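Rough intuition for the power point, with made-up numbers (an SD of 20 thermometer points and 500 respondents per arm -- not the study's actual values): the variance of a pre-post change score is 2*sd^2*(1 - rho), so the standard error of the mean change collapses as the within-person correlation rho climbs.
. * purely illustrative sd and n, not from the study
. display "SE of mean change, rho = .70: " sqrt(2*20^2*(1 - 0.70)/500)
. display "SE of mean change, rho = .99: " sqrt(2*20^2*(1 - 0.99)/500)
Same n and same effect size, but the second case is dramatically better powered.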
DEFINITELY not saying this nails anything, just that digging deeper into this data probably should be done...
This is crazy. Busted on PSR.
-
Goddamnit, no. It should *moderate* the court effect. I just had a f**king dumbass reviewer reject my paper (in part) because they didn't know the difference between these two extremely fundamental concepts. It's really not that difficult.
Reannon, just to let you know, the original dataset has news interest and newspaper consumption variables. Should mediate the court effect.
-
I was wondering what this poster was trying to say. Let's avoid using words we don't understand?
-
I used mediator correctly. If you conceptualize the variable as being responsible for the effect, it mediates. I would expect that media usage variables would completely mediate the effect of reporting on a court decision. However, if you expected there to be some relationship without any media exposure, then it would be a moderator.
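For whatever it's worth, here's the distinction in regression terms. The variable names below (post_decision, media_use, ssm_support) are made up for illustration, not variables in the actual dataset:
. * mediation: exposure to the decision shifts media use, which in turn shifts attitudes
. regress media_use i.post_decision
. regress ssm_support i.post_decision media_use
. * (if the post_decision coefficient shrinks once media_use is added, that's consistent with mediation)
. * moderation: the effect of the decision depends on pre-existing media use
. regress ssm_support i.post_decision##c.media_use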
-
"Note that the Science version did not present the early waves of the thermometer from Study 1, so the point is moot; they present only an end-line wave that did not use the pre-sets."
What does this mean, Jocelyn? That the feeling thermometer did not use the slider on the last few waves of Study 1? And that the first few waves were not used in the Science article?
-
I may have over-reacted in my first post, Judith, but I still don't understand. So, you are saying that being treated (i.e., being canvassed by a gay person) *caused* a change in media usage? Because that is what mediation must mean: the treatment causes variation in the mediator, which in turn causes variation in the outcome variable. I am not too familiar with the data at hand, but I interpreted what you said to mean that certain pre-existing characteristics define who is more or less affected by the treatment, which would be moderation. So I may be wrong in this case, and if so I apologize, but my reviewer was dead wrong and that is what has got me agitated. In this case it was reviewer 1, not the more oft-blamed reviewer #3.
-
No...I was NOT referring to the contact effect specifically. I was referring to the Supreme Court decision --> media consumption --> attitude change path that I thought would be of interest to Reannon. I've had this problem with reviewers too, and know the definitions like the back of my hand. We appear to have been thinking of completely different causal paths, one moderation, one mediation.
-
Vavreck's Upshot piece praising the study is actually much more damning to me than Gelman's. She writes,
"The unusual persuasive appeal was the brainchild of Dave Fleischer, head of the Vote for Equality team. Mr. Fleischer produced a script that not only involved people in discussions about their own marriages or the marriage of their friends, but also included the unexpectedly powerful moment of having a gay stranger come out to people on their front porch."
There's no social science theory there whatsoever; ML and DG are simply market researchers testing this guy's proposed treatment.