SL, stop defending the people in your department with your "one troll" theory. You went above and beyond to defend JD back in the day too, you coward. You should be ashamed of yourself.
you are a delusional idiot
To be fair, ~95% of quant political scientists probably couldn't derive CIs back in the day. Even now many would mess up using margins, etc.
Not excusing this particular person making a mistake computing CIs, but that's not exactly a big indictment.
Ditto if you claim that someone 20 years ago couldn't figure out interactions. Recent lit reviews show you that most people couldn't. And probably still can't.
The article is from 15, not 20 years ago. A lot of people could do it. They were teaching this stuff at ICPSR, which she attended. Those who couldn't do it did not have any claim to Harvard tenure.
The main problem with her article is that testing that interaction was the only added value relative to Dawson's book. And if she lacked the skills to do it properly, the article shouldn't have been published. If you skim through it, you'll see she is not even trying to derive CIs and predicted probabilities; she simulates them. And she doesn't even know how to report simulation-based CIs on the predicted probabilities, which is much easier than actually deriving them. Without CIs on the predicted probabilities at various levels of the neighborhood index, she can't assess whether or not her hypotheses receive support.
Now take a look at those predicted probability graphs. Assuming she didn't mess up the simulations, a lot of the differences would likely be non-significant if you added CIs, except maybe at the very low index values. She must, however, have messed up the simulations quite badly (assuming she didn't do it on purpose): those probabilities should sum to 100% at each index value, and they don't. These are rookie mistakes, simply inexcusable for someone with decades of Exeter, Stanford, Harvard, and Michigan summer school under her belt. If a non-REP person had submitted an article this flawed on its face, it would have been desk rejected. I know I rejected more competently done articles back then.
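To make the sum-to-100% point concrete, here's a toy check (all coefficients invented, and multinomial logit used as a stand-in, not her actual model): predicted probabilities over mutually exclusive outcome categories must sum to one at every index value, so a published graph where they visibly don't is a red flag.

```python
import numpy as np

# Hypothetical multinomial-logit coefficients for 3 outcome categories
# (baseline category fixed at zero). All numbers are made up.
betas = np.array([[0.0, 0.0],    # baseline: intercept, slope on index
                  [0.5, -0.3],   # category 2
                  [-0.2, 0.4]])  # category 3

index = np.arange(1, 8)                       # neighborhood index, 1..7
X = np.column_stack([np.ones_like(index, dtype=float), index])

# Linear predictors for each category at each index value
eta = X @ betas.T                             # shape (7, 3)

# Softmax: predicted probability of each category
probs = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)

# The sanity check described above: across categories the predicted
# probabilities must sum to 1 (100%) at every index value.
print(probs.sum(axis=1))
```

Whatever the estimates are, that check holds by construction; failing it means the simulation or the plotting went wrong.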
Look at this idiot trying to rescue his mistake! It's adorable how desperate these trolls are getting. Remember, the original claim is CG wasn't doing hypothesis testing in the article. Then it was pointed out there's a table of, um, tests. Now it's some babbling nonsense about the graphs showing predicted probabilities, but no CI bands or something? But that's not how you test interactions or anything else, rube! You do it with test statistics, not graphs. Maybe leave the APSR work to Harvard deans and your betters.
Oh, also this doofus is completely wrong about the predicted probability graphs. They do sum to 100% in Figure 2. Figure 3 is showing something different -- how two different categories (interacted with neighborhood) predict a different DV. I'm guessing you just didn't understand that, despite the crystal-clear description.
Can't wait for your apology!
The table reports coefficients from the probit regression, and standard errors (note: not confidence intervals, as her defender mistakenly claims). These are insufficient to test her hypotheses, hence the need to show predicted probabilities at various levels of the neighborhood quality index. CG is inconsistent about how she obtains those: in the main text she says she "simulates" them; in the table she says they are "derived". She probably doesn't understand the difference between the two approaches. I suspect they are simulated. In any case, differences may be significant at some levels of the moderating variable but not at others. That's why she needs to add confidence intervals to the line plots, to show at what index levels differences are significant and at what levels they are not. Where CIs overlap, differences are not statistically significant. That probably happens everywhere except at index value 1. Also, the choice of baseline categories matters, and she does not indicate what levels the other variables are held at when simulating those probabilities. It's important not to hold variables at theoretically nonsensical values.
This is intro stats 101, which 95% of PSR posters understand. But not CG's defender. Maybe they will finally start to learn some basics from this post.
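Since people keep asking how hard simulation-based CIs are: not hard at all. Here's a rough sketch of the Clarify-style procedure (King, Tomz & Wittenberg): draw coefficient vectors from the estimated sampling distribution and read off percentiles of the implied predicted probabilities. The probit coefficients and covariance matrix below are invented for illustration, not taken from the article.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def norm_cdf(z):
    """Standard normal CDF via the error function (probit link)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Invented probit estimates: intercept, group dummy, neighborhood
# index, and the group x index interaction, plus a made-up (diagonal)
# coefficient covariance matrix.
beta_hat = np.array([-0.8, 0.6, 0.15, -0.10])
vcov = np.diag([0.04, 0.03, 0.002, 0.003])

# Draw coefficient vectors from the estimated sampling distribution
draws = rng.multivariate_normal(beta_hat, vcov, size=2000)

index_vals = np.arange(1, 8)
lo, hi = [], []
for z in index_vals:
    x = np.array([1.0, 1.0, z, z])           # group member at index z
    p = np.array([norm_cdf(x @ b) for b in draws])
    lo.append(np.percentile(p, 2.5))         # 95% simulation interval
    hi.append(np.percentile(p, 97.5))

for z, l, h in zip(index_vals, lo, hi):
    print(f"index={z}: 95% CI on Pr(y=1) = [{l:.3f}, {h:.3f}]")
```

Note that nothing here requires deriving anything analytically; it only needs the estimated coefficients and their covariance matrix, which is exactly why "reported no CIs" is hard to excuse.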
The stuff she did was the norm when she did it. This is why an AJPS was published in ~2010 on this subject. The criticisms of her APSR seem pedantic and/or divorced from changes in guidelines and standards over time.
Sorry, PA in 2006; 2 years after CG's article. 2010 AJPS is related, but not completely on topic.
The PA article was a paper that started circulating around 2003, so many people were aware of it. There were also earlier papers on the correct specification and computation of multiplicative interaction effects. And they had courses on logit/probit models at ICPSR. GK's UPM book also discussed these models. For an AP at a 10+ ranked school these mistakes would have been understandable; not so much for someone who went to Stanford and Harvard. Obviously, I wouldn't have expected her to have the calculus and stats background to derive interaction effects and predicted probabilities analytically, but the Clarify software has been around since 2001, so she could have reported simulated CIs without understanding much if any of the algebra and calculus required.
Her 2006 AJPS is also a variation on the neighborhood interaction effect, with similarly inadequate methodology (even very similar predicted probability graphs). The 2002 AJPS had logit regressions with interaction effects and not even simple predicted probability plots, only incorrect interpretations based on the tables.
So one of the APSRs and both AJPSs are based on incorrect tests of interaction effects, and the 2001 APSR is the logically inconsistent EI one discussed at polmeth. That's at least 4 out of 5 (can't recall what her JOP was about). Unfortunately she lacked breadth both theoretically and methodologically, she lacked the knowledge to apply correctly the few basic methods she was familiar with, and she didn't take any steps to improve her skills. Which explains her going basically deadwood over the past decade and a half.
I feel sorry for her, I can only imagine how devastating it must be to realize that everything you have ever published is flawed and you're too far behind to ever catch up, but many others with fewer resources have tried their best to learn and keep up with methods advances. I hope other minorities will avoid these pitfalls in the future. A 300K administrator salary is not enough to compensate for the suffering of a failed academic career.
This is so painfully stupid it's really adorable! This person doesn't have the slightest clue about the most basic issues of empirical testing and analysis. Do you realize how ignorant of analysis you need to be to write: "Where CIs overlap, differences are not statistically significant." Nope. Not how that works.
Her hypotheses are about the effect of an interaction term. That is correctly tested with an actual test statistic, through a probit model and coefficient tests. Visuals are very nice, too, but looking at overlapping CIs is not a test of where differences are significant or whether the overall interaction is. If you disagree, I'm sorry that you're about to fail your freshman econometrics final.
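For concreteness, the "actual test statistic" here would be the Wald z on the interaction coefficient, which is only asymptotically standard normal under the null (there is no exact finite-sample distribution in probit). A toy computation with made-up numbers:

```python
import math

# Made-up probit output for the interaction term
beta_interaction = 0.20     # estimated coefficient
se_interaction = 0.08       # estimated standard error

# Wald statistic; under H0: beta = 0 it is asymptotically N(0, 1)
z = beta_interaction / se_interaction

# Two-sided p-value from the standard normal CDF
p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```

That is the coefficient test being invoked above; whether it is sufficient on its own for conditional hypotheses is exactly what the rest of this thread is fighting about.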
It's really incredible how easy it is to knock down every last CG criticism. It's all a mix of complete fantasy (mystery fraud!) and freshman-level quant mistakes. Why? Because the opposition isn't borne out of principled empirical concerns or anything like that, but pure racial grievance. She's black, so there must be something wrong with her work.
Note that Theirn's "critique" above is even more dishonest and empirically illiterate. Again laughably thinks you test interaction terms by comparing CIs in a predicted probability graph! That's amazing. Also buys the lie about the EI model, then babbles about lack of originality or something. So note exactly zero credible quantitative critiques, zero theoretical critiques, and zero IQ points.
Short version: Racist rubes try to attack Harvard Dean. Fall on their faces. Employed people laugh.
Statistics is racist!
Well, since CG is the successful one who correctly applied statistics, then maybe it's reverse racist? You're from the crew who thinks you test interaction terms by looking at where CIs overlap in a predicted probability graph. So cute! You're gonna make something out of that GED some day!
Just so everyone's clear, these racist idiots are *desperate* to find something to attack CG with, and literally the best thing they can come up with is she didn't include CIs in her predicted probability graphs in a paper from 2004. That's hilarious! Imagine being so good at your job that that's the best someone can find against you!!
How did CG not fail her methods courses? Ok, just for fun: what is the "test statistic", and what is its exact sampling distribution under the null hypothesis?
I will respond with quotes from the PA article mentioned by another poster above.
http://mattgolder.com/files/research/pa_final.pdf
“The magnitude of the interaction effect in nonlinear models does not equal the marginal effect of the interaction term, can be of opposite sign, and its statistical significance is not calculated by standard software. […] It is, therefore, incorrect to say that a positive and significant coefficient on X (or Z) indicates that an increase in X (or Z) is expected to lead to an increase in Y. “
“Analysts should provide a substantively meaningful description of the marginal effects of the independent variables and the uncertainty with which they are estimated. “
“The analyst cannot even infer whether X has a meaningful conditional effect on Y from the magnitude and significance of the coefficient on the interaction term either. As we showed earlier, it is perfectly possible for the marginal effect of X on Y to be significant for substantively relevant values of the modifying variable Z even if the coefficient on the interaction term is insignificant. Note what this means. It means that one cannot determine whether a model should include an interaction term simply by looking at the significance of the coefficient on the interaction term. ”
“The point here is that the typical results table often conveys very little information of interest because the analyst is not concerned with model parameters per se; he or she is primarily interested in the marginal effect of X on Y […] for substantively meaningful values of the conditioning variable Z. “
“If a multiplicative interaction model is employed, it is nearly always necessary for the analyst to go beyond the traditional results table in order to convey quantities of interest such as the marginal effect of X on Y. […] If the conditioning variable is continuous, the analyst must work a little harder. A simple figure can be used to succinctly illustrate the marginal effect of X and the corresponding standard errors across a substantively meaningful range of the modifying variable(s).“
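In the linear version of the model the quoted point is easy to verify mechanically: with Y = b0 + b1*X + b2*Z + b3*X*Z, the marginal effect of X is b1 + b3*Z, with variance Var(b1) + Z^2*Var(b3) + 2*Z*Cov(b1, b3). The invented numbers below give an "insignificant" interaction coefficient while the marginal effect of X is still significant at some values of Z and not at others:

```python
import numpy as np

# Invented OLS estimates for Y = b0 + b1*X + b2*Z + b3*X*Z + e
b1, b3 = 0.5, -0.2
var_b1, var_b3, cov_b1_b3 = 0.04, 0.0225, 0.0

# The interaction coefficient alone looks "insignificant"...
z_interaction = b3 / np.sqrt(var_b3)          # -0.2 / 0.15, |z| < 1.96

# ...but the marginal effect of X, b1 + b3*Z, and its delta-method
# standard error vary with the modifying variable Z:
Z = np.linspace(0, 4, 9)
me = b1 + b3 * Z
se = np.sqrt(var_b1 + Z**2 * var_b3 + 2 * Z * cov_b1_b3)

for zval, m, s in zip(Z, me, se):
    sig = "significant" if abs(m / s) > 1.96 else "not significant"
    print(f"Z={zval:.1f}: effect={m:.2f}, se={s:.2f} -> {sig}")
```

With these numbers the effect of X is significant at low Z and fades as Z grows, even though the interaction coefficient never clears 1.96 on its own, which is precisely the figure-plus-uncertainty reporting the article recommends.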