This is getting just cringeworthy. Basically every stats bro troll has now admitted I'm right (the clear C&B evidence was a real killer for them), but we got a couple real diehards who are impervious to the most basic definitions and clear evidence. Admitting they were wrong is just too painful so no amount of logic or facts will ever matter.
Some user claims CG@Harvard is a plagiarist

1. Read the solution to 8.37.c. They construct a capital T_{n-1} r.v. which they use as the test statistic to compare to lowercase t_{n-1, \alpha}, which is the critical value.
2. The sample mean meets the first criterion but not the second. That's why it can't be used as a test statistic. You don't know its exact distribution. You've been asked to specify it and you couldn't.
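For reference, the construction being described (build T = (Xbar - mu0)/(S/sqrt(n)) and compare it to the critical value t_{n-1, \alpha}) can be sketched in a few lines. The data, the null value mu0, and the alternative H1: mu > mu0 are all made up for illustration; the hard-coded critical value is the upper 5% point of the t distribution with 9 degrees of freedom.

```python
# Sketch of the T_{n-1} construction: T = (xbar - mu0) / (s / sqrt(n)),
# compared to the critical value t_{n-1, alpha}. Data are hypothetical.
import math
import statistics

x = [5.1, 4.9, 6.2, 5.5, 4.8, 5.9, 5.3, 5.7, 5.0, 5.4]  # made-up sample
mu0 = 5.0                        # null hypothesis value, H0: mu <= mu0
n = len(x)
xbar = statistics.mean(x)
s = statistics.stdev(x)          # sample std dev (n-1 denominator)
T = (xbar - mu0) / (s / math.sqrt(n))

t_crit = 1.833                   # t_{9, 0.05}: one-sided 5% critical value, df = n - 1 = 9
reject = T > t_crit
print(f"T = {T:.3f}, reject H0: {reject}")
```

T has an exact Student t distribution with n-1 degrees of freedom under the null, which is why a fixed table value t_{n-1, alpha} can serve as the cutoff.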
What are you hoping to achieve by lying about things in C&B that can be verified in 5 minutes? Are you hoping posters aren't paying attention? We see right through your theatrics. I think it's clear even to you that you lost.
More complete idiocy.
1. Again thinks a thing isn't true if you solve it in a certain way. "Hey, does y=3x qualify as an equation? No, because you can turn it into x+y=3 and that's an equation. Only one of those can be an equation (for some unexplained reason), so there." That's literally how dumb this person's argument is.
2. Completely ignores the clear evidence that (b) is satisfied. You do know its distribution. It's a t distribution times the estimated standard error, as I've stated about a hundred times. That's why C&B construct a test using that exact distribution and show it works as a test. Right there in the problem. See those black marks on the page? That's them using the sample mean as a test statistic. Yup, right there.
This troll is basically resorting to pure gaslighting denialism. Plain evidence right in front of his face doesn't count or isn't real because he doesn't want it to be.
1. You claimed there's no capital T variable. Just the sample mean. There's obviously a capital T variable in the solutions to C&B that acts as the test statistic. It's not Xbar. It's a different random variable. Hence Xbar is NOT the test statistic.
2. You can't have that distribution depend on an estimated quantity. If you estimate it you don't know it. You must compare it to a distribution of known parameters, such as the standard normal (mean 0, variance 1). SE(bhat) depends on the variance of bhat. That violates the criterion for the comparison distribution. If you could assume the variance of the beta coefficient and treat its square root (SE) as fixed and known, it would be logically inconsistent to estimate bhat. 
Everyone disagrees with you, including Casella and Berger. The so-called "kill shot" is Exercise 8.37.c from Casella & Berger, which you misinterpreted as an example showing that the sample mean can be a test statistic. The official solutions, written by the authors themselves, revealed that the correct approach is to construct a test statistic T_{n-1} which follows a Student t distribution, which you can compare to the given critical value t_{n-1, \alpha}. Therefore, the sample mean cannot be used as a test statistic.
Despite being proved wrong with official evidence, you keep insisting that you're right and everyone agrees with you. Just because you keep telling that lie in the face of evidence from the authors themselves proving you wrong will never make your claims true. How can you stand to look at yourself in the mirror knowing you're such a compulsive liar?
This is getting just cringeworthy. Basically every stats bro troll has now admitted I'm right (the clear C&B evidence was a real killer for them), but we got a couple real diehards who are impervious to the most basic definitions and clear evidence. Admitting they were wrong is just too painful so no amount of logic or facts will ever matter.

1. You claimed there's no capital T variable. Just the sample mean. There's obviously a capital T variable in the solutions to C&B that acts as the test statistic. It's not Xbar. It's a different random variable. Hence Xbar is NOT the test statistic.
2. You can't have that distribution depend on an estimated quantity. If you estimate it you don't know it. You must compare it to a distribution of known parameters, such as the standard normal (mean 0, variance 1). SE(bhat) depends on the variance of bhat. That violates the criterion for the comparison distribution. If you could assume the variance of the beta coefficient and treat its square root (SE) as fixed and known, it would be logically inconsistent to estimate bhat.
1. I "claimed there's no capital T variable"? What? This is a complete lie. It's not in the problem itself, but it's completely irrelevant to me whether this is in the solution, since the solution is irrelevant to whether the sample mean can act as a test statistic. Plus, you're still stuck in the immature thinking that there can only be one and exactly one test statistic. This is nonsense and based on nothing.
2. My God, is there nothing that will let basic concepts sink into your miles-thick skull? This "violates the criterion for the comparison distribution"? Nope! C&B have an entire section explaining that you can condition a distribution on an observed statistic. Please let me know how many times I have to repeat this before you understand. 100 times? 1000?

Everyone disagrees with you, including Casella and Berger. The so-called "kill shot" is Exercise 8.37.c from Casella & Berger, which you misinterpreted as an example showing that the sample mean can be a test statistic. The official solutions, written by the authors themselves, revealed that the correct approach is to construct a test statistic T_{n-1} which follows a Student t distribution, which you can compare to the given critical value t_{n-1, \alpha}. Therefore, the sample mean cannot be used as a test statistic.
Despite being proved wrong with official evidence, you keep insisting that you're right and everyone agrees with you. Just because you keep telling that lie in the face of evidence from the authors themselves proving you wrong will never make your claims true. How can you stand to look at yourself in the mirror knowing you're such a compulsive liar?
This is getting just cringeworthy. Basically every stats bro troll has now admitted I'm right (the clear C&B evidence was a real killer for them), but we got a couple real diehards who are impervious to the most basic definitions and clear evidence. Admitting they were wrong is just too painful so no amount of logic or facts will ever matter.
This complete idiot is still going with the laughable argument that how you prove something to be true negates it being true. That's his actual argument! He actually believes y=3x is not an equation since you can turn it into y+x=3 and that's an equation. Did you know only one thing can be an equation? That's what this guy's selling!
It's just incredible stuff. C&B literally set up a problem showing that you can compare the sample mean to a distribution under the null. They call it a "test." They show it can test your null hypothesis. This is the exact definition of a test statistic. But does it count as one to the stats bros? No, because you compute something else to *prove* the sample mean works as a test statistic. So that like eliminates it working by the madeup Conservation of Test Statistics or something. Do these people have actual brain damage?

This whole debate really shows how insane people can get when embarrassed. A totally innocuous point (that coefficients can act as test statistics through a simple rearrangement of terms) gets transformed into an existential crisis for the stats bros. Now even the plainest and clearest possible evidence (a direct confirmation from C&B) means nothing to them. All that matters is saving their depleted egos from admitting the obvious painful truth that they were wrong and everyone knows it.

1. I "claimed there's no capital T variable"? What? This is a complete lie. It's not in the problem itself, but it's completely irrelevant to me whether this is in the solution, since the solution is irrelevant to whether the sample mean can act as a test statistic. Plus, you're still stuck in the immature thinking that there can only be one and exactly one test statistic. This is nonsense and based on nothing.
It is *highly* relevant. Up until the official solutions were posted, you kept claiming that C&B ask you to prove that the mean is a valid test statistic. Stats bros have been telling you that you're actually supposed to construct that capital T_{n-1} which follows a Student t distribution. What the solutions show is that stats bros were right and you were wrong. It's not irrelevant to you. It's *inconvenient* to you, because that capital T in the solutions was the final nail in your coffin.

How is Fisher's exact test relevant to CG's claim that the mean can be a test statistic?
C&B use this example to show how test statistics can be compared to distributions that are conditional on other observed statistics. This disproves the claim of several posters that this is not allowed for test statistics. In particular, it shows that the sample mean can be used as a test statistic, where the distribution it is compared to is conditional on the estimated standard error.

1. I "claimed there's no capital T variable"? What? This is a complete lie. It's not in the problem itself, but it's completely irrelevant to me whether this is in the solution, since the solution is irrelevant to whether the sample mean can act as a test statistic. Plus, you're still stuck in the immature thinking that there can only be one and exactly one test statistic. This is nonsense and based on nothing.
It is *highly* relevant. Up until the official solutions were posted, you kept claiming that C&B ask you to prove that the mean is a valid test statistic. Stats bros have been telling you that you're actually supposed to construct that capital T_{n-1} which follows a Student t distribution. What the solutions show is that stats bros were right and you were wrong. It's not irrelevant to you. It's *inconvenient* to you, because that capital T in the solutions was the final nail in your coffin.
Your argument continues to be so stupid it's painful. Yes, I'm claiming that C&B "ask you to prove that the mean is a valid test statistic." This is because I understand how to read. Right there in the problem they ask you to prove this. They present an equation where the sample mean is compared to a distribution and they ask you to prove this works as a hypothesis test. By their definition (or any knowledgeable person's definition), this is the sample mean working as a test statistic. You have no response to this incredibly obvious fact that proves you wrong.
You idiotically continue to insist that *how* you prove something invalidates it being true. It's literally one of the dumbest arguments I've ever seen in print. Multiple things can be test statistics, so showing that the mean working as a test statistic flows from T working as a test statistic does not invalidate the first fact. It just shows *why* it's true. I can't believe I have to actually write this down. Again, no response to this incredibly obvious argument.
Is it seriously possible that you don't understand the most basic elements of logic like this? Are you just trolling? Or did getting completely destroyed in this thread just melt your mind, leaving you incapable of rational thought? Those are the only three options I can think of.
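The algebraic point at issue in the post above (rejecting when T = (Xbar - mu0)/(S/sqrt(n)) exceeds t_{n-1, alpha} is the same event as rejecting when Xbar exceeds mu0 + t_{n-1, alpha} * S/sqrt(n)) is easy to check numerically. The simulation settings below are arbitrary, chosen only for illustration.

```python
# Numerical check that the two rejection rules describe the same event:
#   rule 1: T = (xbar - mu0) / (s / sqrt(n)) > t_crit
#   rule 2: xbar > mu0 + t_crit * s / sqrt(n)
# They coincide because s / sqrt(n) > 0, so multiplying through preserves
# the inequality. Data-generating settings are arbitrary.
import math
import random
import statistics

random.seed(0)
mu0, t_crit, n = 0.0, 1.833, 10   # t_crit = t_{9, 0.05}, one-sided

agree = True
for _ in range(1000):
    x = [random.gauss(0.2, 1.0) for _ in range(n)]
    xbar, s = statistics.mean(x), statistics.stdev(x)
    T = (xbar - mu0) / (s / math.sqrt(n))
    rule_T = T > t_crit
    rule_mean = xbar > mu0 + t_crit * s / math.sqrt(n)
    agree = agree and (rule_T == rule_mean)

print("rules always agree:", agree)
```

The agreement is exact, not approximate: the two inequalities are related by multiplying both sides by the positive quantity s/sqrt(n).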

How is Fisher's exact test relevant to CG's claim that the mean can be a test statistic?
C&B use this example to show how test statistics can be compared to distributions that are conditional on other observed statistics. This disproves the claim of several posters that this is not allowed for test statistics. In particular, it shows that the sample mean can be used as a test statistic, where the distribution it is compared to is conditional on the estimated standard error.
Thank you for arguing so cogently that you can also apply the same logic to prove that SE(Bhat) is a test statistic. I think that debate is closed.

How is Fisher's exact test relevant to CG's claim that the mean can be a test statistic?
C&B use this example to show how test statistics can be compared to distributions that are conditional on other observed statistics. This disproves the claim of several posters that this is not allowed for test statistics. In particular, it shows that the sample mean can be used as a test statistic, where the distribution it is compared to is conditional on the estimated standard error.
Thank you for arguing so cogently that you can also apply the same logic to prove that SE(Bhat) is a test statistic. I think that debate is closed.
No argument. No response. Loses one debate, so tries to distract with another irrelevant one. The mark of a loser who knows he's a loser!

How is Fisher's exact test relevant to CG's claim that the mean can be a test statistic?
C&B use this example to show how test statistics can be compared to distributions that are conditional on other observed statistics. This disproves the claim of several posters that this is not allowed for test statistics. In particular, it shows that the sample mean can be used as a test statistic, where the distribution it is compared to is conditional on the estimated standard error.
Thank you for arguing so cogently that you can also apply the same logic to prove that SE(Bhat) is a test statistic. I think that debate is closed.
No argument. No response. Loses one debate, so tries to distract with another irrelevant one. The mark of a loser who knows he's a loser!
Huh?

Just to review:
Is there a whole section in Casella & Berger showing you can have a test statistic where the distribution is conditioned on another statistic, despite all the stats bros insisting that wasn't allowed? Yup. No argument from anyone disputing this.
Is there an example where they show a sample mean used as a test statistic? Yup. No serious argument against this. Plain evidence right there.
Is the test they set up the exact one I proposed down to the exact equation? Yup. No chance to deny that. Right there in the thread.
Is that checkmate? Yup. Also known as a kill shot, the bro sweeper, and the perfect vindication!

How is Fisher's exact test relevant to CG's claim that the mean can be a test statistic?
C&B use this example to show how test statistics can be compared to distributions that are conditional on other observed statistics. This disproves the claim of several posters that this is not allowed for test statistics. In particular, it shows that the sample mean can be used as a test statistic, where the distribution it is compared to is conditional on the estimated standard error.
Which is the sample mean in their notation (S1, S2, S)? And what distribution does it follow under the null?

How is Fisher's exact test relevant to CG's claim that the mean can be a test statistic?
C&B use this example to show how test statistics can be compared to distributions that are conditional on other observed statistics. This disproves the claim of several posters that this is not allowed for test statistics. In particular, it shows that the sample mean can be used as a test statistic, where the distribution it is compared to is conditional on the estimated standard error.
Which is the sample mean in their notation (S1, S2, S)? And what distribution does it follow under the null?
In the Fisher Test example, they use S1 (total 1s from one subsample) as the test statistic and condition the distribution they compare to on S (total 1s from the whole sample). This proves you can set up a test statistic in which the distribution you compare to is conditioned on an observed statistic, proving the stats bros wrong yet again.
Similarly, in a later problem, they set up the sample mean as a test statistic and compare it to a distribution that's conditioned on the estimated standard error. They ask you to confirm this works as a hypothesis test. In fact, in the solutions, they show the sample mean working as a test statistic flows directly from the fact that T works as a test statistic. One implies the other. Wow, exactly as I was saying from the very beginning!
This was the final piece of evidence that proved the stats bros wrong. About 90% have now begrudgingly admitted it, with one or two of the dumber ones pretending there's still doubt. It's been a fun ride winning this! Thanks to C&B for providing the definitive evidence proving me right!!

Question:
How is Fisher's exact test relevant to CG's claim that the mean can be a test statistic?
Answer:
C&B use this example to show how test statistics can be compared to distributions that are conditional on other observed statistics. This disproves the claim of several posters that this is not allowed for test statistics. In particular, it shows that the sample mean can be used as a test statistic, where the distribution it is compared to is conditional on the estimated standard error.
Question:
Which is the sample mean in their notation (S1, S2, S)? And what distribution does it follow under the null?
Answer:
In the Fisher Test example, they use S1 (total 1s from one subsample) as the test statistic and condition the distribution they compare to on S (total 1s from the whole sample). This proves you can set up a test statistic in which the distribution you compare to is conditioned on an observed statistic, proving the stats bros wrong yet again.
Two tiny paragraphs of answers. I never thought it would be possible to pack so many inaccurate statements about Fisher's exact test into such a small amount of text.

Question:
How is Fisher's exact test relevant to CG's claim that the mean can be a test statistic?
Answer:
C&B use this example to show how test statistics can be compared to distributions that are conditional on other observed statistics. This disproves the claim of several posters that this is not allowed for test statistics. In particular, it shows that the sample mean can be used as a test statistic, where the distribution it is compared to is conditional on the estimated standard error.
Question:
Which is the sample mean in their notation (S1, S2, S)? And what distribution does it follow under the null?
Answer:
In the Fisher Test example, they use S1 (total 1s from one subsample) as the test statistic and condition the distribution they compare to on S (total 1s from the whole sample). This proves you can set up a test statistic in which the distribution you compare to is conditioned on an observed statistic, proving the stats bros wrong yet again.
Two tiny paragraphs of answers. I never thought it would be possible to pack so many inaccurate statements about Fisher's exact test into such a small amount of text.
My God, the stupid. It's just blinding. Do you idiots really need this stuff repeated 10,000 times? Maybe accept that you're just too dim to get it?
There are two examples being referenced. One is on Fisher and demonstrates you can condition distributions on observed statistics. Every word there is correct and you've challenged nothing. The other uses the sample mean as a test statistic. See above for page references. This was crystal clear from the beginning, but doofus here just can't get it.
No matter how simple the explanation, how basic the material, he can ponder it for a thousand years and he'll still be here asking, so where is the mean in the fisher test?

Question:
How is Fisher's exact test relevant to CG's claim that the mean can be a test statistic?
Answer:
C&B use this example to show how test statistics can be compared to distributions that are conditional on other observed statistics. This disproves the claim of several posters that this is not allowed for test statistics. In particular, it shows that the sample mean can be used as a test statistic, where the distribution it is compared to is conditional on the estimated standard error.
Question:
Which is the sample mean in their notation (S1, S2, S)? And what distribution does it follow under the null?
Answer:
In the Fisher Test example, they use S1 (total 1s from one subsample) as the test statistic and condition the distribution they compare to on S (total 1s from the whole sample). This proves you can set up a test statistic in which the distribution you compare to is conditioned on an observed statistic, proving the stats bros wrong yet again.
Fisher's exact test is about comparing the probability parameters of two independent Binomials, and the test statistic is obtained by conditioning the random variable giving the number of successes in one sample (S1) on the random variable capturing the total number of successes across the two samples (S = S1 + S2). S1 is neither the sample mean, nor the test statistic. The test statistic, call it T, is derived as P(T = t) = P(S1 = s1 | S = s), using the simple definition of conditional probability and doing some simple algebra to show it follows a Hypergeometric distribution with known parameters. In no way does it follow from this example that the sample mean can be a test statistic.
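As a sketch of that derivation: conditional on the total S = s, S1 follows a Hypergeometric distribution with known parameters, and the one-sided p-value is its upper tail. The 2x2 table values below are hypothetical, chosen only for illustration.

```python
# One-sided Fisher's exact test for a 2x2 table via the Hypergeometric pmf:
#   P(S1 = k | S1 + S2 = s) = C(n1, k) * C(n2, s - k) / C(n1 + n2, s),
# where n1, n2 are the two sample sizes and s is the total number of
# successes. All parameters are known, so the null distribution is known.
import math

def hypergeom_pmf(k, n1, n2, s):
    """P(S1 = k | S1 + S2 = s) under the null p1 = p2."""
    return math.comb(n1, k) * math.comb(n2, s - k) / math.comb(n1 + n2, s)

# Hypothetical table: 8/10 successes in sample 1, 3/10 in sample 2.
n1, n2, s1, s2 = 10, 10, 8, 3
s = s1 + s2
# One-sided p-value: probability of s1 or more successes in sample 1,
# conditional on the observed total s.
p_value = sum(hypergeom_pmf(k, n1, n2, s)
              for k in range(s1, min(n1, s) + 1))
print(f"one-sided p-value = {p_value:.4f}")
```

Note that nothing here involves a sample mean: the test statistic is the conditioned count S1, and its null distribution is fully specified.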