It's over. I've found direct confirmation of my claim that a coefficient can serve as a test statistic, in Casella & Berger's problem set (Exercise 8.37c, p. 408). Their more general discussion of the point is on p. 399.

Recall that I've been mocked by the stats bros for saying you can set up a test by comparing the coefficient to a critical value from the t distribution scaled by the estimated standard error, which I wrote as t*se.

Well, right here in C&B, they consider the even simpler case where you're testing whether a mean is greater than some value, call it 0. (This, of course, is equivalent to a regression with just a constant term.) For the usual test, you would take the mean and divide by the estimated se (S/sqrt(N)), then compare that test statistic to a critical value from the t distribution.
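A minimal sketch of the usual test in Python, stdlib only. The data are made up, `one_sample_t` is my own illustrative helper, and the critical value is hard-coded from a standard t table rather than computed:

```python
import math
import statistics

# One-sided 5% critical value for the t distribution with 9 degrees of
# freedom, taken from a standard table (hard-coded to keep this stdlib-only).
T_CRIT = 1.833

def one_sample_t(x, mu0=0.0):
    """Usual test: t = (xbar - mu0) / (S / sqrt(N)); reject if t > T_CRIT."""
    n = len(x)
    xbar = statistics.mean(x)
    se = statistics.stdev(x) / math.sqrt(n)   # estimated standard error
    t_stat = (xbar - mu0) / se
    return t_stat, t_stat > T_CRIT

# Illustrative data (N = 10, so df = 9 matches T_CRIT above).
x = [0.5, 1.2, -0.3, 0.8, 1.5, 0.2, 0.9, 1.1, -0.1, 0.7]
t_stat, reject = one_sample_t(x)
```

Here the t ratio is the test statistic and the comparison is against a fixed critical value.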

C&B instead propose a test where you compare the mean itself to 0 + t_N-1 * S/sqrt(N), where t_N-1 is the appropriate critical value from the t distribution with N-1 degrees of freedom; you reject when the mean exceeds that threshold. You're asked to verify this is a proper test. By their definition, this makes the mean a test statistic. My argument is exactly parallel down to the equation (and exactly the same for a regression with just a constant). I'll trust that C&B know what they're talking about.
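The equivalence is mechanical: since the estimated se is positive, xbar > t*se holds exactly when xbar/se > t. A quick simulation sketch (stdlib only; the simulated-data parameters and the hard-coded table critical value for 9 df are my own choices, not from C&B):

```python
import math
import random
import statistics

# One-sided 5% critical value for t with 9 df (standard table value).
T_CRIT = 1.833
N = 10

random.seed(0)
agreements = 0
for _ in range(1000):
    x = [random.gauss(0.3, 1.0) for _ in range(N)]  # simulated sample
    xbar = statistics.mean(x)
    se = statistics.stdev(x) / math.sqrt(N)
    # C&B-style test: compare the mean itself to 0 + t * se ...
    reject_mean = xbar > 0 + T_CRIT * se
    # ... versus the usual test: compare the t ratio to t.
    reject_ratio = xbar / se > T_CRIT
    agreements += (reject_mean == reject_ratio)

# The two formulations agree on every sample: they are the same test.
```

Both versions reject exactly the same samples, which is the whole point: calling the mean (or the coefficient) the test statistic just moves the se to the other side of the inequality.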

For the trolls, the only lesson I'd point to is: don't assume you know everything about a subject. When someone looks at something in an unfamiliar way, that's sometimes a sign they understand the concept at a deeper level and can, horrors, make an original point.