Today, I'm at home sick, so I have a little time to sit here at the computer. I briefly mentioned a problem I have with the general "Sabermetric Community" when I posted about the replacement-player argument at The Book Blog (which actually stemmed from an unfounded accusation that a bunch of "stupid economists" wrote a flawed paper using Factor Analysis--not really an econometric technique). However, I think JC Bradbury does a much better job than I do in his interview at Chop n' Change.
Bradbury sums up my thoughts on interactions with this group of people pretty well. The general pattern on many sites is to simply ignore or misrepresent any sort of perceived conflicting view (even if that view isn't actually conflicting with anything). Now, I am not here to state that these people aren't intelligent, or that it doesn't happen to some extent on both sides of the issue. To the contrary, many of them are very smart people, but with an unfortunate arrogance that I don't understand. In my discussions with top sports economists, any inconvenient truth presented seems to simply be ignored or, as Bradbury puts it, "chastised without heeding the point."
An example is my previous post on discrimination in the NHL. While Phil Birnbaum claims that the book makes "premature accusations", this discrimination has been documented and studied for more than 20 years in the sports economics literature (if that interests you, see the citations in my previous post). My mention of these papers--supplemented by a sarcastic yet friendly post by sports economist Rodney Fort about making sure to be well read on a subject before heavily criticizing it--went unheard for the rest of the thread. The conversation continued as if this were truly a new problem.
This isn't an isolated incident. I recently read an article over at the Harvard Sports Analysis Collective that essentially looked at a time series of competitive balance. While I think this website is a great learning tool for Harvard students, it amazes me that their resident Harvard statistician allowed this article to be posted, for a few reasons. The first is simply that taking the standard deviation of wins is problematic when comparing across years. The number of teams and games has changed dramatically over this period, making it very difficult to compare across seasons. In addition, changes in competitive balance have been well documented by Rodney Fort and Young Hoon Lee in a series of papers from 2005 onward. That DOES NOT mean that further analysis is inappropriate. To the contrary, more inspection is needed. However, presenting work with no reference to, or understanding of, these problems is troublesome. Finally, allowing these students to take others' work on the internet as a given isn't something we would want going on at an institution like Harvard. In fact, the last thing we want is for Harvard graduates and students to participate in what Bradbury calls a "groupthink attitude".
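To make the schedule-length problem concrete, here is a small sketch (my own illustration, not the Collective's article or Fort and Lee's actual methodology) of one standard fix from the literature: dividing the observed standard deviation of win percentage by the "idealized" standard deviation a coin-flip league would produce, 0.5/sqrt(G). The raw SD of wins mechanically shrinks or grows with games played, while the ratio is comparable across seasons. The team records below are made up for the example.

```python
import math

def balance_ratio(win_pcts, games_per_team):
    """Ratio of the actual SD of win percentage to the idealized SD
    (0.5 / sqrt(G)) expected if every game were a 50/50 coin flip.
    A ratio near 1.0 indicates a perfectly balanced league; larger
    values indicate less balance. Comparable across schedule lengths."""
    n = len(win_pcts)
    mean = sum(win_pcts) / n
    actual_sd = math.sqrt(sum((w - mean) ** 2 for w in win_pcts) / n)
    ideal_sd = 0.5 / math.sqrt(games_per_team)
    return actual_sd / ideal_sd

# Hypothetical league: the same spread of win percentages means
# different things in a 154-game season vs. a 162-game season,
# because random noise alone shrinks as schedules lengthen.
pcts = [0.650, 0.600, 0.550, 0.500, 0.450, 0.400, 0.350, 0.600]
print(round(balance_ratio(pcts, 154), 2))  # → 2.46
print(round(balance_ratio(pcts, 162), 2))  # slightly higher: same spread, longer season
```

The point of the normalization is exactly the one above: comparing raw standard deviations of wins across eras conflates real changes in balance with changes in schedule length and league size.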
Finally, Bradbury mentions an article at The Book Blog that completely abuses a model developed by John Hakes and Skip Sauer. I had in fact read the article by Tango and was appalled at the misuse of the model myself. I began writing a response explaining the difficulty of extrapolating a regression outside its sample, but decided it would simply fall on deaf ears. At this point, I just don't bother. It seems that others who do not look upon Tango as some sort of cult leader have given up as well.
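The extrapolation problem is easy to demonstrate with a toy example (this is my own illustration with made-up data, not the Hakes-Sauer model itself): a regression fit on one range of the data can look fine in-sample and still be wildly wrong outside that range, because the fit says nothing about the relationship where there are no observations.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# The true relationship is quadratic, but over x in [0, 2] it is
# close enough to linear that a line fits reasonably well.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]
a, b = fit_line(xs, ys)          # a = -0.5, b = 2.0

# Extrapolating far outside the sample: the line predicts 19.5
# at x = 10, while the true value is 100.
print(a + b * 10, 10 * 10)       # → 19.5 100
```

The in-sample fit gives no warning at all; the failure only appears where the model was never estimated, which is precisely why extrapolating a regression beyond its sample range is hazardous.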
I am extremely excited about a forthcoming special issue of the Journal of Sports Economics that discusses many of these problems in depth, written by some of the most vocal, and most knowledgeable, economists in the field. Hopefully self-proclaimed "subject matter experts" will take some of the implications in the issue to heart. However, my expectation is that none of them will bother to read it.
The current state of The Book Blog reminds me of sitting in MBA economics classes at the Business School here at Michigan. Without understanding that models are deliberately simplified in order to explain average market behavior, arrogant students consistently chastise the professor over a single counterexample they ran into at work. The professor's response is always, "Well, of course there are anomalies, but on average X happens," as if he were waiting for it. For some reason, that statement just doesn't get through to people, despite its generality. What blows my mind about this entire problem is that Sabermetricians believe they are critical thinkers. These are people who for years had their opinions ignored and their silly 'statistics' criticized for small sample sizes. Yet their arrogance continues to blind them to the fact that economics and Sabermetric study are so interrelated that ignoring basic economic principles can be counterproductive in progressing the science.
While I continue to post things on this blog, I want to reiterate that what I post here IS NOT something that should be taken as science. Most of what I write is a general brain dump, or interesting tidbits and extensions using projects from my statistics classes. I hope to post a monthly disclaimer to ensure this is understood, and to make clear that fostering discussion and well-read argument is part of my intention. My aim on this site is not to start an online pissing match, or to out-do anyone else. Please see my Introduction for how I think about the things I write. I try to write with the utmost care, but I can make mistakes. I hope they are pointed out in a manner conducive to discussion.
So let's all take a page from Rodney Fort's book, as he says: "Let's all READ MORE."
ADDENDUM: Here's the Rosenthal article...where he supports the idea of sabermetrics and claims their findings have greatly enhanced our understanding...yet gets heavily criticized elsewhere on the internet.