Acceptance Rates and a Measured Rant about Statistics
A publication’s acceptance rate is one of those numbers that seems impossible to pin down with any degree of certainty. Some quantitative phenomena are like that. Take, for instance, the coastline paradox. That one seems weird at first, but then when you think about how fractals work, it actually makes a lot of sense. Acceptance rates, in contrast, are prone to so many human errors that I find it hard to believe that anyone can take them seriously.
Almost every writer is familiar with Duotrope, and for a while, I took their acceptance rates at face value. But then I actually thought about how they gather their statistics. Let’s start by addressing a glaring issue: Duotrope’s figures are self-reported. Self-reporting is already a less-than-ideal sampling technique. For example, are people who get rejected by a journal more or less likely to report that response to Duotrope? Freelance writers are also a sensitive bunch, and it’s not hard to imagine them being more likely to report their successes than their failures. To be fair, Duotrope offers the disclaimer that a journal’s acceptance rate is often lower than what they report, but Duotrope is hesitant to say by how much. Data collection here seems less like a science and more like a free-for-all. Actually, better scratch that “free” part, because you need to pay in order to use Duotrope and report your acceptances and rejections.
Duotrope’s standards for which journals get listings also make no sense. Take, for instance, the reason they don’t have a listing for Prairie Schooner. Despite Duotrope’s assertion to the contrary, I know that Schooner has been putting out issues, and when I tried to bring this fact to the site’s attention, they told me I was wrong, even though I’d sent them links to prove that I was right. This wouldn’t be so bad if Duotrope didn’t decide to explain their choice in the most pompous, holier-than-thou way possible:
“Just as publications have guidelines for submission, so too does Duotrope have Criteria for Listings for any project to be included in our listings. Among other things, these requirements are necessary to ensure that we can keep our listings as timely and accurate as possible.”
That’s right—apparently Prairie Schooner isn’t rigorous enough for them. Sure thing, buddy.
Chillsubs, on the other hand, is more laid-back, but the way they describe publications and their standards is…interesting, to say the least. Their listing for Denver Quarterly is mostly straightforward, but the “vibe” section caught my eye. “Top-tier stuff. Not Paris Review, but ok.” Is that a backhanded compliment? Maybe a journal doesn’t need to be Paris Review to be good, did you ever think of that? Also, why are there two acceptance rates listed? What does that even mean? At least Duotrope bothers to explain their statistics, even if those statistics are suspect.
Strangely enough, they list that same “vibe” for McSweeney’s Internet Tendency. To which I say, “Of course it’s not Paris Review, it’s McSweeney’s Internet Tendency!” Beyond the fact that both are selective, it makes no sense to put Denver Quarterly and Internet Tendency in the same category.
Their “vibe” for Georgia Review is oddly passive-aggressive: “Very fancy very impressive very not fast.” What, you’re not even going to bother with commas? What did Georgia Review ever do to you? I don’t know anything about their data collection practices, but this “vibe” business just gives me…bad vibes.
The Submission Grinder is refreshing because it’s transparent about the fact that its statistics are self-reported and doesn’t hesitate to tell you the value of n (i.e., how many people have submitted to X journal and reported it to The Submission Grinder). It also provides handy graphs so that you can better understand the likely fate of your submission, and unlike other sites, it actually tells you how many people have withdrawn their submissions, which, depending on the value of n, can greatly impact the final acceptance rate. The users of this site seem to be more into genre fiction, though, so stats for publications like Denver Quarterly and Prairie Schooner are scant, while reports for journals such as Asimov’s and Clarkesworld run into the thousands. This site isn’t by any means perfect, but at least it’s intellectually honest.
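If you want to see how much withdrawals can move the needle when n is small, here’s a quick hypothetical calculation in Python. To be clear, this is not The Submission Grinder’s actual formula; the numbers are made up, and the only point is that with a couple dozen reports, whether withdrawn pieces sit in the denominator visibly changes the rate.

```python
# Hypothetical illustration (not The Submission Grinder's actual formula) of how
# withdrawn submissions can swing a computed acceptance rate when n is small.

def acceptance_rate(accepted: int, rejected: int, withdrawn: int,
                    count_withdrawn: bool) -> float:
    """Acceptance rate, with withdrawals either counted in or left out of the denominator."""
    denominator = accepted + rejected + (withdrawn if count_withdrawn else 0)
    return accepted / denominator

# Made-up report counts for a journal with a small n.
accepted, rejected, withdrawn = 2, 15, 8

print(f"Excluding withdrawals: {acceptance_rate(accepted, rejected, withdrawn, False):.1%}")  # 11.8%
print(f"Including withdrawals: {acceptance_rate(accepted, rejected, withdrawn, True):.1%}")   # 8.0%
```

With thousands of reports, as with Clarkesworld, the difference mostly washes out; with a handful, it doesn’t.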
Okay, so if self-reporting by submitters is sketchy, what about asking for statistics from the publications themselves? Well, that might be a problem. I won’t name names, but I was recently accepted by a journal that, according to one listing, has an acceptance rate of roughly 30%. According to emails I received from the journal, however, they receive around 10,000 submissions per year, and only around 300 of those are published, which works out to roughly 3%. So how does a 3% acceptance rate get inflated to 30%? I have no way of knowing, but I suspect the 30% figure comes from the journal’s editor-in-chief (EIC) reporting what she herself accepts, not what interns or secondary readers see. Often, the EIC of a well-established journal reads only the submissions that have made it past several rounds of readers; while interns might pass along one or two submissions per hundred to the next level, the EIC has far fewer submissions to evaluate and may end up selecting 20%, 30%, or even 50% of what’s presented to her. Thus, the smaller rate accounts for all submissions, while the larger accounts for only the shortlist. I’ve seen this discrepancy in publications like Drunk Monkeys, North Dakota Quarterly, Points in Case, and many others. I won’t pretend my explanation is the answer, but it’s my best guess as to why this happens so often.
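To put rough numbers on that guess, here’s a back-of-the-envelope sketch in Python. The 10,000 submissions and 300 acceptances are from the journal’s emails; the implied shortlist size and first-round pass rate are my own inferences from the listed 30%, not anything the journal confirmed.

```python
# Back-of-the-envelope sketch: how an editor-in-chief's rate on a shortlist
# can get reported as if it were the journal's overall acceptance rate.
# The shortlist size below is inferred, not a figure from the journal.

total_submissions = 10_000   # what the journal's emails say it receives per year
published = 300              # what the journal's emails say it publishes per year
reported_rate = 0.30         # the rate shown on the listing

overall_rate = published / total_submissions                    # ~3%
implied_shortlist = published / reported_rate                   # ~1,000 pieces reach the EIC
first_round_pass_rate = implied_shortlist / total_submissions   # ~10% survive the readers

print(f"Overall acceptance rate:   {overall_rate:.1%}")          # 3.0%
print(f"Implied shortlist size:    {implied_shortlist:.0f}")     # 1000
print(f"First-round pass rate:     {first_round_pass_rate:.1%}") # 10.0%
print(f"EIC rate on the shortlist: {published / implied_shortlist:.1%}")  # 30.0%
```

If that inference is right, both numbers are “true”; they just describe different stages of the same pipeline.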
It’s often said that data don’t lie, but people do. Well, lying is a matter of intent. Depending on collection and sampling methods, data can still be mistaken without being dishonest. That’s why you should look at all statistics critically. Too often, people see quantitative information and simply assume it’s legitimate because, let’s face it, numbers are intimidating and most of us would rather not think about them too deeply.
Also, if you happen to be someone who pays for Duotrope, please email me and explain why.