
A few days ago, I mentioned that I really like the philosophy journal wiki, but that it would be nice to have a more automated system that presents the data in an easier-to-digest format. I have since been experimenting with various ways to do this, and I think I’ve settled on a method.

The following will do what philosophers want the journal wiki to do. It combines the easy submission style of a survey with the easy data presentation of a spreadsheet.

To get an idea of how this will work, I ported the data about Mind from the philosophy journal wiki into a Google Spreadsheet and created a survey form that will add new data to the spreadsheet in the desired order. Below you can see the results, with the survey beneath them. Notice that you can click on the “Raw Data” tab to see all of the results.

If people like this method, I could have everything up and running very soon. So…what do you all think?

UPDATE: The really important questions that need to be answered are: Is there anything about the surveys that should be changed? Is there anything about the data tables that needs to be changed? I don’t want to clone this thing 90+ times without having the format fine-tuned the way people want it.

UPDATE: I’ve been asked about spam-bot protection. Google Forms doesn’t have that built in, but I think I have something I can do that’s (a) simple for us philosophers, and (b) will help keep the spam bots away. It’s not quite a captcha, but the principle is the same.
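
For the curious, here’s a minimal sketch of the principle (in Python, purely for illustration; the challenge field and its expected answer are hypothetical, and the real check would run wherever submissions are processed):

    # Hypothetical sketch of a captcha-like gate: the expected answer is
    # posted in plain sight on the site, so humans pass and blind bots fail.
    EXPECTED_ANSWER = "hume"

    def accept_submission(form_data: dict) -> bool:
        """Reject any submission whose challenge field doesn't match."""
        answer = form_data.get("challenge", "").strip().lower()
        return answer == EXPECTED_ANSWER

    print(accept_submission({"review_time": "6"}))                       # False
    print(accept_submission({"review_time": "6", "challenge": "Hume"}))  # True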

61 Responses to “Best Journal Survey Method So Far”

  1. Doug Portmore

    Is there any easy way to ask respondents which of the following categories they fall under and then to display the statistics according to each category?

    Category 1: Graduate Student
    Category 2: Faculty – Non-tenure-track position
    Category 3: Faculty – Tenure-track but non-tenured
    Category 4: Faculty – Tenured
    Category 5: Other

    One of the complaints about the Journals Wiki has been that it’s probably not representative of the overall statistics for a given journal, in that there is probably a much higher percentage of graduate students (as opposed to, say, tenured faculty) who use it. And those who fall in a given category are probably more interested in the statistics for their group than in the statistics for those in other groups or for the whole.

  2. Andrew Cullison

    Hi Doug,

    I can do that. I’ll change the Mind survey so we can test it out.

    Thanks. This is exactly the sort of feedback I’m looking for before I launch the rest of the surveys.

    UPDATE:
    I’m working on adding the first feature that Doug suggested. I’ll be adding a column of artificial status data to the data I imported from the journal wiki. This is merely to test how to crunch the numbers; I will of course remove it when the real thing starts.
    UPDATE:
    It worked. We can implement the feature that Doug suggested. Awesome!
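
    For anyone curious how the numbers get crunched, here’s a rough Python stand-in for the per-category breakdown (the live version uses spreadsheet formulas; the field names and numbers below are made up for illustration):

      # Group responses by respondent category, then summarize each group.
      from collections import defaultdict
      from statistics import mean

      responses = [
          {"category": "Graduate Student", "review_months": 6.0},
          {"category": "Faculty - Tenured", "review_months": 3.5},
          {"category": "Graduate Student", "review_months": 9.0},
      ]

      by_category = defaultdict(list)
      for r in responses:
          by_category[r["category"]].append(r["review_months"])

      for category, times in by_category.items():
          print(f"{category}: mean review time {mean(times):.1f} months")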

  3. Walter Riker

    re: time from acceptance to publication.

    E.g., Political Studies publishes papers online (“EarlyView”) before actually putting them in print.

    http://www3.interscience.wiley.com/journal/120121142/issue

    Is this practice becoming more common? Should the journal wiki track online vs. print publication? And if we pick one as the standard, which one?

    My inclination is to use the online pub date. I rarely look in print journals any more. I can find most of those articles online now.

  4. Carl

    Some people might appreciate stats for gender and race as well as grad/faculty/etc. It might also be interesting to add categories for academic staff and non-academic employment.

    Also, what happens if a vandal comes in and feeds in lots of clearly erroneous data? Do you have the ability, after the fact, to delete certain submissions at the back end (for example, from a particular IP address)? I’d hate to see you do all this work and have someone mess it up. This looks like a very helpful instrument, so thanks.

  5. Andrew Cullison

    Carl,

    Regarding the first point. That can be implemented.

    Regarding the second…The surveys will be password protected with a single global password that will be easily obtained by a human philosopher from the site. Bots should be kept out.

    If vandals are humans…I’ll be able to go back and delete data from the spreadsheet on my end.

  6. Doug Portmore

    I agree with Walter. What matters is when it’s first published, whether online or in print. If so, you might change “Time from Acceptance to Publication” to read something like the following: “Please enter the number of months from the time the paper was accepted to the time it was published (leave blank if your paper was not accepted).”

  7. Mark van Roojen

    Andrew,

    FWIW, there are cases of multiple revise and resubmits (don’t ask me how I know) and those might be hard to put into the format in a useful way as the survey is currently set up.

  8. Eric Winsberg

    That looks terrific, Andrew. Thanks for the fine work.

  9. S. Matthew Liao

    Andrew,

    This is great. Thanks for all your effort!

    I think that it’d be interesting to know both of the following where applicable:

    a) Time from Acceptance to Online First
    b) Time from Acceptance to Print Publication

    To address Mark’s comments, you might add the following questions:

    1) How many times did you have to revise and resubmit?
    2) Did your revision(s) get sent to the same or different referees?

    Finally, I’m not sure how well it would work, but it might be interesting to have a comment box in which people can add additional comments if they want to.

  10. Tom Polger

    This format is very useful.

    One suggestion: we might also want to have an “initial verdict” option of “withdrawn.” This could be relevant for journals that sometimes have very long turn-around times.

    I have also received more than three referee reports in the past. Maybe a 4+ category?

  11. Barry Lam

    Great work Andrew. I wanted to ask, what does R & R acceptance rate indicate? Does this indicate the percentage of people who receive a revise and resubmit initial verdict, or does this indicate the percentage of people who have papers eventually accepted after receiving a revise and resubmit initial verdict?

  12. Andrew Cullison

    Thanks for all of this helpful feedback so far. To answer some specific questions/comments:

    Doug and Walter,
    I concur. I’ll make changes to reflect that.

    Mark and Matthew,
    I should be able to work out something to gauge multiple R&Rs…also, each survey will have a comment thread like this one…I was thinking that people could place specific comments about the journal in that thread. It may be useful to have something in the survey, though…I’ll look into it.

    Tom,
    I like both of those ideas. I’ll add them to the survey.

    Barry,
    The R&R acceptance rate indicates the percentage of people who have papers eventually accepted after receiving an R&R on the initial verdict.

  13. Popkin

    I wonder if providing the recent average review time (say the previous six months or year) would be more helpful. I’m sure there’s a fair bit of variability at any given journal over time, and it seems to me that when it comes to estimating how long you’re going to have to wait when you submit something, what you’d most want to know is how the journal has been doing recently.

  14. Andrew Cullison

    Popkin,

    I think that can be arranged.
    UPDATE: I just tested this out. We can average the review time for whatever time range we’d like.
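
    Roughly, the computation looks like this (a Python sketch of the spreadsheet logic; the dates and field names are invented for illustration):

      # Average review time over a recent window only.
      from datetime import date, timedelta
      from statistics import mean

      rows = [
          {"submitted": date(2009, 1, 10), "review_months": 4.0},
          {"submitted": date(2009, 6, 2), "review_months": 2.5},
          {"submitted": date(2008, 3, 15), "review_months": 11.0},
      ]

      cutoff = date(2009, 7, 1) - timedelta(days=182)  # roughly six months back
      recent = [r["review_months"] for r in rows if r["submitted"] >= cutoff]
      print(mean(recent))  # 3.25: the 2008 submission is excluded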

  15. Ben Blumson

    This is awesome!

    One small thing: if the time from acceptance to publication is very long, it will be correspondingly long before it’s possible to enter all one’s data. Is there a way one could answer the earlier questions first, then return later to answer the later ones?

  16. Andrew Cullison

    Ben,

    I was actually just thinking about that. A person can fill out all of their details after the paper has been accepted and leave the time-to-publication field blank. When the paper is published, they can come back and fill out a new survey. If they leave everything else blank and just fill in the time-to-publication field, everything should update smoothly.

    I should include some sort of instructions at the beginning of the survey to let everyone know that’s an option.

    UPDATE: Just tested it out. That works!
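
    The merging logic amounts to something like this (a Python sketch, assuming blank fields never overwrite existing data; the field names are hypothetical):

      # A later partial submission fills gaps in the earlier record.
      def merge(first_pass: dict, second_pass: dict) -> dict:
          merged = dict(first_pass)
          for field, value in second_pass.items():
              if value not in ("", None):  # blanks never overwrite data
                  merged[field] = value
          return merged

      first = {"journal": "Mind", "review_months": 8, "months_to_print": ""}
      later = {"journal": "Mind", "review_months": "", "months_to_print": 14}
      print(merge(first, later))
      # {'journal': 'Mind', 'review_months': 8, 'months_to_print': 14}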

  17. johann tor

    Google Chrome here, and I get a “Fatal Error” that keeps your example from showing up.

  18. Richard Brown

    Hi Andrew, this is really great!!! Thanks for all the hard work!

  19. Andrew Cullison

    Hi Johann,

    I run another website called Sympoze. It’s like Digg for philosophers.

    Philosophers can put external vote buttons on their websites. Last night (around the time of your post) that site had some memory problems and was generating a “Fatal Error” message in those external buttons, which then showed up on the sites embedding them.

    I suspect the “Fatal Error” message was from that. Do you mind trying to access the example again?

  20. Doug Portmore

    The spreadsheet doesn’t display for me any more. It just says “loading…”, and Internet Explorer indicates that there’s an error on the page. In any case, I was thinking that we might want to know the median and range of time to initial verdict. I’d rather submit to a journal that consistently returns an initial verdict in about 3.5 months than to a journal that half the time returns an initial verdict in one month and half the time in six months. Both would, though, have the same average. Also, the median might be a better indicator if the journal occasionally returns a verdict in a couple of weeks, when, for instance, the paper doesn’t deserve to go out to external reviewers.
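
    To make the point concrete, here’s a quick worked version with made-up numbers (Python, for illustration only):

      # Two journals with the same mean but very different consistency.
      from statistics import mean, median

      journal_a = [3.0, 3.5, 3.5, 4.0]  # consistently about 3.5 months
      journal_b = [1.0, 1.0, 6.0, 6.0]  # half fast, half slow

      for times in (journal_a, journal_b):
          print(mean(times), median(times), max(times) - min(times))
      # Both means are 3.5; the range (1.0 vs 5.0) tells them apart.

      # A couple of quick desk rejections drag the mean down
      # while barely moving the median:
      journal_c = [0.5, 0.5, 4.0, 4.5, 5.0]
      print(mean(journal_c), median(journal_c))  # 2.9 vs 4.0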

  21. Andrew Cullison

    Doug (and Johann),

    I see the error now too. It looks like an issue with Google Docs. I’m looking into it.

  22. Andrew Cullison

    I think I fixed it.

  23. Doug Portmore

    It’s fixed.

  24. Preston Werner

    Andrew–
    This is an excellent idea. Much appreciated!

  25. Barry Lam

    Andrew (and others)

    What do you think of this idea: in addition to the average review time in months, have a Shortest and a Longest, so that people can see that a journal can take as little as, say, 3 months, and as long as, say, 22 months, with an average of 9 months.

  26. Doug Portmore

    Barry,

    I think that that’s a good idea.

  27. Andrew Cullison

    Barry and Doug,

    I’ve added a third tab to the spreadsheet. Right now it calculates and displays: average, median, mode, shortest, and longest review time.
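
    In Python terms, the new tab computes roughly the following (the live version uses spreadsheet formulas; the sample numbers are invented):

      from statistics import mean, median, mode

      review_months = [2, 3, 3, 5, 8, 12]
      print("average :", mean(review_months))    # 5.5
      print("median  :", median(review_months))  # 4.0
      print("mode    :", mode(review_months))    # 3
      print("shortest:", min(review_months))     # 2
      print("longest :", max(review_months))     # 12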

  28. Andrew Cullison

    An update regarding Popkin’s request for an average recent response time (e.g. an average of review time for the last 6 months). This can easily be done. I just tested the formula. I’ll put up an example here when I get a chance.

  29. Anonymous

    Many journals have completely online processes. I want to know which journals those are. So maybe add a box for “completely online submission”, “partially online”, “offline”, etc…

    Others know which journals are blind, double-blind, non-blind, etc… I’d like to know that. Add an “if you know this” polling box?

  30. johann tor

    No problems now for me as well. Very nice job!

  31. Brad

    Great idea, Andrew; this will definitely be widely used.

  32. Joachim Horvath

    Hi Andrew,

    this is really great, for I already loved the Journals Wiki even in its present and non-ideal form!

    Here’s a suggestion, though: often I want to know not only the average review time for a journal, but also how much the individual values deviate from that average, e.g. by adding the variance or standard deviation of all the values. Without that information, the average by itself does not tell me how likely it is that my own submission will be reviewed in an average time. In the Mind case, for example, my review time could be anything between 1.5 and 36 months. One can, of course, always go to the spreadsheet to get this kind of information, but it would still be nice to have one additional value that sums it up (at least to some degree).

    Here’s another suggestion in the same spirit: wouldn’t it also be nice to have improvement measures for review time and the other figures? Such a measure would tell us whether the review time of a given journal went up or down in the last few months or years, and to what degree.

    Finally, one could also try to implement a number of rankings for the most important measures, e.g. the top-ten journals in terms of average review time or quality of feedback.

  33. Andy

    Andrew, I can only echo the kudos of everyone else. This is a real service on your part.

    Did I correctly read your previous post to say that you personally will have to create a survey form for each individual journal? That would mean users cannot add new journals, as they can with the current wiki. Over time that could detract from the usefulness of the site. Is there any way around that?

  34. Andrew Cullison

    Joachim,
    I can do all of that.

    Andy,
    Right now, I would have to create a survey form for each journal. However, once I have a template survey/spreadsheet combo, I can generate a copy of it, rename it, dedicate it to a specific journal, embed everything in a page on my site, and it’s automatically added to the list. That process would only take me about 1 minute.

    There are already 90+ journals on the philosophy journal wiki, so I have a list of desired journals, and I’ll create those surveys as soon as I get the main template for the survey/spreadsheet combo hammered out. I imagine the need to add new journals after that will be minimal. If anyone wants a journal added, all they need to do is email me, and it will be up within a few hours. So the lost functionality seems like a negligible cost.

    Note: a survey only needs to be generated once for a journal and then embedded. Philosophers use the same survey over and over to submit data about that journal, so I don’t need to generate a new survey each time someone submits data; only once, when the journal is first added.

  35. John Turri

    Fantastic work, Andrew. Thanks so much for doing this!

    I had one suggestion (which I don’t think has yet been made, though perhaps I just missed it in the comment thread). It might be worth asking people what category the submission falls into, e.g. “metaphysics,” “epistemology,” “feminism,” “ethics,” “history of philosophy,” etc., which could then be used to generate statistics for submissions in each of these fields for the various journals.

  36. Richard Brown

    When I go to the “Journal Surveys” tab and click on ‘Mind’, there is a “#DIV/0!” error in the ‘overall experience’ field, FYI…

  37. Andrew Cullison

    John,

    I like that idea. What categories do people suggest? One concern is what to do with hybrid-category papers… Also, depending on how we carve up the categories, the list could get quite long.

    Richard,

    Fixed that error. Thanks for the heads up.
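
    For the record, the fix amounts to guarding the average against an empty column, something like this Python sketch (in the spreadsheet, the same idea is a conditional wrapped around the average formula):

      # '#DIV/0!' is what you get when averaging zero entries; guard for it.
      from statistics import mean

      def safe_mean(values):
          values = [v for v in values if v != ""]
          return mean(values) if values else None  # no data, no average

      print(safe_mean([]))      # None instead of a division-by-zero error
      print(safe_mean([4, 5]))  # 4.5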

  38. Barry Lam

    Andrew, easier said than done, but do be careful that you strike the right balance of informativeness and brevity. We would all love all kinds of information, including information about changes over time, but too cumbersome a survey form can be dissuasive, and we want ongoing contributions. For people who want information on changes over time, perhaps something easier to do is to drop data that is more than, say, 3 years old. This gives journals an incentive to improve, facilitates ongoing contributions to the spreadsheet, and focuses on the information we need for here-and-now decisions about where to submit.

  39. Jonathan Ichikawa

    Sorry I’m late with this thought — maybe it’s no longer useful, but: if we’re asking for time to print, doesn’t that mean we’re asking people not to fill in the form until much later than paper acceptance? The data will always be a year or two old…

  40. Andrew Cullison

    Hey Jonathan,

    Thanks. Ben had a similar concern here – http://www.andrewcullison.com/2009/07/best-journal-survey-method-so-far/comment-page-1/#comment-3476

    I noted that people can submit all of the review information and leave the time-to-print field blank. They can go back later and fill in the time-to-print field in a new survey; if they leave everything else blank, it won’t corrupt the data.

  41. Dan Speak

    Good Lord, Andrew! I stand amazed at your initiative, invention, skill, and good will in all of this. You are a frickin’ stud!! I have no suggestions other than that every one of us buy you a drink whenever we run into you in a bar at the APA.

  42. Andrew Cullison

    Thanks Dan! (…if we cross paths at the APA, I’ll hold you to that.)

  43. Drew Howat

    Excellent work, Andrew; let me add my sincere thanks for undertaking this exceptionally helpful project.

    I’d just like to second Carl’s request for race/gender data in the survey. There are good reasons to think some journals accept an unfairly small proportion/number of papers by women and minorities. This deserves to be a matter of public record – it’s difficult to imagine it ever changing otherwise.

    For the sceptical, there is a nice discussion of this issue in Sally Haslanger’s recent paper on Women in Philosophy:

    http://bit.ly/3cI57Q

    See also https://implicit.harvard.edu/implicit/ or Gladwell’s enjoyable book Blink.

    Keep up the good work.

  44. Gary H. Merrill

    I’ve just discovered this site as a result of trying to find some recent and reliable statistics concerning journal acceptance practices and rates. I think it’s quite a good idea, but at the moment it has — from my perspective — one fatal flaw.

    It appears to me that all the rates listed and the statistics calculated suffer from what statisticians call “reporting bias”. That is, the only information you have is the information that (unsolicited and untracked) contributors choose to provide. Unfortunately, this renders the results virtually meaningless — particularly given such a small number of reports (though it is perhaps safe to generalize on SOME of the numbers). There is, under the circumstances, absolutely NO way to avoid severe reporting bias, and (without some figures from the journal publishers themselves) no way even to determine the scope of the bias. The consequence is that none of the information provided for any journal can reasonably be considered even remotely accurate. (This is, of course, an obvious problem with the whole “wiki” or “volunteer” approach to surveys.)

    Do you have any plans for addressing this fundamental statistical (indeed epistemological) problem? Could you, for example, now that the site has been implemented, at least get information from the journal publishers, publish that, and employ it to “audit” the survey results and perhaps make some more reasonable inferences on that basis?

  45. Andrew Cullison

    Hi Gary,
    We have thought about that and I have had several discussions about how to rectify it. I’ll send you some of our ideas when I get a chance.

  46. Andrew Cullison

    It’s also worth noting that the data isn’t totally useless for comparison purposes… e.g., it might still track relative review times, which would be useful.

  47. Gary H. Merrill

    Ummm, strictly speaking, the data would track relative review times only if the average review time you calculate in fact matches, to some significant degree, the true average review time. So I’m afraid that, strictly speaking, this too falls to the reporting-bias problem.

    However, if one makes the ASSUMPTION that review times are in general enforced by editorial policy, then a small sample would be enough to draw a reasonably confident conclusion about average review time. It was, in fact, precisely this case that was the basis for my suggestion that some generalizations from the data might safely be made.

    Reporting bias is in fact a BIG problem in circumstances such as this survey. My guess (based on experience with similar contexts) is that the proportion of reports from successful submissions outweighs the proportion of reports from unsuccessful submissions. But again, in the absence of empirical data, that’s simply no more than a guess.

    (Aside: Reporting bias is such a problem in certain critical areas, such as drug safety, that innovative and complex techniques — involving “Bayesian shrinkage” — are used to estimate the denominator in such cases so that ratios may be claimed to have some meaning. However, this requires substantial amounts of data, there is a huge controversy about its success, and virtually everyone agrees that data not subject to reporting bias should be employed if at all possible. Significant numbers of scientists regard results based on these methods to be meaningless, the FDA aside.)

  48. Andrew Cullison

    You could be wildly off the true average and still track relative review times, as long as you had some reason to think the sample bias would push every journal’s reports off in the same direction (see the sketch at the end of this comment).

    You are right that successful submissions are reported more than unsuccessful submissions – we have already been comparing the data here to the data from other journals (per your suggestion).

    Another thing I’ve been looking at, by comparing some of the data to other journals, is relative review times. In the cases I have looked at so far, the relative review times here do seem to track the relative review times from actual journal reports.
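
    Here’s a tiny illustration of that point (made-up numbers, assuming a uniform bias factor):

      # If every journal's reported times are off in the same direction by
      # a similar factor, the ranking survives even though the absolute
      # numbers are wrong.
      true_times = {"A": 3.0, "B": 6.0, "C": 12.0}  # hypothetical true averages
      reported = {j: t * 0.5 for j, t in true_times.items()}  # all biased low

      def ranking(times):
          return sorted(times, key=times.get)

      print(ranking(true_times))                       # ['A', 'B', 'C']
      print(ranking(true_times) == ranking(reported))  # True: order preserved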

  49. Kate Padgett Walsh

    Great site! Can you add Journal of Idealistic Studies, please?

  50. Matt King

    I think this is an excellent resource. One thing that would be helpful, however, is a feature that would allow users to sort the journals. So, for instance, one could sort by average review time and get them from shortest to longest. Or by average usefulness of comments. Just the categories in the overview section of each journal’s page.

    I don’t know how feasible this is – but I do know it would be extraordinarily helpful, especially for younger scholars who, say, really need to get verdicts back quickly.

    Thanks!

  51. Ned Block

    Add Thought!

  52. Dale Miller

    Modern Schoolman?

    • Catherine Sutton

      Modern Schoolman is changing its name to Res Philosophica.

  53. Arianna Betti

    Great resource, thank you Andrew! Could you add
    History and Theory,
    Journal of the History of Ideas
    The Bulletin of Symbolic Logic
    Journal for the History of Analytic Philosophy
    Many thanks in advance!

  54. Nick

    Fantastic resource – thanks a lot Andrew!

    Any chance you could please add 2 more journals – Ethics, Policy & Environment and Environmental Philosophy?

    Thanks. N

  55. Chad McIntosh

    Three journals to consider adding:

    Ars Disputandi
    European Journal for Philosophy of Religion
    Philosophia Christi

  56. Thomas Carroll

    Could you add Teaching Philosophy?

    Many thanks for the website!

  57. Arnon Levy

    Hi,

    Thanks for putting up the journal surveys – a very helpful resource! I was wondering if you could add the European Journal for Philosophy of Science. (I tried to post this on the bottom of the page that lists the journals, but could not locate such an option).

    Thanks!

  58. Curtis L

    Found this through a blog article on AAPS. Love the database. Thanks a bunch for putting it together.

    Some ideas for more journals (I’m a religious ethics person, but that is very closely related to philosophy). Take them or leave them. 🙂

    Journal of the American Academy of Religion
    Journal of Religious Ethics
    Holocaust and Genocide Studies

    Again, thanks and keep it up!

  59. David Dick

    Dear Dr. Cullison,

    Thanks again for providing the discipline with this great resource.

    I’m posting to ask if you would add Ergo to the list of journals. I’ve just had a great experience with their editorial process and would like to record it in the survey.

    Thank you,
    David

  60. Joe Dewhurst

    Hi Andrew,

    Thanks for setting up this excellent resource. Would it be possible to add the journal Philosophy and Technology? (Relatedly, on the main survey page it says to ask “here” if someone wants a journal added, but it was unclear where “here” referred to. Small point, but maybe worth dealing with…)

    Best wishes,
    Joe Dewhurst

