Ok. I am tired and the data output is kind of hard to work with on the fly. So, some things:
1. Treat the numbers that I report below with a *huge* grain of salt. There's a very high risk of human error in my data manipulation; there's no way I would report these officially, and the process of compiling them for an official report might produce very different results. And yes, it does look as if two years (2011-2012 and 2012-2013) had the same number of unique referees.
2. The best test isn't 'how many uniques agreed' but 'how many uniques were asked.' This data isn't available for the prior team because, AFAICT, they asked people via email before issuing a formal invite, so there are no "declines" in the system.
3. I'm reporting data by 'journal years' (Oct 1-Sep 30) and only for 2010 on. I don't have easy access to the relevant data from before ISQ moved to ScholarOne, and my understanding is that the data remained spotty in the system until a few years ago.
4. I include the average number of unique referees per manuscript put under review. This has some issues (see below), but it also adjusts for desk rejections. (I sketch the calculation after the table below.)
Year          Unique Refs    Ms. Reviewed    'Per Ms.' Unique
2010-2011     709            329             2.155
2011-2012     653            340             1.920
2012-2013     653            319             2.047
2013-2014     589            264             2.231
2014-2015*    510            212             2.405

* Through 5/31.
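For what it's worth, here's roughly how I'd script the per-year tallies from a flat export. This is a sketch only: the CSV layout and the column names (reviewer_id, ms_id, review_date) are hypothetical stand-ins, not the actual ScholarOne fields.

```python
# Sketch of the per-journal-year tallies, assuming a flat CSV export with
# one row per completed review. Column names are hypothetical.
import csv
from collections import defaultdict
from datetime import date

def journal_year(d):
    # Journal years run Oct 1 - Sep 30; e.g. 2010-11-15 falls in "2010-2011".
    start = d.year if d.month >= 10 else d.year - 1
    return f"{start}-{start + 1}"

refs = defaultdict(set)  # journal year -> reviewer ids seen that year
mss = defaultdict(set)   # journal year -> manuscript ids reviewed that year

with open("reviews.csv", newline="") as f:
    for row in csv.DictReader(f):
        jy = journal_year(date.fromisoformat(row["review_date"]))
        refs[jy].add(row["reviewer_id"])
        mss[jy].add(row["ms_id"])

for jy in sorted(refs):
    uniq, n_ms = len(refs[jy]), len(mss[jy])
    print(f"{jy}  unique refs: {uniq}  ms reviewed: {n_ms}  per ms.: {uniq / n_ms:.3f}")
```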
So, what do we know? The number of unique referees drops from '10/'11 through '13/'14. The number of 'per ms.' unique reviewers is higher in '13/'14 and '14/'15 than in previous years, but that may simply reflect more manuscripts with three referees; I can check tomorrow. I suspect what these numbers reflect is that ISQ editors seldom went to the same reviewer more than once in a year.
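That three-referee check should be easy to script too; a minimal sketch, reusing the hypothetical export and the journal_year() helper from above:

```python
# Count manuscripts with three or more completed reviews per journal year.
# Same hypothetical CSV layout as the sketch above.
import csv
from collections import Counter, defaultdict
from datetime import date

per_ms = defaultdict(Counter)  # journal year -> (ms_id -> completed-review count)

with open("reviews.csv", newline="") as f:
    for row in csv.DictReader(f):
        jy = journal_year(date.fromisoformat(row["review_date"]))  # helper from the sketch above
        per_ms[jy][row["ms_id"]] += 1

for jy in sorted(per_ms):
    three_plus = sum(1 for c in per_ms[jy].values() if c >= 3)
    print(f"{jy}: {three_plus} of {len(per_ms[jy])} manuscripts had 3+ referees")
```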
I know that at least one person has essentially accused us of using a less diverse (or more 'clubby') pool than past teams. I don't know if that's the intent behind Jan's question. However, I want to make clear that this data doesn't answer that question. It is perfectly possible that we used a higher proportion of unique referees but still drew from a more closed network of reviewers.
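If anyone did want to get at the 'clubbiness' question, year-over-year overlap of referee pools would come closer than unique counts. A rough sketch, under the same hypothetical-export assumptions:

```python
# Share of each journal year's referee pool that also appears in the
# previous year's pool. High 'per ms.' unique numbers are perfectly
# compatible with high overlap here, which is the closed-network worry.
import csv
from collections import defaultdict
from datetime import date

pools = defaultdict(set)  # journal year -> referee pool for that year

with open("reviews.csv", newline="") as f:
    for row in csv.DictReader(f):
        jy = journal_year(date.fromisoformat(row["review_date"]))  # helper from the first sketch
        pools[jy].add(row["reviewer_id"])

years = sorted(pools)
for prev, cur in zip(years, years[1:]):
    overlap = len(pools[cur] & pools[prev]) / len(pools[cur])
    print(f"{cur}: {overlap:.1%} of referees also reviewed in {prev}")
```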
And, I repeat, this data may be incorrect.
HTH.