Institutions and teams, not ‘superstar’ academics, must be at the heart of the system, says Peter Kolarz
The competitive research funding system is at breaking point. Innovations to address its ongoing problems are needed on a grander scale than ever before, but even these will not suffice to fix a struggling system. What we need is a whole-system transformation.
The most critical problem in this creaking research-funding and proposal-assessment machine is the growing peer-review burden. Besides the strain on researchers’ productivity, funders are struggling to find enough reviewers. Peer review is also an imperfect arbiter of quality: the commonly practised process of external peer review followed by panel review does well at identifying substandard work, as well as the very highest peaks of quality, but it tends to produce arbitrary outcomes in the upper midfield of the quality spectrum. Because radical, ‘breakthrough’ ideas struggle to garner positive consensus from multiple reviewers, they lose out to more incremental, ‘safe’ research proposals. It is also unclear how well suited peer review is to identifying and rewarding the potential societal use and relevance of proposed projects.
The rise of AI also poses challenges. Writing an application can become less time-consuming, as parts of it can be automated. However, the additional pressure this places on the system may also incentivise unethical use of AI throughout the process, from writing to reviewing applications and even to ranking and decision-making itself. The result could be a profound legitimacy crisis for the entire funding system.
Making modifications in decision-making
There are, in short, ample reasons for significant reform of the international competitive research funding system. Much is already being done: funders have trialled and implemented modifications to standard selection and decision-making processes, as detailed in a 2023 report. Some of these are simple tweaks already in use, such as short pre-proposals, shortened application sections or briefings for panellists on how to apply criteria. But more radical interventions have also been trialled, and since the 2023 report the evidence of their efficacy has mounted further. It is time to move beyond experimentation and towards wider rollout.
Two such interventions are distributed peer review (DPR) and partial randomisation. DPR involves applicants to a scheme also acting as reviewers on that scheme. Evidence suggests gaming is remarkably rare and can be mitigated by splitting applicants into two groups, where ‘pot A’ applicants only review those in ‘pot B’ and vice versa, as sketched below. Especially in large calls, or those with a relatively limited thematic remit, the applicants themselves typically provide a suitably broad base of expertise. Aside from removing the need to recruit external reviewers, current evidence from various trials suggests DPR shortens the time-to-grant. DPR is also fully scalable and can improve reviewer diversity. It also provides a degree of demand management, as it discourages serial speculative resubmission of already-written applications.
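To make the pot-splitting mechanism concrete, here is a minimal sketch in Python. Everything in it is illustrative: the function name, the fixed number of reviews per applicant and the purely random assignment are assumptions, and a real scheme would add conflict-of-interest checks and expertise matching.

```python
import random

def assign_distributed_reviews(applicants, reviews_each=3, seed=None):
    """Split applicants into two pots; each applicant reviews proposals
    drawn only from the opposite pot, so no one scores a proposal in
    their own pot, reducing the incentive to game the rankings.
    (Illustrative sketch only, not any funder's actual procedure.)"""
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)
    mid = len(pool) // 2
    pot_a, pot_b = pool[:mid], pool[mid:]

    assignments = {}
    for reviewers, proposals in ((pot_a, pot_b), (pot_b, pot_a)):
        for reviewer in reviewers:
            # Sample a handful of the other pot's proposals for this reviewer.
            k = min(reviews_each, len(proposals))
            assignments[reviewer] = rng.sample(proposals, k)
    return assignments
```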
Partial randomisation, or ‘lottery’, involves selecting proposals at random from a subset of applications that have already passed a minimum-quality threshold. It is an increasingly common way to reduce the effect of bias, enable the funding of riskier, more novel ideas and reduce burden in the decision-making process. While some funders apply it across larger selections of applications, it is commonly used as a tie-breaker for a small set of applications that are all of fundable quality when the budget cannot cover them all. Although it is usually a modest process change, evidence of its benefits is now substantial and further rollout is advisable.
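A minimal sketch of the tie-breaker variant follows, assuming proposals have already been scored. The names `scores`, `threshold` and `budget_slots` are hypothetical placeholders rather than any funder’s actual process.

```python
import random

def partial_randomisation(scores, threshold, budget_slots, seed=None):
    """Fund every proposal at or above the quality threshold when the
    budget allows; otherwise draw the awards at random from the pool
    of fundable proposals. `scores` is assumed to map a proposal id
    to its panel score."""
    rng = random.Random(seed)
    fundable = [pid for pid, score in scores.items() if score >= threshold]
    if len(fundable) <= budget_slots:
        return fundable  # no lottery needed: all fundable proposals fit the budget
    # A random draw among equally fundable proposals acts as an unbiased tie-breaker.
    return rng.sample(fundable, budget_slots)
```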
Experimental interventions
Other interventions have not yet been as rigorously tested, so more experimentation is needed, with a view to rapid rollout if positive findings emerge. One recent study suggested that randomisation before review may provide substantial relief from the current system overload. Programmes with exceptionally high demand might introduce a sign-up system in which invitations to apply are allocated at random among registrants, ensuring the system is not overloaded. This approach could be introduced alongside more established demand-management mechanisms, such as prohibiting multiple applications per person or within a certain timeframe. The same study suggests that ‘lottery before review’ could increase participation from women and reduce cost.
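As a rough illustration of how such a sign-up lottery could combine with a resubmission bar, consider the sketch below. The function name and parameters are invented for illustration; `capacity` stands in for whatever application volume the reviewer pool can absorb.

```python
import random

def lottery_before_review(registrants, capacity, recent_applicants=(), seed=None):
    """Randomly invite a capped number of registrants to submit a full
    application, after excluding anyone barred by a recent-application
    rule. The draw happens before any peer review takes place."""
    rng = random.Random(seed)
    barred = set(recent_applicants)
    eligible = [r for r in registrants if r not in barred]
    if len(eligible) <= capacity:
        return eligible  # demand is within capacity: invite everyone eligible
    return rng.sample(eligible, capacity)
```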
These interventions will bring relief to an overburdened system and address other challenges around peer review. However, they will not solve the problems completely, because they still operate within a hyper-competitive grant-funding environment, in which researchers depend on grant income for job security in an increasingly precarious research landscape.
We need to consider strengthening institutions, teams and groups as the main formative units of how science is funded, rather than placing ‘superstar’ academics and their grant-winning skills at the heart of the academic enterprise. This may mean a move towards greater shares of institutional funding, or even towards more mission-based institutes.
Likewise, there is a need to remove perverse system incentives, such as the use of grant income as a metric in individuals’ performance management. Critical to any such landscape shift will be the work of initiatives such as Dora and CoARA, which push for responsible assessment practices and healthy incentive structures.
There will likely always be a role for grant funding. Where direct competition between ideas makes sense, process experimentation and the rollout of innovative funding approaches are the way forward. But the recent exacerbation of challenges around peer review means that person-centred, application-based competitive funding can no longer be the central driver and shaper of how the research world works.
Peter Kolarz is head of programmes at the Research on Research Institute. This is an abridged version of an article that first appeared in PLOS Biology.
