New Things Under the Sun

Can taste beat peer review?

April 24, 2023 Matt Clancy Season 1 Episode 44

Scientific peer review is widely used as a way to distribute scarce resources in academic science, whether those are scarce research dollars or scarce journal pages. At the same time, peer review has several potential shortcomings. One alternative is to empower individuals to make decisions about how to allocate scientific resources. Indeed, we do this with journal editors and grant makers, though generally in consultation with peer review.

Under what conditions might we expect individuals empowered to exercise independent judgement to outperform peer review?

This podcast is an audio read through of the (initial version of the) article "Can taste beat peer review?", originally published on New Things Under the Sun.

Articles mentioned
Wagner, Caroline S., and Jeffrey Alexander. 2013. Evaluating transformative research programmes: A case study of the NSF Small Grants for Exploratory Research programme. Research Evaluation 22 (3): 187–197. https://doi.org/10.1093/reseval/rvt006

Goldstein, Anna, and Michael Kearney. 2017. Uncertainty and Individual Discretion in Allocating Research Funds. Available at SSRN. https://ssrn.com/abstract=3012169 or http://dx.doi.org/10.2139/ssrn.3012169

Card, David, and Stefano DellaVigna. 2020. What Do Editors Maximize? Evidence from Four Economics Journals. The Review of Economics and Statistics 102 (1): 195–217. https://doi.org/10.1162/rest_a_00839

Teplitskiy, Misha, Hao Peng, Andrea Blasco, and Karim R. Lakhani. 2022. Is novel research worth doing? Evidence from peer review at 49 journals. Proceedings of the National Academy of Sciences 119 (47): e2118046119. https://doi.org/10.1073/pnas.2118046119

Hello and welcome to New Things Under the Sun. I'm Matt Clancy. This week's podcast: can taste beat peer review?

So, as we know, scientific peer review is widely used as a way to distribute scarce resources in academic science, whether those are scarce research dollars or scarce journal pages. And peer review is, on average, actually predictive of the eventual scientific impact of research proposals and journal articles, though not super strongly. If you wanna learn more about that, there's a link in the newsletter to a previous podcast and newsletter I've written on that topic.

You know, in some sense it's not actually that surprising that peer review is predictive of eventual scientific impact, cuz most of our measures of impact are, to some degree, about how the scientific community perceives the merits of your work. Do they wanna let it into a journal? Do they wanna cite it? It's not surprising that polling a few people from a given community is, you know, mildly predictive of that community's views.

At the same time, peer review has several shortcomings. Multiple people reading and commenting on the same document is always gonna cost more than having just one person do it. Current peer review practices provide little incentive to do a really great job at peer review. And, as I've discussed in other newsletters and podcasts, peer review could lead to biases against riskier proposals. So one alternative to all this is to empower individuals to make decisions about how to allocate scientific resources. And indeed, we actually do do this with journal editors and grant makers, although generally in consultation with peer review. And so what I wanna ask this week is: under what conditions might we expect individuals empowered to exercise independent judgment and discretion to outperform pure peer review?

So to begin, while peer review does seem to add value, it doesn't seem to add a ton of value. For example, at the NIH, top-scoring proposals aren't that much better than average-scoring proposals that still get funded, in terms of their eventual probability of leading to a hit scientific discovery. And again, there's a link to some of that literature in the newsletter. So, first, maybe individuals selected for their scientific taste can just do better, in the same way, you know, some people seem to just have an unusual knack for forecasting.

Second, peer reviewers are only really accountable for their recommendations insofar as they affect their professional reputations, and often they're just anonymous except to a journal editor or maybe a grant program manager. So that doesn't really lead to strong incentives to try and really pin down the likely scientific contribution of a proposal or article. To the extent that it's possible to make better judgments by exerting more effort, we might expect better decision making from people who have more of their professional reputation on the line, such as editors and grant makers, or who are otherwise incentivized to try to really get this right.

Third, the very process of peer review could lead to risk aversion along a couple different pathways that I discuss in another article linked in the newsletter. I keep referring to these other linked articles, but most of them are the same article anyway. Individual judgment relying on a different process might be able to avoid some of these pitfalls of peer review, at least if taking risks is aligned with professional incentives. Alternatively, it could be that a tolerance for risk is just a rare trait in individuals, and so most peer reviewers are just risk averse cuz people are risk averse. If that's the case, a grant maker or a journal that wants to take risks could do so by seeking out the rare risk-loving individuals and putting them in decision-making roles.

Lastly, another feature of peer review is that most proposals or papers are evaluated independently of each other, but it may make sense for a grant maker or a journal to adopt a broader portfolio-based strategy for selecting science, sometimes elevating projects with lower scores if they fit into some broader strategy. For example, maybe a grant maker would wanna support, in parallel, a variety of distinct approaches to a problem to maximize the chances that at least one of them will succeed. Or maybe they will wanna fund mutually synergistic scientific projects, even if individually some of them are not as strong on peer review as others.

So turning to some papers now, we've got a bit of evidence that empowered individual decision makers can indeed offer some of these advantages, although usually in consultation with peer review. To start, Wagner and Alexander 2013 is an evaluation of the NSF's Small Grants for Exploratory Research program. This program, which ran from 1990 to 2006, allowed NSF program managers to bypass peer review and award small, short-term grants of up to $200,000 over two years. It was superseded later by other programs that do similar things.

Proposals under the SGER program were short, just a few pages. They were made in consultation with the program manager, but with no other external review, and they got processed fast. The idea was to provide a way for program managers to fund risky and speculative research projects that might not have made it through normal peer review. Over its 16 years, the SGER, or, as it's called, "sugar" program dispersed $284 million via about 5,000 awards. Wagner and Alexander argue that the sugar program was a big success. By the time of their study, about two-thirds of the sugar recipients had used their results to apply for larger grant funding from the conventional NSF programs, and of those that applied, about 80% were successful in their application, at least among those who had received a decision by the time of the study. They also specifically identified a number of spectacular successes where sugar provided seed funding for highly transformative research, as judged from a survey of SGER awardees and program managers, and they also did a citation analysis to double-check.

Indeed, Wagner and Alexander's main critique of the program is that it wasn't used enough. Up to 5% of agency budgets were allowed to be allocated to the program, but a 2001 study (different from theirs in 2013) found that only about 0.6% of the budget actually was allocated to this program. Wagner and Alexander also argue that, by their criteria, around 10% of funded projects were associated with transformational research, whereas a 2007 report by the NSF suggests research should be transformational about 3% of the time. That suggests maybe program managers were not taking enough risks with this program. Moreover, in a survey of awardees, 25% of those who won one of these grants said an extremely important reason for pursuing an SGER grant was that their proposed research idea would be seen as either too high risk, too novel, too controversial, or too opposed to the status quo for a peer review panel. 25% is a large fraction of people who won the awards, but it's not a majority. Although, to be fair, that's people who rated these reasons as extremely important; we don't know how many rated them as just important or something else. Again, maybe the high-risk program is not taking enough risk.

In general, though, the SGER program's experience at least seems consistent with the idea that individual decision makers can do a decent job supporting less conventional research.

Goldstein and Kearney 2018 is another look at how well discretion compares to peer review, this time in the context of the Advanced Research Projects Agency-Energy, or ARPA-E. ARPA-E does not function like a traditional scientific grant maker, where most of the money is handed out to scientists who independently propose projects for very broadly defined research priorities or research areas.

Instead, ARPA-E is composed of program managers who are goal oriented: they're seeking to fund research projects in the service of overcoming specific technological challenges. Proposals are solicited and get scored by peer reviewers along several criteria on a five-point scale, but program managers are really autonomous and don't just defer to what the peer reviewers say. Instead, they decide what to fund in terms of how the proposals fit into their overall vision. Indeed, in interviews conducted by Goldstein and Kearney, program managers report that they explicitly think of their funded proposals as constituting a portfolio, and they will often fund diverse projects to better ensure at least one approach succeeds, rather than just picking the highest scoring proposals.

Goldstein and Kearney have data on 1,216 proposals made up through the end of 2015, and they wanna see what kinds of projects program managers select, and in particular how they use their peer review feedback. Overall, they find proposals with higher average peer review scores are more likely to get funded, but the effects are pretty weak, explaining about 13% of the variation in what gets funded. In the newsletter there's a figure, which you can't see here, showing the average peer review scores for 74 different proposals to the Batteries for Electrical Energy Storage in Transportation program; so this is, you know, electric batteries for, I guess, electric cars. In this diagram showing the average scores, you've got indicators for which ones got funded, and it's just clear visually that a lot of proposals with scores well outside the top got funded.

So what do ARPA-E managers look at besides the average peer review score? Goldstein and Kearney argue that they're very open to proposals with highly divergent scores, so long as at least one of the peer review reports is very good. We have another figure, also showing proposals to the battery program, but instead of ordering and displaying them by average peer review score, now we're gonna look at them by their maximum peer review score. When you do that, and again we have little indicators for which proposals were funded, now we're seeing the funded proposals cluster around the highest scores. It's not that common to find proposals that get funded if even the maximum score is not very high. And this is true beyond just this battery program, which is just a nice example because you can make a nice figure out of it: across all 1,216 project proposals, for a given average score, the probability that you were funded by ARPA-E is higher if the proposal received a wider range of peer review scores. Goldstein and Kearney also find proposals are more likely to be funded if they're described as being creative by peer reviewers, even after you take into account the average peer review score.

Now, ARPA-E was first funded in 2009, and this study took place in 2018 using proposals made up through the year 2015.
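To make that selection pattern concrete, here's a little toy simulation in Python. Everything in it is made up for illustration: the 1-to-5 scores, the four-reviewer panels, and the "fund anything with one enthusiastic reviewer" rule are my assumptions, not ARPA-E's actual procedure. It just shows how a max-score rule makes funding line up only loosely with average scores.

```python
import random
import statistics

random.seed(0)

# Hypothetical data: each proposal gets 4 reviewer scores on a 1-5 scale.
proposals = [[random.uniform(1, 5) for _ in range(4)] for _ in range(1000)]

# A toy "program manager" rule in the spirit of the pattern above: fund
# anything with at least one enthusiastic reviewer (score >= 4.5), even if
# the other reviewers disagree.
funded = [max(scores) >= 4.5 for scores in proposals]

def corr(xs, ys):
    # Pearson correlation between a numeric feature and the 0/1 funding flag.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

means = [statistics.mean(s) for s in proposals]
maxes = [max(s) for s in proposals]
flags = [1.0 if f else 0.0 for f in funded]

# Funding lines up much more tightly with the maximum score than with the
# average score under this rule.
print(corr(means, flags))
print(corr(maxes, flags))
```

The point of the sketch: under a rule like this, average scores still predict funding somewhat, since a high maximum drags up the mean, but much more weakly than the maximum does, which is loosely the pattern described above.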

So there hasn't been a ton of time to assess how well the program has worked, but Goldstein and Kearney do an initial analysis to see how well projects turn out when program managers use their discretion to override peer review. To do this, they divide the 165 different funded proposals into two groups: those with average peer review scores high enough to ensure they would've been funded if the program managers just completely deferred to peer review, and those that were funded in spite of peer review scores below this threshold. They find, in general, no evidence that the proposals where program managers overrode peer review are any less likely to result in a journal publication, a patent, or market engagement, which is a thing that ARPA-E tracks. And I think that's notable because, as I've alluded to in earlier work that I've summarized, higher peer review scores are usually correlated with things like more publications or more patents, although it's a noisy indicator, and that sometimes doesn't show up when you only have, you know, less than 200 pieces of data.

Now, we also have some studies on how journal editors mix peer review with their own discretion when deciding which papers to publish. Card and DellaVigna 2020 have data on nearly 30,000 submissions to four top economics journals, as well as the recommendations of the peer review reports.

Because in economics it's quite common for draft versions of papers to be posted in advance of publication, Card and DellaVigna can see what happens to papers that are accepted or rejected from these journals, including how many citations they go on to receive. They could get citations even as just a draft paper, which is pretty common in economics, or they might get published somewhere else, and then we can count the citations that went to the published version or the draft version.

Now, one thing we can do with this data is see if empowered decision makers, in this case the editors of the journals, can improve on peer review. One way we can test that is to compare the fates of submissions that get similar peer review scores, but where one of the papers was rejected by the editor and the other was allowed to proceed through the revise-and-resubmit process. We can then see how the citations ultimately received by these submissions vary. If peer review can't be improved on, then we shouldn't expect there to be much of a difference in citations between the articles the editor rejects and the ones they allow to proceed. But if the editor has some ability to spot high-impact research above and beyond what's in the peer review reports, then we should expect the submissions receiving a revise and resubmit to outperform the papers that were just out-and-out rejected.

And there's a figure below that you can't see, so it's not really below. On the horizontal axis of this figure, we have the estimated probability that a submission to one of these journals, given its peer review scores plus some other information, like the number of authors and the publication track record of the author, receives a revise-and-resubmit decision. On the vertical axis of this chart, we've got a measure of how many citations that submission eventually receives. All the lines slope up, so submissions with higher peer review scores tend to get more citations. But what's interesting for us today is the gap between two of the lines. On one of these lines, we've got the citations received by papers that the editor sent out for a revise and resubmit, and on the other line, the citations received by papers the editor didn't send out. And there's a pretty big gap between those two lines. What that means is that among submissions with very similar peer review scores, those the editor thought merited a revise and resubmit tended to receive a lot more citations than those that did not. So at least for these journals, which try to identify high-impact economics research, the editors seem to have the ability to spot something that peer reviewers missed.

Now, you may be thinking this is just picking up something else, that this is a completely fallacious interpretation. Maybe what's going on here is that if an editor sends a paper out for a revise and resubmit, it's a lot more likely that the paper gets published in one of these top four journals, and that's the reason it gets more citations: because it's in a prestigious journal. Well, the paper is really aware of this and spends a lot of its energy trying to model this effect. The idea that they exploit is that you get randomly assigned to different editors, and different editors have different levels of stringency. So if you get assigned to a lenient editor, you're more likely to get your work published, and if you have a similar paper but are assigned to a strict editor, you're less likely to get your work published. They try to use that to infer the value, for otherwise equivalent papers in terms of what the peer reviewers think, of just getting published in one of these journals. And they find the extra citations that can be attributed purely to being published in one of these top journals are pretty small: no more than half the effect implied by this figure.

Moreover, there's another component of the study which compares desk-rejected papers to papers that were rejected after being sent out for peer review. And in that case too, we see the editor's decision making was right: the stuff they desk rejected tends to get fewer citations than the stuff they eventually reject after review. In this case we can't compare it to peer review, but still, that's at least comparing rejected to rejected and showing editor taste is somewhat informative.

Lastly, another paper that lets us compare peer review and editor decision making is Teplitskiy et al. 2022, which has data on thousands of submissions to Cell, Cell Reports, and the journals of the Institute of Physics Publishing, with information on submissions between 2013 and 2018. Teplitskiy and co-authors are more interested in how novelty is judged by journals, rather than the number of citations papers ultimately receive. So to do this, they've gotta measure novelty, and to measure novelty they use a common measure based on the citations an article makes. For every pair of journals cited by the submission, they create a measure of how atypical it is for those journals to be cited together, based on the frequency with which they've been cited together in the past, relative to what you would expect just due to random chance. They then order these journal pairings from the least to most typical and use the atypicality score at the 10th percentile as their measure of how novel a paper is. Basically, this measure zooms in on the most surprising combinations of journals cited and uses those as a proxy for how novel the overall article is. You know, a submission whose most unusual combination of cited references is Cell and Cell Reports is probably not very novel; it's probably pretty common for papers to cite both of those journals. But a submission that cites both Cell and the Journal of Economic History? That's not a combination you typically see, and they take that as a sign that this is a more novel paper.

So in one analysis, Teplitskiy and co-authors break the publication pipeline into three steps: the decision to send a paper out for peer review, rather than just desk reject it; the peer review recommendation (the reviewers don't get to decide, but they can recommend reject, revise, or accept); and then the final stage, the ultimate decision whether to publish the paper after you've got peer review. At each of these stages, Teplitskiy and co-authors look to see how the novelty of the submission affects its progression through the publication pipeline, where submissions are ranked from least to most novel.

At the desk review stage, where the editor is making a decision about whether to send this out for peer review at all, they find, across all the journals, the most novel submissions, that is, those citing the most unusual set of references, are more likely to be sent out for peer review. At the peer review stage, we have the recommendation of the peer reviewers to reject, or issue a revise and resubmit, or, maybe rarely, just accept the paper. And now we have a more variable sort of result. At Cell, peer reviewers are basically indifferent to novelty: the probability that you get a revise and resubmit is the same whether you have a really novel paper or a less novel paper. However, at the Institute of Physics journals, peer reviewers seem to prefer more novel papers; they're more likely to recommend a revise and resubmit or an acceptance, compared to less novel papers. (There was also, as I said, Cell Reports, but they don't have data on the peer review reports for that journal.) Finally, we've got the editor's decision to accept or reject for publication, and now we're gonna see how novelty matters holding fixed the peer review recommendations. Once again, we find the results are kind of variable. At Cell, among submissions with similar peer review recommendations, journal editors are much more likely to accept a paper if the paper is more novel. But at the Institute of Physics, the editors don't seem to really care at this stage: while the peer reviewers like novel papers, the editors basically look at the peer review scores, and whether a paper is more novel or less novel doesn't make any additional difference.

All in all, Cell seems to demonstrate how an editor with a taste for novelty can boost the prospects of novel research relative to, in this case, peer reviewers who are indifferent to novelty. But I'd say at best this is sort of an existence proof. We don't really see the same dynamics in the other set of journals studied, so we can't say that this is a general thing.
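As an aside, that journal-pair novelty measure can be sketched in a few lines of Python. This is a rough illustration, not the paper's actual implementation: the journal names and citation counts are invented, and the smoothed observed-over-expected ratio is a simple stand-in for the more careful atypicality calculation, which, as I understand it, compares against randomized citation networks.

```python
from itertools import combinations

# Hypothetical corpus statistics (all numbers made up for illustration):
# total citations to each journal, and co-citation counts for journal pairs.
cites = {"Cell": 1000, "Cell Reports": 800, "J. Econ. History": 200}
cocites = {
    frozenset({"Cell", "Cell Reports"}): 400,
    frozenset({"Cell", "J. Econ. History"}): 1,
    frozenset({"Cell Reports", "J. Econ. History"}): 1,
}
total = sum(cites.values())

def commonness(j1, j2):
    # Observed co-citations relative to what random pairing would predict.
    # Low values mean the pairing is atypical (surprising); the +1 smoothing
    # is an assumption to avoid dividing by zero for unseen pairs.
    expected = cites[j1] * cites[j2] / total
    observed = cocites.get(frozenset({j1, j2}), 0)
    return (observed + 1) / (expected + 1)

def novelty_score(cited_journals, pct=0.10):
    # Score every pair of journals the submission cites, sort so the most
    # surprising (lowest-commonness) pairs come first, and report the value
    # at the 10th percentile. Lower = more novel, mirroring the
    # "atypicality at the 10th percentile" idea.
    scores = sorted(commonness(a, b) for a, b in combinations(cited_journals, 2))
    idx = int(pct * (len(scores) - 1))
    return scores[idx]

# A paper pairing Cell with economic history looks far more novel (lower
# commonness) than one citing the usual Cell + Cell Reports combination.
print(novelty_score(["Cell", "Cell Reports"]))
print(novelty_score(["Cell", "Cell Reports", "J. Econ. History"]))
```

With these invented numbers, adding the Journal of Economic History citation sharply drops the 10th-percentile commonness score, flagging the submission as more novel.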

So, to sum up: at the NSF, a program that allows managers to use their discretion to bypass peer review and fund riskier research seems to have worked pretty well, though it was perhaps underused, and we don't have a nice experimental design where we could compare it to other programs. At ARPA-E, we can see that program managers regularly pass on proposals with high average peer review scores, perhaps in favor of proposals that are creative or have at least one enthusiast, and which perhaps better satisfy the needs of their own research portfolio. And at present, we don't really see that these ARPA-E program managers face any penalty for taking this discretionary approach. Meanwhile, on the journal side, we've got some evidence in economics that editors have some skill at selecting higher impact research from papers with similar peer review scores, and in one major biology journal, editors seem to use their discretion to select more novel research relative to the peer reviewers, though we don't see this in another set of journals.

So the main thing that I take away from all this is that the individuals who we empower to make the ultimate decisions about the allocation of scientific resources could matter a lot. But at the same time, I don't think we know much about them, compared to, for example, what we know about the individuals who allocate resources in the rest of the economy: that is, the bankers, the traders, the venture capitalists, and so on.

Questions we could ask are, like: What are the incentives faced by our allocators of scientific resources? How are these guys selected? What kinds of feedback do they get on their performance? Would feedback matter? Do successful allocators share certain traits, whether in terms of their professional background or just their personality? How good are these guys at forecasting outside of science? And so on. It just seems like there's a lot that we could still learn, and it could matter.