BJKS Podcast

73. Tom Hostler: Open science, workload, and academic capitalism

June 23, 2023

Tom Hostler is a senior lecturer at Manchester Metropolitan University. In this conversation, we focus on his recent article on the increased workload caused by open science.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith.

Support the show:

0:00:00: Start discussing Tom's paper 'The Invisible Workload of Open Research'
0:29:22: Does open science actually increase workload?
0:44:26: How open science changes the research process
0:54:02: Are open science requirements especially time consuming for labs without lots of funding?
1:01:44: What are the most effective open science practices?
1:06:31: Book or paper Tom thinks more people should read
1:09:39: Something Tom wishes he'd learnt sooner
1:13:32: Tom's advice for PhD students and postdocs

Podcast links

Tom's links

Ben's links

Aczel, Szaszi, Sarafoglou, et al. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4, 4–6.
Bozeman, Youtie & Jung (2021). Death by a thousand 10-minute tasks: Workarounds and noncompliance in university research administration. Administration & Society.
Costantini, Cordero, Campbell, … Pearson (2021). Mental Health Intergenerational Transmission (MHINT) Process Manual.
Dienes (2008). Understanding psychology as a science: An introduction to scientific and statistical inference. New York: Palgrave Macmillan.
Forscher, Wagenmakers, Coles, Silan, Dutra, Basnight-Brown & IJzerman (2023). The benefits, barriers, and risks of big-team science. Perspectives on Psychological Science.
Hostler (2023). The Invisible Workload of Open Research. Journal of Trial & Error.
Nickerson (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods.
Schneider (2015). The censor's hand: The misregulation of human-subject research. MIT Press.

Mark Rubin's Critical Metascience Blog:
Reporting checklist:


[This is an automated transcript with many errors]

Benjamin James Kuper-Smith: [00:00:00] I guess today we'll be talking mainly about your paper on workload and open science. Maybe, shall we just get right into it, and you outline what the paper's roughly about? Or you can maybe start with how you got to write the paper and why you wrote it.

Tom Hostler: Yeah. So the paper, it's published in, um, you know, the Journal of Trial and Error, which did a special issue, uh, edited by Sarahanne Field. Um, and the topic is the consequences of the scientific reform movement. So the kind of broad idea that, you know, there's a huge amount of change in science at the minute in terms of promoting open science.

Developing, like, new policies and procedures and stuff that impact a lot of the way that people do research. And they kind of wanted to put out a call for papers about, you know, what are some [00:01:00] potential consequences of this that people might not have thought of. And I think, generally, like, the, you know, the discourse around doing open science.

So by this I mean, you know, things like making your kind of plans of research open, like pre-registration; making your data open, your materials, your code; your contributions to the paper and, like, authorship and stuff. Um, you know, going way beyond what a sort of standard journal article from the mid-2000s reported.

Generally the discourse is kind of overwhelmingly positive, right? Open science is great for research, it's great for science, it's great for, like, producing better knowledge, and it's increasingly, like, good for people's careers, because there's, you know, all this stuff now about the incentives to do it and how people should be rewarded for doing it.

So it's kind of uniformly seen as a good thing that people should be doing. But I think [00:02:00] a lot of the discourse doesn't really touch on how researchers are actually, kind of, practically supposed to do it in the context of their, you know, day-to-day jobs and their day-to-day work.

And the issue is that, for a lot of researchers and a lot of academics, that context is that they are already kind of overworked. They've already got a huge amount of pressure on them to publish a lot of research, put in grant applications to get more money to do more research. But also,

Most researchers are not necessarily, like, employed specifically just as researchers, right? They're employed as academics by a university, and so they're expected to do a lot of stuff in addition to research. So they're probably expected to do a bit of [00:03:00] teaching, outreach, media stuff, administration, meetings, committees, uh, mentoring PhD students.

And so all of these things take up a lot of time. And so, in that context, the kind of expectation to do open research as well, which, you know, as good as the outcomes of doing it are, I argue does take more time than not doing it, is that actually a potentially quite big problem? That people are gonna be expected to do all this extra work in a context where they're kind of already overworked?

So that was the kind of, the sort of main thing about the paper. And, uh, I guess the context to me writing it was that, in my university, I'm a senior lecturer in psychology at [00:04:00] Manchester Metropolitan University. And when you join as a lecturer at my university, you have to, in the first couple of years after you join, do a PGCert LTHE, I think it's called.

It's basically like a certificate in higher education, right? Cuz a lot of lecturers join and maybe they don't actually have any kind of formal training in how to teach, mm-hmm, at all. So they put you through this course where you learn the basics about sort of pedagogy and teaching, and how it works and how to do it well. But what's cool is that it counts as, like, official academic credits, right?

Like any other kind of course. And you can use those credits to contribute towards doing a master's degree in higher education. Mm-hmm. Um, so this is what I did, and I actually really enjoyed it. And it's quite interesting to do, because it's not just academics that do it, so you get people from all the different parts of the university.

So [00:05:00] on my course, when I was doing it, you know, there were people who worked in, like, student admissions and teaching, um, sort of technology support and IT, and bits of the university that I didn't even know existed. There was someone who worked in, like, the university had this thing set up where it helped students who did art courses to sell their products and creations and stuff to the public.

And she managed that, and it's like, oh, that's really cool. Um, but anyway, so the master's sort of encourages you to reflect on your practice in the context of both, you know, your day-to-day job and, like, the university strategy and where you fit into the university in terms of what you do, but then also in the broader context of, like, higher education and the sector and what the sector is about.

Um, so there's all these quite interesting questions about what is the purpose of a university, and what's the purpose of a degree, and, [00:06:00] you know, if universities have this sort of strategy to do this, like, why are they doing it that way? And so when I did my final dissertation for the master's, I was looking at my role as a researcher and kind of, okay, well, why am I expected to publish research, sort of from the university's point of view, right?

Why are they paying me money to do research? How does it benefit them? And at the same time, I was quite interested in promoting open research and open science. And so my master's dissertation was about, yeah, how does promoting open science and doing more open science fit with the university's priorities, you know, like, why would they want us to do it? Mm-hmm. There's a lot of stuff around why it's good for, you know, the endeavor of science and producing more knowledge, and for individual researchers in terms of their careers. But why would, like, a university, who ultimately pays your wages, [00:07:00] like, what benefit does it have for them?

Benjamin James Kuper-Smith: What is it? Did you find an answer?

Tom Hostler: Well, I mean, a lot of it comes down to looking at the incentives again. You know, there's a lot of stuff about the incentives of researchers, that they need to publish in good journals to climb the career ladder. But you can also look at the incentives for universities, and the way that their performance is measured.

That might be set by, like, you know, the government, for instance. Uh, university rankings are quite a big thing that, like, management in universities pay attention to. Yeah. Um, and so what drives them might influence what a university wants its employees to do. And a lot of this is bundled up in this theory of academic capitalism, which is basically looking at the university as a capitalist actor.

So basically, if you think of, like, a [00:08:00] business, well, why is a business doing what it's doing? And the answer is usually cuz it's trying to make more money and compete against other businesses. So it's always looking to, you know, make more profit, or work more efficiently, or hit its KPIs, key performance indicators, and, you know, targets and all this kind of stuff. And so academic capitalism as a theory is basically saying, well, you can look at other businesses and other companies through this lens, but you can also look at universities through this lens. And perhaps the traditional view of a university is that it almost just operates in the background, right?

Like, academics and researchers are the focus, and they just happen to be employed by a university. But the university itself is not, like, a major actor in any of this. Uh, it's just somewhere you go to work, right? And they give you a lab and they give you access to a library, and you [00:09:00] kind of get on with doing your research.

And the theory of academic capitalism sort of says, well, you know, if that was ever true, increasingly universities do kind of operate in competition with one another, for funding for instance. And so that makes them behave like an industry that needs to be competitive, and needs to be more efficient, and needs to, you know, be able to measure its progress against its competitors and set targets.

And the internal organization of the university, its internal structures, and the way that it manages its employees are all, uh, very relevant to it doing well as a university. And so that kind of feeds down into, you know, how researchers are employed, and what they're employed to do, and how they're treated by the university.

I guess you wanted to ask,  

Benjamin James Kuper-Smith: How does the idea of a university as a kind of capitalist [00:10:00] enterprise, how does that now link back to workload, in the context of open science?

Tom Hostler: So that's one place where I thought, you know, there was quite a good link between these two things. So, it's fairly uncontroversial to say that open research does take more time compared to closed research, right? And most open research advocates acknowledge that, but, you know, say it's worth it, right? And the really clear example of this is something like open data, where, you know, 20 years ago, no one shared their data.

You just sort of forgot about it, left it on a CD or a floppy disc. Um, and it's zero work to do anything with it. Whereas now, you know, you're expected to put it on an online repository. But it's not just a case of, say, you know, dragging your Excel file over and leaving it there. You have to format it to be understandable [00:11:00] to other people who might want to use it.

So if you think of just a quantitative psychology data set, you have all these variable names where you need to explain what they mean. Why is this column, you know, duplicated here? Oh, well, that's because we recoded the question to be a separate one. And I mean, anyone who's ever downloaded someone else's data set and looked at it, there's always, like, a million questions that you have, like, why the hell is this there?

So actually putting a full data set online that is understandable to someone who's never seen it before can be quite a lot of extra work. And

Benjamin James Kuper-Smith: then just to jump in a little bit, because I had this recently: we did all of this, and I think I did it reasonably well, uh, for, like, seven different experiments.

I put it all on GitHub, and then the reviewer comments were, oh, I don't know how GitHub works, can you put it somewhere else that's easier to use? It's like, come on. Yeah,

Tom Hostler: exactly.  

Benjamin James Kuper-Smith: I did everything you want me to do. Just like not in the [00:12:00] specific thing that you know about. Yeah.  

Tom Hostler: Yeah. But that's it. 

Like, and that's part of it as well, right? Putting it on a repository that is findable and accessible to people as well. So, yeah, assuming,

Benjamin James Kuper-Smith: I mean, maybe this is more complicated, because I'm often in the, uh, area that sits between, like, often quite basic social psychology and computational neuroscience.

So on the one hand, you just assume everyone knows how to code and do all this stuff, and on the other hand, lots of people have no idea and have never done it. So you have to, yeah. Yeah. I don't know. I still don't know how to really deal with that specific comment, because, yeah, on the one hand, I get it, it's annoying if you just can't use it.

But then again, like, how much more do I have to do just so people can use the file that's already online? Yeah, yeah,

Tom Hostler: yeah. But I mean, from a sort of time perspective, then, you know, that's completely fair. But then I guess that is time for, like, another researcher who [00:13:00] wants to access this data.

Then they have to spend time learning how to use GitHub and stuff. So, you know, another example of how open research takes more time than closed research. But yeah. So to go back to the workload then: typically, in an academic job, you are, you know, technically obviously employed by a university to do a job.

Uh, but your contract is, like, very vaguely worded, or at least mine is. You know, it's not like a job where it's like, oh, you're expected to do this, you're expected to do that, this is your line manager. Um, you know, an academic contract is definitely like, okay, well, you're expected to do the teaching that we ask you to do, expected to do some research, and then the rest of your time, uh, should just be dedicated to scholarly activities, whatever that means.

Yeah, whatever that means. Like reading, writing, going on podcasts, [00:14:00] publishing stuff, peer reviewing, all this kind of stuff that  

Benjamin James Kuper-Smith: academics do. Although, just briefly, is there a kind of quantification of the proportions? I mean, this is maybe specific to you, at your place, with that position, but I wonder whether you can maybe say a bit for the UK in general. Do you know whether it specifies, like, you have to do this much teaching per semester, this much of that, or,

Tom Hostler: So I can't really speak for the UK, but I think most

Academic contracts will at the very least have a split between teaching, research, and other. And this is, like, a thing where, you know, the more research you do, and the better the research that you do is, the more time you then get to do research. You often hear stereotypes about, you know, people battling to get more research time. And quite often, you know, if you put in a bid for, like, a grant, you can buy out more of your teaching time by [00:15:00]

And, and quite often you can, you, you know, if you put in a grant for a, a bid for like a grant, you can like buy out more of your teaching time by [00:15:00] getting someone else to cover your teaching. And then that becomes research time. So like my workload is pretty much similar to that. You know, I have certain amount of hours to teaching, certain amount of hours to research certain amount of hours for kind of the admin and then, you know, professional development and, and other, but it's not kind of broken down on a. 

On a kind of super granular level, right? So I'll just get a certain percentage of my yearly workload, say like 20% or something, to do research. And I guess then the way that that is assessed is like quite often in, you know, things like in your department meetings or your PDs, they'd be like, okay, well how many papers did you publish this year? 

How good were they? Like, were they rated as, um, in the UK we have, uh, like, a national research evaluation exercise by the government that rates the quality of the research at all the [00:16:00] different universities. And the main measure of success is, like, whether your paper is three star or four star; those are the ones that you should be aiming for. That's the REF, the Research Excellence Framework. I think most countries have something similar. Some are more based on, like, citations and journals and stuff. But, you know, your kind of expectations for that are, you know, measured in these, like, perhaps yearly meetings, or targets for how much research you should be producing, uh, either as an individual, or perhaps as a department or a research group or a lab or whatever.

So one way, I guess, that the university can, from its kind of capitalist perspective, become more efficient is by asking people to do more work in the same amount of time. Mm-hmm. Right? So you could sort of say, like, potentially, with stuff like teaching, maybe you get a certain amount of [00:17:00] hours to prepare, because the teaching workload is actually broken down quite granularly. So it goes down to, well, you've got this many hours for this particular unit or module that you teach on, and within that, there's this amount of hours for your marking, this amount of hours for your teaching, this amount of hours for your preparation. So one way, uh, you know, for the university to become more efficient is by saying, like, oh, well, okay, you've gotta prepare this new unit, but we're not gonna give you extra hours for preparation.

You can just have to do it. So instead of hiring someone else, you know, you just have to do it, but it actually takes you probably longer than what is written on your workload for the amount of time it should take. And then this kind of applies my argument in the, in the paper is that, well, this can kind of apply to research as well, right? 

Because you can be expected to spend more time on the process of [00:18:00] research, in particular with things like doing open science, but you're not actually gonna get more hours on your workload to do that.

Benjamin James Kuper-Smith: Yeah. You mean like there's basically a fixed amount of time you, um,  

Tom Hostler: for research? Yeah, exactly. 

So if I have, like, I dunno, a hundred hours to do research on my workload for the year, and that's the way the university sees it, then if doing open research actually takes me a lot longer, I'm not gonna get other parts of my workload taken away. So they're not gonna say, oh, well, because, you know, we realize it took you an extra X amount of hours to put all your data online.

So, we'll take away these, like, 50 essays you have to mark. Because everyone is already working at essentially, like, a max workload, right? Cuz you're employed for that amount of hours per year. Um, so people rarely have a lot of free time. So it's quite [00:19:00] easy to just add extra things in without taking other things away. Which is really what they should do, I guess, if stuff is supposed to have a certain amount of hours for each thing. But in practice, quite often what happens is you get asked to do more, but you don't get more hours on your workload

Benjamin James Kuper-Smith: for it. It seems to me that, like, one very simple solution to this, I mean, yeah, a simplistic solution, would be to expect less research output from researchers, as a university, right?

In the sense of, like, okay, well, now the projects don't take a hundred hours, let's say, just to make the maths easy. Uh, they take 110 hours, because you have to do this open science stuff. Right? As a silly example. Yeah,

Tom Hostler: exactly. Um, and then, I mean, I think, at least in my university, you know, I do read a lot about, like, people having to have a certain quantity of publications. But I think, generally, [00:20:00] the way that you are assessed is more about the quality. So a university would generally be more happy if you published, I don't know, one or two really high quality papers a year than four or five kind of rubbish ones in rubbish

Benjamin James Kuper-Smith: journals. I mean, I've seen that as the criticism of the REF thing in the UK. Yeah.

The REF, because it's like, I mean, four stars is the most, one is the least, or whatever, right? Something like that. Yeah. And isn't it something like, uh, each jump you make is worth basically infinitely more than the one before, something like that?

Tom Hostler: Exactly. Yeah. So they would be much happier if you gave them one four star paper than like four one star papers. 


Benjamin James Kuper-Smith: Or even four, three star papers, right? Isn't it even that? Yeah,  

Tom Hostler: yeah, exactly. So there's definitely a kind of idea that, well, this shouldn't be a problem, right? Because if open research is rewarded, uh, as a kind of [00:21:00] official marker of quality, either in things like the REF, or, uh, in promotions and hiring practices.

Like, if instead of looking at how many citations a researcher has, and how many publications in, like, a big journal, they actually looked at the objective quality of the research, in terms of, I guess, the rigor of it, the methodology, whether it employs open science practices.

Then there shouldn't be a problem. But my kind of issue with that is, like, well, yeah, in theory it shouldn't be a problem, but changing the criteria for the quality of research doesn't in itself do anything to reduce the competition that already exists. Mm.

Or at least it won't in the long run. So at the minute, people are sort of saying, well, doing open research is [00:22:00] great because, you know, uh, that's what's gonna be rewarded. So by doing one really good, rigorous open research paper, you'll get the job ahead of someone who is still producing lots of crappy closed research papers because they think that that's what's good for their career.

But let's say, in five, ten years, all the incentive structures have changed, and now doing that kind of closed research is not rewarded at all. Doing open research is rewarded, and that's how you get a job, and that's how you do well, and that's how your university does well. Then the competition that already existed between researchers for jobs, and the competition between universities, is still there.

It's just that now everyone is doing open research. So eventually you still end up reverting back to the, you know, quality is better than quantity, but when [00:23:00] everything is high quality, then quantity again becomes the important thing. Right. So I think that's where the discourse around incentive structures at the current moment in time, you know, it's a good idea. It's better to reward people doing open, rigorous research compared to closed research. But once everyone is doing it, you've still got the problem of the competition, and, you know, people burning themselves out trying to publish. Publish or perish is still gonna exist.

It's just gonna be publish open science or perish. But  

Benjamin James Kuper-Smith: I mean, that's not really, I mean, open science was, uh, not created, let's say, open science exists not to solve that

Tom Hostler: particular problem, right? No, exactly. But I think that it becomes, especially, you know, in this example in terms of workload, [00:24:00] it can potentially exacerbate the problem. If it does take more time than closed research, and people are not given extra time to do it, then, you know, it just raises the expectations on researchers of what they're supposed to achieve in a set period of time.

Benjamin James Kuper-Smith: Right. I mean, is it basically your point that, like, in an environment where everyone by definition already has a hundred percent of their time occupied, and there's this certain kind of incentive structure, then now you're just making it worse for everyone? I mean, it makes the science maybe better, hypothetically, realistically, yeah. But if you have to cram it in, in the same amount of time, then you're just, um, exacerbating the existing problems, that

Tom Hostler: kinda thing. Yeah. Yeah. So it's not saying that, like, you know, oh, it's open research [00:25:00] advocates' job to now, like, you know, stop academic capitalism and stop competition and fix that.

But, you know, I think you should be aware that it has the potential. It's not just this necessarily uniformly good thing with no negative side effects at all. And it has the potential, if you just promote it uncritically, to make that kind of competition worse, if you're expecting people to do more and more things without giving them the sort of time and resources to

Benjamin James Kuper-Smith: do it. 

But from what level is the, I'm not sure I can articulate a very specific question here, but it seems to me, I dunno, I'm just thinking right now: if you are a scientist, let's say, and you are doing peer review, right? Um, you're reading someone's paper, and they don't have open, uh, data, for example, right?

It still seems to me that, as a scientist, you still have to just say, like, hey, make your code open. So it seems to me, like, [00:26:00] on an individual basis, you still have this sense of, you should still ask for, I mean, you can take it too far, obviously, but you should still ask for the open science practices as a scientist.

But then it seems to  

Tom Hostler: Yeah, a hundred percent. I think it's definitely more of a systemic issue, and, you know, it ultimately goes down to, you can keep going up and up: well, what causes this? Well, what causes this? Well, what causes this? And the ultimate cause of everything is the way that, uh, research and

Researchers are funded, which ultimately is by the government of whatever country you're working in. And so it ultimately becomes a kind of political decision. Right. Um, but I guess at every level that that cascades down from, um, you know, there are opportunities to direct it in a certain way, or make changes in a certain [00:27:00] way, to try and, you know, mitigate that.

But that would be the kind of, you know, the broader theory of academic capitalism: that governments fund research from this, uh, you know, neoliberal political view that the most efficient way to do anything is to create a market and get people to compete with each other for money, because, you know, that's the best motivator to get people to do stuff. Um, and there's certainly, like, other ways that you could fund universities and fund researchers other than that,

Benjamin James Kuper-Smith: if you wanted to. So, I mean, is your article then, to some extent, maybe meant to raise awareness of these broader things with people who are, I mean, there are definitely people who take the open science thing too far. I've definitely seen that myself, even in my limited time in science. Um, but is your article maybe almost more [00:28:00] for people in university administration, rather than for scientists, almost?

Tom Hostler: I think it's, uh, well, I think it's for researchers as well, but to kind of, you know, get them to perhaps think about the broader system in which research exists. 

Because I think a lot of the articles that have been written, you know, the discourse around open research, is about, yeah, the research system and the incentives for research. And I think just focusing on that in a silo, while ignoring the broader context of, like, well, research is done in universities and researchers are also academics. 

Um, which is hard, right? Because research is a universal, global thing that broadly operates the same way for everyone. But those other things are very different across different countries and, you know, even different [00:29:00] universities might have quite different policies. But I think thinking about that context a bit more can help people who want to promote open research to consider, yeah, the possible consequences of doing that that maybe they hadn't before. 


Benjamin James Kuper-Smith: Um, can I play devil's advocate a little bit about the idea that open science increases workload? I say playing devil's advocate here very, uh, explicitly, because I don't exactly agree with all the points I'm about to make. But it seems to me that whenever I hear you, or read your article, and see the argument that, you know, this takes more time and that kind of stuff, part of me agrees, because obviously I realize how long it takes to write a well-written pre-registration. 

Uh, I mean, for some experiments I've done, it takes you like an hour, because you've basically done the same thing before. It's super [00:30:00] easy. But for others, you really have to go through everything from scratch, and that kind of thing. And I also know that... I mean, I'm at the end of my PhD now. 

I've spent a lot of time in my PhD reading about open science and, uh, you know, how to visualize your data so it's more accessible to people, how to share your code, share your data, all that kind of stuff, right? Uh, so I know that takes time. But part of me also wonders whether it's more a short-term cost that has a long-term benefit, not only for science in general, but also for yourself specifically. 

Um, and maybe to start with something like pre-registration: the reason it usually takes so long to write a good pre-registration is because you didn't know what you were gonna do before you started writing it, right? So in some sense, you're just taking lots of the time you would spend on analysis and you're doing it before you collect the data. 

So in principle, this should save you quite a lot of time, right? Because you have a lower probability of running an experiment that, [00:31:00] after you ran it, you realize you shouldn't have run, because now you make all the analysis steps explicit beforehand, and this kind of stuff. 

Um, yeah.  

Tom Hostler: Yeah. I mean, in terms of the efficiency of science in general, of generating knowledge, it sort of speeds things up, if you're looking at it on a kind of epistemological level, in theory. Like, yeah, if through the mechanism of open science you find that a paper has been, like, p-hacked, or, you know, maybe just because you have the open data you find that there's a mistake in the data analysis and the conclusion is incorrect, then you can save the time that you would otherwise spend, I dunno, another six months trying to replicate that study, exactly, because you already found it. Yeah. So I think that, I mean, even  

Benjamin James Kuper-Smith: [00:32:00] at a big scale, entire fields chasing effects that were  

Tom Hostler: never... Yeah, exactly. Um, I mean, one potential thing is that, yeah, in theory this works; in practice, for that to happen, it does require people to actually go through and look at other people's data and try and replicate it. Which, I think, you know, there's a broader question there about whether there are people who do that or not. Uh, whether science is sort of self-correcting, or whether a paper gets, you know, hundreds of citations before someone actually goes back and looks at the data and finds the error. 

Um, but I think, uh, you know, in theory, yeah, it definitely reduces, removes inefficiency, let's say, from generating knowledge. And then I think, yeah, there's definitely an argument that, in terms of your workflow as a researcher, if, again, [00:33:00] you were being inefficient by not labeling your data properly and not really planning your studies properly, um, then documenting all that kind of stuff for someone else means that you yourself can then go back and look at it. And, you know, there's this idea that, oh, your most common collaborator is your past self. 

Yeah. And they don't answer emails, you know? Um, so I think that can definitely speed things up, and once you've integrated all these things into your workflow, that would sort of save you time. But I guess the other thing is that potentially the expectations to produce stuff elsewhere kind of increase as well. 

You know, the time that you save, perhaps, yeah, planning your pre-registration, perhaps you end up using that time to write your [00:34:00] metadata to put it in a repository for someone else. And so actually you weren't able to cram in an extra study there. Um, it's just that the study as a whole kind of took longer: different parts of it were quicker, but other parts took longer. 

And the other idea that I brought into the paper to try and capture this idea a bit is that it's not just that open research practices like formatting data or writing pre-registrations can take more time; it's looking at a lot of open science practices as a kind of form of administration, basically. 

And I definitely think, you know, when I was, like, an open research... well, still am, like, an open research advocate, going, you know, talking to people about that stuff. I teach research methods, I'm really into research methods, I'm interested in them, I like talking about them. So I'd be telling people about pre-registration, going like, oh, you know, there's this really cool thing that [00:35:00] you can do, where you can go on this cool website, like the Open Science Framework, and pre-register a study, and it'll prevent QRPs and p-hacking and stuff. 

But you can sort of see other people, who are perhaps not so into research methods, looking at that and going, all right, so this is like another form that you want me to fill out before I can get on with my job of doing research; and looking at a lot of open research practices as admin, as administration. 

You can also use these kinds of theories of administrative burden to see that, you know, potentially a lot of them take longer than they should. And a lot of your time is not actually spent doing anything that has practical or functional benefits, but is just done to record stuff that might be useful for someone else, and might have a benefit, but actually takes longer than it should. 

And there isn't necessarily, [00:36:00] like, at the minute, any evidence that it's actually good in practice. So my example of that, and I don't wanna dunk on this as a particular thing, is the transparency checklist. Okay. So this is, um, a checklist that a group of researchers has produced. 

Uh, and if you are trying to make your study open, it's something that you can go through, and it will say, like: okay, have you pre-registered this element? Have you pre-registered that element? Have you made your data open? Have you checked the metadata? Have you made your materials open? Have you made sure that these are accessible? 

You can go through your whole project and follow this checklist to make it as open as possible. And this thing probably takes like five, ten minutes to fill out, and you think, oh, okay, that's a great tool. That's a great thing that people can use, uh, and potentially should be using, to make their research open.[00:37:00]  

But you can sort of see, uh, you know, a future where stuff like that becomes expected, either because journals or funders might require you to do it, or just as a general sense of, oh, a good piece of research does this. And it's similar with, like, reporting guidelines for studies, right? 

I mean, if you do a systematic review, one way that you judge the quality of the studies you include is by seeing whether they followed a reporting checklist: did they report this, did they report that, did they report how they selected participants, how they coded their dependent variable, blah, blah, blah. 

So it kind of becomes that filling out this checklist, or filling out this, yeah, form, is something that you need to do as part of the research. But in the future, is anyone actually gonna go and read the checklist itself [00:38:00] and use that information for a particular purpose? Or did it just become part of the administration of doing research, that you had to fill this out and write this down? 

Because someone somewhere might eventually want to check something or want to read it, but it doesn't necessarily actually have any functional benefit on its own. Does that make sense? Yeah. I think, I mean,  

Benjamin James Kuper-Smith: the thing I'm reminded of right now is, um, where I started my PhD... I mean, we, the lab, moved, but there we had a lab book that you had to fill in every time you used this particular room. 

Right. And I think I never did. Um, but for me it was also like, all the information you want is in my MATLAB data. Like, I've saved when it was, what computer it was, you know, it's all in there. But in theory, or probably in practice, I would've had to [00:39:00] actually fill in this thing whenever I was using the room: at such and such a time, what happened, and blah, blah, blah. 

Um, I think it's also probably there to, like, report if the monitor stops working or something like that, right? It has other purposes. But it's that kind of thing you mean, right? Something that kind of maybe  

Tom Hostler: Yeah. And with stuff like that, you can always pretty convincingly justify why it's there, and say, like, oh, well, previously we had a situation where we didn't have this checklist and something happened, and so now we need to do it, uh, because that will help us prevent this in future. 

But it becomes just an example of, you know, administration that, yeah, has good intentions, but whether it actually works is a different question. And, you know, that sort of thing, like lab books and stuff, you can see why someone might have invented it, because, yeah, one time the monitor [00:40:00] stopped working and no one really knew when that occurred. 

But getting every person who goes into the lab to fill out this checklist every time doesn't actually stop the monitor from ever breaking. Yeah. Do you know what I mean? It's just a kind of audit trail. And so that's a potential, you know, not criticism, but concern about some elements of open research, of just documenting everything about what you do. 

You know, making your data available to someone else does have a functional purpose: they can go and check it. Recording a checklist that says "I made my data available" is just to help you remember to do it. But that sort of information itself doesn't necessarily have all these benefits. 

And you could potentially see, you know, in 10 years' time, someone does a review and says, like, actually, only 0.1% of transparency checklists were ever accessed or read by anyone. [00:41:00] And then you think, okay, well, cumulatively, across all the hundreds or thousands of researchers that spent time filling out that checklist, was that worth it? 

I don't wanna say... I'm not criticizing that checklist in particular, I think it is really good. It was just an example, yeah, of the sort of thing. And when you take that holistic view of academics as well, you realize, well, they're already being asked to do a lot. You know, I have a checklist that I need to fill out at the start of my teaching module to make sure I've set up my online resource page correctly. 

I've got another checklist that I need to fill out at the end of the module to show that I've collected student feedback. The library might send me a checklist to ask me which of these teaching tools I use that they need to provide support for. And so on. And all of these things individually only take like five, ten minutes to fill out, but then you have five or six of them to do. 

Suddenly that's an hour. And if you view [00:42:00] these kinds of minor administrative tasks through this, um, lens of administrative burden, they can accumulate quite a lot, and it's quite difficult to ever argue against one of them in particular, right? 

Yeah. Because, you know, this only took you five minutes, and it potentially had this really good benefit. But for the researcher who has to fill all these things out, it becomes, well, which ones do I drop? Like, I'm being asked to do all of them. And the guy, um, he's called Bozeman, has written a lot of stuff on research administration, and he's got a quote from one of his studies where someone says it's, like, yeah, death by a thousand ten-minute tasks. 

So I think this is another thing for open research advocates to consider: that, you know, [00:43:00] anything that you ask people to do has a cost. And in most cases that cost is time. And that time might not be a lot, but time is a very valuable resource when people are already at maximum capacity. 

So even if the cost-benefit analysis seems to be, well, the benefits could be huge, because this could provide all this amazing metadata about open research practices, so let's do this; and the cost is tiny, because it only takes the researcher five minutes to fill it out. 

That's not really considering that actually there might be 10, 15 tiny little administrative tasks that have the same logic to them that someone has to do. And that cumulatively they did have a big cost, because they actually had to spend, you know, an entire day filling out these forms or answering these checklists. 

Yeah. And  

Benjamin James Kuper-Smith: of course things that take five to 10 minutes ready, take five to 10 minutes, but that's,  

Tom Hostler: uh, separate. [00:44:00] Yeah. And there's a lot of aspect. Um, I mean, you can, I actually find the whole sort of administration, you know, the psychology of administration. I feel like you could do. There's a huge kind of thing there about like the cognitive psychology of it, right? 

Like task switching and all this kind of stuff. It actually can be quite a burden, yeah, trying to find all the information and, yeah.  

Benjamin James Kuper-Smith: I mean, the, wanna ask maybe a little bit, is this, I mean, to some extent the, the open science movement is still pretty young, right? Especially if we compare it to, yeah. 

Even compared to a young field like psychology, the open science movement is, I dunno, what are we gonna say, 15 years old at most? Something like that, right? And is this just a problem of the process we necessarily have to go through to, I mean, for one, realize what is necessary and what doesn't actually do anything? 

Uh, is it maybe something where people also need to figure out how to do this? I mean, I agree, for [00:45:00] example, with the checklist, whether it'll be read or not: I'm not sure anyone has read my pre-registrations yet. I dunno. They've gone through peer review, they've been public for quite a while. 

I don't think anyone's ever read them, as far as I can tell from the review comments. Um, yeah, there are all these different things, right? And I've definitely also seen reviewers... I mean, yeah, for one study they were like, oh, it's good you pre-registered some studies, but it would've been better if you pre-registered more. 

It's like, no, it wouldn't. There's a difference between exploration and confirmation; it would not necessarily be better. So this is not to criticize a particular person. But it seems to me, especially with open science and the reproducibility stuff, there's a really strong pull, that I've also definitely felt, to really go into it, and to then also want to become, like, an open science advocate, a reproducibility expert, and whatever. 

And I feel like it's something that not only every researcher has to go through or many go through at some point, but also the field just has to kind of figure out like, what's the right [00:46:00] balance here? What are the right expectations?  

Tom Hostler: Yeah, I definitely agree. I think, you know, it's not gonna go away, right? 

There are changes that are already being implemented into, like, policies for journals, for funders, and, you know, increasingly for institutions and jobs and stuff. I've actually got a side project going on at the minute where I'm recording... I've got job alerts signed up for any academic job that mentions the words open research or open science. 

And I'm saving the job adverts. I've got hundreds now, and I just need some money, or someone to help me go through them. But, you know, I'm interested in how the number of alerts I'm getting has increased massively over the last few years. 

This is something that a lot of people want researchers to, at the very least, know [00:47:00] about. Um, and it's part of your university's research strategy: this is something we should do. So it's not gonna go away, and people are gonna have to do it. So it's about, I think, adapting to that, and for people to think differently about, yeah, the research process as a whole and what that looks like, and building open research practices, and the time and skills it takes to do them, into that. 

And my prediction, I guess, the way that it will go, and the way that it's already started to go... Basically, look, if you're already working at max capacity and you suddenly have to make this incredibly complicated data set that you have openly available... And, you know, we forget as well, a lot of people just do online [00:48:00] questionnaire studies, and, yeah, 

okay, it takes two minutes to make that available. Yeah. But a lot of interesting data that people collect is incredibly difficult to anonymize and share. I have a professor in my department who does stuff on the intergenerational transmission of mental health issues, and her data is, like, videos. 

She gets a mother and a child to wear cameras on their heads that film each other whilst they interact with each other, and there are all the associated batteries of psychological questionnaires and stuff with this. You think, God, that's so interesting, and imagine all the different questions you could answer with a data set like that. But the ethical and legal and technical challenges of sharing that are huge. 

And the answer is, well, at the end of the day, if you're at max workload [00:49:00] and you're expected to do stuff like this, you just can't do it on your own. That's the answer. You need to get someone to help you, and you need to collaborate more and start working more in teams of researchers on a project. 

And that team basically needs to be as big as it needs to be to get all this stuff done. So I think that is probably the way a lot of these open science practices are gonna be accommodated in the research process and in the work of researchers as a collective: by people not pursuing individual projects and individual grants anymore. 

It's by saying, okay, well, we actually need a team of, I don't know, 10, 20 people to do this, you know, big, complicated project to the highest open research standards. Uh, we need to get [00:50:00] software engineers and specialists in methodology and specialists in open research to come and do this stuff as part of a team. 

Um, because it's just beyond the scope of any one person to be an expert in cancer biology and, you know, the legal ethics of data sharing. Yeah. Do you know what I mean? The research is just too complicated for any one person to do.  

Benjamin James Kuper-Smith: Yeah, and I mean, it's interesting to me, because these big projects are becoming increasingly popular, I think. 

Not just more frequent, but I think lots of people also like doing them. And I've definitely talked to some people on the podcast about it. For me it's always slightly weird, because I really don't like the idea of doing that myself, just from a practical perspective. I like, basically, me and one or two other people doing stuff. 

I don't really like the whole, um, you know, you have 30 people on the paper and everyone does this little part. I just wanna do the entire thing, right? Yeah.  

Tom Hostler: But, um, yeah, it's [00:51:00] really interesting you say that, actually, because I'm currently writing a sort of paper slash... well, it's supposed to be a blog post for Mark Rubin's critical metascience blog, but it might turn into a paper, cuz I'm already at about 4,000 words. 

It's about this idea that that sort of change to research, the change to the way that people do research, is kind of linked to people's identities as well, right? Mm-hmm. And it can be quite a personal thing to say, okay, well, actually, the way that you see yourself as an academic and the way that you work... you know, you can't do that anymore. Sorry, you need to start working as part of a team, because what we expect you to produce, you physically probably can't do that on your own. 

And so that's got quite, yeah, interesting implications. I think, again, people haven't really thought about how all [00:52:00] these changes to the research process, and careers, and the way that research is rewarded, all look really good on paper and all sort of fix the problems they're supposed to fix, but they have this perhaps intangible effect on things like identity, and people's actual personal feelings towards being an academic and doing research and doing their job, that can often be dismissed in the name of progress and good science, but is perhaps not unimportant either. 

Benjamin James Kuper-Smith: Yeah, I mean, one thing I often think about is whether the more you formalize something, it also takes away spontaneity and creativity and these kinds of things. And to some extent I feel like it does, but then again, it probably also increases them in other areas, such as, you can read a paper and download the dataset and then just fiddle around with it. 

Right. So in some sense I [00:53:00] feel like it might just shift around where the spontaneity is happening a bit. Yeah. But, I mean, in a sense, still, nothing is stopping you from just running something quickly and  

Tom Hostler: you know, piloting it briefly. You know, I think there are always gonna be things that remain the same as well. 

Like, even with all the change. But it's hard, yeah. Everything is also changing so fast, and I think that's what makes it more important to at least try and anticipate the potential consequences of stuff as well. So that, you know, even if you can't stop it, it's not gonna blindside you that suddenly, 

you know, people are quitting academia because they don't like this new way of working. Yeah.  

Benjamin James Kuper-Smith: Eventually... trying to anticipate stuff. I mean, I guess with the big team thing, in some sense it becomes more of a company kind of thing, right? Where you do your... Yeah, exactly. Some people really love it. 

It's just, I guess I have too much of an ego to [00:54:00] do that. Yeah, I dunno. But, um, I want to ask about... you mentioned earlier, you know, you can't do this on your own, you need a team. Um, I have a suggestion, to which I also immediately have a counterpoint, which is basically: is this problem you're addressing, with workload and all these kinds of things, more of a problem for people who don't have lots of funding? 

Is that kind of part of the problem? Because it seems to me like, you know, if you are a supervisor and you have four PhD students, you're not gonna do open science yourself, basically, right? That's what you have your PhD students for. So, um, obviously PhD students still have to do it, but it seems to me that many of the problems you're describing are much more a problem for people who don't have a grant, who have maybe one person working with them, and who have to do a lot of the research themselves. 

Um, the immediate counterpoint is that, well, the data's open now, so you don't even need funding to do lots of cool stuff. But I was just curious whether that rings true to [00:55:00] you or, um,  

Tom Hostler: Yeah, I think that's definitely true, because a lot of the time, if you have access to even small pots of money, especially with these kinds of administrative tasks, in terms of, you know, setting up your OSF page and writing all your metadata and stuff, or writing a description of how the code works and things like that, 

you can potentially just use those little pots of money to employ a few people for a few hours to help you with it. Um, whereas if you've not got that, then you basically have to do it yourself, right? And potentially, if you're in that position where you don't have a lot of funding and you don't have a history of getting a lot of funding, then, at least in the UK, you would probably also have less research time anyway, because usually that's correlated with your track record of [00:56:00] research: people who are good at research get more research time. 

So if you're struggling to just do these little independent projects, when you have a few hours here and there in between your teaching, then at the very least, yeah, they're gonna take a lot longer. Or you might even start to design them to minimize the amount of extra work you have to do for open science. 

So you might be like, oh, well, there's no way that I'm gonna be able to make this quite complicated qualitative data set anonymous, so I'll just do a questionnaire study, cuz I know that I can, yeah, put that online pretty easily. Um, so I think, yeah, if you don't have the resources to do it, it could also feed into the sort of projects that you even attempt in the first place. Uh, especially those little individual, you know, pilot projects that people might do to test out a new idea, or things like that. [00:57:00] Mm-hmm.  

Benjamin James Kuper-Smith: Yeah. To some extent, I wonder whether that's a bad thing, you know? I mean, I guess the thing is, you're doing something potentially better that's not as worth doing, or that's more obvious or whatever, as compared to doing something cooler but, yeah, not quite as well. Well, I guess there's also the question, like, you can still do it well but not do the documentation, and then it just looks as if you didn't do it well. 

But yeah, that's a whole different question. I had a general question, something that, as I was reading your article and preparing, I kept going back and forth on a little bit, which was: to some extent, is this more of a problem for... it's difficult to find the right terms here. Not for, like, bad science, but... I dunno. 

When I started my PhD, let's say, I was really into, ah, let's do a few different projects, and we can, you know, do something quickly and get it out, right? [00:58:00] And, I mean, I just realized personally I don't like doing that. 

Um, but now I also feel like there's just probably not that much worth in it to begin with. Like, you know, just doing something quickly because you can do it: you had an idea, you ran with it. Well, often the first thing you think of isn't the thing you should be doing. And to some extent I'm wondering whether, if you increase the running costs, if you add these administrative burdens to it, you are less likely to just do something quickly because it can be done quickly. 

But I guess what I mean is, if you have a big project that has several experiments, in my case with neuroimaging or something, you know, the longer the project is, the smaller, by percentage, these small administrative tasks become. But if your project is very small, then the administrative tasks become a proportionally much larger part of the project. 

Right. So, I dunno, to some extent it seems to me that this is actually also pushing towards bigger projects, um, [00:59:00] that, I dunno, I feel are probably more worth being done in the first place. But I'm not sure, I dunno.  

Tom Hostler: Yeah, I mean, it's a good question. I think you're definitely right in the sense that, yeah, a little project perhaps becomes too much work to even bother attempting. 

But then in many cases you could say, oh, actually, it probably would've been better for me to contribute a smaller part to a bigger project instead, you know, use my time to do that. But I dunno... a lot of this taps into this discourse more generally about moving to more teamwork, moving to bigger projects and stuff, which, yeah, in many ways are objectively better than small studies in things like larger sample sizes. 

But I guess there are also, you know, questions [01:00:00] about... well, those larger projects create a certain type of knowledge as well. Yeah. Yeah. And whether that knowledge is always better than the knowledge created by smaller projects is, you know, a different question. And do you kind of need a space to be able to try out those smaller ideas, in order to see if they work first? 

There's a really good, um, paper about the Psychological Science Accelerator and big-team science projects, um, where they discuss, you know, the benefits of them and also the weaknesses, or like the limitations. And one of the limitations they mentioned, which I thought was really good, is: well, the more money... you know, the bigger the project, the bigger the risk of making a big mistake if it doesn't go well. And actually, [01:01:00] is there a bit of a potential for people to just say, okay, yeah, big is always better? 

We need these bigger teams, um, which are, well, you know, more expensive in terms of even just the staff costs of working on them. And there's a kind of assumption, like, oh, well, a bigger, more expensive project couldn't possibly be worse. Like, it couldn't be bad. But if the project does end up making a mistake or not being great, it's like: we've actually wasted a lot more time and money on it than if you had maybe tried it out as a bit more of a sandpit idea first, and then realized it was a terrible idea through the process of trying to do it. 


Benjamin James Kuper-Smith: I mean, maybe. I guess lots of what we've been talking about kind of circles around the differences between doing science in a hypothetical scenario where there's unlimited time, et cetera, and the realities of what it's actually like, and the other things you have [01:02:00] to do that take away time, that kind of stuff. 

I'm just curious, from all of that, what would you say are... if we're looking for solutions on a personal level, um, I don't think we can solve the system right now, but on a kind of personal level, as scientists... the most time-effective principles or aspects, where you go, okay, this actually takes 10 minutes and it's worth several hours, compared to something else where you're like, that takes huge amounts of time, and who cares, basically? 

Tom Hostler: I mean, I definitely think in a lot of cases, you know, perhaps the time cost actually isn't as big as some people think. Even with, like... you do a questionnaire study or something on Qualtrics. You know, you can download your entire questionnaire as a Word document with, like, two clicks and then just put it on [01:03:00] the OSF page, and suddenly, you know, people have this exact record of every question that was asked to your participants, in exactly what order, with the spelling mistakes that you put in there and the, you know... But it is very easy to do. 

And I think the same with a lot of quantitative datasets as well: potentially people think about all the ethical issues of making certain things available. But if the main reason, or at least one reason, to make it available is that people can check your analysis, right? 

Then they only really need the data that allows you to reproduce the analysis. So they don't need, perhaps, your demographic questions, if you didn't include them in your analysis. So you just delete your age and gender variables before you share it.  
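A minimal sketch of the step Tom describes, dropping demographic variables that played no role in the analysis before sharing the file. The column names, values, and file name here are hypothetical illustrations, not from any actual study:

```python
import csv

# Hypothetical raw rows: demographics were collected but not used in the analysis
rows = [
    {"participant": 1, "age": 24, "gender": "f", "condition": "a", "score": 5.1},
    {"participant": 2, "age": 31, "gender": "m", "condition": "b", "score": 4.7},
]

# Keep only the variables needed to reproduce the reported analysis
keep = ["participant", "condition", "score"]

with open("open_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=keep)
    writer.writeheader()
    for row in rows:
        # Copy across only the whitelisted columns; age and gender never leave
        writer.writerow({k: row[k] for k in keep})
```

The resulting `open_data.csv` is what would go on the OSF page: enough to rerun the analysis, without the sensitive demographics.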

Benjamin James Kuper-Smith: Although, don't you think that kind of defeats a huge part of the purpose of doing this stuff? Like, I, for example, don't care about age and [01:04:00] gender, right, in any of my studies, basically. But maybe someone else does, right? And then they have that.  

Tom Hostler: Yeah. But if it's a question about, like, oh, well, your ethics committee doesn't want you to share that, you know, it's either share all of it or nothing... yeah, okay. I would say, actually, in some cases you could probably share some of it, and it's still gonna be very useful to someone else. 

Yeah, yeah. Of course, of course. And that shouldn't be a barrier. And I think, yeah, sort of what I said before: there probably are efficiency gains to be made for individual researchers incorporating open science stuff into their workflows. And probably the ones who are gonna benefit most from that are the ones who are working very inefficiently at the minute, and don't have a good system for storing all their old data on their computers, and are constantly losing and looking for files: oh, where was that questionnaire that we used, and [01:05:00] oh, how did we do that manipulation before? 

Benjamin James Kuper-Smith: And you didn't need to attack me like that. 

No, luckily, luckily, actually... I mean, I guess in a sense I grew up with open science stuff, right? So even, yeah, even if I'm not super organized, I have to be. So, like, yeah, I can't actually be that disorganized anymore. Yeah.  

Tom Hostler: Yeah. And also for, you know, new PhD students who are potentially learning how to do research, or master's students, you know: 

embedding those sorts of workflows into the way that they're taught to do research is, yeah, gonna be really beneficial. And, you know, they're not gonna notice that it takes longer, because that's the way they've always done it. So I think there are definitely benefits to that, and I would never want my paper to be interpreted as saying it's not worth doing these things, but more that the costs have to be [01:06:00] borne somewhere. 

And we should sort of try and think about ways to mitigate that, and the way the systems are designed to mitigate that. Okay.  

Benjamin James Kuper-Smith: Um, yeah, I still have to figure out how to make these transitions to the recurring questions, because it's obviously such a, like, jump. Uh, yeah, I guess I'm not gonna be able to... I'm not gonna create a transition. 

I'm just gonna go straight to recurring questions now. Um, yeah. The first recurring question is always a book or paper that you think more people should read. This could be, you know, a hidden gem, something that's been forgotten, or maybe just something that you think is a great paper: lots of people know it, but everyone should read it again. 

I dunno. Um... oh, sorry, just briefly: I guess I didn't mention this so far, but I always put links and references in the description, so you can just find the paper and all the other stuff we mentioned. 

Tom Hostler: Okay, cool. Um, [01:07:00] yeah, I had to think about this, 'cause I had a few. Um, but the one I was gonna go with... I always end up recommending this to people... is a book called The Censor's Hand: The Misregulation of Human Subjects Research. 

Never heard of that. Okay. By Carl Schneider. I have it here, actually. Yeah. Um, so, uh, it's basically about the problems with ethical review boards. And throughout the book, he goes through this pretty convincing argument, I'd say, that they are very ineffective. 

Uh, they often do more harm than good, and, you know, we should basically abolish them and come up with a better way of regulating research. But what I like about it is that it's also set in the broader context of how they were sort of created as a response to, uh, you [01:08:00] know, kind of scandals in research and crises. 

Yeah. And so they kind of had good intentions behind them, but the way that they've ended up actually makes things very burdensome for researchers, and worse. You know, his main take-home message is that they have a kind of net negative effect on science: preventing good research that should be done, delaying research, costing money, um, just generally putting a lot of extra, yeah, costs and burdens on researchers to comply with requests and things. So, yeah.  

Benjamin James Kuper-Smith: That's interesting. I had one point I considered mentioning at the end of our discussion about open science and all that stuff, which was the question of whether we're almost criticizing the wrong thing for the time commitment. Because I've definitely spent time, you know, learning about open science and implementing it, that kind of stuff. 

But I've definitely wasted a huge amount of time on ethics [01:09:00] applications, because we've been in hospitals, which just makes everything way harder. 

Tom Hostler: Yeah. Oh, I mean, he's got so many case studies in this, particularly about medical research. Yeah.  

Benjamin James Kuper-Smith: Well, the thing is, we're not doing medical research, right? 

But we're in a hospital, so we have to write, like, way bigger, uh, ethics applications and things. And yeah, so if we're talking about what's taking time away here, it's not the open science, in my case. Yeah. Um, yeah.  

Tom Hostler: Yeah. It's a really good book, though, and it's written in quite a, you know, nice, engaging style as well, which I appreciated. 

So, um,  

Benjamin James Kuper-Smith: Second recurring question: some error you made repeatedly. Um, I dunno, some people make an error once and then they learn from it. I don't. Uh, I'm curious: what's an error you've repeated too often, or something you're working on changing?  

Tom Hostler: Um, I think... I'm [01:10:00] gonna answer it more generally and say I wish I'd learned sooner about null hypothesis significance testing properly. 

Mm-hmm. Okay. So I think, you know, the way that you're taught it at undergrad and master's level is, you know, sort of quite superficial, quite procedural. A lot of bits of it don't really make sense, but you go along with it, because that's what you're told to do, and that's what other people are doing, and that's what you find in published papers. 

And so I kind of wish I'd learned about... so, you know, the example when I teach my students now is: you have the p less than .05 cutoff rule for deciding, you know, it's like a binary decision about significant or not. But then you also see papers report the cutoff points of .05 and .01 as this kind of sliding scale of evidence: oh, this one was [01:11:00] more significant than this one. 

And for ages I would just be like, I just can't get that to click in my head, how those things fit together. And then I read Zoltan Dienes's book, Understanding Psychology as a Science, where he goes through in a lot of detail why this is the case, and a few other papers as well. 

There's a paper by Nickerson from 2000, um, which basically explained: well, it's kind of two different ways of using p-values, from different statisticians. But the way they're taught, they kind of muddle it together a bit, because that's just the way that everyone uses it. And that really helped me. When I had understood that, I was like, ah, okay. 

So the whole process is muddled; that makes more sense now, like, no one understands what they're doing. Um, that kind of made me feel a lot better, I think. And, you know, once I [01:12:00] had a consistent framework for doing it, that made me able to interpret them and use them a lot better. 

So, uh, yeah.  
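The distinction Tom is pointing at, Fisher's p-value as a graded measure of evidence versus the Neyman-Pearson fixed-alpha accept/reject decision, can be sketched in a few lines. The numbers below are made up purely for illustration:

```python
from statistics import NormalDist

def z_test_p(sample_mean, mu0, sigma, n):
    """Two-sided p-value for a z-test with known population standard deviation."""
    z = (sample_mean - mu0) / (sigma / n ** 0.5)
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = z_test_p(sample_mean=103.0, mu0=100.0, sigma=15.0, n=100)

# Neyman-Pearson reading: a binary decision at a fixed alpha, no grading of evidence
alpha = 0.05
decision = "reject H0" if p < alpha else "fail to reject H0"

# Fisher reading: the exact p is reported as a graded strength of evidence,
# which is where the informal "p < .01 is 'more significant'" habit comes from
print(f"p = {p:.4f} -> {decision}")  # prints: p = 0.0455 -> reject H0
```

The same number feeds both readings, which is exactly why the two conventions get muddled together in practice.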

Benjamin James Kuper-Smith: Yeah. Spoken like a true psychology student from the UK. I guess I'm a bit younger than you, but I know exactly what you mean. Um, yeah, but do you think... I dunno, I haven't been in the UK for a while now... um, it seems to me that that's changing quite a lot. Is that the case? 

Tom Hostler: I think so. I mean, I teach research methods, so I try and teach my students now, like, the correct way of... 

Benjamin James Kuper-Smith: Have you gone full Bayesian, or just... 

Tom Hostler: Not quite, no. But I think just getting people to understand the contradictions that they see other people using as well. You know, because I think especially when you're a student, although you're taught to critically evaluate methods in the sense of, oh, they didn't have a control condition, or the strengths and weaknesses of, like, a between-[01:13:00]subjects design, um... when it comes to stats, you've kind of been like, oh, well, we're all using the same stats, and this is the way that you use them. 

And actually, sort of teaching people that, well, no, there are different ways of using them, and some are definitely better than others, is an important part of that. So, yeah, I wish I'd had that, or just been like, this really doesn't make sense to me, please can someone explain it to me, a lot earlier. 

Benjamin James Kuper-Smith: Yeah. Um, final question, which is almost so broad it's almost not a question. I dunno. Basically, my salary ends in two weeks; I'm a late-stage PhD, at some point soon a postdoc, I dunno. Any advice for someone like me, in terms of what science to do, what to focus on? You can interpret this in whatever way you want to. 

Tom Hostler: Uh, yeah, I mean, I dunno, I guess I would probably just say [01:14:00] something like: don't. Don't. That's it. Yeah. 

Uh, yeah, don't... I think, don't try and pigeonhole yourself too much. Because what my undergrad dissertation was on was completely different to what my PhD was on, which was completely different to what my initial research stuff was on, which is completely different to what I do now. 

And maybe that's terrible advice and will leave you with a kind of broken career and no coherent thing. But I think doing lots of different things has also given me a better perspective, to be able to interpret academia and the way it works. Uh, which I think is more useful in some ways, 'cause then you can talk to people more and understand stuff from their perspectives a bit as well. 

So yeah, whether that involves doing research itself [01:15:00] or just reading other people's stuff and talking to people who you wouldn't normally... like, yeah, you think, oh, well, I do EEG, I do fMRI, this is what I know, and I sort of focus only on that, in some ways. 

That's very good, but it also perhaps gives you a too narrow perspective to understand the broader stuff, maybe. But then you do this podcast where you talk to lots of people anyway, so I think you're doing that already. 

Benjamin James Kuper-Smith: Yeah, I mean, it's obviously useful to me, but also, you know, hopefully at least one person's listening. 

Um, yeah. I mean, no, I wanted to ask: at what stage do you think that makes sense? Because, for example, I feel like I did that, not to a very extreme degree, but within psychology to a quite extreme degree. I did basically what you suggested until I started my PhD, you know, just doing lots of projects on the side and helping out collecting data here and whatever. 

And then I had a [01:16:00] master's which happened to have two different research projects, and I spent some time between my bachelor's and master's doing two or three different things. And so I've done, like, ten different things or something within psychology. 

Right. And I think that's to some extent quite good. But, I mean, maybe I also overdid it. To me it also now feels like, now that I'm in my PhD, I still had a bit of that approach, where I did a few different projects. And, I mean, I guess it's good to have done it for my own projects, but 

if it wasn't for the learning experience, I wish I'd not done it, let's put it that way. Yeah. And I feel like now I really want to narrow down on something, um, yeah, to actually feel like maybe, for once, I understand something, rather than just jumping about.  

Tom Hostler: Yeah. I mean, it can go both ways, I think. But, you know, you don't want to... [01:17:00] the broader climate you're working in can change as well. 

Do you know what I mean? Like, different techniques, different areas can fall out of fashion quite quickly, in terms of funding and stuff. What gets funded can change quite quickly, and what skills people are looking for can change quite quickly, you know, even in terms of open research, right? 

So I think, yeah, even if you specialize, you shouldn't assume that the way a particular field or approach is, is always gonna stay like that. Um, and that's the benefit: even if you're not actively pursuing other projects, at least trying to keep a little finger in different pies here and there, or, you know, understanding what's going on in other bits, can help you predict which way the [01:18:00] winds are gonna blow a bit more, and how your stuff might fit in with, you know, might be able to combine with these other ideas as well. So,  

Benjamin James Kuper-Smith: yeah, it's true. It's interesting. I guess when I say I wanna specialize in something, I mean that very much from the perspective of right now: I want to understand that approach a lot better, rather than this being the one thing, like the one method, I want to use for the rest of my life. 

Right. It's, yeah, it was actually more within what you said, almost in the sense of: this is one approach. I mean, in my case it's kind of more like neuroeconomics, right? And there's obviously problems with it, but I kind of wanna understand it. Yeah. Yeah. And then I can do something else and have very different perspectives on the same problem, for example. 


Tom Hostler: Okay. Yeah, that's... well, that's good advice. Okay. Well, thanks.
