BJKS Podcast

117. Kai Ruggeri: Global collaborations, Prospect Theory, and temporal discounting

Kai Ruggeri is a professor of health policy and management at Columbia University. We talk about his global collaborations, which have studied various important aspects of decision-making, including Prospect Theory and temporal discounting.

BJKS Podcast is a podcast about neuroscience and psychology, hosted by Benjamin James Kuper-Smith.

Support the show: https://geni.us/bjks-patreon

Timestamps
0:00:00: Why Kai studied stats anxiety in his PhD, and then moved to broader policy questions

0:09:15: Replicating the original Prospect Theory paper across the world

0:30:01: Adversarial collaborations and choosing which findings are worth being replicated

0:38:31: How to run global collaborations

0:56:25: Overlooked aspects of these global collaborations

1:03:59: Should we collect data from non-Western countries without local collaborators?

1:10:24: A book or paper more people should read

1:16:38: Something Kai wishes he'd learnt sooner

1:27:50: Advice for postdocs

References, links & notes

Junior Researcher Programme: https://jrp.pscholars.org/

Today, Israel uses the Shekel, but when Kahneman & Tversky did research there, they used the Israeli pound: https://en.wikipedia.org/wiki/Israeli_pound

Prolific: https://www.prolific.com/

Besample: https://besample.app/

Kahneman's final decision: https://www.wsj.com/arts-culture/books/daniel-kahneman-assisted-suicide-9fb16124

Gal & Rucker (2018). The loss of loss aversion: Will it loom larger than its gain? Journal of Consumer Psychology.

Kahneman & Tversky (1979). Prospect theory: An analysis of decision under risk. Econometrica.

Lewis (2016). The Undoing Project.

Macher, ... & Ruggeri (2012). Statistics anxiety, trait anxiety, learning behavior, and academic performance. European Journal of Psychology of Education.

Macher, ... Ruggeri, ... (2013). Statistics anxiety, state anxiety during an examination, and academic achievement. British Journal of Educational Psychology.

Mellers, Hertwig & Kahneman (2001). Do frequency representations eliminate conjunction effects? An exercise in adversarial collaboration. Psychological Science.

Parks, Joireman & Van Lange (2013). Cooperation, trust, and antagonism: How public goods are promoted. Psychological Science in the Public Interest.

Ruggeri, ... & Folke (2020). Replicating patterns of prospect theory for decision under risk. Nature Human Behaviour.

Ruggeri, ... & Folke (2021). The general fault in our fault lines. Nature Human Behaviour.

Ruggeri, ... & Toscano (2022). The globalizability of temporal discounting. Nature Human Behaviour.

Ruggeri (Ed.) (2018). Behavioral Insights for Public Policy: Concepts and Cases.

Thaler (2015). Misbehaving.

[This is an automated transcript that contains many errors]

Benjamin James Kuper-Smith: [00:00:00] So, yeah, I guess we'll be talking mainly about your large-scale international collaborations, and specifically the two examples that are most interesting to me, which are the prospect theory and the temporal discounting papers. Briefly before that: I always look at people's CVs and that kind of stuff, and it seemed like initially you were mainly interested in why people hate stats.

You have several papers on stats anxiety. So I was curious: it seemed to me there are two obvious reasons why you would study that. One is that you love stats and don't understand why other people don't love it, so you wanna understand it; the other is that you hate stats and want to express that through research.

So, yeah, I'm curious: why study that? And then also, how did you end up doing this more recent research?

Kai Ruggeri: Absolutely. It's a great question, and it's actually a great observation, because I raise this all the time; when I mention it to people, I actually frame it in the same way you did. [00:01:00] The first interesting thing is that I didn't actually apply for the PhD that I did. I didn't apply for any PhD.

What happened was I got interested in going to graduate school and had looked into going overseas. I thought I would try something different, try a bit of a change. I had started out planning to be an American football coach, had played American football in university and all this. And then one day I just decided that really wasn't the trajectory I wanted.

I thought I'd try something else. So I ended up looking into graduate school options overseas and had really just applied for a master's program in general psychology, nothing specific. The department I had applied to essentially put my name into a list of candidates for a PhD program, because they didn't even offer a master's. So they just put my name into a pile, and I got a phone call.

If you can imagine, back when a regular landline phone call came in at about four o'clock in the morning. It was on the 4th of July, interestingly, at about four o'clock in the morning, which was when the panel [00:02:00] had met in Belfast. And the first question they asked me in this interview was why I wanted to do a PhD. And I pretty honestly was like, ah, it's not really something that interests me. As you can imagine, the interview was a little bit messy for a while; I wasn't necessarily thinking academic trajectory or anything. What ended up happening was, at one point they asked me if I had any questions or thoughts based on the discussion.

And the question I asked was simply: what do PhD students do? Do they get a chance to teach when they're PhD students? 'Cause in the States it's very common that PhD students get to teach. The reason I was curious about this was because, as an undergrad, I'd been asked by my stats professor to help with essentially an overflow of undergraduate statistics students. I hadn't been a top student, but I had been a really engaged student. I got along really well with that professor and found him very motivating. So I mentioned that as an undergrad I was asked to help [00:03:00] teach a statistics class, and I loved it. I really enjoyed it. I found it was really cool, something that was really engaging for me to work on with fellow students, because they were people I knew. And in that interview, you could hear all the jaws dropping and hitting the table in that moment, because their experience had been: psychology students hate statistics, full stop. And they were very excited that someone not only enjoyed it, but enjoyed the idea of teaching it. Because apparently in that department at the time, that was a class they gave to basically punish somebody, some new professor or somebody who basically didn't want it.

Teaching the stats classes was the thing they kind of required of them. And so I had said, yeah, I really enjoyed it and all that, and they kind of floated this topic. I'm cutting out some of the random details in the middle; it didn't all happen in one exact moment.

But essentially they eventually floated me this idea of: would you be interested in doing a PhD in essentially statistics anxiety, or why psychology is such a [00:04:00] place of such dislike towards statistics? I ended up thinking that's a great idea. I was shocked that anybody didn't like statistics.

I had really enjoyed it, felt like I learned a lot. It really clicked with me, and I could see all the applications for it. So it was really more the interest in why people weren't learning, not purely educational, but just: what was this apprehension? It almost seemed anathema to a lot of people when I started working on that.

And I spent three years basically studying why. Now, we looked at psychology undergraduates, but it was broader than that. It was back in the day when you could get away with doing a PhD that only tested psychology undergrads, but in my case it made sense, because psychology students should be the most inclined to care about this subject. And what happened was I started realizing there are much broader implications to this. Because some of the things we found, yes, there are issues with teaching and issues with people not quite grasping it and so on, or their previous experience with math wasn't positive or something, but there were some bigger ones that related to [00:05:00] people just not understanding what statistics really is. I started realizing this has much larger implications. So there's a much broader issue with people not understanding and not wanting to understand risk, uncertainty, probability, significance, variability, or even just the standards of distribution, so data dispersion. This basic kind of information tends to be something a lot of people don't wanna know. This is 20 years ago now; I don't think we have quite the same issue, because with things like AI and the idea of analytics, and the fact that so many people have so many different reasons to think about or care about data, it's probably changed a little bit, and people now see more of the value. But 20 years ago, even people in social sciences and various academic fields saw statistics as a burden. Because a lot of it was taught through programs like SPSS, where you were basically taught about pointing and clicking things. You weren't really engaging and learning. [00:06:00]

Benjamin James Kuper-Smith: Yeah, I mean, your description is basically exactly what I also felt when I did my psychology undergraduate, where there was a sense of: there's these things I wanna learn and things I wanna find out and things I wanna do, and then you have to jump through these hoops to do the stats.

You know? It seemed kind of vaguely unnecessary. I didn't find it super complicated, it was just annoying, something you had to get done. But for me it was also actually doing my own research: at some point you reach that point where you go, ah, now I get it.

I'm starting to understand the point of it. But you basically described very precisely what I felt, in the sense of this vague feeling of "why do we even need this?", without really asking it, in my case. Just kind of background annoyance.

Kai Ruggeri: Absolutely. And I think that was the prevailing view in a lot of disciplines, and I think it even spoke to the general [00:07:00] public outside of academia. I think it was a much broader issue. So essentially I worked on that for several years and even collaborated with people in different countries who had similar kinds of challenges with their own students or the general public. And I started seeing the link: this challenge isn't just something that relates to how it's taught in the undergraduate psychology curriculum. It's a much broader social issue, because if such a large part of society doesn't grasp a basic understanding of things, essentially everything can be missed or misunderstood: simple understanding, simple probability, simple risks and uncertainties. And that applies to so much, right? We love to reflect on, you know, Wall Street or stuff, but a lot of basic uncertainty and risk applies to everything we do every day, whether or not we're even thinking about it. So the idea that we should be better at understanding even the fundamentals, [00:08:00] it's a pretty easy argument to make, and it's an easy leap to go from why it's a problem in classrooms to much bigger challenges. So it really wasn't a hard link over. I spent several years working on a lot of different topics and kind of dabbling in various different spaces, but that wasn't so much because I lost interest on the statistics anxiety side. It was because I really wanted to understand broader concepts of public policy challenges and broader questions we could ask as they related to understanding risk, uncertainty, general statistics, and the application of it. So it wasn't so much that I pivoted greatly from my PhD; it's that I saw that that was a very contained way to understand something that was a much wider challenge that we face. Students understanding or not understanding statistics in a psychology degree, that's one thing, and it's not nothing, and it's certainly important to our discipline, but it was a much larger topic than that.

So it really was only that I [00:09:00] expanded it outward. The frame looks different, but essentially you're still talking about the same kinds of things: how do people understand risk, uncertainty, probability, and so on, and how does that apply to any number of different challenges and decisions we face?

Benjamin James Kuper-Smith: So of the two articles that we wanna discuss today, I know the prospect theory one was the first. How do you get from these initially probably fairly small lab studies you did, or the international collaborations you mentioned, to these kind of big studies with, I don't know, 50 labs in 50 countries or whatever it is?

Kai Ruggeri: Yeah, so there's kind of a two-level answer to that. The first level is my research interests really started evolving and expanding again, beyond looking within our discipline, towards these broader big social challenges. I was really interested in the public policy side. But around that same time, in 2011, I launched, along with some others, [00:10:00] this international program for early-career researchers. At the time it was more student-focused, and essentially it was an opportunity for early-career researchers and students who had come from places where they didn't get many opportunities to engage in active research projects while they were students, whether undergrad, master's, or in some cases PhD. We set up this program to build small international teams to run their own studies, and we would give them the support to run them. It was called the Junior Researcher Programme. And we started developing that, and it used to be small teams of around six to seven people. So they would run these kinds of studies, led by a PhD student or postdoc, in six or seven countries. And it developed. We also started adding in what we called at the time an internship, which was hosted by my former institution in Cambridge. And at the end we would bring in everybody who wanted to, and we would work on one of these international collaborations. In the early years, we were really feeling it out.

We were trying different things. We were running studies. I wouldn't have called them quite what we're doing now with [00:11:00] what we've learned over time, but they were still kind of large-collaboration, multi-country studies. We just weren't really asking fundamental questions. We weren't retesting major theories or trying to expand these kinds of major questions.

They were really contained things. That was early career, and we were mostly with students who didn't have a whole lot of experience with it, but we looked at a lot of different things, like healthcare decisions: how do people make decisions about their health and understand risk in those contexts? We looked at policy challenges, and we really tried to apply it to a different theme each year. Then we started realizing that there was broader value to this. So as more and more organizations started understanding the role of behavioral science in applied policy, in institutional settings, in real-world contexts, whatever they might be, we started realizing there would be some use in translating a lot of that evidence for general audiences. So rather than running individual and original studies, we started writing, as these big collaborations, [00:12:00] policy reports that tried to translate findings. That led to being invited to put together an entire textbook on the role of behavioral science in public policy. So we put an entire textbook together on it. The first year we did that was 2017; obviously it takes a while to put a book together, but it came out, I believe, in 2019. What happened was, while we were working on that, this was really, I wouldn't say the peak of the replication crisis, but the beginning of the wave where people started looking for solutions, not just identifying issues that might exist. So there was more interest in trying to say, well, what are the methods necessary to really reliably test the questions that we have in psychological science, particularly as it relates to behavior and decision making. As those questions really started pushing along, there was, let's say, pushback on almost every major psychological theory you might know of, or those that we pretty much all learned in our field [00:13:00] at undergrad, right? And one of the ones that's really important, if you care about things like risk and uncertainty, and if you care about understanding probability and its applications in public policy, one of the most fundamental ones is loss aversion. Around the time that the book was coming out, I believe it was 2017 or 2018, somewhere in there, work came out basically saying loss aversion was a fallacy. It's the biggest myth in the whole field, and it wasn't a real thing. And this is quite a dramatic claim to make, as you can understand. So claiming that one of the most fundamental and largely agreed-on constructs isn't a thing piqued our interest, 'cause we had just written up many examples that essentially rely on assuming that loss aversion is a real thing. So we looked through it. We weren't super impressed by the argument saying it wasn't real.

It seemed more just like disagreement as to what explained what. But I don't wanna go into a whole long thing on it. We weren't super impressed by the argument, [00:14:00] but we did think, you know what, that is a really important question though. If we're working on the assumption that loss aversion is real, and we talk about all these cases that essentially rely on that, we probably should have some confidence that what we are saying is based on what we know, not just something that's generally accepted. So we looked at different options for doing that. Now, we've been asked, and you alluded to it, why we didn't just do something new, some more 21st-century test of loss aversion, rather than go back and run the 1979 methods and retest that. Well, the reality is we did think about that, and it was discussed and considered, but there was a pretty strong prevailing view that we didn't actually think there was any evidence saying loss aversion wasn't real.

We actually thought that plenty of the modern methods showed pretty well that loss aversion is a very real thing. It's not universal, it doesn't apply to everyone in all contexts, but there are plenty of real-world examples [00:15:00] that essentially validate that this is a useful frame. I'll do a quick aside on this and point out, and I'll bring this up repeatedly:

One of the biggest flaws I think we have right now is that debate around the field seems to be whether something universally applies or it's wrong. And I think this is a really problematic way to go about behavioral science generally, or any sort of psychological field, even economics and many social disciplines.

The idea of having a frame for a theory doesn't mean it has to universally apply. It means it has to be a useful standard so that we can test deviations from it. So one of the things that we looked at was: is there anything that suggests loss aversion isn't real from all the more modern ways of testing it? And we just weren't convinced. So we saw this complaint saying it wasn't real, but our view at the time, certainly, was that they hadn't made a compelling argument. And we also thought there are plenty of modern ways this has been tested in real-world settings, and just deeper [00:16:00] laboratory tests, that have shown it's a very real frame.

Again, not universal, it doesn't always apply to every person in every context, but it's a very good descriptive baseline against which you can test deviations, right? It's the same as with anything economic: supply and demand is not absolute and universal. There are times when higher prices can drive demand, right? And we see this all the time, daily. So it's not the idea that something has to be universally applied to be true. The reason we ended up going back to the original is because we thought, well, there's actually plenty of validation of this, and what would be better is to say: what if we went back to the original descriptive model?

Because remember, and we'll talk about what prospect theory really is, I guess, but one of the critical things to remember is that it's meant to be a descriptive model. It is not meant to be a universal predictor and explainer of all human behavior and decision making related to uncertainty. And I think this is what was critical to us: why don't we just go back to that and ask [00:17:00] whether you would come to the same conclusion if we tested this again, tweaking some things that, now that we know what we know, could add some robustness to the study. If we did that, we would be able to say what the strengths are and which things don't seem to apply again, or don't apply broadly. So that's why we went back to the original: because we didn't actually see this dramatic need for something else, because there were plenty of something elses. The question was really going back to: would the theory still look the same if we tested it again now? And so that was why we went back to the 1979 study when we decided that was the way to go.
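[To make the "descriptive model vs. expected utility" contrast concrete, here is a minimal sketch in Python. It uses the textbook value and probability-weighting functions with Tversky & Kahneman's 1992 parameter estimates, applied in the simple separable 1979 style, on Problem 1 from the 1979 paper; the parameters and code are illustrative only, not the analysis in Ruggeri et al. (2020).]

# Prospect theory vs. expected value on a classic 1979-style problem.
# Parameters are the Tversky & Kahneman (1992) estimates, used only for
# illustration; weights are applied separably, as in the 1979 formulation.

def value(x, alpha=0.88, lam=2.25):
    """Value function: concave for gains, steeper for losses (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def expected_value(gamble):  # gamble: list of (outcome, probability) pairs
    return sum(p * x for x, p in gamble)

def prospect_value(gamble):
    return sum(weight(p) * value(x) for x, p in gamble)

# Problem 1 (certainty effect): A = (2500, .33; 2400, .66; 0, .01) vs. B = 2400 for sure.
A = [(2500, 0.33), (2400, 0.66), (0, 0.01)]
B = [(2400, 1.0)]
print(expected_value(A) > expected_value(B))   # True: linear expectation favors A
print(prospect_value(A) > prospect_value(B))   # False: prospect theory favors B,
                                               # matching the 1979 majority choice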

Benjamin James Kuper-Smith: I mean, I guess my question to some extent, and I think I wrote this down in my notes somewhere, is something like: is this a test of prospect theory, or more a test of how much we can trust Kahneman and Tversky in 1979? Right? Because, I dunno, I'm not as critical of loss aversion, and we don't need to turn this into a debate [00:18:00] about that.

I think I'm more critical than you are, but not as critical as some of the other people. And I guess my point is more that the original prospect theory paper had, and maybe you can get into a bit more detail later, but the questions that you then tried to replicate were very simple binary questions.

Right? And then you look at proportions of people who say yes or no. And it just seems to me so antithetical to the way most modern science, to some extent, is done. I'm jumping around a bit now, but one question maybe that I have right now is: was one reason you did this just because it's very easy and quick to do in lots of places, rather than setting up a one-and-a-half-hour laboratory experiment?
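[For context on the binary-choice format Ben describes: each 1979-style problem yields counts of how many respondents chose each option, and, as Kai notes below, the original analyses are presumed to have been chi-squared tests. A minimal sketch with invented counts, not the actual statistics from either paper:]

# Chi-squared test of choice proportions against a 50/50 split.
# The counts are hypothetical, purely to show the shape of the analysis.
from scipy.stats import chisquare

chose_sure, chose_gamble = 78, 17                 # invented: 95 respondents
result = chisquare([chose_sure, chose_gamble])    # expected counts: equal by default
print(round(result.statistic, 1), result.pvalue)  # ~39.2, p << .001: a clear majority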

Kai Ruggeri: I mean, I think jokingly we may have said that, but I wouldn't have called it easy, because as you may recall, we did this a lot differently, and in a lot of different languages. So the difference isn't so much that it would be easy, [00:19:00] but to do something like that with a direct observation of behavior while also being able to control for the many confounds that would exist beyond just translating the language but changing the different contexts. One of the things that we actually thought was more useful was to say: if you go back and read the 1979 paper, there's a lot of things that probably wouldn't get through peer review now. Not the work itself, but the ambiguities of what was explained. Like, if you go back and read it, see if you can figure out how many participants actually completed the whole study, or what statistical tests were run. It's kind of presumed they were all chi-squared tests, because of the nature of the data they were working with, but it's not in there. The other big thing, and this shocks people: do you know what term is not in that paper? Loss aversion. Loss aversion is not in that original paper. Now, that's not that dramatic, except we are academics.

Our job is often to be pedantic and critical about semantics. And one of those things is that the term loss aversion doesn't [00:20:00] appear in the 1979 paper. And we knew going into this that that was going to be a concern: are we really challenging loss aversion, is loss aversion being backed up by this, or are we doing something else?

So we put that very explicitly in the article when we wrote it up. But the critical thing, and the reason we decided on this, was that Kahneman and Tversky's big change was providing these six frames in the prospect theory framework, right? They had this descriptive model, and it gives six frames for understanding things that don't look like what we used to expect.

And what that means is that for a long time, the prediction was expected utility theory, right? We have this base understanding of the value of different choices and the likelihood of a good outcome, and we pick the one where those things line up the best, right? I'm giving a very simplified explanation because I'm gonna guess most of your audience is familiar with it.

And I don't want angry hate mail if I give the version of it that somebody doesn't like, so I'm gonna give the most generic possible one. In any case, they provide these six different frames that [00:21:00] relate to things like certainty, inverting losses and gains, magnitude differences, reframing the same things, kind of switching up the way information was presented, right? We just retested all of that. We used all the frames that we could use without getting too complicated, but we tried to do it in a way that would pass 2019 peer review standards, not 1979 peer review standards. Now, I'll always say this is one thing I do not like: revisiting standards from 50 years before, so long as we're not talking about ethical differences, right?

Ethics is different. If we wanna talk about ethics and Milgram, that's fine. But if we're talking about what Kahneman and Tversky did, I don't find any value in criticizing them: oh, they didn't know to produce this thing, or they didn't have this chart. I mean, back in those days, you had to send your data off to someone to run on a computer.

Right? They had to plug a ticket into a supercomputer or whatever it was to manipulate data and then have a result. So I don't think there's any value in [00:22:00] criticizing that. All that is fair to say, though, is that if you go back to the 1979 paper, there are some gaps that today you wouldn't be allowed to leave.

Okay. And it's simple things like: where were their participants? Right? Were they in Israel, Sweden, or the States? Because the paper kind of alludes to all three. What value of money was presented? We know in some cases it was Israeli shillings, right, at the time, or Israeli pounds, I can't

Benjamin James Kuper-Smith: Is it shekel or...?

Kai Ruggeri: Shekel. Is it shekel or shilling? Anyway, whatever the currency was in late-seventies Israel. And it's talked about in these ways, and we know they were in these different countries 'cause it's alluded to in the paper, but there are specifics that are missing. What is the significance level?

How many people actually participated in each of the experiments? Right? So we thought these are easy things that we can correct for, and we can resolve this, but also we can get a much larger sample, and we can get a much more varied sample in different places with different economic systems.

There are critically different certainty and safety-net [00:23:00] systems, because your willingness to take on risk might have a lot to do with the environment that you're in. You know, can you survive risk, right? And just lots of different contexts, so that way we could have a more varied sample around it and say: would we have come to the same conclusion that they came to then? Okay. And that's why we decided on it. 'Cause we felt there was enough under the modern testing methods to talk about broader loss aversion, but prospect theory, going back to that original, we felt we could give it this kind of modern update and use this great network that we have.

So that's why we did that study. We also had this great network of early-career researchers, with researchers from 19 different countries. I think we started with a few more and kind of lost a few along the way, but we ended up with 19 different countries, and we were able to test 17 of the questions out of their original 20.

I believe there were a few that we dropped just for translatability [00:24:00] issues and some other things that just didn't make much sense, and different kinds of systems might've confounded the results anyway. So we stuck with the ones that we could definitely do in different countries and where we could produce uniform, comparative financial values.

We wanted to make sure that we weren't just converting the financial values, but could make them directly the same relative values. So that was worked on, and we updated them to match 2019 economic standards. The reason we did all that was getting this great variability, having this robust method.

And then the real question is, would we have come to the same conclusion? Now, if you focus only on the replication aspects of it, you come out and you say, well, we probably would've come to the same conclusion, but it's probably not for the reasons most people think. And this is actually one of the downsides of doing a paper that has the term replication in it, especially because lots of people wanted it to be true.

I'll be honest with you, we had a lot of people, when they found out we were doing the study, who wanted it to be true. You [00:25:00] know who was the most supportive, without any concern whether or not it was right? Danny Kahneman. He was wonderful, and I wanna be very clear and upfront about that: he shared very kind words of encouragement without any sort of biasing or trying to push to get anything done in any way. He said, I think this is a wonderful idea. And I just think he deserves that credit, because he had no prior reason to trust, you know, somebody running this test with all these international people. And certainly, let's be honest, a lot of the rhetoric now seems to be more people wanting to cut down past ideas.

So I wanna say he was actually extremely supportive of it, as was Hugo Sonnenschein, who was the editor of Econometrica in 1979. He wrote a very nice email or two trying to help us see if we could find some records that had gone missing over the years. So they were great. But the big reason I think we would have come to the same conclusion isn't just the direct replication values. If you look at our chart and how it compared to the 1979 [00:26:00] values, other than the attenuation, so the kind of smaller effects in some cases, which you would expect since we had a much larger sample, that's really the only place we didn't find pretty solid replication. But there's a bigger one, and it's the figure a lot of people didn't look at, 'cause they just wanted to jump to saying, oh, it replicated, so we're good. If you look at the replication tables, those are fine. But there was a really interesting one that I think a lot of people missed out on, and it was figure five in the article. Figure five shows very directly how much of the result we found can be explained by prospect theory but not by expected utility theory. Right? And then we went one further, where we looked at what neither of them explained. That's figure five, and it shows very clearly the reason we would've come to the same conclusion in 1979 as we did in 2019: because, again, this descriptive model gave us way more than expected utility theory did.

[00:27:00] It was not perfectly and universally explanatory. It just gave us a ton more information and a much better way to understand and describe decision making than what was prevailing at the time. And I think that is the important thing. There's been this great new work coming out; there's a lot of new things talking about agency in decision making, and also this complexity work. I think these things are great debates that we should be having. I think they're wonderful. What I don't care that much about, though, is anybody saying something wasn't perfectly explanatory, 'cause no model is ever gonna be perfectly explanatory. Kahneman and Sibony talked about noise, right? Some things just end up where they end up. This was one of the exciting things about the work that we did. The reason we would come to the same conclusion isn't just because most of the effects were essentially as they were in 1979; it's because they, again, explained a ton more than the prevailing theory at the time. We're not even gonna say expected utility theory was wrong.

We are gonna say that, as a baseline model, it [00:28:00] doesn't explain as much deviation as prospect theory does. Right? With prospect theory, you can get so much more, and you still have deviation from the model. That is one of the main reasons we wanted to run that study, and I was very proud that we got to that point of it and could present it. I'll be openly honest with you: I didn't anticipate a failed replication. I didn't pick it because I thought it was gonna be that way. I picked it because we had just done a whole book on the assumption that loss aversion was a real thing. And loss aversion being at its core based on prospect theory, well, we'd better go back and validate whether or not that's even true.

If it had been wrong, we would've said it. In fact, as you've probably seen, sometimes there's more incentive in finding an existing theory wrong than in validating that it's right. But that's essentially why we went with it. And there was a great deal of interest in that work, and we were very thankful for the kind of positive reaction we got to the method and the approach that we took. Obviously there were the complaints of: why did we just test it again? But for me, a lot of that was shifting the goalposts. Our job was: would we have come to the [00:29:00] same conclusion in 1979? And our answer was yes. Some attenuation of effects, some minor variations. The difference in replication between countries was very interesting.

I don't wanna speculate on it, but I think it's very interesting that in some countries, I believe Chile had the lowest replication rate overall. Bulgaria and Chile were the lowest. Interesting. I'm not gonna speculate on what that might mean, but it's a very interesting thing nonetheless. If you look at it, there's a distribution of effects.

So somebody has to be the lowest, okay, so I'm not gonna speculate if you find a decent distribution. So that's basically the backstory of how we did prospect theory and why we chose to do it the way we did. The group was wonderful, almost entirely early-career researchers from the Junior Researcher Programme. It was a hot summer that year in Cambridge, which is not a normal thing you say about the UK, at least it used to not be. So we had a lot of effort in some very hot rooms that weren't set up for hot summers, but a great experience [00:30:00] overall.

Benjamin James Kuper-Smith: Yeah, I love that Kahneman was supportive. It seems to be very much in agreement with everything I've read about him, that he was very supportive of that kind of thing. I think he and some other people coined this term, what's it called, adversarial collaboration.

That kind of thing, where basically someone disagrees with you, you create an experiment together that tests which one of you is correct, and you both have to agree with the design, basically.

Kai Ruggeri: If you don't mind, I'll take a lead from that and just give a very minor little gripe about our field. Okay, minor gripe. We actually sought adversarial collaboration, and what we found was there

Benjamin James Kuper-Smith: for this project.

Kai Ruggeri: For that project and since, we actually tried to find some people who disagreed, and what we found was there weren't that many people who were adversarial but collegial. We found that they didn't want to engage in the scientific debate; they wanted [00:31:00] to criticize. We actually sought it for a few of our studies where we wanted to find people who disagreed, and honestly, a lot of the blowback or the feedback we got from those people was more just anger at the work. We wanted people who disagreed with it; we thought, let's put it through. For me, I would've been very shocked if it hadn't replicated, but I wasn't gonna hide it if it didn't. Right? But we actually sought it, and the reactions we got were mostly bemused, I guess. There was no interest in actually having an adversarial collaboration; there was interest in criticizing. And I think that's a problem for the field, because I think adversarial collaboration is ideal. You know, the idea of working with people you have a disagreement with, so you can say: well, then we should come up with a way that answers it for both of us, that we agree this way will answer our question. And we didn't really get a lot of that. We certainly had some people who shared some very useful critique, but we didn't get as much of that as we would've liked, to be honest with you. We would've loved some people in the study who were collegially [00:32:00] adversarial, not just cynical. Right? And I think cynicism isn't particularly helpful.

Benjamin James Kuper-Smith: Yeah, there's a difference between being critical and adversarial and just being a dick. I don't know whether that was the case exactly in your case, but...

Kai Ruggeri: there was,

Benjamin James Kuper-Smith: I can roughly imagine it. Yeah.

Kai Ruggeri: you know, Kahneman had no reason to be as supportive to me as he was. I wasn't one of his students. I have no link to Princeton. I was overseas at the time. I hadn't lived or been a professional in the States in a decade. So he had no real reason to have any kind of faith in my interest.

And, you know, certainly plenty of people were being much more critical about it, and yet he was the most supportive one. And in my view, that's collegial, adversarial collaboration. Right? He didn't have any role in what we did. The big large-scale study we did the year after was a bit of a pivot, but in that case, we actually worked with the original authors, and they had to basically put themselves out [00:33:00] on a limb to work with us, knowing that they didn't get the final say.

And we were testing the replication of their whole thing on, well, meta-perceptions of polarization. It's a different topic from decision making, but one we felt was important at the time. But Jeff Lees and Mina Cikara were wonderful. And again, they had no reason to, and they had no say over it.

That was an excellent one. I wouldn't have called it adversarial as much as just collaborative, but we were replicating their method in a bunch of countries and they didn't get a final say over any of it. In those things you can tell very quickly who's really committed to the idea of knowledge and who just enjoys being cynical.

Benjamin James Kuper-Smith: Do you find that that by itself is a good predictor of replicability? I mean, this is obviously a bit speculative, but...

Kai Ruggeri: The sample size is too small. But I'd say it more predicts how people are gonna comment when we post the final results. You know, up to now, the only things that we've really focused on replicating are the things we felt warranted it, because we wanna do broader [00:34:00] work than just replicating for the sake of replication.

I don't think there's that much value in replicating for the sake of replication. Maybe there were some times where that was the case, but in most cases, I think: replicate things that are hugely impactful. Make sure that we can have trust in them. Rolling something out and asking more people to participate in a study if it's not having any major influence, I'm less motivated by. So our sample of studies we've done that for is rather small, because there are very few theories that we give that much trust and credit to. That's why we ended up doing temporal discounting as well, because that is a hugely influential framework. We only tested it for the same reason

we essentially tested prospect theory: it matters. It is a core assumption in what we do. It matters that we can be confident that it applies in the way we think it applies. And I think our sample size is too small to say if that's every single one. But certainly it gives us some idea that if you go after the things that you rely on, [00:35:00] and you go into them relying on them based not just on a paper or some pop psychology article or some mainstream thing or somebody on LinkedIn who is selling consulting work from it, if you really rely on these in your work, that probably is a better predictor of replicability. Cynicism, I don't know, high cynicism of popular ideas, it could go...
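[Since temporal discounting is named here as the other framework they took to a global scale, a brief illustration of what the standard framework says, in the same hedged spirit: these are the textbook exponential and hyperbolic discounting forms, with parameter values chosen purely for illustration, not the models or estimates from the 2022 paper.]

# Exponential discounting is time-consistent; hyperbolic discounting
# overweights the near future, so preferences can reverse as delays grow.
# Delays are in years; r and k are arbitrary illustrative values.

def exponential(amount, delay, r=0.40):
    return amount / ((1 + r) ** delay)

def hyperbolic(amount, delay, k=0.50):
    return amount / (1 + k * delay)

# 100 sooner vs. 120 a year later, evaluated now (d=0) and far out (d=10):
for d in (0, 10):
    sooner, later = hyperbolic(100, d), hyperbolic(120, d + 1)
    print(d, sooner > later)  # True at d=0, False at d=10: a preference reversal
# With exponential(), the comparison comes out the same at every delay.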

Benjamin James Kuper-Smith: Yeah. But do you have then, and this is my assumption, that these projects take a lot of time and aren't something you just casually do as a random project... Do you have like a hierarchical list of, you know, next I wanna do that, and then I'm gonna do this project? Or have you done the ones you want to do? How do you decide which one to do next, in that sense?

Kai Ruggeri: Yeah, so originally we tested [00:36:00] prospect theory because that seemed critical to test. At that time, that was a really important claim that we thought mattered. In one sense, we found that our model for running these high-quality, high-sample-size studies was producing pretty quality data.

And we saw a lot of feedback, really critical feedback, from a lot of people saying, do you think you know this works? And you don't wanna ask for and take the time of that many researchers and that many participants if you don't feel really good about your method, right? You need to say: we're not just asking people to fill this in so we can say our sample size was this size.

Okay, right, it's great to have those sample sizes, but no, our hierarchy has always been: is there something specific that seems of significant value to test at such a large scale that you can understand the distribution of this concept on a, quote unquote, global level? Global being... I actually asked one of the former editors at Science how to define global, because you'll see some things where it's like [00:37:00] three countries, and other things that are 50, and other things that are, you know, as many countries as the UN or the WHO can get involved.

And I was asking at what level we can call something global. But in our case it was just: at what point is something important enough to test in enough countries that we can say this concept or this framework does seem to apply broadly, that it's not just something that's Western or American or whatever? And so we've had a few of those, and we've evolved it over time too.

It was never intended to be a bunch of research students and early-career investigators just running my studies for me. That's why we evolved it to the point where I don't lead these anymore; another person has taken over, and even she has devolved it further to other people choosing. But essentially the decision point was not just what should we replicate, 'cause replication wasn't really our goal in and of itself; it's what major questions are out there where there'd be tremendous value in running them at such a scale. And that's where we've [00:38:00] really focused. Prospect theory is one, temporal discounting is one; these were clear ones, right? And we've done some other ones, you know, questions about policies around climate change and who should make those decisions. We've done one on who should regulate the internet, these kinds of things. These aren't necessarily my particular interests; the program kind of evolved from there. But those are obviously really important questions where asking them in a lot of places has specific value. So essentially you're replicating your own work as you do it, right?

Benjamin James Kuper-Smith: About your point about what counts as global: one thing I noticed, if I remember correctly, is that between the prospect theory one and the discounting one, you added quite a lot more of the world. I think the prospect theory one is mainly the Western world, plus China and Chile, I believe, and then the discounting one involves five, six countries in Africa, I believe, more of Southeast Asia and Oceania, and that kind of stuff, right? [00:39:00] And much more South America also. Was that a deliberate choice to really try and push into new countries that you hadn't covered at all? Or was it kind of just a byproduct of you doing this work and more people being interested in being involved?

Kai Ruggeri: That's a great question. I mean, it's a bit of all of the above. The first year was the first time we did it, so we couldn't really go around asking researchers... You know, again, I'm big on the ethical side, for both the investigators and the participants. I don't like wasting people's time.

And I think it's inappropriate to run studies if you're not pretty confident in the way you approach them. So because of how the prospect theory one went, and what we've learned since, we've felt more comfortable inviting people and doing more. We don't quite do open calls; we do a little bit of an open call for researchers. I found some issues when we did a straight open call, because how do you establish ethical standards with that? But one of our real aims was: we want more groups involved. We wanna get broader representation within and between countries, right? [00:40:00] And what we found was, as those papers were pretty well received, at least in terms of methodology, and more people were aware of the work we were doing,

we got contacted by people directly, and then we had a stronger network of people who had worked with us, saw how we operated, saw that we seemed to at least have some clue what we were doing with running these studies. And that seemed to attract a lot more interest in engaging from many places. We certainly have done everything we can to engage collaborators from just about every country.

The current one we're running right now is kind of back closer to my core research interests. It's not specifically that program, but it runs through that broader network of investigators around the world. We have over 21,000 participants in that one, and we have over 150 who have done it in Yemen: we had two researchers in Yemen who really wanted to be part of it, and we've worked with them.

So you talk about researchers who want their countries to be part of these studies. [00:41:00] Right? And if we can work with them and we have the ability to engage with them safely and ethically, then we try to, where it's possible. The bigger challenges tend to be resource limitations, time limitations, and other things as well that can be a barrier. But essentially, anytime we've been able to find collaborators in or from countries, we've tried to work with them, and we've tried to expand that. We've identified specific countries in the past where we were very close but couldn't quite engage with them.

And it's been productive, and it's built a lot of relationships. We've had some where there have been issues that have stopped our ability to collaborate with them. But right now we have one going where I believe we have over 90 countries signed up. It probably won't end up with 90 in the final study, but we're already over, I think, 80 that have collected a significant amount of data. So in these cases, you know, I don't mean to take such a serious tone, but yes, we absolutely try to work with as many as possible. Sometimes it's not [00:42:00] possible, for various reasons, and sometimes collaborators just drop off; people realize they didn't have the time to engage. But we try to engage as many as possible.

And we don't do any sort of questioning, you know, do you agree with this, or do you expect it to replicate? Collaborators are collaborators. We don't really lean into one view or another in terms of whether they should be adversarial or not; there's no real line on that.

We just want to engage with as many people as we can.

Benjamin James Kuper-Smith: Yeah. I mean, that by itself sounds like enough work to begin with. But without getting too technical about the governance of a project like this, I'm just curious: it sounded like it started off relatively small. If I understand correctly, this is basically the continuation of what you started quite a while back, and now it's grown into this. It sounds like there are lots of people involved. You already mentioned that you're maybe not quite as [00:43:00] involved with it as you were in the beginning, if I understood you correctly. But I guess what I'm curious about is how it works, basically: how everyone decides on the next study, how to get new people and whether they're trustworthy, when to start, what exactly the methods are, all that kind of stuff.

I'm just curious how that practically works.

Kai Ruggeri: So the program, the Junior Researcher Programme, and then this eventual sister program that Columbia University supported me to launch when I relocated here, these programs still run. The leadership now is with Dr. Sarah Ashcroft-Jones, who coincidentally was my undergrad student in Cambridge and now works with me as a postdoc.

Many years later, we reconnected academically, and she's a tremendous early-career researcher who had been through the program herself. Through that, she now has kind of a process she works on with the researchers in the program to talk about topics. Some of the PhD students and postdocs can pitch ideas, and they have a [00:44:00] process they run for the ones that I'm directing now.

For the ones that I did in the past, I essentially had a small number of people I went to to talk about ideas for what would be good to run at this scale, what seems robust enough to test, what's worthwhile to test. There was a relatively smaller group, and you could tell how much value people placed in it by how many people signed up to be part of it, right? You kind of get the validation when you suddenly get, as we got during the pandemic in 2021, was it 280 authors we ended up having that year? I don't remember the final number, maybe I'm exaggerating it, but for the discounting paper we had hundreds of collaborators in so many different countries.

I think we ended up with 65 countries or something along those lines. And with that one, the value was seen not just by how many people agreed to it, but by how many people wrote to us when they found out about it, even though we weren't advertising it. So that can kind of [00:45:00] validate that you're picking the right things.

And so the process tends to be relatively small in terms of the number of people who will ultimately be running it. I recommend anybody who's interested in this work go back and look at the names on our papers, particularly the first few authors or the last few, because those people put in a ton.

I really wanna stress, as much as I've directed and been out front and tried to lead in every possible capacity, there are wonderful people who have put in a ton on this. Sarah's one. We also have Ika, a PhD student in Amsterdam right now. In previous years, Sandra Geiger, who's now at Princeton; she was doing her PhD in Vienna before.

She and Bojana Većkalov, who's also in Amsterdam and was in Belgrade before that, have run previous years of these studies with us. Fredrika Stock, who's at the Max Planck in Berlin, working on her PhD there with two academics, led the study last year.

So we, you know, we have really good people [00:46:00] also leading these with us. And I think that's also where, you know, I wanna give all the credit to these wonderful people who've been on there for a while. And then one name you'll see in a lot of our papers is Hans Yake, um, who, uh, works in Brussels but is also a, a PhD student. At Trich in the Netherlands, uh, was previously in, uh, at Vienna as well. He has been a, a major key player and sometimes say, well, what does he do? And it's hard to describe. He's like the producer. Uh, when a movie, what does a producer actually do? I don't know. They make things happen, you know? Um, on top of being a good academic, all of these people make things happen with these studies. Um, and they take tremendous coordination. You have to find time late at night sometimes to match other people's time zones. You have to be aware of what's going on. It's, it's a really, a big challenge, um, to, to get all those things lined up.

Benjamin James Kuper-Smith: Yeah, it does. It is funny. It's, it's just, uh, I mean, I'm always fascinated with these studies because it's, um, I think like my natural way of working is very much like me and one or two other people. Like, you know, I just kind of do what I wanna [00:47:00] do. And sometimes you wanna have collaborators, I mean, usually have collaborators with that.

Uh, but it's like very, very close and very small. And, uh, I always fascinated with this project because obviously like I am increasingly interested in this. Like, you know, I mean, this question I think that everyone is interested in, right? Which is like. Does this also do these results hold if I tested somewhere else, you know, how specific are these results?

How well do they generalize? But yeah, it seems like, I guess the reason that the papers we've discussed have such value is because it's so hard to do. Right? It's so hard to get all of this together. So I'm always fascinated with hearing about how it's done.

But yeah, I mean, there's also no easy way, right? There's no easy solution to this, right? It's just, that's kind of what you have to do, probably, right?

Kai Ruggeri: There are easier ways, you know, there are easier ways if you're content just running a big-scale study. You can take a survey, send it to a bunch of people and say, hey, go help us collect data, and if you wanna translate it, translate it. And I think there are some that are like that, and in [00:48:00] some cases maybe that's enough, if the study is simple enough. But then you tend to get the kind of samples we always got in psychology. Um, one of the things that I want to do is just praise all the collaborators we've ever had, because we put people through such a high level of precision. We are very pedantic when you collaborate with us. We give a lot of spreadsheets with fill this in and follow this.

And we use, you know, Slack channels and email threads. Back in the day we were using Facebook groups to communicate all of these things, right? It was a very different kind of time. But we are very pedantic to make sure, to the extent that we can, things are not just translated, but culturally appropriate, linguistically appropriate.

This is not always easy. There are words that don't exist between languages, right? Um, and, you know, how do you translate money values? Sometimes, you know, we found a way: we just found the average income in certain countries and then we worked from that, rather than translating, oh, here's the US dollar version.

And that really doesn't work very well, we found, and it's been much better: don't [00:49:00] worry about converting it, find the local equivalent and start with that number. And those kinds of things have helped tremendously. But we get really, really pedantic with some of our people. And, you know, one of the great things about technology where it is right now: we used to have to just kind of trust the translation process.
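
[Editor's note: a minimal sketch of the income-anchoring idea Kai describes above, not the studies' actual procedure. The income figures, the country codes, and the rounding rule are made up for illustration; the point is that stimulus amounts are re-anchored to a local average income rather than converted at exchange rates.]

```python
import math

# Hypothetical average monthly incomes in local currency units
# (illustrative numbers only, not taken from the papers).
AVG_MONTHLY_INCOME = {
    "US": 4500,    # US dollars
    "RS": 90000,   # Serbian dinars
}

def round_to_denomination(x):
    """Round to two significant figures so amounts look like natural
    local denominations, e.g. 4321 -> 4300."""
    if x <= 0:
        return 0
    magnitude = 10 ** (math.floor(math.log10(x)) - 1)
    return int(round(x / magnitude) * magnitude)

def localize_amounts(amounts, country, reference="US"):
    """Re-anchor stimulus amounts, designed against a reference country,
    to another country's average income instead of converting at FX rates."""
    ratio = AVG_MONTHLY_INCOME[country] / AVG_MONTHLY_INCOME[reference]
    return [round_to_denomination(a * ratio) for a in amounts]

print(localize_amounts([10, 100, 1000], "RS"))  # -> [200, 2000, 20000]
```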

But one of the great things now is there are tools that can rapidly back-translate things to English, so that we, at least as the central team, can check the way things were translated and how they back-translate. And we actually have flagged issues: after the translation was done, we've gone back to those teams and said, we're really sorry to annoy you, but this is what it says.

Can you explain this difference? Right. You know, or one of the big things: in some countries, if you ask people how much they earn, they default to monthly after taxes. Other places it's annual before taxes. Um, I think Singapore, for example, you know, they have this additional income that they might get from the government. Does that count as income? If that's [00:50:00] something that's roughly equivalent across people, do you count that, because everybody has the same baseline? And what if individuals don't include it? Right? Some do, some don't. So you have to have a really strong understanding of it. So a lot of it is being really pedantic with people, while trying to say, I'm not telling you how your own system works, I'm just trying to make sure that we've got it correctly entered into the method. It's this level of commitment. I mean, it's very easy to criticize whatever random things come out in the end, but there is this extreme amount of worrying about minor details, um, that goes into running these large-scale studies. But it has to be that way. If you wanna get them right, if you believe they're gonna be important, you really have to do them that way. Um, and that's something we've really felt, um, and why I have a limited amount of hair left on my head at a relatively young age.

Benjamin James Kuper-Smith: Yeah, I mean, I think I completely get the being pedantic about things, because I guess it's like a thousand different decisions, and, you know, if all of them are slightly vague and not exactly what you mean, then, I mean, it's gonna happen anyway to some extent, but if [00:51:00] you don't try and control that, it just becomes something completely different in the end.

I was slightly surprised when you mentioned that you use AI for the translation part, to check it. Because, I mean, I grew up bilingual, so I have tried some of these translation things, and they are getting incredibly good, but it seems to me still that the nuances you're actually interested in are exactly the thing that it might not get right.

So, uh, isn't the translation system exactly not the tool for finding that?

Kai Ruggeri: Oh, absolutely. So this is actually a great question, and I think this is an important lesson that we've had that I'd gladly share with researchers. So, we don't say AI; I don't know if AI is what I would call it, more like better forms of Google Translate. You can call it AI if you want. Um, but we tell people not to use AI when they do their translations.

Benjamin James Kuper-Smith: Like DeepL or whatever it's called? Or what do you use?

Kai Ruggeri: Um, well, it depends; we have to use a few different ones. Sometimes it's just as simple as using Google Translate in Chrome, right? But to be [00:52:00] clear, what we do: we first check, we have a way that we set up where we look at the way they directly translated everything, so we can see them side by side, and then it's implemented into the survey. Then we take the survey as would a normal participant, but we use a tool that automatically switches it back to our native language, whatever the team member's native language is, and we just go through it. And if we see something that looks different, we simply ask the team: hey, can you just check that this is correct?

Because the back translation shows this. 50% of the time they say the back translation we're seeing is wrong, and in fact leave it that way. The other 50% of the time they say, good catch, we'll modify it. Right? So that's maybe a really good lesson if you're gonna use those tools: don't assume one way or the other, just use it as a red flag to check. Okay. Um, we've all had it where we finished studies and immediately thought, oh, darn it, how did we not catch that? You know, we have one in our current thing: I regret not including a question. I [00:53:00] was afraid that the survey was getting too long, and so I decided we just wouldn't include this one question that I really wanted in there. And now we have over 21,000 participants, and there's no indication that anyone thinks the survey is too long, and I regret not putting it in there. Right? So you're not gonna get it perfect. But the new tools at least help us. Um, when we were running the study during the beginning of the pandemic, um, so summer 2020, I was in New York, you know, the most shut down of all cities, I think, in many ways. When New York reached what was called phase two of the reopening, you could now sit outside in small numbers at, uh, kind of cafes and restaurants. So I took materials, I found the first place that I knew was open, and I went and I sat there for the better part of four days. And all I did was tap through every version in every language of the study we ran that year, in all the different countries, I forget how many, I think 26 countries. And I just tapped through, and back then I didn't have the translation tool. So I'm not only reading [00:54:00] different languages, I'm reading different alphabets. Right? I can read Cyrillic pretty well. My Spanish is decent. Okay, the European languages, in broad strokes, I can mostly get through fine. I've spent enough time in enough countries that I can get by. But I don't read Georgian, right? I don't know the Amharic alphabet for Ethiopia, right? So I'm going through it.

And one of the things that you find is, if you at least kind of frame yourself and immerse yourself in it, you can spot things, and you can see numerical things, well, at least in the ones that use the kind of numeric system we use; some are a little more difficult. And you find that you can actually check those things. Having said that, I much prefer the technology we have now, because it's gonna do a better job of it, 'cause there were lots of times I found little things. You know, you start learning a lot about languages and the way things are done. You know, a lot of the languages we work in are right-to-left rather than left-to-right.

So you have to become more comfortable in these things. Again, I don't know if this is that interesting to a lot of people, but I certainly think the [00:55:00] message around making use of the new technologies to at least identify the possible risks matters. A major part of the unseen labor of studies like this is the hours and hours you spend double-checking.

Not because you don't trust your teams, but because those minor little differences could change the study.
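
[Editor's note: a rough sketch of the back-translation "red flag" workflow described above, assuming you already have machine back-translations for each item, e.g. from Google Translate or DeepL. Plain string similarity is a crude stand-in for a human side-by-side comparison; as Kai says, the score only flags items for the local team to review, it never auto-corrects. The threshold and example items are made up.]

```python
from difflib import SequenceMatcher

def flag_items(originals, back_translations, threshold=0.75):
    """Return (item_id, original, back_translation, score) for items whose
    machine back-translation diverges noticeably from the English original."""
    flagged = []
    for item_id, (orig, back) in enumerate(zip(originals, back_translations)):
        score = SequenceMatcher(None, orig.lower(), back.lower()).ratio()
        if score < threshold:
            flagged.append((item_id, orig, back, round(score, 2)))
    return flagged

originals = [
    "How much do you earn per month, after taxes?",
    "Would you prefer $50 today or $100 in one year?",
]
back_translations = [
    "What is your annual salary before tax?",        # drifted: gets flagged
    "Would you prefer $50 today or $100 in a year?",  # close enough: passes
]

# Flagged items go back to the local team as a question, not a correction.
for item in flag_items(originals, back_translations):
    print("Check with local team:", item)
```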

Benjamin James Kuper-Smith: Man, my, uh, Georgian audience is gonna be very, very disappointed by your lack of the language. But,

Kai Ruggeri: I now know a great deal about how Georgian, uh, linguistics looks. I still don't speak the language or read the alphabet, but I at least had to learn a lot about it. It was great learning about Amharic as well. And learning that, I think, at least three or four of the countries we work with don't use our calendar, that they work on a completely different calendar. Right.

Benjamin James Kuper-Smith: Which ones? Which countries don't use it?

Kai Ruggeri: They're on a different year. Um, it depends which country you're talking about, 'cause, for example, Iran does not use our calendar, and I believe Ethiopia as well. So it's a different year, [00:56:00] right? So learning these things, it's very insightful culturally, right? Even without leaving, without going anywhere, you can get a great deal of cultural appreciation, um, and understanding of a lot of these things. Thank goodness for universal time for airlines, right? Or, man, can you imagine flights and times and figuring out where things are? So.

Benjamin James Kuper-Smith: Yeah. So, I mean, this feels like a very, very broad and generic question, but, you know, what are the pros and cons of these big studies? I mean, I guess there are the obvious ones, right? I guess I would say the obvious ones are, you know, the con is all the effort you talked about and all these things that we just mentioned.

And the pros are, you know, the results you can get. I think those are the very obvious ones, but I'm just curious, is there anything that people just don't see when they read a paper like that? I mean, maybe you already mentioned some of it, like the painstaking detail you have to go into that [00:57:00] just doesn't really apply to a study that's done in one country, in one place, in one language.

Um, but yeah, I'm just curious, is there anything, I dunno, what should we know about those papers that we don't when we just read them?

Kai Ruggeri: Well, they're great. In my view, those papers have much deeper science than just large scale, big sample, many countries and all that. In all of those, it wasn't intended to be a bulk story, right? The prospect theory paper, I mean, I certainly feel like it added something in validating and really just giving essentially a modern, not overhaul, but update to what they did in 1979. We also did original analysis in there and some original things, like I told you about the expansion from expected utility theory. And with temporal discounting, we also talked about the distribution of the effects, the links or the lack of links to economic circumstances.

And we talked about how the economic environment says a lot more than just income, right? [00:58:00] Because if you're in an unstable economic environment, you're much more likely to have high rates of temporal discounting than someone who's low income in a much more stable economic environment. And those things are very insightful, and they expand beyond saying that this bulk study shows that this theory is okay.

We weren't just trying to do that. That was never really the intention. The idea was, if we could get more countries, a broader sample, more engagement, we'd have a better understanding of where the theory does and doesn't apply. In the discounting paper, we talked about this as the contours, right? No psychological theory is ever, in my view, ever intended or expected to be universally true. Right? Even take something like personality: heavily studied, massive measurement, right? Detailed ones, lots of validation of different components of it, but never would anybody argue it's universally applicable. You're never gonna take a personality measure and say, I absolutely guarantee what this one person will do in this [00:59:00] one instance. Right? And it never should be. Um, and I think in many cases what was lost was that people saw it as this big validation of those things. And that's great, I'm glad that in many cases they saw the validation of the concepts. But we went further, and we said, where doesn't it apply? Like, if you look at a lot of the prospect theory stuff, and there have been papers on this since, I believe, by teams in Sweden, and they talk about, you know, even if prospect theory is good at explaining behavior, it doesn't explain the majority of people the majority of the time. And nobody universally fits perfectly in all six of the anomalies. And the same can be said for temporal choice, right? These intertemporal choice anomalies that we see. And I think that in our work, one of the big things is we weren't just trying to show this replication. We were expanding knowledge, and in my view, we were adding to it as well. I think that maybe the idea should be, you should write that stuff separately, or else it gets [01:00:00] lost in the mix.
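
[Editor's note: for reference, the textbook forms of the two models discussed here, not the exact specifications fitted in the papers, which are richer. The parameter values below are commonly cited estimates from the prospect theory literature and an arbitrary illustrative discount rate.]

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Kahneman & Tversky-style value function: concave for gains,
    convex and steeper for losses (lam is the loss-aversion weight)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def hyperbolic_value(amount, delay_days, k=0.05):
    """Simple hyperbolic temporal discounting: higher k means steeper
    discounting, as might accompany an unstable economic environment."""
    return amount / (1 + k * delay_days)

# Losses loom larger than gains: |v(-100)| > v(100).
print(pt_value(100), pt_value(-100))
# A delayed reward is worth far less at this (illustrative) k.
print(hyperbolic_value(100, 0), hyperbolic_value(100, 365))
```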

You know, the downside of some of this coming out during the thick of the pandemic was that there was really no opportunity to go around and present, to engage, because the world kind of shut down for those opportunities. There wasn't really this way to engage with that material, but certainly I think there are some really important scientific insights that came from that work that weren't as picked up on. I could go on about, you know, the unseen labor. I don't wanna complain about it, though, because I imagine most people can probably grasp it. I could make one gripe, though, and it relates to my earlier one. We did contact people that we thought were pretty cynical, well, critical of some work in the field, um, whether it was on the theoretical side of what question we were asking, or sampling and, you know, underrepresented populations and so on. And we got ghosted in a few instances. Certainly a lot of non-responses at all, but ghosting more from people who at first showed interest and then just [01:01:00] stopped replying entirely. But also, we found that people who had been most critical of the absence of the kind of sampling that we were doing weren't interested in contributing to it.

They were interested in just complaining about it. And this has been, I think, a real detriment to the field. You know, I'm not saying they had to participate in our studies; I don't think that their views are qualified based on whether or not they collaborate with me.

That's not what I'm saying. But it certainly was strange that they didn't seem in any way interested in supporting what was being done there, or even recommending anyone. Um, and I would say that that is something that I think people should be aware of too: some of the most cynical voices aren't actually that interested in contributing.

It seems they're very interested in just being cynical, um, which doesn't really appeal to me. Part of the reason I chose my field was because I liked its idea of understanding a problem or an issue or an opportunity and saying, okay, what can we do? Right? And what can we learn more about this?

Griping doesn't really, it doesn't get me out [01:02:00] of bed in the morning.

Benjamin James Kuper-Smith: That's a shame. Um, I mean, no, I also don't. I mean, sure, you can get annoyed by things very easily, but yeah, it just seems so exhausting to always be cynical. Um, so let's not do it.

Kai Ruggeri: Yeah. I mean, and in praise of the people we've worked with: you know, one of the things is you can get lost in a sea of authors like we have, so one of the things we try to do, for people who do exceptionally important work, is to highlight that.

So for example, um, Emyo Damage and Banovich Kalo came up with one of the data collection methods that we talk about; Haki did as well. So we tried to name them and give them that appropriate recognition, because people complain about the absence of representation in studies, but they don't necessarily provide us pathways to improve on it.

Well, these people have done that, and we really try to recognize it. And if you look at some of the [01:03:00] countries we now have involved in our studies, these things have been really valuable. We've done a little bit less on trying to publicize those, because one of the things I was afraid of was that we'd publicize these methods and then people would use them to do bad-quality work. And that, I think, is an issue in our field. I've seen some large-scale data collection done that I think is okay, they have bulk data, but it's not necessarily a good study. Right? And I think that's the number one most important thing I hope everybody understands: we invest massively and seek lots of input and feedback before we actually run a study. Um, because even with all of that, it can't be perfect. So if you start out just saying, well, here's the thing we wanna run, everybody now run it, that, I think, is a problem. What I hope people understand is we invest massively into getting, I can't say getting it right, because you can never guarantee that, and it's certainly self-assessing, but we certainly invest heavily into doing our best to get it right.

Benjamin James Kuper-Smith: Maybe one question [01:04:00] from my perspective. So now we obviously have these tools to get samples, right? Whether it be Prolific, or there's also a new one I'm at least aware of, called Besample, where they're especially trying to get the samples that are basically not covered widely by many of the others.

Is there any point in, you know, for example, I signed up to Besample and had a look at what it looks like, and there's a temptation to go, you know what, I'm just going to test some people in, let's say, some English-speaking countries around the world.

Um, is there any point in doing that, unless you heavily engage with people in the country to, you know, make sure that you're not doing anything silly? Um, because part of me feels like it's very easy now to just select a different country and collect a couple of participants.

Um, but I'm just not entirely sure what the value exactly would be of [01:05:00] doing that if you don't know that place very well.

Kai Ruggeri: Sure. No, I mean, you know, we make a big, big effort to make sure we have collaborators in and from the country. Um, sometimes we have collaborators from the country who aren't physically there at that time, but they're from that country. Um, first is the ethics, right? How you handle the ethics says a lot, if you just do it and then roll it out, but you haven't even asked people whether they need to get a secondary ethics approval or something, and you're just working on it externally. I'm not saying there's no value in it. I would say that it'll be limited, because all your limitations will be obvious: you asked it as an outsider not knowing any of these things.

I mean, the simple adaptation I talked to you about is how people refer to their incomes, right? If you just say, what was your income, you know, how do you even know how to translate that answer, even if you get it in a language you can understand, right? I think there's a lot of that. Not having a local collaborator will also diminish a lot of your understanding of how things are said and presented. Um, you know, even the [01:06:00] classic joke about the US and the UK being two nations divided by a common language, right? In the US, 70% is considered an average grade, right? In the UK, that's the line for a distinction level, right? So how do those two things translate if you ask people on a scale of zero to 10 or one to 10? You know, how do we interpret a seven out of 10? That can be very different. And if you have no understanding of a place, I think you're gonna be limited in that. Having said that, I don't think it's zero value; I certainly think there's a lot of value. And additionally, if you use Prolific or Besample, you're paying people. At a very minimum, at least they're getting money for their time. Right? If you are getting volunteer participants, I would hope that you've done a lot more homework at making sure this study is as close to perfect as it can be, or at least the best it can be given resources, knowledge, expertise, um, and local collaboration. I think the history is a lot of, you know, foreign researchers going into places, taking [01:07:00] time of people, and then jetting off, and then writing a one-author paper that required, you know, 30 people to help put it together.
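
[Editor's note: one common way to handle the scale-interpretation point above, not necessarily what these studies did, is to standardize responses within each country before pooling, so a "seven out of 10" is read relative to local norms rather than at face value. A minimal sketch with made-up numbers:]

```python
from statistics import mean, stdev

def zscore_within_country(responses_by_country):
    """Map raw ratings to within-country z-scores, so values are
    interpreted relative to each country's own response norms."""
    standardized = {}
    for country, xs in responses_by_country.items():
        m, s = mean(xs), stdev(xs)
        standardized[country] = [round((x - m) / s, 2) for x in xs]
    return standardized

# Illustrative 0-10 ratings: a raw 7 means different things in each sample.
raw = {"US": [7, 8, 6, 9, 7], "UK": [6, 7, 5, 6, 8]}
print(zscore_within_country(raw))
```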

This I obviously strongly disagree with, and we give authorship recognition to everybody who completes their contribution to the study. You know, that's actually one of our harder issues: have we over-included authors who maybe didn't do quite as much, even though they had some minor role? Um, we rarely have an issue of missing people out.

Um, there've been a few minor situations over the years where there was kind of a gray-area issue of whether someone should be included, but we make sure everyone gets included. And that's kind of part and parcel of: should you be running a study in this place if you have no knowledge of it? Well, if you're paying people for their time and you can ensure the ethics and the quality of the study, I don't necessarily think you shouldn't do it.

But if you are doing that without any sort of cultural awareness or engaging a local collaborator of any kind, you know, a representative of each place, essentially an ambassador for each country, you're gonna have a lot of the known unknowns: we know that we don't know how to interpret a lot of this [01:08:00] information. I also think you're missing out on the great experience it is to collaborate with people in different places; the learning experience I get from collaborating with people, and seeing the way things are done differently, is spectacularly informative. Um, and there are some really interesting things I've learned in a lot of countries about what engages people. And you find it even in your own country: there are parts of the country that engage in certain things in certain ways that you may not be aware of if you don't know anybody from that community. Um, so you're missing out on both that quality side of the study, but also just the chance to learn, um, and figure these things out in the way that we are.

Um, and, you know, it can generate some frustrations, but also you get a great appreciation for the variety that exists in the world.

Benjamin James Kuper-Smith: Yeah, I mean, I guess the reason I'm asking specifically is because we have, for example, I have a student in particular who's really interested in this, right? And she really [01:09:00] wants to do it. And part of me thinks, you know, why not? Why not pay people to take part in a study and then see what happens?

You know, I mean, sure, everything is culturally specific, but we're not doing anything that's, I would say, in terms of the kind of studies you can do, it's a little bit more on the basic side, uh, with maybe fewer moving variables, let's say, than other things.

But yeah, part of me also thinks that interpreting what we get is gonna be basically impossible if you just don't know that country, other than that it's different and far away, you know?

Kai Ruggeri: I mean, I think that there's no problem. You know, I have some bias in the situation, but I'm certainly encouraging of people using those platforms, particularly if it helps them get solid samples, and it pays people for their time in fair ways, and they've gone through the ethics of it. I'm certainly in support of that. Where [01:10:00] I'd just encourage it is that it doesn't preclude you finding somebody from that country who's willing to partner with you and make sure that the quality of that study is appropriate for that place.

Benjamin James Kuper-Smith: Of course, but it's more of a time issue, right? In an ideal world, of course you would do it, but in some cases it's like, you know, we could just do it quickly and see what happens. But yeah, I dunno.

Kai Ruggeri: sure.

Benjamin James Kuper-Smith: Um, yeah. So at the end of each interview, I ask my guests the same three questions.

Uh, the first is, what's a book or paper you think more people should read? It can be old, new, well known, not well known. Uh, just something you think people should check out.

Kai Ruggeri: So, um, you know, the book that I always recommend to people who are interested in my field, the funny thing is the author. You're gonna roll your eyes at how obvious the author is, but it's which book of his I recommend. So, um, a lot of people that listen to this program will probably have read Nudge or will be familiar with Richard Thaler.

Right. Um, and, you know, I'm not here to criticize the book or anything, [01:11:00] but I have long said that Misbehaving is the book that people should really read if they want to get excited about this field. 'Cause if you want to really work in the field, I found Misbehaving much more valuable. And, um, ironically, I read that book on a train in Switzerland, um, a number of years ago, and had already been trying to think of different ways to apply some of the work that I wanted to do on, you know, statistics, probability and understanding, and thinking about what real-world frames it went well in.

That book was in some ways life-changing, for even though I've stayed on the same trajectory, it gave a home, you know, it gave this idea a home. And I feel like many people have read Thinking, Fast and Slow. They've probably read Nudge or, um, Predictably Irrational, or, you know, these kinds of mainstream books that have done very well and, you know, brought our field a great deal of credibility and interest.

And, you know, I don't think there are real problems with those, or with Noise, or, you know, what's going on there. But, um, for me, Misbehaving was the book that [01:12:00] spoke much more about the scientific topics that I was interested in, and it gave a much better frame. 'Cause a lot of times people will ask me if I've read, you know, a book or heard an interview about a topic.

And I've always been more interested in the original article, because as an academic I'm kind of interested not in the headline information, but in, you know, the pedantic stuff: those steps, the methods, the analysis. And I felt I got that more from Misbehaving. So that's actually a book I have long recommended to people, um, if they really like this field.

I think you'll walk away feeling a little bit more like, okay, so these are the ways in which we see behavior understood as this scientific concept that we can understand and study. Right? I've long felt that. So that would be my book. Um, you know, uh, is it book and article, or book or article?

Benjamin James Kuper-Smith: Book or article, some people do both. Some people go on a whole list. It's, uh, you know, very open.

Kai Ruggeri: Yeah. There's a paper, I believe in Perspectives on Psychological Science, by Craig [01:13:00] Parks, I believe, uh, years ago, about prosociality. And if you're interested in kind of, you know, personality and how it relates to who we are in the social domain, I always thought this was an amazing one, because he seemed to bring together decades of research into one paper, and I was amazed how he did this. Um, and it basically talks about, you know, the three kind of different ways we interact in society: the prosocial, the proself, and the competitor. I always thought this was a brilliant way of looking at this, in this kind of frame where, you know, we have people that really position themselves as: what's best for society is what I support. Then: what is best for me is what I support. And then the competitor, which I feel we probably all are at times at least, which is: I'm good with what is good for society so long as I come out in a good position from it, right? Uh, which is like conditional prosociality, I guess. Um, I always thought this was a great paper, and I [01:14:00] always recommend people to read it, because it kind of takes psychological science and says, how does this apply to us as social beings?

Um, and so I always recommend that one as well. So, sorry.

Benjamin James Kuper-Smith: It's totally fine. Um, funny thing is, you know, again, it's a very loose question, and I'm still waiting for someone to recommend their own stuff. Hasn't happened yet. I'm still waiting for that to happen one day. Um,

Kai Ruggeri: One hopes that the previous hour of our discussion encourages some people to read at least one of the two papers I've worked on.

Benjamin James Kuper-Smith: Yeah, of course, of course. Um, but yeah, definitely. I mean, Misbehaving I read a while ago. It's one of those, it's funny, because I didn't really plan on working on decision making. It's not like some people who have just always been interested in that topic.

For me, it's something I just kind of accidentally fell into. So I read all those things before, right? All the books you mentioned, I read them as a psychology undergraduate, or in my masters or whatever. Right. [01:15:00] Um, but I should probably just reread them again, now that I've actually read the research that they're all about, and all the context.

I mean, I really love the related book, I guess, The Undoing Project, about Kahneman and Tversky's working relationship, or relationship, and, um, yeah. No, I think it's one of the best scientific biographies I've read. I think it's great. Um, but the others I haven't reread since, so that's gonna be one I should probably reread also.

Kai Ruggeri: I think if anybody's looking for movie ideas: they did The Big Short, great; they did Moneyball, great. Now do The Undoing Project. You know,

Benjamin James Kuper-Smith: Yeah. Do you know whether there's any, 'cause it does feel like a very obvious kind of,

Kai Ruggeri: If anybody knows Michael Lewis's agent, you know, uh, there are definitely some academics that would back the idea of that one becoming a movie.

Benjamin James Kuper-Smith: Yeah.

Kai Ruggeri: Hollywood keeps doing franchises. I mean, why not just a Michael Lewis franchise? You know, like that's, uh,

Benjamin James Kuper-Smith: What, like the Marvel Cinematic Universe, but the [01:16:00] Michael Lewis stories universe?

Kai Ruggeri: Why not? You know, I'm not saying they have to do all of his books, but it seems like The Undoing Project, you know, I think enough people would find that interesting enough. And certainly it ends with some very interesting, I don't know if plot twist is the right word, but the pinnacle is what happens after the book, you know?

Benjamin James Kuper-Smith: Yeah, I was about to say, I'll link to a recent article about, I guess they call it, Kahneman's final decision, right, or whatever it is. That was very interesting. Um, and yeah, I guess it's a little bit like A Beautiful Mind, where one of the big plot twists happens after a book that is full of them.

Yeah. Um, anyway, the second question is, what's something you wish you'd learned sooner? This can be from your private life, from your work life, whatever you want. Uh, are you willing to share something that, you know, would've helped you out if you'd figured it out a bit sooner? Maybe, however much you're willing to share, how you figured it out, or what you did about it?

Kai Ruggeri: You know, I'll kind of change tones and speak on a more serious level. [01:17:00] Um, one of the things that I learned, and probably in 2020 it really came to a head, 'cause I realized it then: like many PhDs and postdocs that came out during the recession, you know, finishing up in 2009, 2010, it was not a great time, um, to be looking for academic work. It was a real challenge, and I think a lot of people feel it similarly again today. What it meant, though, was that for a period of time after, if you wanted to do academic research... and I had no intention of ever being an academic. I enjoyed the opportunity of doing the PhD, but I was more than content to leave academia at the time. It was only later that I realized, you know, my passion was really studying and researching things. And I only learned that by leaving academia for a while. Um, and I got these great experiences in other things. But one of the things that I think a lot of people would benefit from hearing is, you know, we really burn ourselves out a lot in modern times with how much we work.

We pour a lot of time into things. [01:18:00] We do a lot of kind of thankless tasks that we're expected to do. And we do stuff a lot of times either because it feels like a requirement, or because we think that what we're contributing to is important.

We're helping other people out, we're putting other people's needs ahead, or we're putting institutions first, and so on. And one of the things I realized later, though, was that I had burned myself out in many ways. I really pushed myself very far and spent a lot of time, nights, weekends, or even work time, prioritizing other things than what was most important. And then later on realizing that a lot of that time created its own burnout. You were doing all these things that were creating more things that you had to do. And a lot of it became, in many ways, not just thankless, but stuff where you would think later, okay, well, now I've spent all this time doing that.

I didn't get my priorities done first, and I'm [01:19:00] not focused, my mind isn't focused on these things. And later you start getting kind of frustrated that you put in all this time and effort. You're like, why have I spent so much time working on this? And then you realize you kind of did it to yourself, right?

You said yes to things because you thought there was this bigger kind of objective, or that you needed to help out with these other things. And one of the things I wish I'd learned earlier was: if you want to do something 'cause you think it contributes, or because you think someone else will benefit from it, or you think it will have some greater objective to it, do it for those reasons. If you ever expect some sort of return from it, if you ever expect anybody to pay it back or to show any form of, like, formal gratitude or anything, if the only reason you're doing it is because of that, don't do it. You'll burn yourself out and you'll find yourself very bitter. You'll find yourself very frustrated, because one of the things that I noticed is that people are actually very quick to ask for favors. And even if they don't realize it, they tend to be pretty quick to say, oh, can you do this, or help me with that, or shoot a quick email that asks for something, not [01:20:00] realizing that each one of those things is time. It's a finite resource; you have a finite amount of time. And I actually wished I had learned earlier that things you contribute to because you want to contribute to them won't as likely, at least in my experience, burn you out or leave you frustrated and bitter, because you wanted to work on them.

There was no "I work on them because of X, Y, Z later down the line." If you do it because you think there's some benefit or payoff later, or you just think, you know, you're gonna help somebody, chances are you are gonna find yourself frustrated later on, because it doesn't mean the favor will be repaid. I'm not saying only do favors that you think you'll get repaid for. I'm saying do things because you choose to do them, because you wanna do them. Be really clear and honest with yourself, because there are a lot of things you get asked to do, especially in academia, that maybe down the line you think will help out. People will ask you, and if you say [01:21:00] yes, they'll continue to ask you. Right? Um, and I'm not gonna use language like "I'll put up boundaries," this and that. I just wanna say, if there's one thing I have learned later that I wish I'd learned earlier: work on things, or contribute time to things, where you directly see the value and you want to work on them.

Whether they're altruistic and, you know, you're just doing them for the good of doing them, or, you know, because you see the value in them long term. If you think you're doing it because there's some payoff to you later on, don't do it. You will burn out and you'll find yourself frustrated, because very rarely is anybody taking the favor or the work or time from you while thinking about how they can repay you later. I don't mean to be cynical about it. I only mean that I did a lot of things where I really burned myself out trying to help out other people, only to realize it was not seen as some sort of engaged relationship in the way we were trying to help out. Someone was just asking 'cause I said yes, right? [01:22:00] I saw that as problematic, not because you shouldn't do things that help people out, but because people take advantage of that. I think people should be more protective of their time and be more comfortable saying no to things, um, if it's not really something that needs to be done, or something you genuinely want to work on or see value in yourself. That may be a roundabout way of describing all of that. But I actually realized much later on in my career that I spent a lot of time on things that fundamentally I just shouldn't have. And not because you need to be self-centered or anything, but simply because there are a lot of things people ask, and they're not necessarily asking for anything other than they think you might say yes.

Benjamin James Kuper-Smith: Yeah. If I can add one thing to that: I mean, you're describing mainly a situation where someone asked you, and then you did something hoping to get something out of it at some point. The fun thing that I did in my PhD is kind of doing that to yourself, which is, uh, you know, just as fun a version of that, where you do something and you think, oh, [01:23:00] this is then gonna lead to that core thing that I want.

And even though you don't really like doing it, and, I mean, obviously sometimes you have to do stuff you don't wanna do, but I find this question of, am I actually interested in the thing itself, or in what it might bring me, is a pretty tricky thing to figure out yourself. Because, you know, obviously everything you do can have a positive outcome, right?

It's just not like it's one or the other. It's usually both present, and the question is more, because of which of these am I doing it? And it's, yeah, it's not easy.

Kai Ruggeri: I mean, look, you hear horror stories about, you know, someone was told, if they just teach this one class, we'll make sure that you're, you know, a top candidate for the next opening we have, or something. And it just never comes true. And you hear these kinds of things. Now, that's an extreme, that's an egregious one.

But there are actually a lot of others. You know, for your own students, you would likely help out on things: proofreading, you know, [01:24:00] application letters, or giving feedback on their CV, or things like that. That's the kind of stuff you should be doing if you're in a position to do it and help out with. It's the stuff that goes beyond kind of the standards of your job that people will ask you to work on. I mean, you know, you burn bridges with people trying to help somebody else out, only to realize it was that somebody else that caused the problem, right? Somebody couldn't get some administrative paperwork or couldn't get this thing or that.

And then it's, you're sitting there trying to get this person to do it, and then you find out later that the reason why that it wasn't done is the person who asked you for the help didn't do their job. And, and these kind of things come up a lot. So all I would say is, there's one thing I'd, been more selective about things that I allowed to fill my day because. know, they, they say it a lot now. You know, make your priority the bulk of your day. Don't make other things the bulk of your day. And then when those other things are done, then you can finally focus. is, I think, a common issue we all face. Okay, I'm just gonna get all my emails done and then I'll sit and focus.

But you've cognitively spread yourself so much, right? [01:25:00] Um, and one of the great joys I had was during writing my dissertation. I'm one of the few people in the world, I think, who loved writing up my PhD. Yes, it's stressful and it took time, but I got into a rhythm at one point. One of the critical things I learned was how to finish days and how to start the next ones without spending four hours doing random other things and then finding it impossible to get writing, those kinds of things.

Learning how to get that momentum is great, right? But a lot of it has to do with just knowing which things aren't a priority in your day. And just because other people want you to answer quickly doesn't necessarily mean you need to.

Benjamin James Kuper-Smith: Yeah, it's funny, I think I'm also one of relatively few people who enjoyed writing their thesis. For me, it was this thing where, like, there's nothing else to do now. I mean, you know, I've got a week left now or something, I just have to finish it, and this is it.

Kai Ruggeri: I spent 10 days on a farm in Bavaria trying to get my head right. Helped out with farm stuff, um, and did random things around there that I could help out with while I was there. Tried to get my head on right; couldn't get the writing started. Um, [01:26:00] found a few different, like, mental tricks and guidances and things.

And then suddenly, uh, something clicked, and I realized what was making it difficult for me to get going. Once I found that out, I was, like, off to the races. I absolutely loved it after that.

Benjamin James Kuper-Smith: Yeah, maybe. Maybe the secret is just being in rural Germany. Maybe that's the same for me. Maybe that's the secret to

Kai Ruggeri: I didn't write a word of it there.

Benjamin James Kuper-Smith: Oh, okay.

Kai Ruggeri: I went there thinking I would get writing done, thinking that's where I'd get my writing done. Um, and it didn't work quite like that, but the headspace where I started realizing what was stopping me from getting that productivity, that became clear to me. Um, so.

Benjamin James Kuper-Smith: If this is too long of a tangent, then, uh, we can stop it. But why were you on a farm in that area?

Kai Ruggeri: So, one of my closest friends from grad school, from when I got to Belfast, I did my PhD in Northern Ireland at Queen's University Belfast, one of my closest friends, he is from a town about 60 kilometers north of Munich. So what they would [01:27:00] say is the real Bavaria. Um, yeah, uh, I know lots of jokes from that region about other regions in Bavaria too, interestingly. Um, and they would always host me. I spent a lot of time there with the family, and they were wonderful people to me. And, um, so I went, it just happened to be around the holidays that year; it was easier to get there and spend extra time than to get back to the States. In those days, as a PhD student, I didn't have an income. Um, so I went there, spent time. I mean, I think in total I've probably been there 15 to 20 times, maybe more. Um, so I've spent a lot of time with him. They're close friends. He had gone back to Bavaria by that time, um, to finish up at the LMU, um, and then ended up at Max Planck. So.

Benjamin James Kuper-Smith: Okay, cool. Uh, final question, and I dunno how different this is from, I guess, the two questions I have that are always very similar. Uh, but the final question is, well, usually so far it's always been [01:28:00] advice for PhD students, postdocs, or people on that border. But now that I'm technically in the second year of my postdoc, I don't care about PhD students anymore.

Now it's just postdocs. So I am changing the question now. I guess the question's gonna grow with my academic stature over time, uh, or not; we'll see whether my academic stature grows. Uh, but: any advice for postdocs?

Kai Ruggeri: Postdocs. Yes, tons. You know, I lived the misery of the postdoc life, and the postdoc life is highly varied, right? First thing is: are you on a project that you really love, with an advisor that has your interests in mind, or are you just, you know, a research assistant with a better title?

Right, and not all postdocs are the same. There are precarious contracts, there are stable contracts, there are ones where you're basically spending all your time trying to get the next grant or whatever. So not all postdocs are the same; I understand that there are many different kinds. Some have a lot of independence and are more like fellowships, and some [01:29:00] are very directly tied to specific projects that the postdoc may or may not have much interest in. The strongest thing I can say is, it can be a long road at that point, um, and it can be competitive. The biggest piece of advice I give all postdocs is: make sure that, if your intention is to stay in academia, you have essentially the business-card paper. And what I mean by that is you wanna have a study that is yours at some point.

Just being named on 50 papers, I've noticed more and more, doesn't really advance you in the way most people might think it would. In other fields it might; in the social sciences, I don't think it really does. Um, granted, it is great to be part of collaborations. I wanna strongly encourage that.

I'm not saying don't do that, but what you wanna make sure of is that you have one or two that really establish your work, not necessarily what you wanna do for the next 50 years, but so that people can understand it. A vague [01:30:00] understanding of what your interests are doesn't really help the career trajectory in most cases, because most departments aren't looking for a jack of all trades.

They're looking for a hire that fits what they're looking for. Right? So you don't wanna be too spread out, and obviously you don't wanna be too, you know, narrowed down in what you do. But really make sure that, if you're working as a postdoc, you are advancing who you are, not just getting your name on things. So make sure that you carve out something that gets you that paper. Now, if you've got one from your PhD and you know that's what you wanna keep working on, then great. Or if you're working in a postdoc that gives you that, wonderful. But you'll find that some of the ones that end up circular, you know, these people that are 5, 10, 15 years in a postdoc, a lot of it's because they had to jump project to project to project, never got fully sunk into any of them, and it becomes very difficult to break outta that loop. So I absolutely understand and empathize with getting caught in that loop. And also, you'll get asked to do a lot more things, because as a postdoc it's assumed that, because you have a doctorate, you can teach certain classes. Well, teaching [01:31:00] can be great, but it can also make it difficult to establish yourself. So to postdocs, I would really recommend making sure that you're establishing your credibility on a topic, right? On a thing that can be defined, not just going on to the next thing. You know, I was in a fortunate position that I could do a lot of different things as a postdoc, and so I was dabbling more so than I was being spread out. It was kind of my choice. I took on a lot of consulting work, I took on a lot of additional things. I did that by choice, but after a while I found it could also be a prison, right? You could end up in this thing where, okay, well, people will give you the things that they don't know who else can do, but not necessarily the things that you want.

And so you've gotta be careful not to get caught in that cycle. But as a postdoc, the critical thing is: are you developing in the direction you want to go, right? So you don't wanna get pulled too far away from what you wanna work on, too far away from skills [01:32:00] that you need to develop. Um, that's a critical recommendation I would give to postdocs, um, because, you know, I went through that window of being on different things, different grants, different roles. Um, some of which I had direct control over, but some of which were a bit more precarious and varied. And it can be very messy if you don't have a path out of that, right? Um, I did it by choice, but I think sometimes you can get wrapped into it.

Also, for a lot of postdocs, there's the temptation to say yes to everything, because you think saying yes will lead you to that next thing. It rarely does. That's one I'd be careful of. Being really good at one or two things as a postdoc tends to be more valuable than showing that you've done 50. Um, having a really good commitment to a small number of things. So that's what I would say to postdocs.

Benjamin James Kuper-Smith: It's really interesting, because I guess there's one thing I always found slightly confusing, which is that there seem to be quite different assumptions about how [01:33:00] much fun people are having as postdocs, uh, since I've been asking that question. Because a lot of people say, yeah man, it's really hard.

And a lot of people say, man, this is the best period of your life. Right? And I always find that slightly confusing, but I think I understand it more now with your answer, because I also am having a good time in my postdoc. I mean, obviously there is, you know, this long-term looming stress of, you know, by a certain year you should have this and that; of course there's that.

But I never quite understood the whole, like, what's so terrible about it? But maybe it is because I'm working on the projects I wanna work on, that are a continuation of what I wanted to do before, and I'm doing exactly what I wanna be doing. And I don't have to teach; I have a fellowship to do research, and I guess I am kind of just doing research. I guess maybe that explains it a little bit more.

Kai Ruggeri: That's a big part of it. So, situationally, there's a lot.

Benjamin James Kuper-Smith: yeah.

Kai Ruggeri: The situation, the length of the contract, all of those things heavily vary, and the environment in which you [01:34:00] do them, right. Um, you know, some people took the first position that they got offered because they were worried, and understandably worried, about it.

But that could also mean that they were in something where they were essentially being told what to do. So they've gone from the stress of the PhD and owning the topic to now going back to no longer owning the topic, but being expected to be better than you were, you know, a few months ago. And that can be a real shock in some cases.

Um, and I think that ownership is a big part of it. Owning what you're working on isn't true of all postdoctoral positions. And also, the people that are overseeing your work: are they invested in your advancement? 'Cause if they aren't, that in itself is a big challenge. Um, and I understand that. When I hire postdocs, I make sure that they are genuinely, fundamentally interested in what we're working on, and I make sure that the level of autonomy they have is at least enough to keep them feeling motivated about the work.

'Cause if you're babysitting, nobody likes it. Um, and, uh, you know, it's very different from the PhD for a lot of people. If you get a good postdoc, it's great, because you no longer have to worry [01:35:00] about the defense. You're now doing research. You're not strategizing a degree and a diploma.

Right. And, you know, I loved my PhD. I worked very intensely on it, but I also knew that, at some level, the critical thing is that you have to pass these guidelines other people have given you. When you're doing your own research, that's not the case anymore, right? Because it's yours.

Right? Not to say that mine was, I wanna be clear, mine wasn't overly strategized, and in fact my supervisor would've told you it wasn't. Um, but it's a different kind of feeling when you're doing the research and it's not just: I need to get this degree finished and pass this defense.

Benjamin James Kuper-Smith: Yeah. I like the part you said about the ownership of the project, because, you know, whenever the project gets to an annoying point where you go, oh, why am I doing this, it's like: oh, 'cause I thought this was a good idea. That's why. You know, you can't blame someone else for that stupid decision.

It's like, oh, that was mine. I literally applied to do this; I applied to get money to do this. Uh, [01:36:00] so yeah, it kind of eliminates some of those, like, excuses.

Kai Ruggeri: Yeah. And one would wonder if you were miserable on a project that you yourself described. Uh, it's a slightly different thing.

Benjamin James Kuper-Smith: yeah.

Kai Ruggeri: I mean, in some cases it can be too much passion. I mean, there's that whole thing of, you know, if you're extremely passionate about it, you end up burning yourself because of it. Right? You know, the guy that solved Fermat's theorem, right, locked himself into an attic, and his family was trying to force him to eat by sliding food under his door. Uh, you know, that kind of thing is probably not what I would recommend to most people. Um, but there are those kinds of stresses as well. But otherwise, normally you would expect that people who are choosing the research themselves should be generally happier in those roles. And if you get a postdoc like that, I mean, that's beautiful, especially if it's well funded, not too short of a timeline, and so on.

Benjamin James Kuper-Smith: Okay, well, uh, on that hopeful, positive note, those are all my questions. So thank you very much.

Kai Ruggeri: Wonderful. Thank you so much.