Innovating Enrollment Success
The higher education landscape is at a pivotal point of transformation. In each episode, the Innovating Enrollment Success Podcast provides insights into what is driving results right now at colleges and universities nationwide. Learn how collaborative partnerships and data-driven strategies are accelerating enrollment growth, and how creative can compel student action and bring enrollment funnels to life.
Bot or Not: Using the Right Data to Drive Enrollment Success
What’s really behind your web traffic? How can you tell if the clicks you’re counting are from future students or from bots inflating your metrics? And what does it take to turn raw numbers into insights you can actually act on?
In this episode of Innovating Enrollment Success, Digital Analytics Manager Sam Hofman and Ad Ops Engineer Nathan Huff talk about how to separate real insights from digital noise, and why trustworthy data is essential for student recruitment and retention.
What you’ll learn:
- Good data is trusted and actionable, not just clean spreadsheets.
- Set metrics early with a measurement brief that defines success and allows for pivots.
- Compare platform and CRM data to avoid inflated conversion counts.
- Identify bot traffic—40–60% of hits could be non-human.
- Block invalid traffic with honeypots, IP filters, and ad verification.
- Assign a data steward to own your data strategy and guide smarter decisions.
When your data is done right, it doesn't just track activity; it tells the real story of your audience. And in today's enrollment climate, those insights can make all the difference.
Cathy Donovan [00:00:00]:
Hello and welcome to the Innovating Enrollment Success podcast where we explore all the ways colleges and universities are connecting with prospects and how students are finding their way to their right fit.
But behind every strategic enrollment campaign is a world of data: site metrics, ad clicks, form fills, heat maps. But which data matters most? And how do you separate signal from noise, especially when bots are skewing the story?
The college journey is very real for students and parents, but even professors are experiencing what happens when bots register for online classes. I'm Cathy Donovan, agency marketing director at Paskill, a higher education enrollment marketing firm that helps institutions across the country connect the dots between student behavior and enrollment strategy.
Today I am joined by two Paskill experts who live and breathe this data every day.
Sam Hofman, manager of digital analytics, leads Paskill's New York City-based analytics practice. He serves key accounts, oversees the analytics team, and manages all data and analytics capabilities. Before joining Paskill, Sam was senior media analyst at The Gate New York. He's a graduate of Franklin & Marshall College and brings robust experience in both client-side and agency-side marketing analytics.
Nathan Huff is Paskill's ad ops engineer. He partners with our paid media and interactive teams to ensure proper measurement and tracking across enrollment campaigns. Nathan makes it possible to measure and report on ROI through custom data connections. He's held several development roles before joining Paskill and holds a degree from the Rochester Institute of Technology. Welcome, Sam and Nathan.
Sam Hofman: Thank you.
Nathan Huff: Thanks.
Cathy Donovan: Alright, well let's get started. So, I want to start the conversation today with a few questions for Sam. Let's start with what does good data mean, especially in the context of today's enrollment marketing?
Sam Hofman: Yeah, it's a great question. I think the first thing you think about when you think of good data, especially in higher education, is having holistic insight into the full funnel of your enrollment at the institution.
So, the first thing that we start talking with clients about when we're talking about good data is whether they trust the data that they have in their systems currently. So, do they feel their web analytics tool is accurately tracking actions on the website? Once people submit an inquiry or apply to the school, do they feel that their internal system for managing that enrollment process is accurately measuring information around that submission or that application?
And then can you follow those users down that funnel? So, the first thing we ask clients in most cases is how confident do you feel in your data? If the answer is, we feel confident, the second question is, how easily can we access that data? Right? When we're looking to optimize an enrollment marketing campaign, we want to be able to optimize as far down the funnel as feasibly makes sense. So that we're not only driving inquiries at volume, but we're also driving quality leads who are more likely to be identified as qualified leads by the admissions team or those leads that submit applications. So, it really can go as far down the funnel as we can kind of still record and send data back to our platforms to optimize for.
Cathy Donovan: Okay, so when you're launching a campaign, how do you decide which metrics to prioritize? Because obviously, your focus can change over time.
Sam Hofman: Absolutely. At the onset of a campaign, whether it's a new client or an existing client, we'll almost always have a discussion around what the goals of the campaign are. If it's a new client, we'll often have a specific measurement brief that we work on with them, where we ask them to help us identify, or start the conversation around, what exact goals we're looking to accomplish with the campaign, and where or how those goals are measured in a way that we can record or report on.
So early on we like to get ahead of that conversation, both so that the client and our team have clarity on what we're kind of all pulling in the direction of. And then second, we can review with clients along the way how we're performing against those goals if it's a specific goal or just maximizing efficiency of whatever goal we're trying to hit.
Cathy, this is a great point. We also absolutely see that clients' goals change over time, right? They may find that they're actually struggling in the transfer area more than they thought they would. So we would shift increased focus to promoting transfer enrollment. That could come with a different target for total transfer enrollment that we can drive from our paid media campaign, or with increased budget against that goal, which would throw whatever the original goal was out of whack. We're constantly working with clients to make sure that everybody's still aligned on what the goals of the campaign are, and to more clearly identify how those goals change over time so that we're remaining on the same page with our clients.
Cathy Donovan: So let's talk a little bit, if you're willing, about some common missteps in how enrollment teams might interpret performance data. I know everybody wants results right away, but there could be a little bit of confusion of what you're looking at.
Sam Hofman: Absolutely. Oftentimes that gets really organization-specific when we're in the higher ed space. Because one of the things, as I mentioned a little earlier, that we're focused on even more now than we have been historically is measuring actions further down the enrollment funnel, and the lengths of time involved differ by client, by program, by degree type.
The time between when a student initiates that enrollment process with an inquiry, when they actually end up applying, and when they register can vary drastically. So, to one of your points, that makes it a little more difficult for us to report on an ongoing basis on things like cost per application, right?
If a student inquired after seeing an ad in February and only applied in December, how do we calculate how much money we spent to drive that application? Is it the spend that we incurred in February that led to that application? Is it the spend that we incurred from February until December? Is it the spend in December?
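To make the ambiguity concrete, here is a small illustrative calculation (the spend figures and application count are entirely hypothetical) showing how the three attribution windows Sam describes produce very different cost-per-application numbers for the same December applications:

```python
# Hypothetical monthly ad spend (USD) from February through December.
monthly_spend = {
    "Feb": 12000, "Mar": 10000, "Apr": 9000, "May": 8000, "Jun": 7000,
    "Jul": 7000, "Aug": 11000, "Sep": 13000, "Oct": 12000, "Nov": 10000,
    "Dec": 15000,
}
applications_in_dec = 25  # applications submitted in December

# Three defensible but very different cost-per-application figures:
cpa_first_touch = monthly_spend["Feb"] / applications_in_dec   # spend that started the journey
cpa_full_window = sum(monthly_spend.values()) / applications_in_dec  # everything Feb-Dec
cpa_last_touch = monthly_spend["Dec"] / applications_in_dec    # spend in the month of applying

print(f"First-touch (Feb spend only): ${cpa_first_touch:.2f}")   # $480.00
print(f"Full window (Feb-Dec spend):  ${cpa_full_window:.2f}")   # $4560.00
print(f"Last-touch (Dec spend only):  ${cpa_last_touch:.2f}")    # $600.00
```

All three answers are arithmetically correct; the point is that the choice of window has to be agreed on with the client up front.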
I think that's one of the things that we absolutely try to message clearly to clients. The other thing that's really important is understanding the difference between a conversion reported by a paid media platform and an actual conversion that you can see in your CRM system. Paid media platforms report on directly attributable conversions, meaning somebody saw the ad, clicked on it, and submitted an inquiry, all in one session.
They also record what they might call assisted conversions or view-through conversions: somebody was served an ad and converted within a couple of days of clicking it, even if they didn't submit that inquiry the same day, or they simply saw the ad and converted within a certain period of time.
Platforms record those as conversions too, which leads to two challenges. The first is that the number the platform reports is often going to be bigger than what you can see in your CRM system, and that can create some confusion on the client end when you're talking about how many students applied for a specific program, for instance.
It can also lead to confusion when you're aggregating conversion data across paid platforms. Google Ads may have served an ad to user X, and later Facebook may have served an ad to that same user X. If that user converts a day later, after viewing both of those ads, both platforms will record themselves as having generated a conversion, which means that for one user in your system, there are actually two conversions being reported in the paid media platforms. So those are a couple of the areas where we see confusion on the client side, and we try to clarify how that works and how we report on it accurately for them.
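The double-counting Sam describes can be illustrated with a small sketch (the record layout and field names are hypothetical) that compares platform-reported conversions against CRM records and collapses cross-platform duplicates by user:

```python
# Hypothetical platform-reported conversions: both Google Ads and Facebook
# claim credit for user "x", who appears only once in the CRM.
platform_conversions = [
    {"platform": "google_ads", "user_id": "x", "type": "view_through"},
    {"platform": "facebook",   "user_id": "x", "type": "click_through"},
    {"platform": "google_ads", "user_id": "y", "type": "click_through"},
]
crm_user_ids = {"x", "y"}  # actual inquiries recorded in the CRM

platform_total = len(platform_conversions)               # what the platforms report: 3
deduped_users = {c["user_id"] for c in platform_conversions}  # unique converting users: 2
crm_matched = deduped_users & crm_user_ids               # users confirmed in the CRM: 2

print(f"Platform-reported conversions: {platform_total}")
print(f"Unique converting users:       {len(deduped_users)}")
print(f"Confirmed in CRM:              {len(crm_matched)}")
```

The platforms honestly report three conversions, but only two humans inquired; reconciling against the CRM is what keeps the story straight.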
Cathy Donovan: It sounds like a lot of moving parts, and a lot of this student journey is definitely not a linear process anymore whatsoever. So now we're going to add a little more chaos into this situation by talking with Nathan about bots. Just curious, as a user myself, you obviously see that bots are an impact on everyday life online right now, but what kind of a problem are they for enrollment marketers right now, Nathan?
Nathan Huff: So, I think to understand the most obvious problems with bots, we have to start with bots generally on the internet: they exist on websites, and they also exist in their own forms on social media, both of which are avenues for enrollment marketing. These are the main touchpoints for prospective students and folks who are coming to higher ed websites.
So we have to understand that bots exist in all of these spaces and will continue to exist irrespective of running a paid media marketing campaign for higher ed. I would say that they're a problem because they're out there generally and they're creating noise in all of what we would call our core website metrics.
These are the metrics, figures, and measurements that would be present on any website; regardless of the purpose of that website, bots are affecting your analytics traffic. So the rule of thumb, broadly across the internet, is that anywhere between 40 to 60% of all of the hits or requests (think of them as loads of your website pages) are actually not from real people.
They're actually from bots. And I'll be clear on this, and maybe we'll touch on it later, but not all bots are necessarily evil. "Bots" can be a pejorative or diminutive term, but some bots are helpful, right? They're doing helpful things. In any case, you have to keep in mind that when we look at a Google Analytics number, for instance, or some other marketing analytics tool that's reporting on website traffic to a college's website, those numbers are essentially inflated above the human value.
So the thing that we're always doing there is looking at that number with that context in mind, and looking at changes in the trend of that number in terms of those website metrics, whether that's users or sessions or page views. So bots are noise; they're in the background, affecting those things.
They're affecting the core baseline for those metrics, which is the gateway we use to interpret all of the more conversion-specific, enrollment-specific, or inquiry-specific measurements that we have. Bots are also a problem for higher ed folks because bots can be used to impersonate people. Those are more malicious bots, and they're far less common than your regular bots in the background. But they can generate fake inquiries. They can generate fake profiles and personas. There are fake bots out on Instagram and Facebook that can theoretically submit spam through the in-platform inquiry forms you might have in your ad units as well.
That's a possibility. You can get bot hits on your paid search ads with what we'll call bad attribution data; that's something we've seen in our experience, and it can affect all those things. So, there are more malicious impacts from bots. As just one example, there's a running gag among folks who work in the web development space about a pervasive bot that goes by the name of Eric Jones. Apologies if you're listening to this and your name is Eric Jones. But there's an Eric Jones, and apparently the bot that is Eric Jones is very aggressive in terms of detecting web forms.
Let's say you have an inquiry form on your higher ed website. He's really hard to block, basically; he's really hard to keep out of your system. So some people put in specific block entries for the many variants that Eric Jones uses. And why is it called Eric Jones? Well, because most forms have a name field, and Eric Jones is the name that gets put in there.
As one web developer put it, "we actually use Eric Jones in our QA process, because if Eric Jones is visiting the form, we know it's live."
Cathy Donovan: Oh my gosh.
Nathan Huff: So those are some of the problems with how bots affect those things for marketers; just a couple of main examples of how they affect those efforts and how we deal with reporting on them.
Cathy Donovan: Are there other telltale signs that it's a bot or non-human traffic? I know there's Eric Jones, but what are the obvious signs that, as marketers, you expect and already account for? What are some basic ones people should know about?
Nathan Huff: So at the end of the day, it's still hard, to some extent, for bots to generate a certain level of quality in a response, whether that's a field on an inquiry form like "tell us about yourself" or "tell us about your problem." It's hard for simplistic bots, your run-of-the-mill web form spam, to reach that level of sophistication. You'll see patterns that will tell someone even with a casual eye, "hey, that doesn't look like a submission from a real person."
A lot of times you'll see visits from locations that don't make any sense. Let's say, I'm not going to name any specific colleges, but let's say you're helping out a small private college in a semi-rural part of Pennsylvania. And they're getting all kinds of attention from, again, this is nothing against Sri Lanka or any other country that I might name here, but you're getting visits from countries you didn't even know existed, like Turks and Caicos or something.
You know, countries from which there's a very low likelihood that anyone would be visiting, based on an IP address coming from that place. Or from some faraway land, like Siberia in Russia, where you'd say, "hey, we're getting a bunch of traffic from this; that's highly unlikely." So that's one of the ways you can detect bots. Another pattern you might see more domestically is a bunch of traffic coming from known zones where internet traffic is routed through certain parts of the United States. There's an area of Virginia that's very common as a gateway for a lot of network traffic.
So if you see a huge spike in that, that's a big red flag location-wise. There's also some stuff that can happen with your geolocation metrics: apparently, if there's some bug in the geolocating, it will default to the geographic center of the United States, which is in a particular part of Kansas.
Sam Hofman: It's Coffeyville. We've seen it way too many times.
Nathan Huff: Yes, that's right. There's a whole bunch of traffic that will come from there. And then, to some extent, the other thing that can be present is certain types of user agents that are very, very unlikely to be used by a regular person. The classic example: there are certainly hobbyists out there who use Linux operating systems to navigate websites and do things. I'm not here to put down Linux users.
They're probably proud that I'm recognizing them. But suffice it to say that if the majority of your website's traffic comes from a Linux user agent, it's highly unlikely, especially in higher ed marketing, that it represents the audiences you're trying to reach. If you have a bunch of traffic where that user agent, or agents like it, shows up in your analytics reports or your website logs, that's another telltale sign. There are probably other examples, but those are the ones you're going to encounter most commonly.
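The telltale signs Nathan lists, implausible geographies, geolocation defaults, and unlikely user agents, can be sketched as simple flagging rules. This is an illustration only, not a production filter; the lists, thresholds, and field names are all invented for the example:

```python
# Illustrative heuristics for flagging likely non-human sessions.
# Real detection would combine many more signals and much more care.
SUSPECT_GEO_DEFAULTS = {("Coffeyville", "KS")}   # a geolocation-bug default
UNLIKELY_COUNTRIES = {"RU", "LK"}                # implausible for a rural PA college
UNLIKELY_UA_TOKENS = ("Linux", "curl", "python-requests")

def flag_session(session: dict) -> list[str]:
    """Return a list of reasons a session looks like bot traffic."""
    reasons = []
    if (session.get("city"), session.get("region")) in SUSPECT_GEO_DEFAULTS:
        reasons.append("geolocation default (center of the U.S.)")
    if session.get("country") in UNLIKELY_COUNTRIES:
        reasons.append("implausible country for the audience")
    if any(tok in session.get("user_agent", "") for tok in UNLIKELY_UA_TOKENS):
        reasons.append("unlikely user agent")
    return reasons

print(flag_session({"city": "Coffeyville", "region": "KS",
                    "country": "US",
                    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)"}))
```

A session tripping one rule is a hint, not proof; as Nathan notes, some Linux traffic is real hobbyists, so these flags should feed human review rather than automatic deletion.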
Cathy Donovan: Okay. So how do your teams detect, filter or account for these bots when it comes to reporting? Obviously, there's known telltale signs, but how do you account for that and chug along with the work that the campaign needs to do?
Nathan Huff: Yeah, so as with many things, I'll make an analogy: you're in an emergency room, and you know you're going to get some attention, but how much will vary based on the level of severity. That practice is called triage: what's the threat level, the danger level, the proximity to death, to be blunt, of the problem that you're dealing with? So obviously, if you have a problem that's causing high spend, or really affecting the efficacy of your media and your reporting, causing financial impacts and obscuring any meaningful insights, then for those we want to track down the source. We do a lot of that both defensively and offensively.
Defensively is more around making it more difficult for bots or spam traffic to sway any of those things. One tool is what's called a honeypot field on forms: a field that the user doesn't see, but that a bot scanning the form programmatically will see. The bot will assume the field needs to be given a value, while the real user never sees it because it's hidden. So the bot will put a value in that field, and if the honeypot is set, we know the submission is from a bot.
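A minimal sketch of the honeypot check Nathan describes (the field name and form data are arbitrary examples; the hidden field would be added to the HTML form with CSS that keeps it invisible to humans):

```python
# The form includes a field hidden from humans via CSS, e.g.:
#   <input name="website_url" style="display:none" tabindex="-1" autocomplete="off">
# Humans never fill it in; form-scanning bots usually do.
HONEYPOT_FIELD = "website_url"

def is_bot_submission(form_data: dict) -> bool:
    """If the hidden honeypot field carries any value, treat the submission as a bot."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

human = {"name": "Ada Lovelace", "email": "ada@example.com", "website_url": ""}
bot = {"name": "Eric Jones", "email": "spam@example.com", "website_url": "http://spam.example"}

print(is_bot_submission(human))  # False
print(is_bot_submission(bot))    # True
```

The appeal of the technique is that it costs real users nothing: no CAPTCHA, no friction, just one extra hidden input and a one-line server-side check.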
That's one example of how we do things more defensively. Offensively, we do things like blocking certain IP ranges. We look through website logs, and we look through all of those user agent markers. We also use sophisticated tools, whole software services, such as what's known as a WAF, or web application firewall: a service that is essentially professionally oriented toward blocking bots.
So we use that defensively. And then the litmus test, of course, is always weighing the quality of your form submissions or your website traffic against what real users would do. You're looking at it and asking: what's the plausibility that a person wrote this? If that's really high, you're doing really well.
If it's really low, then you know you can discard it from your data set. And a lot of times, if you're aware of those general impacts, that some bots are just noise, that context can help you better vet the data that you're getting.
The last thing I'll say has to do with that first point I made about bots being out there as noise in the background. That background noise, as opposed to major spikes or some of the incidents I went over, we don't necessarily want to make a major effort to filter out.
We want to treat it as a constant in the background. And when we look at the data and report on those baseline traffic metrics, we want to look at the trends, and if we see shifts in those trends, either up or down, we need to see if that aligns with our hypothesis or our expectation of the facts.
Say, for instance, we have a paid media campaign and we're driving a bunch of traffic to a website; all else being equal, we expect that traffic to rise because we're paying for visibility of that website. So we would expect that traffic trend to go up. But if we're not running a paid media campaign, traffic goes up a whole lot, and then suddenly we discover a bunch of traffic from Coffeyville, Kansas, we might conclude that this is a disruption in our baseline noise, and therefore we should attempt to discount or otherwise filter out some of that traffic to better aggregate the metrics we're reporting on. So those are some of the things we do to detect, filter, and account for the influence of bots in our reporting.
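The "constant background noise versus disruption" idea can be sketched as a simple baseline check: compare the latest period's traffic to the recent average and flag spikes that lack an explanation such as a paid campaign. The numbers and the 2x threshold here are illustrative, not a recommendation:

```python
# Hypothetical weekly sessions; the final week triples with no campaign running.
weekly_sessions = [10200, 9800, 10500, 9900, 10100, 30400]

def spike_ratio(series: list) -> float:
    """Ratio of the latest value to the mean of the preceding baseline weeks."""
    baseline = sum(series[:-1]) / len(series[:-1])
    return series[-1] / baseline

ratio = spike_ratio(weekly_sessions)
campaign_running = False  # we are not paying for extra visibility this period

if ratio > 2.0 and not campaign_running:
    print(f"Investigate: traffic is {ratio:.1f}x baseline with no campaign running")
```

The same ratio above 2.0 while a campaign is live would be expected and not flagged, which is exactly the trend-versus-expectation comparison described above.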
Sam Hofman: And Cathy, could I just add onto that? Nathan covered a lot of what we do once the bot activity exists in our ecosystem, but from a paid media perspective, there's a whole other world around filtering out bot traffic. Just for some background: when you're running paid media campaigns, often across a ton of digital channels, you're paying either per impression or per click on an ad, right?
And so that sets up a system that can incentivize what we'll call bad actors to spin up fake websites. They're called MFA sites, "made for advertising" sites, that are purely built to generate money for the person who built them. What they'll do is sneak into your acceptable list of target URLs or target websites for something like a display campaign that's supposed to run on a host of different websites, and they'll set up bots that trigger conversions based on whatever the goal is for that campaign, which routes the money you're paying to them on the side. So, there are tools out there called ad verification tools. We have one in-house that we use basically as table stakes for our paid media campaigns; it's called IAS. There are a bunch in the market, but IAS is kind of an industry standard. What it does is record and report on any invalid traffic or clicks that result from your paid media campaigns, which can help in a lot of ways with saving money on the paid media side.
Recent data shows that on display and video channels, the amount of bot or fraud traffic can typically be over 15% of all traffic. On social it's not quite as high, but one in 10 ad clicks can be fraudulent. That could be for a bunch of different reasons that may or may not relate to bots; there are things like ads displaying on the wrong websites or not showing in a way that's visible to users.
It definitely includes bot traffic. And when you're talking about a campaign that runs hundreds of thousands or millions of impressions, that can waste a lot of money. So having a tool like IAS or any sort of ad verification tool is really, really helpful for countering any sort of like bad actors when it comes to the bot issue.
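To put those figures in perspective, here is a quick back-of-envelope calculation, using entirely hypothetical campaign numbers, of what a 15% invalid-traffic rate costs on a display campaign:

```python
# Hypothetical display campaign: 2,000,000 impressions at a $6 CPM.
impressions = 2_000_000
cpm = 6.00            # dollars per 1,000 impressions
invalid_rate = 0.15   # ~15% invalid traffic, per the display/video figure above

total_spend = impressions / 1000 * cpm   # $12,000 total
wasted_spend = total_spend * invalid_rate

print(f"Total spend:  ${total_spend:,.2f}")
print(f"Wasted spend: ${wasted_spend:,.2f}")
```

On even a modest campaign, the waste runs into the thousands of dollars, which is why ad verification tools pay for themselves.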
Cathy Donovan: Makes sense. And, obviously a really good factor to have when you're considering partnering with an agency that they take this very seriously. Every dollar for an institution is important, so you need to protect it. How do you talk to internal teams about what's real and what's not?
Because clearly, when it comes to paid or web, there are a lot of different people involved in those enterprises. I'm just curious, what are good ways to keep everybody on the same page about what's real and what's not in terms of data? Nathan, did you want to start on that one?
Nathan Huff: I think the core of that question is that in a digital space, there's some degree to which the line gets blurred on whether measurables, traffic, activity, or even those form submissions or ad clicks, all the things that take place in a digital space, are real.
And of course we know that, to some extent, that's a philosophical question. You know, if I program a helpful bot that scans your website, puts it on a search engine, and gets you some exposure, and in order to do so it has to follow the breadcrumbs to your content, that's real in a way.
It depends, again, on the nature and the purpose of the outreach and on the role of your website and your marketing. To that end, managing those expectations internally is essentially to keep people informed, first and foremost.
Like I said, context is key. If I told you with no context that 40 to 60% of the traffic on your website is bots, you might freak out and say we need to do something about it. But if I told you that that's the case for every website in a public, digital space, that not all bots are bad, and all those other things I went over previously, then it's just a matter of keeping people informed of that context so they can interpret those numbers in line with the trends and everything else.
The other thing is it's important to have, I would say some qualified resource to audit or inspect. I may be giving away a future response, but to audit or inspect the data that you are collecting that is potentially more sensitive or has more stakes associated with it. So you should have someone who's looking at all of your inquiry responses or your applications or anything else that would have any personal stake in that, and by that I mean tied to a person.
You should have someone there to look at it and not just guess. So we want to make sure there's a resource appointed to do that. And as far as communicating to internal teams, it's really about understanding that the majority of the data probably is real, especially when it comes to more down-funnel measurements. When someone actually gets to the point where they're filling out an application, they're a human being. Some things about them might potentially be fraudulent, but at a certain point it's a lot harder to put real money behind something like an application than it is to do the easier "let's just spam your web form" type of activity.
So again, arming the folks with context and keeping a dedicated person or resource around the quality of your data are two ways to ensure that people know what's real and what's not.
Sam Hofman: I think the only other thing is just to underscore something that Nathan mentioned earlier, which is, it's important to remember that bot traffic is not always bad traffic, right? It's important for clients to understand that bot traffic is essentially an inevitability. It will happen basically no matter what you do. And it's not the end of the world. The point is to make sure that you're taking the proper precautions to ensure that it's not negatively impacting your website or your enrollment funnel, and to just arm yourself with the knowledge of what is good or bad bot traffic, what is good or bad bot activity in general.
Nathan Huff: I actually want to talk about a fairly recent example, just because I want to add something a little more tangible, as tangible as we can get on this topic. Full disclosure, this wasn't a higher ed client that we serve, but it's something that could definitely happen to a higher ed client. With a paid media campaign, you run ads and you put creative material in those ads, whether that's some type of copy or a digital asset or whatever it may be. And to some extent, Google and the other advertising services or vendors you might be using in this paid media space want to vet a couple of things when you upload material into their platform.
In addition, there's a verification process the advertising vendor runs on wherever you're sending the traffic when people click on your ad. And you might say to yourself, well, how does Google do that? Or how does Facebook do that?
Well, they have their own army of bots and servers and AI processing engines and whole teams probably building all of these things and maintaining these things. So, they have done it in such a way that when you upload that creative asset, they are doing all of these automated scans and they're sending it into cyberspace somewhere.
And so one of the things they use to do that is bots that crawl your website, bots with documented user agents and other attributes that let the website know, "Hey, I'm the Google Ads bot," or "I'm the Google search bot," or "I'm the Google mobile delivery bot." If you set up your website, your web application firewall, or any of that too defensively, you end up blocking all of these things just because you don't have them in an allow list somewhere, which is what happened in this case, and you can actually prevent yourself from properly advertising because that bot can't validate your creative asset.
By preventing these ad services from hitting your site, you end up in what we call in info security a cat-and-mouse game: you try to put up all these defenses targeting specific things, and then the nefarious actors just copy the things you don't want to block, the ones you believe are legitimate, and pretend to be them.
So in this instance, blocking helpful bots was not the best approach, and it's something that happens. And full disclosure, that's probably outside the scope of this podcast's audience, and even getting into territory I'm not really an expert on. But when we talk about bots and what to do about them, I wanted to bring an example like this to life to show how it happens.
Cathy Donovan: Right, because it's an ever-changing landscape, and I think for more and more institutions, to compete and to be seen and heard, you need to exist in this digital world where bots are out there doing their job. So it's not a one-and-done, "this is our policy and we're going to move on." It sounds like it's a constant evaluation, and a trust in the people who are supporting your interests and your enrollment marketing.
My last question for both of you. What's that one thing enrollment marketers should start doing differently with their data and what should it be? You touched on a lot of different things of this changing environment, but is there one thing to kind of focus on, to do differently as the digital landscape continues to evolve in such a fast way?
Sam Hofman: Yeah, I would say, and this speaks to what we talked about earlier on the podcast with good data and how to keep your enrollment data organized, I think taking it one step at a time is an important approach. When you're talking about creating good data and consistent data flows, it may sometimes feel really overwhelming; you might say, "nothing about our enrollment funnel is organized.
I don't know if I trust any of the data." But it's like that saying about planting a tree: the best time to plant a tree was five years ago, and the second-best time is right now. Take small steps in the right direction toward creating an organized, even mechanized, environment for recording and reporting on enrollment data. Maybe five years down the line it's still not ideal, but it's the right direction to be moving in, to make yourself a little more data-driven and data-enabled as an organization.
Nathan Huff: Yeah, it's hard to say it's just one thing, because I think part of success, not limited to that of enrollment marketers, is of course to do a lot of things. Be eclectic; that's sort of my motto. But I would say the main thing is that you need to assign some type of data steward. And this is in the spirit of another mildly cliche quote: everyone wants to build, no one wants to do maintenance. It's easy to be sold on that flashy CRM. But you have to know how to maintain it, and maintaining it means evolving with all of the different avenues for growth and the changes that come your way in an industry. You need someone to look at the data that you have and ask: is this working for us? Is it quality? Is it meeting these business needs? Is it structured in a way that's helpful for us? Is it exposing automation opportunities? Is it opening up avenues for better marketing because we're not using certain attributes of the data that we're collecting?
You know, the questions there are kind of endless. I would emphasize, again, if you have someone whose job, or a resource, is specifically aimed at answering all of those questions, or at least putting together a team to answer those questions and take action on the answers, then that's really what you need.
I would say, in the broadest strokes for the data landscape, the other piece in the background here is that this digitization of data, the use of data, arming yourself with data, is quite frankly the world that we live in. And it's the world that prospective students are living in, and you have to meet them where they are. And that includes data.
Cathy Donovan: Well, I appreciate you both for taking the time to share your insights with me today. As campaigns become more sophisticated and the student journey becomes even more non-linear, it's clear that smart data interpretation, filtering out what's not real, is more important than ever.
For more about Sam and Nathan, see our show notes or connect with them on LinkedIn, and if you'd like to talk about how Paskill can support your team's enrollment goals with better data, reach out anytime. Thanks so much, Sam and Nathan.
Nathan Huff: Thank you, Cathy.
Sam Hofman: Thank you for having me.