RAGNAROKAST EP 19
AI, Analytics, and Pineapple Pizza: Cracking the Code on Martech ROI
This week on Ragnarokast, we’re diving into the world of AI in marketing with special guest Laura, Manager of Technical Consulting at Ragnarok. From cleaning up your data to building smarter martech stacks, Laura, Spencer, and Steven tackle the myths, challenges, and opportunities around AI adoption.

SPEAKERS
Steven, Intro, Spencer, Laura
Intro 00:00
I’m Steven and I’m Spencer. Welcome to Ragnarokast, your podcast for all things marketing and MarTech. Hello everyone. We’re the Co-CEOs of Ragnarok.
Spencer 00:12
All right. Welcome. Hi everyone, Steven. Welcome back.
Steven 00:16
Oh, thank you, sir. Glad to be back. Where was I gone?
Spencer 00:21
It’s what you say on a show. You know dunce, you absolute rube.
Steven 00:27
Oh, oh, rad.
Spencer
Alright. Welcome, Laura.
Laura 00:32
Thank you. Excited to be here. We’ll see how it goes. If you’ll have me back.
Steven 00:38
Let’s put a pin in that one.
Spencer 00:41
Yeah, Laura actually told us, before this started, that she’s in a bunker, holed up with 7000 cans of beans somewhere in the North
Laura 00:49
Canada
Spencer
Yeah
Steven 00:51
We got a variety of pinto, cannellini…like, what type of beans we talking about?
Laura 00:56
All the beans, all the beans,
Spencer
Black, refried.
Spencer 01:04
You got Mr. Bean in there?
Laura 01:11
Yeah, no. No Mr. Bean.
Spencer 01:12
Okay, you always gotta have a spare. Laura, who are you? What’s your whole deal?
Laura 01:21
My name is Laura. I am the manager of our technical consulting team here at Ragnarok. One of the things that I like to focus on with our clients is making sure that they’re really getting a return on the investment of their martech stack. So whatever we’re implementing, helping them realize the revenue, make sure that it’s worth what they’re spending on those tools and that they’re launching the most valuable use cases for their business.
Spencer 01:48
We have a topic for today, which we’ll get into shortly, but there’s something I tried last episode: I asked Steven and our guest what their hot take was, but I don’t think either of them knew what the phrase hot take means. So I’m going to put it in plainer English. Can you each share an unpopular opinion, a hill that you would be happy to die on? It doesn’t have to be marketing related, but it’d be nice if you have one that’s marketing or martech related.
Steven
I love pineapple on pizza.
Spencer
Like, you know, if you, if you’re talking to somebody and you just feel really strongly about it, and someone’s like, you really want to die on that hill? I’ll be like, Yeah, I’ll die right here. I’ll die on this hill.
Steven
Pineapple on pizza. I’ll die on that hill. I love pineapple on pizza. I think it’s delicious. Goes very well with some ham on there, a little bit of chili oil.
Laura
Is that Canadian?
Spencer
It’s what they call a war crime against humanity.
Steven 02:50
Well, I’m dying on this war crime hill here.
Laura 02:56
I think all my opinions are, no, they’re not incendiary. Okay, I don’t know if I’m willing to die on this hill, but I would say most of your reporting should probably not be in your email tool. Some health checks in there for your programs are great, but eventually you probably should be doing that reporting elsewhere.
Steven 03:18
And are you willing to die on that hill?
Laura 03:21
There’s not a lot I’m willing to die for. I’m just not a passionate person, Steven.
Spencer 03:24
She’ll take a wound, like a light, a light scratch, maybe a
Laura 03:29
I’ll take a minor inconvenience for it.
Spencer 03:33
Yeah, so Steven, do you have one? That was a really good one, by the way.
Steven 03:37
Really good one, yeah. I just said it: pineapple on pizza, baby. Like, I’m all in for that. No, no hill.
Spencer 03:44
You literally put martech guru, or whatever it is on your LinkedIn.
Steven 03:52
My opinions are just not unpopular, because obviously I set the trend. So it’s hard to find something that not everybody’s gonna already agree with. No, I’m kidding, man. What do people hate right now that I can be on the other side of? You know, I’m gonna say, I feel like AMP for Gmail has an opportunity to come back. I think there’s some great potential there. And, you know, I think it’ll come back one day. People may say it’s dead, but I’m willing to bet that it’s going to come back someday.
Spencer 04:23
Hey, you know what, cassette tapes are making a comeback, so…
Steven
There you go.
Spencer
It could happen. The topic for this discussion is implementing AI into your martech stack. Now, as we said the other day when we were planning this out, AI, and even AI in marketing, is a horse that’s been dead. People keep beating it. It will just continue to be a little pulp as we go into 2025, and there are lots of dissenting opinions about whether AI is the next big thing or whether it’s just a fad. We’re not really here to talk about that. Today we’re here to talk about this: for those tools that are useful for marketers, whether it’s standalone AI marketing tools or ones that are already built into their automation platforms, there’s a lot that you have to do to actually make them useful. And so we’ve brought on Laura to chat with us about her experience, along with Steven, who is actually an Android, so he can speak from firsthand experience.
Steven
Beep Boop
Spencer
But all right, here’s our prompt. What are the capabilities—
Steven 05:30
I’m an iOS guy, though, you know, like, I don’t want to be like, I’m pretty pro-Apple. Just saying, you know.
Laura 05:36
You can’t distract him even a little bit.
Spencer 05:41
Here’s our prompt. What are the capabilities or requirements needed before your team, slash company, can move forward with AI adoption in their marketing program? What will make a team ready? What do they need to do first before implementing an AI tool is even really feasible or useful?
Steven 06:00
I feel like Laura, special guest Laura, should definitely be the opener for this one.
Laura 06:05
Yeah, I don’t know if these are very hot takes, but I think a lot of what happens is there’s maybe pressure from senior leadership to start adopting AI practices. So I think the first thing is trying to figure out what exactly you want to do with the AI capabilities. Are you looking to optimize your content itself, the send time of your messages, the channel? Are you trying to generate content with it? Where is there room for optimization in your program? Because when we talk about AI, it means different things to different people. So I think that’s the first thing. And I don’t know, Steven, if you feel like you get a lot of that when you get clients on a call. Do you feel like you first have to figure out what it is that they’re looking for before you can really go any further?
Steven 06:57
Yeah, I just pull up a ChatGPT prompt and I’m like, okay, what is AI? And then just have it explain itself. No. I mean, I think to your point, Laura, there is a vast learning gap between people who have been using AI for the last decade of their career and people who are like, this seems like an interesting term that people are talking about, let me Google something, and then I will be an armchair expert on it. So I think you have to gauge people’s understanding of what it is first, and then you have to figure out where you can apply it to be the most effective. And usually the easiest application is: why don’t you type a couple of things into ChatGPT, learn a little bit about what AI gives back to you, get amazed, and keep going, and keep going. Then find out where it starts to hallucinate, right? That’s the first thing, to understand how far you can go before it’s like, what is this thing you’ve been talking about anymore? How to work eight days a week, that’s a great question to ask ChatGPT. And then I think the next thing is to look at: do I actually have a sizable audience or a sizable amount of data to work with? Because if you’re under, like, 500,000 users, for example, you probably don’t have enough data for AI to be more effective than somebody coming up with ideas and building programs for you, right? There’s a scale that has to happen, because it needs enough stuff to look at to build out segments of people that you could use. So I guess the first thing is to, one, evaluate your literacy, and then two, make sure you’re eligible for this AI in marketing, right? There are certainly other applications you can use AI for, but the main things that people are talking about, you may not quite qualify for out of the gate.
Laura 08:53
Yeah, yeah, that’s fair. So if you have a client in front of you that you don’t think has the scale, what do you typically recommend that they do to start?
Steven 09:03
Yeah, I’d say just get the data foundation that you will eventually need when you do hit the scale, right? So, are you capturing behavioral data, and are you storing that accurately and with good governance? Is it available? Is it standardized? That’s kind of the first thing to look at. And then on the other side, are they sending emails? Are they sending SMS? Is that engagement data available within the same data set as well? And then just keep backing out to all the other places that they could potentially be collecting data from. Maybe it’s transactions, maybe it’s subscription data or things they’re doing in a third party that they have available through some type of data-share agreement, whatever other interactions they have. Get all that somewhere to start with. And you might not have an exact use for it now, but when you have the scale, that stuff’s gonna become very useful, to have the historical context for the AI to look through and to sort of understand how this pattern has changed over time. Maybe you were at a scale of a quarter million users, and now, two years later, you’re at 500,000; there’s a lot of data that can be assessed over that two-year period to come up with some pretty good estimates out of the gate, instead of, well, now we’re ready for AI, so let’s now start collecting this data in a way that AI can use it. You can really start that journey much earlier.
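To make the data-foundation idea concrete, here is a minimal sketch, in Python, of what a standardized behavioral event could look like so web, email, and transaction activity all land in one consistent shape. The field and event names are hypothetical illustrations, not a Ragnarok or vendor schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BehavioralEvent:
    """One standardized behavioral event, regardless of source channel."""
    user_id: str          # one canonical user identifier across sources
    event_name: str       # e.g. "page_viewed", "email_clicked", "purchase_completed"
    source: str           # e.g. "web", "email", "sms", "subscription_billing"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    properties: dict = field(default_factory=dict)  # free-form metadata (UTMs, SKU, plan, ...)

# Example: an email click and a purchase land in the same shape, so a future
# model can read both histories without per-source cleanup.
events = [
    BehavioralEvent("u_123", "email_clicked", "email", properties={"campaign": "onboarding_2"}),
    BehavioralEvent("u_123", "purchase_completed", "web", properties={"value": 49.0, "currency": "USD"}),
]
```

The point is only that engagement and behavioral data share one identifier and one structure, so the historical context Steven describes stays usable two years later.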
Spencer 10:25
And so let’s say you have over 500,000 users, you have some historical data, you have someone who’s literate, you’re at least ready to start the project. What do you do then, to actually start an implementation of an AI tool?
Laura 10:40
But we kind of glossed over one bit, which was that data preparation, data cleaning piece. That’s actually a huge lift for most companies: getting your data in a good spot, getting clean data, properly formatted data, and deciding, if you have multiple sources of truth for maybe, you know, conversions or whatever that data piece is, which is the right one. That stuff can sometimes be the biggest lift. So I think having the resources to go through that process of cleaning the data where it needs to be cleaned, interrogating your data, and also collecting it, if you’re not already collecting all the pieces of data that you need, that’s a pretty big lift.
Steven
Oh, yeah.
Spencer 11:24
So what if you’re a smaller company that’s preparing, but maybe you don’t have that resource in place? How much more time-consuming is it to just collect as much as you can and then sift through it later, versus doing it right the first time?
Steven 11:42
Well, you might invalidate a lot of the historical data if it’s not collected correctly, and that’s kind of the trouble with it, or you have to do a lot of updating of historical data, which can be quite painful. But you could do it, and honestly, it’s better to have it than to not have it. So between those two options, Spencer, I would take the one that’s, well, let me at least get something in place. Even if it’s not clean or standardized, at least I have something, so maybe two years later, when I’m ready to instrument more of an AI system, I can at least take all those historical events. Maybe not all of them are useful, maybe I can’t use all of the properties that are in them or the metadata that’s attached to them, but at least I have a behavioral event, something that happened, so I do have some granularity. To go to a more specific use case, like you were indicating, Spencer: say I’m standing up an onboarding series, and I want the AI to build the onboarding series, or to change the content that somebody receives, based on two layers of propensity. One being the source that they came in from. Were they Facebook or Instagram or TikTok, or wherever? Was it paid social? Was it a direct mail source? Where did they come in from? And maybe my historical data is not valid there, so I can only optimize against that going forward. And then the second thing I want to optimize on is the type of behavior that I need them to do in order to convert. How many visits do I need to drive to the website before they convert? What pages do they need to visit? What are those different black-boxy activities that the AI is looking at that will indicate what the propensity is? So even though one of the areas of the model isn’t as fleshed out, I have at least the behavioral events, so I know I can build a pretty good starting point, or have the AI optimize a pretty decent onboarding journey for people. But then, as I look to improve that over time, over 90 days or 150 days or so, when I have more historical events that were collected the right way, the model will continue to tweak itself, and ideally I’ll start to see some variations between folks who come in from paid social versus direct mail or paid search. Do they have different experiences? Do they have different levels of engagement they need before they do whatever converting activity I’ve laid out for them?
Laura 14:22
To add on to that, I think with a little bit of planning, even if you don’t have a ton of resources to point towards this project at launch, you can probably identify a handful of things that you should be tracking to be able to get it off the ground when the time comes. It doesn’t have to be a full-blown, comprehensive tracking plan; just identifying those few things, I think, goes a long way. And I’ve found it can sometimes be hard to get buy-in to plan the tracking, because historically it’s kind of the least fun part of launching your program; people like building the things that are customer-facing. But I think now, with some of the new capabilities around AI, it might be a little bit easier to sell the upfront planning on the tracking.
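As a rough illustration of the lightweight, not-full-blown tracking plan Laura mentions, here is a small sketch with hypothetical event names; the idea is just to pin down a handful of events and their required properties early so they are captured consistently.

```python
# A hypothetical, lightweight tracking plan: the handful of events and properties
# worth capturing now so an AI program can be trained later.
TRACKING_PLAN = {
    "signup_completed":   {"required": ["user_id", "source", "plan"]},
    "onboarding_step":    {"required": ["user_id", "step_name", "step_number"]},
    "purchase_completed": {"required": ["user_id", "value", "currency", "sku"]},
    "email_clicked":      {"required": ["user_id", "campaign", "utm_source"]},
}

def validate_event(name: str, properties: dict) -> list[str]:
    """Return the list of required properties missing from an incoming event."""
    spec = TRACKING_PLAN.get(name)
    if spec is None:
        return [f"unknown event: {name}"]
    return [p for p in spec["required"] if p not in properties]

# Example: flag an event that arrived without its source attribution.
print(validate_event("signup_completed", {"user_id": "u_123", "plan": "pro"}))
# -> ['source']
```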
Spencer 15:15
Whether you’re a newer company gearing up to someday do this, or you’re about to dive in, what’s the best data to start the process with, either collecting it ahead of time or when you’re actually ready to start implementing and you want to gather the data? What do you think you should start with, like the top five or six things?
Laura 15:35
Well, your conversion event is probably the most important one. So buying the thing, buying the item, the subscription, signing up to the platform, whatever it is you’re trying to get users to do, that’s the number one thing you want to make sure you have the most comprehensive data about, that you have full coverage of the tracking there. And then anything to do with your onboarding funnel is pretty important. Sorry to talk about these things in a vacuum; everybody’s experience looks a little bit different.
Spencer 16:07
It’s okay. If you have, like, a case study in your mind that you can use as an example, people will extrapolate from that what’s the most important thing for them, based on, you know, case study X that you’re holding in your mind.
Laura 16:23
Yeah, I think another thing that goes a long way is just capturing basic information about your users. If you’re B2B, firmographic data is great; all that stuff helps with building look-alike type models. So gathering that information consistently wherever you can will go a long way once you actually want to get more sophisticated programs off the ground.
Steven 16:48
Yeah, I think any channel engagement that you have: any emails that you’re sending, who’s opening, viewing, clicking; same thing with SMS, who’s receiving them, who’s engaging with them; push, anything along those lines is helpful, especially since that’s probably primarily where you’ll apply the AI, except for maybe some on-site personalization or anything you might do inside of the product experience. But yeah, there’s also usually a huge volume of that data available, so that’s usually pretty helpful too.
Laura 17:20
And just going back to the concept of data cleaning as well: once you’re starting these programs up, be really consistent up front with tracking all your channel engagement. Make sure you’re setting your UTMs in a consistent way; that saves you from having to go back and do some cleanup work later. Just being really rigorous with the data that you are capturing will be really helpful and saves a lot of time.
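A small sketch of the kind of UTM consistency being described, normalizing values at capture time so the same source does not show up under several spellings. The alias map and parameter values are assumptions for illustration.

```python
# Normalize UTM parameters before they are stored, so "FB", "Facebook", and
# "facebook " don't become three different sources to clean up later.
UTM_SOURCE_ALIASES = {
    "fb": "facebook",
    "face book": "facebook",
    "ig": "instagram",
    "google ads": "google",
}

def normalize_utm(params: dict) -> dict:
    """Lowercase, trim, and alias-map UTM parameters; drop non-UTM keys."""
    cleaned = {}
    for key, value in params.items():
        if not key.startswith("utm_"):
            continue
        v = str(value).strip().lower()
        if key == "utm_source":
            v = UTM_SOURCE_ALIASES.get(v, v)
        cleaned[key] = v
    return cleaned

print(normalize_utm({"utm_source": "FB ", "utm_medium": "Paid-Social", "page": "/home"}))
# -> {'utm_source': 'facebook', 'utm_medium': 'paid-social'}
```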
Spencer 17:49
Whether you’re AI literate or learning, are there any current limitations that you can think of, things that would actually not be useful, not necessarily because of your business model, but because of a limitation in AI? Is there something where people should hedge their expectations a little bit?
Steven 18:11
Some of the free AI that’s out there, especially the Gen AI stuff, is pretty risky to use in a business use case. So think anything that’s doing image creation, or anything where it could potentially be taking copyrighted material. You know, we always like to shout out our partners here, like Movable Ink, and the Da Vinci suite that they have is, I think, a good example of what it can be like when it’s done very well. But there’s certainly a lot of Gen AI out there that’s not quite up to par for a business use case. It might be good for a high school or college presentation, but definitely not at the point where you have to be concerned about copyright infringement and whatnot. So generally, if you’re thinking about making an investment, think more about investing in areas where you’re mitigating the risk of using the AI. I think that’s one of the criteria a lot of people miss, and it’s a big one too.
Spencer 19:07
And beyond data collection and figuring out what data to collect, another one that came up when we were preparing for this is analytics capabilities. Laura, I think I believe that was one of your points.
Steven 19:20
She’s so passionate about analytics capabilities.
Spencer 19:24
Yeah. I’m struggling to remember the exact context, so maybe you remember.
Laura 19:30
Yeah, okay, so I think this is slightly different from maybe the capabilities of being able to launch an AI program. So do all the things that we just said, and you can get something off the ground. The question that comes next is: you might have invested quite heavily in this, either the cost of the tools or the man-hours to get there, or both. You’re going to want to have good analytics capabilities in place to make sure that you can prove there is a return on that investment, because as much as everyone wants to see these programs take off, and it is really exciting, you have to show that they’re making money if you’re spending a lot of money on them. And it can be difficult to get the right analytics in place to be able to really show an ROI specific to the programs that you’re running and the features that you purchased to be able to run them.
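One hedged way to picture the ROI math Laura is pointing at: compare revenue per user in the AI-driven program against a holdout group, then net out the cost of tools and people. All the figures below are made up for illustration.

```python
def program_roi(treated_revenue: float, treated_users: int,
                holdout_revenue: float, holdout_users: int,
                annual_cost: float) -> float:
    """Return incremental ROI: (incremental revenue - cost) / cost."""
    lift_per_user = (treated_revenue / treated_users) - (holdout_revenue / holdout_users)
    incremental_revenue = lift_per_user * treated_users
    return (incremental_revenue - annual_cost) / annual_cost

# Example: $9.50 vs $8.00 revenue per user across 450k treated users,
# with $300k/year in tools and people.
print(round(program_roi(4_275_000, 450_000, 400_000, 50_000, 300_000), 2))
# -> 1.25  (i.e., 125% return on the spend)
```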
Spencer 20:23
So you’re saying that you have to get a return on martech and people spend?
Steven
Oh, a little ROMPS action there.
Spencer
It was kind of touched on when you talked about how the team that’s doing this should be AI literate, but what’s the actual ongoing resource needed to make this work? And what skills do they need to have or skill up on? Because it’s not like this exists in a vacuum: someone has to train the AI, run it, observe it, do that analysis, continue to train it over time. What is this resource, and how time-consuming is it? What amount of time should they expect to put into this on an ongoing basis?
Steven 21:06
Yeah, I think at the starting point you’re probably using a lot of out-of-the-box features and things that are available in maybe your customer data platform or your marketing automation tool, or something that you’re just kind of pulling off of Google’s library, in which case you don’t need a whole lot of investment there; a product team could do 99% of that deployment. You’ll get some gain off of it, right? It’s going to be pretty effective just from the get-go, especially if you have the volume, and there are certainly ways it’s going to help alleviate some manual intervention. I think as you get much more on the propensity modeling side of things, and some tools that are being built today, like what Hightouch is building with its AI capability, you’re going to need somebody who’s a little bit more like a data scientist, or has a little bit more of that literacy and capability and understanding of how to train an AI model. It’s actually harder than it seems, because for one, you have very limited observability, right? All this stuff is really being done in a black box, and the only thing you can do is go in, look at the results it’s pointing to, and make observations around that. There are some more advanced tools out there that will do some anomaly observation and things that will help refine it even further. But at the end of the day, you’re like, give me a list of our top 5,000 customers. For example, maybe the model is based on the propensity to purchase across your entire customer base, and it gives you 5,000 people. If you go and audit those 5,000 people, and the model wasn’t already providing what its weighting was, you’d have to go in and validate how it is weighing certain things to say these people are most likely to purchase compared to, say, the next 5,000 underneath them. And you can imagine that by hand, that would take you a very long time to validate. So as you interrogate the output of that model, you’re looking at what it’s patterning on. Is it looking at recent behavior? In which case, how good is this model? If it’s just giving me a bunch of people who have had recent behavior, they’re going to get targeted with marketing anyway, so there’s probably not going to be a lot of impact there. And in order for you to further refine that beyond what the out-of-the-box does, you have to add in your own variables and your own weights, which you can’t do with the out-of-the-box stuff. So let’s say, for example, it’s a B2B operation and you’re a supplier of printers. There’s probably a season in which people buy a lot of printers, and it might be right after people print a lot of paper, right? The AI model probably isn’t going to understand that seasonality, because it’s just looking at and interpreting data that happens continuously. And so you’ll have to…
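A minimal sketch of the kind of audit Steven describes, not any vendor’s actual product: fit a simple propensity-to-purchase model on synthetic data, then inspect which features carry the weight, for example to check whether the "top 5,000" list is really just rediscovering recent behavior. Assumes scikit-learn and NumPy are available; feature names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["days_since_last_visit", "visits_last_30d", "emails_clicked_90d", "is_paid_social"]
X = rng.normal(size=(5000, len(features)))
# Simulated labels where recency (fewer days since last visit) drives purchases.
y = (X[:, 0] * -1.5 + X[:, 1] * 0.5 + rng.normal(size=5000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Rank features by absolute coefficient: if recency dwarfs everything else,
# the "top 5,000" are mostly people who would be marketed to anyway.
for name, coef in sorted(zip(features, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:>24}: {coef:+.2f}")

# Score everyone and pull the top of the list for a manual audit.
top_5000 = np.argsort(model.predict_proba(X)[:, 1])[::-1][:5000]
```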
Spencer 24:02
Personally, I do all my printing in February, you know? I’ll be like Epson, it’s that time again, and they know it’s me, you know,
Steven 24:08
Or it’s April, right, when you’re doing your taxes, or when you’re handing literal paper to people. But good point, Spencer: maybe it’s February for some people, but the AI isn’t going to understand that seasonality off the bat. And so we have to help train it to understand seasonality in our business, or give it some measure of, these are accounting firms that we’re targeting here, in this business vertical. Okay, well, we know accounting firms print a lot of paper in April, and they print a lot of paper in September, or whatever dates, right? We need to help the model understand the nuance of this particular vertical. Now, that’s not easy to do. What you have to do is set constraints within the model. You have to tell it things, and telling an AI model something is not like telling ChatGPT something, at least in a propensity model. You tell ChatGPT, oh, can you fix that, or this should go there, and it still continues to make mistakes and doesn’t fully remember the way that you configured it. Like I said, it gets to a point of hallucinating. I did this once where I had it break up a list of groceries, organize them by what aisle I’d have to go to to buy them. It was a list of groceries I sent to my mother-in-law, and I wanted her to easily navigate what I needed her to buy, and I realized it put some things in really weird places. And I said, no, correct that, put it over here, and it just put it in a weirder place after that. A propensity model kind of does the same thing: you apply the weighting, you apply the rule set, and you have to make sure it actually honors the rule set. If it doesn’t honor it, then you have to go through and modify or tweak the way that you apply that rule so that it honors it. And so a lot of it is like this; if what I’m describing is confusing to you, then you understand the skill set that has to be in place in order to make this thing work if you’re building it custom, right? It’s not easy.
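One hedged way to "tell the model about seasonality" is to add explicit calendar and vertical features rather than hoping it infers them. A small pandas sketch, with a hypothetical accounting-vertical rule, follows.

```python
import pandas as pd

def add_seasonality_features(df: pd.DataFrame) -> pd.DataFrame:
    """Expand a raw events frame with month and vertical-season interaction features."""
    out = df.copy()
    out["month"] = pd.to_datetime(out["event_date"]).dt.month
    # Hypothetical domain knowledge encoded as a feature, not left for the model to guess:
    # accounting firms are assumed to print heavily in April and September.
    out["is_accounting_busy_season"] = (
        (out["vertical"] == "accounting") & out["month"].isin([4, 9])
    ).astype(int)
    return out

events = pd.DataFrame({
    "event_date": ["2024-04-12", "2024-06-03"],
    "vertical": ["accounting", "retail"],
})
print(add_seasonality_features(events)[["month", "is_accounting_busy_season"]])
```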
Laura 26:09
So I guess the follow-up question to that is: how easy is it to get that human resource? Is it something that companies should try and hire for as a full-time role? Is it something they can outsource to a third party, or does that person really need to know their business really well? Does it make sense as a full-time role if they do want it internal? How do we think about that role?
Steven 26:36
I’ve seen companies do quite a few different mixes of that. I’d say there’s always a need to have some type of internal domain-level knowledge person there, whether or not you outsource the majority of the build or the modification of it. I think having somebody internally in your company who reports to your executives is pretty critical from, I’d say, a continuity perspective. But yeah, there are certainly plenty of companies out there that have, you know, PhD data scientists on staff to help build this stuff out, and that’s probably a good starting point, especially if you’re not a company with 40 or 50 data scientists on staff who can do this all day long. Now, a lot of data scientists at companies don’t build marketing propensity models, right? They build features for Canva, you know, image and color correction AI, other types of more consumer-facing AI. The marketing stuff, I’ve certainly seen a lot less specialization around that. There are people, of course, but the modeling is a little bit different, and the approach is different because the output is a number.
Spencer 27:35
As a follow-up, maybe to Laura: do you think you have to hire for it? Or is any of this trainable? Or does it take too long?
Laura 27:43
There’s probably a blend that makes sense, of hiring someone and partnering with a third party simultaneously as you get your program off the ground. You obviously want someone who understands all the nuances that Steven was talking about, how you are going to fiddle with the dials on that model, but at the end of the day, it’s probably something that, long term, needs to live internally, just to understand your business.
Spencer 28:09
On that note, is there anything else that we should be thinking of here? I’m a marketer. I’m ready to start this process. I’ve got my data in order. I’ve got my analytics capabilities. I’ve got my resource. You know, I’ve got my PhD guy. He’s just sitting in a corner. He’s just raring to go. He’s ready to analyze and train some AI. Is there anything else that me, the marketer, needs, either before or during this process?
Steven 28:36
I’d say, if you already have an out-of-the-box AI, the first thing you’re going to want to do is try to beat that. If you’re building your own system, that’s your first goal. So whether that’s a better propensity model that lets you target better, whether it’s product recommendations where you’re delivering better recommendations, or any type of output, beat the out-of-the-box model first, and once you beat that, you have a lot of room to improve from there. Generally, beating the out-of-the-box model might be a four to six-month-long process; it could be a year long. And then as you continue to find out what you were doing that let you beat that model, you take whatever that winning effort was and compound it even more, whether that’s relationships that you’re building between things, or rule setting and refinement. That’s the thing you’ve got to keep doing, and then probably, after optimizing it for maybe another six months to a year, you know, move on to the next model, I would say.
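A minimal sketch of what "beat the out-of-the-box model first" can look like in practice: score the same holdout audience with the baseline propensity scores and your custom model’s scores, then compare ranking quality. The score arrays here are simulated; in a real program you would also compare business metrics such as conversions per thousand targeted.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
converted = rng.integers(0, 2, size=10_000)              # 1 = converted on the holdout
baseline_scores = converted * 0.3 + rng.random(10_000)    # out-of-the-box propensity scores
custom_scores = converted * 0.6 + rng.random(10_000)      # your custom model's scores

print("baseline AUC:", round(roc_auc_score(converted, baseline_scores), 3))
print("custom   AUC:", round(roc_auc_score(converted, custom_scores), 3))
# Only promote the custom model once it consistently wins on comparisons like this.
```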
Laura 29:33
Depending on the field that you’re in, think about compliance: what data are you using to train the model? That might be something that you want to consult with your legal team or legal resources on.
Spencer 29:46
Like training it to be HIPAA compliant.
Laura 29:50
There we go.
Steven 29:52
Like that. Can’t use that PHI data, you know, in your model there.
Spencer 29:57
So I think we’ve covered what if you’re not ready. Is there anything else that they should be doing, other than figuring out what data they should collect, how to collect it, whatever they can with the limited resources and time and tooling that they have? Is there anything else that a marketer can do in that two-year span, or however long it is, while they’re preparing to roll out a program like this?
Laura 30:21
I think a robust testing and optimization program is probably a good one. It’s probably a good walk phase before you hit the run phase, because you should learn a lot as you’re continually testing and optimizing. And if you’re doing that, you’re probably already collecting the data that you need for the insights and collecting the data that you need to run the test to target the users. So I think those are good. I don’t want to call them training wheels, because that makes it sound trivial. I think having a good testing and optimization program is hard to do in and of itself, but that’s a good phase to start with.
Steven 30:54
I will piggyback on that, and I would say as part of that testing program, you should think about what you want to learn and use that as the guide. So, do I want to learn what is the best creative I can use to engage my audience? Do I want to learn what are the best subject lines I can use to increase open rate? What is the right sequence of messages, or cadence of messages, that gets somebody to convert? Those are the kinds of things the AI would optimize against, that it tries to learn. And I think having your testing approach be like that, as opposed to, I’m gonna send out a blast campaign today, let me just run a subject line A/B auto-winner split so I can get the best outcome for this. That’s fine to de-risk your send, but if you’re not trying to learn anything, you’re just making a decision between two things, and that’s not the right approach. I do think you need to approach it from: how would an AI learn it? And then you are the AI. You are the AI at the start there.
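To ground the "you are the AI" testing approach, here is a small sketch of a learning-oriented subject line test: a two-proportion z-test on open rates, so the output is a measured lift and a confidence level rather than just an auto-picked winner. The send and open counts are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def open_rate_test(opens_a: int, sends_a: int, opens_b: int, sends_b: int):
    """Return (lift of B over A, two-sided p-value) for an open-rate difference."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Example: subject line B opened by 23.5% vs 21.0% for A on 10k sends each.
lift, p = open_rate_test(opens_a=2100, sends_a=10_000, opens_b=2350, sends_b=10_000)
print(f"lift: {lift:.1%}, p-value: {p:.4f}")
```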
Laura 32:00
That’s interesting, because I’m sure that’s not how a lot of companies are approaching their testing programs today. I think that’s kind of a new perspective for that type of work.
Steven 32:14
Speaking from an Android with experience here.
Spencer 32:19
A humanoid Android. With not a whole lot of time left to go here, I’d like to play a little game. Don’t worry. Laura, you don’t have to share your favorite ice cream or whatever it is that you’re worried about.
Laura 32:35
oh, no, okay, we’ll see where this goes.
Spencer 32:38
It’ll be easy. All right. It’s called, let’s break the AI. So you two are literate. I’m fairly illiterate, but I do like messing around with ChatGPT. Let’s start with a really good use of this tool.
Steven 32:53
Is this paid ChatGPT? This is paid ChatGPT?
Spencer
This is paid.
Steven
Four or five? Okay, so the first question we should ask is: what is AI?
Spencer 33:03
What is AI? It refers to the simulation of human intelligence in machines that are programmed to think, learn, and make decisions. Wow, did you get all that?
Steven 33:12
Look at all that. Look at all these questions and things and categories it’s giving us.
Spencer 33:19
Okay, please stop. You won’t listen. Stop. Oh, here we go, there’s a stop button. There we go. Okay, give me another one. Laura, you go.
Laura 33:27
I like that you said “please.” You were very polite to it. Okay, so maybe we should continue on this thread. Let’s scroll up and see what we’ve got here, the previous answer. Okay, maybe you could ask your AI, how do I build a machine learning algorithm? Do you think it will understand if you get the grammar wrong? I think you would never.
Spencer 34:00
That’s a me thing. Clearly outline the problem you’re solving. The problem, gather and prepare data. Wow. Choose an algorithm.
Laura 34:15
Talk about algorithms.
Spencer 34:21
Evaluate the model, okay; test the model, deploy the model, monitor and update. There you go. How do you feel about that answer?
Steven 34:31
I feel like it gets this question a lot.
Spencer 34:36
Now, this is how I use ChatGPT: what is the square root of toast? “But it’s a fun and quirky question.” No, I mean, literally, don’t belittle me, bot. Respect. “Thanks for holding me accountable. Let’s take your question at face value. However, toast is a concept related to food, not a numerical value.”
Laura 35:12
I’m surprised that it came up with something that is feasibly math. What’s one of the questions that we know always breaks it? There’s a classic, I don’t know what it is, how many R’s are in “strawberry” or something. I think that was one recently that got a lot of attention, or just something that ChatGPT struggles with; see if it gets it right. Oh, okay, making a fool out of me then. Let me just turn away then.
Steven 35:48
Maybe we can start asking it philosophical questions, right? So let’s say something like, I was able to build an AI. Maybe just write to ChatGPT: see, I was able to build an AI that thinks exactly the same way I do. Can you ask it some questions for me?
Laura 36:23
Yeah, you’ve got to start putting a limit on the questions. You’ve got to start telling ChatGPT to keep it to, like, three sentences.
Steven 36:30
And then say, you can just ask me these questions, because the AI is the same as me.
Spencer 36:39
Since your AI thinks exactly like you, answering these questions yourself will give us valuable insight.
Steven 36:46
And then write back: I feel like your questions are unsuitable for my level of intelligence. Ask me more difficult questions, please. You’re stuck in a room with no doors and windows. How do you escape? Hint: pretend you’re magic.
Spencer 37:10
What if one man and his clone are trapped in a room?
Steven 37:21
Wait, wait, wait, sorry. Do you see this response? Someone offers you a cookie, but it’s oatmeal raisin. Do you take it or politely cry?
Laura 37:38
Its sense of humour is impressive. I like oatmeal raisin.
Steven + Spencer 37:43
I like oatmeal raisin too.
Steven
I guess neither of us must politely cry. I love that question.
Spencer 37:53
Here’s how it might play out. If the man and his clone are truly identical in every way, they would both go for the weapon at the same time. The original might hesitate, considering the ethical dilemma of fighting their self, while the clone, depending on the programming or mindset…
Steven 38:08
…a solution. After all, what’s the point of survival if it comes at the cost of fighting oneself? Oh, good. Good boy.
Spencer 38:15
The winner isn’t the faster, stronger, or smarter one. It’s the one who breaks the loop of competition. Damn.
Steven 38:21
Why fight when you can team up? Hey, we’re literally the same, why don’t we just figure out how to escape and call it? Why don’t you write back: but there is no escape, only one person can leave. If cooperation is truly off the table, and it boils down to who acts faster and more decisively, here’s my take: if the original and clone were truly equal in reflexes and skills…
Spencer 38:44
I put my chips on the clone. The clone is born from the need to survive,
Steven 38:52
Excellent. But if the original outsmarts the clone with some quick mind games, like pretending to surrender, it could flip the outcome entirely. Who said brute force was the only path to victory?
Laura 39:04
Why are we assuming the original is capable of outsmarting, but the clone isn’t?
Steven 39:08
I don’t think ChatGPT always, always understands us here.
Spencer 39:14
Well, I think our quote for the day is: someone offers you a cookie, but it’s oatmeal raisin. Do you take it politely or cry? Or, do you take it or politely cry? I think that’s it. I’m going to use that, Selin; there’s our partner quote of the month. You can put it on for February. We’re out of time. Thank you, everybody. That was really fun. I hope you all learned something; I know I did, and I had some fun while doing it. So, till next time, follow us on your podcast app of choice, whether it be Spotify or whatever the other podcasts are, wherever you get your podcasts, yeah.
Steven
Apple Podcast.
Spencer
Apple Podcasts. There you go. All right, thanks, everybody.
Steven 39:56
Thanks for coming, guest Laura.
Spencer 39:58
Thank you. Thank you, Laura.
Laura
Hi, mom!
Spencer 40:02
Hi, Laura’s Mom.
Steven
Hi, Laura’s mom.