EPISODE 20: TESTING COLD PROSPECTING METHODOLOGIES


Episode 20 Transcript

Alex: Hello, and welcome to today’s episode of the SDRealness podcast, brought to you by Sales Development Revolution, where we are talking with practitioners in the space about their take on important topics. I’m Alex Ellison, here with my co-host, as always, Greyson Fullbright. 

Greyson: Hey Alex, and hey everybody. I’m really excited about today’s topic.

Alex: Me too. This season’s theme is all about ‘see action, take action’, where we’re diving into a specific topic in Sales Development (SD) to learn specific tactics and strategies from experts who use them every day. So, today’s topic is best practices for testing cold outreach cadences and methodologies. Joining us is Ollie Whitfield, the product marketing manager at VanillaSoft. Ollie, thanks so much for helping out with this. 

Ollie: Thanks so much for having me, gentlemen. I am pumped to be here. I’ve listened to a ton of your previous episodes, and you’ve had killer guests. Big boots to fill today, but hopefully we have a good conversation, and folks will be able to apply what they learn right away.

Greyson: Exactly. And I think you have a really good perspective in the industry, Ollie, in terms of seeing the different methodologies, not only the ones that are out there now, but the ones that have evolved over the years. And I think this topic of testing is really important, because a lot of people fall into the trap of sticking with or relying on one methodology, either because it’s the first one they found, or the first one that they found success with.

So, I’m really glad that we’re having a good, tactical discussion about how to test outreach. To kick things off and get things framed, I wanted to get your take on the good versus the bad of how the methodologies are being used. And I brought up some of my own examples a little early, kind of jumping the gun here, but what are you seeing out there that is working, in terms of how people are using one or more methodologies in their outreach, and what are some of the mistakes or struggles that people are having when it comes to actually generating results with them?

Ollie: You know what? It sounds weird to say, but I have not actually ever been trained to use one. Maybe that’s just me, I’m a weird exception to the rule, or maybe everyone else gets training and I’ve just been unlucky, whatever the story is. But, I’ve never been taught: “Here is how you use AIDA.” That’s a pretty common one. I think it’s attention, interest, desire and action.

So, that is how you write your email. But, the problem is, I think you can use a lot of these different techniques, and a lot of different copywriting practices and all the different things. So you could maybe write your LinkedIn post with that in mind, you could write literally a letter, or an email. You could prospect a new podcast guest. You could do a lot of things with them.

And through the nature of that, I think a lot of sales reps, myself included, and a lot of people that I have worked with, we just write what we think is right, we read it back, we send it, and that is our testing. And then, if we get crappy results, it’s “oh, it’s not a good result, or it’s a bad email, or we’re targeting the wrong people”, or whatever. But that is as scientific as we get, and we are really relying on the art of sales to guide us in the right direction there. If we’re a little bit smarter about how we test and what we test, we can get a lot better results and learn some of those lessons a little bit quicker. That’s the main thing I see. We just do something and record it, but we don’t have much else around it for context that helps us work out what we need to do next.

Alex: It’s super interesting, because I think- I agree with you, that’s what I see, and that’s what a lot of people see. And there’s a sort of sense in a lot of sales, about what works and what doesn’t work, when you’re really on the ground level, in the SDR or the outreach role, that it’s very anecdotal: “Well, I passed a lead by doing it with this email cadence”, so it must work. When, at the end of the day, if you send that to 1000 people and you get one meeting with it, it doesn’t work. Sure, it worked this one time, but I think you hit the nail on the head there that there’s a lot more that goes into knowing whether or not a cadence works than just “I got a meeting with it, we’re good to go”, right?

Ollie: The main thing is, it can work perfectly for a different role, or for a different size of company; there is no single definition of ‘works’. So, I think you are right. Maybe Greyson can tell us a bit about his experience with doing it, but if you think about it, if I want to sell to you, Alex, your job is totally different to Greyson’s. So, my messaging might work for Greyson, but not for you, just because of the things that you focus on. It doesn’t mean that it is wrong or right, it is just how I tweak it.

Greyson: Yeah. In my mind, I feel like sales leaders and orgs almost have it inverted in terms of how they think about testing, because I feel like one of the biggest problems when trying to make a system work is when you assume that this is the way you need to do it out of the gate, and at launch, and when you don’t get the results that you expect, it’s kind of a backtrack.

Oh, we didn’t get what we expected, this was supposed to be the system, now we’re going to test and try to figure out how to actually generate what we thought we were going to. Whereas I think that a more realistic approach to testing starts from the opposite point, and says: “Okay, we don’t have any data. We have some methodologies that might be proven in this industry, or in these spaces, or with these company sizes, but we don’t have any hard evidence of what they’re going to do in our space.” And instead of coming into it with the assumption that the methodology is the solution, you use the methodology as a tool, and then test how to best make it fit your space.

And I think that is a good call-out, because what you focus on in your testing is, at the end of the day, really what direction you are going to go in. You can start with a really banger methodology, and know everything about it, but like Ollie says, it might work today, and it might not work ever again tomorrow. And so, if you are not actively testing, even if you have a good methodology, you’re gonna fall short over time.

Ollie: Well, that is why there are so many: SPIN selling, gap selling, [inaudible] selling, Challenger. There are hundreds of them, because it’s all in how it’s applied, really. I can read you ten different email methodologies that I’ve tried; it does not mean that one is better than the other just because I got a better open rate or reply rate. On its own, that is completely irrelevant. It is all about, how do I find what my benchmark is, and set an attainable goal? Like, I want a 10% better engagement rate, whatever the numerical value and the metric is. It is that part, instead of just chasing more opens and more replies. So, it is all subjective. It is about the same metric and a concrete numerical gain on it; that is what you are trying to get.

Alex: Yeah, that makes a lot of sense. And it is a good segue into the next question that I wanted to bring up, which is really about your process when it comes to testing these tactics, outreach cadences, methodologies, those sorts of things. You touched on it a little bit just now, but I’d love to get a more in-depth walkthrough of your process when it comes to testing, to get the reliable results that you really want.

Ollie: Where do we start? A sizeable question. So, what I try to do, and this is an experiment that happens all the time, is research what the best email methodologies are, and there are 2000 trillion blog posts about it if you search on Google. So, I went through a ton of them, and I looked at them, like: that one is okay, that one seems quite good, that one is from a well-known sales trainer who I know, and I have seen them apply it, so I like this one. That one kind of sounds like the other ones I’ve heard of. And I narrowed it down to about 5-10 that I really liked. And then, to start off with, to not kill myself with so much work and so many emails to write, I just picked the best five, in my opinion. Doesn’t mean that they are the best. Just generally, my gut said these 5 are good.

And what I tried to do is just write a very simple cadence, to one job role in one market, with one company range in terms of headcount. So, I picked 200-500 employees. No other variables than that. Only the one role, one market, one size. The only variable, apart from the exact wording of the email, is the way that it is structured. And this is still difficult to test. It’s never ever as black and white as “this is better, and this is worse”. There’s always gonna be reasoning. So, this is why I say it is an ongoing test. I’ve got five different emails, five different methodologies, five different subject lines, five different pitches, three different types of medium inside of the content, and all of that means that you get slightly different results.

But, the core thing I took from it is… So, for email 1, I’ve got a methodology by somebody called Josh Braun. He’s a very well-known sales trainer, and an expert cold email writer. This is his methodology: it is trigger, third-party validation, teach, and tell. That’s how he structures the email. So, I started off with the trigger, why I am reaching out, then the third-party validation about that, then teaching them something and telling them how to fix it. That’s where I started off with that email. What I said in that is kind of similar to the second one, but where I’ve gone different with it is that I’ve had to say it in a different way. Nothing else. Otherwise, I can’t guarantee this test is even worth the time to do.

So, if I change my value prop, for example, or if I change the call to action, it becomes difficult to say what is better or what is worse, because if my call to action is: “Alex, have you got 30 minutes next Friday to talk about this?” and the second email is: “What do you think?”, I would bet my hat that the second gets better results anyway, even if its subject line gets half the opens. So, it’s difficult to test it like that, but what I have tried to do is remove as many variables as possible, like I said. Keeping everything the same apart from just how it is structured, so I can tell.

And I confess, I don’t have a super in-depth guide of which one is the best and which one works better than the other ones. I can guarantee you that the margins are so small at this point, because I’ve not been able to test it as extensively as I would like, and believe me, if I had done years’ worth of testing, there would be an awesome book out on it right now, but… hopefully I get there, and that is kind of how I started it.
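To make the controlled-variable setup Ollie describes a bit more concrete, here is a minimal sketch in Python of how such a test could be tallied. All of the data and field names are hypothetical; the only point is that when role, market, and company size are held constant, the per-methodology open and reply rates are the numbers you compare.

```python
# Minimal sketch with hypothetical data: tallying a controlled methodology test
# where the email structure (AIDA, TTTT, etc.) is the only deliberate variable.
from collections import defaultdict

# Each record is one send to the same role / market / headcount band.
sends = [
    {"methodology": "AIDA", "opened": True,  "replied": False},
    {"methodology": "AIDA", "opened": True,  "replied": True},
    {"methodology": "TTTT", "opened": False, "replied": False},
    {"methodology": "TTTT", "opened": True,  "replied": True},
    # ... many more rows in a real test
]

totals = defaultdict(lambda: {"sent": 0, "opened": 0, "replied": 0})
for s in sends:
    t = totals[s["methodology"]]
    t["sent"] += 1
    t["opened"] += int(s["opened"])
    t["replied"] += int(s["replied"])

for name, t in totals.items():
    print(f"{name}: sent={t['sent']}, "
          f"open rate={t['opened'] / t['sent']:.0%}, "
          f"reply rate={t['replied'] / t['sent']:.0%}")
```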

Greyson: Right. Well, you can’t expect one person to manage all that, because I feel like the data would change so quickly at this point. And I think you bring up a really good point about the need for open-mindedness when you are testing. I really admired how, out of the gate, the first thing that you are doing is researching what others are doing, and aggregating industry insights, and then laying it all on the table, rather than what we had talked about earlier, which is the other common approach, which is just choosing one, saying that’s the one, and going with that.

So, I really love this concept of open-mindedness, and I think that any SDRs out there who do have some control over their ability to test their messaging, or to make adjustments to how their cadences are progressing, should really think about that: not trying to depend on just a single methodology, and trying to combine methodologies instead. I think that Ollie has enlightened me to that, because I tend not to pay attention too much to methodologies. I learn enough about them to get them, but I’m not trying to become a master at them. But, there is a very good argument to be made here by Ollie that you could actually become a master of 3 or 4, and become a master of how to combine them, and make really great emails that are unique to you and your own methodology.

The second point that you brought up, which I think is fabulous, and which too many people overlook because no one likes spreadsheets and no one likes data, is this idea of testing focus, of having too many variables. I love it when I go into work and I see them just “testing emails”. They’re not testing a specific thing, or a specific area of the email. They do not have any clear similarities between email 1 and email 2, they’re even pulling from different lists, and it all ends up being this big, ambiguous thing. Like, you did not do anything. It is almost like you took no steps, because you do not know what changes were made and why.

So, I think that is a really good call-out. If you are going to test methodologies specifically, so we’re talking about messaging here, you want to make sure that you minimize as much as possible the other variables that can affect your results. The easiest example is, if you are trying to test two entirely different methodologies, with two different pitches, approaches, and CTAs, you should probably use the same subject line. I feel like different subject lines would really change the results out of the gate if you are just trying to test methodologies. So, that is just an example there, but I just wanted to note those two concepts that I pulled from your last statement, Ollie, because I think those two are really important, not only for doing it right the first time, but also for sustaining it.

Ollie: Well, you know what the cool thing is as well? Let’s say you go for the Josh Braun TTTT method, the trigger, third-party, teach, tell. You might find that you get, let’s say, your benchmark level of engagement on an email. You kind of see the same thing: you are getting the same amount of replies, opens or whatever. Okay, great. That is cool. It doesn’t mean you have to do anything different, or that you have to scrap it. You could maybe add something to it.

So, everyone knows that video is extremely hot at the moment in sales. Possibly, adding a video in there would work, or a [inaudible], there are loads of different tools to do it. And some of these email methodologies, they do lend themselves a bit better to it, or maybe- There’s one, I believe it is ‘before-after-bridge’, BAB. So, maybe the bridge part would lend itself to a funny gif, or something like that, to mix up the medium, because otherwise you just have plain text. We’ve all seen it before, and that’s kind of boring, you could say. It depends. You’re not gonna go and send a Simpsons gif to a CFO, potentially. Especially if they are a massive enterprise company, maybe don’t do that. But it’s up to you.

When I can, I try to mix it up, so if I can do a video where it makes sense, and where I think it fits, especially to break up the monotony of text emails, then I will do it. But get the benchmark first, so you’re not left asking: “Well, was it the video that caused them to not engage as much with this email?” Because that is a variable. So, stage 1 is: get the benchmarks. Stage 2 is: add the variables based on what you think from the benchmarking.

Alex: Yeah, and the benchmark might be even the most important part of this conversation, because if you don’t know where you are, you can’t figure out where you’re going. You need to have some sort of baseline, especially because every industry, every market, every company is different. So, if you come to a new company, and maybe you are used to having a 40% open rate, and at the new one it’s 20%, that could have nothing to do with the messaging; that might just be the market. You don’t know until you run some tests to find that benchmark for the industry or the vertical that you are reaching out to. And from there, that’s when you can start to, one at a time, slowly add these variables to figure out how they then affect the process or the results, right?
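As a rough illustration of that “benchmark first, then add one variable” flow, here is a small sketch with entirely made-up numbers: the baseline reply rate, the variant reply rate, and the 10% relative improvement target are all hypothetical. The point is simply that a new variable (say, adding a video) gets judged against the benchmark plus an attainable lift, rather than against raw opens or replies.

```python
# Minimal sketch with hypothetical numbers: judge one new variable against the
# benchmark plus a modest, pre-agreed relative improvement target.
def rate(events: int, sends: int) -> float:
    """Simple engagement rate; guards against dividing by zero."""
    return events / sends if sends else 0.0

# Stage 1: benchmark from the normal cadence, before changing anything.
benchmark_reply_rate = rate(events=24, sends=400)   # 6.0% baseline (made up)

# Stage 2: one new variable, tested on a separate batch outside quota-critical activity.
variant_reply_rate = rate(events=9, sends=120)      # variant with video (made up)

target_lift = 0.10  # aim for roughly 10% relative improvement, not a moonshot
needed = benchmark_reply_rate * (1 + target_lift)

if variant_reply_rate >= needed:
    print(f"Variant clears the target: {variant_reply_rate:.1%} vs {needed:.1%} needed")
else:
    print(f"Keep iterating: {variant_reply_rate:.1%} vs {needed:.1%} needed")
```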

Ollie: Yeah. It’s… Go with your benchmarking, and don’t shoot for the moon straight away. Obviously, just by going with your gut, you’re not gonna blow your current stats out of the water, you’re gonna get marginal improvement, and that is what you should probably [inaudible]. So, if your test email performs way worse, that’s all right. It’s only a test. The most important thing is, never ever let your testing get in the way of your quota. So, this is potentially a little bit difficult, but it is one of the hurdles that you have to overcome if you’re gonna test that extensively at all.

Don’t think you’re gonna make 50 cold calls, for example, and you’re gonna test the new methodology on 25 of them, because if you need those 50 calls to meet your quota, then you’re jeopardizing that. I’m not saying it’s gonna ruin you and you’ll get nothing, but you’re putting that chance out there that this new tactic is not going to work as well.

Same thing for email. So, if you have to send out 1, or 50, or 100, or even 25% of that, whatever the number is, I would do it outside of your existing activity, just to alleviate the risk. That is the first thing I would- I probably should have mentioned that at the very start, to be honest. If you’re gonna do it, make sure that you’re not losing anything because you’re chancing it on a different result.

Greyson: I completely agree, and I think, I believe Jake Dunlap has talked about this, had some very good advice about this, essentially saying that if you are wanting to test something, to make an improvement, and you’re not in a position where you can make decisions for yourself and you need to get buy-in from the org, he suggested going to your leader and saying: “Hey, I’m gonna keep doing everything that you guys ask me to do, to the T. If you want me to do 50 calls, I will do them. And I’m gonna do that, but then I’m also going to be doing my own thing when I have my own time, and I’m gonna do it over this period, and then actually show you the differences.”

And I think that the real gold from that advice is less about the testing and more about the communication with the manager, and saying: “I want to collaborate with you, and make this a team adventure for the benefit of the company, to get results.” It is not about the SDR’s ego, which can often creep in. Every salesperson thinks they’re the best salesperson, or will be one day. So, it’s easy to get trapped in this ego battle, but if you involve the managers and involve your leaders, and say: “I found these insights that I think are really gonna work, I don’t wanna jeopardize my quota here, so I’m gonna keep doing the things we’ve set, but can I try this?”

I definitely don’t think that it hurts to ask, and this pairs well with the last question that I wanted to ask Ollie before we wrap up, which is really around general best practices and advice for an SDR who’s trying to start their own testing system, whether it be for cadences, for multiple channels, maybe for methodologies. And I want to start where we are at now, with this relationship between the SDR and the manager when you’re trying to start something like this. Do you have any advice or tips that you would give an SDR?

Ollie: Yeah. To repeat the point from a moment ago, because I think it is that important, I generally ask for forgiveness instead of permission, for the most part. Obviously there are definitely exceptions to that. I think, so long as you’ve got a good intention, which this absolutely is, you are going to be all right, as long as you don’t incur huge costs, or any unforeseen problems like that, which this won’t.

You’ve already got contact data, you’ve already got email capability, so you can do that. Aside from that, the next thing is just to go and remove every single barrier as best as you can. If that means that you test two email methodologies for the next month, that’s fine. If that means you do 10 a day, and it takes you an hour after work, that’s fine. You don’t have to do 5, you don’t have to do 10. You don’t have to send it to 200 people, or 20, or even 10.

Whatever the number is that you can do, and that you can do well, that’s absolutely fine, and it’s better than just sticking with what you’re doing, because think about the alternative. That’s like going to my manager and saying: “Look, I got this cool idea, I’m gonna try and do it.” Meanwhile, the clock is ticking while he’s listening to me talk about it. “Here’s how I’m thinking about it, I’m trying to do this. What do you think about it? Can you give me some feedback? Maybe you can spend some time reviewing my emails before I send them.” Time-time-time-time-time. Lots of expensive manpower going into that, before it even sees the light of day in a prospect’s inbox. So, for me, when it’s just testing, I’m just gonna do it in the safest and most appropriate way that I can, making sure that I have put my best effort forward to begin with, before I ask for anything, so that I can say: “Look, I tried this thing, it works this much, I thought it would work that much, could you give me a hand?

“And by the way, just so you know, I’m not sacrificing any of my normal activity to do this. So don’t worry about my number. I’m safe for it, as you know. You can see my stats in our reports and so on, but I’m just trying to do this to see if we can find other ways to break through into our market. What do you think?” Buy-in, automatically, because there’s no loss of anything else, and you’ve gone above and beyond, and it is minimal time spent on that part. That’s how I try to approach it.

Alex: Yeah. And that’s great too because, I know it’s not the subject of this episode, but it shows that self-starter attitude, a lot of that go-getter-ness- That’s not a real word, but that ability will help you in your career path, is really what I’m saying. By taking some initiative yourself- We did a podcast episode with [inaudible] where we sort of touched on this, that if you take that initiative yourself, put in some work on your own, and then bring it up to the people above you, they’ll be really impressed, whereas if you just go up to the people above you with questions, depending on who they are, they may just be annoyed. They might just be like: “Oh, okay. Go ask someone else. I don’t have the time for you.”

But, having that, being prepared for the conversation, coming in with some data to back it up, goes a long way towards that, and I think overall, everything you touched on was huge. Testing should be such an integral part of the SDR’s day-to-day process, but I don’t think enough of them are doing it.

So, Ollie, thank you so much for hopping on with us to talk about testing, how to do it, best practices, bad practices, things like that, and hopefully some listeners can take this away and go start testing themselves. If anybody has any questions for you, Ollie, or wants to learn more about VanillaSoft, how can they get in touch with you?

Ollie: Is it really boring to say LinkedIn? I love it when people say that. That is what every single person, on every podcast of all time, has ever said. But, yeah. I have a weird name, so I’m probably close to the top of the search results, which is a bit of an advantage, a bit lucky for me, but… yeah. I spend most of my time on LinkedIn. It is my Facebook. I hate Facebook. I like LinkedIn. So, I’m always on there.

Alex: There we go, there you have it. Find Ollie on LinkedIn, he’d love to connect. This has been Alex and Greyson, with the SDRealness podcast. Until next time, SDRs, keep it real.
