Brand Storytelling Put to the Test

For years now we’ve been hearing about the importance of story in marketing and branding efforts. From Seth Godin’s All Marketers Are Liars to Jim Signorelli’s Storybranding and everything in between, we’ve been told that the primary job of branding is to establish an organization’s story and the goal of marketing is to spread that story. You’d be hard-pressed to find an agency that doesn’t espouse some version of “helping you tell your story” on their website. It’s become conventional wisdom that the best way to communicate authenticity and connect with an audience is through story. We’ve accepted it at face value because some organizations have done it really well and experienced enormous success (Google, Chipotle, and Dove come to mind).

The science also supports the theory that storytelling connects with audiences better than other forms of communication. Cognitive scientists like Steven Pinker and H.P. Newquist have found that our brains process new information using the same cognitive zones we use to create stories. Biologists argue that 100,000 years of evolution as storytelling beings have hardwired our brains to interpret the world through story. Psychologists Roger Schank and Robert Abelson hypothesized that stories help us remember new information because they provide context and greater sensory detail to attach to prior knowledge. Communication theorist Walter Fisher, who developed the narrative paradigm, conceptualized humans as primarily storytelling beings who use story to understand the world around them.

But the thing about conventional wisdom, as we’re reminded time and again, is that it’s often wrong. So I got to thinking. As marketers, we’ve started taking it for granted that our audiences want stories. That’s a hard thing to test and measure. And in general maybe we don’t test our assumptions often enough. It can be hard (and ill-advised) to start questioning people who have been successful at something for a long time.

But change is the nature of our business. And if we're not changing with it, we're dying.

So I took an opportunity while working at a private university to test the storytelling theory out on over 15,000 prospective college students.

Want to know what I learned?

Here’s what I did

How do you test an audience’s response to story? Well...you pit a story against a non-story, or varying degrees of story, and see what happens.

So I wrote 4 different versions of an email to send to the 15,399 high school seniors who had requested information about the university in question. Each version delivered a message along the same branded theme of excellence in scholarship, leadership, and service, but each used a distinct communication style.

  • Email 1 used the university’s formulaic and style guide-prescribed brand language.
  • Email 2 clearly spelled out the 3 elements of scholarship, leadership, and service in expository prose.
  • Email 3 showcased a profile of a recent graduate written in a third-person journalistic style.
  • Email 4 featured a profile story of that same recent graduate told in the first person, using direct quotes from the profile subject.

All of the emails included an “Apply Now” call to action.

I then randomly divided my audience into 4 groups (about 3,850 prospective students in each) and sent each group a different email.
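For anyone who wants to replicate a split like this, it can be sketched in a few lines of Python. The audience here is a range of stand-in IDs and the seed is arbitrary; neither reflects the actual student data:

```python
import random

def split_audience(audience, n_groups=4, seed=2014):
    """Shuffle the audience once, then deal it into n roughly equal groups."""
    pool = list(audience)
    random.Random(seed).shuffle(pool)  # fixed seed so the split is repeatable
    # Slicing with a stride deals the shuffled pool round-robin into groups
    return [pool[i::n_groups] for i in range(n_groups)]

# 15,399 stand-in IDs for the prospective students
groups = split_audience(range(15399))
print([len(g) for g in groups])  # four groups of ~3,850 each
```

Shuffling once and dealing round-robin guarantees the groups are as close to equal in size as possible, with no student landing in two groups.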

I assumed, if all the story hubbub had any validity, that the 4th email would blow past the others like Usain Bolt in a footrace with anyone who’s not Usain Bolt.

I believed prospective students receiving that email would connect with the main character of the email, resonate with the story told in the first person, start to see themselves as a possible character at the university, and respond to the message by opening the email and clicking on the call to action with greater frequency. All would be right with the world. Conventional wisdom confirmed. Marketers could keep telling stories and audiences would keep lining up to hand over their money.

Here’s what happened

And that’s exactly what happened! Sort of.

Here were the results.

[Chart: Email Results]

As you can see, email 4 (the story email) did perform significantly better in terms of open rates, and more total prospective students clicked through. But email 4's click-through rate didn't perform nearly as well as I had expected (or hoped), and it actually lost to email 1. (Boring email 1? Really, 17-year-olds?)
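That combination — more total clicks but a lower click-through rate — only reconciles if the rate is measured per open rather than per delivered email, since the groups were the same size. A quick sketch with made-up counts (these are illustrations, not the study's actual numbers) shows how that happens:

```python
def open_rate(opens, delivered):
    """Share of delivered recipients who opened the email."""
    return opens / delivered

def click_to_open_rate(clicks, opens):
    """Share of openers who clicked the call to action."""
    return clicks / opens

# Hypothetical counts per group of 3,850 recipients
email_1 = {"opens": 1000, "clicks": 150}
email_4 = {"opens": 1400, "clicks": 180}

# Email 4 wins on opens and on total clicks...
assert email_4["opens"] > email_1["opens"]
assert email_4["clicks"] > email_1["clicks"]

# ...yet loses on clicks per open: 180/1400 is about 12.9% vs. 150/1000 = 15.0%
print(click_to_open_rate(email_4["clicks"], email_4["opens"]))
print(click_to_open_rate(email_1["clicks"], email_1["opens"]))
```

In other words, email 4's bigger pool of openers diluted its rate even while it generated more clicks overall — worth keeping in mind whenever two metrics seem to contradict each other.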

What happened?

Let’s start with the open rates, which are significant in their own right when you consider the subject lines.

  • Email 1 Subject Line: [University Name]—Where Excellence Means Something More
  • Email 2 Subject Line: The 3 Parts of a Complete Education
  • Email 3 Subject Line: [University Name] Graduate [Alum Name] ‘14 Strives for Excellence
  • Email 4 Subject Line: Recent Grad [Alum Name] ‘14 Shares His [University Name] Experience

Email 4 clearly indicates that a story will be shared in the message, which likely accounts for the higher open rate of that email.

I even ran a Chi-square test to determine if the open rate of email 4 (compared to the other emails) was different enough from what you might expect, based on sample audience variation, to make it significant. Here were the results:

  • Email 4 vs. Email 1: χ²(1) = 103.50, p < .001
  • Email 4 vs. Email 2: χ²(1) = 51.89, p < .001
  • Email 4 vs. Email 3: χ²(1) = 45.33, p < .001

I know, a bunch of gibberish, amirite? You'll just have to take my word that the results were indeed significant. So the open rates, driven by those subject lines, support the theory that stories will be more effective than other types of messaging.
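For the curious, each of those statistics comes from a 2×2 contingency table: opened vs. not opened, for one pair of emails. The test can be run in pure Python with the standard 2×2 chi-square formula — the open counts below are invented placeholders, not the study's real figures:

```python
def chi2_2x2(opens_a, total_a, opens_b, total_b):
    """Chi-square statistic (1 df) for a 2x2 table: opened vs. not, email A vs. email B."""
    a, b = opens_a, total_a - opens_a  # email A: opened / not opened
    c, d = opens_b, total_b - opens_b  # email B: opened / not opened
    n = total_a + total_b
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Placeholder counts: 1,400 of 3,850 opens vs. 1,000 of 3,850
stat = chi2_2x2(1400, 3850, 1000, 3850)
print(round(stat, 2))  # well past the 3.84 cutoff for p < .05 at 1 df
```

Any statistic above 3.84 means the gap in open rates is bigger than sampling variation alone would plausibly produce at the 5% level, which is all "significant" means here.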

So why didn’t email 4 have a higher click-through rate?

I mean, besides 17-year-olds being a bunch of little punks who live to screw up my tests?

Ahh, the nuances of message testing and data. It really could've been any number of things. Maybe the teenagers in group 1 were .02% more predisposed to clicking links. Maybe it was the photo used in the email header (multiple smiling students in Email 1, individual smiling student in Email 4). Or, and this is the theory I find most plausible, maybe it boiled down to email length.

Email 1 was 115 words. The message and call to action were straightforward and succinct. Chances are good that the Apply Now button was visible as soon as the recipient opened the email, possibly leading to some impulsive clicks before the message was even read.

Email 4 was 427 words (whoa!). It took a little more digging and decoding to get to the call to action. It's highly possible that the email was abandoned before the Apply Now button was reached, and it didn't benefit from impulsive clicks.

What’s it all mean?

I’ll be honest. This test wasn’t nearly as conclusive as I’d hoped. I was looking for a silver bullet. For conclusive data that story is the best and should be used in every instance. For the magic formula that I could take with me into every agency interview and pitch meeting to validate my work and my process.

But alas. I got hints that story is a good method to catch an audience’s attention. But I also saw there are other factors at play when an audience is deciding whether or not to act on a message.

So really my test confirmed what I should’ve already known: In this business, there are rarely perfect, catch-all solutions, and everything should be considered within a larger context.