Part 1: Why It's So Hard to Justify the Cost of a Social Network.
More in category: Longhorn Project Diary
In Part 1 I mentioned an evaluation model, referred to in the training industry as Kirkpatrick's Four Levels of Evaluation. I suggested it might have some application as a framework in understanding social network ROI, and why it's so tough to give "warm fuzzies" to managers who want to know "its value."
Kirkpatrick's Four-Level Evaluation.
The model is a workhorse of sorts in the training industry. It's attributed to learning guru Donald Kirkpatrick. Ask any instructional designer about it and they'll light up and tell you probably more than you want to know. As a matter of fact, allow me (I warned you):
Level 1: Reaction (or satisfaction).
In a nutshell, this level attempts to assess questions like, "Did you like the training?" "What did you think about it?"
You've given Level 1 feedback before. Think to the last company training program you attended that also asked you to complete a post training feedback form. The questionnaire likely asked you to rate different aspects of training by using stars, smileys or "A, B, C" type ratings to gauge your satisfaction with the course.
Applicability in social networks: Just as in training programs, Level 1 evaluations for social networks fall in the area of measuring satisfaction. This could be a questionnaire members complete to rate their satisfaction with features or content (e.g., Least Favorable to Most Favorable; Low to High; "A, B, C, D, F"; etc.).
Whenever you rated a YouTube video, clicked a star on a blog comment, awarded someone a "Best Answer" on LinkedIn, or even Digg'd a blog post (hint, hint?), you gave Level 1 feedback in a social network.
Level 2: Learning (or knowledge retention).
In training programs, Level 2 evaluations include assessments to determine changes in knowledge. This one's a little more involved. Instructional evaluators usually--but not always--devise some kind of pre-test and post-test.
You take a test before going through training so evaluators can establish a "baseline." Then, you complete the training program. After training, you take a post-test. The results are compared with the baseline to determine if there was a change in the thing that was being evaluated. (Usually your knowledge and retention of the subject matter.)
You participated in Level 2 evaluations all the way through school whenever you took a quiz or an exam.
Applicability for social networks: Just as with training programs, here we want to think a step beyond simply measuring satisfaction, toward measuring changes in the quantity or quality of relationships the network generates, or changes in collective wisdom that result from implementing a social network.
Possible metrics that might be used to "prove value" for a social network at Level 2 could include before/after comparisons for metrics such as the following:
- Subscriber counts;
- Unique visitors;
- Returning visitors;
- Page views;
- Bounce rates;
- Quantity of content (both user- and company-generated);
- Quality and popularity of content, and so on.
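To make the before/after idea concrete, here's a minimal sketch of how a Level 2 comparison might be computed. The metric names and numbers are entirely hypothetical; the point is simply that each metric gets a pre-launch baseline and a post-launch reading, and you report the relative change.

```python
# Hypothetical pre-launch (baseline) and post-launch readings for a few metrics.
baseline = {"subscribers": 1200, "unique_visitors": 8500, "page_views": 31000}
post_launch = {"subscribers": 1750, "unique_visitors": 9900, "page_views": 40300}

def percent_change(before, after):
    """Relative change from the baseline, as a percentage."""
    return round((after - before) / before * 100, 1)

# One before/after delta per metric -- the "Level 2" evidence.
changes = {metric: percent_change(baseline[metric], post_launch[metric])
           for metric in baseline}
```

The comparison only means something if the baseline was captured before launch; measuring both sides after the fact gives you a snapshot, not a change.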
Level 3: Transfer (to the real world).
For instructional designers Level 3 is the crux of an effective training program. That is, to what extent can learners take the knowledge gained from the training program and transfer/apply that to the real world?
Level 3 assessments for training programs are a little more involved than those for either Level 1 or Level 2. Such an assessment might involve putting you, the learner, in a controlled scenario and making direct observations about whether or not you can perform "desired behaviors" which the training program was designed to teach.
Think about it this way. When you took your first DMV written test, you were engaged in a Level 2 evaluation. The friend who drove you to the test may have asked, "How'd it go?" That was a form of Level 1. But when you later came back to take the driving test, you were engaging in a Level 3 evaluation; it's designed to see whether you can take the book knowledge (Level 2) and transfer it to real-time performance in the driver's seat (Level 3).
Applicability for social networks: Level 3 gets a little tougher for social networks. To get to this point of evaluation, the implication is that your organization should first have defined the objectives for developing a social network. That is, what desired behaviors, if you will, is your organization hoping to achieve?
And therein lies a rub.
Unless your organization first comes to grips with the business objectives for developing a social network, measuring and managing the value of such an animal isn't really possible beyond Level 1 and Level 2.
Evaluations at Level 3 are further complicated by the need to observe the program's impact in the real world. In this case, a social network's effect on the folks that ultimately benefit from it. And get this, that doesn't necessarily mean its members.
The members of the network and its ultimate beneficiaries are typically--but not always--the same group of folks. Consider, for example, an internal social network developed for sharing expertise between employees, but whose ultimate goal is to benefit customers.
To gauge the real-world impact of a social network, the metrics used in a Level 3 assessment would likely have to be customized with a nod to the desired behaviors the platform was designed to effect. In that sense, each implementation would be different. But, some examples might include:
- Improvements in demonstrated call management behavior by CSRs;
- Improvements in demonstrated employee process knowledge;
- Improvements in demonstrated sales person product knowledge;
- Improvements in the quality of employee interviews.
Level 4: Results.
This is like the holy grail. From a training program perspective, Level 4 assessments try to determine to what extent the program has impacted the bottom line. Metrics at this level might include financial measures like:
- increased sales,
- and the ever elusive return on investment (ROI).
But the scope might also be scaled down to include department-level productivity measures like:
- improved customer satisfaction,
- reduced customer complaints,
- reduced customer call times,
- increased production,
- reduced production defects,
- improved fix response times, and so on.
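For reference, the arithmetic behind that "ever elusive" ROI figure is simple; the hard part, as the next section argues, is attributing the inputs. A sketch with purely illustrative figures:

```python
def roi(net_benefit, cost):
    """Return on investment as a percentage: (benefit - cost) / cost."""
    return (net_benefit - cost) / cost * 100

# Hypothetical: $150k in gains attributed to the program, against a $100k cost.
program_roi = roi(150_000, 100_000)  # 50% ROI
```

The formula itself never changes; what managers actually argue about is whether the $150k can fairly be attributed to the social network at all.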
Applicability for social networks: To a large degree, expectations for Level 4 metrics in a social network don't really differ much from those expected of a training program. It's a tough nut to crack any way you apply it, largely because "bottom line" measures are so broad.
If sales improve after a social network implementation, is it really because of the social network, or could it be a result of an improved economy? It's for reasons like this that training departments typically find it easier to stick with assessments no higher than Level 3. And when they do attempt Level 4, it usually involves scoping things down to a department or group level where metrics can be reported within more defined boundaries.
In a nutshell, all the above is why I believe managers fall into analysis paralysis when faced with a decision to shell out for a social network.
They understand the need for members to be "satisfied." (Level 1.) They also understand the benefits associated with page views, content, and uniques. (Level 2.) But what's really going on is that they're trying to translate all that to a Level 4 outcome: "how much does it cost?", "when's breakeven?", "how much revenue can it be expected to generate?"
Scan the lists above and you'll see our challenge as a community of social network evangelists. We typically use a Level 2 arsenal to answer questions based on Level 4 metrics.
So what can we do?
1. Recognize that when we rationalize the value of a social network development project using the benefits of page views, rankings, visitors and SEO, we're appealing to a set of characteristics that define a snapshot of community activity and popularity of content. It's helpful, but by itself it falls short--about two steps removed, in fact--from delivering that "warm fuzzy" managers are really looking for. Essentially, we're asking managers to bridge the gap themselves between page views and organizational productivity.
2. Develop awareness of real-life case studies at Level 3 and Level 4. There's no shortage of examples of the other two types. If you're like me, you probably have bookmarks and favorites tagged with keywords like SEO, Community Tools, and Cool Stuff. Start one now called "Case Studies" and start bookmarking real-world examples you stumble upon where companies communicated the behavioral improvements (Level 3) and bottom-line impact (Level 4) of their social networks. Later, you can pull from these to develop "takeaways" and handouts for project proposals.
3. When making the case for a social network, in addition to all the change management aspects (see Part 1; also stay tuned for future posts in the Longhorn Project Diaries) make sure to also shop the project internally. Get feedback from key managers about departmental costs and projections of revenue or productivity. Plug these into a P&L spreadsheet. Starting with the baseline projections, you can then add line items to factor in assumptions about the impact of the planned social network on costs (for development, for example) and productivity.
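The baseline-plus-assumptions approach in step 3 can be sketched as a simple calculation. Every figure and line item below is hypothetical; in practice the baseline numbers come from your managers' own departmental projections, and the assumption values are exactly the things you pressure-test with them.

```python
# Baseline departmental projections gathered from key managers (hypothetical).
baseline = {"revenue": 500_000, "costs": 350_000}

# Assumption line items for the planned social network:
# a one-time development cost, plus an assumed productivity lift on revenue.
dev_cost = 40_000
assumed_revenue_lift = 0.05  # 5% -- an assumption, not a promise

projected = {
    "revenue": baseline["revenue"] * (1 + assumed_revenue_lift),
    "costs": baseline["costs"] + dev_cost,
}

baseline_profit = baseline["revenue"] - baseline["costs"]
projected_profit = projected["revenue"] - projected["costs"]
```

Note that with these particular numbers the first-year profit actually dips below the baseline, which is exactly the kind of honest picture that keeps the later conversations with managers grounded.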
I found this last bullet particularly helpful in greasing the skids later in the planning phase when everything looked all but dead. I'll save describing that experience for another post.
In the meantime, you tell me, what else can you think to add to the list above?
12/16 update: As I was catching up on RSS feeds from my favorite blogs, I caught up to Tony Karrer's post last week about 100 Conversations on the eLearning Technology blog. Given the relevance of this subject to the community of eLearning professionals Tony enjoys, I'm including this update as part of the process to add this post to the conversation stream under the ROI category. I'd love to hear from some of you about any case studies or lessons learned you know of around the subject of ROI in learning projects that used social media to meet some of the objectives. Feel free to post a link in the comments to your blog post about it.