Learning Analytics


“If you can’t measure it, you can’t manage it.” — Nolan & Norton Consultants


Business Needs


For decades, companies have struggled to pin down the real costs, benefits and ROI of training. With the tools and technology now available, organizations can apply business analytics to understand the activity, effectiveness and impact of learning and training. The point of an analytics solution is to let an organization understand what is going on in its training and learning operations. To do so, the solution should answer basic business questions such as the following (a minimal sketch of how a solution might answer them appears after the list):
  • How much did something cost?
  • What were the components of the cost?
  • Who took or completed a learning offering?
  • What can we do to improve it?
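
As a minimal illustration of how a solution might answer the first three questions, here is a sketch in Python; the record fields and figures are hypothetical, not drawn from any particular analytics product or LMS:

    # Hypothetical training records; a real solution would pull these
    # from an LMS or a financial system.
    offerings = [
        {"course": "Safety 101", "learner": "A. Kumar", "completed": True,
         "costs": {"delivery": 220.0, "materials": 35.0, "facilities": 40.0}},
        {"course": "Safety 101", "learner": "B. Chen", "completed": False,
         "costs": {"delivery": 220.0, "materials": 35.0, "facilities": 40.0}},
    ]

    # How much did it cost?
    total_cost = sum(sum(r["costs"].values()) for r in offerings)

    # What were the components of the cost?
    by_component = {}
    for r in offerings:
        for component, amount in r["costs"].items():
            by_component[component] = by_component.get(component, 0.0) + amount

    # Who took or completed the learning offering?
    completers = [r["learner"] for r in offerings if r["completed"]]

    print(total_cost, by_component, completers)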

User Profiles


A training analytics solution should give different users the information they need to make decisions. To that end, Bersin & Associates has identified three categories of analytics users, or information consumers, who use information for different purposes depending on their jobs. It is important to keep in mind that these three groups need slightly different views of the information. Executives want dashboards or charts. Line managers typically need tabular reports and charts designed around their audience and programs. Training managers and executives need the ability to slice, dice, drill down and filter information on a continuous basis.

LA_01.jpg

Perception of ROI


If one looks at any facet of business, the concept of “return on investment” (ROI) is always a relevant topic. ROI can have many connotations depending upon the user’s perceptions and motivations. In reality, ROI is a measure of perceived value, and value can differ from one stakeholder to another. Let’s look at some examples:
  • A training coordinator delivers a program to a group of participants. This person wants to know the satisfaction levels of the participants.
  • A course designer creates an e-learning module. This person wants to know if the module did its job in transferring new knowledge or skill to the learner.
  • A business unit manager sends two employees to training. This person wants to know the impact the training has made on the job.
  • A senior executive measures performance by the business objectives that drive the company. This person wants to know the degree to which training has helped drive key business results.
  • The finance group manager views benefit relative to cost on every decision. This person would want to know the benefit to cost ratio, payback period and ROI percentage from training.
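
For reference, the finance measures named in the last example are conventionally computed as follows in Phillips’ terminology: the benefit-to-cost ratio (BCR) is program benefits divided by program costs; the ROI percentage is net program benefits (benefits minus costs) divided by costs, multiplied by 100; and the payback period is program costs divided by monthly program benefits.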

Value is inherent in each of the aforementioned examples. So the first question one should ask when contemplating an ROI solution is, “How does my user of this information define value?” That said, there is a strong need for a balanced approach to learning measurement, one that takes in all stakeholders’ perceptions of return on investment. The best approach to accomplishing this balanced scorecard is the legendary and time-tested Kirkpatrick Model, with a fifth level added by Dr. Jack Phillips.

Kirkpatrick-Phillips Model


Given the definitive need to measure the impact of a large corporate cost like learning, it is fitting to have an industry-accepted model for doing so. The model described here has been in existence since the 1950s, yet it continues to be accepted today, with technology and creativity used to maximize its benefits for the modern corporation.

In 1959, Donald L. Kirkpatrick, PhD, published a series of four articles called “Techniques for Evaluating Training Programs.” The articles described the four levels of evaluation that he had formulated based on the work for his PhD dissertation at the University of Wisconsin, Madison. Kirkpatrick’s goal was to clarify what evaluation meant; his model defined evaluation as “measuring changes in behavior that occur as a result of training programs.” The model itself is composed of four levels of training evaluation. A fifth level, ROI, has since been added, the brainchild of Jack J. Phillips, PhD. The illustration below and the commentary that follows summarize Kirkpatrick’s four levels and Phillips’ fifth level.

LA_02.jpg
Representation of Kirkpatrick's and Phillips' model of learning analytics showing level-wise measurement objectives


Level 1 - Satisfaction


Level One is a satisfaction survey. Per Kirkpatrick, “evaluating reaction is the same thing as measuring customer satisfaction. If training is going to be effective, it is important that students react favorably to it.”

The guidelines for Level One are as follows:
  • Determine what you want to find out
  • Design a form that will quantify the reactions
  • Encourage written comments and suggestions
  • Strive for 100% immediate response
  • Get honest responses
  • Develop acceptable standards/benchmarks
  • Measure reactions against standards, and recommend appropriate action (a simple scoring sketch follows this list)
  • Communicate reactions and actions as appropriate
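
To make the scoring and benchmarking guidelines concrete, here is a minimal sketch, assuming hypothetical 5-point ratings and an internally agreed benchmark:

    # Hypothetical 5-point ratings collected from a Level One feedback form.
    ratings = {
        "instructor": [5, 4, 4, 5, 3],
        "content": [4, 3, 4, 4, 5],
        "facilities": [3, 3, 2, 4, 3],
    }
    benchmark = 4.0  # internally agreed standard for acceptable satisfaction

    for question, scores in ratings.items():
        average = sum(scores) / len(scores)
        status = "meets standard" if average >= benchmark else "needs action"
        print(f"{question}: {average:.2f} ({status})")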

The benefits to conducting Level One evaluations are:
  • A proxy for customer satisfaction
  • Immediate and real-time feedback on an investment
  • A mechanism to measure and manage learning providers, instructors, courses, locations, and learning methodologies
  • A way to control costs and strategically spend your budget dollars
  • If done properly, a way to gauge a perceived return on learning investment

The instruments for conducting Level One evaluations are:
  • “Happy sheets” or feedback forms
  • Verbal reaction, post-training surveys, or questionnaires

Level 2 - Learning


Level Two is a “test” to determine whether learning transfer occurred. States Kirkpatrick, “It is important to measure learning because no change in behavior can be expected unless one or more of these learning objectives have been accomplished. Measuring learning means determining one or more of the following.”
  • What knowledge was learned?
  • What skills were developed or improved?
  • What attitudes were changed?

The guidelines for Level Two are as follows:
  • Use a control group, if practical
  • Evaluate knowledge, skills, and/or attitudes both before and after the program (see the sketch after this list)
  • Use a ‘test’ to measure knowledge and attitudes
  • Strive for 100% response
  • Use the results to take corrective actions
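
Here is a minimal sketch of the before-and-after comparison with a control group; the test scores are hypothetical:

    # Hypothetical pre/post test scores (percent correct) for the trained
    # group and an untrained control group.
    trained_pre, trained_post = [55, 60, 48, 62], [78, 85, 70, 80]
    control_pre, control_post = [54, 58, 50, 61], [57, 59, 52, 63]

    def average_gain(pre, post):
        return sum(after - before for before, after in zip(pre, post)) / len(pre)

    # Net gain attributable to training: trained gain minus control gain.
    net_gain = (average_gain(trained_pre, trained_post)
                - average_gain(control_pre, control_post))
    print(f"Net learning gain: {net_gain:.1f} points")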

The benefits to conducting Level Two evaluations are:
  • Learner demonstrates the transfer of learning
  • Provides training managers with more conclusive evidence of training effectiveness

The instruments for conducting Level Two evaluations are:
  • Typically assessments or tests before and after the training
  • Interview or observation

Level 3 - Impact


Level Three evaluates the job impact of training. “What happens when trainees leave the classroom and return to their jobs? How much transfer of knowledge, skill, and attitudes occurs?” Kirkpatrick questions, “In other words, what change in job behavior occurred because people attended a training program?”

The guidelines for Level Three are as follows:
  • Use a control group, if practical
  • Allow time for behavior change to take place
  • Evaluate both before and after the program if practical
  • Survey or interview trainees, supervisors, subordinates and others who observe their behavior
  • Strive for 100% response
  • Repeat the evaluation at appropriate times

The benefits to conducting Level Three evaluations are:
  • An indication of the ‘time to job impact’
  • An indication of the types of job impacts occurring (cost, quality, time, productivity)

The instruments for conducting Level Three evaluations are:
  • Observation and interview over time to assess change, relevance of change, and sustainability of change

Level 4 - Results


According to Kirkpatrick, Level Four is “the most important step and perhaps the most difficult of all.” Level Four attempts to look at the business results that accrued because of the training.

The guidelines for Level Four are as follows:
  • Use a control group if practical
  • Allow time for results to be achieved
  • Measure both before and after the program, if practical
  • Repeat the measurement at appropriate times
  • Consider costs versus benefits
  • Be satisfied with evidence if proof is not possible

The benefits to conducting Level Four evaluations are:
  • Determine bottom line impact of training
  • Tie business objectives and goals to training

The instruments for conducting Level Four evaluations are:
  • Measures should already be in place via normal management systems and reporting; the challenge is to relate them to the trainee

Level 5 - ROI


Level Five is not a Kirkpatrick step. Kirkpatrick alluded to ROI when he created Level Four, linking training results to business results. Over time, however, the need to measure the dollar-value impact of training became so important to organizations that a fifth level was added by Dr. Phillips.

The guidelines for Level Five are as follows:
  • Use a control group, if practical
  • Allow time for results to be achieved
  • Determine the direct costs of the training
  • Measure productivity or performance before the training
  • Measure productivity or performance after the training
  • Measure the productivity or performance increase
  • Translate the increase into a dollar value benefit
  • Subtract the cost of training from the dollar value benefit
  • Calculate the ROI (a worked sketch follows this list)
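
A worked sketch of these steps, using the standard Phillips formulas with invented figures:

    # Hypothetical figures; the formulas are the standard Phillips ROI
    # calculations, but the numbers are made up for illustration.
    program_costs = 50_000.0       # fully loaded cost of the training
    annual_benefits = 120_000.0    # dollar value of the performance increase

    net_benefits = annual_benefits - program_costs
    bcr = annual_benefits / program_costs              # benefit-to-cost ratio
    roi_percent = net_benefits / program_costs * 100   # Level 5 ROI
    payback_months = program_costs / (annual_benefits / 12)

    print(f"BCR: {bcr:.1f}")                        # 2.4
    print(f"ROI: {roi_percent:.0f}%")               # 140%
    print(f"Payback: {payback_months:.1f} months")  # 5.0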

ROI calculations are being done by a few world-class training organizations. These calculations help them:
  • Quantify the performance improvements
  • Quantify the dollar value benefits
  • Compute investment returns
  • Make informed decisions based on quantified benefits, returns, and percent return comparisons between learning programs

Dr. Phillips has created an ROI Methodology, on which he conducts certifications and workshops, and he has helped training organizations use the right tools to measure the ROI of organizational learning. A summary of his methodology is illustrated below:

LA_03.jpg
Representation of Phillips' ROI methodology showing the four phases and associated analytics levels

The methodology is a comprehensive approach to training measurement. It begins with planning the project (referred to by Dr. Phillips as an Impact Study), then moves into the tools and techniques to collect, analyze and finally report the data. The end result is not only a Level 5 ROI but also measurements at Kirkpatrick’s four levels, yielding a balanced-scorecard approach to the measurement exercise.

Evaluation Targets


LA_04.jpg
Visual representation of the “one-half” rule of thumb indicating the proportion of L&D programs to be measured at each analytics level


Critical Success Factors


(1) Fix the front-end analysis: Unfortunately, within too many organizations the needs assessment process is inadequate, and sometimes its inadequacy is unknown until the ROI work begins. To develop the ROI and capture all six types of data, objectives must include application and impact, which means programs must be implemented with the business need in mind. Unfortunately, few learning programs have this level of performance analysis up front. Without it, programs often do not deliver value, which creates a negative ROI, breeds frustration and makes practitioners wonder whether any of their programs are adding value. This must be addressed early in the process.
(2) Develop the skills: While this may seem a given, the skill sets necessary for effective ROI work are not the same skill sets used in other processes, such as leadership development. If our preparation did not include measurement and evaluation, ROI work makes us think about and learn things we have not learned before.
(3) Follow the guiding principles: The standards developed for the ROI methodology are the “12 guiding principles.” They provide rules for collecting and analyzing data and reporting it to various target audiences. The rules are needed for consistency, stability, and standardization. More important, the rules add credibility. The principles are based on a conservative philosophy, and when data are presented to a management team, credibility is a key issue. These standards can help secure acceptance for the data. They must be followed.
(4) Educate management: The management team is our greatest ally, and must understand ROI. Initially, managers will say they understand ROI. Many studied it for their MBAs. They see ROI calculations for their divisions, departments, or companies. They know it is a financial measure. But they can’t fully appreciate ROI for learning and development unless they understand that it is a process that not only generates the financial ROI, but also collects five other types of data. All these data sets are important for management teams to know. This will require repeated explanations and ROI briefings at every opportunity.
(5) Plan, plan, plan: Planning is one of the most important steps of the process, but it is often either short-circuited or omitted altogether. Thorough planning of a study is absolutely essential to keep cost and time at a minimum.
(6) Collect the right quality and quantity of data: This is easier said than done. The quantity of data reflects the vast amount of information needed for an ROI analysis. If data collection is left to questionnaires, which only 50 percent of studies use, the challenge is to secure an adequate response. Good return rates can be obtained; it is not unusual for our clients to achieve a 60 to 90 percent return rate on a three-to-five-page questionnaire.
(7) Isolate the effects of the program on the data: A step that is omitted by some, and ignored or hated by others, is isolating the effects of the program on the data. Whenever a learning and development program is implemented and a business measure is influenced, the key challenge is to determine how much of the change in the measure is actually connected to the program. Perhaps the most important contribution we have made in the last decade is refining ways to make this determination; a simple example follows this list.
(8) Plan a face-to-face meeting with key executives for the first study: This is a difficult, but necessary step. The first ROI study in an organization often brings interest and anticipation. Executives will usually attend a face-to-face meeting. They may be curious, supportive, or cynical, but they will come. The challenge is to make sure this meeting is planned properly and executed perfectly. The meeting should educate the team about the methodology as well as the data that has been collected, analyzed, and reported.
(9) Take steps to make the cost of the process minimal: Many shortcuts can be used to keep the time and direct monetary costs to a minimum. Too often they are not used. Take shortcuts whenever necessary, and when alternatives are available, always take the one that costs the least or saves the most time.
(10) Make plans to sustain the process: With any change comes a need to sustain it. Early in the implementation, processes must be put into place to make ROI evaluation routine and important. Otherwise, it will become a nuisance and an add-on activity that may quickly cease if the ROI champion leaves the organization.
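
As a simple example of item (7), here is a sketch of one widely used isolation technique from the ROI Methodology, participant estimation adjusted for confidence; the figures are hypothetical:

    # Hypothetical monthly improvement in a business measure, in dollars.
    measured_improvement = 20_000.0

    # Participant estimates, averaged across respondents: how much of the
    # improvement they attribute to the program, and how confident they
    # are in that estimate. Multiplying by both is deliberately conservative.
    attribution = 0.60   # "60% of the gain came from the training"
    confidence = 0.80    # "we are 80% confident in that estimate"

    isolated_benefit = measured_improvement * attribution * confidence
    print(f"Benefit attributable to the program: ${isolated_benefit:,.0f} per month")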

Guiding Principles


Principle 1: When conducting a high-level evaluation, collect data at lower levels.
Principle 2: When planning a high-level evaluation, the previous level of evaluation is not required to be comprehensive.
Principle 3: When collecting and analyzing data, use only the most credible sources.
Principle 4: When analyzing data, select the most conservative alternative for calculations.
Principle 5: Use at least one method to isolate the effects of a project.
Principle 6: If no improvement data are available for a population or from a specific source, assume that little or no improvement has occurred.
Principle 7: Adjust estimates of improvement for potential errors of estimation.
Principle 8: Avoid use of extreme data items and unsupported claims when calculating ROI.
Principle 9: Use only the first year of annual benefits in ROI analysis of short-term solutions.
Principle 10: When analyzing ROI, fully load all costs of a solution, project, or program.
Principle 11: Intangible measures are defined as measures that are purposely not converted to monetary values.
Principle 12: Communicate the results of ROI methodology to all key stakeholders.

Best Practices


(A) Plan your metrics before writing survey questions: First and foremost, never ask a question on a data collection instrument unless it ties to a metric you will use. As simple as this sounds, organizations often create questions with no purpose in mind.
(B) Ensure the measurement process is replicable and scalable: Organizations tend to spend thousands of dollars on one-off projects to measure a single training program in detail. The information is collected over many months with exhaustive use of consultants and internal resources. Although the data is powerful and compelling, management often comes back with a response such as “great work, now do the same thing for all the training.” Unfortunately, such one-off measurement projects are rarely replicable on a large scale. So don’t box yourself into that corner.
(C) Ensure measurements are internally and externally comparable: Related to best practice (B) is the concept of comparability. A one-off exercise is significantly less powerful when you have no baseline for comparison. If you spend several months calculating a 300% ROI on your latest program, how do you know whether that is good or bad? Surely a 300% ROI is a positive return, but what if the average ROI on training programs is 1000%?
(D) Use industry-accepted measurement approaches: Management is looking to the training group to lead the way in training measurement. It is the job of the training group to convince management that their approach to measurement is reasonable. This is not unlike a finance department that must convince management of the way it values assets. In both cases, the group must ensure the approach is based on industry accepted principles that have proof of concept externally and merit internally.
(E) Define value in the eyes of your stakeholders: If you ask people what they mean by ‘return on investment’ you are likely to get more than one answer. In fact, odds are you’ll get several. Return on investment is in the eyes of the beholder. To some it could mean a quantitative number and to others it could be a warm and fuzzy feeling.
(F) Manage the change associated with measurement: As some of the preceding best practices suggest, they may be doomed to failure if you fail to manage the change with your stakeholders. Successful organizations spend considerable time and energy planning for the change. Assess the culture and the readiness for change. Plan for change or plan to fail.
(G) Ensure the metrics are well balanced: Although you want to understand the needs of your stakeholders and have them define how they perceive value, you also need to be proactive in ensuring that your final ‘measurement scorecard’ is well balanced.
(H) Leverage automation and technology: Although this goes hand in hand with a measurement process that is replicable and scalable, it is worthy of separate mention. Your measurement process must leverage technology and automation to do the heavy lifting in areas such as data collection, data storage, data processing and data reporting.
(I) Crawl, walk, run: When designing a learning measurement strategy, it is nice to have a long-term vision, but don’t attempt to put the entire vision in place right out of the blocks. The best approach is to start with the low-hanging fruit that can be addressed in a reasonable time frame to prove the concept, demonstrate a ‘win’ and build a jumping-off point to advance to the next level.
(J) Ensure your metrics have flexibility: The last thing you want to do is roll out a measurement process that is inflexible. You will likely have people who want to view the same data in many different ways. Architect your database to accommodate this, thereby creating measurement flexibility.

Sources: Donald L. Kirkpatrick, Evaluating Training Programs: The Four Levels, 2nd Edition, Berrett-Koehler Publishers; Jack J. Phillips, Measuring ROI – Fact, Fad, or Fantasy, ASTD White Paper, April 2007