An Introduction to Slide-level Analytics

By Devlin Peck. Updated on May 5, 2023.

Slide-level analytics brings eLearning evaluation to the next level.

Powered by xAPI, slide-level analytics allows us to collect and analyze valuable data about how users interact with eLearning programs. We can then use this data to optimize learning programs as needed.

Let's break down the term:

- "Slide-level" refers to any given slide (or screen) of an eLearning course or interaction.
- "Analytics" is the meaningful discovery, interpretation, and communication of patterns in data.

Therefore, slide-level analytics is the meaningful discovery, interpretation, and communication of data generated by user activity on any given slide of an eLearning course or interaction.

Slide-level Analytics Foundation: xAPI

As mentioned earlier, slide-level analytics is possible because of the Experience API (xAPI). This API enables us to collect data about human experience in a way that's consistent, human-readable, and flexible.

We generate data with xAPI by sending xAPI "statements." An xAPI statement consists of an actor, verb, and object at its most basic level.

For example, these are all valid xAPI statements:

- Devlin launched xAPI 101 Course.
- Devlin watched Introduction Video.
- Devlin completed xAPI 101 Course.

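As a sketch, here's roughly how the first of those statements could be represented as a JavaScript/TypeScript object (the learner's email address and the URIs are illustrative placeholders):

```typescript
// A minimal xAPI statement: actor, verb, and object.
// The email address and URIs are illustrative placeholders.
const statement = {
  actor: {
    name: "Devlin",
    mbox: "mailto:devlin@example.com", // identifies the learner
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/launched", // ADL's standard "launched" verb
    display: { "en-US": "launched" },
  },
  object: {
    id: "http://example.com/courses/xapi-101", // unique URI for the activity
    definition: { name: { "en-US": "xAPI 101 Course" } },
  },
};
```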
However, the xAPI specification makes it possible to collect much more detailed information with our statements. For example:

- How long a user spent on a slide or video (via the statement's result duration)
- The score or success result for a question or quiz
- The exact text a user entered in an open response field
- Contextual details, such as which course or session the activity belongs to

These are just a few examples, but the specification is flexible enough to collect much more. We'll cover some of the data that you might want to collect later in this article.

Therefore, xAPI allows us to collect data about what the user is selecting on a slide, how long they're spending on a slide, which text they're entering in open response questions, and what they're doing across many learning sessions (in addition to a whole lot more).
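To make that concrete, here's a sketch of a more detailed statement capturing an open-text response, time on slide, and the parent course (all IDs and values are illustrative):

```typescript
// A richer statement: the learner answered an open-response question.
// All IDs, values, and the response text are illustrative placeholders.
const detailedStatement = {
  actor: { name: "Devlin", mbox: "mailto:devlin@example.com" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/answered",
    display: { "en-US": "answered" },
  },
  object: {
    id: "http://example.com/courses/xapi-101/slide-12/question-1",
  },
  result: {
    response: "I would escalate the call to my supervisor.", // text the user entered
    duration: "PT1M30S", // time spent, as an ISO 8601 duration (1 min 30 sec)
  },
  context: {
    contextActivities: {
      // links this slide-level activity back to its parent course
      parent: [{ id: "http://example.com/courses/xapi-101" }],
    },
  },
  timestamp: new Date().toISOString(), // supports analysis across sessions
};
```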

Let's dive in and explore the benefits of collecting slide-level data.

Benefits of Slide-level Analytics

Now that you know what slide-level analytics is, let's discuss how it can improve your learning design.

Measure content and activity effectiveness

One of the greatest benefits of slide-level analytics is that it allows us to assess (and subsequently optimize) the effectiveness of eLearning content, activities, and resources.

To measure this effectiveness, we'd look at correlations between the user's experience on the "learning" slides and their performance on the assessment(s) or on the job.

For example, let's say that we want to know if an animated video in the course effectively teaches the learning objectives that it's supposed to teach. We could look at correlations between time spent on the video and the number of associated questions that users answer correctly.

If we find that users who watch the video score an average of 95% on the associated questions, while those who do not watch it score an average of 70%, then the video is likely effective. Using this data, we may decide to add guidance in the eLearning course emphasizing how important it is to watch the full video.

Likewise, if we find that there is no significant difference in question performance between those who watch the video and those who do not, then we may want to explore more deeply why the video isn't helping people answer the questions.
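To make the comparison concrete, here's a minimal sketch in TypeScript, assuming we've already assembled one record per user from the LRS statements (the UserRecord shape is hypothetical):

```typescript
// Hypothetical per-user records assembled from LRS statements.
interface UserRecord {
  userId: string;
  watchedVideo: boolean; // did the user finish the animated video?
  quizScore: number; // scaled score (0 to 1) on the associated questions
}

// Average quiz score for a group of users.
function averageScore(records: UserRecord[]): number {
  if (records.length === 0) return NaN;
  return records.reduce((sum, r) => sum + r.quizScore, 0) / records.length;
}

// Compare the two groups, e.g. 0.95 vs. 0.70 in the example above.
function compareWatchersToNonWatchers(records: UserRecord[]) {
  return {
    watcherAvg: averageScore(records.filter((r) => r.watchedVideo)),
    nonWatcherAvg: averageScore(records.filter((r) => !r.watchedVideo)),
  };
}
```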

The example above shows how slide-level analytics helps with Level 2 (learning) evaluation, but what if we want to measure whether our eLearning program is helping improve on-the-job performance?

This would require us to compare our slide-level data to on-the-job performance data (which we can pull from sales systems, scheduling software, supervisor reviews, and any other metrics that employees are held accountable to).

For example, imagine that we want to know whether the job aid that we provide to customer support agents helps them spend less time on a specific type of support call. To start our evaluation, we would look at whether the call time of those who downloaded the job aid is shorter than the call time of those who did not download the job aid.
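As a sketch, that comparison is a simple join between LRS data and the phone system's records, keyed on a shared user ID (both record shapes here are hypothetical):

```typescript
// Hypothetical shapes: job-aid downloads from the LRS, call times from
// the phone system, joined on a shared user ID.
interface DownloadRecord { userId: string; downloadedJobAid: boolean; }
interface CallRecord { userId: string; callSeconds: number; }

function avgCallTimeByGroup(downloads: DownloadRecord[], calls: CallRecord[]) {
  const downloaded = new Set(
    downloads.filter((d) => d.downloadedJobAid).map((d) => d.userId),
  );
  const mean = (xs: number[]) =>
    xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : NaN;
  return {
    withJobAid: mean(
      calls.filter((c) => downloaded.has(c.userId)).map((c) => c.callSeconds),
    ),
    withoutJobAid: mean(
      calls.filter((c) => !downloaded.has(c.userId)).map((c) => c.callSeconds),
    ),
  };
}
```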

As you can see, xAPI data is like a spotlight. It shows us where potential issues may lie so that we can ask follow-up questions and explore more deeply to identify the best way to improve the learning program.  

Also, to make the most of our data, we should approach our learning design project with specific questions in mind, such as:

- Does the animated video help users answer the associated questions correctly?
- On which slides are users dropping out of the course?
- Does the job aid help agents resolve support calls more quickly?

Identifying our questions before we begin collecting data focuses our efforts on the data that matters most. This saves time and money down the road.

Furthermore, if we have specific enough questions with valuable enough answers, we can take our evaluation efforts beyond correlation. We can do this by approaching the question like a learning scientist and conducting mini experiments.

For example, imagine spending $25,000 on a custom animated video from a cutting-edge vendor that uses a "neuroscience-backed approach" to animation.

With the video in hand, we want to know whether it helps people learn the content better than the video that we were using previously. To answer this question, we could develop two versions of the eLearning course: one with the $25,000 video and the other with the much cheaper video that was made in-house.

We would then analyze the data that comes in to see how much the "neuroscience-backed" video helps or hurts the users' performance on the associated quiz questions or their on-the-job performance. If we find that it does not help, then we may conclude that spending vastly more money for the snazzy animated videos is not worth it.
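For the comparison to be fair, each user should see one version chosen at random and then stay with it. Here's a minimal sketch of deterministic assignment (the version IDs are placeholders):

```typescript
// Deterministic assignment: hash the user ID so each user always sees
// the same version across sessions. Version IDs are placeholders.
const VERSIONS = ["course-premium-video", "course-inhouse-video"];

function assignVersion(userId: string): string {
  let hash = 0;
  for (const char of userId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // simple 32-bit string hash
  }
  return VERSIONS[hash % VERSIONS.length];
}
```

Recording the assigned version in each statement's context then lets us separate the two groups during analysis.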

Identify user experience issues

Another big selling point of slide-level analytics is that it helps us pinpoint user experience issues within eLearning courses.

Since most learning departments are concerned with the number of employees who complete their eLearning courses, slide-level analytics can help us identify exactly where people are dropping off. We can then use this knowledge to modify the content as needed in a way that optimizes for course completions.

For example, let's imagine that 10,000 people start an eLearning course but only 2,000 people complete it. We know that we have a problem, but without slide-level analytics, we don't know where that problem may lie.

With slide-level analytics, on the other hand, we can see that 5,000 people exited the course on the eighth slide of the course, which happens to be a drag and drop interaction. By looking at the sequence of actions on the page, we find that if a user drags and drops the items in a certain order, the course freezes up and the user is unable to continue.
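Finding the drop-off point boils down to identifying each user's last-viewed slide. Here's a minimal sketch, assuming we've already pulled simplified slide-view records from the LRS:

```typescript
// Find drop-off points: identify each user's last-viewed slide, then
// count how many users stopped on each slide. The record shape is a
// simplified, hypothetical view of "experienced slide" statements.
interface SlideView {
  userId: string;
  slideNumber: number;
  timestamp: string; // ISO 8601 UTC, so strings compare chronologically
}

function exitCountsBySlide(views: SlideView[]): Map<number, number> {
  // Last slide seen per user.
  const lastSlide = new Map<string, SlideView>();
  for (const view of views) {
    const current = lastSlide.get(view.userId);
    if (!current || view.timestamp > current.timestamp) {
      lastSlide.set(view.userId, view);
    }
  }
  // Tally exits per slide; a spike (like slide 8 above) flags a problem.
  const counts = new Map<number, number>();
  for (const { slideNumber } of lastSlide.values()) {
    counts.set(slideNumber, (counts.get(slideNumber) ?? 0) + 1);
  }
  return counts;
}
```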

Now, rather than handing the course to a learning designer and telling them to "figure out what's wrong," we can ask them to fix the interaction on the eighth slide of the course.

Furthermore, we may find that 2,500 of the other users who exited the course did so on the 20th slide. After looking at it more closely, we find that this slide has significantly more paragraphs of text than there are on any other slide. We decide to break that slide up into multiple slides with more digestible chunks of content.

After making these changes, we would continue to monitor the data to see if our solutions resolved the issue (as well as identify any other potential user experience issues that may crop up).

This is one example, but hopefully you see how monumental slide-level analytics is for identifying user experience issues.

In fact, slide-level analytics is far more efficient than relying on users to report the issues that they're experiencing. And since we have such granular data available via xAPI, we can see the exact sequence of actions that led to the user experience problem, making it much easier to troubleshoot and resolve.

Finally, we can collect user-reported issues in conjunction with the automatic tracking.

For example, we could include a feedback widget on each slide of an eLearning course; when the user selects it, they can provide feedback directly on the slide that they have feedback about. When it comes time to review these responses, we would see exactly which slide or interaction the user is referring to — there's very little guesswork involved when it comes to slide-level analytics.
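As a sketch, the statement such a widget might send could look like this (the activity IDs are placeholders, and "commented" is a registered ADL verb):

```typescript
// The statement a feedback widget might send when a user leaves a comment.
// Activity IDs are placeholders; "commented" is a registered ADL verb.
function buildFeedbackStatement(userId: string, slideId: string, feedback: string) {
  return {
    actor: { account: { homePage: "http://example.com", name: userId } },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/commented",
      display: { "en-US": "commented" },
    },
    object: { id: `http://example.com/courses/xapi-101/${slideId}` },
    result: { response: feedback }, // the user's free-text feedback
    timestamp: new Date().toISOString(),
  };
}
```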

Understand user behavior

Slide-level analytics also helps us better understand user behavior overall. This is particularly true when it comes to identifying the level of effort that a user is devoting to a given slide's content or interactivity.

For example, when it comes to compliance training, managers are typically concerned with whether employees completed the compliance course and/or passed the associated quiz.

However, what if we found that users were in a different tab for the duration of every video or slide in the course? What if we found that, when they got to the quiz at the end of the course, they tabbed out of the course before answering any of the questions?

Looking at this data, we would likely conclude that the user did not engage with the content appropriately (and they likely researched the answers to the questions because they didn't learn them from the course).
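One way to capture this kind of tab-switching, sketched below, is the browser's Page Visibility API; note that the verb names and the sendStatement helper are illustrative rather than part of any standard:

```typescript
// Detect when the user tabs away from (or back to) the course using the
// browser's Page Visibility API. The verb names and sendStatement helper
// are illustrative, not part of the xAPI specification.
function sendStatement(verb: string, activityId: string): void {
  // In a real course this would POST a full xAPI statement to the LRS;
  // a console.log keeps the sketch self-contained.
  console.log(`statement: ${verb} -> ${activityId}`);
}

document.addEventListener("visibilitychange", () => {
  const slideId = "http://example.com/courses/compliance-101/slide-3";
  if (document.visibilityState === "hidden") {
    sendStatement("lost-focus", slideId); // user switched tabs or minimized
  } else {
    sendStatement("regained-focus", slideId);
  }
});
```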

We can also learn more about users (and their experiences with the course) by asking open-ended, qualitative questions.

For example, if a user spends less than 30 seconds viewing a 5-minute video, we can trigger a slide in the eLearning course asking "Why didn't you view the full video?" and include an open-text box for the user to enter their response.
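The trigger logic itself is simple; here's a sketch in which showSlide and the watch-time value stand in for whatever your authoring tool's player API provides:

```typescript
// Branch to the open-text question when the user skipped most of the video.
// showSlide and the watch-time value are hypothetical stand-ins for an
// authoring tool's player API.
function maybeAskWhySkipped(
  secondsWatched: number,
  showSlide: (slideId: string) => void,
): void {
  // Only trigger when the user watched under 30 seconds of the 5-minute video.
  if (secondsWatched < 30) {
    showSlide("why-didnt-you-view-the-full-video");
  }
}

// Example: a user who watched 12 seconds is routed to the question slide.
maybeAskWhySkipped(12, (slideId) => console.log(`jumping to: ${slideId}`));
```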

We can then use these qualitative responses in conjunction with the raw xAPI statement data to get a full picture of user behavior within our eLearning offerings.

Slide-level analytics benefits: wrap-up

As you can see, slide-level analytics is the key to effectively evaluating and optimizing eLearning offerings. Without sufficient data, claims about an eLearning course's effectiveness are weak and unsubstantiated.

By collecting this data and revising our eLearning courses accordingly, we can ensure that we're not only providing the best experience possible for the users, but also optimizing our courses in a way that improves performance and is beneficial for the business.

In the next section we'll discuss the barriers facing widespread adoption of slide-level analytics.

Slide-level Analytics: Potential Barriers

Despite the clear benefits of slide-level analytics, there are some potential barriers that are slowing down adoption. We'll explore these barriers in this section.

Learning Record Store (LRS) cost

If we want to collect xAPI data, then we need a Learning Record Store (LRS). The LRS is what holds all of the xAPI statements that we generate. However, since most companies aren't leveraging xAPI in their evaluation efforts at the moment, it's not the norm for companies to have access to an LRS.

Therefore, the up-front (and often ongoing) cost associated with purchasing an LRS is frequently enough to sway companies away from slide-level analytics.

With that being said, many of the most popular LRSs offer generous free plans that you can use to collect and analyze moderate amounts of data. Watershed LRS and Veracity LRS are two popular options.

You can also install your own version of Learning Locker, which is an open source LRS. This option does have minor ongoing costs associated with it since you need to run it on your own server.

If you're generating millions of statements from tens or hundreds of thousands of employees, then you will likely need to invest in an enterprise solution.

Despite the additional cost of purchasing an LRS, it will soon be the norm for companies to have their own LRSs. This is because cmi5, an xAPI profile that defines content-to-LMS communication, is replacing SCORM. Read this article to learn more about cmi5 and its pending adoption.

Additional evaluation effort required

Another potential barrier to slide-level analytics is that you need a team member who has the time and skill set to analyze the data. This is because, as you can imagine, xAPI data is only useful if someone analyzes it and draws actionable conclusions.

It also requires additional up-front effort from the learning design team. They need to discuss the evaluation efforts and get on the same page about what data will be collected and how it will be used to improve the learning program.

Since the current level of evaluation focuses on reading course completion and quiz score data from a Learning Management System (LMS), this more hands-on, data-driven approach can be seen as asking a lot.

However, if learning teams are concerned with better understanding their users and designing effective solutions, then they will feel empowered by slide-level analytics.

The learning gains and performance improvements associated with this level of evaluation quickly justify the increased level of effort required.

Also, if learning programs are mapped to metrics that impact business performance, then this level of evaluation is even more important. It translates directly into metrics that matter to the business, and it helps your learning team stay accountable and informed.

Privacy concerns

As you may be aware, there is currently a high level of scrutiny surrounding privacy and data collection. This may raise red flags for some people who are considering implementing slide-level analytics.

However, there are many ways to alleviate these concerns.

The most stringent and widespread data protection regulations to date come from Europe's GDPR. It's possible to implement slide-level analytics in a way that's completely GDPR-compliant, especially with tools like Learning Locker's GDPR tool for enterprise clients.

You can also allow your users to opt in or out of the data collection measures, and you should inform them how you will be using the data to improve the learning program.

If you want to obscure who is using the course so that you aren't storing personally identifiable information, you can generate a randomized identifier for each user.

Then, instead of "Devlin launched xAPI 101 Course," the statement would read "User29382911 launched xAPI 101 Course." This protects your users' privacy, but it can make it more difficult to track one user's experience across multiple learning programs and applications.
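Here's a minimal sketch of generating and persisting such an identifier in the browser (storing it in localStorage is one assumption; a server-side mapping table is another option):

```typescript
// Generate and persist a randomized identifier so statements never contain
// the user's real name or email. Storing it in localStorage is one
// assumption; a server-side mapping table is another option.
function getAnonymousUserId(): string {
  const KEY = "anonymous-xapi-id";
  let id = localStorage.getItem(KEY);
  if (!id) {
    // e.g. "User29382911", matching the example above
    id = "User" + Math.floor(Math.random() * 1e8).toString().padStart(8, "0");
    localStorage.setItem(KEY, id);
  }
  return id;
}

// Used as the statement's actor in place of personally identifiable info:
const anonymousActor = {
  account: { homePage: "http://example.com", name: getAnonymousUserId() },
};
```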

Finally, you can set up your LRS so that only specific people can read or write xAPI statements. This ensures that a particularly JavaScript-savvy user cannot query your LRS to collect information about other users.

Technical implementation skillset

Last but not least, it does require a technical skill set to implement xAPI tracking and make the most of slide-level analytics.

In short, your team needs:

- Someone who can write or adapt the JavaScript needed to send xAPI statements from your courses
- A working understanding of the xAPI specification and how statements are structured
- Access to an LRS for storing and querying the statements
- Someone with the time and skill set to analyze the resulting data

As you can see, you don't need to be a full-blown software engineer. However, your eLearning developer or someone else on your team will need to devote a moderate amount of effort to bring themselves up to speed on the xAPI specification and how to implement it.

With that being said, it's not terribly difficult to learn (especially with the vast amount of resources that are available for free online). If you're interested, you can check out the Full Guide to xAPI and Storyline, which is a tutorial series I've written that walks you through, step-by-step, how to send custom xAPI statements from a course developed in Articulate Storyline 360.

Conclusion

In short, slide-level analytics empowers learning teams to bring their eLearning offerings to the next level. It helps teams make data-driven decisions to optimize their offerings and improve every aspect of their programs.

This is why there is a high degree of excitement in the xAPI community. As companies become more comfortable with xAPI and begin leveraging slide-level data to improve their programs, we will enter a new era of L&D accountability and, if we listen to the data and adjust accordingly, workforce productivity.

So, there's no better time than now to incorporate slide-level analytics into your overall learning and development strategy.

About Devlin Peck

Devlin Peck is the founder of DevlinPeck.com, where he helps people build instructional design skills and break into the industry. He previously worked as a freelance instructional designer and graduated from Florida State University.
