There is growing recognition that content, by itself, is not necessarily going to make a difference. Formal learning doesn't work unless we help the learner into a context where the content makes sense, cognitively and emotionally. Even a well-designed job aid won't make sense in the wrong context, or if the performer doesn't use it. Hence the recent rallying cry: "If content is king, then context is emperor."
Content, context, and customization
At its core, it's about meeting a learning or performance need. Whether we're developing a person systematically or assisting them serendipitously, it's about improving outcomes. And information without context is arbitrary and meaningless.
Content here means any information, whether static or dynamic. It can be text, images, diagrams, audio, or video, and combinations of the above. There is interactive content, too, ranging from a simple form-filling interface to a full interactive simulation, and even a virtual world or augmented reality. It can be a digitally mediated conversation with a person, pre-prepared resources, or resources assembled on the fly. It's any information augmentation that assists us in performance, whether directly or in preparation.
Contextualizing support means two things. Based upon the individual’s goals, we either push specific information to the individual or optimize information they’ve requested. We’re curating on the fly, and programmatically. The goal is to have smart systems that can help individuals at scale.
At the desktop, we have been able to embed content in the workflow, so we know what learners are doing by the application they are using. Electronic performance support systems are really just contextualized help, providing the right support at the right time. For example, think of the wizards built into something like TurboTax. Mixed-initiative systems, e.g., interactive performance support, can also quite literally ask the individual some questions to determine the context and proceed from there.
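As a minimal sketch of such a mixed-initiative exchange, the snippet below branches on the answers to a couple of context questions, in the spirit of a tax-preparation wizard; the question keys and resource names are entirely hypothetical:

```python
# Minimal sketch of a mixed-initiative support flow: the system gathers a
# few answers to context questions, then selects help accordingly.
# Question keys and resource names are hypothetical examples.

def select_support(answers: dict) -> str:
    """Pick a support resource based on answers to context questions."""
    if answers.get("filing_status") == "self_employed":
        return "schedule_c_wizard"
    if answers.get("has_dependents"):
        return "dependents_checklist"
    return "standard_walkthrough"
```

The point is not the rules themselves but the pattern: the individual supplies part of the context, and the system does the rest.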
It can also mean customization, so what you see is different from what I see, based upon information about us, including what we demonstrably know or have shown through behavior. Think of Netflix or Amazon recommendations that are based upon your pattern of previous consumption and perhaps your current search.
Mobile technology, with its new portable computing platforms, complicates the picture. Devices can know where we are and start doing contextual support. We can know a learner's goals from their learner profile, and we can know a performer's goals from their currently assigned tasks or role. Then we can provide custom support. What could that look like?
Ultimately, our mobile devices could act like wise personal mentors. They could know where we are and what we’re trying to accomplish, then combine that with what we currently know and provide support based upon what’s available. What we’re talking about is customized, contextualized personal development, both short- and long-term.
Imagine a system that, as you go about your day, is looking for opportunities to support your immediate performance and develop you over time:
- If you’ve a meeting, it preps you beforehand to make the most out of it, provides some support during the meeting, and then provides self- or other-reflection to cement the learning.
- If you’ve a task, your device would let you know when you’re near a relevant location to accomplish it.
- If you’ve a learning goal, the system might point out a relevant example nearby or provide a sample problem if it’s been too long since you’ve last had a chance to practice the skill.
- If you’re solving a problem, it might connect you to someone who’s faced a similar problem.
The necessary components would link together a rich picture of what you know, your responsibilities or tasks, a map of the content resources available, and a model of contexts, and tie them together via a system of rules to pull out the important information (Figure 1).
Figure 1: Context system components
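To make the components concrete, here is one way they might fit together in code. The data structures and the single matching rule below are hypothetical, a sketch of the idea rather than any real system:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the components in Figure 1: a learner model,
# a context model, a content map, and a rule that ties them together.

@dataclass
class Learner:
    known_skills: set = field(default_factory=set)
    goals: set = field(default_factory=set)

@dataclass
class Context:
    location: str
    activity: str  # e.g., "in_meeting", "commuting"

# Content map: resource name -> (skill it supports, context it suits)
CONTENT = {
    "negotiation_checklist": ("negotiation", "in_meeting"),
    "negotiation_example": ("negotiation", "commuting"),
}

def recommend(learner: Learner, context: Context) -> list:
    """Rule: suggest resources for goal skills that fit the current context."""
    return [name for name, (skill, ctx) in CONTENT.items()
            if skill in learner.goals and ctx == context.activity]
```

A real system would need far richer models and many more rules, but the shape is the same: learner plus context, matched against a content map.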
We haven't yet put together a complete system of this sort. We have yet to develop interoperable models of what one knows and what one is supposed to do. We could do it in limited ways, but we're not there yet. Where are we?
We have parts of this already. Mobile devices have the ability to know where you are via GPS, which way you’re facing via a compass, and even how you’re holding and moving them via accelerometers and gyroscopes. They also can know what you are doing (or are supposed to be doing) from your calendar and/or reminders list. Other sensors that are possible, or already in limited use, include thermometers and barometers.
Apps already take advantage of at least some of these opportunities. Restaurant searches via apps can help you find a particular type of restaurant near your current location, and scanning apps can tell you where you can find the same product nearby and at what price. There’s also a “to do” system that supports local goals, using geo-fencing to trigger messages when you’re near a location. In a sense, many apps are mixed-initiative systems, in that you provide some of the data and then the system reacts. There is nothing wrong with that, but it could be richer and more serendipitous.
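A geo-fencing trigger like the one just described reduces to a distance check: fire the reminder when the device's reported position comes within a radius of a saved location. The sketch below, with made-up coordinates and task, uses the haversine formula for great-circle distance:

```python
import math

# Sketch of a geo-fence trigger: fire a reminder when the device's
# reported position is within a radius of a saved location.
# The coordinates and task message are illustrative.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def check_geofence(position, fence_center, radius_m, message):
    """Return the reminder message if we're inside the fence, else None."""
    dist = haversine_m(*position, *fence_center)
    return message if dist <= radius_m else None
```

On an actual device, the position would come from the platform's location API rather than being passed in by hand.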
Further, there are now architectures you can use to take advantage of these opportunities. In addition to API access to the hardware through system SDKs, there are libraries that allow hybrid development (using HTML5 with a wrapper instead of native code) to take advantage of these features. ARIS, from the University of Wisconsin-Madison, is an architecture built specifically for augmented reality, useful for such things as ARGs (alternate reality games).
Gimbal, a new context-sensitive system recently announced by Qualcomm, has inspired excitement over the possibilities. While oriented towards marketing, the integration of geo-fencing, user profiling, and image recognition to support content delivery provides a nice model for learning as well. (And, I argue, the best marketing is good customer education.)
We also know what you know, via competency maps and your learning record in an LMS. We can use the new Tin Can API for learning to paint a richer picture of relevant learning experiences to characterize the learner.
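For a sense of what a Tin Can (Experience API) record looks like, the sketch below assembles a basic actor-verb-object statement. The actor and activity values are invented for illustration; the verb URI follows the pattern of the ADL verb registry:

```python
import json

# Sketch of a Tin Can (Experience API) statement recording a learning
# experience as actor-verb-object. Actor and activity values are made up.

def build_statement(actor_email, actor_name, verb, activity_id, activity_name):
    return {
        "actor": {"mbox": "mailto:" + actor_email, "name": actor_name},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/" + verb,
            "display": {"en-US": verb},
        },
        "object": {
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

stmt = build_statement("pat@example.com", "Pat", "experienced",
                       "http://example.com/activities/negotiation-example",
                       "Negotiation example")
print(json.dumps(stmt, indent=2))
```

Because statements can record any experience, not just course completions, a learning record store can hold a much richer picture of the learner than a traditional LMS transcript.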
However, as of yet there’s been no real advantage taken of opportunities with the calendar. This is a missed opportunity. We could be providing structured help before an event, such as meeting preparation or a reminder of things we want to work on. We could be supporting performance during an event, such as a job aid or a calculation wizard. And we could be closing the loop on the event, by having a self-evaluation rubric or a mentor checking in to see how it went or by providing some post-event tools to capture outcomes and plan forward.
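To illustrate, a calendar-driven helper could simply branch on where the current moment falls relative to a scheduled event, offering preparation beforehand, a job aid during, and reflection afterwards. The resource names and time windows below are placeholders:

```python
from datetime import datetime, timedelta

# Sketch of calendar-driven support: choose a resource based on whether
# we are before, during, or after an event. Resources are placeholders.

SUPPORT = {
    "before": "meeting_prep_checklist",
    "during": "decision_capture_job_aid",
    "after": "self_evaluation_rubric",
}

def support_for(now, start, end, lead=timedelta(minutes=30),
                follow=timedelta(hours=2)):
    """Return the support resource for this moment relative to the event."""
    if start - lead <= now < start:
        return SUPPORT["before"]
    if start <= now <= end:
        return SUPPORT["during"]
    if end < now <= end + follow:
        return SUPPORT["after"]
    return None
```

Everything needed for this already sits in the calendar entry; what's missing is a system that watches it and closes the loop.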
We already see the opportunities of taking advantage of the local context, but more opportunity is on the table. And the opportunities are about supporting performers.
If we take a bigger picture of learning, looking at every way we can help people perform better, we naturally include performance support, informal learning, and social learning. We can now broaden this perspective to include supporting people not just at the desktop, but whenever and wherever.
And we can leave it to individuals to take their own initiative, as they well and truly are doing—but we can go further. We can provide support for the things we know we have to do and give people more-focused apps than we can find on the open market. The tools to create these are in place.
Even if you're not ready to go this far now, and I understand that it may take some time for the possibilities to emerge in your organization, there are some steps you can and should be taking. The steps that will support personalization and customization will also prepare the ground for contextual support regardless of platform, whether desktop or mobile. They include getting more detailed about your content systems. Adaptation to the individual takes into account some of the models above, such as knowing who the learner is and what content is available.
You should also be developing your capabilities, whether in house or outsourced, to get mobile applications developed. And that doesn’t have to mean custom development for each mobile operating system. Hybrid approaches—wrapping cross-platform HTML5 with platform-specific environments—let you take advantage of local contextualization APIs while maximizing cross-platform delivery opportunities. Yes, it’s more complex than just mobile web, but it’s simpler than fully customized development for each platform (at a cost of less efficient operation and less elegant interfaces). There are no right answers, only tradeoffs.
Context-sensitive delivery is already here, but the learning opportunities are just nascent. Getting your mind around the possibilities now is a necessary prerequisite to taking full advantage of it, and that time is closer than you may think. The right context is the right opportunity—the teachable, and supportable, moment—and you want to seize it to provide the best chance for your organization to succeed.