Everyone’s agog at the latest technology wonder: AI (artificial intelligence). However, most of that excitement comes with less than a full understanding. And, I argue, it’s focused on the wrong thing. I suggest that if we really want to leverage technology to empower businesses, let’s talk about IA (intelligence augmentation).

Some thoughts on AI

We’ve heard lots of claims about what AI is going to do: everything from automating our learning to taking our jobs. What’s real, and what should we be thinking about?

AI is about trying to make computers do smart things. It isn’t always about doing things the way people do them, but that has been a major strand of the effort: building models that reflect what we believe about how we think. The generic platform of a digital computer makes for a lovely opportunity to create many different models. And that’s been powerful for computing and for cognitive research.

The initial models were based upon a belief that we’re logical reasoners, and gave us expert systems and general problem-solvers. But it soon became clear, both through cognitive research and computer modeling, that we weren’t the formal thinkers we envisioned. While there remain useful areas for these sorts of models, new directions were needed. Research into alternate models led us to neural networks, the basis for most of the field now termed machine learning.

Machine learning has two major subsets: supervised and unsupervised. In supervised learning, you train the system on labeled data until it can start making those judgments on its own; evaluating essays is one example. In unsupervised learning, the system is given unlabeled data and tasked to look for emergent patterns. Computers can find patterns humans struggle to detect, and vice versa. This can mean mining activity data to surface unexpected insights, such as correlations between activity and outcomes.
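To make the distinction concrete, here’s a minimal sketch in Python, assuming scikit-learn and entirely synthetic data (the features and labels are illustrative stand-ins, not a real essay-scoring model):

```python
# Minimal sketch of the two subsets, using scikit-learn.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Supervised: labeled examples (say, essay features with pass/fail scores)
# train a model that can then make those judgments on new essays.
X_train = rng.normal(size=(100, 4))        # stand-in features
y_train = (X_train[:, 0] > 0).astype(int)  # hypothetical pass/fail labels
scorer = LogisticRegression().fit(X_train, y_train)
print(scorer.predict(rng.normal(size=(3, 4))))  # judgments on unseen data

# Unsupervised: unlabeled activity data, where the system looks for
# emergent patterns (here, clusters of similar activity profiles).
X_activity = rng.normal(size=(100, 4))
patterns = KMeans(n_clusters=3, n_init=10).fit_predict(X_activity)
print(np.bincount(patterns))  # how many records fall into each pattern
```

The point isn’t the specific algorithms; it’s that the first approach needs a labeled history (with all its potential bias), while the second finds structure we didn’t ask for.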

There are many new promises in AI. We may be able to get tutors and richer automatic feedback. But there is a flip side. AI systems have the potential to make errors in judgment. If the data used for training carries historical bias, so too will the outcomes. There is also the potential for AI to take jobs, replacing roles in L&D, for example.

Some thoughts on IA

For a different perspective, think about the relative strengths and weaknesses of people and computers. While there is considerable work on making computers good at pattern recognition (a major AI effort), it’s hard for them, whereas for us it comes almost free as an artifact of our cognitive architecture. The reverse is also true: computers can remember large quantities of arbitrary information and perform complex but rote calculations perfectly, repeatedly, without the loss of accuracy that fatigue brings. We, on the other hand, are bad at those things.

This diversity in capability suggests that the best solution is to find ways to combine the strengths of each architecture, machine and human, to create a whole greater than the sum of the two parts. This, I suggest, is a better goal and the aim of IA (intelligence augmentation).

Cognitive research has documented that our thinking isn’t just in our heads, but is distributed across our tools (as well as other people). Thus, there’s an inherent capability for this distribution, so our goal should be to optimize the outcome. We should be looking to augment our capabilities instead of replacing them.

The first way we can augment ourselves from an L&D perspective is to have computers assist us in learning. They can, for instance, use algorithms to do a systematic job of spacing out our learning, hypothesizing and detecting when particular material is close to being forgotten (the optimal time to reinforce it). Adaptive and spaced learning can optimize the rate at which we learn (given that a one-off event-and-knowledge-dump model is pretty much the worst thing we can do). AI may be able to check our learning designs or automate some of the practice evaluations as well.
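As a sketch of the spacing idea (not any particular product’s algorithm), here’s a toy Leitner-style scheduler; the interval lengths are assumptions for illustration, not empirically tuned values:

```python
# Toy spaced-repetition scheduler: material answered correctly moves to a
# longer review interval; missed material drops back to frequent review.
from dataclasses import dataclass, field
from datetime import date, timedelta

INTERVALS = [1, 3, 7, 14, 30]  # days between reviews, per mastery level

@dataclass
class Item:
    name: str
    level: int = 0                                 # index into INTERVALS
    due: date = field(default_factory=date.today)

def review(item: Item, correct: bool, today: date) -> None:
    """Reschedule: widen the gap on success, shrink it on failure."""
    if correct:
        item.level = min(item.level + 1, len(INTERVALS) - 1)
    else:
        item.level = max(item.level - 1, 0)
    item.due = today + timedelta(days=INTERVALS[item.level])

def due_for_reinforcement(items: list[Item], today: date) -> list[Item]:
    """Items hypothesized to be near forgetting: due (or overdue) for review."""
    return [i for i in items if i.due <= today]
```

A real adaptive system would estimate forgetting per learner and per item, but even this crude backoff beats the one-shot event model.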

A second way to augment our capabilities is with performance support. This isn’t new; we’ve been using checklists, lookup tables, and decision-support tools for a while. However, we can now do more, using contextual support that knows what we’re doing and what we most likely need help with. It’s like a GPS, but for tasks. Instead of “turn right here,” it’s “now find out what the customer’s trouble is.”
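As a rough illustration of the GPS analogy, here’s a hedged sketch: a hypothetical customer-call workflow where the system surfaces the prompt for the current step. A real system would infer the step from context (the application in use, the calendar, the conversation) rather than from manual check-offs:

```python
# A toy "GPS for tasks": given where we are in a workflow, surface the
# guidance for that moment. The workflow and prompts are hypothetical.
CALL_WORKFLOW = [
    ("greet", "Open the call and confirm who the customer is."),
    ("diagnose", "Now find out what the customer's trouble is."),
    ("resolve", "Walk through the fix, checking understanding as you go."),
    ("close", "Summarize the resolution and log the ticket."),
]

def next_prompt(completed: set[str]) -> str | None:
    """Return guidance for the first step not yet completed."""
    for step, prompt in CALL_WORKFLOW:
        if step not in completed:
            return prompt
    return None  # workflow complete

print(next_prompt({"greet"}))  # -> "Now find out what the customer's trouble is."
```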

Combining contextual recognition with learning provides a new opportunity as well: contextualized learning. We’re on the cusp of being able to do this (meaning, we can do it now; the limitations are intent and pocketbook, no longer the technology). Here, we’re adding information on top of work tasks (real learning in the workflow) to turn performance moments into learning situations.
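Extending the sketch above, a contextualized-learning layer might attach a short conceptual nugget to the same performance moment; again, the steps and content are illustrative assumptions:

```python
# Turning a performance moment into a learning moment: pair the task
# prompt with the concept behind it. Content here is hypothetical.
LEARNING_LAYER = {
    "diagnose": "Open questions surface problems the customer hasn't "
                "articulated yet.",
    "resolve": "Checking understanding as you go catches misconceptions "
               "before they compound.",
}

def prompt_with_learning(step: str, prompt: str) -> str:
    """Deliver the performance prompt plus, if available, the why behind it."""
    nugget = LEARNING_LAYER.get(step)
    return f"{prompt}\n  Why: {nugget}" if nugget else prompt

print(prompt_with_learning("diagnose",
                           "Now find out what the customer's trouble is."))
```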

Moving forward

While we might say that we should automate anything we can, my principle is somewhat different. I believe we should choose what we want to automate and outsource to computers, freeing ourselves to pursue the tasks we want to own. We can (and should) have oversight over what the software does, and we might have software checks on our actions as well.

For now, we should start thinking in terms of a human/computer symbiosis. We should look at what optimal performance would be, and then design backwards. That design should build the tools first, and then the training that incorporates those tools. And this includes other people as well, so we’re looking for the combination of people and tools that is optimal. We should also be thinking about both fast and slow innovation: solving problems in the moment, as well as fostering the continual percolation of new ideas.

Going forward, we will want to continually evaluate the tools, tracking new developments and understanding the possibilities. From there we can look to improve our ecosystem, in a sense continuing to augment our augmentation. This is the vision Douglas Engelbart gave us, and the opportunity we have to use technology in alignment with how we think, work, and learn. If we can augment the organization’s intelligence (a broader picture of IA), we can play a key role in organizational success. And that’s what we’re about, right?