AI Is Everywhere, but What Is AI?

AI, or artificial intelligence, is a buzzword in the purest sense: It’s a term that crops up seemingly everywhere, yet it means something different in almost every context. It’s big, nonspecific, and ubiquitous.

So, what is AI?

One definition, from Kate Crawford and Meredith Whittaker, co-chairs of the AI Now symposium, held in July 2016: “Artificial intelligence (AI) refers to a constellation of technologies, including machine learning, perception, reasoning, and natural language processing.”

Another comes from Stuart Russell, a professor of computer science and engineering at the University of California, Berkeley: AI is the project of building machines that are intelligent, machines that can see, hear, understand, learn, discover, help make plans, and decide how to behave.

Russell, in a December 2015 TED talk, points out that AI is already everywhere: Google’s search engine is an example of AI, as is Siri. Video games are based on AI, often dumbed down so that the human player can actually win occasionally. The algorithms that correct your spelling and guess what word you’re typing use AI, as do robots that perform thousands of different tasks. But each of these examples uses AI differently and might represent different manifestations of AI, including what’s known as machine learning or deep learning.

To further muddy the waters, AI can be classified into “general” and “narrow” AI:

  • General AI would describe a machine that, according to Russell, has human-level or greater intelligence and abilities. This machine, so far, exists only in the imagination—primarily popping up in movies and science fiction books.
  • Narrow AI refers to machines with specific intelligence; they can learn and perform specific tasks as well as humans can (or better). Examples range from the machines that beat champion Jeopardy!, Go, and poker players to applications that recognize and classify images in photographs and robots that fold towels.

Based on these definitions, it’s obvious that AI is already an inextricable component of eLearning. As machine learning becomes more sophisticated, the role of AI in eLearning is likely to increase dramatically.

What is machine learning?

Machine learning takes a smart machine (AI) and teaches it to use algorithms to make decisions and, ultimately, to act without explicit direction. For example, the texting app on your phone learns that “Cali” is the correct spelling of your dog’s name and (after many, many repetitions) eventually stops “fixing” it to “Kelly” or “Callie”; after even more “training,” when you type “ca,” it begins to spontaneously suggest “Cali.”
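The “training” in the texting example can be sketched as simple frequency counting. The class, method names, and threshold below are invented for illustration; real keyboard apps use far more elaborate language models:

```python
from collections import Counter

class PredictiveText:
    """Toy model of a texting app learning a user's vocabulary."""

    def __init__(self):
        # Words the user typed and kept (i.e., rejected any autocorrection).
        self.kept_words = Counter()

    def observe(self, word):
        """Record that the user typed `word` and did not correct it."""
        self.kept_words[word] += 1

    def suggest(self, prefix, min_count=3):
        """Suggest the most frequently kept word starting with `prefix`."""
        candidates = [(count, w) for w, count in self.kept_words.items()
                      if w.lower().startswith(prefix.lower()) and count >= min_count]
        if not candidates:
            return None          # not enough evidence yet
        return max(candidates)[1]

kb = PredictiveText()
for _ in range(5):               # after many, many repetitions...
    kb.observe("Cali")
print(kb.suggest("ca"))          # -> Cali
```

Before enough repetitions accumulate, `suggest` returns nothing; only repeated user behavior teaches the model to offer “Cali.”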

When Amazon suggests items you might want to purchase or Netflix recommends titles, they are using machine learning. These services sift through enormous amounts of data, including your past behavior, and, based on algorithms, make decisions or suggestions.

The suggestions are automated, but machine learning is based on human guidance. People write the algorithms; humans tell the machines what to pay attention to, what to ignore—and how to decide what actions to take based on the information gathered. As with anything human-controlled, machine learning algorithms reflect the assumptions and biases of the programmers and, potentially, of the users.

In her paper on discrimination in algorithm-served ads that pop up during web searches, Sweeney (2013) explains that, in Google Ads, “an advertiser may give multiple templates for the same search string and the ‘Google algorithm’ learns over time which ad text gets the most clicks from viewers of the ad. It does this by assigning weights (or probabilities) based on the click history of each ad copy. At first all possible ad copies are weighted the same, they are all equally likely to produce a click. Over time, as people tend to click one version of ad text over others, the weights change, so the ad text getting the most clicks eventually displays more frequently. This approach aligns the financial interests of Google, as the ad deliverer, with the advertiser.” It also, as the paper explains, incorporates the biases of the users, resulting in ads including the word “arrest” more frequently for searches of typically African-American names than for typically white names.
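Sweeney’s description of click-weighted rotation can be sketched in a few lines. This is an illustrative model only, not Google’s actual code; the class, names, and numbers are invented:

```python
import random

class AdRotator:
    """Sketch of click-weighted ad rotation as Sweeney describes it."""

    def __init__(self, ad_copies):
        # Start every copy with one pseudo-click so all are equally likely.
        self.clicks = {ad: 1 for ad in ad_copies}

    def serve(self):
        """Choose an ad with probability proportional to its click history."""
        ads = list(self.clicks)
        weights = [self.clicks[ad] for ad in ads]
        return random.choices(ads, weights=weights, k=1)[0]

    def record_click(self, ad):
        """A viewer clicked this copy; shift future weight toward it."""
        self.clicks[ad] += 1

rotator = AdRotator(["copy A", "copy B"])
for _ in range(50):              # users consistently click copy A...
    rotator.record_click("copy A")
# ...so copy A is now served far more often than copy B.
```

Note that nothing in the loop is about the ad’s content: if users click a biased ad copy more often, the algorithm faithfully amplifies that bias.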

That bias is inherent in supposedly neutral machine-learning algorithms is demonstrated by a 2016 study by Princeton and University of Bath researchers. “We have shown that AI can and does inherit substantially the same biases that humans exhibit,” the researchers write. “Bias in AI is important, because AI is increasingly given agency in our society for tasks ranging from predictive text in search to determining criminal sentences assigned by courts.”

What is deep learning?

Can better results be achieved by turning more of the “decision-making” over to the machines? This is where deep learning comes into the picture. While still affected by the decisions of the humans who program them, deep-learning systems perform “unsupervised learning.”

Deep learning is a sophisticated form of machine learning in which the computer, in effect, learns how to learn, requiring far less direct human input. It uses algorithms that mimic the neural networks of the human brain: they take in information and produce an output. It’s not that simple, of course; many layers of processing occur between the input of information and the output of a result. And, though not every action and decision is programmed by a person, the algorithm that provides the initial guidance on decision-making is.
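The layered input-to-output flow can be illustrated with a minimal forward pass. The weights below are hand-written constants purely for illustration; in a real deep-learning system they would be learned from data, and there would be many more layers and units:

```python
import math

def layer(inputs, weights, biases):
    """One layer: weighted sums of the inputs passed through a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

def network(x):
    """Two stacked layers: information in, a score between 0 and 1 out."""
    hidden = layer(x, weights=[[2.0, -1.0], [-1.5, 1.0]], biases=[0.1, -0.2])
    output = layer(hidden, weights=[[1.0, 1.0]], biases=[-0.5])
    return output[0]

score = network([0.8, 0.3])      # e.g. two input features describing an image
```

Each layer transforms the previous layer’s output, so the final score depends on the inputs only through these intermediate representations.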

Deep learning can quickly learn patterns; photo recognition software uses deep learning to identify pictures of cats, in an oft-cited example. The computer is given thousands and thousands of photos and told that these are cats. The cats are different colors and different sizes and are in different positions or engaged in different activities. The computer “looks at” a set of features—devised by a human programmer—to determine what is, and what is not, a cat in the photos. As it processes more and more photos, the computer’s accuracy improves. It has learned how to recognize a cat.
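The training loop for the cat example might be sketched as follows. This single-layer model, with invented features such as “has pointy ears,” is a stand-in for the far deeper networks used in practice, but it shows the same idea: accuracy improves as each labeled example nudges the weights:

```python
import math
import random

def predict(weights, features):
    """Estimated probability that the photo shows a cat."""
    z = sum(w * f for w, f in zip(weights, features))
    return 1 / (1 + math.exp(-z))

def train(examples, steps=2000, lr=0.5):
    """Logistic-regression sketch: correct the weights after each error.

    `examples` is a list of (features, is_cat) pairs, where the features
    are stand-ins for ones a human programmer might devise.
    """
    random.seed(0)
    weights = [0.0] * len(examples[0][0])
    for _ in range(steps):
        features, label = random.choice(examples)
        error = predict(weights, features) - label   # how wrong were we?
        weights = [w - lr * error * f for w, f in zip(weights, features)]
    return weights

# Features: [has pointy ears, has whiskers, barks] -- invented for illustration.
data = [([1, 1, 0], 1), ([1, 1, 0], 1), ([0, 0, 1], 0), ([1, 0, 1], 0)]
w = train(data)
print(predict(w, [1, 1, 0]))     # close to 1: "cat"
print(predict(w, [0, 0, 1]))     # close to 0: "not a cat"
```

With more labeled photos and more training steps, the predictions move closer to 1 and 0, which is the “accuracy improves” behavior described above.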

Language learning is another area where deep learning has made great strides. A Google Translate upgrade based on deep learning resulted in a dramatic overnight improvement, according to the New York Times Magazine.

As computers have become better at recognizing patterns, the systems have also become better at identifying other objects in an image and discerning subtle differences between images. This is a function of the deep network; different layers have learned to identify different items, and the system essentially “teaches itself” to sort and categorize items, even though no human has explicitly deconstructed the images and labeled each object.

What is reinforcement learning?

A variation of deep learning is reinforcement learning, hailed as one of 10 “breakthrough technologies” of 2017 by MIT Technology Review. Combined with deep learning, it’s used, for example, by software that controls self-driving cars.

Reinforcement learning goes beyond recognizing and categorizing items to actually choosing a course of action. The software “learns” by practicing a maneuver over and over again in a simulator, using slightly different parameters each time. When the results are good, that set of parameters is favored; instructions that caused negative results are less likely to be repeated. After many trials, the algorithm learns to choose the actions that produce the best outcomes. The “reinforcement” is feedback from the environment. In theory, the system can continue to learn, and improve its performance, indefinitely.
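The trial-and-feedback loop described above can be sketched as a simple “epsilon-greedy” strategy. The simulator and parameter values here are invented stand-ins for a real driving simulator (in this toy reward function, the parameter 0.7 is secretly best):

```python
import random

def simulate(parameter):
    """Stand-in for a simulator: one trial's reward for these settings."""
    return 1.0 - abs(parameter - 0.7) + random.gauss(0, 0.05)

def reinforcement_learn(candidates, trials=500, epsilon=0.1):
    """Mostly repeat the best-scoring parameters; occasionally explore."""
    random.seed(0)
    totals = {p: 0.0 for p in candidates}
    counts = {p: 0 for p in candidates}
    for _ in range(trials):
        if random.random() < epsilon:
            p = random.choice(candidates)        # explore a variation
        else:                                    # exploit the best so far
            p = max(candidates, key=lambda c:
                    totals[c] / counts[c] if counts[c] else float("inf"))
        reward = simulate(p)                     # feedback from the environment
        totals[p] += reward
        counts[p] += 1
    return max(candidates, key=lambda c: totals[c] / max(counts[c], 1))

best = reinforcement_learn([0.1, 0.4, 0.7, 0.9])
print(best)                                      # -> 0.7
```

Parameter sets that earned high rewards are chosen more often, instructions that caused poor results fade away, and the loop can keep running (and keep improving) indefinitely.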

What does it mean for eLearning?

Training deep learning and reinforcement learning systems requires a tremendous amount of computing power; until recently, only powerhouses like Google and Facebook could train AI systems to perform specific, data-intensive tasks. However, open source tools and other advances are putting these techniques within reach of corporate eLearning developers. And the potential benefits to eLearning are enormous.

Chatbot technology, for example, is improving rapidly and shows the potential of “personal” interactions to aid or reinforce eLearning. AI is already used in learning programs that adapt to each learner’s responses to provide a personalized eLearning experience. As the deep-learning abilities of eLearning become more sophisticated, eLearning will be able to adapt to each learner’s preferences, performance, and behavior—leading to greater engagement. Even “required” eLearning can move away from a model where all learners wade through the same modules, watching the same videos and reading text on material many already know. The program would take each learner through an individualized course covering just his or her weak areas, presenting content in a format and at a pace best suited to that learner.
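That individualized sequencing can be sketched minimally, assuming a pre-assessment quiz score per module (the module names and mastery threshold below are invented for illustration):

```python
def personalized_path(modules, quiz_scores, mastery=0.8):
    """Adaptive sequencing sketch: skip modules the learner has mastered.

    `modules` is the ordered course outline; `quiz_scores` maps a module
    name to the learner's pre-assessment score (0 to 1).
    """
    return [m for m in modules if quiz_scores.get(m, 0.0) < mastery]

course = ["safety basics", "reporting procedures", "advanced scenarios"]
scores = {"safety basics": 0.95, "reporting procedures": 0.6}
print(personalized_path(course, scores))
# -> ['reporting procedures', 'advanced scenarios']
```

A learner who aces the pre-assessment skips the familiar material entirely; a production system would also adapt the format and pacing of what remains.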


References

Caliskan-Islam, Aylin, Joanna J. Bryson, and Arvind Narayanan. “Semantics derived automatically from language corpora necessarily contain human biases.” 2016.

Crawford, Kate and Meredith Whittaker. “The AI Now Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term.” Summary of the AI Now symposium. 7 July 2016.

Hof, Robert D. “Deep Learning.” MIT Technology Review. 2013. Downloaded 6 March 2017.

Knight, Will. “Reinforcement Learning.” MIT Technology Review. 2017. Downloaded 6 March 2017.

Lewis-Kraus, Gideon. “The Great A.I. Awakening.” New York Times Magazine. 14 December 2016.

Sweeney, Latanya. “Discrimination in Online Ad Delivery.” Queue 11, no. 3:10. 2013.

Velusamy, Balasubramanian, S. Margret Anouneia, and George Abraham. “Reinforcement Learning Approach for Adaptive E-learning Systems using Learning Styles.” Information Technology Journal 12, no. 12: 2306-2314. 2013.
