At DevLearn 13, I had the honor of speaking on an exciting topic: Augmented reality (AR) and its uses in mobile learning. After the presentation I heard from a number of attendees who were equally excited about the possibilities and examples I shared, and many were also interested in discussing the technical and pedagogical challenges that we often dismiss because of the novelty associated with AR technology.

While I had collected numerous resources on AR in preparation for DevLearn, I wasn’t able to share all of them during my presentation. In addition, several attendees and peers have asked for links to the AR examples and resources I covered, so I’m making them available through this article.

Mobile augmented reality

What is “augmented reality”? AR comprises a live view of a real-world environment (“reality”) with computer-generated input (including sound, graphics, text, video, and GPS information) supplementing (“augmenting”) the visual elements in the view. In other words, AR provides us with an enhanced view of the real world. In spite of the impression of novelty that eLearning practitioners and their managers may have, AR has “been around” now for many years—it is not a new phenomenon.
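To make that definition a little more concrete, here is a minimal, purely conceptual sketch of what a mobile AR application does on every frame. It is not based on any particular AR toolkit; every function is a placeholder standing in for a device capability, and the Dow Day overlay text is just an illustration.

```python
# Conceptual sketch of a mobile AR frame loop. No real AR toolkit is used;
# every function here is a placeholder standing in for a device capability.

def capture_camera_frame():
    """Stand-in for the live camera view (the "reality" part)."""
    return {"image": "<camera pixels>"}

def read_context():
    """Stand-in for sensor input: GPS position, compass heading, orientation."""
    return {"lat": 43.075, "lon": -89.404, "heading": 270}

def find_overlays(context):
    """Look up computer-generated content (text, graphics, sound, video)
    relevant to where the user is and what the camera is pointed at."""
    return [{"type": "text",
             "content": "You are standing on the Dow Day protest route",
             "anchor": (120, 80)}]

def render(frame, overlays):
    """Composite the digital overlays on top of the live view (the "augmenting" part)."""
    for overlay in overlays:
        print(f"Drawing {overlay['type']} at {overlay['anchor']}: {overlay['content']}")

# A real app repeats these steps for every camera frame.
frame = capture_camera_frame()
context = read_context()
render(frame, find_overlays(context))
```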

Why might AR matter to the field of eLearning? Allow me to cite a key point made in a paper from the Open University: “eLearning designers, developers, and educators often lack clarity regarding the impact that a learner’s situation has on their learning.” (See References at the end of this article.)

Mobile augmented reality provides learning designers and educators with a new opportunity to start thinking more deeply about the mobile learner’s context and situation. In fact, the key thing to remember about mobile augmented reality is that it is about augmenting experiences in real-world environments, wherever the learner happens to be. AR technologies can take any situation, location, environment, or experience to a whole new level of meaning and understanding. AR is uniquely changing the way people learn with mobile devices.

A note of caution is in order before I go further. During my session at DevLearn, Nancy Proctor, head of mobile strategy and initiatives at the Smithsonian Institution, pointed out that there are many examples of ineffective augmented reality. These often involve applications whose creators could easily have achieved the same engagement with more basic approaches, such as static graphics. I agree with Nancy about the misuse of AR, and we can expect to see the consumer world exploiting the novelty and commercialism of AR in the coming years. (See the story by Stephen Vagus in the References at the end of this article.)

Having said that, I believe that some of the best opportunities to leverage AR technology for learning arise during situated activities or contextual experiences: where a person is, while a person is doing something. In other words, mobile. How big will this mobile AR opportunity be? According to Semico’s report on augmented reality (see References), over 864 million high-end cell phones will be AR-enabled by 2014, with revenues related to AR technology approaching $600 billion by 2016. While still in its infancy, mobile AR is starting to drive innovation within the education, gaming, medical, mobile, automotive, and manufacturing markets.

Classifying AR

There are many forms of AR (see Sidebar 1, and also the Common Craft video listed in References). My interest for this article lies specifically with mobile augmented reality as one of the most powerful forms of contextual mobile learning. In addition to the many examples of AR that utilize smartphones and tablets, I’m also interested in mobile wearables such as Google Glass that will provide us with even more options for contextual learning in the mobile AR landscape.

Sidebar 1: Augmented virtuality: similar to but not the same as mobile AR.

I’d like to point out an important distinction between mobile AR and a similar application. It is possible to augment virtual and real-world environments and to merge them together. This falls under the category of “augmented virtuality” and is really outside the scope of contextual mobile learning. Much interesting work has been done in the areas of virtual reality, mixed reality, and virtual worlds in education; however, as I’ve probably made clear, I’m most intrigued by how learning takes place in an augmented real world.

While I was collecting mobile AR examples during my research, the paper from the Open University (cited earlier) provided much-needed clarity and guidance on augmented reality’s implications and unique affordances for mobile learning. The authors offered a working definition of AR as the fusion of any digital information within real-world settings, i.e., the ability to augment one’s immediate surroundings with electronic data or information in a variety of media formats, including not only visual and graphic media but also text, audio, video, and haptic overlays.

The authors also addressed AR in a broader “reality” context and provided several important distinctions between virtual reality and mixed reality. However, I found four aspects unique to mobile AR worth mentioning. Combining mobile with AR fosters the use of the following types of information in support of learning:

  • The mobility of the user
  • The user’s geographical position
  • The physical place where learning can occur
  • Formal learning connections to informal learning

The authors also investigated a variety of device types that I wouldn’t think of as being truly “mobile.” However, the classification scheme they presented is well suited for analyzing today’s different forms of mobile AR. The authors of the paper classified AR according to these key aspects (Figure 1):

  1. Device/technology
  2. Mode of interaction
  3. Type of media used (sensory feedback method)
  4. Personal or shared experience
  5. Character of the experience
  6. Learning activities or outcomes

Figure 1: Classification table for different types of mobile augmented reality

Most of these classification categories are intuitive and don’t require much explanation. However, the “mode of interaction” category warrants some discussion. These modes involve either presenting passive information overlays to learners based on their physical location, movements, and gestures, or engaging learners in an exploratory mode in which they actively discover or create media nearby in order to solve a problem or meet characters from a story.

The authors pointed out that more modes of interaction could evolve in the future, although some, such as the “constructionist” mode, may be more relevant to specific knowledge domains (e.g., architecture or structural engineering), while the “active/exploratory” mode is more relevant to AR games.
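For designers who like to think in data structures, here is a small sketch of how the paper’s six-aspect classification might be represented, with the interaction modes modeled as an enumeration. This is my own illustrative Python sketch rather than anything from the paper, and the sample classification of Star Walk (one of the example apps listed later) reflects my assumptions, not the authors’ analysis.

```python
from dataclasses import dataclass
from enum import Enum

class InteractionMode(Enum):
    # Mode names come from the paper's discussion; descriptions are paraphrased.
    PASSIVE_OVERLAY = "passive information overlays tied to location, movement, or gesture"
    ACTIVE_EXPLORATORY = "learner actively discovers or creates media nearby"
    CONSTRUCTIONIST = "learner builds artifacts; suited to domains such as architecture"

@dataclass
class MobileARClassification:
    """The six aspects used to classify a mobile AR experience (Figure 1)."""
    device_or_technology: str           # 1. device/technology
    interaction_mode: InteractionMode   # 2. mode of interaction
    media_types: list[str]              # 3. type of media used (sensory feedback method)
    shared_experience: bool             # 4. personal (False) or shared (True)
    experience_character: str           # 5. character of the experience
    learning_outcomes: list[str]        # 6. learning activities or outcomes

# Illustrative only: one way Star Walk (see the examples below) might be classified.
star_walk = MobileARClassification(
    device_or_technology="smartphone or tablet with camera, GPS, and compass",
    interaction_mode=InteractionMode.PASSIVE_OVERLAY,
    media_types=["graphics", "text"],
    shared_experience=False,
    experience_character="in-situ exploration of the night sky",
    learning_outcomes=["identify stars, constellations, and satellites"],
)
print(star_walk.interaction_mode.value)
```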

Many of the technical and pedagogical challenges identified in the paper are common concerns often associated with designing and developing for mobile. However, some of the key AR concerns identified by the authors include:

  • The novelty of AR technology may detract from the learning experience
  • Using the AR technology may require tech support (if not easy to use and install)
  • The overlay of labels and features could harm observation skills through excessive reinforcement

This paper really helped me begin thinking about the different characteristics of mobile augmented reality and about how we as designers and developers might leverage augmented reality technologies for learning through this classification lens.

Mobile learning AR app examples

The classification scheme presented in the paper from the Open University was also useful in identifying the augmented reality examples I wanted to show at DevLearn. While there are many other examples of mobile AR out there, I selected the following examples primarily because they provide excellent models of using AR for contextual mobile learning experiences or performance support. I hope this list provides some ideas for those who want to get started using AR for mobile learning. While looking at these examples, consider reflecting on the categories in the classification scheme. What other examples of augmented reality for contextual mobile learning have you seen?

  1. Dow Day is one of the most widely known AR examples for mobile learning and was developed using the open-source Augmented Reality Interactive Storytelling (ARIS) platform. (ARIS video)
  2. Word Lens builds on the flash-card concept, providing real-time word translation using AR technology. (Word Lens video)
  3. Fun Maps for Kids augments a world map to provide more context when learning about the continents, geographical landmarks, and animals. (Fun Maps video)
  4. Star Walk is an augmented reality astronomy guide that provides a real-time view of the stars, constellations, and satellites when you point the camera at the sky. (Star Walk video)
  5. Leafsnap is a free electronic field guide for trees; it provides leaf-shape recognition and has thousands of photos of tree species’ flowers, fruit, bark, and more. (Leafsnap video)
  6. Anatomy 4D allows learners to explore human anatomy and was built using Qualcomm’s Vuforia platform. (Anatomy 4D video)
  7. DASH Smart Instrument Technologies is a portable surgical navigation system designed to assist orthopedic surgeons in performing knee and hip joint replacement procedures. (DASH video)
  8. HP Support’s performance support app helps you change the ink cartridges in select HP printers. (HP Support video)
  9. Audi’s augmented owner’s manual app shows the range of functions the car offers without having to read the manual. This app was developed using the Metaio platform. (Audi owner’s manual video)
  10. Volkswagen’s Mobile Augmented Reality Technical Assistance (MARTA) app provides service support. This app was developed using the Metaio platform. (MARTA video)
  11. Aurasma’s augmented reality app can be used to create “auras” that augment any object by triggering and loading a 3-D object, image, or pre-recorded video, and it is ideal for creating learning opportunities for training or performance support. Check out the two different examples using Aurasma below, and the conceptual sketch that follows this list.

    Combat Medic is a card game that is augmented by Aurasma and was created by the University of Central Florida’s METIL lab for the U.S. Army Research Lab. (Combat Medic video)

    The second example is a video captured by a mechanic using Aurasma; it shows how this AR technology, combined with pre-recorded videos, could be used to train novice mechanics as well as to provide performance support to more experienced ones. (Aurasma demo: mechanic)
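For those curious how an “aura” hangs together conceptually, the pairing of a recognized trigger (a card, an engine component) with an overlay to load can be pictured as a simple lookup. The sketch below is purely illustrative; it is not the Aurasma API, and the trigger names, asset paths, and recognizer are hypothetical.

```python
# Conceptual illustration of the trigger-to-overlay idea behind "auras".
# This is NOT the Aurasma API; all names and asset paths are hypothetical.

auras = {
    "combat_medic_card_07": {"overlay": "videos/chest_seal_demo.mp4", "type": "video"},
    "engine_oil_filter":    {"overlay": "models/filter_removal.obj",  "type": "3d_model"},
}

def recognize_trigger(camera_frame):
    """Stand-in for image recognition of the object the camera is pointed at."""
    return "engine_oil_filter"

def show_aura(camera_frame):
    trigger = recognize_trigger(camera_frame)
    aura = auras.get(trigger)
    if aura:
        print(f"Trigger '{trigger}' recognized: loading {aura['type']} overlay {aura['overlay']}")
    else:
        print("No aura registered for this object")

show_aura(camera_frame=None)
```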

Mobile AR content creation and development platforms

In some of the examples above I shared links to the AR creation apps or development platforms that their authors used. During my presentation at DevLearn, I also concluded with a list of the AR creation apps, tools, and development platforms that I’ve been exploring.

Mobile wearables and the future of AR

While many of the examples in the paper from the Open University addressed more than just mobile AR, the paper still provided a good foundation for thinking about the attributes of new types of mobile devices, such as wearables like Google Glass.

Believe it or not, there are many AR concepts already under development for Google Glass. In fact, Junaio, a well-known AR platform, recently announced support for Google Glass at the InsideAR conference. In terms of contextual mobile learning with Glass, there is the Field Trip app, which is also available on iOS and Android mobile devices. One of the most recent examples of contextual mobile learning I’ve seen on Glass utilizes a new feature called Vignettes. It’s a Word of the Day app that evolved out of combining the Glass app with social media.

The potential uses of Google Glass for performance support have been a hot topic this year, but it’s very compelling to see actual proofs of concept being developed for real-world situations. But what do wearables hold for the future of augmented reality and contextual mobile learning? While looking for wearable AR examples like Google Glass, I found a conceptual and futuristic video on Space Glasses. Future solutions like this seem to point toward a contextually rich mixed-reality environment. While there are mixed-reality AR examples available today, such as zSpace, they don’t allow for mobility the way wearables do. With the advent and adoption of wearable mobile devices, the future of AR could evolve into something I previously thought only Hollywood movies might portray.

References

Common Craft (June 10, 2010). “Augmented Reality—Explained by Common Craft.” (Free Version). http://www.youtube.com/watch?v=D-A1l4Jn6EY

FitzGerald, Elizabeth, Anne Adams, Rebecca Ferguson, Mark Gaved, Yishay Mor, and Rhodri Thomas. “Augmented Reality and Mobile Learning: The State of the Art.” 11th World Conference on Mobile and Contextual Learning (mLearn 2012). 2012. http://oro.open.ac.uk/34281/1/ARpaper_FINAL.pdf

Semico Research Corporation. Augmented Reality: Envision a More Intelligent World. October 2012. http://www.semico.com/content/augmented-reality-envision-more-intelligent-world

Vagus, Stephen. “Mobile Augmented Reality May Have a Bright Future.” Mobile Commerce News. 12 November 2013. http://www.qrcodepress.com/mobile-augmented-reality-may-bright-future/8524103/