Pity today’s mLearning developers. They already wear numerous hats just to do the essentials of their job. More hats than Don Draper. More hats than Pharrell Williams. More hats than TV shows on Netflix. Well, OK, that last one may be an exaggeration.

But the truth is that mLearning developers need to wear a lot of hats to do their jobs, and now they must add a new one: a QA engineer hat.

What is QA?

In some circles, particularly in the biopharmaceutical industry, quality assurance (QA) is defined as verification of processes, while quality control (QC) is verification of products. By that definition, QA would be an inappropriate term for testing and verifying mLearning products.

However, in the software-development industry, QA engineers verify software products, and software-development work tracks very closely to the tasks and responsibilities of today’s mLearning developer. To ensure that their “product” works for their end users, mLearning developers must QA test and validate their mLearning software applications.

While training professionals have always done “quality checks” of training courses through content reviews, dry runs, beta classes, and so on, the migration to eLearning and mLearning has steadily turned this process into a software quality check. And the role that handles that task is now equivalent to that of a software QA engineer.

Why is this important?

The advent of eLearning pushed us into relatively simple QA testing because we needed to test for things such as multi-browser compatibility and proper behavior on a specific LMS. The landscape for today’s mLearning developer is much more complex, however, and testing needs to be much more thorough. Learners now use a plethora of hardware devices, operating systems, and environments to run mLearning applications, making a more formal QA testing and validation system increasingly important.

How can an mLearning developer possibly hope to test and validate with all of the potential variables that will be used by their audience? The answer lies in an organized QA testing strategy, and that begins with an effective working knowledge of QA testing concepts.

The quality assurance framework

Let’s start by examining the categories of QA testing that will help organize our test strategy. Three commonly accepted categories serve as the focus areas for testing.

  • Functional testing: Tests that verify all technical functionality on all certified devices and platforms.
  • Non-functional testing: Tests for non-functional areas such as performance, security, and the user interface.
  • Acceptance testing: Tests and validation by subject-matter experts (SMEs) to determine whether the lesson meets content requirements for the target audience.

Functional testing

Of the three categories, functional testing is the most critical to the success of an mLearning lesson. The “old school” approach to functional testing was for the course developer to run through the course and make sure it behaved properly on their own workstation. That is clearly insufficient for mLearning development; you must include additional steps.

First, developers can no longer handle this task on their own. It may be reasonable to expect developers to test on one desktop environment, one tablet device, and one smartphone. However, if you need to validate on more devices than that, the best strategy is to establish a QA testing team, and you should establish that team before the development process begins, not just before the testing process begins.

In addition, a formal, written QA test plan should be moved up from a nice-to-have to a requirement. There are many excellent templates for QA test plans on the web, but if you want to avoid the formalities, you should at least include:

  • What features you are going to test
  • What platforms and devices you are going to validate

For example (a sketch for turning this matrix into a checklist follows the list):

  • All clicks
  • All audio
  • All animations and transitions
  • All hyperlinks, including all branching pathways
  • All triggers, as applicable
  • All variables, as applicable
  • On current versions of Google Chrome, Internet Explorer, Mozilla Firefox, and Apple Safari
  • On iPad 4 and Samsung Galaxy Tab S 10.5
  • On iPhone 6, Samsung Galaxy S5, and Windows Phone 8 or 8.1
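
One lightweight way to keep a plan like this honest is to encode the feature/device matrix in a short script and generate the checklist from it. Here is a minimal sketch in Python; the feature and environment lists simply mirror the examples above, and every name in the script is illustrative rather than a prescribed standard.

```python
# Generate a functional-testing checklist by crossing every feature
# with every certified environment. All names here are illustrative.

FEATURES = [
    "all clicks",
    "all audio",
    "all animations and transitions",
    "all hyperlinks, including branching pathways",
    "all triggers (as applicable)",
    "all variables (as applicable)",
]

ENVIRONMENTS = [
    "Chrome (current)", "Internet Explorer (current)",
    "Firefox (current)", "Safari (current)",
    "iPad 4", "Samsung Galaxy Tab S 10.5",
    "iPhone 6", "Samsung Galaxy S5", "Windows Phone 8.1",
]

def build_checklist(features, environments):
    """Cross features with environments; each pair is one test case."""
    return [(feature, env) for env in environments for feature in features]

if __name__ == "__main__":
    checklist = build_checklist(FEATURES, ENVIRONMENTS)
    print(f"{len(checklist)} test cases to verify:\n")
    for feature, env in checklist:
        print(f"[ ] Verify {feature} on {env}")
```

Even this tiny script makes the scale of the job explicit: six features across nine environments is already 54 separate verifications, which is a strong argument for that QA testing team.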

You should also be aware of several other types of functional tests that you may need, depending on your content and audience. (And even if you never use these tests, it will help you as a professional to be familiar with what they are.)

  • Load testing: Performance testing that identifies areas of the application that may cause unacceptable wait times for learners. Sophisticated tools are available for this testing, but an online stopwatch can often do the trick (see the timing sketch after this list).
  • Gorilla testing: Testing things that only a gorilla would do, such as clicking the browser’s back and forward buttons to see what happens. This often requires a little creativity, but it can be a very fun exercise!
  • Regression testing: A repeatable set of tests that you run after each new publish to confirm that existing functionality still works. It’s a good idea to do this at regular intervals during the development process.
  • Smoke testing: A high-level test to ensure nothing catches fire. OK, seriously, smoke testing is a set of tests that ensures the most important functions work. If they all pass, the product is stable enough to proceed with further testing.
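
In the spirit of the “online stopwatch” approach to load testing, a few lines of script can time how long each published lesson page takes to arrive and flag the slow ones. This is a rough sketch only; the URLs and the three-second threshold are hypothetical placeholders, and fetch time is just a proxy for what the learner actually experiences.

```python
import time
import urllib.request

# Hypothetical URLs for a published lesson -- substitute your own.
LESSON_URLS = [
    "https://example.com/mlearning/lesson1/index.html",
    "https://example.com/mlearning/lesson1/quiz.html",
]
THRESHOLD_SECONDS = 3.0  # an assumed "acceptable wait" -- tune for your audience

def time_page(url: str) -> float:
    """Fetch the full page body and return the elapsed seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # read the whole body, not just the headers
    return time.perf_counter() - start

if __name__ == "__main__":
    for url in LESSON_URLS:
        try:
            elapsed = time_page(url)
        except OSError as err:  # covers URLError, HTTPError, and timeouts
            print(f"FAIL         {url} ({err})")
            continue
        flag = "SLOW" if elapsed > THRESHOLD_SECONDS else "ok"
        print(f"{flag:4} {elapsed:6.2f}s  {url}")
```

Because the script is repeatable, it also doubles as a crude regression check: run it after each new publish and compare the results with the previous run.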

Non-functional testing

For mLearning content, the most important non-functional test is readability. This is especially true on smartphones, where your goal should be to let learners view at least the core content without zoom-in gestures. Optionally, you can add other non-functional tests such as user access, user security, conformance to marketing standards and product branding, and so on.
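
One readability problem that lends itself to an automated pre-check is a missing viewport meta tag: published HTML without one will usually render at desktop width on smartphones and force learners to pinch-zoom. The sketch below, with a hypothetical file path, checks for the tag in your published output; it does not replace viewing the content on a real phone, but it catches one common culprit early.

```python
import re
from pathlib import Path

def has_viewport_tag(html: str) -> bool:
    """Return True if the page declares a viewport meta tag."""
    return re.search(r'<meta[^>]+name=["\']viewport["\']',
                     html, re.IGNORECASE) is not None

if __name__ == "__main__":
    page = Path("published/lesson1/index.html")  # hypothetical published output
    html = page.read_text(encoding="utf-8")
    if has_viewport_tag(html):
        print(f"{page}: viewport tag found; smartphone rendering looks sane")
    else:
        print(f"{page}: NO viewport tag; expect pinch-zooming on smartphones")
```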

But no matter which non-functional tests you choose to include, make sure they get added to the test plan!

Acceptance testing

As with any type of training or learning content, it helps your SMEs if you define their responsibilities for reviewing and accepting your mLearning application. Unlike with other types of content, though, it can be helpful to use a little creativity in scoping the SME review.

For example, if you select SMEs with an eye to the devices they own and use, you can request or require that they test the content on more than one device, validating your application on additional devices in the process.

Of course, it is important to note that the scope of an SME review is often quite different from the scope you would use for functional testing validation. The typical scope of an SME review may include:

  • All written content
  • All narrated content
  • All assessments
  • Any omissions

If you are going to validate your mLearning for a device based on an SME review, you should add at least a few key functional tests to their acceptance test requirements. And as with functional and non-functional tests, make sure to include the SME’s review tasks in your QA test plan.

Applying the QA testing framework

In addition to the aforementioned strategies, there are two other key areas that can make your QA testing tasks much easier.

First, the development tool you select can dramatically affect how many interface issues you must fix to make your content work properly on each device. Just about every development tool now has a checkbox for HTML5 output, but the capabilities vary widely from tool to tool.

Before you start your project, test your tool’s output on the devices that your learners will use. DO NOT simply select an untested tool because it claims to output HTML5!

Next, the development process you use can also have a big impact on your QA testing burden. For example, if you use the ADDIE development model with storyboard reviews, you may face a mountain of surprises at the end of the development process when you finally put the lesson into mLearning mode. As a result, you may wind up doing significant redesign at a point when you should be focusing on adding final content to the lesson.

On the other hand, a rapid prototyping development process such as Agile or SAM can help to identify design issues up front and can even foster creativity with other features that you may not have envisioned in the content outline or design spec phase. In addition, reviewers will be able to offer more meaningful suggestions for making your application a success.

Saved by a disclaimer?

No matter how many devices you test and validate on, it is almost impossible to cover every possible device combination. If nothing else, think about how many of your validated devices will be upgraded within 12 months of your mLearning go-live. (The answer: probably most of them.)

Whenever possible, place a disclaimer statement at the beginning of your mLearning content. For example:

“This content is designed for desktop and mobile devices. It is best viewed on Chrome, Internet Explorer, or Firefox browsers, iPad or Galaxy Tab tablets, and iPhone, Galaxy, or Windows smartphones.”

Even if a learner does not have one of those devices, they will appreciate the warning and will be more willing to accept minor issues as they navigate through the content.

Your new hat

Congratulations! You’re now ready to wear your new hat.

The truth is that you have probably had this hat for quite a while but never thought of it as a QA hat. Perhaps it was simply a “check your work” hat or a “be meticulous” hat. But now that you have a QA hat, it should help you in several areas:

  • You will be able to organize your lesson testing more effectively
  • You will have better assurance that you are testing your lesson in all key areas
  • You will have a new vocabulary that will help to convince people that you know your stuff
  • You may (with a little luck!) be able to gain resources to help you with the testing process

Wear it well, and prosper.