Do you enjoy a challenge? My assignment was to determine if specific harvest skills had been transferred from our workshop to lead farmers and then on to individual farmers in remote villages in Rwanda. (Figure 1)

 

[Photo: villagers in Rwanda demonstrating their clickers]

Figure 1. Kirehe, Rwanda: Members from two farmer cooperatives (Kimaranzara and Tuzamure Ubukungu) holding audience response devices.

 

Completing this assignment required coming up with answers to four key questions:

  • How can you determine if new skills are being applied in a remote location?

  • How can you expand the size of a focus group and still ensure everyone is heard?

  • How can you get qualitative and quantitative data to analyze training outcomes?

  • How can you create an engaging experience for the participants and the interviewer to learn together?

I discovered that, with an audience response system, it is possible to collect meaningful data in the most remote places. In this article, I’ll share my experiences with you, along with some lessons learned that you can apply no matter where in the world your particular challenge is located.

The “back story”

The United States Agency for International Development (USAID) has been collaborating with the government of Rwanda to ensure adequate food supply for the citizens of this landlocked nation. One strategy has been to increase the local production of critical staple crops such as maize (corn) and beans.

To this end, our company (ACDI/VOCA) had been selected to work on a program to help 25,000 farmers reduce crop loss during and after harvest. Specifically, farmers would apply new skills to better collect, process, dry, and store their crops.

To reach this large number of farmers, we focused on existing agriculture cooperatives that could help us to facilitate cascade training. In cascade training, cooperatives identify and send lead farmers to our harvest training. After being trained, these lead farmers return to their communities and train other members of their cooperatives.

Twenty-four cooperatives were selected to participate in the program. We trained eight cooperatives in the first season. After that training, we assessed the program and made adjustments before training the other sixteen cooperatives.

In January 2011, the program trained 240 lead farmers from eight cooperatives in Kirehe, a southeastern district near the Tanzania border. In March, I went to Rwanda to assess the results of the cascade training. My assignment was to determine if specific harvest skills had been transferred from our workshop to lead farmers and then on to individual farmers in the remote villages of Rwanda.

Clicking: giving everyone in the room a voice

I used audience response devices from Turning Technologies. I had thirty responders (affectionately nicknamed “clickers”) and one handheld receiver. Individuals respond to multiple-choice questions with a clicker, and I capture the aggregate results in my handheld receiver (Figure 2).

 

[Photo: responder clicker with 12-button keypad; handheld receiver with four-way navigation buttons and a simple black-and-white results display]

Figure 2. Audience response devices from Turning Technologies: receiver (left) and responder/clicker (right)
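If you are curious about the mechanics, the aggregation itself is simple. Here is a minimal sketch in Python (purely illustrative; the function and data are hypothetical, and Turning Technologies supplies its own polling software) of how one multiple-choice question’s presses roll up into the kind of summary a receiver displays:

```python
from collections import Counter

def tally_responses(responses, options):
    """Summarize one multiple-choice question's clicker responses."""
    counts = Counter(responses)
    total = len(responses) or 1  # avoid division by zero on an empty poll
    for label, text in options.items():
        n = counts[label]
        pct = 100 * n / total
        bar = "#" * round(pct / 5)  # one '#' per 5 percentage points
        print(f"{label}. {text:<30} {n:>3} ({pct:5.1f}%) {bar}")

# Hypothetical example: a moisture-testing question
options = {
    "1": "Taste/bite test (traditional)",
    "2": "Moisture meter (newer method)",
    "3": "Other",
}
tally_responses(["1", "1", "2", "1", "3", "1", "2"], options)
```

Note that only the distribution is kept; individual presses stay anonymous, which matters when you later show participants the results.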

 

I met with over 130 members from all eight cooperatives. With clickers, I was able to increase the size of my focus groups to 15-30 people. (Figure 3)

 

[Photo: Rwandan farmers in a classroom using their clickers during a group activity]

Figure 3. COACMU and COACLMA farmer cooperatives

 

Focus groups are an effective way of collecting information, but the focus group leader must make efforts to compensate for outspoken participants, response conforming, withdrawal, and atypical responses. Using audience response devices allowed me to address each of these concerns and to provide a more comprehensive report to the project staff.

Outspoken participants. An interviewer dreads prolonged silence after asking a question, but even worse is a participant who monopolizes the time. The interviewer can try to call on others, but this can make people uncomfortable by thrusting them into the spotlight. With clickers, everyone has a chance to respond immediately, and everyone gets more time to prepare their comments. From the responses, the interviewer can ask questions such as, “I’d like to hear from those who selected option #2” to prompt different people, even those in the minority, to speak. This strategy was effective when I asked the participants about moisture testing. For farmers, testing moisture is critical to knowing when a crop is ready to sell. Many farmers are loudly adamant about using an older tasting technique. After asking which moisture-testing method the farmers used, I noticed that two participants had selected a newer method. I was able to draw out their story so that others could learn from their experience.

Response conforming. Participants might be inclined to adjust their initial responses to conform with perceived leaders or toward answers that appear to be more accepted. Clickers democratize responses, so the interviewer gets a truer impression of participant opinions. When I asked participants to tell me how many hours they spent training other farmers, verbal responses tended to cluster around the first answer given. When I used clickers, I got a broader range of answers (and typically more accurate ones, judging by the time estimates given by trainees).

Withdrawal. This can occur when there are outspoken participants (see above). However, withdrawal can occur in any focus group, as people become passive while waiting for a turn to speak. Clickers allow everyone to respond immediately after a question, and participants are eager to hear the results. Hearing how others have responded is valuable information for individuals assessing their own behaviors and performance. I have witnessed groups with clickers buzz with energy as they await the next question and, most importantly, the results.

Atypical responses. Focus groups can provide in-depth stories and examples, but an interviewer must determine how common an individual’s experience is. A compelling story might be unfairly weighted as an example of the collective experience, and others in a focus group may be reluctant to provide contradictory evidence. Clickers provide a foundation of quantifiable information that puts stories into context for frequency and commonality. One farmer told me about the long distance he had to travel to take his crops to market. Because clickers let me create ad hoc survey questions, a quick poll showed me that his situation, although a real challenge, was not a common problem.

Learning: are Rwandan farmers applying new skills?

Monitoring and evaluation is a critical process for reviewing activities and assessing performance. I loosely adopted Kirkpatrick’s four-level framework for reviewing training effectiveness.

My goals were to record trainee satisfaction and perceived relevance of the training (level 1), confirm learning of new skills (level 2), and identify skills being adopted in the fields (level 3). It was too soon to record whether the new skills resulted in less crop loss and more revenue for the farmers (level 4), but I could begin to collect their anticipated outcomes. (Figure 4)

 

[Photo: Rwandan farmers gathered in a rudimentary classroom, clickers in hand]

Figure 4. Remote visit to Indakemwa farmer cooperative
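One way to keep an assessment like this organized (a hypothetical structure of my own, not the program’s actual instrument) is to tag each clicker question with its Kirkpatrick level so that responses can be rolled up level by level:

```python
# Hypothetical question bank, each entry tagged with a Kirkpatrick level
QUESTIONS = [
    {"level": 1, "text": "Were you satisfied with the storage-skills training?",
     "options": {"1": "No", "2": "Yes", "3": "It was the most useful part"}},
    {"level": 2, "text": "How many farmers have you trained so far?",
     "options": {"1": "1-5", "2": "6-10", "3": "11-15", "4": "16-20", "5": "21+"}},
    {"level": 3, "text": "Which drying method do you now use?",
     "options": {"1": "Bare ground", "2": "Plastic sheet", "3": "Raised rack"}},
    {"level": 4, "text": "How much more of your harvest do you expect to save?",
     "options": {"1": "None", "2": "A little", "3": "A quarter or more"}},
]

# Count the questions at each level to check coverage of the framework
for level in range(1, 5):
    n = sum(1 for q in QUESTIONS if q["level"] == level)
    print(f"Kirkpatrick level {level}: {n} question(s)")
```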

 

Clickers facilitated my collection of information at every level. In the end, clickers cannot replace the richness of observations that comes from walking through fields and seeing skills in action. However, they do provide a foundation of data that makes observation visits to the farming fields more systematic.

Level 1: trainee satisfaction and perceived relevance

I reviewed the four stages of the harvest training (collect, process, dry, and store) with the participants. For each stage, I asked if they were satisfied with the skills they were taught; specifically, whether they believed the skills would make a positive impact on their farming.

The clicker results were nearly unanimous: participants believed all the skills would make an impact. Although this was helpful feedback, I wanted to delve deeper, so I then asked participants to click on the stages that were most helpful and least helpful.

The clickers showed me that participants were most satisfied with training in harvest collection and processing skills, and that they were least satisfied with the training in storage skills. I was then able to have a discussion as to how we could improve the storage training and make it more relevant.

Level 2: confirm learning of new skills

Typically, Kirkpatrick’s level 2 asks if the participants have acquired new knowledge or skills. As this was a cascade training program (training of trainers), I asked questions about how these lead farmers were delivering the training to other farmers.

With the clickers, I could rapidly collect information on the average number of farmers trained (16), the number of hours spent training each farmer (4), and which skills were well received (scheduling the harvest and drying crops on plastic sheets). I was also able to identify skills that were not being adopted.

Assessing the moisture content of stored grain is an important skill, yet the clicker results showed few people adopting the new techniques for measuring it. This led to in-depth discussions to identify sources of resistance and to make appropriate changes to our training materials.
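As a sketch of the level 2 arithmetic (the bins and presses below are hypothetical, not the actual survey data), a numeric clicker question maps each option to a value, and the values are then averaged across participants:

```python
# Hypothetical bins for "How many farmers have you trained?",
# mapping each clicker option to the midpoint of its bin
FARMERS_TRAINED = {"1": 3, "2": 8, "3": 13, "4": 18, "5": 23}

def mean_from_options(responses, option_values):
    """Average the numeric values behind a set of clicker presses."""
    values = [option_values[r] for r in responses if r in option_values]
    return sum(values) / len(values) if values else 0.0

presses = ["3", "4", "4", "2", "5", "4"]  # illustrative clicker presses
avg = mean_from_options(presses, FARMERS_TRAINED)
print(f"Average farmers trained per lead farmer: {avg:.1f}")
```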

Level 3: adoption of new skills

As with any training program, adoption is the linchpin for success. For cascade (train-the-trainer) training, I essentially had two questions: What skills did the trainers (lead farmers) adopt? What skills had they observed their trainees (individual farmers) adopting?

Clickers, like any self-reporting instrument, can be biased toward telling the interviewer what he or she wants to hear, so first-hand observation cannot be omitted. However, the clickers gave me a clear picture of what to look for and the type of questions to ask during my observation visits to the fields.

For example, the clickers helped me to see which skills trainers said were being adopted, and there was one glaring inconsistency. In level 2, trainers reported that trainees found drying grain on plastic sheets a great improvement over drying on the ground for grain quality. But here in level 3, trainers reported low adoption by farmers of plastic sheets for drying. Follow-up discussions quickly revealed that the more remote villages did not have access to the plastic sheeting. When I later conducted my observation visits, I was prompted to ask the farmers who were using plastic sheets where they had purchased their materials, and to talk to cooperative leaders about stocking plastic sheets in their supply stores.
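As a sketch of that cross-check (the percentages below are hypothetical), you can flag any skill that was well received in level 2 but shows low reported adoption in level 3:

```python
# Hypothetical per-skill shares from the clicker sessions (fractions 0.0-1.0)
skills = {
    "Schedule the harvest":     {"found_useful": 0.90, "adopted": 0.85},
    "Dry on plastic sheets":    {"found_useful": 0.88, "adopted": 0.30},
    "Measure moisture content": {"found_useful": 0.40, "adopted": 0.20},
}

# Flag glaring inconsistencies: popular in training, rare in the field
for name, s in skills.items():
    if s["found_useful"] >= 0.75 and s["adopted"] <= 0.50:
        print(f"Investigate in the field: {name} "
              f"(found useful: {s['found_useful']:.0%}, adopted: {s['adopted']:.0%})")
```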

Level 4: impact at the organization (cooperative) level

Here is the purpose of the program: reduce harvest losses to provide more food and income for farmers and their families. Although it was too early to measure this, I could use the clickers to collect information from farmers about current land size, historical losses, and anticipated loss reduction with the new skills. Although these forward-looking responses are conjecture and subject to environmental factors, the clickers allowed me to rapidly collect and aggregate information from the people closest to the fields.
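Here is a minimal sketch of what can be done with those forward-looking answers (every figure below is hypothetical, not from the actual survey): combine each farmer’s reported land size, typical yield, historical loss rate, and anticipated loss rate into an estimated harvest saved:

```python
# Each record holds one farmer's clicker-reported figures (hypothetical values)
farmers = [
    {"hectares": 0.5, "yield_kg_per_ha": 1500, "past_loss": 0.30, "expected_loss": 0.15},
    {"hectares": 1.0, "yield_kg_per_ha": 1800, "past_loss": 0.25, "expected_loss": 0.10},
    {"hectares": 0.8, "yield_kg_per_ha": 1600, "past_loss": 0.35, "expected_loss": 0.20},
]

# Harvest saved = land x yield x (historical loss rate - anticipated loss rate)
saved_kg = sum(
    f["hectares"] * f["yield_kg_per_ha"] * (f["past_loss"] - f["expected_loss"])
    for f in farmers
)
print(f"Anticipated harvest saved across respondents: {saved_kg:.0f} kg")
```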

Telling: sharing results with external stakeholders

The most effective reports contain both quantitative and qualitative information. Observation visits to farming fields and rural focus groups are typically qualitative in nature due to the limited sample size and the limited ability to collect standardized answers from everyone. Clickers helped me collect quantitative information, which I could then illustrate with specific qualitative examples from follow-up discussions. This avoided overemphasizing interesting but atypical stories.

To demonstrate clicker functionality to the external stakeholders (funding donors and other organizational staff) to whom I was reporting, I had the stakeholders use the clickers to guess the answers to questions I had asked in the field. For example, before presenting how many hours the average trainer spent training a farmer, or which training session was most popular, I had stakeholders click an answer. (Audience response systems also allow you to project clicker responses, in real time, into a PowerPoint presentation.) Not only did this demonstrate the use of the clickers, it showed that stakeholders were learning new information. And it invited active participation and meaningful discussion throughout the entire presentation!

Interviewer tips

No single method of data collection should be used exclusively; researchers must consider an appropriate balance of observation, interviews, and record review. For focus groups, I found the use of an audience response system to be engaging and informative. I have a few tips to offer, so that you can learn from my mistakes and experiences.

  • Introduce the clickers to focus groups by asking simple questions. I asked, “Will it rain tomorrow?” as my first question. This helped participants get familiar with the clickers. Also, the diverse results opened a discussion about how differing opinions were acceptable, even encouraged.

  • Show how results are aggregated and anonymous. After the first question, I walked around and showed everyone the results on my screen. They could see what information I had and that there was no way to identify them individually.

  • Share results after each question. Participants are extremely curious to see the aggregate results. It is the least we can do in providing information back to focus group participants who are volunteering their time to meet with us.

  • Avoid abstract scales. When I asked people to rate their satisfaction on a scale of 1-9 with 9 being high, I got blank stares. Several minutes of explanation didn’t help. It was more effective to say “Press 1 if you didn’t like it, press 2 if you did like it, press 3 if it was your favorite.” Each number needed to represent a specific answer. Scales can be too abstract for some people.

  • Explore several vendors. A number of companies offer audience response systems. Be sure to research your options before selecting a system that meets your organization’s needs.

But I don’t work in Rwanda, so how does this apply to me?

Not everyone works in rural Rwanda. However, if the definition of “remote location” includes any place that isn’t a standard meeting room, then there are many more places where an audience response system can be effective for collecting information.

At work, trainers can collect information from employees wherever they might gather – from the break room to the factory floor to the call center. In community workshops, facilitators can query participants in any indoor or outdoor setting. Even for assessing low-intensity behavioral-change efforts such as public messaging, evaluators can use clickers to interview groups of citizens rapidly and thoroughly.

Remote settings can be challenging, but an audience response system empowers trainers and evaluators to rapidly gather extensive information with a minimal amount of equipment. There are so many voices waiting to be heard, and you can be just a click away from listening. Good luck and happy clicking!