Last week, I started off by talking about perhaps the oldest myth, that a Mac is better than a PC. There are other, more recent myths in video production for e-Learning that I’m going to discuss this week and I’ll start off with what is perhaps the most pervasive myth in video for e-Learning that I keep hearing.

More myths

Myth: HD is the only way to go

Production houses, internal e-Learning design departments, and e-Learning developers have come to believe that High Definition (HD) video is the only way to make video for e-Learning. This myth has been perpetuated since affordable HD video cameras became available starting in about 2004, but it is patently not true. Let me explain why you don’t need HD, how it slows your productivity, and how it doesn’t make a difference in your end product.

What is HD, really? There are many flavors. Are you going to work in 720p, 1080i, or 1080p? There are a lot of standards for HD. Is it really better than Standard Definition (SD) video? Is the color better? Is it really sharper? I'll start busting the myth right away: HD video is really no better than SD video. There's simply a lot more information (pixels) in HD, so it looks a lot sharper.

HD color is no better, and to make matters even more complicated, there is no single color space for HD. Just as there are lots of HD standards, there are several HD color spaces. Without getting too technical as to why, most HD signals are handled incorrectly by computer screens, which is essentially what flat-panel TVs are. Color is also perceptual, and no two sets look alike. The human eye is pretty tolerant of color variations, and minor variations are common for both of those reasons. HD video color is not really that different from SD video color. The color spaces are somewhat different, but HD doesn't offer a dramatically greater gamut (the range of different colors the video can capture), so there's no advantage there. And the screens typically attached to computers or laptops can't display the entire color space of video anyway, which is one more reason HD is not better than SD.

There's a strongly held belief that if you start out with a big picture and then turn that big picture into a small picture, the quality is better. But is it? When you reduce the video from its native capture size, whether HD or SD, you are reducing the number of pixels, not the size of the pixels. Since you're cutting pixels out, which pixels get cut? Who decides? Not us. The program we're using to edit or composite the video decides for us. If we go the other way and make the video larger than its native size, the quality drops because we're essentially "doubling up" on pixels (the sketch below illustrates both directions). Since we can't decide which pixels get taken out when we make the picture smaller, it's probably safer to start out with fewer pixels in the first place, since in most cases the video we develop for e-Learning is going to end up at smaller dimensions anyway.
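To make that concrete, here's a toy nearest-neighbor resampler in Python. It's only an illustration (the function and variable names are mine, and real editors use much smarter filters), but it shows both behaviors: shrinking a row of pixels silently throws most of them away, and enlarging it just duplicates them.

```python
# Toy illustration only (not the algorithm any particular editor uses):
# nearest-neighbor resampling of a one-dimensional row of "pixels",
# showing that shrinking discards samples and enlarging duplicates them.

def resample(row, new_width):
    """For each destination pixel, pick the nearest source pixel."""
    old_width = len(row)
    return [row[int(i * old_width / new_width)] for i in range(new_width)]

original = list(range(12))         # 12 distinct pixel values
smaller  = resample(original, 4)   # downscale: most source pixels vanish
larger   = resample(original, 24)  # upscale: every pixel is "doubled up"

print(smaller)  # [0, 3, 6, 9] -- eight of the twelve values are simply gone
print(larger)   # [0, 0, 1, 1, 2, 2, ...] -- each value now appears twice
```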

When it comes to e-Learning, the heart of the issue of SD vs. HD is editing and delivery. I’ll go a little backwards and put delivery first: What’s your end product going to be? How are you going to deliver the video part of your training? If it’s going to be delivered as Flash or QuickTime or Silverlight via the Web, then at most it will probably be 360 or so pixels wide. Any wider and you could have bandwidth issues. Any smaller, and you’ll have issues with visibility. At 360 X 240 pixels you have 86,400 pixels to display. That’s a lot less than the 2,073,600 pixels you start with in HD (well, at 1080 anyway). See below for the complete mathematical discussion.
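In the meantime, here's the quick version of the pixel arithmetic, laid out so you can check it yourself:

```python
# Pixels per frame: Web delivery size vs. 1080 HD capture.
delivery_pixels = 360 * 240      # 86,400 pixels in a 360 X 240 frame
hd_pixels       = 1920 * 1080    # 2,073,600 pixels in a 1080 HD frame

print(delivery_pixels)               # 86400
print(hd_pixels)                     # 2073600
print(hd_pixels / delivery_pixels)   # 24.0 -> only 1 captured pixel in 24 survives
```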

Editing

Video editing takes a lot of computer horsepower. A lot. Frequently, I've got Premiere Pro, Photoshop, and After Effects open at the same time. My primary computer has a very fast processor, 8 GB of RAM, and a copious amount of hard drive space, and I still sometimes have to wait for it to catch up with my editing. This is in SD, which, at 720 X 480, is about 345,600 pixels. If I'm working in HD at a resolution of 1920 X 1080 (about 2,073,600 pixels), the story is completely different. That's six times the number of pixels in SD video. Remember that, when rendering a video, each pixel is handled separately, so each frame takes about six times as long to render. Six times. A three-minute sequence that takes 10 minutes to encode in SD will now take an hour in HD. That's a reason not to do HD right there. We're still about two to three generations of processors, and probably of software, away from the point at which HD will be a truly efficient process. If you're shooting in 1080p and the final project will be on the Web, you're reducing the image from over two million pixels to 86,400 pixels, keeping only about one pixel in 24. Does that make sense? Well-exposed, steady SD will make video every bit as good as anything you shoot in HD. Period.
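As a rough sanity check, here's that estimate spelled out. It simply assumes render time scales with the number of pixels per frame, which is an approximation, not a measured fact about any particular encoder:

```python
# Rough estimate: if render time scales with pixels per frame,
# the same sequence costs about six times as much to encode in 1080 HD.
sd_pixels = 720 * 480        # 345,600 pixels per SD frame
hd_pixels = 1920 * 1080      # 2,073,600 pixels per 1080 frame

ratio = hd_pixels / sd_pixels        # 6.0
sd_encode_minutes = 10               # the three-minute sequence above

print(ratio)                         # 6.0
print(sd_encode_minutes * ratio)     # 60.0 minutes, i.e. about an hour
```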

Delivery

Take a look at different screen resolutions. Here's a Wikipedia page that defines most of the computer and video resolutions:

http://en.wikipedia.org/wiki/List_of_common_resolutions#Television

One caveat: don't get computer resolutions and video resolutions confused. They are two different things. Yes, computer screens are made of pixels, and yes, video screens are made of pixels. But the two behave differently in both editing and playback. To steal a title from a recent movie: "It's Complicated." But it's not hard.

Myth: You can’t stream video over an internal network (IT people … are you listening?)

Here's one that I've never understood. A gigabit Ethernet network, which most organizations have internally, never, ever gets saturated unless something is wrong with the network. A video with a constant stream rate of 1 megabit per second (and video bit rates aren't constant) would take a thousand simultaneous views to bring down the network segment carrying it. In addition, there are some very sophisticated methods to buffer and control network activity. The funny thing is that on a single network segment there are never going to be 1,000 people taking a training course simultaneously. And if there were, there's an easy way to control bandwidth: a Web feed that works very much like a Webinar.

Why is the internal intranet able to handle the traffic for video? The math is simple, but convoluted. Let's start with a picture the size of 360 X 240 (that's 86,400 pixels). If the video were uncompressed, you'd be sending that many 8-bit pixels down the pipe 30 times a second (the approximate frame rate). That math is indeed scary: over 20 million (86,400 X 30 X 8) bits of information every second. But that's uncompressed video, and nobody tries to push uncompressed video down a network or intranet; only commercial producers and Hollywood work with uncompressed video. Modern compression methods (H.264, MP4, MP2, FLV, whatever) get the stream rate down to about 100,000 to 120,000 bits per second. It's going to take a lot of simultaneous streams to put the hurt on a gigabit network. Here's an example: I just finished a video that runs 7 minutes and 30 seconds. The H.264-compressed file is 44,650,000 bits. Divided by 450 (the number of seconds in the video), that's 99,222 bits per second. Call it 100 Kbps. That's a rate that any network, wired or wireless, can handle easily with hundreds of simultaneous users. A hundred users would put about 10 Mbps of load on the network. That's not much, even if the gigabit network maxes out at half its rated bandwidth. I almost always hook up to the Web via wireless, and even the lower bit rate of wireless can easily handle any normal video you might be trying to watch. That's why you can watch YouTube video on your wireless connection without problems. Video just doesn't take up that much bandwidth.
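If you want to plug in your own numbers, here's that arithmetic in one place. The frame rate, bit depth, and stream rate are the round figures used above, not measurements of any particular network:

```python
# The bandwidth arithmetic from above, in one place.
width, height  = 360, 240
fps            = 30
bits_per_pixel = 8                       # the simplified figure used above

uncompressed_bps = width * height * fps * bits_per_pixel
print(uncompressed_bps)                  # 20,736,000 bits/s uncompressed

video_bits = 44_650_000                  # the 7:30 H.264 example
duration_s = 7 * 60 + 30                 # 450 seconds
stream_bps = video_bits / duration_s
print(round(stream_bps))                 # ~99,222 bits/s, call it 100 Kbps

gigabit = 1_000_000_000
print(gigabit / stream_bps)              # ~10,000 such streams on paper
print(100 * stream_bps / 1_000_000)      # 100 viewers ~ 10 Mbps of load
```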

So the next time your IT department tells you that you can’t stream video over your intranet, uh, tell them to read this.

Myth: Video costs a lot to make

Video used to cost a lot to make. Not any longer. There was a time when doing a live (or taped) remote like a sportscast or the county fair (I really did that for several years!) took a lot of equipment and electricity. Big, heavy, expensive cameras, lights, cables, microphones (and there was no such thing as a wireless mic then), sound boards, video switchers, two-phase 220-volt current, a bazillion amps and watts, and so on were the rule of the day. Then, if you were shooting something for broadcast later, you needed huge (2") tape machines, a big studio switcher with sync generators (a sync generator keeps the video frames from two tapes or cameras aligned with their interlacing), etc., etc., etc. It wasn't easy. Now you can take a camera that runs on a teensy battery and fits in the palm of your hand, a wireless microphone, and whatever light you've got, and you can make a video. Cameras can be had for less than $150. A good microphone you can take on location still costs more, but the sound is great. A little software and you're good to go. It's the democracy of video thing again. The quality is a different story. I'm 100% sure that a great videographer and storyteller can make great video from a Flip camera, but the technical quality of the video won't be great. It'll be grainy, have higher contrast, and lack subtlety in the mid-tones, where most of the visual information lives. All that said, if there's a good story, then it's a good video. And people will watch it. And they'll be able to get something out of it. Our tolerance for video that's not technically great has gone up. Thank you, YouTube.

Myth: Video takes a high degree of skill to make

Video is more complex than audio by an order of magnitude. You not only have sound; the pictures you place in the video have to be linked to the sound somehow. Then there's the visual processing that takes place along with the sound processing. These are all learnable skills. And we've been surrounded by video since the 1950s; we all know what it's supposed to look like. Almost anyone can figure out how to make video in one sense or another. Even if we're not all Ingmar Bergman, we can all tell stories. Video just doesn't take an extraordinary degree of skill or talent to pull off. You can do it. Yes, you can.

Myth: You can get away with 15 frames per second

There is a very good reason that, when film first started to become popular (in the late 1800s), there had to be a standardized frame rate. Nickelodeons were opening up all over the country to show moving pictures, and the reels of film needed to be transported from town to town so they could be viewed on the same equipment. After a lot of experimentation, the movie industry (and this happened worldwide, unlike the TV industry) settled on 24 frames per second. It was slow enough not to use up too much precious film and fast enough to give the impression of motion. The key word is "impression," because video and film do seem to jerk across the screen when seen at normal speeds. U.S. video has a frame rate of 29.97 frames per second. European video has a frame rate of 25 frames per second. Film is still 24 frames per second.

The human eye (Mark I eyeball to the military) doesn’t see in frame rates. We see a continuous stream of photons striking our retinas in patterns our brains put together to make up images. And even though 30 frames per second (fps) is considerably higher than 24 fps, there’s still a stuttering feel to action, especially when the camera moves across the scene (called panning) at that frame rate. I can see that stuttering look when I go to the movies. I can see it on TV. A frame rate of 30 fps still shows jerky motion when the scene is changing rapidly. A frame rate of 60 fps seems to be the point where successive black and white frames stop flickering and look grey, although there are individual variations. In TV, 60 fps is a whole lot of bandwidth and our standards were developed because of frequency allocation issues. In that old analog video world, frequency was the rate-limiting factor. In film, it was the capacity of the magazines and the ability of camera manufacturers to make cameras that were reliable with faster shutters. Cameras with high frame rates are difficult to make and break down frequently. The same is true of the projectors that would show the finished product. In digital broadcasting, whether on the Internet, over the air, or whatever, higher frame rates mean higher stream rates. Higher stream rates mean higher bandwidth. In the Internet and intranet world, bandwidth is everything. We all need to pay careful attention to bandwidth vs. frame rate.
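To put a rough number on the bandwidth-versus-frame-rate trade-off, here's a toy comparison that simply scales the roughly 100 Kbps stream from the earlier example with frame rate. Real codecs don't scale exactly linearly (inter-frame compression changes the picture), so treat these as ballpark figures:

```python
# Toy comparison: if a ~100 Kbps stream runs at 30 fps, what do other
# frame rates cost? Assumes linear scaling, which real codecs only
# approximate because of inter-frame compression.
base_fps, base_kbps = 30, 100

for fps in (15, 24, 30, 60):
    print(fps, "fps ->", round(base_kbps * fps / base_fps), "Kbps")
# 15 fps ->  50 Kbps   (cheaper, but motion looks jerky)
# 24 fps ->  80 Kbps
# 30 fps -> 100 Kbps
# 60 fps -> 200 Kbps   (smoother motion, double the bandwidth)
```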

If you're making any sort of video, or video-not-video, you'll want a frame rate of 24 fps or greater, unless for some reason you need extra jerkiness in your action, or you don't mind it. It doesn't have to be fast motion, just motion. And if you don't have motion in your video, then you should be making PowerPoint slides.

If you're using one of the newer cameras, many shoot at a frame rate called 24p. It's supposed to give a cinematic look. Not to me. Video still has a lower contrast ratio (the difference between the lightest and darkest parts of the image and the number of discernible steps between them). Video just isn't as good as film at capturing the soft look and extended gray scale of film, which is still closer to life. Someday video will be as good as film, but not yet.

Close

There you have it. Ten myths about video in e-Learning. There will be more myths. We work in a technological world, and technology doesn't stand still; it is continuously developing. One day, we'll be able to make video by recording our brains, I'm sure, but we're a long way from that now.