
Producing Content for Mobile Delivery


Preprocessing Audio
Before encoding, preprocess the audio and video tracks to maximize post-encoding quality. Preprocessing audio and video can be done within the timeline of an editor using available plug-ins. Alternatively, preprocess the audio separately from the video using an audio mastering application, then recombine the audio and video to build your master clip. If you process outside of a nonlinear editor, import the audio to the timeline and edit it to picture, replacing the original, unprocessed audio.

Perform low-pass and high-pass filtering on your audio (see Figure 1). This reduces frequencies that handsets cannot reproduce, given the limited frequency response of miniature speakers, as well as frequencies that will not encode well. AMR, for example, was designed as a speech codec, so it does not encode the low and high frequencies in music very well; it is important to reduce them prior to encoding. Finally, the filtering reduces frequencies that, once encoded, would interfere with the clarity of the voice.
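
If you script this stage rather than doing it in an editor, the combined high-pass/low-pass pass might look like the following sketch. Python with NumPy/SciPy is assumed here (the article does not prescribe a tool), and the 100Hz and 8kHz corner frequencies are example values to be tuned by ear for the target codec and handset speakers.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt

    # Load the master audio track (filename is hypothetical; assumes 16-bit PCM)
    rate, audio = wavfile.read("master_audio.wav")
    audio = audio.astype(np.float64) / 32768.0   # convert to floats in [-1, 1]

    # 4th-order Butterworth band-pass: high-pass at 100 Hz, low-pass at 8 kHz
    # (example corners; choose values that suit the handset speakers and codec)
    sos = butter(4, [100, 8000], btype="bandpass", fs=rate, output="sos")
    filtered = sosfiltfilt(sos, audio, axis=0)   # zero-phase filtering, no time shift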

You will also want to control the dynamics of the audio, using normalization, compression, and limiting. When the audio is normalized, the peak volume is identified and all of the audio is raised proportionally, maintaining the original dynamic range.
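
In code, peak normalization is a single gain applied to the whole clip. A minimal NumPy sketch, assuming floating-point samples in the -1 to 1 range:

    import numpy as np

    def normalize_peak(audio, target_dbfs=-3.0):
        # Find the loudest peak and scale every sample by the same factor,
        # so the peak lands at target_dbfs while the dynamic range is unchanged.
        peak = np.max(np.abs(audio))
        target = 10.0 ** (target_dbfs / 20.0)   # -3 dBFS is roughly 0.71 of full scale
        return audio * (target / peak)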

Proper compression creates a consistent audio level with a narrowed dynamic range. It raises the low-level audio and lowers the high-level audio, while maximizing the amount of audio signal going to the codec. Low-bitrate audio codecs typically do not have enough resolution to cleanly encode audio at low signal levels. Higher signal levels keep the codec working in the range where it sounds the best, with smoother sound quality.
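
A real compressor applies attack and release envelopes; the sketch below only illustrates the basic gain math with an instantaneous, hard-knee, per-sample version. The threshold, ratio, and make-up gain are example settings, not recommendations from the article.

    import numpy as np

    def compress(audio, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
        # Convert sample levels to dB, reduce anything above the threshold by the
        # ratio, then add make-up gain so the signal stays hot going into the codec.
        level_db = 20.0 * np.log10(np.maximum(np.abs(audio), 1e-9))
        overshoot = np.maximum(level_db - threshold_db, 0.0)
        gain_db = makeup_db - overshoot * (1.0 - 1.0 / ratio)
        return audio * (10.0 ** (gain_db / 20.0))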

Limiting creates a fixed "amplitude ceiling" to prevent overloading and distorting the audio codec with excess amplitude. Setting the ceiling 3dB to 6dB below full scale gives the audio codec enough headroom to prevent clipping.
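
The limiting stage then enforces that ceiling. A simplified sketch follows; production limiters use look-ahead gain reduction rather than clipping, and the -3 dBFS default is one of the headroom values mentioned above.

    import numpy as np

    def limit(audio, ceiling_dbfs=-3.0):
        # Hard amplitude ceiling so nothing overloads the audio codec.
        ceiling = 10.0 ** (ceiling_dbfs / 20.0)
        return np.clip(audio, -ceiling, ceiling)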

Once you have your final audio track, convert the sample rate to match the sample rate of the codec to be used at final encode. This increases the efficiency and accuracy of the encoding. You will get better results using an audio tool's higher-quality sample rate conversion than a video encoder's. Remember that noise is the archenemy of any encoding process: it robs bits that should be allocated to cleanly encoding the subject of the content. Noise reduction can be overdone, however, and should be used judiciously with a critical ear.
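
If you script the conversion yourself rather than relying on a mastering application, a polyphase resampler does the job. The sketch below assumes SciPy, with an 8kHz target given only as an example for an AMR narrowband encode.

    from math import gcd
    from scipy.signal import resample_poly

    def convert_sample_rate(audio, src_rate, dst_rate):
        # Polyphase resampling, e.g. 44100 Hz -> 8000 Hz for an AMR narrowband encode.
        g = gcd(src_rate, dst_rate)
        return resample_poly(audio, dst_rate // g, src_rate // g, axis=0)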

You may choose to convert stereo audio to mono. At the same bitrate, mono carries twice the data per channel, since the bitrate is not divided between two channels (128Kbps stereo = 64Kbps per channel, whereas 128Kbps mono = 128Kbps for the single channel).

Mono audio quality will be much better because there are twice as many bits per channel dedicated to reproducing sound. Not all handsets have stereo playback. Some content, usually music videos and movie trailers, may require stereo audio, but if you don’t need to use stereo, don’t.
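
Downmixing is a straightforward average of the two channels. A minimal sketch, assuming samples arranged as a samples-by-2 array of floats:

    import numpy as np

    def stereo_to_mono(audio):
        # Average left and right; expects shape (samples, 2), returns shape (samples,).
        if audio.ndim == 2 and audio.shape[1] == 2:
            return audio.mean(axis=1)
        return audio   # already mono, nothing to do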

Preprocessing Video
There are several things to remember when preprocessing your video. Crop the video edges: clean, well-defined edges encode best, while edge jitter robs bits needed to cleanly encode the subject. There is no reason to encode video overscan, and cropping it prevents wasting encoding bits on borders instead of on content.
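
One common way to do the crop in a batch workflow is to drive FFmpeg from a script. The command below is only an example: FFmpeg is an assumed tool choice, and the 8-pixel trim on each edge of a 720x480 master is an assumed value; measure your own source and adjust the crop=w:h:x:y geometry.

    import subprocess

    # Trim an assumed 8 pixels of overscan/edge jitter from each side of a
    # 720x480 master, leaving a 704x464 interior; audio passes through untouched.
    subprocess.run([
        "ffmpeg", "-i", "master.mov",
        "-vf", "crop=704:464:8:8",
        "-c:a", "copy",
        "master_cropped.mov",
    ], check=True)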

Also, remove letterboxing where possible; if you cannot remove it, make sure it is Super Black. Since handsets do not use interlaced displays, deinterlace content shot on interlaced video, converting the scanning from interlaced to progressive. This results in smoother motion and smoother edges.
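
Deinterlacing can also be scripted. As one example, again assuming FFmpeg, its yadif filter produces one progressive frame for each interlaced frame:

    import subprocess

    # Convert interlaced scanning to progressive before scaling down for handsets.
    subprocess.run([
        "ffmpeg", "-i", "interlaced_master.mov",
        "-vf", "yadif=mode=0",           # mode=0: one progressive frame per frame
        "-c:a", "copy",
        "progressive_master.mov",
    ], check=True)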
