
Let's Work Together


So you’ve just arrived in San Francisco and picked up your rental car; you’re headed for a meeting somewhere in Silicon Valley. You get on the 101 heading south when it happens: You receive a "tech support" call from your father, asking about a problem he’s having printing or sending pictures of your children to your sister. So here you are, dodging traffic, trying to decide whether to take the Lawrence Expressway or El Camino Real, while at the same time asking your dad to give you details about what the onscreen error message says. It’s enough to drive you to distraction.

This article won’t help you with intrafamily relationships, nor will it offer advice on how to drive and use your computer at the same time, but it will provide an overview of a set of tools equally helpful for training, troubleshooting, sales, and family computing issues. Straddling the intersection of several technologies (including instant messaging, streaming, videoconferencing, application sharing, and presentation software), collaborative computing tools are an emerging—and essential—part of your computing toolbox.

The tools fall into four key areas: 1) desktop presentation tools, 2) desktop application tools, 3) groupware, and 4) enhanced video collaboration tools, including rich media recording. The second half of this article will cover these areas; in the first half, let’s explore where it all began.

Let’s Start at the Very Beginning …
Back in the early 1990s, videoconferencing systems were used for basic talking-head meetings. A meeting might consist of several physical rooms, each run by an operator and filled with participants; participants in each room saw themselves on one monitor (the "vanity monitor," as those of us who operated the rooms called it) and saw participants in another room on a second monitor. The rooms were connected by something called a multipoint control unit (MCU) that allowed the operators in any room to switch the video from room to room (that second screen) depending on who was talking. Some MCUs even switched the video automatically based on the current predominant speaker—someone who, say, spoke for at least 5 seconds.
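
Conceptually, that predominant-speaker switching works something like the following minimal Python sketch. The function name, the audio-level inputs, and the threshold handling are my assumptions for illustration, not the logic of any actual MCU:

import time

SWITCH_AFTER_SECS = 5  # a room must hold the floor this long before we cut to it

def pick_broadcast_room(audio_levels, current_room, state):
    """Return the room whose video the MCU should broadcast.

    audio_levels: dict mapping room name -> current audio level (higher = louder).
    state: dict remembering the candidate room and when it took the floor.
    """
    loudest = max(audio_levels, key=audio_levels.get)
    if loudest != state.get("candidate"):
        # A different room just became loudest; restart its timer.
        state["candidate"] = loudest
        state["since"] = time.monotonic()
    elif (loudest != current_room
          and time.monotonic() - state["since"] >= SWITCH_AFTER_SECS):
        return loudest  # the candidate has been predominant long enough; switch
    return current_room

Run every fraction of a second against per-room audio meters, logic like this keeps the video from flapping between rooms every time someone coughs.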

The systems were adequate for talking heads, but the way the information was presented was substandard, often consisting of a camera focused on one part of the table where participants sat. Subsequent systems added document cameras, small self-contained units that had overhead lights, a lightbox, or a combination of both. Papers would be placed underneath the document camera and displayed intermittently, providing a giant image of a paper or acetate, with a hand reaching into the image occasionally to scratch notes on the paper if other participants requested changes to the document.

This problem is not that far removed from basic video production for streaming in a corporate environment, where a videographer shooting with a single camera has to move back and forth between the presenter and the screen or projected image. And just as rich media recorders, such as those from Accordent or Sonic Foundry, capture video of the presenter alongside a separate copy of the presentation, videoconferencing systems needed to find a way to present PowerPoint and the other computer graphics tools that were then emerging.

The biggest problem, though, was bandwidth. Most videoconferencing systems used T1 or lower data rates, and most video codecs required at least 768Kbps to deliver adequate face quality for talking heads. Making matters worse, the MCU was integrated into some lower-cost videoconferencing systems (the ones made by Polycom that would later become popular with enterprise customers), which greatly exacerbated the bandwidth issue, since video from all locations in a videoconference (the "points" or "nodes") needed to flow into that single point, which would then switch the video and send it back out. This meant a facility with a T1 that needed to do a four-point call could only use 384Kbps for each node on the call.
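
To make that arithmetic concrete, here is a quick back-of-the-envelope calculation; the constant and function names are mine, purely for illustration:

T1_PAYLOAD_KBPS = 1536  # a T1's usable payload: 24 channels x 64Kbps

def per_node_kbps(line_kbps, nodes):
    # With the MCU built into one endpoint, every node's video stream must
    # pass through that site's line, so its capacity is split across all nodes.
    return line_kbps // nodes

print(per_node_kbps(T1_PAYLOAD_KBPS, 4))  # prints 384: Kbps left for each node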

As is true of many technological innovations, a small company invents a solution and then licenses it to the big players, each branding it with its own special name and supplementary tools. In the collaborative computing space, the solution came from a company called DataBeam, which was later acquired by Lotus and eventually folded into IBM’s Lotus Sametime. DataBeam’s answer to the videoconferencing data presentation problem took root, grew into a standard, and became, arguably, the foundation of all collaborative computing systems on the market today, albeit in many derivations.

The product, FarSite, and the technology, T.120, allowed graphic images to be presented at very low frame rates—sometimes as low as one frame every 2 seconds—on a separate channel from the main videoconferencing system. The low bandwidth, often less than 128Kbps, was necessary since that bandwidth had to be stolen from various areas of the traditional videoconference.
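
A rough budget shows why even that trickle was workable for static graphics (my arithmetic, not DataBeam’s specification):

channel_kbps = 128        # data channel carved out of the videoconference
seconds_per_frame = 2     # one graphics update every 2 seconds
kilobytes_per_frame = channel_kbps * seconds_per_frame / 8
print(f"~{kilobytes_per_frame:.0f} KB available per screen image")  # ~32 KB

Roughly 32KB is ample for a compressed page image, which is why a slow-but-sharp data channel beat squeezing slides through the video codec.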

By 1996, DataBeam’s T.120-based technology had been licensed by more than 40 software and hardware manufacturers and service providers. Companies such as Apple, British Telecom, Cisco Systems, MCI, Microsoft, PictureTel, and Sun Microsystems were using DataBeam’s toolkits to add real-time collaborative capabilities to products as varied as multimedia microprocessors, desktop operating systems, MCUs, and videoconferencing applications. FarSite also included document collaboration, based on a product DataBeam released in 1993 that introduced a radical new tool: the shared whiteboard.
