Tag Archive | video

In the Trenches with RTCWEB and Real-time Video

The concept of video streaming seems extraordinarily simple. One side has a camera and the other side has a screen. All one has to do is move the video images from the camera to the screen and that’s it. But alas, it’s nowhere near that simple.

Cameras are input sources, but they have a variety of modes in which they can operate. Each camera has its own dimensions (width/height), aspect ratio (the ratio of width to height) and frame rate. Cameras are often capable of recording at selectable input formats, for example SD or HD, which dictate their pixel dimensions and aspect ratios (e.g. 4:3 or 16:9). If a camera opens in one format and switches to another, there can be a time penalty before video starts streaming again from the camera (thus switching modes needs to be minimized or avoided entirely). On portable devices, the camera can be oriented in a variety of ways and dynamically change its pixel dimensions and aspect ratio on the fly as the device is physically rotated.
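To make this concrete, here is a minimal sketch of opening a camera with preferred properties. It uses TypeScript against the modern MediaDevices API (which postdates the vendor-prefixed getUserMedia of this post’s era), so treat it as illustrative; the browser treats “ideal” values as hints, which is exactly why the code has to check what mode the camera actually settled on.

```typescript
// Minimal sketch: ask for a preferred camera mode, then inspect what we got.
async function openCamera(): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: {
      width: { ideal: 1280 },
      height: { ideal: 720 },
      aspectRatio: { ideal: 16 / 9 },
      frameRate: { ideal: 30 },
      facingMode: "user", // prefer the front camera where one exists
    },
  });
  // The browser may have opened a different mode than we asked for.
  const settings = stream.getVideoTracks()[0].getSettings();
  console.log(`camera opened at ${settings.width}x${settings.height} @ ${settings.frameRate}fps`);
  return stream;
}
```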

Some devices have multiple camera inputs (e.g. a front camera and a rear camera). The inputs need not be identical in dimensions or capability, and the user can switch between them on the fly. Further, there are even cameras that record multiple angles (e.g. 3D) simultaneously, but I’m not sure that should be covered right now, even though 3D TVs are all the rage (at least from Hollywood’s perspective).

If I could equate cameras to a famous movie quote: they are like a box of chocolates, you never know what you are going to get.

Cameras aren’t the only sources though. Pre-recorded video can be used as a source just as readily as a camera; it has a fixed width, height and aspect ratio, but it must still be accounted for as a potential video source.

The side receiving video typically renders it onto a display. These output displays are one type of video sink. There are other types of video sinks though, such as a video recording sink or even a videoconferencing sink. Each has its own unique attributes, and the output width and height of these video sinks vary greatly.

Some video recording sinks work best when they receive the maximum resolution possible, while others might desire a fixed width/height (as the recording is intended for later viewing on a particular fixed-size output device). When video is displayed in a webpage, it might be rendered at a fixed width/height, or there might be flexibility in the size of the output. For example, the page can have a fixed width but an adjustable video height (up to a maximum viewable area), or vice versa with the width being the adjustable axis. In some cases both dimensions can automatically adjust larger or smaller.

Some output areas are adjustable in size when manually manipulated by a user. In such cases the user can dynamically resize the output areas larger or smaller as desired (up to a maximum width and height). Other output screens are fixed in size entirely and can never be adjusted. Still other devices adjust their output dimensions based upon the physical rotation of the device.

The problem is: how do you fit the source into the video sink’s area? A camera can be completely different in dimensions and aspect ratio from the area of the video sink. The old adage about fitting a square peg into a round hole seems to apply.

In the simplest case, the video source (camera) would be exactly the same size as the output area or the output area would be adjustable in the range to match the camera source. But what happens when they don’t match?

The good news is that video can be scaled up or down to fit. The bad news is that scaling has several problems. Making a small image into a big image makes it appear pixelated and ugly. Making a bigger image smaller is better (though there are consequences for processing and bandwidth).

Aspect ratio is also a big problem. Anyone who’s watched a widescreen movie on a standard-screen TV (or vice versa) will understand this problem. There are basically three solutions. One is shrinking the wide image to fit into the narrow area and putting “black bars” above and below the image, known as letterboxing (or pillarboxing on the other axis). Another is expanding the image, while maintaining aspect ratio, until there are no black bars (with the side effect that some of the image is cropped because it’s too big to fit in the viewing area). The third is stretching the image, making everything look taller or fatter. Fortunately that technique is largely discredited, although still selectively used at times.
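The arithmetic behind these three strategies is simple enough to sketch. The following TypeScript is a minimal illustration (the mode names mirror CSS’s object-fit values, where “contain”, “cover” and “fill” express exactly these three choices):

```typescript
interface Size { width: number; height: number; }

// Compute the displayed image size when a source is fitted into a sink area.
// "contain" letterboxes/pillarboxes, "cover" scales up and crops the
// overflow, and "stretch" ignores aspect ratio entirely (the discredited one).
function fit(src: Size, sink: Size, mode: "contain" | "cover" | "stretch"): Size {
  if (mode === "stretch") return { ...sink }; // distorts the image
  const scaleX = sink.width / src.width;
  const scaleY = sink.height / src.height;
  // "contain" picks the smaller scale (bars fill the remainder);
  // "cover" picks the larger (overflow is cropped away).
  const scale = mode === "contain" ? Math.min(scaleX, scaleY) : Math.max(scaleX, scaleY);
  return { width: Math.round(src.width * scale), height: Math.round(src.height * scale) };
}

// A 16:9 camera into a 4:3 area: "contain" yields 640x360 (letterboxed),
// while "cover" yields 853x480 (sides cropped).
fit({ width: 1280, height: 720 }, { width: 640, height: 480 }, "contain");
```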

Some people might argue that displaying video using a letterboxing/pillarboxing technique is too undesirable to ever be used. They would prefer the video be stretched to fit the display area and any superfluous image edges automatically cropped off. Videophiles might gasp at such a suggestion, the very idea of discarding part of an image nearing sacrilege. In practical terms, it’s both user preference and context that determine which technique is best.

As an example of why context is important, consider video rendered to the entire view screen (i.e. full screen mode). In this context, letterboxing/pillarboxing might be perfectly acceptable, as those black bars become part of the background of the video terminal. In a different context, black bars in the middle of a beautifully formatted web page might be horrifically ugly and unacceptable under any circumstance.

The complexities of video are far from over. When users place video calls, the source and the video sink are often not physically located together, which means the video has to travel from the source on one machine/device to the video sink on another, across a network.

When video is transmitted across a network pipe, a few important considerations must be factored in. A network pipe has a maximum bandwidth that fluctuates with usage and saturation. Attempt to send too large a video and it will become choppy and glitch badly. Even where the network pipe is sufficiently large, bandwidth has a cost, so it’s wasteful to send a super-high-quality image to a device that is incapable of rendering it at the original quality. To conserve bandwidth, a codec is used to compress the images (the cost being that the bigger the image, the more CPU is required to compress it).
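As a rough illustration of why image size drives cost, a common back-of-envelope estimate multiplies pixels per second by a bits-per-pixel factor. The 0.1 factor below is an assumption for illustration only; real codecs vary enormously with content and settings:

```typescript
// Back-of-envelope bitrate estimate: pixels/second times bits-per-pixel.
// The default factor of 0.1 is an illustrative assumption, not a codec spec.
function estimateBitrateKbps(width: number, height: number, fps: number, bitsPerPixel = 0.1): number {
  return Math.round((width * height * fps * bitsPerPixel) / 1000);
}

estimateBitrateKbps(1920, 1080, 30); // ~6,221 kbps for 1080p30
estimateBitrateKbps(640, 480, 30);   // ~922 kbps for VGA at the same frame rate
```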

As a general rule…

  • a source should never send video images to the remote video sink that end up being discarded, or at a higher quality than the receiver is capable of rendering, as that wastes bandwidth as well as CPU processing power. For example, do not send HD video to a device only capable of displaying SD quality.

Too bad this general rule has to have an exception. There are cases, although rare, where the video cannot be scaled down before sending. Nonetheless, this exception cannot be ignored. Some servers offer pre-recorded video and do not scale it at the source, because doing so would require expensive hardware processing power to transcode the recording. Likewise, a simple device might be too underpowered, or too hard-wired to its output format, to scale the video appropriately for the remote video sink.

The question becomes: which end (source or sink) manipulates the video? And then there are the questions of how, and of what each side needs to know, to get the video into the correct format for the other side.

I can offer a few suggestions that will help. Again, as to the general rules (a sketch of the source-side selection logic follows this list):

  • A source should always attempt to send what a video sink expects and nothing more.
  • A source should never attempt to stretch the source image larger than the original source image’s dimensions.
  • If the source is incapable of completely adjusting the dimensions to match the video sink, it does so as much as it is capable, and the video sink must then finish the job of adjusting the image before final rendering.
  • The source must understand that the video sink can change dimensions and aspect ratio at a moment’s notice. As such, there must be a set of active “current” video properties of the video sink that the source is aware of at all times.
  • The “current” properties include the active width and height of the video sink (or the maximum width and height, should the area be automatically adjustable). The area needs to be flagged as safe for letterboxing/pillarboxing or not. If the area is unable to accept letterboxing or pillarboxing, then the image must ultimately be adjusted to fill the rendered output area; in such a situation the source could, and should, pre-crop the image before sending, knowing the final dimensions that will be used.
  • The source needs to know the maximum possible resolution the output video sink is capable of producing, so as not to waste its own CPU opening a camera at a higher resolution than will ever be rendered (e.g. an iPad sending to an iPhone). Unfortunately, this needs to be a list of maximum rendered output dimensions, as a device might have multiple combinations (such as an iPhone suddenly turned on its side).
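Putting these rules together, here is a sketch of the source-side selection logic. The SinkProperties shape is hypothetical (nothing here is a standardized RTCWEB structure); it simply captures the “current” properties described above: a maximum renderable size plus the letterbox-safe flag.

```typescript
interface Size { width: number; height: number; }

// Hypothetical shape for the sink's advertised "current" properties.
interface SinkProperties {
  maxWidth: number;
  maxHeight: number;
  letterboxSafe: boolean; // may the sink pad with bars, or must the area be filled?
}

function selectSendSize(src: Size, sink: SinkProperties): Size {
  let out = { ...src };
  if (!sink.letterboxSafe) {
    // Pre-crop to the sink's aspect ratio so the rendered area gets filled.
    const sinkAspect = sink.maxWidth / sink.maxHeight;
    out = out.width / out.height > sinkAspect
      ? { width: out.height * sinkAspect, height: out.height } // crop the sides
      : { width: out.width, height: out.width / sinkAspect };  // crop top/bottom
  }
  // Never upscale past the source, and never exceed what the sink can render.
  const scale = Math.min(1, sink.maxWidth / out.width, sink.maxHeight / out.height);
  return { width: Math.round(out.width * scale), height: Math.round(out.height * scale) };
}

// A 1280x720 camera toward a 640x480, letterbox-unsafe sink:
// pre-crop to 960x720 (4:3), then scale down to 640x480.
selectSendSize({ width: 1280, height: 720 }, { maxWidth: 640, maxHeight: 480, letterboxSafe: false });
```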

I’m skeptical that a reciprocal minimum resolution is ever needed (or even possible). For example, an area may be deemed letterbox/pillarbox unsafe and the image may just be too small to fit a minimum dimension (and thus would have to be stretched upon rendering). In the TV world, an image is simply stretched to fit upon output (typically while maintaining aspect ratio). Yes, a stretched image can become pixelated and that sucks, but there are smoothing algorithms that do a reasonable job within reasonable limitations. People playing DVDs on Blu-ray players with HD TVs are familiar with such processes, which magically upscale the DVD image to the HD TV’s output size. Perhaps a one-pixel-by-one-pixel source connected to an HD (1920×1080) output would be the extreme case of unacceptable, but what would anyone expect in such a circumstance? That’s like hooking up an Atari 2600 to an HD TV. There’s only so much that can be done to smooth out the image, as the source image quality just isn’t available. But that doesn’t mean the image shouldn’t be displayed at all!

Another special case happens when a source cannot be scaled down before transmission (for whatever reason) and the receiving video sink is incapable of scaling it down further for display (due to bandwidth or CPU limitations on the device). The CPU limitation might be known in advance, but the bandwidth might not. In theory the sink could report failures to the source and cause a scale-back in frame rate (i.e. cause the sender to send fewer images rather than smaller images). If CPU and bandwidth conditions are known in advance, then a maximum acceptable dimension and bandwidth could be elected by the video sink, and a source that cannot adjust its dimensions to match would simply be unable to connect.
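A minimal sketch of that frame-rate fallback follows. The feedback mechanism is hypothetical (in practice something like RTCP receiver reports or application signaling would carry the congestion signal); the point is only that a source which cannot resize frames can still halve how often it sends them:

```typescript
// Sketch: when frames cannot be made smaller, send fewer of them.
class FrameRateGovernor {
  constructor(private fps = 30, private readonly minFps = 5, private readonly maxFps = 30) {}

  // Sink (hypothetically) reported dropped or undecodable frames: back off hard.
  onSinkOverloaded(): void {
    this.fps = Math.max(this.minFps, Math.floor(this.fps / 2));
  }

  // Sink has been keeping up for a while: creep back up gently.
  onSinkHealthy(): void {
    this.fps = Math.min(this.maxFps, this.fps + 1);
  }

  // Gate each captured frame against the current target rate.
  shouldSendFrame(lastSentMs: number, nowMs: number): boolean {
    return nowMs - lastSentMs >= 1000 / this.fps;
  }
}
```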

Aside from the difficulties of building good RTC video technology, those involved in RTCWEB / WebRTC have yet to agree on which codecs are Mandatory to Implement (MTI), which isn’t helping things at all. Since MTI video is on the agenda for IETF 86 in Orlando, maybe we will see it happen soon. If there is a decision (and that’s a big IF), what is likely to happen is that there will be two or more MTI video codecs, which means we will need to support codec swapping and all the heavy lifting related thereto.

I have not even touched on the IPR issues around real-time video, but if patents around video were the only problem, perhaps RTCWEB would be ready by now. The truth is that video patents are not likely to be the biggest concern that needs to be addressed when it comes to real time video. It’s just that “doing it right” in a browser, using JavaScript, on various devices… is rather complex.

IETF 80 – Prague mobile roaming no workie, SIP to the rescue

Well, I am a little sad that I have to turn ON international mobile roaming with Bell in order to get my mobile phone working here (which it still is not), but all is not lost. I have been using FaceTime over WiFi on my MacBook Air and iPhone 4 to call my business partner and my wife back home. Kinda fitting actually: FaceTime is standards-based and is all about SIP and RTP, etc. Now if we can just get them to open up that API…

Keep on smiling!

Avaya Makes a Bold Move into the Video Collaboration Space

On September 15th, Avaya announced several new products that nicely round out its Unified Communications (UC) applications and endpoints portfolio. The product launch focused mostly on video conferencing and video collaboration. Unlike its arch-rival Cisco, Avaya has been lacking strong video capabilities, though it has been working closely with partners such as Polycom to provide end-to-end UC solutions to its business customers.

With its new Avaya Desktop Video Device and enhanced video support through Avaya Aura 6.0, Avaya is now able to deliver more comprehensive video conferencing capabilities on its own. The new Android-based device features a small form factor, touch-screen technology, HD video and audio, bandwidth efficiency, mobility (using WiFi, Bluetooth or 3G/4G via a USB plug-in) and a competitive price in the range of $3,000 to $4,000.

One of the most fascinating aspects of the new video device is the Avaya Flare experience. Avaya Flare is a user-centric UC interface with a spotlight in the middle that highlights ongoing communications sessions (IM, audio or video calling, and so on); on the right-hand side, a list of contacts arranged by source (corporate directory, Facebook, etc.) and searchable by name; and on the left-hand side, a list of applications (such as a calendar). The Flare interface allows users to conveniently drag contacts into the spotlight and choose a communication mode based on presence status and/or the user’s preference and purpose. With an easy click of a phone icon, for instance, all contacts in the spotlight are immediately joined into a conference call. Other possibilities include video, IM, email, social networking and slideshare. Web conferencing is built into Flare as well.

Avaya Flare

In essence, the Avaya Desktop Video Device is a high-end, SIP-based, multimedia endpoint that enables users to conveniently use a variety of communication modes to communicate and collaborate more effectively. While the price point is certainly high for the average phone user, for users looking for cost-effective video, the Avaya Desktop Video device offers a compelling alternative. Typical users of such videoconferencing endpoints can be found in the legal or healthcare sectors, for example. Dr. Alan Baratz demonstrated a scenario in a healthcare environment where a specialist doctor was contacted via video to properly diagnose a patient. For a busy, multi-tasking and typically mobile executive, this device can prove a highly effective communications and collaboration tool, competing with a Cisco CIUS or an iPad as well as emerging smart deskphones.

The good news for those looking for a smart interface, yet not crazy about video or unable to afford the premium price, is that Avaya plans to introduce the Flare experience on other devices as well. In the near term, Flare will be available on select Avaya 9600 series phones and eventually on smartphones. Integration with Microsoft Outlook for contact management, and the ability to control voice, conferencing, IM and presence, can turn the SIP deskphone into a smart device providing a single point of access to communication tools currently available on disparate endpoints (e.g. IM and presence on PCs and laptops, voice on phones, and so on).

Furthermore, Avaya one-X Communicator 6.0 will provide ad-hoc video conferencing capabilities to Aura customers looking to use their PC or laptop as their primary interface to multiple, integrated communication and collaboration tools. Presence and IM federation, tight integration with Outlook, Communicator, Microsoft Office, IBM Sametime and Lotus Notes, video interoperability across Avaya’s portfolio and third-party endpoints, and centralized management through Aura make Avaya’s one-X Communicator UC solution an appealing option for desk-bound knowledge workers and other heavy communications users.

Avaya also announced its Avaya Aura Collaboration Server – a virtualized platform delivering all Avaya Aura 6.0 core capabilities, including the Session Manager, Presence Services, Communication Manager and System Manager, on a single server. This is a cost-effective (list priced at $27K) solution for up to 50 users that allows businesses to leverage Avaya Flare and Avaya videoconferencing while avoiding a large CAPEX commitment.

Avaya also highlighted its professional and managed video services capabilities, which will be key in complex environments and with businesses lacking sufficient in-house expertise to deploy and manage advanced video applications on their own.

Finally, Avaya launched the Avaya web.alive Experience – a cloud/SaaS-based collaboration solution featuring a 3D environment with avatars. Avaya web.alive enables users to collaborate using audio or video conferencing and sharing presentations and other content. Businesses can license a “space” within that environment and then customize it based on their needs. It is also available for on-premises implementations when security and control are key concerns (for instance, in government deployments). While the avatars create the illusion of an immersive experience, their movement on the screen may be distracting to some users. They may wish to use a 2D version and still leverage the full range of collaboration capabilities available on the platform. The web.alive Experience is being touted as particularly effective in marketing and sales scenarios (when presenting to customers and demonstrating the capabilities of specific products or solutions) and in e-learning environments. The platform provides interesting analytics tools that can be used to assess the effectiveness of collaboration and each participant’s contribution to the collaborative process.

Some customers inquired about the possibility of Avaya delivering certain advanced features such as video call park, hold, transfer, and so on in the future. Avaya confirmed that it can eventually enhance the video capabilities using Aura. Avaya was also asked to substantiate its claims of significant hardware cost reduction compared to competitors. It responded that it had benchmarked itself against Polycom and Cisco/Tandberg and came up at a 20% to 30% cost advantage vis-à-vis Polycom and up to 70% cost advantage vis-à-vis Cisco.

During Q&A, Avaya also provided some clarifications around the deployment options for the new video solutions. All new capabilities are available with Aura 6.0; however, previous Aura versions, as well as IP Office, can be front-ended with the Collaboration Server in order to leverage existing infrastructure and take advantage of the new capabilities. Additionally, through Aura, other vendors’ telephony platforms can also be integrated with Avaya’s video solutions. Furthermore, Aura provides bridges between Avaya’s new SIP-based solutions and existing H.323 video systems.

With the new announcements, Avaya once again demonstrated its commitment to innovation and to continuously enhancing the value of its products and solutions. It has made some strong claims about the cost efficiencies and productivity benefits of its solutions, and it remains to be seen how those will be realized in individual customer scenarios. Also, Avaya has traditionally benefited from its more partner-centric approach (vis-à-vis Cisco’s one-stop-shop approach), including in the area of video collaboration, and it will be important for Avaya to continue to function effectively in a broader ecosystem. While the Aura architecture enables Avaya’s customers to leverage multi-vendor technologies for best results, it is possible some of its former partners may feel threatened by the new move. However, with the growing recognition of the value of videoconferencing in replacing costly travel and helping geographically dispersed teams collaborate more effectively, Avaya has rightfully sought to enhance its video capabilities. The new video solutions are likely to help it broaden its customer reach and add new sources of revenue.

Nortel yanked from NYSE?

Telecom equipment manufacturers are put to the test as the economy tightens.

Things are not looking all that rosy for Nortel these days. Canwest News had this to say:

With significant declines expected for the telecom equipment market in 2009, Nortel Networks Corp. runs the risk of losing market share regardless of whether it actually files for bankruptcy.

But while the company has denied it is pursuing insolvency protection and analysts say reports that it will are premature, others say a pre-emptive filing might not be a bad idea.

While Nortel is unlikely to face cash issues in 2009, UBS analyst Nikos Theodosopoulos said it might make sense to file in advance of a cash crunch.

Nortel on Thursday received notice from the New York Stock Exchange that it has six months to bring its average common share price back above $1 US, although the company said it is considering another share consolidation to remedy the problem.

Is this a reason to completely remove Nortel from consideration when shopping for a new small business phone system? No, it’s likely that they will get bought up or the government will bail them out (ugh, that is a whole other blog post), but if I were an SMB/SME looking at a Nortel solution, I would give it some extra thought.

Can Response Point compete with Nortel in the SMB/SME space? I certainly believe that Response Point, when combined with highly available Internet and VoIP SIP trunks, delivers great value and features not usually found in 50-seat-and-under IP PBX offers.

The most significant advantage Response Point has over the competition (not just Nortel) is that it’s so darn easy to manage. Check out the Response Point videos, and find more information on Response Point for Canada.
