
Stop Talking to Yourself. Go beyond the RTCWEB Silo!

RTCWEB / WebRTC is designed to let two or more browser-enabled devices communicate P2P (peer-to-peer) with audio, video or data. But there’s a big catch. The browsers can’t communicate out of the box unless some undefined “external process” gathers information about each browser and hands the information to the other browser.

This mystical external process is known as “on the wire signaling”. Gathering the information a browser/peer needs in order to communicate isn’t incredibly difficult for a moderately talented programmer, nor is exchanging that information: all that’s required is some kind of go-between web server and a socket or two. This solution is relatively simple, and other companies are already setting themselves up to provide exactly that kind of service.
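
To make that concrete, here is a rough TypeScript sketch of what the browser side of such an exchange might look like, assuming a hypothetical WebSocket relay at signal.example.com. The relay URL, the message shape and the "type" field are illustrative assumptions, not any defined protocol; only the RTCPeerConnection calls are standard WebRTC.

    // A minimal sketch: browser-side signaling over a hypothetical WebSocket relay.
    // The relay URL, message shape and "type" field are assumptions, not a defined protocol.
    const signal = new WebSocket("wss://signal.example.com");
    const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.com" }] });

    // Relay our ICE candidates to the other browser as they are gathered.
    pc.onicecandidate = (e) => {
      if (e.candidate) signal.send(JSON.stringify({ type: "candidate", candidate: e.candidate }));
    };

    // Caller side: create an offer and hand it to the relay.
    async function call() {
      await pc.setLocalDescription(await pc.createOffer());
      signal.send(JSON.stringify({ type: "offer", sdp: pc.localDescription }));
    }

    // Handle whatever the relay forwards from the remote peer.
    signal.onmessage = async (msg) => {
      const data = JSON.parse(msg.data);
      if (data.type === "offer") {
        await pc.setRemoteDescription(data.sdp);
        await pc.setLocalDescription(await pc.createAnswer());
        signal.send(JSON.stringify({ type: "answer", sdp: pc.localDescription }));
      } else if (data.type === "answer") {
        await pc.setRemoteDescription(data.sdp);
      } else if (data.type === "candidate") {
        await pc.addIceCandidate(data.candidate);
      }
    };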

But that kind of signaling quickly becomes unwieldy to manage in the real world and misses many critical use cases and components in larger deployments. The overriding presumption in such a model is that both ends already want to communicate; it does not define how they want to communicate, let alone address the very complex security issues involved.

So what does make up a robust and complete P2P communication solution?

A well thought out P2P solution should address these concerns:

  1. Initiation of communication between peers that are not actively expecting communication
  2. Exchanging the types of communication desired (audio/video/text/etc.)
  3. Give peers the option to accept or decline communication
  4. Allow a peer to disengage from communication gracefully at any time
  5. Changing the nature of the communication at any time (adding or removing media types like audio/video/text, media on hold, transferring sessions to other participants, etc.)
  6. Handle users’ identities so that users on independent systems can interoperate (and identify themselves when communicating)
  7. Handle users logged into multiple locations as the same user
  8. Find users to communicate with by their known identities (social, generic, 3rd party, etc)
  9. Validate the identity of the user you think you are connecting with
  10. Secure communication channels in a way that even servers involved in the “communication setup” are not able to decrypt information exchanged between peers
  11. Handle group conversations amongst peers without needing servers to relay the data
  12. Handle communication to applications outside the browser (e.g. interoperate with mobile apps)

A well designed P2P platform should enable users on various websites to talk beyond each respective web silo. Users of one website can find and communicate with users on another website, and even with users on mobile devices.

It should work with your existing identity model. Alice and Bob on your website are still known as Alice and Bob in the P2P network. You don’t need to administer and map a separate database of usernames and passwords, as would be required with other legacy signaling protocols.

The network should allow users to locate other users by their social IDs, phone numbers, email addresses or by using your own custom-defined identities – social or otherwise. It should be built with strong security in mind: each user has their own private and public key pair, which, when tied to an identity model, yields strong proof of identity and completely private communication between peers.
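
As a rough sketch of that idea (not the Open Peer wire format), a browser could generate a per-user key pair and sign an identity assertion with the standard WebCrypto API. The assertion payload below is purely illustrative:

    // Sketch: a per-user key pair and a signed identity assertion via WebCrypto.
    // The assertion payload below is purely illustrative, not the Open Peer format.
    async function createSignedIdentity(userId: string) {
      const keys = await crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" },
        true,
        ["sign", "verify"]
      );
      const payload = new TextEncoder().encode(
        JSON.stringify({ id: userId, issued: Date.now() })
      );
      const signature = await crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" },
        keys.privateKey,
        payload
      );
      // The public key travels with the assertion so peers can verify the signature.
      const publicKey = await crypto.subtle.exportKey("jwk", keys.publicKey);
      return { payload, signature, publicKey };
    }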

A developer should be able to take the open source libraries and rapidly build and deploy powerful client applications with all of these features built in, without the headache of managing a communications network. No web developer I have ever met has volunteered to be the one figuring out the complex ins and outs of everything a good P2P design will resolve. So, I ask you: do you as a developer really want to be stuck in a little silo of communication, maintaining your own custom communication signalling protocol?

If you are looking to leverage WebRTC in a browser, or if you just want to build a powerful communications feature into an app, you owe it to yourself to do the research. Before you plunge headlong into your project and find out the tech you chose was not up to the challenge, take a look around. Libraries like the one found in the Open Peer project could very well fit the bill.

Authored by Robin Raymond, edited by Erik Lagerway

In the Trenches with RTCWEB and Real-time Video

The concept of video streaming seems extraordinarily simple. One side has a camera and the other side has a screen. All one has to do is move the video images from the camera to the screen and that’s it. But alas, it’s nowhere near that simple.

Cameras are input sources, but they have a variety of modes in which they can operate. Each camera has its own dimensions (width/height), aspect ratio (the ratio of width to height) and frame rate. Cameras are often capable of recording at selectable input formats, for example SD or HD, which dictate their pixel dimensions and aspect ratios (e.g. 4:3 or 16:9). If a camera opens in one format and switches to another, there can be a time penalty before video starts streaming again from the camera (so switching modes needs to be minimized or avoided entirely). On portable devices, the camera can be oriented in a variety of ways and can dynamically change its pixel dimensions and aspect ratio on the fly as the device is physically rotated.

Some devices have multiple camera inputs (e.g. front camera or rear camera). The inputs need not be identical in dimensions or capability, and the user can switch between them on the fly. There are even cameras that record multiple angles (e.g. 3D) simultaneously, but I’m not sure that should be covered right now, even though 3D TVs are all the rage (at least from Hollywood’s perspective).
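
For example, in a browser the camera and its mode are typically requested through getUserMedia constraints. A small sketch follows; the specific resolution and frame-rate values are illustrative only:

    // Sketch: list available cameras and open the rear-facing one in an HD mode.
    // The specific width/height/frameRate values are illustrative only.
    async function listCameras(): Promise<MediaDeviceInfo[]> {
      const devices = await navigator.mediaDevices.enumerateDevices();
      return devices.filter((d) => d.kind === "videoinput");
    }

    async function openRearCamera(): Promise<MediaStream> {
      return navigator.mediaDevices.getUserMedia({
        video: {
          facingMode: { ideal: "environment" }, // rear camera, when the device has one
          width: { ideal: 1280 },
          height: { ideal: 720 },
          frameRate: { ideal: 30 }
        },
        audio: false
      });
    }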

If I could equate cameras to a famous movie quote: they are like a box of chocolates, you never know what you are going to get.

Cameras aren’t the only sources though. Pre-recorded video can be used as a source just as much as a camera. Pre-recorded video has a fixed width, height and aspect ratio, but it must be considered as a potential video source.

The side receiving video typically renders it onto a display. These output displays are one type of video sink. There are other types of video sinks, though, such as a video recording sink or even a videoconferencing sink. Each has its own unique attributes, and the output width and height of these video sinks vary greatly.

Some video recording sinks work best when they receive the maximum resolution possible, while others might desire a fixed width/height (because the video is intended for later viewing on a particular fixed-size output device). When video is displayed in a webpage, it might be rendered to a fixed width/height, or there might be flexibility in the size of the output. For example, the page can have a fixed width but an adjustable video height (up to a maximum viewable area), or vice versa with the width being the adjustable axis. In some cases both dimensions can adjust automatically, larger or smaller.

Some output areas are adjustable in size when manually manipulated by a user. In such cases the user can dynamically resize the output areas larger or smaller as desired (up to a maximum width and height). Other output screens are fixed in size entirely and can never be adjusted. Still other devices adjust their output dimensions based upon the physical rotation of the device.

The problem is how do you fit the source size into the video sink’s area? A camera can be completely different in dimensions and aspect ratio than the area for the video sink. The old adage “how do you fit a square peg in a round hole” seems to apply.

In the simplest case, the video source (camera) would be exactly the same size as the output area or the output area would be adjustable in the range to match the camera source. But what happens when they don’t match?

The good news is that video can be scaled up or down to fit. The bad news is that scaling has several problems. Making a small image into a big image makes the image appear pixelated and ugly. Making a bigger image smaller is better (except there are consequences for processing and bandwidth).

Aspect ratio is also a big problem. Anyone who’s watched a widescreen movie on a standard-screen TV (or vice versa) will understand this problem. There are basically three solutions. One is shrinking the wide image to fit into the narrow area and putting “black bars” above and below the image, known as letterboxing (or pillarboxing on the other axis). Another is expanding the image while maintaining aspect ratio until there are no black bars, with the side effect that some of the image is cropped because it’s too big to fit in the viewing area. The third is stretching the image, making everything look taller or fatter. Fortunately that technique is largely discredited, although still selectively used at times.
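
Here is a small sketch of the two aspect-preserving options (letterbox/pillarbox versus crop-to-fill). It is pure geometry, independent of any WebRTC API:

    // Sketch: scale a source into a sink area while preserving aspect ratio.
    // "fit" letterboxes/pillarboxes; "fill" covers the area and crops the overflow.
    function scaleToSink(
      srcW: number, srcH: number,
      sinkW: number, sinkH: number,
      mode: "fit" | "fill"
    ) {
      const scale = mode === "fit"
        ? Math.min(sinkW / srcW, sinkH / srcH)
        : Math.max(sinkW / srcW, sinkH / srcH);
      const width = Math.round(srcW * scale);
      const height = Math.round(srcH * scale);
      // In "fit" mode the leftover area becomes black bars;
      // in "fill" mode anything outside the sink is cropped away.
      return { width, height, offsetX: (sinkW - width) / 2, offsetY: (sinkH - height) / 2 };
    }

    // Example: a 16:9 camera into a 4:3 area.
    // scaleToSink(1280, 720, 640, 480, "fit")  -> 640 x 360, bars above and below
    // scaleToSink(1280, 720, 640, 480, "fill") -> 853 x 480, sides cropped off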

Some people might argue that displaying video using a letterboxing/pillarboxing technique is too undesirable ever to be used. They would prefer the video be stretched to fit the display area and any superfluous image edges automatically cropped off. Videophiles might gasp at such a suggestion, since the very idea of discarding part of an image verges on sacrilege. In practical terms, both user preference and context determine which technique is best.

As an example of why context is important, consider video rendered to the entire view screen (i.e. full screen mode). In this context, letterboxing/pillarboxing might be perfectly acceptable, as those black bars become part of the background of the video terminal. In a different context, black bars in the middle of a beautifully formatted web page might be horrifically ugly and unacceptable under any circumstance.

The complexities for video are far from over. When users place video calls, the source and the video sink are often not physically located together. That means that the video has to go from the source to the video sink located on different machines/devices and across a network.

When video is transmitted across a network pipe, a few important considerations must be factored in. A network pipe has a maximum bandwidth that fluctuates with usage and saturation. Attempt to send too large a video and it will become choppy and glitch badly. Even where the network pipe is sufficiently large, bandwidth has a cost, so it’s wasteful to send a super high quality image to a device incapable of rendering it at the original quality. To waste less bandwidth, a codec is used to compress images and preserve network bandwidth as much as possible (the cost being that the bigger an image, the more CPU is required to compress it).
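
In a WebRTC browser context, one way to respect that bandwidth cost is to cap the sender’s encoder bitrate. A small sketch follows; the 500 kbps figure is an arbitrary example:

    // Sketch: cap the outgoing video bitrate so we don't saturate the network pipe.
    // The 500 kbps cap is an arbitrary example value.
    async function capVideoBitrate(pc: RTCPeerConnection, maxBitrateBps = 500_000) {
      const sender = pc.getSenders().find((s) => s.track && s.track.kind === "video");
      if (!sender) return;
      const params = sender.getParameters();
      if (!params.encodings || params.encodings.length === 0) params.encodings = [{}];
      params.encodings[0].maxBitrate = maxBitrateBps; // encoder compresses harder to stay under this
      await sender.setParameters(params);
    }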

As a general rule…

  • A source should never send video images to the remote video sink at a higher quality than the receiver is capable of rendering, or images that end up being discarded, as this wastes bandwidth as well as CPU processing power. For example, do not send HD video to a device only capable of displaying SD quality.

Too bad this general rule has to have an exception. There are cases, although rare, where the video cannot be scaled down before sending, and this exception cannot be ignored. Some servers offer pre-recorded video and do not scale it at the source because doing so would require expensive hardware processing power to transcode the recorded video. Likewise, a simple device might be too underpowered, or too hard-wired to its output format, to scale the video appropriately for the remote video sink.

The question becomes which end (source or sink) manipulates the video? And then there is the question of what each side needs to know in order to get the video into the correct format for the other side.

I can offer a few suggestions that will help. Again, as to the general rules (a short sketch tying them together follows the list):

  • A source should always attempt to send what a video sink expects and nothing more
  • A source should never attempt to stretch the source image larger than the original source image’s dimensions.
  • If the source is incapable of fully adjusting the dimensions for the video sink, it should do so as much as it can, and the video sink must then finish the job of adjusting the image before final rendering.
  • The source must understand that the video sink can change dimensions and aspect ratio at a moment’s notice. As such, there must be a set of active “current” video properties of the video sink that the source is aware of at all times.
  • The “current” properties include the active width and height of the video sink (or the maximum width and height, should the area be automatically adjustable). The area needs to be flagged as safe for letterboxing/pillarboxing or not. If the area cannot accept letterboxing or pillarboxing then the image must ultimately be adjusted to fill the rendered output area, in which case the source could and should pre-crop the image before sending, knowing the final dimensions used.
  • The source needs to know the maximum possible resolution the output video sink is capable of producing to not waste its own CPU opening a camera at a higher resolution than will ever be possible to render (e.g. an iPad sending to an iPhone device). Unfortunately, this needs to be a list of maximum rendered output dimensions as a device might have multiple combinations (such as an iPhone device suddenly turned on its side).
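
Here is the sketch referred to above, tying those rules together: given the sink’s “current” properties, pick the dimensions the source should actually capture and send. The SinkProps shape is an illustration of the idea, not a defined protocol message:

    // Sketch: given the sink's "current" properties, pick the dimensions
    // the source should actually capture and send.
    // The SinkProps shape is an illustration, not a defined protocol message.
    interface SinkProps {
      width: number;          // current (or maximum) renderable width
      height: number;         // current (or maximum) renderable height
      letterboxSafe: boolean; // may the sink pad with black bars?
    }

    function chooseSendSize(srcW: number, srcH: number, sink: SinkProps) {
      let w = srcW;
      let h = srcH;
      if (!sink.letterboxSafe) {
        // The sink cannot pad with bars, so pre-crop the source to the sink's aspect ratio.
        const sinkAspect = sink.width / sink.height;
        if (w / h > sinkAspect) w = Math.round(h * sinkAspect); // trim the sides
        else h = Math.round(w / sinkAspect);                    // trim top and bottom
      }
      // Downscale to what the sink can render, but never upscale past the source.
      const scale = Math.min(1, sink.width / w, sink.height / h);
      return { width: Math.round(w * scale), height: Math.round(h * scale) };
    }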

I’m skeptical that a reciprocal minimum resolution is ever needed (or even possible). For example, an area may be deemed letterbox/pillarbox unsafe and the image may simply be too small to fit a minimum dimension (and thus would have to be stretched upon rendering). In the TV world, an image is simply stretched to fit upon output (typically while maintaining aspect ratio). Yes, a stretched image can become pixelated and that sucks, but there are smoothing algorithms that do a reasonable job within reasonable limitations. People playing DVDs on Blu-ray players with HD TVs are familiar with such processes, which magically scale the DVD video image to the HD TV output size. Perhaps a “one pixel by one pixel” source connected to an HD (1920×1080) output would be the extreme case of unacceptable, but what would anyone expect in such a circumstance? That’s like hooking up an Atari 2600 to an HD TV. There’s only so much that can be done to smooth out the image, as the source image quality just isn’t available. But that doesn’t mean the image shouldn’t be displayed at all!

Another special case happens when a source cannot be scaled down before transmission for whatever reason and the receiving video sink is incapable of scaling it down further to display (due to bandwidth or CPU limitations on the device). The CPU limitation might be known in advance, but the bandwidth might not. In theory the sink could report failures to the source and cause a scale-back in frame rate (i.e. cause the sender to send fewer images rather than smaller images). If CPU and bandwidth conditions are known in advance, then a maximum acceptable dimension and bandwidth could be declared by the video sink, and a source that cannot adjust its dimensions would simply be unable to connect.
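
A sketch of that frame-rate fallback on the sending side, assuming the source is a live MediaStreamTrack; the 15 fps value is an arbitrary example:

    // Sketch: when the sink reports it cannot keep up, send fewer frames
    // rather than smaller frames by lowering the capture frame rate.
    // The 15 fps fallback is an arbitrary example value.
    async function reduceFrameRate(track: MediaStreamTrack, fps = 15) {
      if (track.kind !== "video") return;
      await track.applyConstraints({ frameRate: { max: fps } });
    }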

Aside from the difficulties in building good RTC video technology, those involved in RTCWEB / WebRTC have yet to agree on which codecs are Mandatory to Implement (MTI), which isn’t helping things at all. Since MTI Video is on the agenda for IETF 86 in Orlando maybe we will see it happen soon. If there is a decision (that’s a big IF), what is likely to happen is that there will be two or more MTI video codecs, which means we will need to support codec swapping and all the heavy lifting related thereto.

I have not even touched on the IPR issues around real-time video, but if patents around video were the only problem, perhaps RTCWEB would be ready by now. The truth is that video patents are not likely to be the biggest concern that needs to be addressed when it comes to real time video. It’s just that “doing it right” in a browser, using JavaScript, on various devices… is rather complex.

Let's not build WebRTC apps in silos

People are talking about how WebRTC could in fact create more silos in communication than it potentially tears down. The fact that this video codec debate may never be resolved is not really the biggest issue; video codecs are not that easy to come by, so as developers it’s likely we will all implement the most common and accessible codecs out there, including VP8 and H.264. That is certainly the approach we are taking @hookflash.
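
In current browsers an app can at least ask which codecs the local end advertises before negotiating. A small sketch, assuming a browser that exposes RTCRtpSender.getCapabilities:

    // Sketch: ask which video codecs this browser can send, so an app can
    // decide whether VP8, H.264 or both are available before negotiating.
    function supportedVideoCodecs(): string[] {
      const caps = RTCRtpSender.getCapabilities("video");
      return caps ? caps.codecs.map((c) => c.mimeType) : []; // e.g. "video/VP8", "video/H264"
    }

    const hasVP8 = supportedVideoCodecs().includes("video/VP8");
    const hasH264 = supportedVideoCodecs().includes("video/H264");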

The more glaring issue, it seems, could in fact center around the lack of a defined signalling protocol on the wire. Currently developers are left to their own devices (no pun intended) when identifying and signalling between endpoints in their interpretation of WebRTC. Which begs the question, “How does one implementation of WebRTC communicate with another implementation of WebRTC?”

There are plenty of answers, most of them including “http, oauth, etc.”, which in itself is great: let the developers decide; after all, it’s their app! Some more telephony-centric developers will gravitate towards a SIP or Jingle implementation. But what about those who want to federate with other P2P-centric WebRTC offers out there and still maintain some sort of interoperability?

Tsahi Levent-Levi says…

I’ve been working for over a decade with SIP and H.323 – developing interoperable SDK solutions for the rest of the industry. At the end of the day, none of it mattered:

  • We ended up as an industry with single vendor deployments for enterprises
  • Interoperability was only skin-deep. The moment you wanted to do something real (security, collaboration, video), it just didn’t work
  • Extending communication beyond the boundaries of the organization was impossible without PSTN

To me this seems awfully close to what you can achieve with WebRTC with two minor differences:

  1. WebRTC takes that for granted and makes a real statement of it: there is no signaling – do whatever it is you feel like
  2. It provides a common API with a common delivery platform (the browser)

As it stands today, there is nothing that fills that gap, but that is changing quickly. “Open Peer” is being positioned as a P2P signalling protocol on the wire for WebRTC with full control over Voice, Video, Messaging and Identities: local & social. As a founder @hookflash (creators of Open Peer), I may be somewhat biased (and sometimes I have a big mouth) but if you are building for WebRTC you really do owe it to yourself to check out Open Peer: http://openpeer.org and the Open Peer SDKs on Github.
