WebRTC Status Update 2017 Q2

It had been a long time since there was any communication about the WebRTC roadmap, so it is understandable that when Mr. Huib, the new WebRTC PM at Google, made a formal announcement on the discuss-webrtc mailing list, followed up by a tweet from Tech Lead Justin Uberti, everybody got curious. Unfortunately, a lot of what has been written about those announcements and that tweet is … inaccurate. Very recently I was given the opportunity to speak at the Sydney WebRTC meetup and mingle with people from Snapchat, TokBox, Dolby, CoViu, ….. I thought it would be a good opportunity to write down the things I know (and can speak about) with respect to the status of WebRTC and what’s to come.

The slides presented at Sydney are available here. In this blog post, I will follow more or less the same flow.

I. Standards: to make it easy

Standards are developed to make things easy. The deal is simple: if you implement according to the standards, you are ensured interoperability with others.

The W3C standards define the JS APIs browsers should implement, and how those APIs should behave, so that web apps are interoperable. In the web space, interoperable means that your website or web app will work and look the same whatever browser you run it in. Testing web app interoperability is a one-by-one testing affair: each browser is checked on its own against the spec … before WebRTC, at least.

The IETF standards define what is sent “over the wire” so that another WebRTC-compliant client or device can interoperate. In the Internet world, interoperable means that two clients can communicate and exchange understandable information with each other. Testing communication app interoperability is at least a two-by-two testing affair.

This dual meaning of the word interoperable has been causing some confusion even among experts, so I thought I would describe it first.

II. What was the announcement again?

A. Finish The Standards!

The first point is relatively easy: we need to finish the specifications and make them a standard. While most of the spec has been largely stable since December 2015, the process requires getting feedback beyond the original working group, addressing that feedback, getting two independent implementations of the spec, and having a comprehensive enough test suite.

As a W3C Invited Expert, and quite neutral in the group, I have been tasked since the meeting in Washington D.C. in May 2014 with reporting on implementation status and test status. Here is the status of things as of the Technical Plenary meeting, last October in Lisbon.

Basically, most of the work is done from the working group’s point of view, and feedback from outside the group has already been requested for the Media Capture and Streams specification (getUserMedia). WebRTC was slightly behind, but is also reaching CR (Candidate Recommendation) stage as we speak.

The reason things were slower than expected is, among other things, the relatively slow pace of implementations and of the test suite, which in turn resulted in less feedback on the specs. Let’s dive into the implementations now; a following chapter will dive into the tests.

In brief, the work left on the specification text itself is pretty small by now, and is mainly reduced to addressing the small things that the tests or browser implementors find while implementing it.

B. Finish The Implementation

This is the part of the message which triggered the most reaction on the mailing list, understandably. Obviously, people on the discuss-webrtc mailing list are users of the technology, and while a few may be interested in the standard, most of them are pragmatic people who need to deal with implementations on a daily basis.

One of the most sought-after features in WebRTC has been simulcast and layered codecs (SVC).
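To make this concrete, here is a minimal sketch of the rid-based SDP attributes a client uses to advertise simulcast, following the IETF mmusic rid/simulcast drafts. The rid names ("hi", "mid", "lo") are arbitrary examples, not anything the spec mandates:

```javascript
// Sketch: build the SDP attribute lines that advertise rid-based simulcast
// sending (per the IETF mmusic rid/simulcast drafts). One a=rid line per
// layer, plus one a=simulcast line listing the layers separated by ';'.
function simulcastAttributes(rids) {
  const ridLines = rids.map((rid) => `a=rid:${rid} send`);
  const simulcastLine = `a=simulcast:send ${rids.join(';')}`;
  return ridLines.concat([simulcastLine]);
}

console.log(simulcastAttributes(['hi', 'mid', 'lo']).join('\n'));
// a=rid:hi send
// a=rid:mid send
// a=rid:lo send
// a=simulcast:send hi;mid;lo
```

In a real session these lines sit inside the relevant m=video section of the offer, alongside the usual codec and transport attributes.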

Originally, Firefox was more limited than Chrome. Mozilla had chosen to use a SIP VoIP engine (SIPCC, open-sourced by Cisco) as the base for their WebRTC implementation. Alas, it would only support one video stream and one audio stream at a time, at most. Meanwhile, Chrome would support multiple streams, and even implemented early support for simulcast. At that time, there was no standard for how one should signal multiple streams (SSRCs) within a single media description, so Google came up with its own way and called it “Plan B”.

After a complete rewrite, Firefox could manage multiple streams of a given media type. They went straight ahead and implemented the classes needed for simulcast and for hot-swapping of tracks during a call (switching from the front camera to the back camera on a mobile without restarting the call), according to a new spec that had been written in the meantime: “Unified Plan”. At that point in time, Firefox was more spec-compliant than Chrome. The very few projects that were trying to support simulcast across both Chrome and Firefox were challenged. Jitsi provided a shim to help smooth over the differences between Plan B and Unified Plan, but there is no one-to-one mapping between the two, so one needs to make compromises.
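The structural difference between the two plans shows up directly in the SDP: Plan B multiplexes several tracks (SSRCs) into one m=video section, while Unified Plan uses one m-section per track. A rough heuristic sketch of telling them apart (this is illustrative only, not how browsers or the Jitsi shim actually negotiate; `guessSdpPlan` is a made-up name):

```javascript
// Heuristic sketch: guess whether an SDP blob is "Plan B" style (several
// tracks announced via a=ssrc msid lines inside one m=video section) or
// "Unified Plan" style (one m-section per track). Real SDPs carry many
// more attributes; this only inspects m-lines and a=ssrc msid attributes.
function guessSdpPlan(sdp) {
  const videoSections = sdp
    .split(/^(?=m=)/m)                       // cut at each m-line
    .filter((s) => s.startsWith('m=video'));
  for (const section of videoSections) {
    // Distinct track identifiers announced in this single media section:
    const msids = new Set(
      (section.match(/^a=ssrc:\d+ msid:.*$/gm) || [])
        .map((line) => line.split('msid:')[1])
    );
    if (msids.size > 1) return 'plan-b';     // many tracks, one section
  }
  return 'unified-plan';
}
```

Because there is no one-to-one mapping (e.g. Unified Plan's per-m-section direction and mid semantics have no Plan B equivalent), converting between the two is inherently lossy, which is the compromise mentioned above.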

Chrome was then hard-pressed to implement Unified Plan. Unfortunately, the architectural changes needed to implement Unified Plan go well beyond SDP. The way the streams and the corresponding classes are grouped together is intrinsically different. In other words, the implementation of Unified Plan depends on an implementation of not only RTCRtpSender and RTCRtpReceiver, but also RTCRtpTransceiver, which maps those classes to Unified Plan signalling in the SDP.

C. More tests!!

This is maybe the part that was covered the worst in all the blog posts I could read. A lot of clarification is needed here.

W3C JS API tests

As described above, writing a test suite for the JS APIs is a mandatory step in the W3C standardisation process. It goes through the extension of an existing effort, “Test the Web Forward”, also sometimes referred to as the Web Platform Tests (WPT). This is a compliance test suite, meaning it checks whether a browser implements the specification or not. It acts as an interoperability test for browsers (in the web sense) by forcing all browsers to implement the same APIs and behave the same, as far as there is a standard to test against.
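To illustrate what "compliance" means here, a sketch of the kind of API-surface check such a test performs. Real WPT tests run inside browsers on top of testharness.js; this sketch runs the same idea against a plain stub object standing in for a browser's RTCPeerConnection, so it stays runnable anywhere (the function name and the stub are mine, not WPT's):

```javascript
// Sketch: a compliance-style check that an object exposes the members the
// W3C RTCPeerConnection spec mandates. Returns the list of missing members.
function checkPeerConnectionSurface(pc) {
  const required = ['createOffer', 'createAnswer', 'setLocalDescription',
                    'setRemoteDescription', 'addIceCandidate', 'close'];
  return required.filter((name) => typeof pc[name] !== 'function');
}

// Stub standing in for a deliberately incomplete browser implementation:
const stubPc = {
  createOffer() {}, createAnswer() {},
  setLocalDescription() {}, setRemoteDescription() {},
  close() {},
};
console.log(checkPeerConnectionSurface(stubPc)); // [ 'addIceCandidate' ]
```

Because every browser is checked against the same spec-derived assertions, passing the suite forces them toward identical API surfaces, which is exactly the "web sense" of interoperability described above.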

The original announcement was a little bit fuzzy on the subject and could be read as if Google was not only writing WPT (which would then be a new tool), but also as if WPT dealt with browser-to-browser interoperability. Both statements are wrong. Many bloggers fell for that one, likely not being aware of what WPT is.

The status of WebRTC testing in WPT back in October last year was … not ideal.

Google recognised the fact that they needed more testing (not only for WebRTC) and initiated a Chrome-wide testing spike, bringing some external consultants in to help on specific cases. Bocoup, from NYC, has helped a lot, for example (see their great blog post on the subject). As disclosed by Justin Uberti, CoSMo has been chosen to implement the missing WebRTC tests, so you’re getting information directly from the source.

Of course, much more than writing the base tests is going on right now.

All the browser vendors used to keep a copy of the tests, sometimes modified, sometimes with brand-new tests, in their own test suites. Following the lead of Firefox and Apple, Google is now automatically sending their modifications back to the upstream WPT repository.

The WPT is a collection of manual tests. Some scripts and tools are provided to run those tests automatically, like the “WPT Runner”. However, it could only run tests locally, limiting the number of browsers it could test in one run (Edge is Windows 10 only, Safari is macOS only, …). Google started three projects there. First, add Sauce Labs support to the WPT Runner, so it can run the test suite in remote browsers. Second, automate running the tests for each commit using commit bots. Finally, a visualisation tool, the WPT dashboard, is being developed as well.

IETF Protocols Tests

The IETF does not have anything even remotely close to the WPT. Testing is not part of the requirements for a draft to become a standard. While that is not usually a problem for web developers, since the browser handles all the IETF specifications for them, with WebRTC and its PeerConnection API implementing p2p (well, browser-to-browser) communication, some IETF specs need to be tested. Actually, some chapters of the W3C specifications cannot be tested without that either. Here there was a need for interoperability (as in browser-to-browser) testing.

I made an evaluation for the WebRTC working groups of how good or bad we were at it back in 2016, and I think the next three slides just say it all:

The way the announcement is written, it is very likely that Google is implementing such a browser-to-browser interoperability test (a.k.a. protocol testing) in addition to the WPT compliance test suite.

Note that in the entire announcement there was no reference to the adapter.js project. What Google seems to be aiming at is compliance (W3C) and protocol (IETF) testing, while the adapter.js project aims at smoothing over implementation differences. The hope has always been that at some point adapter.js would not be needed anymore, and hopefully we’re getting there, even though I doubt it will ever be fully achieved:

  • FF upper layer is all JS anyway,
  • Safari is not implementing the legacy APIs natively, shimming them in adapter.js instead,
  • Edge’s ORTC ……
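A minimal sketch of the kind of smoothing adapter.js does, here exposing the legacy callback-based getUserMedia on top of the promise-based mediaDevices API. The function name and the stand-in `nav` object are illustrative, not adapter.js’s actual internals:

```javascript
// Sketch: adapter.js-style shim exposing the legacy callback-based
// navigator.getUserMedia on top of the promise-based
// navigator.mediaDevices.getUserMedia. `nav` stands in for the browser's
// navigator object so this runs outside a browser too.
function shimLegacyGetUserMedia(nav) {
  if (nav.getUserMedia || !nav.mediaDevices) return; // nothing to shim
  nav.getUserMedia = function (constraints, onSuccess, onError) {
    nav.mediaDevices.getUserMedia(constraints).then(onSuccess, onError);
  };
}

// Demo with a fake navigator whose mediaDevices resolves a fake stream:
const fakeNav = {
  mediaDevices: { getUserMedia: () => Promise.resolve('fake-stream') },
};
shimLegacyGetUserMedia(fakeNav);
fakeNav.getUserMedia({ video: true },
  (stream) => console.log('got', stream), // eventually logs: got fake-stream
  (err) => console.error(err));
```

Shims like this are exactly what becomes unnecessary once every browser exposes the spec-compliant surface natively.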

D. It is strong, they want it ARMY strong.

The last part of the announcement is straightforward, and has very low impact on existing products or services. Google is just repeating their commitment to making the engine better: better echo cancellation, better jitter buffering, better bandwidth estimation, better mobile support, faster connections, ……..

We should all benefit from those improvements automatically when Chrome updates, without modifications to our code.

III. Conclusion

The announcement from Google is definitely a good one. It provides practical steps for finishing the implementation and making WebRTC as stable as it can be. Not that it is bad today; we have enough examples of very large-scale deployments to know how good it can be, but it was still a slow-moving target.

What they propose to do in the next quarter to half a year (my bet is on it being finished in time for the W3C Technical Plenary meeting in November in California) will likely render some projects, products and services, especially on the testing side of things, obsolete or irrelevant. As such, I expect mixed feedback from bloggers involved in those activities. However, I do think this is good news for developers, and that’s all that matters (to me).


Disclaimer: My company is under contract from Google for some of the items described above. 
