War for WebRTC talent?

In the small world of the innovative software industry (Bay Area fever), VC money is not hard to come by. There is a lot of money around, a LOT: around 23% of all the VC money in the USA ($47+ billion) went to SF companies last year.

The value is not so much in the capital itself as in the competence of the people who will take that capital and turn it into a multiple of its original financial value. Capital has a value the day you get it; the competence of the team has a value for the future of the company.


WebRTC 1.0 Finally!

For me, the most interesting part is of course the feeling that we are getting there: the WebRTC 1.0 specs are quasi-final. Everybody agrees on what needs to be done, everybody agrees on how, and only a few minor points are left to be validated by other working groups, without those decisions delaying WebRTC 1.0. The groups will now focus on pushing this through the standards process, and on preparing WebRTC NV.


Serious WebRTC Heat in Asia This Month


Most of the WebRTC action takes place in the USA. Meetups, development, big events, funding: all point to the US of A, with a couple of companies fighting to make it happen further north (hi to my friends at Hookflash, Priologic, …). Most recently, the excellent IIT Real-Time Communications conference, organized simultaneously with TADHack mini, put the center of gravity of the WebRTC ecosystem in Chicago.

Regularly, the center of attention of the WebRTC community shifts to Europe, usually to London, or to the annual December WebRTC conference in Paris.

More rarely is Asia the center of interest, despite the efforts of Silvia Pfeiffer and other WebRTC enthusiasts in Sydney, or the tenacity of the Japanese WebRTC hackers in Tokyo, each maintaining their own WebRTC meetup.

Just like an eclipse (and almost as rare), a great collision between the standards bodies' meeting-location rotation rules, meetup dates, and the interests of big commercial event sponsors is required to make Asia the center of interest. Guess what: this is the month of the great eclipse!


All my Japanese friends are excited: for almost two weeks non-stop, Japan will be the WebRTC happening place:

  • On October 23rd, for the 18th edition of their HTML5 study meeting, TechBuzz Japan will focus on the latest version of JS (ES6) and on WebRTC for real-time communication (in Japanese).
  • The last week of October (26~30), the W3C TPAC meeting, the most important W3C meeting of the year, will take place in Sapporo, Japan. Half price for new attendees who will also attend the IETF meeting the following week! It should finally give birth to the much-awaited WebRTC 1.0 specs during the dedicated WebRTC sessions on October 29th and 30th, which, with a little bit of luck, could include simulcast (in English, mostly).
  • On the first day (October 26th), there will be a W3C developer meetup in Sapporo (here).
  • The original Tokyo WebRTC meetup #10 on October 29th (in Japanese) HAS BEEN REPLACED by the November 4th event.
  • After a weekend spent recovering from the intense discussions, abstract APIs, and abuse of sushi (or spent enjoying crazy Japanese Halloween parties), the 94th IETF meeting will take place in Yokohama, Japan. A lot of goodness as usual in a packed week, with the rtcweb working group sessions on the 3rd and the 5th (in English, mostly).
  • Co-located with the IETF meeting, but open to all, a special event will take place on the 4th where English speakers will present to the local devs. Google will present its roadmap (an encore of the Cranky Geek show presentation), Citrix will provide feedback on their WebRTC usage, and more (in English, with Japanese translators) (here).


My Chinese friends are even more excited, as for the first time the WebRTC Conference & Expo is going to Beijing, on November 10th and 11th! It is more business-oriented than the previous editions, with a specific Chinese-speaking track and a WebRTC university track (run by the usual Dan and Alan, co-authors of the reference WebRTC book used in courses at US universities, and the most WebRTC-knowledgeable duo in the industry).

The conference will take place in the HaiDian district (electronic component shopping, anyone?), more specifically at ZhongGuanCun, which is not only the Silicon Valley of Beijing (the top 1 and top 2 universities of China, plus Oracle, Google, … centers), but also a few blocks away from the main Korea town / student area of WuDaoKou, with all the nice cheap restaurants, karaoke, and clubs. Get your foreign food fix at Lush while watching the crossing, then go down two floors to get into one of the main Beijing clubs (only in Beijing can a club be named “Propaganda”). If the nightlife scene is not for you, you are a couple of bucks away by taxi from the Summer Palace, which is also a must-see, during the day.


My Singaporean friends are ecstatic, as JSConf.asia is coming back to Singapore this year (November 19th and 20th), with a multitude of side parties and events and great speakers, all visible under the Dev Fest Asia umbrella, stretching the web devs' party from November 12th to 22nd. Beyond Thomas Gorissen, the organizer (and consultant for skylink.io), himself, many other WebRTC/WebAudio talks are popping up through the agenda, like the one from Mozilla's Paul Adenot. More about the speakers on the event blog.

Pick your event and come join the party: WebRTC heat is hitting Asia this month.

Creative Commons License
This work by Dr. Alexandre Gouaillard is licensed under a Creative Commons Attribution 4.0 International License.

This blog is not about any commercial product or company, even if some might be mentioned or be the object of a post in the context of their usage of the technology. Most of the opinions expressed here are those of the author, and not of any corporate or organizational affiliation.

Why are you so mean? WebRTC NV / ORTC APIs are too hard!

I. Introduction

I saw a lot of reactions from (small) WebRTC solution vendors to the ORTC announcement, saying that the new API, whether the WebRTC NV one or the ORTC one (they share a common inspiration after all), was too complicated. People started wondering why the standards committee / MS was doing that, and why the Australian government was renaming their research centers “Data61” (here). Would that be because Area 61 actually exists, but in Australia? Was the NSA involved? … and I stopped reading the conspiracy theories at that point 🙂

II. Standard Committees: Cathedral or Bazaar?

(For those too young to know which book I’m referring to, I recommend finding yourself a copy and reading it.)

The W3C and the IETF are open consortia. Anybody can join and participate. Joining an IETF mailing list actually makes you a member! As far as the W3C is concerned, a membership fee is involved, but small start-ups pay only a couple of thousand US dollars for the first two years. These are the two entities involved in specifying the core of WebRTC. Others are also worth mentioning, like the IMTC or 3GPP, which can be interesting depending on your use of WebRTC (interoperability between VoIP and WebRTC, and mobile, respectively). In the case of the W3C and the IETF, the mailing lists are public and not limited to members, so anybody can go there to ask questions, provide feedback, and interact in any way with the members who will eventually make the decisions. That feedback from users of the technology is very important for us to make the right decisions, and I encourage everybody to go there and exchange.

III. Use case and feedback

Like any other software, defining a JS API for the browser is about defining the right use case. Whatever you define will in turn shape the API surface. In the case of WebRTC, the original use case was a 1:1 call with audio and video, and that use case was implemented as appRTC. For a very long time, appRTC was the reference for bug reports, tests, interoperability between browsers, etc. In turn, the PeerConnection API has been tailored to make that use case dead simple. Most of the underlying machinery (all of ICE, the encryption, the codecs, …) was hidden within the browser, and most of the parameters were hidden in the SDP. It made writing a webpage that could do a video call a 10-line homework assignment a student could do.
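That "10-line homework" can be sketched roughly as follows, using today's promise-based browser API. This is a minimal caller-side sketch, not the appRTC code itself; the signaling transport (`sendToPeer` here) is a hypothetical callback, since WebRTC deliberately does not specify signaling:

```javascript
// Minimal 1:1 call, caller side. `sendToPeer` is an assumed application-level
// signaling function (WebSocket, XHR, ...), NOT part of the WebRTC API.
function startCall(sendToPeer) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Trickle ICE: forward each candidate to the remote peer as it is found.
  pc.onicecandidate = (e) => { if (e.candidate) sendToPeer({ candidate: e.candidate }); };

  // Grab camera + mic, attach the tracks, and send an SDP offer.
  navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then((stream) => {
      stream.getTracks().forEach((track) => pc.addTrack(track, stream));
      return pc.createOffer();
    })
    .then((offer) => pc.setLocalDescription(offer))
    .then(() => sendToPeer({ sdp: pc.localDescription }));

  return pc;
}
```

The callee side is symmetric (`createAnswer` instead of `createOffer`). Everything else, ICE, DTLS, codec negotiation, lives inside the browser and inside the SDP blob.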

Use cases evolve as one's understanding of a technology improves, as reflected by the corresponding document that is used as an informational reference for both WebRTC and RTCWeb. Looking at the document tracker, you can see that no fewer than 16 revisions existed before it stabilized early this year. If your use case is not in that list, it is very likely that WebRTC 1.0 (due sometime around Christmas 2015, if we're all good boys/girls) will not support it. However, you can voice your need and try to have your use case taken into account for the next version of WebRTC (no, not WebRTC 1.1; no, not WebRTC 2.0; no, not ORTC; just … WebRTC NV, for Next Version).

Some thought that the 1:1 use case was too simple: PeerConnection would be too big a black box, and shoehorning all the parameters into an SDP blob was just adding complexity and dependencies. A vote happened, a decision was made: PeerConnection was here to stay. Those in disagreement created a Community Group, with no standardization power, named ORTC, to prepare what could become the base for specifications the day people wanted to do things differently, if ever.

IV. WTF happened with all those new APIs you’re throwing at us?

As usual on the web, people use APIs in ways they were not designed for, and it’s awesome. When they do, things break, and/or we get feedback about things that are not working because people assumed the API worked differently than it actually does (or because different browsers implement it in slightly different ways). The bug-or-feature discussion happens next, we take note, and we put it on the agenda for the next meeting if enough people are interested. This time, we were facing very clear and convergent cases.

1:1 is bo~ring!

First, 1:1 is boring, and most people expect multiparty calls, simulcast, or even smarter simulcast using SVC codecs (H.264, VP9, …). There are slight differences between these, and the order in which they have been mentioned is not random.

Supporting multiparty calls means having the capacity for several people to join the same conversation, whether in P2P or not. While you can do that with multiple peer connections, the underlying assumption is that you want to do it with a single peer connection, to leverage stream synchronization, common bandwidth adaptation, port optimizations, … The problem here is more about how to signal this case between browsers, and it gave birth to the infamous Plan B and Unified Plan. The former has been implemented in Chrome for a very long time; the latter is the official spec, but is only fully implemented in Firefox today. Those media streams can be completely independent, i.e. they can come from different sources.

Simulcast is about sending a media stream from a single source at several different resolutions. The main usage is to choose which resolution to use depending on external factors like the resolution of the remote peer's screen, the bandwidth of the remote peer, … While you could implement simulcast using a multiparty implementation as above, you would lose the information about the relation between the media streams, namely that they all come from the same source and that one is a decimated/scaled-down version of another. The multiparty implementation would treat all streams equally, and in bad network conditions it would reduce the resolution of all the streams, defeating the purpose. Simulcast usually comes with a smart bandwidth adaptation algorithm that knows it needs to keep the lowest-resolution stream untouched, and adapts the highest-resolution stream first when bandwidth goes down. Simulcast is most important in use cases that involve a media server. In simulcast, the media streams come from the same source, but are independent in the sense that each can be decoded and rendered/played separately.
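The adaptation rule described above, shed the highest-resolution layer first and never touch the lowest one, can be sketched as toy logic. The layer names and bitrates below are illustrative assumptions, not values from any spec:

```javascript
// Hypothetical simulcast layers, highest resolution first.
// Bitrates (kbps) are made-up illustrative numbers.
const layers = [
  { rid: 'hi',  width: 1280, height: 720, kbps: 1500 },
  { rid: 'mid', width: 640,  height: 360, kbps: 500 },
  { rid: 'lo',  width: 320,  height: 180, kbps: 150 },
];

// Decide which layers to keep sending under a bandwidth budget:
// drop the highest-resolution layers first, never the lowest one.
function adaptLayers(layers, budgetKbps) {
  const kept = layers.slice();
  while (kept.length > 1 &&
         kept.reduce((sum, l) => sum + l.kbps, 0) > budgetKbps) {
    kept.shift(); // shed the current highest-resolution layer
  }
  return kept;
}
```

With a 700 kbps budget, `adaptLayers` drops only the `hi` layer; even with a budget below 150 kbps, the `lo` layer survives, which is exactly the behavior a multiparty implementation treating all streams equally would not give you.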

SVC codecs allow for yet another level of greatness. SVC encodes the lowest-resolution media stream as a normal stream that can be decoded on its own, and then encodes only the difference between each higher resolution and the base resolution in subsequent streams. The advantages here are multiple: lower bandwidth (low-frequency information is not duplicated across streams), better resilience, … SVC codecs are especially useful in cases that involve a media server. In this case, the media streams come from the same source and are NOT independent, except for the lowest-resolution stream: each subsequent stream needs all the lower-resolution streams to be available in order to be rendered/played.
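That dependency chain is the key difference from simulcast, and can be captured by a toy check (purely illustrative, not a real decoder API): an SVC layer is playable only if every layer below it was received.

```javascript
// `received` is an array of booleans, index 0 being the base (lowest) layer.
// Returns the indices of the layers that can actually be decoded: the chain
// stops at the first missing layer, because each layer is only a "difference"
// on top of the ones below it.
function decodableLayers(received) {
  const playable = [];
  for (let i = 0; i < received.length; i++) {
    if (!received[i]) break; // a missing layer breaks the dependency chain
    playable.push(i);
  }
  return playable;
}
```

Losing the base layer makes everything undecodable, while losing only a top layer just caps the resolution, which is why smart bandwidth adaptation protects the base layer first.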

People are jumping the fence because they have unanswered needs

People today modify the SDP on the fly to get access to properties or capacities of PeerConnection-internal objects, or to set those properties or parameters. Several underlying objects are modified this way: the ICE agent, the encryption (DTLS), the codec choice, the bandwidth, …

If the use case is valid (and more often than not it is), adding a JS API that does what people were doing by manipulating the SDP is the right thing to do. We slowly replace an opaque, unspecified API with a specified JS API using JSON objects. It does not give developers more work, since they were doing it already, though they will have to take the opportunity to refactor and clean their code.
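To make concrete what "manipulating the SDP" means, here is a sketch of a typical munge: capping the video bandwidth by splicing a `b=AS:` line into the blob before passing it to setLocalDescription. The function below is an illustration of the fragile string surgery being replaced, not part of any API:

```javascript
// Cap the video bandwidth of an SDP blob by inserting "b=AS:<kbps>" into the
// video media section (after its "c=" line, per SDP layout), replacing any
// existing cap. This is the opaque, unspecified "API" people use today.
function capVideoBandwidth(sdp, kbps) {
  const out = [];
  let inVideo = false;
  for (const line of sdp.split('\r\n')) {
    if (line.startsWith('m=')) inVideo = line.startsWith('m=video');
    if (inVideo && line.startsWith('b=AS:')) continue; // drop any existing cap
    out.push(line);
    if (inVideo && line.startsWith('c=')) out.push('b=AS:' + kbps);
  }
  return out.join('\r\n');
}
```

A specified JS API achieves the same effect with a plain object instead of string surgery, which is exactly the kind of replacement being folded into the spec.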

V. Here is why

It so happens that some of the APIs proposed by the ORTC group answer both the multiparty/simulcast/SVC problems and the SDP munging problems. They are being slowly integrated into the WebRTC specification when and where they make sense (except by Microsoft, which just implements it all its own way and dumps it on an unsuspecting audience). The time needed to bring them into the WebRTC 1.0 specs was shortened by the fact that they had been thought about for quite some time, and overlapping members had worked on both WebRTC and ORTC and could bridge the gap.

Most of the new APIs you saw coming out of the last meeting simply provide a proper way to achieve what people were already trying to achieve by manipulating the SDP, *AND* could be integrated before the end of the year so as not to push WebRTC 1.0 back further. The other changes are related to paving the way for simulcast, but I already spoke about that in a previous post.

Because the APIs are more granular instead of being tailored to the 1:1 case, writing the 1:1 case with them looks overly complicated in contrast. I do not believe this to be a real problem, as it is always easy to go from granular to simple. Within a few weeks, you will have WebRTC-on-ORTC shims, and your website will work exactly the same (as long as you don’t need video), or you can keep ignoring Edge altogether. There are quite a few things that are overly complicated to do in WebRTC today that will be easily doable with the new APIs. No regression in any case, just possible improvements. I expect the same to happen with the latest additions to the WebRTC 1.0 API set. Eventually, WebRTC and ORTC should also converge.

I hope this post shed some light on the decision process followed by the W3C. At its core are feedback from users and timeline considerations. So, once again: if you have a use case or a question, voice it on the W3C’s public-webrtc mailing list (not the discuss-webrtc mailing list).

