Wowza’s marketing at work again. #webrtc.

Let me be honest: I really dislike marketing in general. A public researcher by training, I look for the truth, for reproducible results, and for claims backed by data and processes I can access myself to reproduce the results and the conclusions of the analysis. Marketing is very often the opposite of the search for truth: it aims at making the market buy your product(s) and service(s). Very often the end justifies the means, and FUD, deception, and/or unsubstantiated self-serving claims become the norm.
Let’s be clear: if you don’t have a killer feature, you might as well pretend that either a/ you have it, b/ it is not something important, or c/ what you have is so much better. And, as all lawyers know, if you can’t argue the facts, go after the credibility of those bringing them to light. Microsoft made this approach famous and even gave it a name: Fear, Uncertainty and Doubt, or FUD.
When I read some Wowza marketing pieces I cannot help noticing a strange correlation with those techniques, and it’s frankly upsetting. We just gave a “Fact checking” session about WebRTC at Live Streaming West in New York. I think it’s time to do it again here, so that people have access to enough verifiable information to make up their own minds.

Continue reading

Video CDN and Satyagraha: admitting defeat in the race to perfect live streaming.

Satyagraha (Sanskrit and Hindi: “holding onto truth”) was the philosophy championed by Gandhi when confronting British imperialism: non-violence, self-scrutiny, and the search for truth. It led to quotes like “First they ignore you, then they laugh at you, then they fight you, then you win”, and to simpler sayings like “believe what they do, not what they say”, which should be applied to any marketing material, really.

For the past two years, all CDNs and most streaming server vendors have been trying to downplay the disruptive impact that WebRTC would have on their business. It wouldn’t scale. There would be no market for it. It would not be needed. That has now changed.

Continue reading

KITE 2.0 – The best WebRTC testing engine now with more free content

WebRTC testing through KITE is getting more and more popular. As we update the CoSMo website, KITE is also receiving a lot of new features in this 2.0 release. Still the only end-to-end and load testing engine on the market that can be run on-premises, and that supports web apps (desktop and mobile browsers) as well as native apps, KITE has been leading the market in client application support and in price, but was still slightly behind in terms of usability. It was an expert tool made for experts. This new release adds many free features, as well as many new commercial options for automated load testing on top of your own AWS account, at minimum cost.

Continue reading

WebRTC 1.0 Simulcast vs ABR

In a public comment on Millicast’s recent post about simulcast, Chris Allen, CEO of Infrared5, mentioned that they have supported ABR with WebRTC in their Red5 Pro product for a long time. While his claim is valid, and many in the streaming industry use a variation of what they do, there are two very important distinctions that need to be made between ABR and simulcast. We touched on the latency distinction quickly in our presentation at Streaming Media West last year, possibly too quickly, and we never really explained the distinction regarding end-to-end encryption, so we thought we should dedicate a full post to it this time around. WebRTC with simulcast is the only way to achieve the lowest possible latency, and real end-to-end security, with more flexibility than DRM can provide.
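To make the simulcast side concrete, here is a toy sketch (purely illustrative; the layer names and bitrates are hypothetical, and this is not any vendor’s actual code) of what an SFU does with simulcast: it forwards, per viewer, the highest encoder-produced layer that fits the measured downlink. Because the server only selects among already-encoded streams and never transcodes, the media can stay encrypted end to end, which is the crux of the distinction with server-side ABR.

```python
# Hypothetical simulcast layers: (rid, bitrate in kbps), highest quality first.
LAYERS = [("f", 2500), ("h", 1000), ("q", 300)]

def select_layer(downlink_kbps, layers=LAYERS):
    """Return the rid of the highest layer fitting the viewer's bandwidth.

    The SFU never decodes or re-encodes: it just forwards one of the
    sender's encoded (and possibly end-to-end encrypted) streams.
    """
    for rid, kbps in layers:
        if kbps <= downlink_kbps:
            return rid
    # Below the lowest layer: forward the smallest one and let
    # congestion control catch up, as a real SFU typically would.
    return layers[-1][0]
```

For example, a viewer with a 1200 kbps downlink gets the “h” layer, while one with 200 kbps gets “q”. An ABR transcoder would instead decode and re-encode the sender’s stream, which adds latency and forces the server to see the cleartext media.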

Continue reading

WebRTC 1.0 Simulcast Hackathon @ IETF 104

Finally finding the time to go back to the roots of this blog: the WebRTC standard. In this post we will revisit what simulcast is and why you want it (the product view), before going into the implementation status scorecards, for both browser vendors and the open-source SFUs out there, that could be established after a fast and furious hackathon in Prague two weeks ago. WebRTC hackathons before IETF meetings are becoming a tradition, and as more and more people come together to participate, the value for any developer of joining keeps growing. We hope to see even more participants next time (free food, no registration fee, all technical questions answered by the world’s experts: what else could you ask for?)

Continue reading

HOT ‘n’ EASY WebRTC FUZZ

by invited guest blogger: Sergio Garcia Murillo


There has been quite some buzz (and fuzz) in the WebRTC world recently, due to a series of great posts from Natalie Silvanovich of Project Zero in which she demonstrated the power of end-to-end fuzzing to find vulnerabilities and bugs in WebRTC stacks and applications, including browsers, FaceTime, WhatsApp….

Inspired by the above wonderful but admittedly complicated work, we decided to investigate how we could provide a much easier-to-use fuzzer to stress-test all WebRTC apps with. Combined with the automation and interoperability testing capacities of KITE, it would become the perfect stress tool for the entire WebRTC community.
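For readers who have never seen a fuzzer up close, the core mutation step can be sketched in a few lines. This is a deliberately naive illustration, not the fuzzer described in the post: take a valid input, such as an SDP blob, flip a few random bytes, and hand the result to the parser under test to see whether it misbehaves.

```python
import random

def mutate_sdp(sdp: str, n_flips: int = 8, seed: int = 0) -> str:
    """Randomly corrupt bytes in an SDP blob: the crudest form of the
    mutation step a fuzzer performs before feeding the result to the
    SDP parser under test."""
    rng = random.Random(seed)
    data = bytearray(sdp, "utf-8")
    for _ in range(n_flips):
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)
    # Invalid byte sequences are kept (replaced) rather than rejected:
    # malformed input is exactly what we want to throw at the parser.
    return data.decode("utf-8", errors="replace")
```

A real end-to-end fuzzer is far more sophisticated (coverage guidance, structure-aware mutations, crash triage), but every one of them contains a loop much like this at its heart.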

Continue reading

QUIC is the future of #WebRTC ….. or is it?

QUIC has been the source of most of the heat at internet meetings for at least the past two years, after being more or less restricted to a smaller group since its existence became known in 2013. There is a reason for that: as a transport protocol, QUIC takes the best of TCP and the best of UDP, adds encryption for security and multiplexing for speed, and brings additional improvements to make sure deployment will be fast on top of existing equipment, and that updates (in user land) will come faster than before (in the kernel). For those interested in a first overview (possibly slightly biased, as indicated in the respective sections), Wikipedia is always a good start. For the hardcore fans, the corresponding IETF working group and mailing lists will satisfy all your desires for detailed information. The latest drafts (#16) are hot off the press, as IETF 103 is ongoing in Bangkok this week.

Continue reading

Libwebrtc is open source, how hard can it be?

Recent discussions with several parties made me realise that the first steps to mastering WebRTC are not yet documented well enough. Once, as a challenge, I asked a master’s student doing his graduation project with us to try to recompile an example provided with libwebrtc separately from the compilation of libwebrtc itself: basically, to make a project with the same example source code, but one that would link against the pre-compiled library. Five months later, it was still not successful. Thanks to our internal tool, which I then shared, we eventually did it in two weeks. This post is about the usual ordeal people have to go through, to give a sense of the state of affairs, and of course about how CoSMo can shield you from it.

Continue reading

#WebRTC Video Quality Assessment

How do you make sure the quality of a [WebRTC] video call or video stream is good? One can take all possible metrics from the statistics API and still be nowhere closer to the answer. The reasons are simple. First, most of the reported statistics are about the network, not about video quality. Then, it is known (and people who have tried also know) that while those metrics influence the perceived quality of the call, they do not correlate with it directly, which means you cannot guess or compute the video quality from them. Finally, the quality of a call is a very subjective matter, and subjective measures are difficult for computers to compute directly.

In a controlled environment, e.g. in the lab or while doing unit testing, people can use reference metrics for video quality assessment: you tag a frame with an ID on the sender side, capture the frames on the receiving side, match the IDs (to compensate for jitter, delay, or other network-induced problems), and measure some kind of difference between the two images. Google has full-stack tests that do just that for many combinations of codecs and network impairments, run as part of their unit test suite.
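As an illustration of such a full-reference measurement, here is a minimal sketch, assuming frames are available as (ID, pixel-buffer) pairs; real tools operate on decoded YUV or RGB frames, but the matching-then-scoring logic is the same. PSNR is used here as the simplest possible difference metric.

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel buffers."""
    if len(ref) != len(test):
        raise ValueError("frames must have the same size")
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)

def score_call(sent, received):
    """Match received frames to sent ones by their embedded ID, then average
    PSNR over the matches; frames lost to jitter or drops simply go unmatched."""
    sent_by_id = dict(sent)
    scores = [psnr(sent_by_id[fid], pixels)
              for fid, pixels in received if fid in sent_by_id]
    return sum(scores) / len(scores) if scores else None
```

This is exactly why the technique is confined to the lab: it requires the pristine sender-side frames as a reference, which, as discussed below, a service provider cannot access in production.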

But how to do it in production and in real-time?

For most of the WebRTC PaaS use cases (Use Case A), the reference frame is not available (it would be illegal for the service provider to access the customer’s content in any way). Granted, the user of the service could record the stream on the sender side and on the receiving side, and compute a quality score offline. However, this would not allow one to act on, or react to, sudden drops in quality; it would only help for post-mortem analysis. How do you detect and act on quality drops in real time, without extra recording, upload, download, ….?

Which WebRTC PaaS provides the best video quality in my case, or in some specific case? For most, this is a question that cannot be answered. How can I achieve this 4×4 comparison, or this Zoom-versus-WebRTC comparison, in real time, automatically, while instrumenting the network?

CoSMo R&D came up with a new AI-based Video Assessment Tool that achieves such a feat, in conjunction with its KITE testing engine and the corresponding Network Instrumentation Module. The rest of this blog post is unusually sciency, so reader beware.

Continue reading