Many questions on the discuss-webrtc mailing list nowadays are about specific configuration flags. It comes up frequently enough to deserve a blog post that provides the basics and tries to reduce the number of people who get stuck.
Build and packaging
I want to support #WebRTC, do I really need to use Google's library?
WebRTC has been a magical word for the better part of the 2010s, starting, if we had to put a date on it, with the 2011 Google I/O presentation. From conversations as early as 2018, and many small signs (dropping support for the official mobile releases in the m80 release notes), it was clear that, for Google at least, the WebRTC star itself was already fading. Still, more people depend on webRTC, or want to adopt it, today than ever. What are the options out there? How should one prepare for WebRTC in 2020?
#WebRTC 101: 1st assignment
Now that we understand the basics of libwebrtc code management, we can start answering otherwise problematic questions. This week I was at CommConUK, discussing the number of contributors to libwebrtc, and pointing my interlocutor to the AUTHORS file to start with. “Less than 100” was the other party's position, and to be honest, I had never checked. So, who would risk a guess as to how many contributors to the webrtc stack there were in the past three years, and more importantly, how to check?
#WebRTC 101: Fetch the source
I first started writing about libwebrtc source management, build, and test systems almost 5 years ago. While the posts are still here, and mostly accurate, people forget, and/or the system has changed just enough that we need to update what is the de-facto reference for libwebrtc. As we are writing a book, with examples and illustrations to be used in classrooms to teach the underlying principles, and in companies e.g. for onboarding engineers, we thought we should put some extracts here.
Libwebrtc is open source, how hard can it be?
Recent discussions with several parties made me realise that the first steps to mastering webrtc are not yet documented enough. Once, as a challenge, I asked a master's student doing his graduation project with us to try to recompile an example provided with libwebrtc separately from the compilation of libwebrtc itself: basically, to make a project with the same example source code, but one that would link against the pre-compiled library. Five months later, it was still not successful. With the internal tool that I shared then, we eventually did it in two weeks. This post is about the usual ordeal people have to go through to understand the state of affairs, and of course how CoSMo can shield you from that.
Why are you so mean? WebRTC NV / ORTC APIs are too hard!
I. Introduction
I saw a lot of reactions to the ORTC announcement by (small) webRTC solution vendors saying that the new API, whether the webRTC NV or the ORTC one (they share a common inspiration after all), was too complicated. People started wondering about the reasons why the standard committee / MS was doing that, and why the Australian Government was renaming their research centers “Data61” (here); would that be because area 61 actually exists but is in Australia? Was the NSA involved? … I stopped reading the conspiracy theories at that point 🙂
II. Standard Committees: Cathedral or Bazaar?
(For those too young to know which book I'm referring to, I recommend finding yourself a copy and reading it.)
The W3C and the IETF are open consortia. Anybody can join and participate. Joining an IETF mailing list actually makes you a member! As far as the W3C is concerned, a membership fee is involved, but small start-ups only pay a couple of thousand US dollars for the first two years. Those are the two entities involved in the specification of the core of webRTC. Others are also worth mentioning, like the IMTC or 3GPP, which are interesting depending on your use of webRTC (interoperability between VoIP and webRTC, and mobile, respectively). In the case of the W3C and IETF, the mailing lists are public and not limited to members, so anybody can go there to ask questions, provide feedback, and interact in any way with the members that will eventually make the decisions. That feedback from users of the technology is very important for us to make the right decisions, and I encourage everybody to go there and exchange.
III. Use case and feedback
Like any other software, defining a JS API for the browser is about defining the right use case. Whatever you define will in turn impact the API surface. In the case of webRTC, the original use case is a 1:1 call with audio and video, and that use case was implemented as appRTC. For a very long time appRTC was the reference for bug reports, tests, interoperability between browsers, etc. In turn, the PeerConnection API has been tailored to make that use case dead simple. Most of the underlying machinery (all of ICE, the encryption, the codecs, …) was hidden within the browsers, and most of the parameters were hidden in SDP. It made writing a webpage that could do a video call a 10-line homework assignment a student could do.
Use cases evolve as one's understanding of a technology improves, as reflected by the corresponding document that is used as an informational reference for both webRTC and RTCweb. Looking at the document tracker, you can see that no less than 16 revisions existed before it was stabilized early this year. If your use case is not in this list, it is very likely that webrtc 1.0 (due sometime around Xmas 2015, if we're all good boys/girls) will not support it. However, you can voice your need and try to have your use case taken into account for the next version of webRTC (no, not webrtc 1.1, no, not webrtc 2.0, no, not ORTC, just ……. webRTC NV, for Next Version).
Some thought that a 1:1 use case was too simple: peer connection would be too big a black box, and shoehorning all parameters into an SDP blob was just adding complexity and dependencies. A vote happened, a decision was made, and PeerConnection was here to stay. The ones in disagreement created a Community Group, with no standardization power, named ORTC, to prepare what could be the base for specifications the day people would want to do things differently, if ever.
IV. WTF happened with all those new API you’re throwing at us?
As usual on the web, people use APIs in ways they were not designed for, and it's awesome. When they do, things break, and/or we get feedback about things that are not working because people assumed an API worked in a different way than it actually does (or different browsers implement it in slightly different ways). The bug-or-feature discussion happens next, we take note, and put it on the agenda for the next meeting if enough people are interested in it. This time, we were facing very clear and convergent cases.
1:1 is bo~ring!
First, 1:1 is boring, and most people are expecting multiparty calls, simulcast, or even smarter simulcast using SVC codecs (H.264, VP9, …). There are slight differences there, and the order in which those have been mentioned is not random.
Supporting multiparty calls means having the capacity for several people to join the same conversation, whether in p2p or not. While you can do that with multiple peer connections, the underlying assumption is that you want to do it with a single peer connection, to leverage synchronization of streams, common bandwidth adaptation, port optimizations, … The problem here is more about how to signal this case between browsers, and it gave birth to the infamous Plan B and Unified Plan. The former was implemented in Chrome for a very long time; the latter is the official spec, but is only fully implemented in Firefox today. Those media streams can be completely independent, i.e. they can come from different sources.
Simulcast is about sending a media stream from a single source using different resolutions. The main usage here is to choose which resolution you are going to use depending on external factors like the resolution of the remote peer's screen, the bandwidth of the remote peer, … While you could implement simulcast using a multiparty implementation as above, you would be losing the information about the relation between the media streams, namely that they all come from the same source, and that one is a decimated/scaled-down version of the other. The multiparty implementation would treat all streams equally, and in bad network conditions would reduce the resolution of all the streams, defeating the purpose. Simulcast usually comes with a smart bandwidth adaptation algorithm that knows it needs to keep the lower resolution stream untouched, and adapt the highest resolution stream first when bandwidth goes down. Simulcast is most important in use cases that involve a media server. In simulcast, the media streams come from the same source, but are independent in the sense that they can each be decoded and rendered/played separately.
SVC codecs allow for yet another level of greatness. SVC will encode the lowest resolution media stream as a normal stream, which can be decoded on its own, and will then encode only the difference between the higher resolutions and the base resolution in subsequent streams. The advantages here are multiple: lower bandwidth (low frequency information is not duplicated across streams), better resilience, … SVC codecs are especially useful in cases that involve a media server. In this case, the media streams come from the same source, and are NOT independent, except for the lowest resolution stream. The subsequent streams need all the lower resolution streams to be available to be rendered/played.
People are jumping the fence because they have unanswered needs
People are today modifying the SDP on the fly to be able to access properties or capacities of peer-connection internal objects, or to be able to set those properties or parameters. Several underlying objects were modified this way: the ICE agent, the encryption (DTLS), the codec choice, the bandwidth, …
If the use case is valid (and more often than not it is), adding a JS API that does what people were doing by manipulating the SDP is the right thing to do. We slowly replace an opaque, unspecified API with a specified JS API with JSON objects. It does not give more work to the developers, since they were doing it already, even though they will have to take the opportunity to refactor and clean their code.
V. Here is why
It so happens that some of the APIs proposed by the ORTC group would answer both the multiparty/simulcast/SVC problems and the SDP munging problems. They are being slowly integrated into the webRTC specification when and where they make sense (except for Microsoft, which just implements it all its own way and dumps it on an unsuspecting audience). The time to bring them into the webRTC 1.0 specs was shortened by the fact that those had been thought about for quite some time now, and overlapping members had worked on both webrtc and ortc and could bridge the gap.
Most of the new APIs you have seen coming out of the last meeting were APIs that would just provide a good way to achieve what people were trying to achieve by manipulating the SDP, *AND* could be integrated before the end of the year so as not to push webRTC 1.0 further back. The other changes are related to paving the way for simulcast, but I already spoke about that in a previous post.
Because the APIs are more granular instead of being tailored for a 1:1 case, writing the 1:1 case with those APIs looks overly complicated in contrast. I do not believe it to be really a problem, as it is always easy to go from granular to simple. Within a few weeks, you will have webrtc-on-ORTC shims, and your website will work exactly the same (as long as you don't need video), or you can keep ignoring Edge altogether. There are quite a few things that are overly complicated to do in webRTC today that will be easily doable with the new APIs. No regression in any case, just possible improvements. I expect the same thing to happen for the latest additions to the webRTC 1.0 API set. Eventually webRTC and ORTC should also converge.
I hope that this post shed some light on the decision process followed by the W3C. The core of it is feedback from users, and timeline considerations, so once again, if you have a use case or a question, voice them on the W3C's public-webrtc mailing list (not the discuss-webrtc mailing list).
Patching libwebrtc almost automatically
I. Introduction
Most products will want to modify libwebrtc to add some features. Some, like TokBox, will want to change the libvpx compilation flags to enable VP8 SVC (only temporal scalability) to use between their mobile SDKs. Others, like Voxeet, might want to add an additional audio codec. Many, including pristine.io, will want to add H.264 support.
Whatever the goal, everybody will need to be able to patch the source code in a consistent way against a fast-moving library, and keep the number of patches, and how to apply them, manageable.
Moreover, since this is public code, there are some patches I will contribute for all, but users might want to have their own private patches and keep them to themselves. The architecture should allow for that.
Finally, just for the fun of it, I wanted a way to quickly get patches from the Google review process, to be able to have features in the lib before they even appear in Chrome (or to test those patches with this system).
In this post we will propose one easy, but efficient way to do just that.
II. Implementation
1. Specific CMake commands used here
CMake has this nice add_subdirectory() command that makes a lot of things easy. Basically, what the command does is iterate into the corresponding folder and act on any CMakeLists.txt file present there.
- add_subdirectory(Patches)
By making it conditional you can design a nice layout to keep your patches managed, for example by platform:
- if( APPLE )
- add_subdirectory( mac )
- elseif( WIN32 )
- add_subdirectory( win )
- endif()
2. Patch creation and specific libwebrtc concerns
The libwebrtc source code is a patchwork of separate libraries that are fetched according to the DEPS file. While gclient has an option to generate a global patch, we preferred simply using git. Then, each patch you generate is for a specific git tree, and you have to remember where to apply it.
We created a CMake macro to help automate that:
- set_webrtc_patch_target(
- GIT_APPLY_CMD
- APPLY_DIR
- PATCH
- DEPENDS_ON_TARGET
- )
The DEPENDS_ON_TARGET argument allows us to make sure the patches are applied after the code is downloaded.
The GIT_APPLY_CMD argument allows for flexibility in which git command you use. Some prefer the “git diff” / “git apply” approach, while others prefer “git format-patch” / “git am”. In our case, we keep it simple:
- set(
- GIT_APPLY_CMD
- git apply --ignore-space-change --ignore-whitespace
- )
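The macro body itself is not shown in this post. For illustration, a minimal version could look like the sketch below (the patch-target naming scheme is an assumption, and the real macro has more error handling). Note how it records the patched directory in the cached list used by the clean-up target of the next section:
- macro( set_webrtc_patch_target GIT_APPLY_CMD APPLY_DIR PATCH DEPENDS_ON_TARGET )
- get_filename_component( patch_name ${PATCH} NAME_WE )
- add_custom_target(
- PATCH_${patch_name} # hypothetical naming scheme
- COMMAND ${GIT_APPLY_CMD} ${PATCH}
- WORKING_DIRECTORY ${WebRTC_SOURCE_DIR}/${APPLY_DIR}
- COMMENT "Applying ${PATCH} in ${APPLY_DIR}."
- )
- add_dependencies( PATCH_${patch_name} ${DEPENDS_ON_TARGET} )
- set( PATCHED_DIRS ${PATCHED_DIRS} ${APPLY_DIR} CACHE INTERNAL "Internal variable." )
- endmacro()
A call would then look like set_webrtc_patch_target( "${GIT_APPLY_CMD}" src/third_party/libvpx my_feature.patch WebRTC_Download ), where the patch name and the download target are hypothetical.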
3. How to make a clean/undo command
The problem with patches is that they leave the source tree “dirty”, and a good rule for development or even build bots is that the source code should stay clean (unmodified) as much as possible.
When using git, one way to get back to a clean state is to do a reset: “git reset --hard -q”. This brings all the tracked files back to a clean state, but can leave untracked files behind, e.g. if the patch added new files. “git clean -qfdx” is then needed to make sure the source tree is back to where you want it. The code looks like this:
- set( GIT_RESET_CMD git reset --hard -q )
- set( GIT_CLEAN_CMD git clean -qfdx )
Additionally, you need to know where to apply the commands, so you need to keep track of all the directories where a patch has been applied. For each patch, we're going to add the directory it is applied to to a list. When the time comes to “unpatch”, we'll use that list:
- list(REMOVE_DUPLICATES PATCHED_DIRS) # remove duplicates
- add_custom_target(
- UNPATCH_ALL
- ${CMAKE_COMMAND} -E touch dummy.phony
- )
- foreach( dir ${PATCHED_DIRS} )
- add_custom_command(
- TARGET UNPATCH_ALL POST_BUILD
- COMMAND ${GIT_RESET_CMD}
- COMMAND ${GIT_CLEAN_CMD}
- WORKING_DIRECTORY ${WebRTC_SOURCE_DIR}/${dir}
- COMMENT "Unpatching ${dir}."
- )
- endforeach()
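Reverting all patches is then just a matter of building that target from the top of the build tree, with any generator:
- cmake --build . --target UNPATCH_ALL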
4. How to integrate my private/proprietary patches?
With the add_subdirectory() command, things are quite simple. The code below checks if there is a “PvtPatches” subdirectory and, if there is, walks into it. You can use git subtrees, or the method of your choice, to have such a directory in your local copy, with a CMakeLists.txt copied (hum… largely inspired) from the one in Patches, and everything will be good.
- if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/PvtPatches)
- add_subdirectory(PvtPatches)
- endif()
Note that extra care has been taken with the variable that contains the list of directories to apply the reset and clean commands to, so that you can modify it from within a subdirectory and it remains valid. The “CACHE” option makes sure it is consistent across the entire project, whatever the current source directory is. The “INTERNAL” option is there to make sure this variable does not appear in the graphical user interface that goes along with cmake.
- set( PATCHED_DIRS "" CACHE INTERNAL "Internal variable." )
Of course, you’re likely to re-run the build generation tool after applying the patches:
- python /src/webrtc/build/gyp_webrtc.py
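If you prefer the build system to handle that step too, the call can be wrapped in its own target. A sketch, where the target name is an assumption and the path is the absolute form of the command above, assuming WebRTC_SOURCE_DIR points at the root of the checkout:
- add_custom_target(
- REGENERATE_BUILD_FILES # hypothetical target name
- COMMAND python ${WebRTC_SOURCE_DIR}/src/webrtc/build/gyp_webrtc.py
- WORKING_DIRECTORY ${WebRTC_SOURCE_DIR}/src
- COMMENT "Re-running the build file generation after patching."
- )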
III. Conclusion
You have seen in this post how to set up a simple patch system for libwebrtc. Of course, it would not be complete without a few examples, so in a following post I will show how to integrate Google patches.
The most difficult part is not the patching, but the testing: I need to push some examples and standalone tests first.
Cheers.
Alex.
Dashboard “Greenness”, one bug at a time
I. Introduction
Precompiled libraries for the stable versions of webrtc (those used in Chrome) have been requested many times on the mailing list, but so far nobody has taken it upon themselves to make them. One of the goals of this blog is to provide those, to lower the barrier to entry for people who want to build on top of webrtc.
As I was preparing the libraries on Linux, I bumped again into the failing tests I mentioned in a previous post:
“The first error seems to be related to a bad allocation. That's where you realize that running this on the smallest possible Linux instance in AWS was possibly a bad idea. It should disappear when I host the Linux build bot on a bigger instance. The second error is more elusive, and I can't figure it out just from the logs. Once I have set up a more powerful Linux build host, I will debug there directly.”
As far as I am concerned, having even a single test failing is a no-no. So I dug deeper. Here is the build before the changes.
II. Investigating
In the meantime, I moved the build bot to a stronger (c3.2x) instance. Indeed, the first error was a memory allocation problem triggered by an undersized instance, and it went away without any special attention.
The second error was related to screen sharing tests, which is not a surprise given that we are running on a virtual machine without display.
The original tests are run through a test driver written in Python. The code is separate from libwebrtc and can be found there. The main file is here. Here again, it is code coming from Chrome, which contains a lot of things not needed to test the standalone version of webrtc (the Chrome sandbox, …).
It also does a lot of nice things in terms of checking that leftovers from previous, possibly failing, tests are not in the way. There are lots of extra steps that improve the stability and robustness of the tests, so it's not all bad.
To make things simple, you just need to install Xvfb and openbox,
- sudo apt-get install xvfb
- sudo apt-get install openbox
then define a display, create it, and run the window manager before you run your tests (the code below is written to stay as close as possible to Google's test conditions).
- export DISPLAY=:9
- Xvfb :9 -screen 0 1024x768x24 -ac -dpi 96&
- openbox&
Now, all tests pass!
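As a side note, if you drive the tests through a CTest script (as described in the build bot post), the display can also be exported from CMake code. A one-line sketch, assuming the Xvfb display above is already running:
- set( ENV{DISPLAY} ":9" ) # must match the Xvfb display created above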
III. Conclusion
The greenness of the dashboard is something that is of utmost importance. If the dashboard is not green, you are developing blindfolded. Making it green is an everyday challenge. It can be seen as too much to bother with, but it is actually a developer safety net, and allows you to focus on developing only.
The advantages of CMake here are twofold: a lower barrier to entry, and a collaborative dashboard.
Once again, one can see that the Chrome build tools, however good and advanced, are overkill in the case of the standalone libwebrtc. I do believe it is slowing down adoption of, and contribution to, webrtc, as one needs to become a chromium developer first, and the learning curve is steep.
In any case, you can now download tested, precompiled libraries and headers for Linux, Mac or Windows on the Tool page. If what you want is just to develop something against libwebrtc that works against the latest stable Chrome, you have all you need now.
Some people request features that are not, or not yet, in webrtc. In a following post, I will explain how to patch libwebrtc effortlessly as part of the process described before.
Enjoy.
How to set up build bots for libwebrtc
I. Introduction
Following my previous posts, I got a lot of e-mails about setting up the build bots. I have to admit that my previous post did not address that in detail, and that the documentation about it is sparse and confusing, as the recommended way to do it in the CMake community has changed through the 15 years of the (VTK, ITK) projects. So here is a post describing, step by step, how to set up your own bots in a matter of hours, and manage them remotely through git, without ever having to connect to them again (in theory; in practice, s#$%^ happens, and you might also want to connect from time to time to debug problems directly).
It is good policy to keep the build bot scripts separate from the main code, as they might contain sensitive information about your infrastructure. For example, you might put in an access key to upload the results of the build (packaged libraries), and that had better be private. Moreover, with the current setting, anybody who manages to get access to your build scripts ends up being able to run anything on your build bot, which is also something you don't want 🙂 In our case, it's more of a tutorial, and all scripts are accessible here.
II. Scripting CTest
So far, I touched on two ways of using ctest:
- as an extension of CMake, to handle test suites directly from within the CMake files.
- as a CDash client, to run CMake and automatically send the results of the update, configure, build and test steps to a CDash server.
There is a third way to use ctest: through scripting. You can write a file, using CMake syntax, to prepopulate CTEST_* variables and run ctest in a controlled way. You then call “ctest -S” with your file as argument to run ctest in script mode.
Some very useful variables are defined for you to use, to set the CTest cache, environment variables, or hardcoded compilers, and any given program or CMake variable beforehand. That allows, for example, running 32-bit and 64-bit builds, debug and release, on a single machine. Another example is to have multiple versions of compilers on a given machine, and use ctest scripts to pick a specific one at a time. One of the best scripts I saw, written by Gaëtan Lehmann, was handling different versions of MSVC and Java on Windows. Hats off.
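As an illustration, pinning a specific compiler from such a script is just a matter of setting environment variables before the configure step. A minimal sketch, with hypothetical paths:
- set( ENV{CC} /usr/bin/gcc-4.8 ) # hypothetical compiler paths
- set( ENV{CXX} /usr/bin/g++-4.8 )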
One can get more information about CTest's capabilities on the old pages, written when people were still using Purify (here, here and here), and in a more recent version here.
III. What about libwebrtc?
For this example, I used the latest method, developed for ITK v4. A very fast overview is here. This version was focused on git, and implements some nice tricks to handle different branches, which makes setting up bots for development branches easier.
1. A generic script that does all the heavy lifting
The idea is to have a very generic script that handles most of the problems for you, and leaves only a few variables for the user to define. I ported the generic script to be usable for libwebrtc: libwebrtc_common.cmake. Unless you're a purist, I do not recommend modifying it, or even looking at it. It allows you to define a set of parameters, some of the usual CMAKE or CTEST variables, but some new dashboard_ variables as well, to control your build.
- dashboard_model = Nightly | Experimental | Continuous
- dashboard_track = Optional track to submit dashboard to
- dashboard_loop = Repeat until N seconds have elapsed
- dashboard_root_name = Change name of “My Tests” directory
- dashboard_source_name = Name of source directory (libwebrtc)
- dashboard_binary_name = Name of binary directory (libwebrtc-build)
- dashboard_data_name = Name of ExternalData store (ExternalData)
- dashboard_cache = Initial CMakeCache.txt file content
- dashboard_do_cache = Always write CMakeCache.txt
- dashboard_do_coverage = True to enable coverage (ex: gcov)
- dashboard_do_memcheck = True to enable memcheck (ex: valgrind)
- dashboard_no_clean = True to skip build tree wipeout
- dashboard_no_update = True to skip source tree update
- CTEST_UPDATE_COMMAND = path to git command-line client
- CTEST_BUILD_FLAGS = build tool arguments (ex: -j2)
- CTEST_BUILD_TARGET = A specific target to be built (instead of all)
- CTEST_DASHBOARD_ROOT = Where to put source and build trees
- CTEST_TEST_CTEST = Whether to run long CTestTest* tests
- CTEST_TEST_TIMEOUT = Per-test timeout length
- CTEST_COVERAGE_ARGS = ctest_coverage command args
- CTEST_TEST_ARGS = ctest_test args (ex: PARALLEL_LEVEL 4)
- CTEST_MEMCHECK_ARGS = ctest_memcheck args (defaults to CTEST_TEST_ARGS)
- CMAKE_MAKE_PROGRAM = Path to “make” tool to use
- Options to configure builds from experimental git repository:
- dashboard_git_url = Custom git clone url
- dashboard_git_branch = Custom remote branch to track
- dashboard_git_crlf = Value of core.autocrlf for repository
If you want to extend the capabilities of this core script, some hooks are also provided to keep things clean and compartmentalized (see the sketch after this list).
- dashboard_hook_init = End of initialization, before loop
- dashboard_hook_start = Start of loop body, before ctest_start
- dashboard_hook_started = After ctest_start
- dashboard_hook_build = Before ctest_build
- dashboard_hook_test = Before ctest_test
- dashboard_hook_coverage = Before ctest_coverage
- dashboard_hook_memcheck = Before ctest_memcheck
- dashboard_hook_submit = Before ctest_submit
- dashboard_hook_end = End of loop body, after ctest_submit
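These hooks are simply CMake macros that the common script invokes when they are defined. For example, a pre-build hook could be as small as this sketch:
- macro( dashboard_hook_build )
- message( "About to build in ${CTEST_BINARY_DIRECTORY}" )
- endmacro()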
2. A very simple file to define a bot
Eventually, that makes writing a build script very easy indeed:
- set(CTEST_SITE "Bill_._Our_fearless_leader" )
- set(CTEST_BUILD_NAME "Ubuntu-12.04-32-Deb" )
- set(CTEST_BUILD_FLAGS -j8 )
- set(CTEST_DASHBOARD_ROOT "/home/ubuntu/Dashboards" )
- set(CTEST_TEST_TIMEOUT 1500 )
- set(CTEST_BUILD_CONFIGURATION Debug )
- set(CTEST_CMAKE_GENERATOR "Unix Makefiles" )
- set(dashboard_model Experimental )
- include(libwebrtc_common.cmake)
And… voilà! You have a Linux build bot all set up! Replace ‘Debug’ with ‘Release’, and you have your release build ready. To change from 32 to 64 bits, since we use ninja, you have to set the right environment variable, but it's not difficult either:
- set( CTEST_ENVIRONMENT
- "GYP_DEFINES='target_arch=x64'" # or ia32 for 32 bits
- )
The corresponding file is here.
A word of warning though: installing the dev environment for libwebrtc is hard. First, it will almost only work under Ubuntu; second, the environment install scripts provided do not seem to work, so you will end up having to manually install quite a few libs yourself before being able to compile. The good news is, you will only have to do that once.
3. How to automate it all?
Now you are armed with several files for each build you want to run. You might very well run many builds on the same machine, e.g. 32/64 bits, Debug/Release. For Linux machines, you might want to cross-compile the Android binaries as well (more on the mobile targets in another post).
However, you still need to have access to the machine, and manually launch
- ctest -S My_Build_script.cmake
for it to work.
One way around this is to define a (shell) script that runs those commands for you. However, whenever you make a modification to the script, you have to connect to the machine again, and manually update the local script to the new version. Grrrrr.
That's where cron (on Linux or Mac) and Scheduled Tasks (on Windows) come in handy, as they can run a command at a given time for a given user. Here you have two schools: the original CMake members designed everything so that people could configure their always-on desktops to be used during sleeping hours. More recent developers will want to set up either a dedicated build bot or a hosted build bot, and might want to reduce the cost by switching the machine off when the job is done. I will illustrate the latter for Linux (and Mac), and the files for Windows will be provided in the github account for those interested. Note that for Windows build bots specifically, it has been shown that you'd better reboot the machine once a day in any case if you want it to work…
Starting devices remotely is easy: all the cloud providers provide command-line APIs, and you can maintain a build master (a very tiny instance) whose sole job is to wake up the bots once a day for them to fetch the latest code, configure, build, test, and submit to the dashboard. In AWS EC2, that means playing with IAM, but nothing too hard there, and it's very well documented. On Linux, the cron daemon accepts the ‘@reboot’ keyword, and will run the corresponding task whenever the device is started.
- @reboot /home/ubuntu/Dashboards/Scripts/Bill-runall.sh
Finally, you can just use the shutdown command in your script to stop the instance when it's done.
- shutdown -h now
We’re only left with automated updating of the scripts.
The trick used by the ITK community is to keep the scripts (and the cron table) in a git repository, and to update this repository first. In this example, you can see from the shell script that we expect a ~/Dashboards/Scripts directory to contain the build scripts, a ~/Dashboards/Logs directory to be present, and that the crontable also gets updated on the fly. Now I can just commit to my git repository, and the build bot will auto-update itself. Sweet.
- # Go to the working directory
- cd /home/ubuntu/Dashboards/Scripts
- # get the latest scripts
- git pull --rebase -q
- # update the crontable
- crontab ./Bill-crontab
- # Run the builds
- ctest -S ./Bill-32-Debug.cmake
- ctest -S ./Bill-32-Release.cmake
- # done, let’s shutdown the instance to avoid paying too much
- sudo shutdown -h now
The full script with some additional features is here.
IV. Conclusion
It should now be pretty clear that setting up a build bot for libwebrtc is actually quite easy. The code provided on github should make it even easier. Feel free to set up your own build bot, hopefully with settings that are not yet present in any of the bots contributing to the dashboard today, and contribute to the fun. I should update to a bigger CDash server that will allow for more than 10 builds a day very soon. I would love to see people contributing builds for ARM, Android, iOS, …
If you find this useful, let others know; nice comments are also appreciated. 😉
Installing libwebrtc
I. Introduction
Once you have compiled a library for development, you can use it directly from the build tree for any other project that depends on it. That is usually not a good idea if you have several members in your dev team, if you use different systems, and/or if you want to distribute your work. That's where the notions of installation and versioning make sense (1). Packaging (2) then allows you to install on a computer different from the one you built the project on. After you have installed a library (and the corresponding headers and other needed files), it would also be nice to be able to import (3) it into a project easily.
The good news is that CMake handles all of that with (again!) very few lines of code. Using the install() command you can define what to install, where to install it, and to some extent group files per component for interactive installers. You might also remember that I told you in a previous post that CMake was kind of a trilogy (CMake/CTest/CDash). Well, there is a sequel called CPack. It's not as good as the first ones (sequels rarely are), but it gets its own cmake variable prefix so I guess it's cool 🙂 CPack handles the packaging part, which is built on top of the installation part.
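To give a taste of that sequel, a minimal CPack setup built on top of the install() rules below could look like this sketch (the package name is an assumption; the version variable is defined in the versioning section):
- set( CPACK_PACKAGE_NAME "libwebrtc" ) # assumed package name
- set( CPACK_PACKAGE_VERSION ${WEBRTC_VERSION} ) # defined in the versioning section below
- include( CPack ) # adds a "package" target to the build system
Now let's dig in.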
II. Installing targets or files locally
In our use of CMake, we do not have targets for each library or executable. Moreover, the tests are not easily relocatable, so it's better not to try. Libraries can be installed in a flat directory structure, but headers need to follow a certain directory structure to be usable, so we will have to follow two different strategies there. Finally, the install() command has a lot of signatures, so let's focus on those which are of use to us.
1. Versioning
The versioning follows the CMake convention (to be compatible with the other tools and commands) as explained here.
- #--------------------------------------------------------------------------
- # Versioning
- set( WEBRTC_MAJOR_VERSION 0 ) # not fully tested yet
- set( WEBRTC_MINOR_VERSION 1 ) # really not fully tested, not fully implemented
- set( WEBRTC_BUILD_VERSION 1 ) # should be the SVN rev, but it's hard to get it automatically from the git commit msg.
- set( WEBRTC_VERSION
- ${WEBRTC_MAJOR_VERSION}.${WEBRTC_MINOR_VERSION}.${WEBRTC_BUILD_VERSION}
- )
- set( WEBRTC_API_VERSION
- # This is the ITK/VTK style where SOVERSION is two numbers...
- "${WEBRTC_MAJOR_VERSION}.${WEBRTC_MINOR_VERSION}"
- )
- set( WEBRTC_LIBRARY_PROPERTIES ${WEBRTC_LIBRARY_PROPERTIES}
- VERSION "${WEBRTC_VERSION}"
- SOVERSION "${WEBRTC_API_VERSION}"
- )
further reading:
- the original CMake Versioning file for GIT (super advanced)
2. Set up destination folders per component type
Here again, nothing fancy, just following the CMake convention so that find_package() can be used later (see the find_package() documentation about the expected paths, and the note after the code below).
- # --------------------------------------------------------------------------
- # Configure the export configuration
- # WEBRTC_INSTALL_BIN_DIR - binary dir (executables)
- # WEBRTC_INSTALL_LIB_DIR - library dir (libs)
- # WEBRTC_INSTALL_DATA_DIR - share dir (say, examples, data, etc)
- # WEBRTC_INSTALL_INCLUDE_DIR - include dir (headers)
- # WEBRTC_INSTALL_CMAKE_DIR - cmake files (cmake)
- if( NOT WEBRTC_INSTALL_BIN_DIR )
- set( WEBRTC_INSTALL_BIN_DIR "bin" )
- endif()
- if( NOT WEBRTC_INSTALL_LIB_DIR )
- set( WEBRTC_INSTALL_LIB_DIR "lib" )
- endif()
- if( NOT WEBRTC_INSTALL_DATA_DIR )
- set( WEBRTC_INSTALL_DATA_DIR "share" )
- endif()
- if( NOT WEBRTC_INSTALL_INCLUDE_DIR )
- set( WEBRTC_INSTALL_INCLUDE_DIR "include" )
- endif()
- if( NOT WEBRTC_INSTALL_CMAKE_DIR )
- set( WEBRTC_INSTALL_CMAKE_DIR "lib" )
- endif()
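All those destinations are relative to CMAKE_INSTALL_PREFIX, which you can override at configure time if you do not want the default system location (the paths below are placeholders):
- cmake -DCMAKE_INSTALL_PREFIX=/opt/webrtc /path/to/libwebrtc/source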
3. Handle libraries
Just like we did for the tests, we will first need to import all the library names from the filesystem before we can do anything. Unlike for the tests, where we had to worry about different arguments for each test, all libraries are treated equally, so we can automate the process. The file( GLOB_RECURSE ) command does just that. Under Mac, all the libs are at the root of the ninja build, but under Windows they are created in their respective subdirectories, so we need to use GLOB_RECURSE and not just GLOB, which would work only for Mac.
- set(WEBRTC_BUILD_ROOT ${WebRTC_SOURCE_DIR}/src/out/${CMAKE_BUILD_TYPE}) # the CMAKE_BUILD_TYPE variable allows consistency with the build target
- set(WEBRTC_LIB_EXT a) # the default
- if(WIN32)
- set(WEBRTC_LIB_EXT lib) # you're on windows! you know who you are 🙂
- endif()
- file( GLOB_RECURSE # under windows, the libs are within the subfolders
- WEBRTC_LIBS # the output variable
- ${WEBRTC_BUILD_ROOT}/*.${WEBRTC_LIB_EXT} # the pattern, i.e. all files with the right extension under the build root.
- )
Now, we could directly feed this to the install() command:
- foreach( lib ${WEBRTC_LIBS} )
- install(
- FILES ${lib}
- DESTINATION ${WEBRTC_INSTALL_LIB_DIR}
- COMPONENT Libraries
- )
- endforeach()
However, we want to remove the libraries that were used for the tests, and we have to prepare a list of libraries to populate a configuration file that will be installed alongside the libraries, making it easy to use the installed version (the generation of that configuration file is sketched after the code). The full version looks like this:
- set(WEBRTC_LIBRARIES "") # prepare the config for the build tree
- foreach(lib ${WEBRTC_LIBS})
- string(FIND ${lib} "test" IS_TEST)
- if(IS_TEST EQUAL -1)
- get_filename_component(lib_name ${lib} NAME_WE)
- string(REPLACE "lib" "" lib_target_name ${lib_name})
- set(WEBRTC_LIBRARIES ${WEBRTC_LIBRARIES} ${lib_target_name})
- install(
- FILES ${lib} # GLOB_RECURSE already returned absolute paths
- DESTINATION ${WEBRTC_INSTALL_LIB_DIR}
- COMPONENT Libraries
- )
- endif()
- endforeach()
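The configuration file itself can then be generated with configure_file() and installed next to the libraries, so that find_package() picks it up later. A sketch, where the template name and location are assumptions:
- configure_file(
- ${WebRTC_SOURCE_DIR}/CMake/WebRTCConfig.cmake.in # assumed template location
- ${WebRTC_BINARY_DIR}/WebRTCConfig.cmake
- @ONLY
- )
- install(
- FILES ${WebRTC_BINARY_DIR}/WebRTCConfig.cmake
- DESTINATION ${WEBRTC_INSTALL_CMAKE_DIR}
- COMPONENT CMakeFiles
- )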
4. Handle header files
The delicate part of handling headers is that a specific root directory and subdirectory layout is expected by the files including them. The DEPS file gives you some hints about which dirs you should use for the includes:
- # Define rules for which include paths are allowed in our source.
- include_rules = [
- # Base is only used to build Android APK tests and may not be referenced by
- # WebRTC production code.
- '-base',
- '-chromium',
- '+gflags',
- '+net',
- '+talk',
- '+testing',
- '+third_party',
- '+webrtc',
- ]
Apart from the missing gflags, those are all top-level directories of the WebRTC source. A quick sanity check (grep -R -h \#include * | sort -u > log) confirms that this seems to be the layout expected by the #include lines.
So, for each of /net, /talk, /testing, /third_party, /webrtc, we need to walk the subdirectory layout and use it at install time (that's the main difference with the library-handling code). That justifies using the RELATIVE option of the file( GLOB_RECURSE ) command.
- file(
- GLOB_RECURSE header_files # output variable
- RELATIVE ${WebRTC_SOURCE_DIR}/src # the path will be relative to /src/, as expected by the #includes
- FOLLOW_SYMLINKS # we need to follow the symlinks to chromium subfolders
- ${WebRTC_SOURCE_DIR}/src/net/*.h
- ${WebRTC_SOURCE_DIR}/src/talk/*.h
- ${WebRTC_SOURCE_DIR}/src/testing/*.h
- ${WebRTC_SOURCE_DIR}/src/third_party/*.h
- ${WebRTC_SOURCE_DIR}/src/webrtc/*.h
- )
Now the install command is easy to write.
- foreach( f ${header_files} )
- get_filename_component( RELATIVE_PATH ${f} PATH ) # NOTE ALEX: it seems that newer versions of CMake use DIRECTORY instead of PATH …
- install(
- FILES ${WebRTC_SOURCE_DIR}/src/${f}
- DESTINATION ${WEBRTC_INSTALL_INCLUDE_DIR}/${RELATIVE_PATH} # that's the tricky part here
- COMPONENT Headers
- )
- endforeach()
5. Are we there yet?
YES! We can now install. You now have an install target in your build system. Under Mac, you can simply type “make install”; under Windows, if you used the default (ninja/MSVC), you will have an “INSTALL” target in the list of targets in MSVC. It is not built by default, and you need to trigger the build manually. Administrator rights will surely be needed. By default, everything is installed under /usr/local on Mac and unix, and under “Program Files” on Windows (with “(x86)” for the 32-bit builds).
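And to close the loop on point (3) of the introduction, here is a sketch of what a consumer project's CMakeLists.txt could look like once the library is installed (the variable names are assumed to be set by the installed configuration file discussed above):
- cmake_minimum_required( VERSION 2.8 )
- project( MyWebRTCApp ) # hypothetical consumer project
- find_package( WebRTC REQUIRED ) # finds the installed WebRTCConfig.cmake
- include_directories( ${WEBRTC_INCLUDE_DIRS} ) # assumed to be set by the config file
- add_executable( my_app main.cpp )
- target_link_libraries( my_app ${WEBRTC_LIBRARIES} )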
In a following post, I will show how to package all those files to be installed on a remote computer.
This work by Dr. Alexandre Gouaillard is licensed under a Creative Commons Attribution 4.0 International License.
This blog is not about any commercial product or company, even if some might be mentioned or be the object of a post in the context of their usage of the technology. Most of the opinions expressed here are those of the author, and not of any corporate or organizational affiliation.