Installing libwebrtc

I. Introduction

Once you have compiled a library for development, you can use it directly from the build tree for any other project that depends on it. That is usually not a good idea if you have several members on your dev team, if you use different systems, and/or if you want to distribute your work. That's where the notions of installing and versioning make sense (1). Packaging (2) then allows you to install on a different computer than the one you built the project on. After you have installed a library (and the corresponding headers and other needed files), it would also be nice to be able to import (3) it easily in a project.

The good news is that CMake handles all of that with (again!) very few lines of code. Using the install() command you can define what to install, where to install it, and to some extent group files per component for interactive installers. You might also remember that I told you in a previous post that CMake was kind of a trilogy (CMake/CTest/CDash). Well, there is a sequel called CPack. It's not as good as the first ones (sequels rarely are), but it gets its own cmake variable prefix so I guess it's cool 🙂 CPack handles the packaging part, which is built on top of the installation part. Now let's dig in.

II. Installing targets or files locally

In our use of CMake, we do not have targets for each library or executable. Moreover, the tests are not easily relocatable, so it's better not to try. Libraries can be installed in a flat directory structure, but headers need to follow a certain directory structure to be usable, so we will have to follow two different strategies there. Finally, the install() command has a lot of signatures, so let's focus on those which are of use for us.

1. Versioning

The versioning follows the CMake convention (to be compatible with the other tools and commands) as explained here.

  #-----------------------------------------------------------------------------
  # Versioning
  set( WEBRTC_MAJOR_VERSION 0 ) # not fully tested yet
  set( WEBRTC_MINOR_VERSION 1 ) # really not fully tested, not fully implemented
  set( WEBRTC_BUILD_VERSION 1 ) # should be the SVN rev, but it's hard to get it automatically from the git commit msg.
  set( WEBRTC_VERSION
    ${WEBRTC_MAJOR_VERSION}.${WEBRTC_MINOR_VERSION}.${WEBRTC_BUILD_VERSION}
    )
  set( WEBRTC_API_VERSION
    # This is the ITK/VTK style where SOVERSION is two numbers...
    "${WEBRTC_MAJOR_VERSION}.${WEBRTC_MINOR_VERSION}"
    )
  set( WEBRTC_LIBRARY_PROPERTIES ${WEBRTC_LIBRARY_PROPERTIES}
    VERSION   "${WEBRTC_VERSION}"
    SOVERSION "${WEBRTC_API_VERSION}"
    )

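These properties are not consumed anywhere in this post, since we import prebuilt libraries instead of defining our own CMake targets. For reference, here is a minimal sketch of how they would be applied on a real target (the target name and source file are hypothetical):

  # Hypothetical target; WEBRTC_LIBRARY_PROPERTIES expands to the
  # VERSION / SOVERSION pairs set above.
  add_library( webrtc_example SHARED example.cc )
  set_target_properties( webrtc_example PROPERTIES
    ${WEBRTC_LIBRARY_PROPERTIES}
    )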

2. Set up destination folders per component type

Here again, nothing fancy, just following the CMake convention so that find_package can be used later (see the find_package() documentation about the expected paths).

  # ----------------------------------------------------------------------------
  # Configure the export configuration
  # WEBRTC_INSTALL_BIN_DIR     - binary dir (executables)
  # WEBRTC_INSTALL_LIB_DIR     - library dir (libs)
  # WEBRTC_INSTALL_DATA_DIR    - share dir (say, examples, data, etc)
  # WEBRTC_INSTALL_INCLUDE_DIR - include dir (headers)
  # WEBRTC_INSTALL_CMAKE_DIR   - cmake files (cmake)
  if( NOT WEBRTC_INSTALL_BIN_DIR )
    set( WEBRTC_INSTALL_BIN_DIR "bin" )
  endif()
  if( NOT WEBRTC_INSTALL_LIB_DIR )
    set( WEBRTC_INSTALL_LIB_DIR "lib" )
  endif()
  if( NOT WEBRTC_INSTALL_DATA_DIR )
    set( WEBRTC_INSTALL_DATA_DIR "share" )
  endif()
  if( NOT WEBRTC_INSTALL_INCLUDE_DIR )
    set( WEBRTC_INSTALL_INCLUDE_DIR "include" )
  endif()
  if( NOT WEBRTC_INSTALL_CMAKE_DIR )
    set( WEBRTC_INSTALL_CMAKE_DIR "lib" )
  endif()

3. Handle libraries

Just like we did for the tests, we will first need to import all the library names from the filesystem before we can do anything. Unlike for the tests, where we had to worry about different arguments for each test, all libraries are treated equally, so we can automate the process. The file( GLOB_RECURSE ) command does just that. Under mac, all the libs are at the root of the ninja build, but under windows, they are created in their respective subdirectories, so we need to use GLOB_RECURSE and not just GLOB, which would only work for mac.

  set( WEBRTC_BUILD_ROOT ${WebRTC_SOURCE_DIR}/src/out/${CMAKE_BUILD_TYPE} ) # the CMAKE_BUILD_TYPE variable allows consistency with the build target
  set( WEBRTC_LIB_EXT a ) # the default
  if( WIN32 )
    set( WEBRTC_LIB_EXT lib ) # you're on windows! you know who you are 🙂
  endif()
  file( GLOB_RECURSE # under windows, the libs are within the subfolders
    WEBRTC_LIBS      # the output variable
    ${WEBRTC_BUILD_ROOT}/*.${WEBRTC_LIB_EXT} # the pattern, i.e. all files with the right extension under the build root.
    )

Now, we could directly feed this to the install() command:

  foreach( lib ${WEBRTC_LIBS} )
    install(
      FILES       ${lib}
      DESTINATION ${WEBRTC_INSTALL_LIB_DIR}
      COMPONENT   Libraries
      )
  endforeach()

However, we want to remove the libraries that were used for the tests, and we have to prepare a list of libraries to populate a configuration file that will be installed alongside the libraries and make it easy to use the installed version. The full version looks like this:

  set( WEBRTC_LIBRARIES "" ) # prepare the config for the build tree
  foreach( lib ${WEBRTC_LIBS} )
    string( FIND ${lib} "test" IS_TEST )
    if( IS_TEST EQUAL -1 )
      get_filename_component( lib_name ${lib} NAME_WE )
      string( REPLACE "lib" "" lib_target_name ${lib_name} )
      set( WEBRTC_LIBRARIES ${WEBRTC_LIBRARIES} ${lib_target_name} )
      install(
        FILES       ${lib} # GLOB_RECURSE already returned absolute paths
        DESTINATION ${WEBRTC_INSTALL_LIB_DIR}
        COMPONENT   Libraries
        )
    endif()
  endforeach()
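
As for the configuration file itself, here is a minimal sketch of how the accumulated WEBRTC_LIBRARIES list could be injected into it and shipped with the install. The template name, its location, and the CMakeFiles component are assumptions for illustration:

  # WebRTCConfig.cmake.in (hypothetical template) would contain a line such as:
  #   set( WEBRTC_LIBRARIES "@WEBRTC_LIBRARIES@" )
  configure_file(
    ${WebRTC_SOURCE_DIR}/Cmake/WebRTCConfig.cmake.in # assumed location
    ${WebRTC_BINARY_DIR}/WebRTCConfig.cmake
    @ONLY
    )
  install(
    FILES       ${WebRTC_BINARY_DIR}/WebRTCConfig.cmake
    DESTINATION ${WEBRTC_INSTALL_CMAKE_DIR}
    COMPONENT   CMakeFiles
    )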

4. Handle header files

The delicate part of handling headers is that a specific root directory and subdirectory layout is expected by the files including them. The DEPS file gives you some hints about which dirs you should use for the includes:

  # Define rules for which include paths are allowed in our source.
  include_rules = [
    # Base is only used to build Android APK tests and may not be referenced by
    # WebRTC production code.
    '-base',
    '-chromium',
    '+gflags',
    '+net',
    '+talk',
    '+testing',
    '+third_party',
    '+webrtc',
  ]

Apart from the missing gflags, those are all top-level directories of the WebRTC source. A quick sanity check (grep -R -h \#include * | sort -u > log) confirms that this seems to be the layout expected by the #include lines.

So for each of /net, /talk, /testing, /third_party and /webrtc, we need to walk the subdirectory layout and reuse it at install time (that's the main difference from the library-handling code). That justifies using the RELATIVE option of the file( GLOB_RECURSE ) command.

  file(
    GLOB_RECURSE header_files             # output variable
    RELATIVE ${WebRTC_SOURCE_DIR}/src     # the paths will be relative to /src/, as expected by the #includes
    FOLLOW_SYMLINKS                       # we need to follow the symlinks to the chromium subfolders
    ${WebRTC_SOURCE_DIR}/src/net/*.h
    ${WebRTC_SOURCE_DIR}/src/talk/*.h
    ${WebRTC_SOURCE_DIR}/src/testing/*.h
    ${WebRTC_SOURCE_DIR}/src/third_party/*.h
    ${WebRTC_SOURCE_DIR}/src/webrtc/*.h
    )

Now the install command is easy to write.

  foreach( f ${header_files} )
    get_filename_component( RELATIVE_PATH ${f} PATH ) # NOTE ALEX: newer versions of CMake use DIRECTORY instead of PATH
    install(
      FILES       ${WebRTC_SOURCE_DIR}/src/${f}
      DESTINATION ${WEBRTC_INSTALL_INCLUDE_DIR}/${RELATIVE_PATH} # that's the tricky part here
      COMPONENT   Headers
      )
  endforeach()

5. Are we there yet?

YES! We can now install. You now have an install target in your build system. Under mac, you can simply type "make install"; under windows, if you used the default (ninja/MSVC), you will have an "INSTALL" target in the list of targets in MSVC. It is not built by default, and you need to trigger the build manually. Administrator rights will surely be needed. By default, everything is installed under /usr/local on mac and unix, and under "Program Files" on windows (with "(x86)" for the 32-bit builds).
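
If the default location does not suit you, the standard CMAKE_INSTALL_PREFIX cache variable overrides it. A minimal sketch (the /opt/webrtc prefix is an arbitrary example, not what our scripts do):

  # Respect a user-provided prefix; otherwise install under /opt/webrtc.
  if( CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT )
    set( CMAKE_INSTALL_PREFIX "/opt/webrtc" CACHE PATH "install prefix" FORCE )
  endif()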

In a following post, I will show how to package all those files to be installed on a remote computer.

 


Showing some Love to my dashboard.


A Dashboard is like a young baby, fragile and in need of love. While for most people a dashboard does not add as much value as a new feature in the lib itself (like a nice lib installer, more in a future post), for developers it's the key to success.

Most developers have a bias toward some OS and some dev tools. I, for one, do most of my main coding on mac nowadays. However, you need to know that the code is tested across OSes, compilers, options, etc. If your dashboard is well maintained, it removes the need for you to go and test all these configurations yourself most of the time. So one needs to have quite a few things in place before a Dashboard is usable to its full strength and becomes your safety net.

  • All the supported OSes, compilers, and options need to appear as separate builds on the dashboard. (In our case, we want as many builds as there are on the official waterfall.)
  • The code needs to be tested as much as possible on each platform, so you need visibility on the code coverage of each platform (more in a later post). Removing a failing test to make the dashboard appear green and hide an error is not OK.
  • The dashboard needs to stay Green (without any reported error) all the time, so that when a new error is introduced you see it right away, and it's not hidden within an already failing test.

So yesterday I spent a little bit of time testing my scripts on linux. As expected it worked almost out of the box, but the first results showed 9 failing tests! That's quite a lot. So I went to double check the official waterfall, and indeed there were linux builds, and they were all green, so I was doing something wrong. Or was I?

Looking at the waterfall, there are 47 build or try bots reporting. The base matrix to compute the different kinds of builds uses the following parameters:

  • OS: win, Mac, iOS, Linux, Android
  • ARCH: 32 / 64
  • BUILD_TYPE: Debug / Release
  • OPTIONS: Normal / Large-tests / GN / Simulator / (other exotic builds)
  • For android: try bots on real devices: Nexus 5, 7.2, 9

I don't want to reproduce them all, at least not for now, but I would like to cover all the desktop OSes, archs, and build types as a start. Looking at the linux 32 release builds, I realized they were running fewer tests than their windows / mac counterparts. So I started by modifying my test scripts not to include those tests if the platform is linux. Boom, down to only two errors. The first error seems to be related to a bad allocation. That's where you realize that running this on the smallest possible linux instance in AWS was possibly a bad idea. It should disappear when I host the Linux build bot on a bigger instance. The second error is more elusive, and I can't figure it out just from the logs. Once I have set up a more powerful linux build host, I will debug there directly.

Next blogs should be about packaging, then about adding coverage computation (for gcc and clang builds) and memory leak verification (using valgrind). Stay tuned.

 


How to test LibWebRTC

I. Introduction

Building a library is good, but you do not know if there are any bugs until you exercise the library by linking it against an executable and running it. While building webrtc (and libjingle) has been made relatively easy thanks to depot_tools and recipes, testing it has been a real problem. Since in the google system commits are rolled, tested, then potentially unrolled, you cannot trust the HEAD revision to be stable. Using branch_heads you could compile and test the versions corresponding to a specific release of Chrome; however, testing a newer libwebrtc / libjingle, like those used in canary, is something one wants to do to be ready when it moves to production, but it's very challenging. This post is going to address two points: the testing system at google (for the curious), and how to achieve almost the same level of testing in a few lines of CMake code.

II. I want my own waterfall!

The official page describes how to see the tests being run, and how to add them, but not how to trigger them, or how to run them locally. It's not that you can't, it's just that it's quite complicated and demands quite some machinery in place. What one really wants is, at minimum, the capacity to test locally the result of the build before installing it or packaging it (more on that in a later post), and in the best case scenario, the equivalent of a waterfall that can be run on demand. First, for those in a hurry: there is almost no way to do it yourself, as the infrastructure is owned by google and requires you to have a committer account with either webrtc or chrome. I do not know if it is possible, but I can say that I do not know of anybody in the ecosystem (apart from one intel employee) who enjoys this privilege. For those who are curious or have time, see below what you can do to have the closest equivalent.

1. The missing Testing info

This part will deal with all the info missing from the webrtc.org page (but spread around in different chromium wiki pages) to actually set up testing the google way.

The system used by google for the development of chromium is great. Honestly, it makes a lot of sense, and a lot of effort has been put into it to actually scale to the number of developers working on chromium. The latest "infra", as they call it, has to handle more than 200 commits a day without disruption, and is apparently doing a great job at it. That being said, it is quite overkill for someone who only wants to work on libwebrtc, and it's unfortunate that smaller teams are being forced into using it or surrendering testing altogether.

The original test part of the infra is made of several parts: GTest to write the unit tests, the usual tools to integrate it in the build process (gyp, DEPS, …), buildbot for … well, the build bots, and a new special swarming infrastructure (here and here) coupled with isolated tests (here and here) to scale even better. I will not address the GTest, gyp or DEPS parts here, but deal with what needs to be in place to test after a successful build of libwebrtc. To be thorough, I will only touch on the standalone libwebrtc, and not the webrtc-in-chromium and other embedded tests (it's actually quite different, especially for the renderers, screensharing, etc., but hey, we have to take it step by step). Finally, I will only build the webrtc.org version and not the gecko/firefox version (here again, quite different, love the screensharing in FF, and the H264 integration, but we have to take it step by step).

The link below will take you step by step through everything you need to set up your own buildbot (with slaves), trybot and others locally. Note that the links to the repositories might be obsolete by the time you read this post (FAST moving target …) and you might want to read the code in the "infra" part of depot_tools on disk to know the latest way of doing things (sic), more specifically /build, which can be directly fetched from https://chromium.googlesource.com/chromium/tools/build.git . The files configuring the standalone webrtc waterfall appear in master.client.webrtc . Each column of the waterfall corresponds to a buildbot (slave) whose information is also in the same folder.

further reading

  • https://www.chromium.org/developers/testing/chromium-build-infrastructure

2. Isolated Testing, WTF?

For all the readers working in early stage startups, I want to first make clear that WTF does NOT stand for "Where is The Funding". After a build of libwebrtc you will find your build folder (/src/out/<CONFIG> by default, <CONFIG> being either Debug or Release) quite crowded. Among all those files, you will see some executables with "isolated" and "isolated.state" files alongside. Those files are used by the swarming system to distribute the testing (here and here). In our case, the important information is that they contain the list of the files you need to pass as arguments to the tests to run them. This is still work in progress, but already stable enough to be used. There are 22 tests listed like this. If you then look at the waterfall, you will see that, roughly, there are two types of buildbots: the normal ones, which run all the unit tests, and the "Large" ones, which run performance tests and long tests like [vie|voe]_auto_test, video_capture_tests and audio_device_tests. Those take quite some time to run, much longer than the other tests, which justifies running them separately on beefier instances. All the tests need to be run with an extra --test-launcher-bot-mode argument. Also, for the sake of completeness, both vie_ and voe_auto_test are actually very powerful interactive executables that deserve a post of their own. I encourage everybody to play with them and look at the source code. Inspiring! To be run as a test, you need to pass them an extra --automated arg.

3. So, how easy is it to support that with CMake?

CMake is part of a trilogy (yeah, another one), CMake/CTest/CDash. CMake is your main configuration and build tool; CTest is, well, your main testing and test suite management tool, but also a client for CDash, the equivalent of the waterfall, i.e. an interactive dashboard to browse and interact with the results of your builds and tests sent by CTest. The advantage here is that it's all integrated.

NOTE: the best way would be to parse the isolated files and extract the exact command line as well as the output files. In our case, for the sake of simplicity and time, we will manually add the 22 tests one by one, and we will ignore the output as long as the test passes. This is thus improvable, but gives you 99% of what you want, in 30 minutes of work.

To enable testing in CMake, you just need to add the following two lines of code in a CMake script:

  enable_testing()
  include( CTest )

In the absence of those two lines, the rest of the code will not crash or raise any error, but no tests will be generated, so be careful about it.

For each test you want to add, you can use the add_test() command. Here is an example that handles both windows and mac for adding a normal test, while checking that the name you passed corresponds to an existing file (remember, FAST moving target …):

  set( my_test_binary_name "common_audio_unittests" )
  set( my_test_binary ${my_test_binary_name} )
  if( WIN32 )
    set( my_test_binary ${my_test_binary}.exe )
  endif()
  if( EXISTS ${MY_BUILD_DIR}/${my_test_binary} )
    add_test(
      NAME              imp_${my_test_binary_name}
      COMMAND           ${my_test_binary} --test-launcher-bot-mode
      WORKING_DIRECTORY ${MY_BUILD_DIR}
      )
  else()
    message( WARNING "${my_test_binary} - NOT FOUND." )
  endif()

You can rinse and repeat. The final code can be seen here (and the macro add_webrtc_test is defined here).

Once you're done building the project, you can check which tests have been added by running "ctest -N" in the build directory. It will not run the tests, just list them. "ctest" or "make test" (under mac/linux) are equivalent and run all the tests one after the other. For webrtc, it is better not to use the -jX option to run tests in parallel, as the tests access the hardware and could interfere with one another. To make sure that people do not make this mistake, you could add a dependency between each test and the previous one in the list. We did not implement that, even though it's only one more line per test (see the sketch below). If you want to see the output of the tests, you can add a -V to the command line. Here you go: you can test the library as thoroughly as google tests it (well, almost; memory leak and other thread sanitizers are missing, but hey, it's great already. How to add memory leak checks and coverage computation will be explained in following posts). Now we can party like it's 1999. Or can we?
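
A minimal sketch of that one line, assuming two of the test names registered with the add_test() pattern above (imp_voe_auto_test is hypothetical here):

  # Run imp_voe_auto_test only after imp_common_audio_unittests has finished,
  # so that ctest -jX cannot schedule the two concurrently.
  set_tests_properties( imp_voe_auto_test
    PROPERTIES DEPENDS imp_common_audio_unittests
    )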

NOTE: webrtc_perf_test needs to access the network, and so does voe_auto_test, so if you're testing under windows, you either have to configure your firewall to allow it, manually click-and-allow when prompted, or skip those tests if you want the test suite to run.

4. Ok, I can run all the tests locally, but I still don't have my waterfall!

True. At this stage you have enough to build and test the output on a single machine. That's a good base for packaging the lib, a topic we will address in a following post. However, you don't have visibility on the result of the build of the same code base on different systems and compilers, with different compiler options, and you also do not have the nice visual dashboard online.

That is where CDash comes in handy. CDash is the server component of the waterfall, and CTest can very easily be configured to send the results of a build and tests to a CDash server. Kitware, one of the main companies behind cmake/ctest/cdash and the working place of some of the most impressive engineers I have worked with in my life, offers free hosting for open source projects, and reasonably inexpensive hosting options. Of course, the code is free and open source and you can install your own CDash server internally if you prefer. I chose to have my own small account, and here is the very simple content of the CTestConfig.cmake file that you MUST keep in the root of the source directory.

  set( CTEST_PROJECT_NAME "libwebRTC" )             # linked to a project you must have created on the server beforehand
  set( CTEST_NIGHTLY_START_TIME "00:00:00 EST" )    # whatever you want, usually a time at which the computer is not used
  set( CTEST_DROP_METHOD "http" )
  set( CTEST_DROP_SITE "my.cdash.org" )             # will change depending on your server install
  set( CTEST_DROP_LOCATION "/submit.php?project=libwebRTC" )
  set( CTEST_DROP_SITE_CDASH TRUE )

From an empty dir (recommended) you can run ctest -D Experimental to configure, build, run the tests (optionally check for memory leaks, compute coverage, …) and then send the results to the dashboard. For this example, here is what the dashboard looks like (mac, windows). You can see windows and linux builds appearing also; since cmake is cross-platform, it indeed works almost out of the box (except for some command line details and extension differences) on all platforms. Here each line is a slave (as opposed to a waterfall where a column is a slave), and you can click on the result of each step (configuration, build, test) to get the output as it would have been printed to stdout, giving you approximately the same features as the buildbot waterfall. Eventually, for a mature project, it can look like this.

CTest also has the capacity to upload files separately (a sketch follows). In a following post, I'll explain how to use this to automatically drop compiled versions of the library, or packaged versions of it, to a public repository if and only if the tests all passed. Devs can then trust that any binary or package made available is sound, without having to re-compile and re-test themselves (choose your style of victory dance).
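
A minimal sketch of the upload part, assuming it runs inside a CTest scripting session (ctest -S) and that the package file name is the hypothetical output of the packaging step:

  # ctest_upload() attaches files to the current submission;
  # ctest_submit( PARTS Upload ) then sends them to CDash.
  ctest_upload( FILES ${CTEST_BINARY_DIRECTORY}/libwebrtc-0.1.1.tar.gz )
  ctest_submit( PARTS Upload )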

further reading

  • http://www.cmake.org/Wiki/CMake/Testing_With_CTest
  • http://www.cmake.org/Wiki/CMake_Scripting_Of_CTest (Advanced! How to set up automated build instances.)

III. Conclusion

The google system is certainly more scalable. It also allows triggering builds on demand as part of the validation of commits, which is great. While CMake/CTest/CDash has been improved in the past 5 years to do this, it is still not a feature that is as easy to deploy (AFAIK, I'm not current on CMake). In the case of a very large project (200 commits a day …) with only a few companies involved, it's great.

What is great about the C^3 system is that anybody can volunteer a build host and contribute to the dashboard. If you have a fancy configuration (an early version of a compiler not yet out, an old version of a compiler [borlandCC 5.6], …) that nobody else can test on, you can still build and report the errors for the devs to see. It makes reporting errors as simple as "ctest -D Experimental". If you configure your computer to be a nightly build host, then devs can hack the code and check if the result solves your problem without having access to the machine at all (yes, you can point the bot to a git branch, so as not to pollute the main dev or master branches during the trial and error process). So for simpler projects, with a lot of smaller teams contributing (like an open source community around a library …), it is great. The barrier to entry is low, while it remains quite powerful if you want it to be. It has been around for years and used in multiple projects, so it's easy to find people with knowledge of the tools (who is using depot_tools outside of the chromium project …?). There is also a big code base out there for people to draw inspiration from. It supports ninja…

Here, in a few hours, we have been able to learn how to test libwebrtc locally to the same level as google does, and to report to a hosted Dashboard that can be shared with others. In following posts, we will see how to add memory leak analysis and code coverage computation (compiling is a good start, testing is better, but how much do we test exactly?), how to create packages to install libwebrtc on computers, and how to then automatically import installed versions in projects that use CMake to configure their build.


Automating libwebrtc build with CMake

I. Introduction

libwebrtc is the base of many products. It is of course the base of the webrtc JS API implementation in chrome, firefox and opera, but it is also the base of a lot of mobile SDKs and other products out there. Keeping an updated, tested version based on a stable code base has been notoriously difficult if you're not a google employee. This post is the first in a series of posts explaining step by step how to set up a system that can automatically compile, test and package libwebrtc for you, for several platforms. It follows a previous post that explained how the google build system works (depot_tools), and will focus more on the CMake part of the automation.

II. There are already (a few) very good resources out there

There is no single "best way" to compile libraries. As long as the libraries are built, tested, and integrated automatically, the job is done. Specifically for mobile, the pristine scripts (see below) are doing a good job at this. I'm biased toward CMake since I worked with it for many years; however, there are reasons why I kept working with it for so long. Not having to deal with compiler differences is a joy, and not having to deal with OS differences is also a joy. Being able to add tests on the fly with a simple command, which in turn also enables sending results to a dashboard, especially when support for valgrind and coverage comes out of the box, is AWESOME. Finally, installing and packaging are made equally easy (yes, cross-platform). Following these blog posts, you will be able in a few days to set up a state-of-the-art compilation, testing, and packaging system for libwebrtc, where it would otherwise take (from my experience) much longer. Of course, if you have an existing testing infrastructure and CI solution, that might not play along well, but it's often best to separate building the webRTC libs from building whatever you build around them, be it a webrtc plugin, a cordova wrapper (see below) or a mobile SDK.

further reading

  • http://tech.pristine.io/build-ios-apprtc/
  • https://github.com/pristineio/webrtc-build-scripts
  • http://tech.pristine.io/automated-webrtc-building/
  • https://github.com/eface2face/cordova-plugin-iosrtc

III. How to best use CMake

CMake is a cross-platform and cross-compiler configuration manager, sometimes called a meta-build system. Its main purpose is to find libraries and programs for you on the host computer and set up targets (executables, libraries, tests, …), whatever the operating system and the compiler you use.

At the bare minimum, you can use it like any other script language and try to encapsulate the webrtc build system in system calls. The "execute_process" command would be used for that purpose (as sketched below). It has the disadvantage of doing everything at configuration time, and linearly. A second call to the script would re-run everything, every time. It is preferable to set up targets and dependencies at configuration time (the only time you would directly run cmake) and then use the native compiler to handle the rest, in a resumable fashion.
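
For illustration, here is a minimal sketch of that naive configure-time approach (assuming gclient is on the PATH); every single cmake run would block on this:

  # Naive encapsulation: executes at configuration time, on every cmake invocation.
  execute_process(
    COMMAND           gclient sync
    WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
    RESULT_VARIABLE   sync_result
    )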

A full usage of cmake would be to replace the webrtc build system entirely, and redefine all libraries and executables, the corresponding source files, compilation options, etc. While this might be tractable for a single executable or library, doing it for all of webrtc would mean migrating all the .gyp[i] files and keeping them in sync at every update. Anybody who has looked at the mess that the 6,000-line src/build/common.gypi is, and stayed sane, would shy away from this.

In this case we are going to take the "super build" approach, and let cmake drive the google build system, import and run the tests, add coverage support, prepare packages and so on and so forth. It allows us to use the google build system untouched for the basics, and to use CMake where it's best.

IV. Let’s do it

1. A new CMake module to detect depot tools

CMake has a collection of modules installed alongside it to make detecting and using your usual libraries and packages easier, cross-platform. Most of those modules are made of CMake code in a file named FindXXX.cmake. From your code you can use one by invoking "find_package( XXX )". If the package is mandatory, you can tell find_package to fail right away by adding the REQUIRED argument. For example, CMake comes with a FindGit.cmake pre-installed. In our code we can simply write this:

  find_package( Git REQUIRED )

Now, depot_tools is not yet recognized by vanilla CMake. However, CMake has a mechanism for projects to extend its capacities. You can write your own FindXXXXX.cmake modules and let CMake know about them by setting the CMAKE_MODULE_PATH variable in your code. You should be careful to extend and not overwrite the variable, in case your code is called from another CMake script (super build style). In our case, we put the scripts in a "Cmake" directory right under the root directory.

  set( CMAKE_MODULE_PATH
    ${CMAKE_MODULE_PATH} # for integration in superbuilds
    ${CMAKE_CURRENT_SOURCE_DIR}/Cmake
    )

You can see the corresponding full code here.

Now we still need to write FindDepotTools.cmake. CMake provides all the primitives to do this. In our case, we just find gclient and we're good. The rest is handled by CMake for us, including OS and path differences.

  find_program( DEPOTTOOLS_GCLIENT_EXECUTABLE
    NAMES gclient gclient.bat # hints about the name of the exe. gclient.bat is the windows version.
    )
  # below this line is standard CMake stuff for all packages to handle QUIET, REQUIRED, ...
  include( ${CMAKE_ROOT}/Modules/FindPackageHandleStandardArgs.cmake )
  find_package_handle_standard_args( DepotTools
    REQUIRED_VARS DEPOTTOOLS_GCLIENT_EXECUTABLE
    FAIL_MESSAGE "Could not find the gclient executable."
    )

Now our main script core is relatively easy to write.

  project( webrtc )
  set( CMAKE_MODULE_PATH
    ${CMAKE_MODULE_PATH} # for integration in superbuilds
    ${CMAKE_CURRENT_SOURCE_DIR}/Cmake
    )
  find_package( DepotTools REQUIRED )
  find_package( Git        REQUIRED )

You can see the corresponding full code here.

2. Implement “gclient config” as a CMake custom command and custom target.

Now, we want to run the "gclient config" command line from CMake. This command line will create a .gclient file which should not exist beforehand. If the file exists, the command should be a noop. Moreover, I want the build system to try every time it is run, and I want it to work across operating systems.

Let's first define what the command should be, in a cross-platform manner.

  set( gclient_config
    ${DEPOTTOOLS_GCLIENT_EXECUTABLE} config # gclient has been found before automatically
    --name src
    https://chromium.googlesource.com/external/webrtc.git
    )
  if( WIN32 )
    set( gclient_config cmd /c ${gclient_config} ) # Windows command prompt syntax
  endif()

The first constraints are met with CMake's add_custom_command(). It allows creating a command that will generate a file as output and will not be triggered if the file already exists.

  add_custom_command(
    OUTPUT  ${CMAKE_SOURCE_DIR}/.gclient
    COMMAND ${gclient_config}
    )

However, nothing in the script tells you when to run this custom command. That's where add_custom_target() comes into the picture. It creates a (potentially empty) target for the chosen build system, to which commands can attach themselves. It also helps define dependencies with other targets to fix the build/execution order. In the case of gclient config, there is no previous step, so there is no dependency. By making the target depend on the file generated by the command, you automatically link them. The ALL parameter always includes this target in the build.

  add_custom_target(
    webrtc_configuration ALL
    DEPENDS ${CMAKE_SOURCE_DIR}/.gclient
    )

You can see the corresponding full code here.

3. Implement “gclient sync” and add dependency

Implementing "gclient sync" is more or less the same: define the command, add a custom command, then add a custom target. This time though, we want this target to only execute AFTER "gclient config", so we need to add a dependency. The code is actually self explanatory. Note that we force sync NOT to run the hooks (-n option).

  set( gclient_sync ${DEPOTTOOLS_GCLIENT_EXECUTABLE} sync -n -D -r 88a4298 )
  if( WIN32 )
    set( gclient_sync cmd /c ${gclient_sync} )
  endif()
  add_custom_command(
    OUTPUT  ${CMAKE_SOURCE_DIR}/src/all.gyp
    COMMAND ${gclient_sync}
    )
  add_custom_target(
    webrtc_synchronization ALL
    DEPENDS ${CMAKE_SOURCE_DIR}/src/all.gyp
    )
  add_dependencies( webrtc_synchronization webrtc_configuration )

4. Implement “gclient runhooks”

gclient runhooks will generate the build files, so we need to be sure that the most obvious parameters are set beforehand. Ninja handles Release and Debug at build time, so no need to take care of that now, but if you had chosen another type of generator, you might need to do it now. The switch between 32- and 64-bit architectures (or arm flavors for mobile) is handled through environment variables. CMake has a proxy syntax to access those:

  if( DEFINED ENV{GYP_DEFINES} )
    message( WARNING "GYP_DEFINES is already set to $ENV{GYP_DEFINES}" )
  else()
    if( APPLE )
      set( ENV{GYP_DEFINES} "target_arch=x64" )
    else()
      set( ENV{GYP_DEFINES} "target_arch=ia32" )
    endif()
  endif()

Then the rest of the code is pretty much the same as before:

  set( gclient_runhooks ${DEPOTTOOLS_GCLIENT_EXECUTABLE} runhooks )
  if( WIN32 )
    set( gclient_runhooks cmd /c ${gclient_runhooks} )
  endif()
  add_custom_command(
    OUTPUT  ${CMAKE_SOURCE_DIR}/src/out # it's arbitrary.
    COMMAND ${gclient_runhooks}
    )
  add_custom_target(
    webrtc_runhooks ALL
    DEPENDS ${CMAKE_SOURCE_DIR}/src/out
    )
  add_dependencies( webrtc_runhooks webrtc_synchronization )

The full code for sync and runhooks can be seen here.

5. Prepare the Build with Ninja target

As mentioned previously, we have to deal with Debug / Release modes here. We follow the CMake naming convention, which luckily matches the ninja output directory naming.

  if( NOT CMAKE_BUILD_TYPE )     # allow it to be set in a superscript and honor it here
    set( CMAKE_BUILD_TYPE Debug ) # Debug by default
  endif()

We can then add the usual command / custom command / custom target triplet.

  set( webrtc_build ninja -v -C ${CMAKE_SOURCE_DIR}/src/out/${CMAKE_BUILD_TYPE} )
  add_custom_command(
    OUTPUT  ${CMAKE_SOURCE_DIR}/src/out/${CMAKE_BUILD_TYPE}/libwebrtc.a # it's arbitrary.
    COMMAND ${webrtc_build}
    )
  add_custom_target(
    webrtc_build ALL
    DEPENDS ${CMAKE_SOURCE_DIR}/src/out/${CMAKE_BUILD_TYPE}/libwebrtc.a
    )
  add_dependencies( webrtc_build webrtc_runhooks )

6. Finally building!

You can then run cmake once and launch the build with your native build tool. This code has been tested on mac (use "make") and windows (open webrtc.sln in MSVC and build the ALL target, or hit F7).


The chromium / webRTC build system

I. Introduction

This post is an introduction to the build system used by Google in many projects, with a specific focus on building WebRTC. It does not pretend to be exhaustive, but should give you an overview of all the steps and files involved, so you can dig deeper and debug faster if there is a problem along the way. It will also set the ground for a follow-up post about how to automate the build. Finally, it will give some hints about how to proceed with modifications of the code and rebuild incrementally after that. A full post will be dedicated to the details of that last point.

II. depot_tools

depot_tools is the name of the collection of tools used by the chromium team to handle all of their development process, including code reviews, remote testing, etc. Even if, for simple steps like getting the code, everything eventually boils down to git or svn commands, chromium and, by extension, WebRTC have a dynamic way of handling dependencies and compiler flags. It's buried within multiple interdependent files (DEPS, .gyp, .gypi, …) and you should really avoid modifying these until you have a specific need for it.

further reading

  • https://www.chromium.org/developers/how-tos/depottools [OLD]
  • http://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools.html

III. fetch and the webrtc recipes

The official site lists more requirements in terms of libraries and compilers for each system they support, and is a must-read. They recommend simply running the "fetch" command line and being done with it. For a one-shot attempt at building webrtc that might be enough, but for most people it is not.

A look at the fetch source code (fetch.py, not to be confused with gclient's fetch) shows that it just calls a "recipe". Recipes are hardcoded values and commands designed to make it a one-liner. In this case the webrtc recipe just hardcodes the values in a .gclient file, with the same result as a call to "gclient config" with the right parameters, and executes the equivalent of a call to "gclient sync", which downloads the code. We feel it makes more sense to call the gclient commands explicitly and keep control over what we do.

IV. gclient config and .gclient file

You can achieve the same result as "fetch" by running the following command line:

  gclient config --name src https://chromium.googlesource.com/external/webrtc.git

Originally, the webrtc code was handled through SVN, and webrtc was using "trunk" as its base root name. Following the migration to git, and to stay closer to the chromium way of doing things, that has been changed to "src" and needs to be forced. That explains the "--name src" part of the command line.

Now, if you look at the .gclient file generated by the recipe and the one generated by the above command, you will notice one line is missing:

  'with_branch_heads': True,

That line actually adds 1/2 GB to the download of the source, and is only needed if you want to work with the same version that chrome includes. Most of the time this is not the case. (See this page, the "Working with release branches" section.)

Finally, there are also two recipes, for iOS and android, which end up adding a single line to the .gclient file related to the target_os (in the original recipe, target_os is a pass-through):

  • "target_os = [ios, mac]"
  • "target_os = [android, unix]"

This supposes that you are cross compiling the iOS lib on mac and the android lib on unix, respectively. If you are on mac/unix/win, it is worth adding the target_os line, and "target_os_only = True", to reduce the amount of code and testing files downloaded to the bare minimum (a sketch follows). It's not mandatory though, and it needs to be done through file manipulation, as there is no parameter in gclient config to do that (yet).
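
A minimal sketch of the two hand-added lines at the end of the generated .gclient file, for the iOS-on-mac case (values illustrative):

  # Appended by hand; gclient config has no parameter for these (yet).
  target_os      = ["ios", "mac"]
  target_os_only = True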

further reading:

A .gclient file can be very complex indeed, and like many things in webrtc/chromium, most of them are hand written. A perfect example is the chromium .gclient file hardcoded in the webrtc code. You can see that custom dependencies are being defined as a way to prune the original chromium dependencies. Chromium developers will tell webrtc developers who complain about the time it takes to fetch webrtc that it would be twice as bad with webkit enabled 🙂 Those who have time to spend optimizing their pure, lean-n-mean webrtc libs could investigate what else they could remove for a production build.

V. gclient sync

gclient sync is an interesting command with lots of options (grep/search for "CMDsync" in the gclient.py file). Its main role is to get the code from the url provided in the .gclient file, then parse the DEPS file. If the DEPS file contains any hooks, it can also run them directly. In our case, we separated running the hooks from the synchronization. There are many reasons for that, some of which we will describe in further posts, but one simple reason is to be able to illustrate how much time getting the webrtc code takes vs getting the chromium code.

The result of this parsing of DEPS is the .gclient_entries file, which contains folder : url pairs, starting with the name (here "src") and original url, followed by all of the dependencies. If you are on windows, there will be an additional dependency on winsdk.

There are a lot of options, but we are going to keep things simple. We are going to use -r to pin a revision, to make our production builds more stable. We are going to use -D to prune directories that are no longer used. Finally, we are going to use -n to avoid running the hooks during the sync step.

  gclient sync -n -D -r 88a4298

The revision tag is a git hash in general; here it is the hash of the specific commit before the Google team silently changed the threading model, resulting in a lot of spin locks in the code. That also is the reality of working with the HEAD, and why we will cover, in a separate post, how to test the compiled libraries before using them in production.

further reading

Later on, after running the hooks, you can, out of curiosity, take a look at the 78-entry-strong file for chromium at /src/chromium/.gclient_entries. You will see that some entries have "none" as url, as set up by the custom_deps we spoke about earlier.

VI.gclient runhooks

The hooks in this case come from the DEPS file.

  • check_root_dir.py # just checks whether you're still using "trunk" or not. Backward compatibility.
  • sync_chromium.py # the one you're gonna learn to hate. Takes 40mn from Mountain View, CA with 89MB DL.
  • setup_links.py # see below
  • download_from_google_storage # gets the test files
  • src/webrtc/build/gyp_webrtc.py # generates the build files - see below

sync_chromium.py is just an extended wrapper for "gclient sync". Since the .gclient file is already present in the /chromium folder, sync is the next step. It should be noted that it modifies the GYP_DEFINES environment variable to remove the NaCl tool suite from chromium. As you can see, there are many ways to do things, and they are all entangled in the source code, which makes it quite difficult to handle.

setup_links.py is here to address backward compatibility. SVN has one advantage over git: you can directly retrieve subdirectories, while in git you have to get everything. When both chromium and webrtc were using svn, some of the chromium subdirectories were directly retrieved and inserted in the webrtc layout using that SVN feature. Now that everything has moved to git, that is not possible anymore. Instead of modifying the code and the includes to reflect the new layout (with all of the chromium code under the /chromium dir), the google engineers chose the backward-compatible approach of creating symlinks for each of the previously inserted folders. The layout is then the same and there is continuity. This is what setup_links.py does. Unfortunately, it comes at the cost of running this command as an administrator under windows, which is an extra burden.

/src/webrtc/build/gyp_webrtc.py is a very important file. It is the file that will generate the build files based on the gyp and gypi files on one hand, and on some environment variables on the other. By default, ninja files will be created in src/out/Debug and src/out/Release, respectively. If you're going for an automated build, you should use the default. Now, if you want to be able to debug in an IDE like Xcode or MSVC, there are a few extra steps to add to the process. You can follow the official page for that, section "build".

If you modify any gyp or gypi file to add compiler options, or add projects, you will have to re-run gyp_webrtc. If you change the generator, you will also have to run it again. Usually "gclient runhooks" does that for you.

VII. Build

That step depends on the choice you made with respect to your gyp generators. Here we will assume ninja, since the main goal is to open the path to automated compilation (oh yes, and also because google almost only supports ninja …).

Some practical notes about ninja and webrtc.

More or less each gyp file will generate a .ninja file. If you look in the root build directory, say /out/Debug in our case, you will see one main ninja file, "build.ninja", and then everything else is in the obj subdirectory, whose internal layout mirrors the webrtc source layout. Once the build is done, the resulting libraries and executables will be under the root for mac, and next to their corresponding .ninja file for windows. That makes cross-platform packaging code … interesting to write. Ninja supports incremental compilation, so if you modify a gyp file to add a source file, add a project, etc., you can regenerate the ninja files and rerun to compile only what's needed. However, if you make some other modifications, like changing the compilation parameters, ninja will not recompile if the binary is still present, and you have to remove it manually. Practically, if time is not a real issue, to be safe, one might want to remove the /out/ directory before regenerating the ninja files and recompiling.

In our case

Just run the following line and you should be all set (it works with Release instead of Debug too).

  ninja -C /out/Debug

further reading

  • https://martine.github.io/ninja/manual.html


Talk is cheap. Show me the code.

WebRTC by Dr. Alex is about WebRTC, the technology and the community that surrounds it. Given how WebRTC is still in its infancy, there is a need for continued monitoring and verifiable reporting on the technology advances, as well as well-authored posts about the key technological developments and deployments.
Following the good recipes of open science, as much as possible, all the opinions expressed here will be backed up by verifiable data, from verifiable sources. The sources will be cited or quoted as appropriate. As a place for the community to have discussions and exchange information through the comment section, we will welcome polite and respectful challenges, ideas and opinions, and appropriately documented rebuttals. To keep the content current and accurate, the posts will be updated, corrected, amended and enhanced if new information becomes available, or if the original statements are invalidated. And, for all contributions, credit and attribution will always be provided.

Creative Commons License
This work by Dr. Alexandre Gouaillard is licensed under a Creative Commons Attribution 4.0 International License.

This blog is not about any commercial product or company, even if some might be mentioned or be the object of a post in the context of their usage of the technology. Most of the opinions expressed here are those of the author, and not of any corporate or organizational affiliation.