Showing some Love to my dashboard.

A dashboard is like a young baby: fragile and in need of love. While for most people a dashboard does not add as much value as a new feature in the lib itself (like a nice lib installer; more on that in a future post), for a developer it is the key to success.

Most developers have a bias toward some OS and some dev tools. I, for one, do most of my main coding on Mac nowadays. However, you need to know that the code is tested across OSes, compilers, options, etc. If your dashboard is well maintained, it removes the need for you to go and test all of these yourself most of the time. So you need quite a few things in place before a dashboard can be used to its full strength and become your safety net:

  • All the supported OSes, compilers, and options need to appear as separate builds on the dashboard. (In our case, we want as many builds as there are on the official waterfall; a build-naming sketch follows this list.)
  • The code needs to be tested as much as possible on each platform, so you need visibility into the code coverage on each platform (more in a later post). Removing a failing test to make the dashboard appear green and hide an error is not OK.
  • The dashboard needs to stay green (without any reported error) all the time, so that when a new error is introduced you see it right away, and it is not hidden behind an already failing test.
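
As an illustration, one simple way to make each configuration appear as its own build is to compose a unique build name from the matrix parameters. This is a minimal sketch, not my actual submission scripts:

```python
# Minimal sketch: compose a unique build name per configuration so each
# OS/arch/compiler/option combination shows up separately on the dashboard.
import platform

def build_name(compiler, arch, build_type, option="Normal"):
    """Return a unique, human-readable build name for dashboard submission."""
    os_name = platform.system()  # e.g. "Linux", "Darwin", "Windows"
    return "-".join([os_name, arch, compiler, build_type, option])

print(build_name("clang", "64", "Release"))  # e.g. Linux-64-clang-Release-Normal
```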

So yesterday I spent a little bit of time testing my scripts on Linux. As expected, it worked almost out of the box, but the first results showed 9 failing tests! That's quite a lot. So I went to double-check the official waterfall, and indeed there were Linux builds, and they were all green, so I was doing something wrong. Or was I?

Looking at the waterfall, there are 47 build or try bots reporting. The base matrix used to compute the different kinds of builds uses the following parameters (a sketch enumerating the matrix follows the list):

  • OS: Windows, Mac, iOS, Linux, Android
  • ARCH: 32 / 64
  • BUILD_TYPE: Debug / Release
  • OPTIONS: Normal / Large-tests / GN / Simulator / (other exotic builds)
  • For Android: try bots on real devices (Nexus 5, 7.2, 9)
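
As a quick sanity check, here is a back-of-the-envelope sketch of how that matrix multiplies out; the parameter values mirror the list above, and the pruning comment at the end is my reading of the waterfall, not an official count:

```python
# Back-of-the-envelope enumeration of the configuration matrix above.
from itertools import product

OSES        = ["Windows", "Mac", "iOS", "Linux", "Android"]
ARCHES      = ["32", "64"]
BUILD_TYPES = ["Debug", "Release"]
OPTIONS     = ["Normal", "Large-tests", "GN", "Simulator"]

configs = list(product(OSES, ARCHES, BUILD_TYPES, OPTIONS))
print(len(configs))  # 5 * 2 * 2 * 4 = 80 raw combinations

# Not every combination exists in practice (e.g. Simulator only makes sense
# for iOS, and Android adds real-device try bots instead), which is presumably
# why the waterfall reports 47 bots rather than the full 80.
```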

I don’t want to reproduce them all, at least not for now, but I would like to cover all the desktop OSes, architectures, and build types as a start. Looking at the Linux 32-bit Release builds, I realized they were running fewer tests than their Windows / Mac counterparts. So I started by modifying my test scripts not to include those tests when the platform is Linux. Boom, down to only two errors.

The first error seems to be related to a bad allocation. That’s where you realize that running this on the smallest possible Linux instance on AWS was possibly a bad idea. It should disappear when I host the Linux build bot on a bigger instance. The second error is more elusive, and I can’t figure it out just from the logs. Once I have set up a more powerful Linux build host, I will debug there directly.
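
For what it’s worth, here is a minimal sketch of that kind of platform filter; the test names are hypothetical placeholders, not the actual test targets from my scripts:

```python
# Minimal sketch of a platform-based test filter. The test names below are
# hypothetical placeholders, not the actual targets from my scripts.
import sys

ALL_TESTS = [
    "common_tests",
    "audio_tests",
    "video_tests",
    "desktop_capture_tests",  # assume (hypothetically) Linux bots skip this one
]
LINUX_EXCLUDED = {"desktop_capture_tests"}

def tests_for_platform():
    """Return the tests to run, dropping those the Linux bots don't run."""
    if sys.platform.startswith("linux"):
        return [t for t in ALL_TESTS if t not in LINUX_EXCLUDED]
    return ALL_TESTS

print(tests_for_platform())
```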

The next posts should be about packaging, then about adding coverage computation (for gcc and clang builds) and memory-leak verification (using valgrind). Stay tuned.

 

