Turning Xuggler Logging On and Off

June 29, 2009

Some people have asked us how to turn logging on and off (or up and down) so I wrote up the process on our Wiki.

See: http://wiki.xuggle.com/Performance_Tuning#How_do_I_turn_up_or_down_logging.3F

– Art

Setting Bitrate and Other Options with Xuggler

June 20, 2009

One of the questions we’re often asked about Xuggler is “seriously, you couldn’t find a better name?”

But a close second is “how do I change the bit-rate I encode my audio or video at?”  Usually it’s as simple as calling IStreamCoder.setBitRate(int), but sometimes it can be more complicated.
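As an illustrative sketch of the simple case (file name, codec choice, and bit-rate value here are all hypothetical; the point is IStreamCoder.setBitRate(int), which must be called before the coder is opened):

```java
import com.xuggle.xuggler.ICodec;
import com.xuggle.xuggler.IContainer;
import com.xuggle.xuggler.IStream;
import com.xuggle.xuggler.IStreamCoder;

public class BitRateSketch {
  public static void main(String[] args) {
    // Create and open an output container (hypothetical file name).
    IContainer container = IContainer.make();
    if (container.open("out.flv", IContainer.Type.WRITE, null) < 0)
      throw new RuntimeException("could not open output file");

    // Add a stream and get the coder that will encode it.
    IStream stream = container.addNewStream(0);
    IStreamCoder coder = stream.getStreamCoder();
    coder.setCodec(ICodec.ID.CODEC_ID_FLV1);

    // The key call: bit rate is in bits per second, and must be
    // set BEFORE coder.open() or it will be ignored.
    coder.setBitRate(500000);

    // ... set width/height, pixel type and time base as usual,
    // then open the coder and start encoding ...
  }
}
```

If the rate still seems to be ignored, you have likely hit one of the more complicated cases the wiki guidelines discuss.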

So, we’ve put together a set of guidelines on our wiki.  Check it out here if you’re interested:


Everyone Needs a Butler

June 14, 2009

Recently I talked to quite a few Xuggler users.  For those who don’t know, Xuggler is a Java library that can decode and encode pretty much any type of media file you like (FLV, MOV, H264, etc., we do ’em all) in Java, and in real-time to boot!

A phrase I kept hearing was “Xuggler just works”, and “the quality is really high”.  It’s always good to hear that, because Robert and I have spent a lot of time concentrating on quality.  But there was no way we could have hit that bar without some awesome tools: for example, our Continuous Build Server.

It uses Hudson.  We first started using Hudson 8 months ago, and the investment has really paid off for Xuggler and for Red5.  Thanks to Hudson, when we decide it’s time to do a new Xuggler release, we know exactly what quality level it’s at, we know it doesn’t leak memory, we have automatically built installers and automatically built documentation, and it takes us only about 1 extra hour to “ship” the software.

I thought it was finally time that I publicly thanked Hudson, and gave others who want to try using it a little leg up.  So, here’s my Hudson Case Study.

Here’s what I’ll cover:

  • Features vs. Quality: the conflict.
  • Quality is a Feature: the answer.
  • Continuous Quality: what is it?
  • What is Hudson?
  • Hudson and Xuggler: what we use Hudson for.
  • Why Philosophy Matters.
  • Hudson and Red5: what Red5 uses Hudson for.
  • Hudson and Your Project: some best practices we recommend to get the most out of Hudson.


Features vs. Quality.

When you’re working on commercial software under a deadline there is a famous adage: “features, quality, deadline — pick two”.  In most open source projects, the adage is slightly different: “features, quality, deadline — pick FEATURES!”  And as a result, almost every single software project ends up choosing features first, deadline second, and quality last with a sheepish promise to “do better next time”.

Over the last ten years, though, there’s been a renaissance in tools and techniques for building software.  “Unit Test” frameworks like JUnit for Java, CXXTest for C++ and ASUnit for Adobe Flash have appeared.  Methodologies such as “Test-Driven Development” and “Extreme Programming” have tried to bring testing to the forefront.  And tools for continuously building software behind the scenes have really matured.  See CruiseControl, Tinderbox, and Hudson, among others.

With all these tools available, you might assume the quality of software has gotten better.  Well, in some cases it has, but in most it hasn’t.  The tools alone are not sufficient; for them to be at all useful, software developers need a fundamental shift in how they think about Quality.

They need to treat Quality as a Feature.

Quality is a Feature.

I get excited about Features.  Really I do.  I’d rather work on adding a super-fast YUV-image mixer to Xuggler1 than spend 8 hours running through test plans.  At previous companies, when I was asked to “assist QA” or “help with testing”, my stomach would fall and I’d lose motivation.  I believed quality was important, but clicking buttons and checking off boxes was so boring.

But at Xuggle we decided on the first day to do something different; we decided to treat Quality as a Feature.  What that means is that we budgeted time to build quality.  We made it an engineering challenge.  We spent time on our build system, and our test system, as features, not as “minimal infrastructure”.  We didn’t follow the Test-Driven-Development rule that you write your tests first2, but we did insist that if we built a feature, we also shipped a few tests that made sure it seemed to work.  Pretty quickly writing the tests became fun — because the software we’re building is VERY complex, and a quick test gave us that immediate joy of “holy shit — it works!”, even when we knew we were months away from actually seeing a video on screen.

But about six months in we realized something — we’d developed so many tests that it would take a few minutes to run them all.  And we’re lazy and impatient, and that meant we sometimes skipped the “run all the tests” step.  And for a few weeks we found ourselves chasing bugs we’d added “3 days ago”.

This was a problem, but fortunately, there was an engineering solution.

Continuous Quality.

I knew we should run the test suite before every check-in.  We meant to, honest.  Best intentions, really.  But I’ve been around this industry long enough to know that you should never depend on engineers always running their tests — even if you’re the engineer.  Better to hire someone to do that for them.  And fortunately, there’s lots of software you can “hire” to do the job for you.

The concept behind “Continuous Building” is exactly what it sounds like: on every checkin (and sometimes more often) a computer checks out all the software and attempts to build and test it.  It then e-mails you if and when it runs into problems.

So we knew we needed a software solution.  But which one?

What is Hudson?

I thought it would take us a few days to find the right answer.  I’m sure other software houses would do a nice multi-party comparison, check for features, assess their future needs, and then select a leading candidate to evaluate.  Instead, we had one criterion: spend no more than 30 minutes picking a solution, and change it later if it doesn’t work.

Under that criterion, you can’t pick CruiseControl.  You can’t pick Tinderbox.  Hudson, on the other hand — well, we had Hudson up and running in TEN MINUTES!

The night we got it up and running I said to Robert, “this will do for now, but we’ll probably need something better in a few weeks…”

Fast forward 8 months, and where are we?

Hudson and Xuggler

We just finished our third major reconfiguration of our build system, and amazingly we’re still using Hudson.  Here are some of the things our build server does with Hudson on every Xuggler check-in:

  • Builds a project containing over 250,000 lines of assembly, C++, Perl, Java and shell scripts, on 3 different slaves (32-bit Linux, 64-bit Linux, and 32-bit Windows).
  • Displays change-logs for each build, linking back to the actual check-in comments and (at least for Xuggler check-ins) links to a side-by-side diff of the change.
  • Automatically builds all dependent software, including integrating in the latest FFmpeg, an incredibly complex software project.
  • Reports on all build errors and warnings, with thresholds for when to fail a build.
  • Runs a robust CXXTest-based C++ test suite and reports errors.
  • Runs Valgrind, an assembly-level memory-leak and error-detection tool, on all successful candidates, and reports errors (failing the build if it finds ANY).
  • Runs over 500 JUnit unit and integration tests, and aggregates all the results in easy to parse trend graphs.
  • Automatically promotes builds to a “stable” build if they pass all tests on all operating systems.
  • Automatically produces Windows installers, and Linux source-code bundles.
  • Provides easy to use Dashboards for developers to see quickly what builds are failing.
  • Sends e-mail if a build breaks (can’t compile) or becomes unstable (not all tests pass).

That’s a lot!  But to put it in perspective, I estimate we’ve only had to spend 8 hours total configuring Hudson to make all that happen (some of the build system work took a little more time, but that’s not Hudson’s fault).

What did we get for that investment?  Well, when we shipped the very first version of Xuggler (something called 1.14.RC1), it had a very high quality bar for a virgin open source project.  And now that we’re at 3.0, Hudson helps us make our Quality Feature stronger and stronger with each release.

We picked Hudson at first because it was simple to start with.  We’re still with Hudson because it’s grown with us, through plugins, and as a result it still meets 90% of our needs.  I’m not saying it’s perfect, but hell, it’s pretty close.

Why Philosophy Matters

Continuous Builders have been around for a while now, and as I mentioned above, they haven’t automatically made quality higher.  A project with a continuous build system is, in my experience, likely to have a higher quality bar than average, but there is no guarantee.  And a project without a continuous build system can have an extremely high quality bar.

In fact, FFmpeg is a great example of that.  Until relatively recently the 8-year-old project had no continuous build system; Mike Melanson put together his own system, FATE, about a year ago, but before that they had nothing.  Yet the FFmpeg project treats Quality not just as a Feature.  They treat it as THE MOST IMPORTANT feature.

The reality is it’s the philosophy of the project team that matters the most.  If the team puts quality first (like FFmpeg does), then a continuous build system will make quality even stronger.  If the team puts features first and quality last, the continuous build system will happily produce error reports that are ignored, and quality will not get stronger.

Hudson and Red5

To illustrate how much philosophy matters, let’s look at the second project we deployed Hudson on.  Last October we had several important Red5-based demos go awry because the Red5 project team decided to add new features and break backwards compatibility with no input from the community and no advance warning (for example, the logging system changed completely between 0.6 and 0.7).  We wanted to stay on the Red5 tip of tree, but we were quickly coming to the conclusion that Red5 just wasn’t reliable enough for our needs, and that we should use something else.

Instead, I decided to see if I could help the Red5 project become more reliable by creating a continuous build infrastructure for it.  The Red5 project was extremely open to the idea, so I did.  The Red5 Hudson-based continuous build server was set up in December, and it does the following:

  • Builds a project containing Java and shell scripts, on 2 different slaves (32-bit and 64-bit Linux).
  • Displays change-logs for each build, linking back to the actual check-in comments and links to a side-by-side diff of the change.
  • Automatically downloads all dependent software for each build.
  • Reports on all build errors and warnings, with thresholds for when to fail a build.
  • Runs over 150 JUnit unit and integration tests, and aggregates all the results in easy to parse trend graphs.
  • Runs a self-contained headless Flash-based system test that makes sure a web browser can stream video from Red5, pause the video, record video, and that Red5 can handle the load of many connections and disconnections (using ASUnit and Hudson’s XVNC plugin).
  • Automatically promotes builds to a “stable” build if they pass all tests on all operating systems.
  • Provides easy to use Dashboards for developers to see quickly what builds are failing.
  • Sends e-mail if a build breaks (can’t compile) or becomes unstable (not all tests pass).

The result was that (a) I got invited to join the Red5 team, and (b) together with the Red5 team I was able to get Red5 0.8.RC2 to a pretty high quality bar, assured by the tests that what we were shipping actually worked.  Red5 0.8 (just released two weeks ago) is much more thoroughly tested than any prior release.

But the Red5 project team, unlike the FFmpeg team, explicitly does not treat quality as the most important feature.  Red5 has not yet reached “1.0” in the eyes of the core development team, and “quality” is actually an item on the roadmap for the 1.0 release (seriously).  New features are in general added without new tests.  If a Red5 auto-build breaks, I usually get an e-mail asking “why am I getting spam about the failed build?”, instead of the developer actually looking at the log to see what failed and fixing their code (which, for the main builds, has been the actual problem 100% of the times I was asked).  Part of this is because the system-test builds can sometimes fail if our build server is overloaded (see the next section), but part of it is that the culture of that project is Features-First.

I’m not saying that “Features-First” means a bad project — Red5 is one of the best open-source projects out there (and has over the last 6 months become much more quality-focused).  It just means that any investment you make in Quality tools (like Hudson) matters way less than the culture you have on your project team.  If you have a great quality culture, like FFmpeg does, you can ship quality software with just a C compiler and make.  If you view quality as an afterthought to be done later, then Hudson can’t help you.

Hudson and Your Project

Based on all that, I thought it might help people who are new to Hudson to see some best practices we think work well.

Don’t Use Hudson

If your project or team thinks Hudson will solve the “quality problem” for them, then don’t use Hudson.  If your team thinks Hudson means they don’t need to run unit tests themselves, then don’t use Hudson.  Instead, focus on building a culture on your team where Quality is considered a feature.  Then, consider using Hudson as a way to help build the Quality feature.

Measure Warnings and Ban Them

There is a plugin for Hudson that measures warnings in your code.  Use it.  Better yet, make your build system fail if anything causes a warning (for example, compile with javac’s -Xlint:all and -Werror, or gcc’s -Wall and -Werror).  This will drastically increase your odds of catching errors (especially in C and C++ code) when you move to different operating systems.  And don’t forget to check for JavaDoc warnings — your users depend on your Docs, even if you don’t.

All Test Graphs Should Be Republican

In terms of metrics, keep reminding and rewarding your engineers for Hudson JUnit test trends that get steadily more Republican: High and to the Right.  That is a good sign that your developers are actually treating testing as a feature, not as an afterthought.  To see what I mean, compare the Red5 test trends with the Xuggler test trends.

One Master To Rule Them All

Originally we had a separate Hudson server for each build-machine type, but now we use one master with slaves on the other operating systems.  Windows still needs some work (we have to start our Windows slave manually), but if you set up slaves, they will automatically upload their results (like the Windows installer we build for Xuggler) to one place, which makes things easier for your users.

Keep A Limited Number Of Builds

Hudson starts to fail in very non-obvious ways as the disk fills up, and it doesn’t always warn you.  It’ll get better over time, but for now use the Disk Usage plug-in, and keep a limited number of builds around.  This has bitten us more often than we’d like to admit.

Backup Early and Often

Hudson’s UI is relatively easy to use, but you’ll quickly get frustrated if you lose all your settings and have to recreate them from scratch, especially as your number of jobs increases.  So back up early and often.  Here’s a script (run from the HUDSON_HOME directory) that makes a configuration backup:

 find . -maxdepth 3 -name config.xml | xargs tar -czvf build.xuggle.com-20090614.tar.gz

What’s In A Name

Think about a convention for job names.  As your machines and jobs multiply, you’ll really want your naming convention to grow with you.  Here’s what we use:

 <organization>_<main language>_<project>_<language version>_<cpu>_<os>[_<optional descriptor>]

So a job following that pattern is Xuggle’s java build of the Xuggler project, using JDK 1.5, on an i386 processor running Ubuntu; and the same name with an optional descriptor appended is the variant of that job that also runs our exhaustive memory leak test.

False Positives are True Problems

We see this in our Red5 build.  Our system tests will sometimes fail due to non-reproducible startup/shutdown delays and timing.  They work most of the time, but about 20% of all builds fail due to these issues.  As a result, the developers don’t pay attention to them.

Don’t blame the developers — false positives are bugs in your test system3.   You should work hard to minimize them.
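Footnote 3’s heuristic — only believe a flaky system test once it fails a few times in a row — can be sketched in plain Java.  The class name and threshold are hypothetical, and this is glue you would write around your own tests, not something Hudson provides out of the box:

```java
// Sketch: treat a test as "really failing" only after N consecutive
// failures, so one-off timing flakes don't page anyone.
public class FlakyTestGate {
  private final int threshold;
  private int consecutiveFailures = 0;

  public FlakyTestGate(int threshold) {
    this.threshold = threshold;
  }

  /** Record one run of the suite; returns true when we should raise the alarm. */
  public boolean record(boolean passed) {
    if (passed) {
      consecutiveFailures = 0; // any pass resets the streak
      return false;
    }
    consecutiveFailures++;
    return consecutiveFailures >= threshold;
  }

  public static void main(String[] args) {
    FlakyTestGate gate = new FlakyTestGate(3);
    System.out.println(gate.record(false)); // flake 1: false
    System.out.println(gate.record(false)); // flake 2: false
    System.out.println(gate.record(false)); // third in a row: true
  }
}
```

With a threshold of 3, two flaky failures in a row stay quiet, but a third consecutive failure — which in our experience has always meant a real bug — raises the alarm.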

I hope this was helpful,

– Art

  1. which, by the way, we’d add if folks were interested 🙂
  2. I believe that TDD works well if you know EXACTLY what you’re going to build, but falls down if you don’t yet know what you’re trying to build.  Put another way, during the innovation phase, I think TDD’s value is limited.
  3. Unfortunately we don’t have the cycles right now to fix all of those timing issues, and have elected to leave the system tests up because if they fail more than 2 times in a row, it’s always been because they found an actual bug.

Introducing Xuggler 3.0

June 5, 2009

Since 2.0 was a major release, the original plan was for 2.1 to be an “only bug fixes” release.

Screw that!

Here comes Xuggler 3.0! Here’s some of the new features you can try out:

  • A simpler API for decoding and encoding: MediaTool.  See the new tutorials; we think it rocks.
  • New MemoryModel technology that can speed up Xuggler programs by over 40%.
  • Improved scaling for multi-threaded programs, including the ability to interrupt blocking calls.
  • Seamless Java IO support (e.g. InputStream, and OutputStream).
  • A simpler API for getting access to video, audio and packet memory.
  • The ability to query which formats and codecs Xuggler supports.
  • … and more!

As usual, read the release notes for details.

– Art & Robert

Introduction To Xuggler MediaTools

June 5, 2009

(Eclipse Users:  See here for how to set up Eclipse to write Xuggler programs.)

The MediaTools are a collection of Java classes that make it even easier to decode, encode, modify and use video with Xuggler.  We’ve put together some videos showing what you can do with the API.  And for those people who prefer the harder way, don’t worry, the existing Xuggler API is still fully supported (and in fact accessible from a MediaTool).  Enjoy!
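For a flavor of what the videos show, here’s a minimal transcoding sketch (the file names are hypothetical, and the factory and class names may differ slightly between MediaTool releases, so check the javadoc for your version):

```java
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.ToolFactory;

public class TranscodeSketch {
  public static void main(String[] args) {
    // A reader decodes packets from the input file...
    IMediaReader reader = ToolFactory.makeReader("input.mov");

    // ...and a writer, attached as a listener, re-encodes
    // everything the reader decodes into the output file.
    reader.addListener(ToolFactory.makeWriter("output.flv", reader));

    // Pump packets until the reader hits end-of-file.
    while (reader.readPacket() == null)
      ;
  }
}
```

The lower-level IStreamCoder API is still there underneath when you need finer control.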

If you prefer text-based tutorials, click here.

Part 1: Decoding & Encoding Media

Part 2: Modifying & Creating Media

For Reading At Home

To go along with the movies, you can now read the companion novel showing even more cool things you can do with MediaTools (like making thumbnails of a video, using a webcam, or capturing screenshots).  You can find that tutorial here.