Moz logging (formerly NSPR logging) now has a log file size limit option

There are a lot of cases, mainly networking issues, rare enough to reproduce that users have to run their Firefox for long hours to hit the problem. Logging is great at bringing us the information once the problem finally reproduces, but after a few hours the log file can be – well – huge. Easily even gigabytes.

But now we have a size limit option, and it's simple to use:

Adding the rotate module to the list of modules engages the log file size limit:

MOZ_LOG=rotate:200,log modules...

The argument is the limit in megabytes.

This will produce up to 4 files whose names are appended with a numbering extension: .0, .1, .2, .3. The logging back end cycles through these files while the sum of their sizes never goes over the specified limit.
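For example, to capture HTTP logging with a 200 MB limit (the module name and file path here are purely illustrative):

MOZ_LOG=rotate:200,nsHttp:5 MOZ_LOG_FILE=/tmp/netlog.txt ./mach run

This writes /tmp/netlog.txt.0 through /tmp/netlog.txt.3, which together should never take more than roughly the 200 MB limit.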

The patch has just landed on mozilla-central (version 51), bug 1244306.

Note 1: the file with the largest number is not guaranteed to be the last file written.  We don't move the files, we only cycle through them.  Using the rotate module automatically adds timestamps to the log, so it's always easy to recognize which file holds the most recent data.

Note 2: rotate doesn't support append.  When you specify rotate, all the files (including any previous non-rotated log file) are deleted on every start to avoid any mixture of information.  The append module, if specified, is then ignored.

Illusion of atomic reference counting

Regardless of whether an object's reference counter is atomic, there is one major problem when a single RefPtr holding the object is re-assigned and read concurrently.

I'll explain with a simple example. Note that throughout we are in a method of a single object and that, in Mozilla code-base terms, Type has a ThreadSafeAutoRefCnt reference counter:

RefPtr<Type> local = mMember; // mMember is RefPtr<Type>, holding an object

And other piece of code then, on a different thread:

mMember = new Type(); // mMember's value is rewritten with a new object

Usually, people believe this is perfectly safe. But it's far from it. Just break this down into the actual atomic operations and put the two threads side by side:

Thread 1                                 Thread 2

local.value = mMember.value;             .
/* context switch */                     .
.                                        Type* temporary = new Type();
.                                        temporary->AddRef();
.                                        Type* old = mMember.value;
.                                        mMember.value = temporary;
.                                        old->Release();
.                                        /* context switch */
local.value->AddRef();                   .

The same applies to clearing a member (or a global, while we are at it) while some other thread may try to grab a reference to it:

RefPtr<Type> service = sService; // sService is a RefPtr
if (!service) return; // service being null is our 'after shutdown' flag

And another thread doing, usually during shutdown:

sService = nullptr; // while sService was holding an object

And here is what actually happens:

Thread 1                                 Thread 2

local.value = sService.value;            .
/* context switch */                     .
.                                        Type* old = sService.value;
.                                        sService.value = nullptr;
.                                        old->Release();
.                                        /* context switch */
local.value->AddRef();                   .

And where is the problem? Clearly, if the Release() call on the second thread is the last one for the object, the AddRef() on the first thread will do its job on a dying or already dead object, not to mention the further access through a bad pointer.

The only correct way is to have both the in and out assignments protected by a mutex, or to ensure that no one can be trying to grab a reference from a globally accessed RefPtr at the moment it's finally released or just re-assigned. The latter may not always be easy or even possible. The mutex approach is sketched below.
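Here is a minimal sketch of the mutex approach in Mozilla code-base terms. The Holder class, its methods and the lock name are illustrative, not an existing API; Type is the refcounted class from the examples above:

#include "mozilla/Mutex.h"
#include "mozilla/RefPtr.h"

class Holder
{
public:
  Holder() : mLock("Holder::mLock") {}

  // Safe read: copying the raw pointer and the AddRef happen
  // atomically with respect to Set(), under the same lock.
  already_AddRefed<Type> Get()
  {
    mozilla::MutexAutoLock lock(mLock);
    RefPtr<Type> result = mMember; // AddRef happens under the lock
    return result.forget();
  }

  // Safe write: the exchange happens under the lock, so no reader can be
  // caught between reading the raw pointer and calling AddRef on it.
  void Set(Type* aObject)
  {
    RefPtr<Type> old; // holds the previous object; released outside the lock
    {
      mozilla::MutexAutoLock lock(mLock);
      old.swap(mMember);
      mMember = aObject;
    }
  }

private:
  mozilla::Mutex mLock;
  RefPtr<Type> mMember;
};

Note that Set() deliberately moves the old reference out and lets it release after the lock is dropped; if the last Release ran under the lock and Type's destructor re-entered the holder, it would deadlock.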

Anyway, if somebody knows a way to solve this universally without using an additional lock, I would be really interested!

Backtrack meets Gecko Profiler

Backtrack is an upcoming performance tool, focused on revealing and solving scheduling and delay problems.  Those are big performance offenders, very hard to track down, and hidden from conventional profilers.

To find out how long it takes and everything that has to happen to reach a certain point – an objective – just add a simple instrumentation marker.  When hit during a run, it's added to a list you can then pick from and start tracing back to its origin.  Backtrack follows the selected objective back to the originating user input event that started the whole processing chain.

The walk-back crosses runnables and their wait time in thread event queues, but also network requests and responses, code-specific queues such as DOM mutations, scheduled reflows or background JS parsing 1), monitor and condvar notifications, mutex acquisitions 2), and disk I/O operations.

Visually, the result is a single timeline – we can call it a critical path – revealing wait, network and CPU times as distinct intervals involved in reaching solely the selected objective.  Spotting dispatch wait delays in particular is then very easy.  Most important and new is that Backtrack tells you which other operations or events block the critical path (make it wait) and where they have been scheduled from.  And more importantly, it recognizes which of them are (or are not) related to reaching the selected objective.  Those not related are then clear candidates for rescheduling.

To distinguish related from unrelated operations, Backtrack captures all sub-tasks that are involved in reaching the selected objective.  A good example is the page first paint time – actually the unsuppression of painting.  First paint is blocked by loading more than one resource: the HTML and the CSS and JS referenced from its head.  These loads and their processing – the sub-tasks – happen in parallel, and only the completion of all of them unsuppresses painting (said in a very simplified way, of course).  Each such sub-task's completion is marked with added instrumentation.  That creates a list of sub-objectives that are then added to the whole picture.

[Screenshot: Backtrack integrated into the Gecko Profiler Cleopatra web UI]

Future improvements:

  • Backtrack could be used in our performance automation.  Besides calculating the time between an objective and its input source event, it can also break that time down into CPU time vs dispatch delays vs network response time.  It could also filter out code paths clean of any outer jitter.
  • Indeed, networking has a strong influence on load times.  Adding a more detailed breakdown and analysis of how well we schedule and allocate network resources is one of the next steps.
  • Adding PCAPs, or even letting Backtrack capture network activity Wireshark-style directly from inside Firefox and joining it with the Gecko Profiler UI, might help too.

Backtrack is still a work in active progress and is not yet available to users of the Gecko Profiler.  There are patches for Gecko, but also for the Cleopatra UI and the Gecko Profiler add-on.  The UI changes, where the analysis also happens, are mostly prototype-like and need a cleanup.  There are also problems with larger memory consumption and a bigger chance of hitting OOMs when processing data captured with Backtrack markers.


1) code-specific queues need to be manually instrumented
2) with the ability to follow to the thread that was holding the mutex for the time you were waiting to acquire it

Filter errors out of a Mozilla build on the command line

[Screenshot: Mozilla build errors listed once again under the build log]

I wrote a small filter script that lists all build errors once again at the bottom of the whole build log, so that you don't have to look for them like a needle in a haystack. It's something I wanted mach to do natively, but I was always told something like "filter yourself".  So here it is :)

  • Download this small script *)
  • Copy it to your source directory or somewhere your $PATH points to
  • Run mach as: ./mach build | err

When there are errors during the build, they will be listed under the build log and conveniently highlighted.

*) It's tuned for and mainly targets mingw, but might well work on Linux/OSX too.
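The script itself isn't reproduced here, but the idea is simple enough to sketch. A minimal illustration in C++ (the error-matching pattern is made up for the example; the real script is tuned to the actual compiler output):

  #include <iostream>
  #include <string>
  #include <vector>

  int main()
  {
    std::vector<std::string> errors;
    std::string line;

    while (std::getline(std::cin, line)) {
      std::cout << line << '\n'; // pass the build log through unchanged
      if (line.find("error") != std::string::npos) {
        errors.push_back(line); // remember every line mentioning an error
      }
    }

    // Repeat all collected error lines once again under the build log
    if (!errors.empty()) {
      std::cout << "\n===== errors =====\n";
      for (const std::string& e : errors) {
        std::cout << e << '\n';
      }
    }
    return 0;
  }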

String parsing made simple with mozilla::Tokenizer

I can see PL_strstr, FindChar, Substring, ToInteger and even atoi, strchr, strstr and sscanf craziness all over the Mozilla code base. There are, though, much better and, more importantly, safer ways to parse even a very simple input.

I wrote a parser class, with an API derived from lexical analyzers, that makes parsing simple inputs very easy. Just include mozilla/Tokenizer.h and use the class mozilla::Tokenizer. It implements a subset of the features of a lexical analyzer and also nicely hides the boundary checks of the input buffer from the consumer.

To describe the principle briefly: Tokenizer recognizes tokens like whole words, integers, white spaces and special characters.  The consumer never works directly with the string or its characters, only with the pre-parsed parts (identified tokens) returned by this class.

There are two main methods of Tokenizer:

  • bool Next(Token& result);

If there is anything to read from the input at the current internal read position, including the EOF, this method returns true and result is filled with the token type and an appropriate value, easily accessible via a simple variant-like API.  The internal read cursor is shifted to the start of the next token in the input before this method returns.

  • bool Check(const Token& tokenToTest);

If the token at the current internal read position is equal (by type and value) to what has been passed in the tokenToTest argument, true is returned and the internal read cursor is shifted to the next token.  Otherwise (the token is different than expected) false is returned and the read cursor is left unaffected.
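A quick illustration of Check(), assuming the Token::Word factory method; Token::Char is used the same way in the longer example at the end of this post:

  mozilla::Tokenizer p(NS_LITERAL_CSTRING("key=value"));

  if (p.Check(mozilla::Tokenizer::Token::Word(NS_LITERAL_CSTRING("key")))) {
    // the "key" word has been consumed, the read position is now at '='
  }
  if (p.CheckChar('=')) {
    // '=' consumed as well, the read position is now at the start of "value"
  }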

A few usage examples:

Construction

  #include "mozilla/Tokenizer.h"

  mozilla::Tokenizer p(NS_LITERAL_CSTRING("Sample string 2015."));

Reading a single token, examining it

  mozilla::Tokenizer::Token t;
  bool read = p.Next(t);
  // read == true, we have read something and t has been filled
  // Following our example string...
  if (t.Type() == mozilla::Tokenizer::TOKEN_WORD) {
    t.AsString(); // returns "Sample"
  }

Checking on a token value and automatically skipping on a positive test

  if (!p.CheckWhite()) {
    throw "I expect a space here!";
  }

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_WORD;
  t.AsString() == "string";

  if (!p.CheckWhite()) {
    throw "A white space is expected here!";
  }

Reading numbers

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_INTEGER;
  t.AsInteger() == 2015;

Reaching the end of the input

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_CHAR;
  t.AsChar() == '.';

  read = p.Next(t);
  // read == true
  t.Type() == mozilla::Tokenizer::TOKEN_EOF;

  read = p.Next(t);
  // read == false, we are behind the EOF
  // t is here undefined!

More features

To learn the more enhanced features of the Tokenizer – there are not that many, don't be scared ;) – look at the well documented Tokenizer.h file under xpcom/ds.

As a teaser you can go through this more enhanced example or check the gtest for Tokenizer:

#include "mozilla/Tokenizer.h"

using namespace mozilla;

{
  // A simple list of key:value pairs delimited by commas
  nsCString input("message:parse me,result:100");

  // Initialize the parser with an input string
  Tokenizer p(input);
  // A helper var keeping type and value of the token just read
  Tokenizer::Token t;

  // Loop over all tokens in the input
  while (p.Next(t)) {
    if (t.Type() == Tokenizer::TOKEN_WORD) {
      // A 'key' name found
      if (!p.CheckChar(':')) {
        // Must be followed by a colon
        return; // unexpected character
      }

      // Note that here the input read position is just after the colon
      // Now switch by the key string
      if (t.AsString() == "message") {
        // Start grabbing the value
        p.Record();
        // Loop until EOF or comma
        while (p.Next(t) && !t.Equals(Tokenizer::Token::Char(',')))
          ;
        // Claim the result
        nsAutoCString value;
        p.Claim(value);
        MOZ_ASSERT(value == "parse me");

        // We must revert the comma so that the code below recognizes the flow correctly
        p.Rollback();
      } else if (t.AsString() == "result") {
        if (!p.Next(t) || t.Type() != Tokenizer::TOKEN_INTEGER) {
          return; // expected a value and that value must be a number
        }

        // Get the value, here you know it's a valid number
        uint32_t number = t.AsInteger();
        MOZ_ASSERT(number == 100);
      } else {
        // Here t.AsString() is any key but 'message' or 'result', ready to be handled
      }

      // On comma we loop again
      if (p.CheckChar(',')) {
        // Note that now the read position is after the comma
        continue;
      }
      // No comma?  Then only EOF is allowed
      if (p.CheckEOF()) {
        // Cleanly parsed the string
        break;
      }
    }

    return; // The input is not properly formatted
  }
}

 

Tokenizer currently works only with ASCII inputs, but it can easily be enhanced to also support UTF-8/16 encodings or even specific code pages if needed.