Just a photo…

23rd January, 2015

Misty and cloudy field

Firefox HTTP cache v1 API disabled

6th June, 2014

Recently we landed the new HTTP cache for Firefox ("cache2") on mozilla-central.  It has been in nightly builds for a while now and seems very likely to stick on the tree and ship in Firefox 32.

Given the positive data we have so far, we’re taking another step today toward making the new cache official: we have disabled the old APIs for accessing the HTTP cache, so add-ons will now need to use the cache2 APIs.  One important benefit is that the cache2 APIs are more efficient and never block on the main thread.  The other benefit is that the old cache APIs were no longer pointing at actual data (it’s in cache2) :)

This means that the following interfaces are now no longer supported:

  •   nsICache
  •   nsICacheService
  •   nsICacheSession
  •   nsICacheEntryDescriptor
  •   nsICacheListener
  •   nsICacheVisitor

(Note: for now nsICacheService can still be obtained: however, calling any of its methods will throw NS_ERROR_NOT_IMPLEMENTED.)

Access to previously stored cache sessions is no longer possible, and the update also causes a one-time deletion of old cache data from users’ disks.

Going forward, add-ons must instead use the cache2 equivalents:

  •   nsICacheStorageService
  •   nsICacheStorage
  •   nsICacheEntry
  •   nsICacheStorageVisitor
  •   nsICacheEntryDoomCallback
  •   nsICacheEntryOpenCallback

Below are some examples of how to migrate code from the old to the new cache API.  See the new HTTP cache v2 documentation for more details.

The new cache2 implementation gets rid of some of the terrible features of the old cache (frequent total data loss, main thread jank during I/O), and significantly improves page load performance.  We apologize for the inconvenience of having to upgrade to a new API, but we hope the performance benefits outweigh it in the long run.

Example of the cache v1 code (now obsolete) for opening a cache entry:

var cacheService = Components.classes["@mozilla.org/network/cache-service;1"]
  .getService(Components.interfaces.nsICacheService);
var session = cacheService.createSession(
  "HTTP", Components.interfaces.nsICache.STORE_ANYWHERE, true);
session.asyncOpenCacheEntry(
  "http://foo.com/", // the cache key, an example URL
  Components.interfaces.nsICache.ACCESS_READ_WRITE,
  {
    onCacheEntryAvailable: function (entry, access, status) {
      // And here is the cache v1 entry
    }
  });

Example of the cache v2 code doing the same thing:

const Ci = Components.interfaces;

let {LoadContextInfo} = Components.utils.import(
  "resource://gre/modules/LoadContextInfo.jsm", {});
let {PrivateBrowsingUtils} = Components.utils.import(
  "resource://gre/modules/PrivateBrowsingUtils.jsm", {});

var cacheService = Components.classes["@mozilla.org/netwerk/cache-storage-service;1"]
  .getService(Ci.nsICacheStorageService);

var storage = cacheService.diskCacheStorage(
  // Note: make sure |window| is the window you want
  LoadContextInfo.fromLoadContext(
    PrivateBrowsingUtils.privacyContextFromWindow(window, false), false),
  false);

storage.asyncOpenURI(
  uri, "", // |uri| is an nsIURI of the resource being looked up
  Ci.nsICacheStorage.OPEN_NORMALLY,
  {
    onCacheEntryCheck: function (entry, appcache) {
      return Ci.nsICacheEntryOpenCallback.ENTRY_WANTED;
    },
    onCacheEntryAvailable: function (entry, isnew, appcache, status) {
      // And here is the cache v2 entry
    }
  });


There are a lot of similarities: instead of a cache session we now have a cache storage with a similar meaning – it represents a distinct space within the whole cache – but it is less generic than before, so it can no longer be misused.  There is now a mandatory argument when getting a storage: an nsILoadContextInfo object that distinguishes whether the cache entry belongs to a Private Browsing context, to an anonymous load, or to an app (by App ID).

(Credits to Jason Duell for help with this blog post)

NGC 7000, NGC 6974, IC 1318 and surroundings + IR

30th May, 2014

NGC 7000, NGC 6974, IC 1318

NGC 7000, NGC 6974, IC 1318 + Infrared


Two almost forgotten photos from a location south of Prague, taken last summer during the night of June 16th to 17th.  A very short night – the sun did not definitively set until almost eleven, and after two it was already getting light again.  On the other hand, there were plenty of wild dogs and boars in the surrounding tall grass :)


The upper photograph is visible light only; the lower one has a blue overlay from the IR band above 742 nm.  The quality is admittedly poor – each is based on just a single exposure – but I like it.


Canon 30D, MC mod
Canon EF 35mm/F2
HEQ5, this time polar-aligned using the drift method
Astronomik CLS-CCD: 1x600s @ F4.0, ISO 1000
Astronomik ProPlanet IR 742: 1x300s @ F4.0, ISO 1000
0x Flat/Dark/Bias
Processing in Camera Raw and Photoshop

Headless Fedora 20 and VNC with autologin

30th May, 2014

“Oh no! Something has gone wrong” is all you get when you VNC to GNOME 3 on Fedora 20 on a box with no physical monitor attached to any of the video outputs, with autologin and screen sharing (vino) enabled.  There is an error in /var/log/messages: ‘TypeError: this.primaryMonitor is undefined’ at /usr/share/gnome-shell/js/ui/layout.js:410.  I haven’t found an open Fedora bug for this.

Nor can you simply configure e.g. tigervnc, because of two other bugs, one closed and one open, that prevent the login screen from accepting the password – as if somebody were pressing the Enter key over and over.

I was not able to find a straight and simple fix until I hit this solution for Ubuntu and ported it to Fedora 20:

  • #yum install xorg-x11-drv-dummy
  • put this content to /etc/X11/xorg.conf (you will probably need to create the file):

Section "Monitor"
    Identifier "Monitor0"
    HorizSync 28.0-80.0
    VertRefresh 48.0-75.0
    Modeline "1280x800"  83.46  1280 1344 1480 1680  800 801 804 828 -HSync +Vsync
EndSection

Section "Device"
    Identifier "Card0"
    Driver "dummy"
    Option "NoDDC" "true"
    Option "IgnoreEDID" "true"
EndSection

Section "Screen"
    DefaultDepth 24
    Identifier "Screen0"
    Device "Card0"
    Monitor "Monitor0"
    SubSection "Display"
        Depth 24
        Modes "1280x800"
    EndSubSection
EndSection

You can then VNC to display :0 and you will be logged in directly, without needing to enter the user password.  I suggest SSH tunneling.
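A tunnel could look like this (the user and host names are placeholders, and 5900 assumes the default VNC display :0 port):

```shell
# Forward local port 5900 to the VNC server on the headless box,
# so the VNC traffic travels inside SSH instead of in the clear.
ssh -L 5900:localhost:5900 user@headless-box

# Then point the VNC client at the local end of the tunnel,
# e.g.:  vncviewer localhost:5900
```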


New Firefox HTTP cache now enabled on Nightly builds

19th May, 2014

Yes, it’s on!  After a little more than a year of development by me and Michal Novotný, all bugs we could find in our labs, offices and homes were fixed.  The new cache back-end is now enabled on Firefox Nightly builds as of version 32 and should stay that way.

The old cache data are for now left on disk, but we have the means to remove them automatically from users’ machines so the now-dead data don’t waste space.  This will happen after we confirm the new cache sticks on Nightlies.

The new HTTP cache back end brings many improvements: request prioritization optimized for first-paint time, read-ahead data preloading to speed up load of large content, delayed writes that don’t block first paint, a pool of most recently used response headers allowing 0ms decisions on reuse or re-validation of a cached payload, 0ms miss-time look-up via an index, smarter eviction policies using a frecency algorithm, resilience to crashes, and zero main-thread hangs or jank.  It also eats less memory, although this may change: my manual measurements with my favorite microSD card suggest that keeping at least the data of html, css and js files critical for rendering in memory may be wise.  More research to come.

Thanks to everyone helping with this effort.  Namely Joel Maher and Avi Halachmi for helping to chase down Talos regressions and JW Wang for helping to find cause of one particular hard to analyze test failure spike.  And also all early adopters who helped to find and fix bugs.  Thanks!


New preferences to play with:


  • Number of kB we reserve for keeping recently loaded cache entries’ metadata (i.e. response headers etc.) for quick access and re-validation or reuse decisions.  By default this is 250 kB.
  • Number of data chunks we always preload ahead of a read to speed up load of larger content like images.  Currently the size of one chunk is 256 kB, and by default we preload 4 chunks – i.e. 1 MB of data in advance.


Load time comparison:

Since these tests are pretty time consuming and usually not very precise, I only tested with page 2 of my blog, which links some 460 images.  The media storage devices available were an internal SSD, an SDHC card, and a very slow microSD via a USB reader, all on a Windows 7 box.


[ complete page load time / first paint time ]

Internal SSD
Cache version | First visit  | Cold go to 1) | Warm go to 2) | Reload
cache v1      | 7.4s / 450ms | 880ms / 440ms | 510ms / 355ms | 5s / 430ms
cache v2      | 6.4s / 445ms | 610ms / 470ms | 470ms / 360ms | 5s / 440ms


Class 10 SDHC
Cache version | First visit  | Cold go to 1) | Warm go to 2)  | Reload
cache v1      | 7.4s / 635ms | 760ms / 480ms | 545ms / 365ms  | 5s / 430ms
cache v2      | 6.4s / 485ms | 1.3s / 450ms  | 530ms / 400ms* | 5.1s / 460ms*


Edit: I found one more place to optimize – preloading data sooner when an entry has already been used during the browser session (bug 1013587).  We win around 100ms on both first-paint and load times!  Also the stddev of first-paint time is smaller (36) than before (80).  I have also measured the load time for the non-patched cache v2 code more precisely; it’s actually better.

Slow microSD
Cache version               | First visit  | Cold go to 1) | Warm go to 2)  | Reload
cache v1                    | 13s / 1.4s   | 1.1s / 540ms  | 560ms / 440ms  | 5.1s / 430ms
cache v2                    | 6.4s / 450ms | 1.7s / 450ms  | 710ms / 540ms* | 5.4s / 470ms*
cache v2 (with bug 1013587) | -            | -             | 615ms / 455ms* | -

* We are not keeping any data in memory (bug 975367 and bug 986179), which seems to be too restrictive.  Some in-memory data caching will be needed.


“Jankiness” comparison:

The way I measured browser UI jank (those hangs when everything is frozen) was very simple: summing every stall of the browser UI longer than 100ms between pressing Enter and the end of the page load.
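The idea can be sketched like this (a hypothetical event-loop based variant, not the actual instrumentation I used): a frequently scheduled timer attributes any gap above 100ms to jank.

```javascript
// Sketch: sample the event loop every 10 ms; any gap longer than
// 100 ms means the thread was stuck, so add it to the jank total.
function measureJank(durationMs, onDone) {
  let last = Date.now();
  let jank = 0;
  const timer = setInterval(() => {
    const now = Date.now();
    const gap = now - last;
    if (gap > 100) {
      jank += gap; // a stall longer than 100 ms counts as jank
    }
    last = now;
  }, 10);
  setTimeout(() => {
    clearInterval(timer);
    onDone(jank); // total milliseconds spent in stalls
  }, durationMs);
}
```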


[ time of all UI thread events running for more than 100ms each during the page load ]

Internal SSD
Cache version | First visit | Cold go to 1) | Warm go to 2) | Reload
cache v1      | 0ms         | 600ms         | 0ms           | 0ms
cache v2      | 0ms         | 0ms           | 0ms           | 0ms


Class 10 SDHC
Cache version | First visit | Cold go to 1) | Warm go to 2) | Reload
cache v1      | 600ms       | 600ms         | 0ms           | 0ms
cache v2      | 0ms         | 0ms           | 0ms           | 0ms


Slow microSD
Cache version | First visit | Cold go to 1) | Warm go to 2) | Reload
cache v1      | 2500ms      | 740ms         | 0ms           | 0ms
cache v2      | 0ms         | 0ms           | 0ms           | 0ms


All load-time values are medians and jank values are averages, from at least 3 runs with extremes discarded, in an attempt to lower the noise.


1) Open a new tab and navigate to a page right after the Firefox start.

2) Open a new tab and navigate to a page that has already been visited during the browser session.


NTLMv1 and Firefox

26th April, 2014

In Firefox 30, the internal fallback implementation of the NTLM authentication schema, which speaks only NTLMv1, has been disabled by default for security reasons.

If you are experiencing problems with authentication to NTLM or Negotiate HTTP proxies or HTTP servers since Firefox 30, you may need to switch network.negotiate-auth.allow-insecure-ntlm-v1 in about:config to true.
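For example, the same change can be made persistent in a user.js file (the intranet host below is a placeholder):

```javascript
// Re-enable the internal NTLMv1 fallback (weakens security, see above):
user_pref("network.negotiate-auth.allow-insecure-ntlm-v1", true);

// Optionally allow automatic use of system credentials for a trusted host:
user_pref("network.automatic-ntlm-auth.trusted-uris", "http://intranet.example.com");
```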

Firefox (Necko) knows two ways to authenticate to a proxy or a server that requires NTLM or LM authentication:

  •   A system API or library like SSPI, GSSAPI or the ntlm_auth binary; on the Windows platform SSPI is always attempted by default, on non-Windows systems it must be allowed by modifying some of the Firefox preferences, see below
  •   Our own internal NTLM implementation module, currently disabled since it speaks only NTLMv1; we may have plans to implement NTLMv2 in the future

Note that if you are in an environment where the system API can be used, we have no way to influence which NTLM version is used.  It’s fully up to your local system and network settings; Firefox has no control over it.


Preference list influencing NTLM authentication in Firefox

(Note: there is a similar list dedicated to the Negotiate schema)

EDIT: network.negotiate-auth.allow-insecure-ntlm-v1-https
Introduced in Firefox 31 on June 23rd, 2014; enables use of our own internal NTLM implementation module, which speaks only NTLMv1, for connections to HTTPS servers (not proxies). Note: this preference influences both the NTLM and Negotiate authentication schemes.
default: true
true: The internal NTLMv1 module is enabled and used as a fallback for connecting to secure HTTPS servers when system API authentication fails or cannot be used.
false: Usage of the NTLMv1 module is controlled by network.negotiate-auth.allow-insecure-ntlm-v1.
network.negotiate-auth.allow-insecure-ntlm-v1
Introduced in Firefox 30 on April 25th, 2014; disables use of our own internal NTLM implementation module, which speaks only NTLMv1. Note: this preference influences both the NTLM and Negotiate authentication schemes.
default: false
true: The internal NTLMv1 module is enabled and used as a fallback when system API authentication fails or cannot be used.
false: Usage of the NTLMv1 module is hard-disabled; it won’t be used under any circumstances.
network.automatic-ntlm-auth.allow-proxies
Allows use of the system authentication API (e.g. SSPI) when talking to (and only to) a proxy requiring NTLM authentication. This also allows sending the user’s default credentials – i.e. the credentials the user is logged in to the system with – to the proxy automatically, without prompting the user.
default: true
true: The system API (like SSPI) will be used to talk to the proxy; default credentials will be sent to the proxy automatically.
false: Disallows sending default credentials to the proxy. On non-Windows platforms the fallback internal implementation, which is currently disabled, would be used. Hence with this setting you will not be able to authenticate to any NTLM proxy.
network.automatic-ntlm-auth.trusted-uris
This is a list of URLs or schemes you trust to automatically send the system default credentials to, without any prompts, when NTLM authentication is required; the system API like SSPI will be used. On non-Windows platforms, without filling this list you cannot use the system NTLM API to authenticate, and since the internal NTLMv1 is disabled, you will not be able to authenticate at all.
default: an empty string
example: “https://, http://intranet.com/” – this will allow sending the default credentials automatically, without prompts, to any https: host and to any address that starts with http://intranet.com/ – BE CAREFUL HERE.
network.automatic-ntlm-auth.allow-non-fqdn
Influences automatic sending of default system credentials to hosts with non fully qualified domain names (e.g. http://corporate/, http://intranet/).
default: false
true: Allow automatic sending without prompts; this setting is examined before the network.automatic-ntlm-auth.trusted-uris check, so there is no need to list your non-FQDN hosts in that preference string.
false: Automatic sending to non-FQDN hosts is not allowed, although particular hosts can be manually allowed in network.automatic-ntlm-auth.trusted-uris.
network.auth.force-generic-ntlm
Forces use of the internal NTLM implementation in all cases and on all platforms. This effectively bypasses the system API and never sends the default system credentials.
default: false
true: In all cases and on all platforms, only the internal NTLM implementation is used. With network.negotiate-auth.allow-insecure-ntlm-v1 at false this actually completely turns off any attempt to do NTLM authentication to any server or proxy.
false: When the internal NTLM implementation is not disabled via network.negotiate-auth.allow-insecure-ntlm-v1 (it is disabled by default), it is only used when you are not on the Windows platform and the host being connected is neither a proxy, nor an allowed non-FQDN host, nor a listed trusted host.


Disclaimer: NO WARRANTY on how accurate or complete this list is.  I don’t know the Kerberos preferences (if there are any) at all.  I am not the original author of this code; I only occasionally maintain it as part of my HTTP work.

Brendan Eich – what the heck?

2nd April, 2014

Mozilla has a few hundred employees and even more volunteers and outside contributors.  Worldwide.  I don’t know them all in person – and I never will.  I don’t even know Brendan Eich in person.

When you look at this purely statistically, it’s clear that all these people creating Mozilla differ in religion, in opinions on global warming, the occupation of Tibet, human rights in Russia and China, the global economic system, in sexual orientation, or just in favorite color.  And no one says “you don’t like pink?  Then I can’t work with you!”

This is the same with any larger company and actually any group of people that do something as a team.

We all at Mozilla work together as well as we can and create something that I think has an impact on the world, and most of you out there like it.  For instance, one of my managers is a Catholic – strongly believing.  I am an atheist – also very strongly believing.  Do you think it matters?  No!  All I’m interested in is how well anyone I cooperate with does their job – that’s the only thing that matters, at least to me.

I don’t agree with Brendan Eich’s opinion on forbidding gay marriage but I very much respect him as a technical mind.  Period.


(Closed for comments, if you want to say something more on this topic, please say it somewhere else)

Interesting articles:
Reflex – Politická korektnost zabila šéfa americké Mozilly, vyjádřit názor se stává smrtící (“Political correctness killed the head of the American Mozilla; expressing an opinion is becoming deadly” – in Czech)
FAQ on CEO Resignation (on The Mozilla Blog)

Get NSPR log from tryserver run

21st January, 2014

Mozilla tryserver nspr log


Get an NSPR log with your choice of modules from a try run.  So far this works only for mochitests, but I think it can be extended to other test harnesses as well.

Based on advice from Phil Ringnalda in this bug comment.  The feature has just landed on mozilla-central.

  •   Open the file testing/mochitest/runtests.py
  •   Search for NSPR_LOG_MODULES = ""; it should look like this:
    # Set the desired log modules you want an NSPR log be produced by a try run for, or leave blank to disable the feature.
    # This will be passed to NSPR_LOG_MODULES environment variable. Try run will then put a download link for the log file
    # on tbpl.mozilla.org.
  •   Change the empty string "" to a string listing all the modules and levels you want the log produced for
  •   Push this modification to try along with your patch(es)
  •   All produced NSPR logs are then uploaded, in a single zip file per completed run, to amazonaws; you will find the link in the results window at the bottom of tbpl.mozilla.org

Note: doesn’t work for B2G so far, there is no way to upload the logs.
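For example, the modified line in testing/mochitest/runtests.py could look like this (the module list is just an illustration; use whatever modules and levels you need):

```python
# Produce an NSPR log of HTTP and cache activity during the try run.
NSPR_LOG_MODULES = "nsHttp:5,cache2:5"
```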

Building mozilla code directly from Visual Studio IDE

29th November, 2013



Yes, it’s possible!  With a single key press you can build and have a nice list of errors in the Error List window, clickable to get to the bad source code location easily.  It was a fight, but here it is.  Tested with Visual Studio Express 2013 for Windows Desktop, but I believe this all can be adapted to any version of the IDE.


  • Create a shell script, you will (have to) use it every time to start Visual Studio from mozilla-build’s bash prompt:

export MOZ__LIB=$LIB
export MOZ__PATH=$PATH
export MOZ__VSINSTALLDIR=$VSINSTALLDIR
# This is for a standard installation of Visual Studio 2013 Desktop, alter the paths to your desired/installed IDE version
cd "/c/Program Files (x86)/Microsoft Visual Studio 12.0/Common7/IDE/"
./WDExpress.exe &

  • Create a solution ‘mozilla-central’ located at the parent directory where your mozilla-central repository clone resides.  Say you have a structure like C:\Mozilla\mozilla-central, which is the root source folder where you find .hg, configure.in and all the modules’ sub-dirs.  Then C:\Mozilla\ is the parent directory.
  • In that solution, create a Makefile project ‘mozilla-central’, again located at the parent directory.  It will, a bit unexpectedly, be created where you probably want it – in C:\Mozilla\mozilla-central.
  • Let the Build Command Line for this project be (use the multi-line editor to copy & paste: combo-like arrow on the right, then the <Edit…> command):

call "$(MOZ__VSINSTALLDIR)\VC\bin\vcvars32.bat"
set LIB=$(MOZ__LIB)
set MOZCONFIG=c:\optional\path\to\your\custom\mozconfig
cd $(SolutionDir)
python mach --log-no-times build binaries


Now when you make a modification to a C/C++ file just build the ‘mozilla-central’ project to run the great build binaries mach feature and quickly build the changes right from the IDE.  Compilation and link errors as well as warnings will be nicely caught in the Error List.

BE AWARE: There is one problem – when there is a typo/mistake in an exported header file, it’s opened as a new file in the IDE from the _obj/dist/include location.  If you miss that and modify that file, it will be overwritten on the next build!  Using CreateHardLink might deal with this issue.

With these scripts you can use the Visual Studio 2013 IDE but build with any other version of VC++ of your choice.  It’s independent, just run the start-up script from different VS configuration mozilla-build prompt.

I personally also create projects for modules (like /netwerk, /docshell, /dom) I often use.  Just create a Makefile project located at the source root directory with name of the module directory.  The project file will then be located in the module – I know, not really what one would expect.  Switch Solution Explorer for that project to show all files, include them all in the project, and you are done.

A few other tweaks:

  • Assuming you properly use an object dir, change the Output Directory and Intermediate Directory to point e.g. to $(SolutionDir)\<your obj dir>\$(Configuration)\.  The logging and other crap won’t then be created in your source repository.
  •   Add the Visual Studio artifacts, e.g.:

    \.sln$
    \.suo$
    \.sdf$
    \.vcxproj

    to your custom hg ignore file to prevent the Visual Studio project and solution files from interfering with Mercurial.  The same is suggested for git, if you prefer it.


Note: you cannot use this for a clobber build because of an undisclosed Windows-specific Python bug.  See here why.  Do clobber builds from a console, or you may experiment with clobber + configure from a console and then build from the IDE.

QueryPerformanceCounter calibration with GetTickCount

14th November, 2013

In one of my older posts I describe how the Mozilla Platform decides whether this high precision timer function is behaving properly or not.  That algorithm is now obsolete and we have a better one.

The current logic, which has proven stable, uses a faults-per-tolerance-interval algorithm, introduced in bug 836869 – Make QueryPerformanceCounter bad leap detection heuristic smarter.  I decided on this kind of evaluation since the only really critical use of the hi-res timer is for animations and video rendering, where large leaps in time may cause missing frames or jitter during playback.  Faults per interval is a good reflection of the stability we want to ensure in practice.  QueryPerformanceCounter is not perfectly precise all the time when calibrated against GetTickCount, while that doesn’t always need to be considered faulty behavior of the QueryPerformanceCounter result.

The improved algorithm

There is no need for a calibration thread or calibration code, nor for any global skew monitoring.  Everything is self-contained.

As the first measure, we consider QueryPerformanceCounter stable when the TSC is stable, meaning it runs at a constant rate during all ACPI power saving states [see the HasStableTSC function].

When TSC is not stable or its status is unknown, we must use the controlling mechanism.

Definable properties

  • what is the number of failures we are willing to tolerate during an interval, set at 4
  • the fault-free interval, we use 5 seconds
  • a threshold that is considered a large enough skew for indicating a failure, currently 50ms

Fault-counter logic outline

  • keep an absolute time checkpoint that shifts to the future by one fault-free interval duration with every failure; base it on GetTickCount
  • each call to Now() produces a timestamp that records the values of both QueryPerformanceCounter (QPC) and GetTickCount (GTC)
  • when two timestamps (T1 and T2) are subtracted to get the duration, the following math happens:
    • deltaQPC = T1.QPC – T2.QPC
    • deltaGTC = T1.GTC – T2.GTC
    • diff = deltaQPC – deltaGTC
    • if diff < 4 * 15.6ms: return deltaQPC ; this cuts off what GetTickCount’s low resolution unfortunately cannot cover
    • overflow = diff – 4 * 15.6ms
    • if overflow < 50ms (the failure threshold): return deltaQPC
    • from now on, the result of the subtraction is only deltaGTC
    • fault counting part:
      • if deltaGTC > 2000ms: return ; we don’t count failures when timestamps are more than 2 seconds apart *)
      • failure-count = max( checkpoint – now, 0 ) / fault-free interval
      • if failure-count > failure tolerance count: disable usage of QueryPerformanceCounter
      • otherwise: checkpoint = now + (failure-count + 1) * fault-free interval
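The outline above can be sketched in a few lines (a hypothetical, simplified model; the names and constants mirror the outline, not the actual TimeStamp_windows.cpp code):

```javascript
const TICK_MS = 15.6;            // GetTickCount resolution
const SLACK_MS = 4 * TICK_MS;    // what GTC's low resolution cannot cover
const THRESHOLD_MS = 50;         // skew considered a failure
const INTERVAL_MS = 5000;        // the fault-free interval
const TOLERANCE = 4;             // failures tolerated per interval

function makeDurationCalculator() {
  let checkpoint = 0;    // absolute GTC-based time, shifted on every failure
  let qpcEnabled = true;

  // t1, t2 are timestamps carrying both clock readings: { qpc, gtc }.
  // |now| is the current GTC-based absolute time.
  return function duration(t1, t2, now) {
    const deltaQPC = t1.qpc - t2.qpc;
    const deltaGTC = t1.gtc - t2.gtc;
    if (!qpcEnabled) return { ms: deltaGTC, qpcEnabled };

    const diff = Math.abs(deltaQPC - deltaGTC);
    if (diff < SLACK_MS) return { ms: deltaQPC, qpcEnabled };
    if (diff - SLACK_MS < THRESHOLD_MS) return { ms: deltaQPC, qpcEnabled };

    // From here on the result is deltaGTC; count the failure, unless the
    // timestamps are more than 2 seconds apart.
    if (deltaGTC <= 2000) {
      const failures = Math.max(checkpoint - now, 0) / INTERVAL_MS;
      if (failures > TOLERANCE) qpcEnabled = false;
      else checkpoint = now + (failures + 1) * INTERVAL_MS;
    }
    return { ms: deltaGTC, qpcEnabled };
  };
}
```

Note how the checkpoint shifting to the future makes rapid failures accumulate, while failures spread further apart than the fault-free interval keep resetting the count.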


You can check the code by looking at TimeStamp_windows.cpp directly.


I’m personally quite happy with this algorithm.  So far, no issues with redraw after wake-up, even on exotic or older configurations.  Video plays smoothly, and we have hi-res timing for telemetry and logging where possible.

*) The reason is to omit unexpected QueryPerformanceCounter leaps from failure counting when a machine is suspended, even for a short period of time.
