Tag Archives: gecko

Things I’ve Learned This Week (April 20 – April 24, 2015)

Short one this week. I must not have learned much! 😀

If you’re using Sublime Text to hack on Firefox or Gecko, make sure it’s not indexing your objdir.

Sublime has this wicked cool feature that lets you quickly search for files within your project folders. On my MBP, the shortcut is Cmd-P. It’s probably something like Ctrl-P on Windows and Linux.

That feature is awesome, because when I need to get to a file, instead of searching the folder hierarchy, I just hit Cmd-P, jam in a few of the characters (they can even be out of order – Sublime does fuzzy matching), and then as soon as my desired file is the top entry, just hit Enter, and BLAM – opened file. It really saves time!

At least, it saves time in theory. I noticed that sometimes, I’d hit Cmd-P, and the UI to enter my search string would take ages to show up. I had no idea why.

Then I noticed that this slowness seemed to show up after I had done a build. My objdir resides beneath my srcdir (as is the default with a mozilla-central checkout), so I figured perhaps Sublime was trying to index all of those binaries and choking on them.

I went to Project > Edit Project, and added this to the configuration file that opened:

{
    "folders":
    [
        {
            "path": "/Users/mikeconley/Projects/mozilla-central",
      "folder_exclude_patterns": ["*.sublime-workspace", "obj-*"]
        }
    ]
}

I added the workspace thing too[1], because I figure it’s unlikely I’ll ever want to open that thing.

Anyhow, after setting that, I restarted Sublime, and everything was crazy-fast. \o/

If you’re using Sublime, and your objdir is under your srcdir, maybe consider adding the same thing. Even if you’re not using Cmd-P, it’ll probably save your machine from needlessly burning cycles indexing stuff.


  1. That’s where Sublime holds my session state for my project. 

Things I’ve Learned This Week (April 6 – April 10, 2015)

It’s possible to synthesize native Cocoa events and dispatch them to your own app

For example, here is where we synthesize native mouse events for OS X. I think this is mostly used for testing when we want to simulate mouse activity.
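That synthesis machinery is also reachable from chrome JavaScript via nsIDOMWindowUtils, which is how tests usually drive it. Here’s a rough sketch (my assumptions: a chrome-privileged scope, and that the native message value maps straight onto Cocoa’s NSEventType, where NSLeftMouseDown is 1 and NSLeftMouseUp is 2):

// Sketch only: synthesize a native left-click at screen coordinates (100, 200).
// The exact signature and semantics may differ between Gecko versions.
let utils = window.QueryInterface(Components.interfaces.nsIInterfaceRequestor)
                  .getInterface(Components.interfaces.nsIDOMWindowUtils);
const NSLeftMouseDown = 1, NSLeftMouseUp = 2; // Cocoa NSEventType values
utils.sendNativeMouseEvent(100, 200, NSLeftMouseDown, 0, document.documentElement);
utils.sendNativeMouseEvent(100, 200, NSLeftMouseUp, 0, document.documentElement);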

Note that if you attempt to replay a queue of synthesized (or cached) native Cocoa events to trackSwipeEventWithOptions, those events might get coalesced and not behave the way you want. mstange and I ran into this while working on this bug to get some basic gesture support working with Nightly+e10s (specifically, the history swiping gesture on OS X).

We were able to determine that OS X was coalescing the events because we grabbed the section of code that implements trackSwipeEventWithOptions and used the Hopper Disassembler to decompile the assembly into some pseudocode. Reading it through, we found some logging messages in there referring to coalescing, and noticed that those messages were only emitted when NSDebugSwipeTrackingLogic was set to true. So we executed this:

defaults write org.mozilla.nightlydebug NSDebugSwipeTrackingLogic -bool YES

in the console, and then re-ran our swiping test in a debug build of Nightly to see what messages came out. Sure enough, this is what we saw:

2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 coalescing scrollevents
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002 adjusted:-0.002
2015-04-09 15:11:55.396 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 call trackingHandler(NSEventPhaseChanged, gestureAmount:-0.002)

This coalescing means that trackSwipeEventWithOptions is only getting a subset of the events that we’re sending, which is not what we had intended. It’s still not clear what triggers the coalescing – I suspect it might have to do with how rapidly we flush our native event queue, but mstange suspects it might be more sophisticated than that. Unfortunately, the pseudocode doesn’t make it too clear.

String templates and toSource might run the risk of higher memory use?

I’m not sure I “learned” this so much as saw it in passing this week, in this bug. Apparently, some section of the Marionette testing framework was doing request / response logging with toSource and string templates, and this caused a 20MB memory regression on AWSY. Doing away with those in favour of old-school string concatenation and JSON.stringify seems to have addressed the issue.
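I haven’t read the patch closely, but the shape of the change is roughly this (a hypothetical sketch with made-up function and message names, not the actual Marionette code):

// Hypothetical "before": toSource() serializes the whole object graph, and
// the string template builds more big intermediate strings for each message.
function logRequestOld(id, payload) {
  dump(`MarionetteServer request ${id}: ${payload.toSource()}\n`);
}

// Hypothetical "after": old-school concatenation plus JSON.stringify.
function logRequestNew(id, payload) {
  dump("MarionetteServer request " + id + ": " + JSON.stringify(payload) + "\n");
}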

When you change the remote attribute on a <xul:browser>, you need to re-add the <xul:browser> to the DOM tree

I think I knew this a while back, but I’d forgotten it. I actually re-figured it out during the last episode of The Joy of Coding. When you change the remoteness of a <xul:browser>, you can’t just flip the remote attribute and call it a day. You actually have to remove it from the DOM and re-add it in order for the change to manifest properly.

You also have to re-add any frame scripts you had specially loaded into the previous incarnation of the browser before you flipped the remoteness attribute.[1]
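Roughly, the dance looks like this (a hypothetical sketch of chrome JS, not code from Firefox; the frame script URL is made up):

// Hypothetical sketch: flipping a <xul:browser>'s remoteness.
function setRemoteness(browser, shouldBeRemote) {
  let parent = browser.parentNode;
  // Flipping the attribute on an in-document browser doesn't take effect...
  parent.removeChild(browser);
  if (shouldBeRemote) {
    browser.setAttribute("remote", "true");
  } else {
    browser.removeAttribute("remote");
  }
  // ...the browser only rebuilds its innards once it re-enters the DOM tree.
  parent.appendChild(browser);
  // The new incarnation has a fresh message manager, so any frame scripts
  // loaded into the old one must be loaded again (hypothetical URL):
  browser.messageManager.loadFrameScript(
    "chrome://my-addon/content/frame-script.js", true);
}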

Using Mercurial, and want to re-land a patch that got backed out? hg graft is your friend!

Suppose you got backed out, and want to reland your patch(es) with some small changes. Try this:

hg update -r tip
hg graft --force BASEREV:ENDREV

This will re-land your changes on top of tip. Note that you need --force; otherwise, Mercurial will skip over changes it notices have already landed in the commit ancestry.

These re-landed changes are in the draft stage, so you can update to them and, assuming you are using the evolve extension[2], hg commit --amend them before pushing. Voila!
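For example (with a hypothetical revision placeholder):

hg update -r <grafted-rev>   # hop onto one of the newly grafted drafts
# ...make your small fixes...
hg commit --amend            # fold the fixes into that changeset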

Here’s the documentation for hg graft.


  1. We sidestep this with browser tabs by putting those browsers into “groups”, and having any new browsers, remote or otherwise, immediately load a particular set of frame scripts. 

  2. And if you’re using Mercurial, you probably should be. 

Things I’ve Learned This Week (March 30 – April 3, 2015)

This is my second post in a weekly series, where I attempt to distill my week down into some lessons or facts that I’ve picked up. Let’s get to it!

ES6 – what’s safe to use in browser development?

As of March 27, 2015, ES6 classes are still not safe for use in production browser code. There’s code to support them in Firefox, but they’re Nightly-only, behind a build-time pref.

Array.prototype.includes and ArrayBuffer.transfer are also Nightly-only at this time.

However, the rest of the ES6 Harmony work currently implemented in Nightly is fair game for use, according to jorendorff. The JS team is also working on a wiki page to tell us Firefox developers which ES6 features are safe to use and which are not.

Getting a profile from a hung process

According to mstange, it is possible to get profiles from hung Firefox processes using lldb[1]:

  1. After the process has hung, attach lldb.
  2. Type in[2]:
    p (void)mozilla_sampler_save_profile_to_file("somepath/profile.txt")
  3. Clone mstange’s handy profile analysis repository.
  4. Run:
    python symbolicate_profile.py somepath/profile.txt
    to graft symbols into the profile. mstange’s scripts do some fairly clever things to get those symbols – if your Firefox was built by Mozilla, then it will retrieve the symbols from the Mozilla symbol server. If you built Firefox yourself, it will attempt to use some cleverness[3] to grab the symbols from your binary.

Your profile will now, hopefully, be updated with symbols.

Then, load up Cleopatra and upload the profile.

I haven’t yet had the opportunity to try this, but I hope to next week. I’d be eager to hear people’s experiences giving this a go – it might be a great tool in determining what’s going on in Firefox when it’s hung[4]!
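Putting those steps together, an end-to-end session might look something like this (a sketch with a hypothetical process ID; the profile path is the same placeholder used above):

lldb -p 12345
(lldb) p (void)mozilla_sampler_save_profile_to_file("somepath/profile.txt")
(lldb) detach
(lldb) quit
python symbolicate_profile.py somepath/profile.txt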

Parameter vs. Argument

I noticed that when I talked about “things that I passed to functions[5]”, I would use “arguments” and “parameters” interchangeably. I recently learned that there is more to those terms than I had originally thought.

According to this MSDN article, an argument is what the caller passes in to a function; from the function’s point of view, what it receives are parameters. It’s like two sides of the same coin. Or, as the article puts it, like cars and parking spaces:

You can think of the parameter as a parking space and the argument as an automobile. Just as different automobiles can park in a parking space at different times, the calling code can pass a different argument to the same parameter every time that it calls the procedure.[6]

Not that it really makes much difference, but I like knowing the details.
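A couple of lines of JavaScript make the distinction concrete:

function add(a, b) { // a and b are the parameters (the parking spaces)
  return a + b;
}

add(2, 3); // 2 and 3 are the arguments (the cars parked there for this call)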


  1. Unfortunately, this technique will not work on Windows. :(  

  2. Assuming you’re running a build after this revision landed. 

  3. A binary called dump_syms_mac in mstange’s toolkit, and nm on Linux. 

  4. I’m particularly interested in knowing if we can get JavaScript stacks via this technique – I can see that being particularly useful with hung content processes. 

  5. Or methods. 

  6. Source 

DocShell in a Nutshell – Part 1: Original Intents

I think in order to truly understand what the DocShell currently is, we have to find out where the idea of creating it came from. That means going way, way back to its inception, and figuring out what its original purpose was.

So I’ve gone back, peered through various archived wiki pages, newsgroup and mailing list posts, and I think I’ve figured out that original purpose.[1]

The original purpose can be, I believe, summed up in a single word: embedding.

Embedding

Back in the late ’90s, sometime after the Mozilla codebase was open-sourced, it became clear to some folks that the web was “going places”. It was the “bee’s knees”. It was the “cat’s pajamas”. As such, it was likely that more and more desktop applications were going to need to be able to access and render web content.

The thing is, accessing and rendering web content is hard. Really hard. One does not simply write web browsing capabilities into their application from scratch hoping for the best. Heartbreak is in that direction.

Instead, the idea was that pre-existing web engines could be embedded into other applications. For example, Steam, Valve’s game distribution platform, displays a ton of web content in its user interface. All of those Steam store pages? Those are web pages! They’re using an embedded web engine in order to display that stuff.[2]

So making Gecko easily embeddable was, at the time, a real goal, and a real project.

nsWebShell

The problem was that embedding Gecko was painful. The top-level component that embedders needed to instantiate and communicate with was called “nsWebShell”, and it was pretty unwieldy. Lots of knowledge about the internal workings of Gecko leaked through the nsWebShell component, and its interface changed far too often.

It was also inefficient – nsWebShell didn’t just represent the top-level “thing that loads web content”. Instances of nsWebShell were also used recursively for subdocuments within those documents – for example, (i)frames within a webpage. These nested nsWebShells formed a tree. That’s all well and good, except for the fact that there were things that the nsWebShell loaded or did that only the top-level nsWebShell really needed to load or do. So there was definitely room for some performance improvement.

In order to correct all of these issues, a plan was concocted to retire nsWebShell in favour of several new components and a slew of new interfaces. Two of those new components were nsDocShell and nsWebBrowser.

nsWebBrowser

nsWebBrowser would be the thing that embedders would drop into their applications – it would be the browser, and would do all of the loading / doing of things that only the top-level web browser needed to do.

The interface for nsWebBrowser would be minimal, just exposing enough so that an embedder could drop one into their application with little fuss, point it at a URL, set up some listeners, and watch it dance.

nsDocShell

nsDocShell would be… well, everything else that nsWebBrowser wasn’t. So that dumping ground that was nsWebShell would get dumped into nsDocShell instead. However, a number of new, logically separated interfaces would be created for nsDocShell.

Examples of those interfaces were:

  • nsIDocShell
  • nsIDocShellTreeItem
  • nsIDocShellTreeNode
  • nsIWebNavigation
  • nsIWebProgress
  • nsIBaseWindow
  • nsIScrollable
  • nsITextScroll
  • nsIContentViewerContainer
  • nsIInterfaceRequestor
  • nsIScriptGlobalObjectOwner
  • nsIRefreshURI

So instead of a gigantic, ever-changing interface, you had lots of smaller interfaces, many of which could eventually be frozen over time (which is good for embedders).

These interfaces also made it possible to shield embedders from various internals of the nsDocShell component that embedders shouldn’t have to worry about.

Ok, but… what was it?

But I still haven’t answered the question – what was the DocShell at this point? What was it supposed to do now that it was created?

This ancient wiki page spells it out nicely:

This class is responsible for initiating the loading and viewing of a document.

This document also does a good job of describing what a DocShell is and does.

Basically, any time a document is to be viewed, a DocShell needs to be created to view it. We create the DocShell, and then we point that DocShell at the URL, and it does the job of kicking off communications via the network layer, and dealing with the content once it comes back.

So it’s no wonder that it was (and still is!) a dumping ground – when it comes to loading and displaying content, nsDocShell is the central nexus point of communications for all components that make that stuff happen.

I believe that was the original purpose of nsDocShell, anyhow.

And why “shell”?

This is a simple question that has been on my mind since I started this. What does the “shell” mean in nsDocShell?

Y’know, I think it’s actually a fragment left over from the embedding work, and that it really has no meaning anymore. Originally, nsWebShell was the separation point between an embedder and the Gecko web engine – so I think I can understand what “shell” means in that context – it’s the touch-point between the embedder and the embedee.

I think nsDocShell was given the “shell” moniker because it did the job of taking over most of nsWebShell’s duties. However, since nsWebBrowser was now the touch-point between the embedder and embedee… maybe “shell” makes less sense. I wonder if we missed an opportunity to name nsDocShell something better.

In some ways, “shell” might make some sense because it is the separation between various documents (the root document, any sibling documents, and child documents)… but I think that’s a bit of a stretch.

But now I’m racking my brain for a better name (even though a rename is certainly not worth it at this point), and I can’t think of one.

What would you rename it, if you had the chance?

What is nsDocShell doing now?

I’m not sure what’s happened to nsDocShell over the years, and that’s the point of the next few posts in this series. I’m going to be going through the commits hitting nsDocShell from 1999 until the present day to see how nsDocShell has changed and evolved.

Hold on to your butts.

Further reading

The above was gleaned from the following sources:


  1. I’m very much prepared to be wrong about any / all of this. I’m making assertions and drawing conclusions by reading and interpreting things that other people have written about DocShell – and if the telephone game is any indication, this indirect analysis can be lossy. If I have misinterpreted, misunderstood, or completely missed the point in any of the above, please don’t hesitate to comment, and I will correct it forthwith. 

  2. They happen to be using WebKit, the same web engine that powers Safari and (until recently) Chromium. According to this, they’re using the Chromium Embedded Framework to display this web content. There are a number of applications that embed Gecko. Firefox is the primary consumer of Gecko. Thunderbird is another obvious one – when you display HTML email, it’s using the Gecko web engine in order to lay it out and display it. WINE uses Gecko to allow Windows binaries to browse the web. Making your web engine embeddable, however, has a development cost, and over the years, making Gecko embeddable seems to have become less of a priority. Servo is a next-generation web browser engine from Mozilla Research that aims to be embeddable. 

Gecko Profiler now works in Thunderbird Daily

One of the first steps to making software snappier is knowing where the bottlenecks are. For Thunderbird, finding those bottlenecks has been hard – we haven’t had any tools to make drilling down to the slow bits easy.

Until now.

The platform folks have developed a really awesome profiler tool, and it’s been working really nicely in Firefox for some time now.

This past week, I spent a few hours making it work in Thunderbird.

If you’ve got a recent Daily of Thunderbird around, you can try it out right now.

How to bask in the glory of this awesome tool

  1. Make sure your Daily is recent. Anything built from today onwards should work.
  2. Install this add-on. It’s restartless!
  3. Check out your status bar. There should be two little panels there – “Disabled” and “Dump Profile”.
  4. Click on “Disabled” to switch the profiler into “Enabled” mode. Once you do that, it starts recording.
  5. Do some stuff, like check your mail…or do a search.
  6. Click on “Dump Profile”.
  7. A content tab will open that will show you profiling data gathered up to that point.
  8. Click on “Enabled” to disable the profiler – this will clear out the recording, to let you do a new one.

The web app that lets you browse the profile data is pretty sophisticated – you can read the skinny about it here.

Cooking with gas

On Windows or OS X, do an optimized, non-debug build of comm-central with the --enable-profiling flag set. Now you get super rich profiling data. Now you’re cookin’ with gas.
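For reference, the relevant mozconfig lines would look something like this (a sketch assuming the standard ac_add_options syntax):

# Hypothetical mozconfig sketch: optimized, non-debug, with profiling enabled
ac_add_options --enable-optimize
ac_add_options --disable-debug
ac_add_options --enable-profiling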

So hopefully this will be useful in making Thunderbird better, faster and stronger.

Big thanks to Benoit Girard for his help and guidance porting the add-on, and to the platform team for creating such a badass tool.
