
Firefox Performance Update #11

Wow, it’s been a while1 since I posted one of these. We haven’t been resting on our laurels though – a bunch of work has been going on, and I want to highlight some of the big pieces that I’ve seen go by.

But first…

This Performance Update is brought to you by: getBoundsWithoutFlushing

For privileged JavaScript running in the browser, you have access to an interface called nsIDOMWindowUtils. These days, instead of doing a bunch of XPCOM gymnastics to get to that interface, you can access it via window.windowUtils. windowUtils exposes a handy function called getBoundsWithoutFlushing, and it delivers exactly what it says on the tin: you can pass it an element, and it’ll give you the most recently calculated bounds for the element without causing a style or layout flush.

That’s great! However, use it with caution – because we’re getting information without flushing, the bounds information might be stale. For example, if you have an element that’s 50×50 pixels, and you then apply some style in JavaScript that makes the element 500 pixels wide, calling getBoundsWithoutFlushing immediately after setting the style would still return the 50×50 pixel box. The information will only be brought up to date after the next flush, which will occur either from the refresh driver (good!) or from some other code causing a layout flush (maybe bad!).
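Here’s a minimal sketch of the difference, assuming privileged chrome JavaScript with access to window.windowUtils (the element ID and the sizes are made up for illustration):

    // Minimal sketch (privileged JavaScript); the element ID and sizes are illustrative.
    let utils = window.windowUtils;
    let box = document.getElementById("some-box"); // currently 50x50 pixels

    // Most recently calculated bounds – no style or layout flush happens.
    let bounds = utils.getBoundsWithoutFlushing(box);
    console.log(bounds.width, bounds.height); // 50 50

    // Widen the element from script...
    box.style.width = "500px";

    // ...and read again right away: the cached bounds are stale, so we
    // still see the old 50 pixel width until the next flush.
    console.log(utils.getBoundsWithoutFlushing(box).width); // still 50

    // getBoundingClientRect(), by contrast, forces a synchronous layout
    // flush and reports the up-to-date 500 pixel width – at a cost.
    console.log(box.getBoundingClientRect().width); // 500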

If you want a refresher on style and layout flushes, I highly recommend reading this document that the front-end team put together.

And now, without further ado, here’s what the Firefox Front-end Performance team’s been up to lately!

ClientStorage (Completed by Doug Thayer)

This is a big one if you’re on macOS. Doug’s work here allows us to communicate more efficiently with the GPU on Mac hardware, which should result in smoother animations, and hopefully less CPU (and power!) bandwidth being hogged with memory-copying operations.

This was so effective that it closed out the remaining performance bug on macOS that was preventing tab warming from landing! This means that tab warming should be shipping to our release channel on macOS in Firefox 63!

Experiments with the Background Process Priority Manager (In-Progress by Doug Thayer)

This project attempts to take advantage of our multi-process architecture by reducing the priority of processes that have no tabs being displayed in the foreground to the user2. This is the first time, at least to my knowledge, that we’ve attempted to fiddle with process priority on Firefox Desktop3, which means there are a bunch of unknowns for us to sort through.

Doug and I have been testing lowered background tab process priority for a few weeks, and have already identified one bug, which has recently been fixed. Once that fix is available in Nightly builds, we’ll keep tinkering with it to see if any other bugs surface, and then we’ll consider testing it out on our Nightly audience.

If you want to experiment with it right now, you can go to about:config, and create a new bool pref called dom.ipc.processPriorityManager.enabled, and set it to true. Please be warned, this is still very much in the early stages, so you might see some odd behaviour.
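Alternatively, if you’re comfortable in the Browser Console, the same pref can be flipped from privileged JavaScript – a quick sketch:

    // Sketch: flip the pref from privileged code (e.g. the Browser Console).
    // This is equivalent to creating the bool pref by hand in about:config.
    Services.prefs.setBoolPref("dom.ipc.processPriorityManager.enabled", true);

    // To back out of the experiment later:
    Services.prefs.clearUserPref("dom.ipc.processPriorityManager.enabled");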

Migrate consumers to the new Places Observer system (In-Progress by Doug Thayer)

Doug overhauled the Places Observer Notification APIs a month or so back, allowing consumers to take advantage of batches of notifications4. Doug is now in the process of converting a number of callsites within our Bookmarking code to take advantage of this. Once he puts out a few test failures and lands these patches, operations on large numbers of bookmarks should be handled more efficiently.
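I haven’t dug into the new API in depth myself, but the batched style of notification looks roughly like this (a sketch – the event name and payload fields here are my assumptions, not necessarily the final API):

    // Rough sketch of the batched Places observer style; event names and
    // payload fields are assumptions for illustration.
    PlacesObservers.addListener(["bookmark-added"], events => {
      // One callback fires for a whole batch of changes, instead of one
      // notification per bookmark.
      for (let event of events) {
        console.log("Added bookmark", event.guid, event.url);
      }
    });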

Document Splitting (In-Progress by Doug Thayer)

With our Graphics team getting closer and closer to making WebRender a reality, we’ve been looking at ways we can make our front-end code work more efficiently with it.

Disclaimer: I’m not 100% up-to-speed on the various nuances to this project, so I might get a few things wrong below. If someone from the Graphics team reads this and has some corrections or clarifications, please send them my way.

Document splitting will allow Gecko and WebRender to draw updates to the browser UI independently from web content. Historically, we’ve done something like this with layers and layer invalidation, but with WebRender, we have one giant display list that gets shipped over to the GPU thread to render for the whole window.

With document splitting, we’ll have independent structures for (at least for starters) the UI and web content. We suspect this will allow us to render more efficiently – especially when there’s a lot going on in web content (or there’s a lot going on in the UI!).

Make the RemotePageManager lazy (Completed by Felipe Gomes)

Felipe made it so that the RemotePageManager module isn’t loaded until necessary, and that saved us a handy 3.5% on content process start-up time, and 1% on base content memory used by JavaScript!
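I’m glossing over the details of Felipe’s actual patch, but the general shape is the usual lazy-getter pattern we use in front-end code – something like this sketch (the module path is illustrative):

    // Sketch of the lazy module pattern: the module isn't imported until
    // the first time the getter is actually used. The module path is illustrative.
    ChromeUtils.defineModuleGetter(
      this,
      "RemotePageManager",
      "resource://gre/modules/RemotePageManager.jsm"
    );

    // The first access to RemotePageManager later on is what triggers the
    // actual import – until then, we pay nothing for it.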

Smoother Tab Animations (In-Progress by Felipe Gomes)

The Photon UI shipped in Firefox 57 to great fanfare, and all of us front-end folk were pretty psyched about it. Unfortunately, as is always the case, there was some work that had to be cut for time.

Felipe is picking up some work that we cut that re-works how we do tab animations5. Our current animations involve growing and shrinking tabs, and for each frame of that animation, we calculate the change in style and layout and paint the change on the main thread.

The flashing occurs when we paint. Notice how the tabs flash as they open and close.

The new animations were designed from the ground up to take advantage of compositor-accelerated CSS6.
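To give a rough idea of the difference (illustrative only, not Felipe’s actual patch – the tab element below is a stand-in): animating geometry like width does style, layout, and paint work on the main thread every frame, while transform and opacity animations can be handed off to the compositor.

    // Illustrative only – not the actual tab animation code.
    let tab = document.getElementById("example-tab"); // stand-in element

    // Animating geometry: style + layout + paint on the main thread,
    // every single frame of the animation.
    tab.animate([{ width: "0px" }, { width: "200px" }],
                { duration: 150, easing: "ease-out" });

    // Animating transform and opacity: these can run entirely on the
    // compositor, leaving the main thread free.
    tab.animate([{ transform: "scaleX(0)", opacity: 0 },
                 { transform: "scaleX(1)", opacity: 1 }],
                { duration: 150, easing: "ease-out" });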

Felipe has some early try builds that he’s posted with the new animations, and we’re pretty excited to see where it goes. Or, if you don’t want to try a build, you can check out this video. Or this video (it’s the previous video in slow motion). Or this video for a variation!

Overhauling about:performance (In-Progress by Florian Quèze)

Florian and Tarek Ziade have proven out the platform work to support the new about:performance, and are now trying to bang out the final bits to make the new about:performance something we actually want to ship. They’re working with our UX team to figure out exactly what that looks like, but we’re hoping ultimately to give the user the most informed picture possible on what is eating up their CPU cycles.

You can try the new about:performance today in Nightly by setting dom.performance.enable_scheduler_timing to true, then restarting the browser, and then visiting about:performance.

Browser Adjustment Project (In-Progress by Gijs Kruitbosch)

Informed by the Firefox Hardware Report, Gijs has been fitted out with some new hardware that we think is representative of both “average” and “weak” consumer hardware. Gijs has been focusing on prior art from other browser engines, as well as from operating systems, to see how best we can stand on the shoulders of friends and not re-invent the wheel.

Again, this is still an early-days research project, so no code’s been written yet, but we hope to have a clearer picture on how best to proceed soon.

Avoiding spurious about:blank loads in the parent process (In-Progress by Gijs Kruitbosch)

This work should allow us to avoid some unnecessary work when we create new windows and tabs. This has involved changing a very large number of tests, and doing a bunch of plumbing to get Firefox ready for this change. The dependency tree on the bug gives you a bit of the picture.

Thankfully, I think we’re approaching the home stretch on this one. Hopefully, this should buy us some precious milliseconds when starting up and opening new windows.

Enable the separate Activity Stream content process by default (In-Progress by Mike Conley and Jay Lim)

Enabling the separate Activity Stream content process will allow users to take advantage of the script caching work that my intern Jay Lim did a few months back, which should let us render about:newtab more quickly.

Unfortunately, turning this separate content process on by default has been plagued with problems – most recently, a shutdown leak when running our automated tests. Thankfully, we’ve recently made a breakthrough on the leak, and we’re working on eliminating the cause.

Cheaper tabs in titlebar (In-Progress by Mike Conley)

We run a bit of JavaScript when rendering the browser UI to figure out how exactly to lay out the tabs in the titlebar.

Unfortunately, this JavaScript involves synchronous style and layout flushes, and ultimately we’re doing calculations that are best left solely to the layout engine.
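To make that concrete, the problematic shape looks roughly like this (a simplified sketch – the element IDs and the calculation are stand-ins, not the real tabs-in-titlebar code):

    // Simplified sketch of a forced synchronous reflow – not the real
    // tabs-in-titlebar code; element IDs and the calculation are stand-ins.
    function positionTabsInTitlebar() {
      let buttons = document.getElementById("titlebar-buttonbox");

      // Reading geometry here forces the engine to flush any pending style
      // and layout work right now, on the main thread:
      let buttonsWidth = buttons.getBoundingClientRect().width;

      // ...and then we write a style based on that measurement, dirtying
      // layout again for the next read:
      document.getElementById("TabsToolbar").style.paddingRight =
        buttonsWidth + "px";
    }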

I’m working on swapping out the JavaScript for raw CSS. Running the benchmarks locally, this saved anywhere from 16-20ms on the window opening Talos benchmark. That might not sound like a lot, but from a performance engineer’s perspective, that’s a pretty solid gain.

Grab bag of Notable Performance Work

And without further ado, here’s a bunch of miscellaneous work that’s gone into the browser recently that has helped make it faster and better! Kudos to all the folk who landed these things! A bunch of these fixes are going out in Firefox 63 and Firefox 64.


  1. Like… 2 months. Oof. The blog guilt is overwhelming! 

  2. Across all windows, if you were worried about that. 

  3. It looks like we did something like this for Firefox OS though, since I believe the infrastructure we’re using to do this comes from that project. 

  4. Up until now, changes were handled one at a time. 

  5. Check out the “new tab motions” attachment in this bug for some videos by our talented designer epang! 

  6. Here’s a great post from some of our friends at Google about this sort of work. 

Code Spelunking: Review Board Extensions

So this summer, I’m working on Review Board for the Google Summer of Code.

Until my GSoC acceptance, my romps into the code had been relatively shallow.  But with my proposal being given the green light, I’ve started doing more extensive explorations.

Review Board is built using the Django web framework.  I haven’t worked with Django before, but I have quite a bit of experience with Rails, so that should be an asset.  Using a web framework means having (relatively) predictable source code layout, and Review Board is no exception.

Djblets

At one point or another, the Review Board developers realized that a lot of their code wasn’t Review Board specific, and could be abstracted out into an external library.

That library is called Djblets.

Among other things, Djblets adds a DataGrid component for easy record sorting and pagination.  There are improvements to Django’s Authentication system.  Functions for easily displaying a user’s Gravatar.

And, lo and behold, there is a branch of Djblets that provides classes and functions for giving a Django application an extension framework. The classes are abstract enough so that, in your Django application, you can specify different types and behaviours for your Hooks.

Djblets -> Review Board

The Review Board extension branch takes these Djblets extension classes, and extends them into DashboardHooks, NavigationBarHooks, ReviewRequestDetailHooks…lots of different hooks.

So, Djblets creates the foundation abstractions.  Review Board makes these abstractions a little more specific.  And then an extension writer needs to instantiate and use these classes to design their extensions.  It sounds complicated, I know.

So Let’s Map It Out

When I start learning a new code base, I do a lot of drawing.  To me, getting to know a code base is like getting to know a city, and that means walking around it, and mapping it out.

So I’ve taken the liberty of mapping out the extension classes that I’ve found, and how they relate to one another.  Note that at the bottom of my map, a simple extension (RB Reports) is using some of those classes to hook itself into Review Board.  You can find this, and other extensions, here.

My map of the extension framework

Click here to check out my map of the current state of the extension framework

Now, before someone in the department starts complaining about my misuse of UML:  I’m not a UML guy.  I just wanted an easy piece of diagramming software, and the one that I found (Dia), did UML.  I just wanted something to draw boxes and lines. So please don’t freak out if you think I’m using the wrong symbols.

One symbol you might be wondering about is the blue quantum-flux-capacitor-implosion.

I’ll save that for a future post.

Process Improvement of Peer Code Review and Behavior Analysis of its Participants

Process Improvement of Peer Code Review and Behavior Analysis of its Participants

by WANG Yan-qing, LI Yi-jun, Michael Collins, LIU Pei-jie
SIGCSE ’08, March 12-15, 2008

If you’ve been following, I’ve been trying to figure out why code reviews aren’t a part of the basic undergraduate computer science curriculum.  The other papers and articles I’ve read so far have had less to do with the classroom, and more to do with code reviews in industry.

This paper got a little bit closer to the classroom, and, more importantly, closer to my particular question.

To begin, the paper introduces some terminology I’m not familiar with – the software crisis.  I’m familiar with the concept though:  writing good software for large systems is not a simple problem, and as computers become a bigger and more important part of our lives, this inability to easily write good code could quickly end up biting us in the collective rear.

Code review is one of several methods that the software industry has adopted to try to “tame” the software crisis.

I like this part:

Even though code reviews are time consuming, they are much more efficient than testing [19]. A typical engineer, for example,  will find approximately 2 to 4 defects in an hour of unit testing but will find 6 to 10 defects in each hour of review code [19].

What more argument do you need?  It’s just a matter of getting rid of that “time consuming” part, right?  Right…

And this is even juicier:

PCR [peer code review] is a technique which is generally considered to be effective on promoting students’ higher cognitive skills [9], since students use their own knowledge and skill to interpret, analyze and evaluate others’ work to clarify and correct it [2].

Wonderful!  I’m in my problem space!

Reading along, it seems that this paper is introducing a new, refined structure for PCR, and will detail results of a study on using that new structure in a programming course.  Cool.

The introduction ends by saying that the new structure seemed to enhance the quality of students’ work, as well as their ability to critique one another.  Great news!

It’s not all sunshine and puppies, though – they also mention that they ran into a few problems, and that they’ll be discussing those too.

So the first thing they’ve done, is tried to make the terminology clearer:

Roles

  • Author:  the student who writes the code that is being reviewed
  • Reviewer:  the person who is reviewing the code
  • Reviser:  the author, after receiving a Comments Form from a Reviewer
  • Instructor:  the teacher or qualified TA who is responsible for the class

Documents

  • Manuscript Code:  the unrevised code that is first submitted by an Author
  • Comments Form:  the comments given from the Reviewer to the Author
  • Revision Code:  the code that is revised by the Reviser after the Reviewer gives the Reviser the Comments Form (whew…follow that?)
  • Reference Solution:  the “answer” to the assignment, held by the Instructor

Now that we’ve got all the players and documents laid out, let’s take a look at the process:

Process

  • Phase 1:  The Author completes the Manuscript Code
  • Phase 2:  The Author emails the Manuscript Code to the Instructor.  Simultaneously, a blank Comments Form and a copy of the Manuscript Code are sent to a Reviewer
  • Phase 3:  The Reviewer reviews the code as soon as possible, filling in the Comments Form.
  • Phase 4:  The Reviewer sends the completed Comments Form back to the Author, and also sends a carbon copy to the Instructor
  • Phase 5:  After receiving the Comments Form, the Reviser (who was originally the Author…oh boy…almost went cross-eyed, there) makes the appropriate alterations to the original Manuscript Code, referencing the Comment Form where appropriate.  The completed Revision Code is emailed to the Instructor.
  • Phase 6:  The Instructor should now have a copy of the original Manuscript Code, the completed Comments Form, and the final Revision Code.  The Instructor should be able to check that the Author and Reviewer did their work properly.

Wow.  What a convoluted way of saying something simple.  They even included a diagram, with lots of arrows.  Somehow, I think this could be said simpler.  Oh well.

It also sounds like a lot of emailing.  You’re balancing your course on the reliability of the email protocol?  Errr….

Well, let’s see what problems they ran into…

  1. The assumption that all participants would carefully and responsibly carry out each phase of the process was faulty.  This may have been due to “careless authors, irresponsible reviewers and busy instructors in the review process”.
  2. Some students lack the coding ability to either:
    1. Produce code that is readable and reviewable in a constructive way
    2. Review code in a constructive, or informed way
  3. The process is difficult to control due to the reliance on email (no kidding!)
    1. Some students would not submit Manuscript Code or Comment Forms on time
    2. Some students would submit multiple copies of their Manuscript Code, due to an inherent mistrust of the reliability of email
  4. There was opportunity for students to “game” the process to their advantage. In this particular study, there was very little control of who was doing what.  Though a particular Author was supposed to write the Manuscript Code, this wasn’t enforced, and there was an occasion where another student wrote the code instead.  Same with review writing, and revision writing.  Yeah, cheating is always a problem.

The paper then goes into some discussion about the observed behaviour of Authors and Reviewers.  They noted that most students did not enjoy reviewing very poorly written code, and did not give their best effort on reviews for such code.  Mere encouragement from the instructor was not enough to compel them to give their best reviews either.  The paper suggests finding some way of making Reviewers review code more carefully; perhaps through awarding bonus marks.

Behaviour of Instructors was also analyzed.  The paper mentioned that Instructors with large class sizes might try to cut down on their workload by only viewing the Comment Forms that the Reviewers had provided.  But this strategy does not give the Instructor the entire story, and is open to manipulation from students.

The paper ends with a discussion about group formations, and how best to diffuse student cheating conspiracies.

At the last moment, they suggest some “web-based [application] with a built-in blind review mechanism” be developed.

Hm.

“Code reviews” by Arjen Markus (2009)

Code Reviews

by Arjen Markus
Deltares, The Netherlands
ACM Fortran Forum, August 2009, 28, 2

This is one of the first papers I found.  Consider it my “warm up” paper.

According to the header, Arjen Markus works for “Deltares”, and after a quick Google-hunt, I found out that Deltares is a “new independent Dutch institute for national and international delta issues”.

Upon closer inspection, it seems that Markus’ paper is concerned with what reviewers should be looking for during code reviews:

“What should you be looking for in the code?  It is not enough to check that the code adheres to the programming standard of the project it belongs to.  Such a standard may not exist, be incomplete or be focussed on layout, not on questionable constructs that are a liability.  With this article I would like to fill in this practical gap, at least partly.” (Page 4)

This isn’t exactly what I set out to look for, but I thought I’d give it a once-over anyways.

Markus’ paper is not what I would call a rigorous scientific publication.  There is no empirical data, no hypothesis, none of that good ol’ scientific method stuff.  Instead, it’s more akin to a “do” and “do not” set of advice and examples that one would find in a software engineering textbook.

A FORTRAN software engineering textbook, to be more precise.  Markus’ examples are all in FORTRAN.

Broken down simply, Markus has four principles, or bits of advice:

“The importance of being explicit”

Essentially, this means to be clear with what you’re doing in the code.  It’s common sense stuff:  don’t be overly clever, be readable, don’t use magic numbers or strings, document your code, group related routines into the same modules, use information hiding in your modules when appropriate, clear and precise error messages, etc.

“Don’t go your own way”

Markus advises developers to stick to an agreed coding standard / style guide.  Don’t reinvent the wheel – instead, use typical solutions to problems that arise.  “Don’t go against the grain” (Page 7).

“Be careful out there”

Markus advises developers to watch out for documented language quirks, common language pitfalls, etc.  This is followed by numerous examples in FORTRAN.

“Curiouser and curiouser”

Markus asks to keep an eye out for “a lack of attention to design, to readability and other aspects that are important for the program in the long run” (Page 11).    He also repeats a few things from “the importance of being explicit” – mainly, to make sure that the code is organized in a way that makes sense to the developers.

I don’t know.  I don’t think I’m the target audience here.  In hindsight, I found the information in this paper to be very general, and rather self-evident.  The only thing I seemed to learn was a bit about FORTRAN quirks.

I think I need to be less laissez faire in my paper selections.  This one didn’t help me find what I was looking for, and I should have seen that from the abstract.  Bah.

EDIT:  Why did I waste my time searching ACM when more interesting information was waiting right under my nose?  I think I have an idea what to review next.