MozReview will now create individual attachments for child commits
Up until recently, anytime you pushed a patch series to MozReview, a single attachment would be created on the bug associated with the push.
That single attachment would link to the “parent” or “root” review request, which contains the folded diff of all commits.
We noticed a lot of MozReview users were (rightfully) confused about this mapping from Bugzilla to MozReview. It was not at all obvious that marking Ship It on the parent review request would cause the attachment on Bugzilla to be r+’d. Consequently, reviewers used a number of workarounds, including, but not limited to:
Manually setting the r+ or r- flags in Bugzilla for the MozReview attachments
Marking Ship It on the child review requests, and letting the reviewee take care of setting the reviewer flags in the commit message
Just writing “r+” in a MozReview comment
Anyhow, this model wasn’t great, and caused a lot of confusion.
So it’s changed! Now, when you push to MozReview, there’s one attachment created for every commit in the push. That means that when different reviewers are set for different commits, that’s reflected in the Bugzilla attachments, and when those reviewers mark “Ship It” on a child commit, that’s also reflected in an r+ on the associated Bugzilla attachment!
I think this makes quite a bit more sense. Hopefully you do too!
That big spinner means that the graphics part of Gecko hasn’t yet given us a frame to paint for this browser tab. In other words, we have nothing yet to show for the tab you’ve selected.
In the single-process Firefox that we ship today, preparing that frame is something Firefox blocks on, so the tab simply won’t switch until the frame is ready. In fact, I’m pretty sure the whole browser becomes unresponsive while it waits.
With Electrolysis / multi-process Firefox, things are a bit different. The main browser process tells the content process, “Hey, I want to show the content associated with the tab that the user just selected”. The content process computes what should be shown, and when the frame is ready, the parent process hears about it and the switch is complete. During that waiting time, the rest of the browser stays responsive – we don’t block waiting for the frame.
So there’s this window of time between when the tab switch has been requested and when the frame is ready.
During that window of time, we keep showing the currently selected tab. If, however, 300ms passes, and we still haven’t gotten a frame to paint, that’s when we show the big spinner.
So that’s what the big spinner means – we waited 300ms, and we still have no frame to draw to the screen.
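If it helps to see the shape of that logic, here’s a rough sketch in TypeScript. To be clear, this is not the actual Firefox code: the names, and the requestFrame function especially, are invented for illustration, and only the 300ms threshold comes from the description above.

```typescript
// A rough sketch of the spinner timing, with invented names; the real
// logic lives in Firefox's tab switching code. Only the 300ms threshold
// comes from the post above.
const SPINNER_DELAY_MS = 300;

async function switchToTab(requestFrame: () => Promise<void>): Promise<void> {
  let frameReady = false;

  // If the content process hasn't produced a frame after 300ms, show the
  // spinner while we keep waiting.
  const spinnerTimer = setTimeout(() => {
    if (!frameReady) {
      console.log("showing the big spinner");
    }
  }, SPINNER_DELAY_MS);

  // Ask the content process for a frame. Because we await rather than
  // block, the rest of the browser stays responsive in the meantime.
  await requestFrame();

  frameReady = true;
  clearTimeout(spinnerTimer);
  console.log("frame ready, completing the tab switch");
}
```

In the fast case, the frame arrives before the timer fires and nobody ever sees the spinner. In the slow case, the timer wins the race, and the spinner shows until the frame finally lands.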
How bad is it?
I suspect it varies. I see the spinner a lot less on my Windows machine than on my MacBook, so I suspect that performance is somehow worse on OS X than on Windows. But that’s purely subjective. We’ve recently landed some Telemetry probes to try to get a better sense of how often the spinner is showing up, and how laggy our tab switching really is. Hopefully we’ll get some useful data out of that, and as we work to improve tab switch times, we’ll see improvement in our Telemetry numbers as well.
Where is the badness coming from?
I also seem to see the spinner when I have “many” tabs open (~30), and have a build going on in the background (so my machine is under heavy load).
One thing I’ve noticed is that there’s this function in the graphics layer, “ClientTiledLayerBuffer::ValidateTile”, that takes much, much longer in the content process than in the single-process case. I’ve filed a bug on that, and I’ll ask folks from the Graphics Team about it this week.
How you can help
UPDATE (June 1, 2015): Getting profiles from Windows is currently broken because the symbol server appears to be busted. Any profiles from Windows machines will be useless until this bug is fixed. Alternatively, set profiler.symbolicationUrl to http://symbolapi.mocotoolsstaging.net in about:config.
If you’d like to help me find more potential causes, profiles are very useful! NOTE – I don’t mean “user profiles”, as in your bookmarks / customizations / history in the profile folder. I mean a performance profile.
A performance profile is a read-out of everything that Firefox / Gecko is doing over a particular span of time. When the profiler is running, Firefox / Gecko records where the process is in the stack every 1ms or so. It also records how long it’s been since the event loop was last serviced, which helps us find jank.
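To give you a feel for the event-loop part, here’s a toy sketch in TypeScript of how you might detect jank. This is just the general idea, not how the Gecko Profiler actually works, and the interval and threshold numbers are made up.

```typescript
// A toy jank detector, not the Gecko Profiler: schedule a callback at a
// fixed interval and measure how late it actually fires. A big gap means
// something blocked the event loop. The numbers here are invented.
const intervalMs = 50;
let expected = Date.now() + intervalMs;

setInterval(() => {
  const lag = Date.now() - expected; // how late this callback fired
  if (lag > 100) {
    console.warn(`Event loop looks blocked: roughly ${lag}ms of jank`);
  }
  expected = Date.now() + intervalMs;
}, intervalMs);
```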
To help, grab the Gecko Profiler add-on, make sure it’s enabled, and then dump a profile when you see the big spinner of doom. The interesting part will be between two markers, “AsyncTabSwitch:Start” and “AsyncTabSwitch:Finish”. There are also markers for when the parent process displays the spinner – “AsyncTabSwitch:SpinnerShown” and “AsyncTabSwitch:SpinnerHidden”. The interesting stuff, I believe, will be in the “Content” section of the profile between those markers. Here are more comprehensive instructions on using the Gecko Profiler add-on.
Luckily, the last bug we were working on was related to this – we had a lot of context about cache keys swapped in already.
The other important thing to realize is that fixing this bug is a bandage fix, or a wallpaper fix. I don’t think those are official terms, but they’re the ones I use. Basically, we’re fixing a thing with the minimum required effort, because something else is going to fix it properly down the line. So we just need to do what we can to get the feature to limp along until the proper fix lands.
My proposed solution was to serialize an nsISHEntry on the content process side, deserialize it on the parent side, and pass it off to nsIWebBrowserPersist.
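In sketch form, the plan looks something like this. This is hypothetical TypeScript, not the real implementation: the actual code works with nsISHEntry and nsIWebBrowserPersist across the IPC boundary, and the HistoryEntryData fields here are invented stand-ins.

```typescript
// Hypothetical, simplified stand-in for an nsISHEntry; the fields are
// invented for this sketch.
interface HistoryEntryData {
  uri: string;
  title: string;
  cacheKey: number | null;
}

// Content process side: flatten the entry into something that can cross
// the process boundary.
function serializeEntry(entry: HistoryEntryData): string {
  return JSON.stringify(entry);
}

// Parent process side: rebuild the entry and hand it to whatever does the
// saving (nsIWebBrowserPersist, in the real code).
function deserializeAndPersist(
  wire: string,
  persist: (entry: HistoryEntryData) => void
): void {
  const entry = JSON.parse(wire) as HistoryEntryData;
  persist(entry);
}

// Usage: the child serializes, the parent deserializes and persists.
const wire = serializeEntry({ uri: "https://example.com", title: "Example", cacheKey: 42 });
deserializeAndPersist(wire, (entry) => console.log("persisting", entry.uri));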
So did it work? Watch the episode and find out!
I also want to briefly apologize for some construction noise during the video – I think it occurs somewhere halfway through minute 20 of the video. It doesn’t last long, I promise!
For this episode, Richard Milewski and I figured out the syncing issue I’d been having in Episode 9, so I had my head floating in the bottom right corner while I hacked. Now you can see what I do with my face while hacking, if that’s a thing you had been interested in.
So, like last week, I was under a bit of time pressure because of a meeting scheduled for 2:30PM (actually the meeting I was supposed to have the week before – it just got postponed), so that gave me 1.5 hours to move forward with the View Source work we’d started back in Episode 8.
I started the episode by explaining that the cache key stuff we’d figured out in Episode 9 was really important, and that a bug had been filed by the Necko team to get the issue fixed. At the time of the video, there was a patch up for review in that bug, and when we applied it, we were able to retrieve source code out of the network cache after POST requests! Success!
Now that we had verified that our technique was going to work, I spent the rest of the episode cleaning up the patches we’d written. I started by doing a brief self-code-review to smoke out any glaring problems, and then started to fix those problems.
We got a good chunk of the way before I had to cut off the camera.
I know back when I started working on this particular bug, I had said that I wanted to take you right through to the end on camera – but the truth of the matter is, the priority of the bug went up, and I was moving too slowly on it, since I was restricting myself to a few hours on Wednesdays. So unfortunately, after my meeting, I went back to hacking on the bug off-camera, and yesterday I put up a patch for review. Here’s the review request, if you’re interested in seeing where I got to!
I felt good about the continuity experiment, and I think I’ll try it again for the next few episodes – but I think I’ll choose a lower-priority bug; that way, I think it’s more likely that I can keep the work contained within the episodes.
How did you feel about the continuity between episodes? Did it help to engage you, or did it not matter? I’d love to hear your comments!
In this episode, I kept my camera off, since I was having some audio-sync issues.[1]
I was also under some time pressure, because I had a meeting scheduled for 2:30 ET,[2] giving me exactly 1.5 hours to do what I needed to do.
And what did I need to do?
I needed to figure out why an nsISHEntry, when passed to nsIWebPageDescriptor’s loadPage, was not enough to get the document out from the HTTP cache in some cases. 1.5 hours to figure it out – the pressure was on!
I don’t recall writing a single line of code. Instead, I spent most of my time inside Xcode, walking through various scenarios in the debugger, trying to figure out what was going on. And I eventually figured it out! Read this footnote for the TL;DR.[3]
[2] And when the stream finished, I found out the meeting had been postponed to next week, meaning that next week will also be a short episode. :(
[3] Basically, the nsIChannel used to retrieve data over the network is implemented by HttpChannelChild in the content process. HttpChannelChild is really just a proxy to a proper nsIChannel on the parent side. On the child side, HttpChannelChild does not implement nsICachingChannel, which means we cannot get a cache key from it when creating a session history entry. And with no cache key, there’s no way to retrieve the document from the network cache via nsIWebPageDescriptor’s loadPage.
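If a toy model helps, here’s the footnote’s problem sketched in hypothetical TypeScript. The real interfaces are nsIChannel and nsICachingChannel, and the real lookup is a QueryInterface call; everything below is a simplified stand-in.

```typescript
// Toy model: the parent-side channel knows its cache key, but the
// child-side proxy doesn't expose the caching interface at all.
interface Channel {
  uri: string;
}

interface CachingChannel extends Channel {
  cacheKey: number;
}

// Stand-in for QueryInterface(nsICachingChannel): returns null when the
// channel doesn't implement the caching interface.
function asCachingChannel(channel: Channel): CachingChannel | null {
  return "cacheKey" in channel ? (channel as CachingChannel) : null;
}

const parentChannel: CachingChannel = { uri: "https://example.com", cacheKey: 42 };
const childProxy: Channel = { uri: parentChannel.uri }; // the proxy drops the caching info

// Building the session history entry in the child: no cache key to be had.
const caching = asCachingChannel(childProxy);
console.log(caching ? caching.cacheKey : "no cache key, so no cache retrieval");
```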