Thursday, June 23, 2011

Debugging in the P5 IDE

Recently I've been using the Processing IDE to build wondrous things. It works reasonably well, having at least some of the features we've come to expect from modern IDEs, right up until you try to debug a program in it. If you happen to have a missing parenthesis, curly brace, semicolon, etc. in your program, the IDE will helpfully point you to a random file (the chances of the error actually being in this file are pretty low) and complain that it encountered an unexpected symbol such as a }, ;, or ).

It is important to realize at this point that the }, ;, or ) is not the actual problem with your program; it's just the first thing the parser encountered after the error. So all you have to do is find a } in your program (can't be too many of those, right?) and read backwards from it until you find your error.
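To see why the reported location can be so misleading, here's a deliberately broken Processing sketch (draw() and keyPressed() are Processing's standard handlers; the bug is invented for illustration):

    void draw() {
      background(0);
      // oops- the closing } of draw() was forgotten here

    void keyPressed() {
      println(key);
    }

The parser reads straight past the real mistake and only complains when it later meets a token that can't appear there- which may be in a different file entirely.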

Then, once you've re-read all the code in your program and still haven't found the problem, you may come to the internet for help. Here is what you should do instead to find syntax errors in these programs.

Unlike other IDEs, Processing only deals with one error at a time. If it finds one it will immediately stop and tell you what it is, and syntax errors always take precedence over all other errors. How can we use this fact to help us find the bug?

Comment out half your code. Seriously. For half of your source code files, do a CTRL-A and a CTRL-/. Then try to run your program. If the syntax error was in the files you just commented out, you will now get some other error that is not a syntax error! Proceed to uncomment half of your commented-out files and run again to find out which quarter of your source files the error is in. Repeat this process until you've narrowed it down to a single file. Then comment out half the methods in that file, then a quarter, and so on until you've narrowed it down to one method that you can scrutinize for the error.
 
If the error wasn't in the half of the files you commented out, un-comment them, comment out the other half, and take it from there.

Note that this process becomes more complicated if you have multiple syntax errors, in which case you may need to comment out the entire thing and un-comment it file by file.

Here's another free tip: don't re-declare class variables when you are initializing them in your constructor. The re-declaration creates a new local variable that shadows the class variable, so the class variable itself is never set and you will be unable to access the value later.
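For example, in a minimal made-up sketch class (the names are invented):

    class Ball {
      float speed;

      Ball() {
        // Bug: the "float" keyword declares a brand-new local variable
        // that shadows the class variable, which stays at its default of 0.
        float speed = 5.0;
      }

      void show() {
        println(speed); // prints 0.0, not the 5.0 you assigned
      }
    }

The fix is to drop the type inside the constructor- a plain speed = 5.0; assigns to the class variable.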

Friday, June 17, 2011

Performance profiling and analysis of webpages using Shark and nightly Firefox builds

This entire post was lifted directly from a presentation by David Humphrey. Probably with some errors since I don't know what I'm doing as well as he does.

Sometimes you experience really poor performance in your JS applications. The causes of these performance issues can be very unintuitive, and you can't always find them by staring blankly at the code for hours on end. This post is about how you might go about finding your problems so that you can fix them.

You can find nightly builds of Mozilla Firefox right here. One of them is labelled as shark, as seen in this image.

This build of Firefox has been made to work properly with the OS X application Shark. If you have it installed it will be at /Developer/Applications/Performance Tools/Shark.app; otherwise you'll have to grab Xcode from here. Note that these instructions are for Mac OS X users only!

Once you have that Shark-enabled version of Firefox, run it and navigate to whatever page you want to test the performance of. Then open Shark, point it at Firefox, and click Start. Let it run for a while, then click Stop, and it'll start analyzing the results. Closing whatever performance-heavy window you had open in the nightly build may make the analysis go faster.


The analysis window looks like this:

You can expand the functions in the symbols section to see what they are doing inside- in this case our heaviest functions are all doing WebGL-related work, which is good. You want to avoid situations where a large amount of time is spent in XUL or page-redrawing functions, which would indicate that you are redrawing the page a lot more often than you should be.

You can also view the test results in tree mode, like this:

This gives you another way of looking at the results, where big changes in the time % indicate where time is being spent- in this case it's in WebGL-related functions, which is again good. From these specific screenshots we can see that we aren't making mistakes like re-rendering the web page repeatedly, but our performance was still terrible.

So we must look for our problem in a different way. Firefox uses JavaScript engines like JaegerMonkey (I'm not very knowledgeable about these yet, so you should look for information on them elsewhere) to compile JavaScript code into machine code that runs much, much faster than the interpreter does. But it can't always compile JS code- long series of if-else statements can trip it up, for example- and if it decides it's not worth compiling your JS code then it will fall back to the interpreter and you will go orders of magnitude slower.

Finding out when it's doing this is a slightly more complicated process.
Step one is to download a debug-enabled build of Firefox. You can find them right here. You're looking for a folder like the one in the screenshot, but with a more recent date.


Step two is to cry about the speed of wireless network access at your location.

Step three is to open up Terminal and go to wherever you just installed that debug build. The .app file contained in mine was called Tumucumaque for trademark reasons. Once there you'll want to run the following command:

    TMFLAGS=minimal,abort JMFLAGS=abort TumucumaqueDebug.app/Contents/MacOS/firefox -profilemanager > /tmp/log 2>&1

This will dump stderr and stdout into /tmp/log. The output that ends up there comes from the TraceMonkey and JaegerMonkey engines, and shows all the errors and other trouble they encountered as they went through your code. Navigate to your test page and let it run for a while; any problems hit while compiling and executing the JavaScript on that page will be written to the log file, where you can look at them.

You can quit Firefox once it has been running for a little while and then do a "cat /tmp/log | grep Abort" in Terminal to get a list of all the places where JaegerMonkey or TraceMonkey decided to start flinging poop at the walls instead of working hard on compiling your JavaScript. It may look something like this:

That highlighted bit looks interesting. The line it is on tells me that Firefox gave up and went back to the interpreter at that location. I wonder why?

Well, when we open up that file and look at line 9068 we see this:

Now, you might say to yourself "Hey, self, this looks like the type of loop that the monkeys can't handle!" And it is. The first time through, the monkeys compile the code path that the loop takes. Then it goes through again and they have to re-compile to include the extra path your code took because it went down a different branch of the if-else tree. Repeat until the monkeys stop optimizing and the interpreter steps in. Given that this is the main rendering loop of C3DL, this is a very bad place for that to happen.

Now your job is to rewrite that loop in a way that the monkeys can handle. You'll want to replace those if-else statements with something better, like a switch statement or a JavaScript object that has functions with names that match your conditions, so you can do things like call the AABB function in that object instead of checking if culling === "AABB".
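As a rough sketch of that idea (the culling === "AABB" check comes from the loop above; every other name here is invented):

    // Invented stand-ins for whatever the real culling routines are:
    function cullWithAABB(scene) { /* ... */ }
    function cullWithBoundingSphere(scene) { /* ... */ }
    function renderAll(scene) { /* ... */ }

    // Before: a branchy check that the hot loop re-takes every frame.
    function renderBefore(scene, culling) {
      if (culling === "AABB") {
        cullWithAABB(scene);
      } else if (culling === "BoundingSphere") {
        cullWithBoundingSphere(scene);
      } else {
        renderAll(scene);
      }
    }

    // After: a lookup object whose property names match the conditions.
    var cullers = {
      "AABB": cullWithAABB,
      "BoundingSphere": cullWithBoundingSphere
    };

    function renderAfter(scene, culling) {
      // One stable code path no matter which culling mode is active:
      (cullers[culling] || renderAll)(scene);
    }

Now the loop body executes the same statement every frame regardless of the culling mode, which gives the compiler a single code path to optimize.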

When the monkeys stop complaining about that loop you should see a large performance increase.

Friday, June 3, 2011

Automated testing

You can see the automated testing that I've created at Germany. It will automatically take you to the test results page when it is done testing. There are also performance tests at Germany-Perftests.

The test suite uses a slightly modified version of Sundae to perform the reference tests. Each test loads any necessary files, executes the reference test, uploads the results to the server by way of an HTTP GET request, and then forwards the browser to the next page in the test sequence.
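In rough outline, each test page does something like this (a sketch only- the helper names, the script URL, and its parameters are all invented, not the real harness code):

    // Invented stand-in for the actual reference test (asset loading omitted):
    function executeReferenceTest(testName) {
      // ...render the scene and compare it against the reference...
      return "pass"; // placeholder result
    }

    function submitAndAdvance(testName, nextPage) {
      var result = executeReferenceTest(testName);
      // Report the result to the server with a plain HTTP GET request:
      var beacon = new Image();
      beacon.src = "/submit.pl?test=" + encodeURIComponent(testName) +
                   "&result=" + encodeURIComponent(result);
      // Once the request has gone out, forward the browser to the next
      // page in the test sequence:
      beacon.onload = beacon.onerror = function () {
        window.location.href = nextPage;
      };
    }

    submitAndAdvance("example-test", "test002.html");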

The server has two perl scripts on it. One of the scripts stores the results of tests in an SQLite database and can also create the database if it does not exist. The other script generates the HTML page that shows the test results.

The results page uses jQuery UI with the DataTables plugin to turn a normal HTML table into one that has re-arrangeable rows (to make them easier to compare) and sortable table headers.
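Wiring that up looks roughly like this (a sketch- the table id and options are assumptions about this particular page, and it presumes jQuery, jQuery UI, and DataTables are already loaded):

    $(document).ready(function () {
      // DataTables gives the plain HTML table sortable column headers:
      $("#results").dataTable({ "bPaginate": false });
      // jQuery UI's sortable lets the rows be dragged into a new order,
      // so similar results can be placed side by side:
      $("#results tbody").sortable();
    });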


Please let me know if you have any comments. I'm still trying to determine the best way to show the performance results- the current system of just showing the FPS seems hard to read. A lot more information could also be gathered about the computers that are submitting the tests- at the moment only the IP address and user agent are recorded. I intend to add detail pages that show the test results from all of the computers that share a given user agent, as well as one that shows a timeline of results for a given test. This should help in analyzing the test results.

At the moment, though, I believe it's a very good test suite for C3DL, because the only thing needed to run the tests is to visit the web page- there is no need for any sort of deployment, which means it should scale extremely well and allow us to test on a multitude of platforms. Try it out and tell me what you think. I'm very interested in expanding the test suite so that it can be used for any sort of HTML canvas related testing.