Tuesday, October 4, 2011

The p5 IDE is not very good.

A project that I've been working on for the last 4 months at CDOT has involved making a relatively complex game (39 files, 240 KB of code). This was my first experience making an actual structured program in the Processing IDE and it really served to highlight the inadequacies of that program, which are as follows:

  1. No debugging. Yes, you'd better get used to println statements to narrow down a program's behaviour.
  2. Call stack on crash is not useful. The stack trace that normally tells you which line your program crashed on, and the chain of calls that led to that point, is far less useful because P5 jams all of your code files together when it compiles. The line number it reports is a line number in that concatenated pile of files, which is a nearly useless number because you have no practical way of finding that line.
  3. Their tabbed interface is bad with a large number of files. If there isn't enough space on the tab line to show the names of all the tabs, then it collapses the tabs that don't fit into tabs without names on them, which makes switching between them annoying because you can't tell which file a tab contains.
In short, try using Eclipse with a plugin to develop Processing applications. It involves more overhead, because you have to qualify calls like draw() or image() with the name of the object you are calling them on, but you will at least be able to debug the program and see where it is crashing. And it has much better tools for managing the files in your project as well.

Thursday, July 21, 2011

Parallax scrolling in processing and processingJS

A very special secret project that I am working on at CDOT involves the creation of a very large scene- one that extends well beyond the area that is actually visible. Here is an abstract representation of how that scene is laid out:

The viewable area is the part of the scene that the user actually sees. You can set this area by calling translate(-screenCornerX, -screenCornerY) at the start of each draw loop, where screenCornerX and screenCornerY are the co-ordinates of the top-left corner of the viewable area. This shifts the entire sketch over so that the elements you want the user to see end up on their screen.

A smooth scrolling effect can be created by keeping track of the amount of time between draw calls. The millis() function returns the number of milliseconds since the start of the program. Storing this value each frame and subtracting the previous frame's value from the current one gives you the number of milliseconds between frames. You can multiply that elapsed time by a scrollRateXPerMillisecond and a scrollRateYPerMillisecond to get values to add to screenCornerX and screenCornerY, which makes the screen scroll at a constant rate over time.
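Here is a minimal sketch of that timing logic. The scroll rates, the scene size, and the grid of squares standing in for the actual scene are made-up values for illustration, not the project's real code.

    float screenCornerX = 0;
    float screenCornerY = 0;
    // Hypothetical scroll rates, in pixels per millisecond
    float scrollRateXPerMillisecond = 0.05;
    float scrollRateYPerMillisecond = 0.02;
    int lastFrameTime;

    void setup() {
      size(400, 300);
      lastFrameTime = millis();
    }

    void draw() {
      int now = millis();
      int elapsed = now - lastFrameTime;  // milliseconds since the previous frame
      lastFrameTime = now;

      // Advance the corner of the viewable area at a constant rate per millisecond,
      // so the scrolling speed is independent of the frame rate
      screenCornerX += elapsed * scrollRateXPerMillisecond;
      screenCornerY += elapsed * scrollRateYPerMillisecond;

      background(255);
      translate(-screenCornerX, -screenCornerY);  // slide the scene so the viewable area is on screen

      // Stand-in for the real scene: a grid of squares spread across the large scene
      for (int x = 0; x < 2000; x += 200) {
        for (int y = 0; y < 2000; y += 200) {
          rect(x, y, 50, 50);
        }
      }
    }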

But there is a problem with this approach. You can see in the image that there are foreground objects and background objects. The background objects are intended to appear far, far behind the foreground objects, but because both layers scroll at the same rate the viewer perceives them as sitting at the same depth!
The solution to this problem is the technique known as parallax scrolling.


With parallax scrolling, when we draw a scene we want to draw the background elements first, so that the foreground elements appear on top of them. But we don't want to translate by the full screenCornerX and screenCornerY values for the background- instead we want to translate(-screenCornerX/10, -screenCornerY/10). We refer to the 10 in that function call as our parallax factor- it's a good idea to take the 10 out of there and put it in a constant variable, so that our call looks like translate(-screenCornerX/PARALLAX_FACTOR, -screenCornerY/PARALLAX_FACTOR) instead.


You can then draw all the background elements as normal, but because our translate call is different the purple area in this image ends up being our background instead of the green area.
Next we undo that translation (this is most easily done by calling pushMatrix() before the translation and drawing, and popMatrix() afterwards), then translate by the full screenCorner amounts and draw our foreground elements.
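Putting those two passes together, the draw loop ends up looking something like the minimal sketch below. The scroll speed, the PARALLAX_FACTOR value, and the drawBackground()/drawForeground() placeholders are assumptions for illustration, not the actual project code.

    int PARALLAX_FACTOR = 10;  // treat as a constant
    float screenCornerX = 0;
    float screenCornerY = 0;

    void setup() {
      size(400, 300);
    }

    void draw() {
      // Scroll one pixel per frame just so the effect is visible
      screenCornerX += 1;
      screenCornerY += 1;

      background(255);

      // Background pass: translate by only a tenth of the scroll distance
      pushMatrix();
      translate(-screenCornerX / PARALLAX_FACTOR, -screenCornerY / PARALLAX_FACTOR);
      drawBackground();
      popMatrix();

      // Foreground pass: translate by the full scroll distance, drawn on top
      pushMatrix();
      translate(-screenCornerX, -screenCornerY);
      drawForeground();
      popMatrix();
    }

    // Placeholder background and foreground elements at the same scene position
    void drawBackground() {
      fill(180);
      rect(300, 300, 80, 80);
    }

    void drawForeground() {
      fill(60);
      rect(300, 300, 40, 40);
    }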


This image shows the results of the new drawing mechanism after the foreground has scrolled 90 pixels down and 90 pixels to the right. The background only scrolled 9 pixels down and 9 pixels to the right during that transition, so the user perceives the foreground objects moving 10 times faster across the screen than the background objects. This creates the illusion that the background elements are much further away than the foreground elements, adding a great sense of depth to the scene.

Thursday, June 23, 2011

Debugging in the P5 IDE

Recently I've been using the Processing IDE to build wondrous things. It works reasonably well, having at least some of the features we've come to expect from modern IDEs, right up until you try to debug a program in it. If you happen to have a missing parenthesis, curly brace, semicolon, etc. in your program, the IDE will helpfully point you to a random file (the chances of the error actually being in this file are pretty low) and complain that it encountered an unexpected symbol such as a }, ;, or ).

It is important to realize at this point that the }, ;, or ) is not the actual problem with your program, it's just the first thing that the parser encountered after the error. So all you have to do is find a } in your program (can't be too many of those, right?) and read backwards from it until you find your error.

Then, once you've re-read all the code in your program and still haven't found the problem, you may come to the internet for help. Here is what you should do instead to find syntax errors in these programs.

Unlike other IDEs, Processing only reports one error at a time. If it finds one it will immediately stop and tell you what it is, and syntax errors always take precedence over all other errors. How can we use this fact to help us find the bug?

Comment out half your code. Seriously. For half of your source code files, do a CTRL-A and CTRL-/. Then try to run your program. If the syntax error was in the files you just commented out then you will receive some other error which will not be a syntax error! Proceed to uncomment half of your commented-out code files and run again to find out which quarter of your source files the error is in. Repeat this process until you've narrowed it down to a single file. Then comment out half the methods in the file, and then a quarter, etc until you've narrowed it down to one method that you can scrutinize for the error.
 
If the error wasn't in the half of the files that you commented out then un-comment them and comment the other half and take it from there.

Note that this process becomes more complicated if you have multiple syntax errors, in which case you may need to comment the entire thing and un-comment file by file.

Here's another free tip: don't re-declare class variables when you initialize them in your constructor, or the constructor will quietly create and initialize a local variable instead and the class variable will never get its value.
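A minimal example of what that bug looks like in a sketch (the Player class and health field here are hypothetical):

    class Player {
      int health;

      Player() {
        int health = 100;  // BUG: the "int" declares a new local variable that
                           // shadows the class variable, which stays at 0
      }
    }

    class FixedPlayer {
      int health;

      FixedPlayer() {
        health = 100;      // no re-declaration, so the class variable is set as intended
      }
    }

    void setup() {
      println(new Player().health);       // prints 0
      println(new FixedPlayer().health);  // prints 100
    }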

Friday, June 17, 2011

Performance profiling and analysis of webpages using Shark and nightly Firefox builds

This entire post was lifted directly from a presentation by David Humphrey. Probably with some errors since I don't know what I'm doing as well as he does.

Sometimes you experience really poor performance in your JS applications. The causes of these performance issues can be very unintuitive, and you can't always find them by staring blankly at the code for hours on end. This post is about how you might go about finding these problems so that you can fix them.

You can find nightly builds of Mozilla Firefox right here. One of them is labelled as shark, as seen in this image.

This build of Firefox has been made to work properly with the OS X application Shark. If you have it installed then it is at /Developer/Applications/Performance Tools/Shark.app; otherwise you'll have to grab Xcode from here. Note that these instructions are for Mac OS X users only!

Once you have that Shark-enabled version of Firefox, run it and navigate to whatever page you want to test the performance of. Then open Shark, point it at Firefox, and click Start. Let it run for a while and then click Stop, and it'll start analyzing the results. Closing whatever performance-heavy window you had open in the nightly build may make the analysis go faster.


The analysis window looks like this:

You can expand the functions in the symbols section to see what they are doing inside- in this case our heaviest functions are all doing WebGL-related work, which is good. You want to avoid situations where a large amount of time is spent in XUL or page-redrawing functions, since that would indicate you are re-drawing the page a lot more than you should be.

You can also view the test results in tree mode, like this:

This gives you another way of looking at the results, where big changes in the time % show where time is being spent- in this case it's in WebGL-related functions, which is again good. From these particular screenshots we can see that we aren't making mistakes like re-rendering the web page repeatedly, but our performance was still terrible.

So we must look for the cause in a different way. Firefox uses JavaScript engines like JaegerMonkey (I'm not very knowledgeable about these yet, so you should look for information on them elsewhere) to compile JavaScript code into machine code that runs much, much faster than the interpreter does. But it can't always compile JS code- for example when it hits a long series of if-else statements- and if it decides it's not worth compiling your JS code then it falls back to the interpreter and you go orders of magnitude slower.

Finding out when it's doing this is a slightly more complicated process.
Step one is to download a debug-enabled build of Firefox. You can find them right here. You're looking for a folder like the one in the screenshot, but with a more recent date.


Step two is to cry about the speed of wireless network access at your location.

Step three is to open up Terminal and go to wherever you just installed that debug build. The .app file contained in mine was called Tumucumaque for trademark reasons. Once there you'll want to run the following command without the quotes: "TMFLAGS=minimal,abort JMFLAGS=abort TumucumaqueDebug.app/Contents/MacOS/firefox -profilemanager > /tmp/log 2>&1"

This will dump stderr and stdout into /tmp/log. The output placed there comes from the TraceMonkey and JaegerMonkey engines, and shows the errors and other events they encountered as they went through your code. Navigate to your test page and let it run for a while, and any errors encountered while parsing and executing the JavaScript on that page will end up in the log file, where you can look at them.

You can quit Firefox once it has been running for a little while and then do a "cat /tmp/log | grep Abort" in Terminal to get a list of all the places where JaegerMonkey or TraceMonkey decided to start flinging poop at the walls instead of working hard on compiling your JavaScript. It may look something like this:

That highlighted bit looks interesting. The line that it is on is telling me that firefox gave up and went back to the interpreter at that location. I wonder why?

Well, when we open up that file and look at line 9068 we see this:

Now, you might say to yourself "Hey, self, this looks like the type of loop that the monkeys can't handle!" And it is. The first time through, the monkeys compile code for the path the loop takes. Then it goes through again and they have to re-compile to include the extra path your code took because it went down a different branch of the if-else tree. Repeat until the monkeys give up optimizing and the interpreter steps in. Given that this is the main rendering loop of C3DL, that is a very bad place for it to happen.

Now your job is to re-write that loop in a way that the monkeys can handle. You'll want to replace those if-else statements with something better, like a switch statement or a JavaScript object that has functions whose names match your conditions, so you can call the AABB function in that object instead of checking whether culling === "AABB".
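As a rough illustration of that object-of-functions idea (the function names and bodies here are hypothetical placeholders, not C3DL's actual code):

    // Hypothetical dispatch object: one entry per culling mode
    var cullers = {
      "AABB": function (object, frustum) {
        // an axis-aligned bounding box test would go here
        return true;
      },
      "BoundingSphere": function (object, frustum) {
        // a bounding sphere test would go here
        return true;
      }
    };

    function isVisible(culling, object, frustum) {
      // One property lookup and call replaces a chain of if-else string comparisons,
      // so the hot loop follows the same code path on every iteration.
      return cullers[culling](object, frustum);
    }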

When the monkeys stop complaining about that loop you should see a large performance increase.

Friday, June 3, 2011

Automated testing

You can see the automated testing that I've created at Germany. It will automatically take you to the test results page when it is done testing. There are also performance tests at Germany-Perftests.

The test suite uses a slightly modified version of Sundae to perform the reference tests. Each test loads any necessary files, executes the reference test, uploads the results to the server by way of an HTTP GET request, and then forwards the browser to the next page in the test sequence.
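The report-and-advance step might look something like the sketch below; the endpoint name, parameter names, and the reportResult function are hypothetical placeholders rather than the suite's real code, and jQuery is assumed to already be loaded on the test page.

    // Send the result as query-string parameters in a GET request, then advance.
    function reportResult(testName, passed, nextTestUrl) {
      $.get("/storeResult.pl", { test: testName, passed: passed ? 1 : 0 }, function () {
        // Only move on to the next test page once the server has stored the result
        window.location.href = nextTestUrl;
      });
    }

    // e.g. reportResult("teapot-render", true, "tests/cube-render.html");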

The server has two perl scripts on it. One of the scripts stores the results of tests in an SQLite database and can also create the database if it does not exist. The other script generates the HTML page that shows the test results.

The results page uses jQuery UI with the Datatable plugin to turn a normal HTML table into one that has re-arrangeable rows (to make them easier to compare) and has sortable table headers.


Please let me know about any comments you have. I'm still trying to determine the best way to show the performance results- the current system of just showing the FPS seems hard to read. And a lot more information could be gathered about the computers that are submitting the tests- at the moment only the IP address and user agent are recorded. I intend to add detail pages that show the test results from all of the computers that make up a given user agent, as well as one that shows a timeline of test results for a given test. This should help in analysis of the test results.

At the moment though I believe that it's a very good test suite for C3DL because the only thing needed to run the test is to visit the web page- there is no need for any sort of deployment which means it should scale extremely well and allow us to test on a multitude of platforms. Try it out and tell me what you think. I'm very interested in expanding the test suite so that it can be used for any sort of HTML canvas related testing.

Wednesday, May 11, 2011

C3DL Test Automation

My main assignment for this summer will be to add features to and improve the performance of C3DL. I believe that improving performance is impossible without an automatic and consistent way to gather evidence about what the performance actually is. It's also necessary to test that the engine is still rendering images correctly, so that we know immediately if something breaks.

In order to accomplish both of these goals my first objective is to establish an automated testing suite. Because CDOT has given me a machine with a public-facing IP address I can set up some sort of WIMP stack on it that can receive test results and store them in a database. Then I can set up some JavaScript test scripts that will run tests and deliver results to that stack.

The image rendering tests will render a static image, and as a result can be used in something like testswarm. This will allow us to ensure cross-browser and cross-operating-system compatibility.

The performance tests will need to be run on machines that are guaranteed not to be doing anything else when they run, but they don't need to run on nearly as many machines or operating systems as the image tests, because an improvement in the engine's performance should show up across all platforms. Eventually we may get into coding specific enhancements for particular platform/browser/graphics card combinations, and I very much don't want to be working on this project if that happens, so I don't need to worry about it right now.

I'll be making a battery of tests that check very specific features and are designed for a machine to test them, unlike the human-oriented tests that we currently have. These tests will need to be designed in a way that allows some javascript code to automatically redirect between them once they have finished and the results have been delivered.

By storing the results of all the tests in a database we'll be able to make a timeline of rendering/performance increases across the lifespan of the project, showing quantitatively how much work we've been doing on it.

We can also show the results of our tests to the browser developers if they reveal bugs in the browsers.

Should be an exciting process! Although setting up the actual WIMP stack itself is really boring right now.

Monday, May 9, 2011

JQuery, JSONP, and fun

I just created the first release version of my twitter widget! It was not an easy process, as I haven't ever really debugged anything in JavaScript before.
Many of my issues were due to this line:
$.getJSON(jsonurl + "&callback=?", {}, function (json) {
  // ...a whole bunch of code follows
});

While making that line and encountering numerous errors I discovered the following facts:

JSON is not the same as JSONP, and due to extremely valid browser security restrictions you can't access JSON files from outside your domain- JSONP, on the other hand, is fine because your browser interprets it as a script.

Twitter needs a callback=? parameter passed in order for it to give you JSONP instead of JSON.

jQuery will automatically substitute a generated callback function in place of that ?.
Giving the function(json){ a whole bunch of code follows } in place of that {} causes it to not work in some circumstances.
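Putting those facts together, a minimal working example of the pattern looks roughly like this sketch. The search URL is only a hypothetical example (the Twitter search API of that era); the real widget builds jsonurl itself.

    // Hypothetical example URL; any JSONP-capable endpoint works the same way
    var jsonurl = "http://search.twitter.com/search.json?q=cdot";

    $.getJSON(jsonurl + "&callback=?", {}, function (json) {
      // jQuery sees the callback=? parameter, swaps in a generated callback name,
      // and loads the response as a script, so the cross-domain restriction on
      // plain JSON doesn't apply. The parsed object arrives here.
      console.log(json);
    });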

Now that those issues have been resolved the twitter widget works properly. It can either display tweets by searching for them or by showing all the ones from a given user.

You can check it out at http://pastebin.com/DUnqw0n6 . Please save it into a file named twitterWidget.js
An HTML file that will test it can be found at http://pastebin.com/GbnGQvB9 . It needs to be placed in the same directory as twitterWidget.js
You also need dashBoard.js, which needs to be placed one directory up from the other two files and can be found at http://pastebin.com/0i7kLEsa

Thursday, May 5, 2011

First week at CDOT

This is my first week here at Seneca's Center for Development of Open Technologies (CDOT).
So far it compares very favourably with other environments where I have done software development- the other developers and managers care about the things on lists like this one.
There is a genuine and concerted effort to make sure that developers here have the tools and environment they need to make good code.

I'm also proud to work here because of the altruistic goals of CDOT- to create technologies that everyone can benefit from. I've been a fan of open source software for years, beginning sometime in 2002; since then I've been to see Richard Stallman give talks twice and have switched to using almost entirely free and open source software in my day to day life.

So far most of my time has been spent updating myself on what javascript has been up to since the last time we seriously talked in 2006, which is a lot of things! It's like an actual real language now. Which I will be using to enhance C3DL and make it faster and better than ever before.
I'm also working on this dashboard project that Dave Humphrey is having us do. The end goal is to have something that will let people see, at a glance, statistics/advertisements about CDOT's projects. Or anything else that people want to code javascript widgets for.

*fakeinformation*
The end end goal is, of course, to provide the actual dashboard for the free software powered car that CDOT is building. They plan to use that funny mannequin-looking guy by the windows to provide power for it. He's actually a real person! Just very lazy.
It's step 12 on their 42 step world domination plan.
Shhh. Don't tell anyone.
*/fakeinformation*

The first widget I'm working on is a twitter feed. Learning to do this has taught me many things about JSON (it's very easy to do; you just need to press x) and jQuery and javascript itself.

You can see the first version of the code for it here.

That's all I can think to write on for the moment. Time to learn me some git.

Tuesday, May 3, 2011

First post

This is a sample post