This project is read-only.
Project Description
Some sample code demonstrating how various GUI architectural patterns can be implemented in WPF.

These patterns include:
  • Autonomous View
  • Passive View
  • Supervising Controller
  • MVVM/ViewModel/Presentation Model

In case you are wondering, Model-View-Presenter is covered in the list. Supervising Controller and Passive View are two flavors of MVP.

Be sure to check out for an excellent WPF/Silverlight framework.

 Christopher Bennage News Feed 
Monday, May 20, 2013  |  From Christopher Bennage

A friend was reviewing the last post and he asked two questions about this JavaScript snippet:

var entityIndex = entities.length - 1;
for (; entityIndex != 0; entityIndex--) {
    // ...
}

  • Why am I initializing entityIndex outside the loop?
  • Why do I compare entityIndex to 0?

Initializing Outside

The answer to the first question is really just personal readability (well, perhaps a small touch of “this will make people pause and think”).

Let’s dig into the construct a bit though. The declaration of a for loop consists of three expressions. (I’m not talking about for(in) here.)

The first expression is an initializer; it is executed just once. It is usually something like var i = 0. This expression is still subject to hoisting.
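To see what hoisting means here, consider this sketch (my example, not the post’s code):

```javascript
// The `var` declaration is hoisted to the top of the enclosing function,
// so these two forms behave identically.
function countUpA() {
    for (var i = 0; i < 3; i++) { }
    return i; // i is still in scope here: 3
}

function countUpB() {
    var i; // what the interpreter effectively does
    for (i = 0; i < 3; i++) { }
    return i; // 3
}
```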

The second expression is a conditional. It’s executed at least once.

The third expression modifies the state and is executed once each time the condition is true. (No big surprises here.)

In the case above, I instinctively felt there was too much going on in one line, so I moved the first expression outside of the for. This doesn’t really have any impact on the way the code is executed and (since the variable is hoisted) is actually a bit closer to what the interpreter is really doing.

Comparing to Zero

I chose to use entityIndex != 0 not because I wanted to compare to zero, but because I wanted to avoid the cost of evaluating entities.length repeatedly. Since the second expression is evaluated over and over, we don’t want to do anything expensive there. If our entities had lots of members, then calculating length could have a significant impact.
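For illustration (my sketch, not the post’s code), here are the two styles side by side. Note that this counting-down loop uses >= 0, so unlike the snippet above it also visits index 0:

```javascript
var entities = ['a', 'b', 'c'];
var visits = 0;

// Evaluates entities.length on every pass through the condition:
for (var i = 0; i < entities.length; i++) {
    visits++; // process entities[i]
}

// Reads length only once, in the initializer:
for (var n = entities.length - 1; n >= 0; n--) {
    visits++; // process entities[n]
}
```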

There is some question about the relative performance of > 0 vs != 0, however the test results for that seem to indicate that it is not consequential.

Closing Thoughts

  • It’s easy to obsess about optimizations, but it’s important to understand that many micro-optimizations are browser-specific. So test, test, test before you waste time on it.
  • I made the one change because it was more readable to me. It might not be so to you. If so, don’t do it.


Tuesday, March 05, 2013  |  From Christopher Bennage

This is a continuation from the previous post.

Setting The Stage

The game we’re building will have waves of enemy ships fly in to attack the player’s units. Let’s begin by making a simple enemy as well as some dummy targets for them to attack. I’m going to keep the graphics very simple for the moment. Likewise we are going to focus on the enemy behavior and not worry about any player interaction just yet.

Here’s a demo of what we’ll make. Click on the start screen to transition into the game. The little yellow rectangles are our enemy ships. Each one projects its own target as a little red circle. Once it touches its target, it projects a new one and then flies toward it.

Let’s start from the top down. Our enemy units will “live” in our main screen for the game. (At least for the time being.) This screen needs to expose the same interface that we had for the start screen we made in the last post. We’ll also add a start method that we’ll call just once in order to initialize things.


Here’s the implementation:


The entities array will contain a list of the enemies we’re tracking. I used the name “entity” because this is a common term in game development. In general, it means something that has behavior and is drawn to the screen. Thus, you can expect entities to have update and draw methods. This is not a hard and fast definition though. You’ll find that the specifics of the definition can vary among engines, frameworks, and developers.

In our start function we populate entities by invoking our (as yet undefined) makeEnemyShip function. I’m passing in two numbers that makeEnemyShip will use to set the x and y position of the ship. I could have used random numbers or even hard coded values, however deriving from the loop’s controls makes it easy to cluster all the ships in the upper left corner of the screen.

The draw and update functions for the screen are very similar. They both iterate over entities and invoke the corresponding function on each entity. They also pass along the necessary context. For draw, this is the 2D drawing context of the canvas and for update it’s the elapsed time since the last frame.

Notice how the loop is structured differently from the loop in start. This is a performance optimization; though it has little consequence with so small an array. On some browsers, the call to length is a bit expensive. (Especially in cases when the array isn’t an array, but something that is array-like.) This adds up when you make the call once per iteration of the loop. We move it out of the loop so that we only call it once. Check out this test. Performance optimizations are tricky and change every time new browsers are released. It’s easy to get confused, and I recommend profiling your code frequently to look for hot spots rather than just guessing about optimizations. I hope to talk more about them later, but if you want more now check out the book High Performance JavaScript by Nicholas C. Zakas.

I had originally written my loops using the newer Array.forEach to iterate over entities. However, this proved to be significantly slower than a for loop.

The screen’s draw method also resets the canvas at the beginning of each iteration. If we did not do this, then everything we drew on previous frames would still be present. For the start screen, I used clearRect; however, here I used fillRect with a solid color.
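Putting the pieces described above together, the screen might look something like this. (A sketch: passing makeEnemyShip in as a factory parameter, and the background color, are my assumptions, not the post’s design.)

```javascript
// A sketch of the main game screen described above.
function makeGameScreen(makeEnemyShip) {
    var entities = [];

    function start() {
        // derive positions from the loop controls to cluster the ships
        // in the upper-left corner of the screen
        for (var row = 1; row <= 5; row++) {
            for (var col = 1; col <= 5; col++) {
                entities.push(makeEnemyShip(col * 10, row * 10));
            }
        }
    }

    function update(elapsed) {
        // read length once, outside the loop, as discussed above
        var i, length = entities.length;
        for (i = 0; i < length; i++) {
            entities[i].update(elapsed);
        }
    }

    function draw(ctx) {
        // reset the canvas with a solid color before drawing the entities
        ctx.fillStyle = 'black';
        ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
        var i, length = entities.length;
        for (i = 0; i < length; i++) {
            entities[i].draw(ctx);
        }
    }

    return { start: start, update: update, draw: draw };
}
```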

Here’s a function that will produce a simple enemy. It follows the same structure we’ve been using, update to handle the behavior and draw to actually draw it on the screen.

Some Bad Guys

Our enemy ships are a little more complicated than the screen they live on. Visually, they appear to have two components: the little yellow rectangle that moves about the screen, and the phantom target that they project as a little red circle. In the final game, they will target one of the player’s units; however, the logic is very similar. In fact, it may become useful in debugging to have each enemy ship render something over its actual target.



Each enemy ship will be responsible for tracking its own state. In this code, the state is captured in a closure. In later code, we’ll track the state in a more traditional way. (I haven’t run tests yet, but I think that using a closure may have a performance impact.)

All of these variables represent the enemy ship’s state.

var position = { x: 0, y: 0 };
var orientation = 0;
var turnSpeed = fullCircle / 50;
var speed = 2;
var target = findNewTarget();  

position is the location on the screen where we will render our ship.

Technically, this is the position in “world space”. World space is the logical space that entities in your game “live in”. This is distinct from “screen space”, which corresponds to the actual pixels on the screen. You can think of it this way: in your game you might have a circle with a radius of 10 located at (100,100). However, where you draw it on the screen will depend upon where the player is viewing it from. If the player zooms in, the circle will grow larger, but this doesn’t change the logical position or radius of the circle. We use the term “projection” to describe this. We project from world space into screen space based upon how the player is viewing the game world. The simplest projection, of course, is just 1:1, which means that there is no difference between world space and screen space. That’s what we’ll stick with for the moment.

orientation is the direction the ship is currently facing. Our ship will always travel in the direction of its orientation. This constraint causes the ship to travel in smooth arcs as opposed to abruptly changing its course.

turnSpeed and speed represent how quickly the ship can turn and how fast it can travel respectively. We won’t be modifying these values after setting them, which means the ship will turn and travel at constant rates. These values represent the rates of change for orientation and position. Note also, we defined turnSpeed in terms of fullCircle. This is an alias for Math.PI * 2; we are dealing in radians and not degrees.

target is a value with the shape { x: Number, y: Number }. The ship will always attempt to move towards this value by adjusting its orientation.


The update function is the real meat of the enemy ship. First, we check to see if we are close to our target. If we are close enough, we consider our goal accomplished and we set a new target. Otherwise, we change our orientation so that we are flying toward our current target.

var y = target.y - position.y;
var x = target.x - position.x;
var d2 = Math.pow(x, 2) + Math.pow(y, 2);

Here, x and y are really the distances between target and position along the respective axes. We want to know these values in order to calculate the distance between them. In general, you use the Pythagorean theorem to calculate distance. For a deeper dive into the math, watch Distance Formula on Khan Academy. Finding the actual distance uses a square root, and calculating a square root is an expensive operation that’s best avoided whenever you can.

We can bypass the need for the square root by working with the distance² (hence the variable name d2). We compare this against the hard-coded value of 16 (that’s 4²). In other words, if the distance between the ship and its target is less than 4 units, we find a new target.

if (d2 < 16) {
    target = findNewTarget();
}
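The same squared-distance check, expressed as a hypothetical standalone helper:

```javascript
// Compare squared distance against a squared threshold to avoid Math.sqrt.
function withinRange(position, target, range) {
    var x = target.x - position.x;
    var y = target.y - position.y;
    return (x * x + y * y) < (range * range);
}
```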

Once we’ve established what the ship’s target should be, we want the ship to move toward the target. As I’ve just mentioned, I’ve chosen to have the ship move at a constant speed. This means that it does not slow down or speed up. The only thing it can do is change the direction it’s facing (orientation). These sorts of constraints determine the personality and character of your game. Bear in mind, you don’t need to simulate the physics to have a fun game. Instead, you need to establish behaviors for your game entities that are merely fun. Fortunately, fun behaviors can often be easier to implement than the actual physics. I recommend taking a look at the demo and tweaking the turnSpeed and speed values to get a small taste for how the behavior can affect the game’s character.

In order to change the ship’s orientation we need to first determine where the ship ought to be facing. The values of x and y we just calculated can be interpreted as a vector. Meaning, it represents the direction and distance (magnitude) from the ship to the current target. To extract the actual angle (in radians) we use Math.atan2(y, x).

var angle = Math.atan2(y, x);
var delta = angle - orientation;

So now we have the direction the ship wants to face, angle, and the direction that it is facing, orientation. We calculate the difference between them and store it as delta.

The basic idea is that we add the value of turnSpeed to orientation once each invocation of update until delta is 0 (meaning that the ship is flying directly at the target). However, some values of delta will cause the ship to “turn the wrong way”. For example, imagine that the ship is facing the top of the screen and that we’ve defined that as orientation === 0. Now, imagine that the target is directly to its right. The value of angle would be π/2 (or 90°). Adding turnSpeed to orientation each frame increments the value from 0 to π/2. However, if the target is directly to the left, then the value of angle would be 3π/2 (or 270°). Simply incrementing orientation would cause the ship to turn right and keep turning right. This is unintuitive behavior; we expect the ship to turn the shortest distance. In order to accomplish this, we translate any value of delta higher than π (180°) by subtracting fullCircle. This normalizes the value of delta between -π and π (or between -180° and 180°).

var delta_abs = Math.abs(delta);
if (delta_abs > Math.PI) {
    delta = delta_abs - fullCircle;
}

We take the absolute value of delta because otherwise we’d have to check for delta < -Math.PI as well. Also, we’ll use delta_abs again.
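One common way to express the same normalization as a helper (a sketch; the post’s code takes the absolute-value route above instead):

```javascript
// A hypothetical helper that normalizes an angle difference so the ship
// always turns the shortest way; the result falls in (-π, π].
function shortestTurn(delta) {
    var fullCircle = Math.PI * 2;
    delta = delta % fullCircle;
    if (delta > Math.PI)   { delta -= fullCircle; }
    if (delta <= -Math.PI) { delta += fullCircle; }
    return delta;
}
```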

If delta is 0, we don’t need to change orientation. When it is different we need to modify the value of orientation.

if (delta !== 0) {
    var direction = delta / delta_abs;
    orientation += (direction * Math.min(turnSpeed, delta_abs));
    orientation %= fullCircle;
}

First, we decide how much to change it using Math.min(turnSpeed, delta_abs). We could just use turnSpeed; however, if we did, it’s likely that delta would never quite be 0 and (depending on the size of turnSpeed) it could result in jittery motion. Secondly, we want to decide which direction to turn: positive values to the right and negative values to the left. We multiply the amount by direction to change the sign, because direction will only ever be 1 or -1. Finally, we modulo orientation to ensure that it stays within a range of -2π to 2π. Otherwise, the calculation of delta would be off.

Our last step in update is to modify the actual position using the latest orientation and speed.

position.x += Math.cos(orientation) * speed;
position.y += Math.sin(orientation) * speed;

Some basic trigonometry is fairly fundamental for most game development. If you’re mathematically lost at this point, I recommend reviewing over at Khan Academy.

Here’s the geometric interpretation of the code. We want the ship to move a distance of speed in the direction of orientation. To do this, we need to project this vector (distance and direction) onto the x and y axes. Since the distance is the length of the hypotenuse of a right triangle, cosine gives us the x part and sine gives us the y part. We can then add these values to the current position.
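A worked example of the projection (my sketch, using canvas coordinates where y grows downward):

```javascript
// Project a move of length `speed` in direction `orientation` onto the axes.
var speed = 2;
var orientation = Math.PI / 2;  // 90°: straight "down" in canvas coordinates
var position = { x: 10, y: 10 };

position.x += Math.cos(orientation) * speed;  // cos(π/2) is 0, so x stays ~10
position.y += Math.sin(orientation) * speed;  // sin(π/2) is 1, so y becomes 12
```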


Drawing the ship to the screen is a bit easier to follow. Here’s the flow of the logic:

  • Save the state of the drawing context.
  • Transform the context to make it easier to draw our ship.
  • Draw the ship.
  • Restore the context back to its original state.
  • Draw the target.

      function draw(ctx) {
          ctx.save();
          ctx.translate(position.x, position.y);
          ctx.rotate(orientation);
          ctx.fillStyle = 'yellow';
          ctx.fillRect(-3, -1, 6, 2);
          ctx.restore();
          ctx.fillStyle = 'rgba(255,0,0,0.5)';
          ctx.beginPath();
          ctx.arc(target.x, target.y, 2, 0, Math.PI * 2, true);
          ctx.fill();
      }

Recall that ctx is the drawing context for the canvas. The context provides a useful API that allows us to move it around before we draw on it. This is analogous to having a sheet of paper that you might shift and rotate in order to make it easier to draw something complicated. This is the purpose of the translate and rotate methods.

First, we use ‘save’ to establish a checkpoint for the drawing context that we can easily revert back to using ‘restore’. The calls to translate and rotate modify the state of the drawing context. This modified state is very specific to the drawing of our enemy ship. If we didn’t translate and rotate the canvas, we’d have to do a lot more work to figure out where to draw the four corners of the rectangle.

I decided that I want my ship to be 6px long and 2px wide. Since I want the middle of my ship to be the point where it rotates, I shift by half the length and half the width; hence the values passed to ctx.fillRect(-3, -1, 6, 2). This puts the new origin (0,0) at the middle of the rectangle, and it makes our call to rotate behave intuitively. If we used ctx.fillRect(0, 0, 6, 2) instead, the ship would appear to rotate around its upper left corner. We’ll use these same techniques once we switch to using sprites.

After we restore the context’s state, we draw the target. We don’t bother using rotate because it’s meaningless to rotate a simple circle. Likewise, we don’t bother with translate since the drawing logic is so simple.

The canvas is a broad topic in itself. I recommend taking a look at the tutorials over at MDN. A handy reference for the APIs themselves can be found on MSDN.

Friday, January 11, 2013  |  From Christopher Bennage

This is a continuation from the previous post.


Many games have a start screen or main menu of some sort. (Though I love games like Braid that bypass the whole notion.) Let’s begin by designing our start screen.

We’ll have a solid color background. Perhaps the ever lovely cornflower blue. Then we’ll draw the name of our game and provide an instruction to the player. In order to make sure we have the player’s attention, we’ll animate the color of the instruction. It will morph from black to red and back again.

Finally, when the player clicks the screen we’ll transition to the main game. Or at least we’ll stub out the transition.

Here’s a demo based on the code we’ll cover later in this post (as well as that from the previous post.)


Here’s the code to implement our start screen.


Recall that our start screen is meant to be invoked by our game loop. The game loop doesn’t know about the specifics of the start screen, but it does expect it to have a certain shape. This enables us to swap out screen objects without having to modify the game loop itself. The shape that the game loop expects is this:

    {
        update: function(timeElapsedSinceLastFrame) { },
        draw: function(drawingContext) { }
    }


Let’s begin with the start screen’s update function. The first bit of logic is this:

hue += 1 * direction;
if (hue > 255) direction = -1;
if (hue < 0) direction = 1;

Perhaps hue is not the best choice of variable names. It represents the red component for an RGB color value. The range of values for this component is 0 (no red) to 255 (all the reds!). On each iteration of our loop we “move” the hue towards either the red or black.

The variable direction can be either 1 or -1. A value of 1 means we are moving towards 255 and a value of -1 means we are moving towards 0. When we cross a boundary, we flip the direction.

Keen observers will ask why we bother with 1 * direction. In our current logic, it’s an unnecessary step, and unnecessary steps in game development are generally bad. In this case, I wanted to separate the rate of change from the direction. In other words, you could modify that expression to 2 * direction and the color would change twice as fast.

This leads us to another important point. Our rate of change is tied to how quickly our loop iterates; most likely 60fps. However, it’s not guaranteed to be 60fps, and that makes this approach a dangerous practice. One way to detach ourselves from the loop’s speed would be to use the elapsed time that is being passed into our update function.

Let’s say that we want it to take 2 full seconds to go from red to black regardless of how often the update function is called. There’s a span of 256 discrete values between red and black. To make our calculations clear, let’s say there are 256 units and we’ll label these units R. Also, the elapsed time will be in milliseconds (ms). For a given frame, if we are given a slice of elapsed time in ms, we’ll want to calculate how many R units to increase (or decrease) hue by for that slice. Our rate of change can be defined as 256 R / 2000 ms, or 0.128 R/ms. (You can read that as “0.128 units of red per millisecond”.) This rate of change is a constant for our start screen, and as such we can define it once (as opposed to calculating it inside the update function).

Now that we have the rate of change, we only need to multiply it by the elapsed time received in update to determine how many Rs we want. A revised version of the function would look like this:

var rate = 0.128; // R/ms

function update(elapsed) {
    var amount = rate * elapsed;
    hue += amount * direction;
    if (hue > 255) direction = -1;
    if (hue < 0) direction = 1;
}

One consequence of this change is that hue will no longer be integral values (as much as that can be said in JavaScript.) This means that we’d really want to have two values for the hue: an actual value and a rounded value. This is because the RGB model requires an integral value for each color component.

function update(elapsed) {
    var amount = rate * elapsed;
    hue += amount * direction;
    if (hue > 255) direction = -1;
    if (hue < 0) direction = 1;

    rounded_hue = Math.round(hue);
}


Let’s turn our attention to draw for a moment. One of the first things you generally do is to clear the entire screen. This is simple to do with the canvas API’s clearRect method.

ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

Notice that ctx is an instance of CanvasRenderingContext2D and not an HTMLCanvasElement. However, there is a handy back reference to the canvas element that we use to grab the actual width and height.

There are options other than clearing the entire canvas, but I’m not going to address them in this post. Also, there are some performance considerations. See the article listed under references.

After clearing the screen, we want to draw something new. In this case, the game title and the instructions. In both cases I want to center the text horizontally. I created a helper function that I can provide with the text to render as well as the vertical position (y).

function centerText(ctx, text, y) {
    var measurement = ctx.measureText(text);
    var x = (ctx.canvas.width - measurement.width) / 2;
    ctx.fillText(text, x, y);
}

measureText returns the width in pixels that the rendered text will take up. We use this in combination with the canvas element’s width to determine the x position for the text. fillText is responsible for actually drawing the text.

The rendering context ctx is stateful. Meaning that, what happens when you call methods like measureText or fillText depends on the state of the rendering context. The state can be modified by setting its properties.

var y = ctx.canvas.height / 2;
ctx.fillStyle = 'white';
ctx.font = '48px monospace';
centerText(ctx, 'My Awesome Game', y);

The properties fillStyle and font change the state of the rendering context and hence affect the method calls inside of centerText. This state applies to all future method calls. This means that all calls to fillText will use the color white until you change the fillStyle.

Notice too that we are calculating the x and y values for the text on every frame. This is potentially wasteful since these values are unlikely to change. However, if we want to respond to changes in canvas size (or even changes to the text itself) then we’d want to continue calculating these on every frame. Otherwise, if we were confident that we didn’t need to do this, we could calculate these values once and cache them.

Now let’s use the red component calculated in update to render the instructional text.

var color = 'rgb(' + hue + ',0,0)';

ctx.fillStyle = color;
ctx.font = '24px monospace';
centerText(ctx, 'click to begin', y + 30);

fillStyle can be set in a number of ways. Earlier, we used the simple value white. Here we are using rgb() to set the individual components explicitly. Any CSS color should work with fillStyle. (I won’t be too surprised if some don’t though.)

Now you might be wondering why we bothered calculating hue inside update since hue is all about what to draw on the screen. The reason is that draw is concerned with the mechanics of rendering. Anything that is modeling the game state should live in update. The tell in this example is that hue is dependent on elapsed time and the draw doesn’t know anything about that.

Update (again)

Moving back to update, the next bit deals with input from the player. In the sample code I’ve extracted the input logic away. The key thing here is that we are not relying on events to tell us about input from the player. Instead we have some helper, input in this case, that gives us the current state of the input. If event-driven logic says “tell me when this happens” then our game logic says “tell me if this is happening now”. The primary reason for this is to be deterministic. We can establish at the beginning of our update what the current input state is and that it won’t change before the next invocation of the function. In simple games this might be inconsequential, but in others it can be a subtle source of bugs.

var isButtonDown = input.isButtonDown();

var mouseJustClicked = !isButtonDown && wasButtonDown;

if (mouseJustClicked && !transitioning) {
    transitioning = true;
    // do something here to transition to the actual game
}

wasButtonDown = isButtonDown;

We only want transition when the mouse button has been released. In this case, “released” is defined as “down on the last frame but up on this one”. Hence, we need to track what the mouse button’s state was on the last frame. That’s wasButtonDown and it lives outside of update.

Secondly, we don’t want to trigger multiple transitions. That is, if our transition takes some time (perhaps due to animation) then we want to ignore subsequent clicks. We have our transitioning variable outside of update to track that for us.

More to come…


Friday, December 07, 2012  |  From Christopher Bennage

See the introduction post for context.

The Loop

In general, game development begins with the game loop. If you come from the business world of software development, this will be strange. You don’t rely on events. Phil Haack once asked me “why a loop?”, to which I responded “uh…”. However, a much better answer would have been this one on stackoverflow.

Okay, so we should use a master loop. If our runtime is the browser, how do we set up this loop? There’s a relatively new API called requestAnimationFrame, and that’s what most folks recommend. Check out these for details:

(I do recall reading something negative along the way about the API, but I couldn’t find it again.)

I used the requestAnimationFrame shim referenced in the Paul Irish post above. The shim is only necessary for older browsers that have not implemented the API. By the way, we refer to each iteration of the loop as a “frame” because of the analogy with traditional animation.


Now that we’ve ensured that requestAnimationFrame is present we can get to our game loop. Here is my game’s bootstrap code (well, an early version of it):

The heart of this is the loop function. It has the following steps:

  • capture the current time
  • calculate the time that has elapsed since the last frame
  • execute the game’s logic for the frame (that’s the update and draw invocations)
  • request the next invocation of loop using requestAnimationFrame
  • record the current time of this frame for calculations in the next one

N.B. This code doesn’t use frameId yet. The idea is that this handle could be used to halt the loop.

The beginLoop function is there merely to provide a closure for some of the variables used to track the state of the loop. It kicks off the loop with its initial invocation of loop.
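Since the bootstrap code itself isn’t reproduced above, here is a sketch of the loop as described. The names beginLoop, loop, and frameId come from the post; details such as using Date.now for timing are my assumptions.

```javascript
// A sketch of the game loop described above.
function beginLoop(currentScreen, ctx) {
    var lastTime = Date.now();
    var frameId; // could later be used to halt the loop

    function loop() {
        // capture the current time and compute the elapsed time
        var currentTime = Date.now();
        var elapsed = currentTime - lastTime;

        // execute the game's logic for this frame
        currentScreen.update(elapsed);
        currentScreen.draw(ctx);

        // request the next invocation of loop
        frameId = requestAnimationFrame(loop);

        // record this frame's time for the next calculation
        lastTime = currentTime;
    }

    loop(); // kick off the loop
}
```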

The big mystery inside of loop is the currentScreen object. Here I was thinking ahead (which can be dangerous). I know that my game will have at least two “screens”, possibly more:

  • start menu screen
  • main game screen (where the action takes place)

I wanted the loop logic to work with both (as well as any future screens). I expect screen objects to have two methods:

  • update takes the time elapsed since the last frame and is responsible for updating the state of the game.
  • draw takes the drawing context (from the canvas) and is responsible for rendering the current state of the game.

You’ll also see that I grab a canvas element and capture its drawing context. If you are not familiar with the canvas APIs, I recommend that you start here.

Why two different methods for game logic?

Keeping the update and draw functions separate is important. When frames become expensive to compute, the game may respond with lag or non-deterministic behavior. To avoid this, you might want your game to skip over some logic during a particular iteration of the loop. However, it’s very likely that you won’t want to drop calls to update. It’s not necessarily a big deal if you skip rendering a couple of frames; however, if you skip calculating the location of a projectile then it might mysteriously “pass through” its target. This will become more relevant to us in particular, because I’d like to allow the player to control the speed of the game (a common feature of many tower defense games).

Right now update and draw are always called for each iteration of the loop, so the distinction is academic in this context. We could, though, calculate our frame rate in loop and occasionally skip invoking draw if the rate slowed down.

Now we have enough in place to begin working on our start menu screen.

Friday, December 07, 2012  |  From Christopher Bennage


Something disgusting, like six years ago, I listed on 43Things that I wanted to write a video game. I’ve actually made numerous arrested attempts ever since I started programming on my TI-99/4A back in 1983. My last attempt has been much less arrested (though still incomplete).

I’ve learned a lot in my most recent endeavor, so it’s time to share. You can follow the actual work in progress, but my plan is to recreate the steps I’ve gone through over the course of a few posts.


I am too ambitious. With that in mind, I created a set of constraints for making a game.

  • keep gameplay simple
  • don’t worry about art (that can come later)

I started off wanting to make a game for the Windows 8 store. I decided afterwards that I will target modern browsers in general. This means that I took no dependencies on the WinJS libraries. (Though the Windows store is still my endgame.)

I also decided to not use any frameworks (such as ImpactJS). Not because they are bad, but because I want to learn why I need them.


This is my spec (well, more or less).

I decided to make a simple tower defense game. My inspiration is The Space Game from the Casual Collective, as well as plenty of influence from StarCraft.

The player will build structures in an asteroid field. Waves of enemy ships will attempt to destroy those structures. The player has to manage resources such as minerals and solar power, and fend off the attacks. Structures will cost minerals to build and require power to operate.

The player can navigate the map (up, down, left, right) as well as zooming in and out. There will be a minimap.

Graphics will be sprite-based. The game should be touch-friendly (really, I want touch to be primary).


  • Build New Games, a collaboration between Microsoft and Bocoup, is an excellent set of articles on HTML/JavaScript game development.

  • My friend, Matt Peterson, currently a graduate student at DigiPen, whose advice and guidance have been most useful.

Tuesday, December 04, 2012  |  From Christopher Bennage

Our annual (or mostly annual) conference is coming up in January. I’m really excited about our set of speakers, including:

Really, we have too many great speakers to list. The spectrum ranges from Scott Hanselman to Guillermo Rauch.

Check out the full list of speakers.

The event is in Redmond, WA on Microsoft’s campus. The dates are January 15-17.

Registration is $498.

You can read all the details or you can jump straight to the registration.

I look forward to seeing you there!

Friday, November 16, 2012  |  From Christopher Bennage

I’ve been responsible for the technical evaluation portion of some developer interviews recently. I stumbled through the first few, unhappy with my aged and worn approach of asking questions, having the candidate write pseudo code on a whiteboard, and so on. A friend challenged me: he said that the interview should be a positive experience for the candidate even if they don’t get the job.

With that in mind, here’s what I decided to do.

A few days before an interview, I’d send the candidate a link to a repository. Specifically, some code that p&p had worked on and that was publicly available. (I’d ask ahead of time what languages and platforms the candidate was comfortable with, and choose a code base accordingly.) I’d tell the candidate to be prepared to write some code during our time together.

Next, I’d pick two or three scenarios (stories or bugs) to work on with respect to that code base. However, I would not share the exact scenarios with the candidate ahead of time. I like to see how a candidate first reacts to a problem. It also gives me an opportunity to observe the candidate navigating unfamiliar source as they acquaint themselves with what needs to be done.

I’d allow the candidate to bring their own computer (if they desired), to search the web (a very important skill), and to ask me questions. Furthermore, I would spend at least half of the time ping-pong pairing. They would write a test and then I’d implement it, we’d switch and so on.

I was also careful to share all of this with the candidate ahead of time. Being prepared is important, and I like to see how candidates prepare. Interviewing is not about solving clever tricks; it is about seeing if the candidate can be a productive team member. My purpose was to simulate actual work.

I think that my approach still has plenty of room for improvement, but I like the direction it’s been going so far.

Tuesday, August 21, 2012  |  From Christopher Bennage

N.B. If you don’t know anything about WinJS, take a moment to peruse this primer. Also, the context of this post is the p&p Hilo project.

In particular, you should read about promises and asynchronous programming in JavaScript. Derick Bailey also wrote about promises on his blog.

A Bit About Promises

A promise is an object. It is not a function and it is not the value returned from the async operation. To get to the value, you need to call the then method on the promise object. You pass a callback function as an argument to then. The promise invokes the callback and passes the value you’re interested in into the callback. Clear as mud, right?

Here’s a fictitious example that pretends like calculating a random number requires an async operation:

getRandomNumberAsync().then(function (someNumber) {
    // do stuff with `someNumber`
});

The call to then returns a promise itself. You could do this:

getRandomNumberAsync().then(function (someNumber) {
    // do stuff with `someNumber`
}).then(function () {
    // more stuff
});

Or written another way:

var afterRandomNumber = getRandomNumberAsync().then(function (someNumber) {
    // do stuff with `someNumber`
});

afterRandomNumber.then(function () {
    // more stuff
});

The two examples above are equivalent.

Now if our callback function returns a value, that value is passed along to the next promise’s callback.

getRandomNumberAsync().then(function (someNumber) {
    return someNumber + 1;
}).then(function (someNumberPlusOne) {
    // `someNumberPlusOne` is the value returned by the first callback
});


This allows you to easily chain promises, piping the output of one into the next callback in the chain.

getRandomNumberAsync().then(function (someNumber) {
    return someNumber + 1;
}).then(function (someNumberPlusOne) {
    return someNumberPlusOne + 1;
}).then(function (someNumberPlusTwo) {
    // and so on
});


Of course, this is a bit silly when the operations are not async. It’s more interesting when the thing you return from the callback is also a promise. Let’s make another fictitious async function, this time one that needs input:

getRandomNumberHigherThanAsync(10).then(function (someNumberOverTen) {
    // do something with `someNumberOverTen`
});

Now we can do this:

getRandomNumberAsync().then(function (someNumber) {
    return getRandomNumberHigherThanAsync(someNumber);
}).then(function (something) {
    // What will `something` be?
});

In the example above, you might think that something will be the promise returned from getRandomNumberHigherThanAsync. It’s not. Instead, it’s the value that getRandomNumberHigherThanAsync produces and would pass into its callback. Returning another promise from within the callback for a promise is a special case. Though it’s probably the most frequent case.
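This unwrapping behavior can be demonstrated with standard JavaScript promises, which behave the same way in this respect. A minimal sketch; the two function bodies here are hypothetical stand-ins for the fictitious async operations:

```javascript
// Hypothetical stand-ins for the two fictitious functions:
function getRandomNumberAsync() {
    return Promise.resolve(4); // pretend this took an async hop
}

function getRandomNumberHigherThanAsync(minimum) {
    return Promise.resolve(minimum + 1);
}

getRandomNumberAsync().then(function (someNumber) {
    // returning a promise here means the *next* callback receives
    // the value that promise produces, not the promise object itself
    return getRandomNumberHigherThanAsync(someNumber);
}).then(function (something) {
    console.log(typeof something); // 'number', not a promise
});
```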

Putting Promises Together

Now let’s pretend we have a set of functions that all return promises, named A through E. If we wanted to execute them in sequence, passing the results from one to the next, we could write it like this:

A().then(function (a) {
    return B(a).then(function (b) {
        return C(b).then(function (c) {
            return D(c).then(function (d) {
                return E(d);
            });
        });
    });
});

Yeah, that hurts my eyes too. Though I found that I was writing my code just like this at first.

However, we should realize that A().then() returns a promise, and that that promise completes only when all of the nested promises have completed. If we wanted to execute a new function F after all these steps, we could do it like this:

var waitForAllToBeDone = A().then(function (a) {
    return B(a).then(function (b) {
        return C(b).then(function (c) {
            return D(c).then(function (d) {
                return E(d);
            });
        });
    });
});

waitForAllToBeDone.then(function (e) {
    return F(e);
});

However, that last inline callback has the same signature as F. That means that we can simplify to this:

waitForAllToBeDone.then(F);
Now we realize that what we did for F is also true for E. In fact, it is true for the entire chain. We can simplify that nasty nested beast to:

A().then(B).then(C).then(D).then(E).then(F);
Much nicer.
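To see that the flattened chain really pipes each result into the next function, here is a runnable sketch using standard JavaScript promises, with simple stand-ins for the fictitious A through F:

```javascript
// Stand-ins for the fictitious functions; each returns a promise:
function A()  { return Promise.resolve(1); }
function B(a) { return Promise.resolve(a + 1); }
function C(b) { return Promise.resolve(b + 1); }
function D(c) { return Promise.resolve(c + 1); }
function E(d) { return Promise.resolve(d + 1); }
function F(e) { return Promise.resolve(e * 10); }

// The flattened chain: each resolved value feeds the next function.
A().then(B).then(C).then(D).then(E).then(F).then(function (result) {
    console.log(result); // 50
});
```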

A Real Example

Let’s bring this home. While working on HiloJS we needed to copy an image thumbnail to a new file. It sounds simple, but it requires the following steps:

  1. Open a file that we will write to. We’ll call this the target file.
  2. Get the thumbnail image from another file. We’ll call this the source file. (WinRT creates the thumbnail for us from the source.)
  3. Copy the stream from the thumbnail source to the target file’s input stream.
  4. Flush the output stream.
  5. Close both the input and the output stream.

(Actually we don’t really care about the order of the first two steps. They could be switched.)

Our initial implementation looked like this:

function writeThumbnailToFile(sourceFile, targetFile) {

    var whenFileIsOpen = targetFile.openAsync(fileAccessMode.readWrite);

    return whenFileIsOpen.then(function (outputStream) {

        return sourceFile.getThumbnailAsync(thumbnailMode.singleItem).then(function (thumbnail) {
            var inputStream = thumbnail.getInputStreamAt(0);
            return randomAccessStream.copyAsync(inputStream, outputStream).then(function () {
                return outputStream.flushAsync().then(function () {
                    inputStream.close();
                    outputStream.close();
                });
            });
        });
    });
}

Then we had a code review with the always helpful Chris Tavares. He pointed us in a more excellent direction. We were able to change the code to this:

function writeThumbnailToFile(sourceFile, targetFile) {

    var whenFileIsOpen = targetFile.openAsync(fileAccessMode.readWrite);
    var whenThumbnailIsReady = sourceFile.getThumbnailAsync(thumbnailMode.singleItem);

    var whenEverythingIsReady = WinJS.Promise.join([whenFileIsOpen, whenThumbnailIsReady]);

    var inputStream, outputStream;

    return whenEverythingIsReady.then(function (args) {
        outputStream = args[0];
        var thumbnail = args[1];
        inputStream = thumbnail.getInputStreamAt(0);
        return randomAccessStream.copyAsync(inputStream, outputStream);

    }).then(function () {
        return outputStream.flushAsync();

    }).then(function () {
        inputStream.close();
        outputStream.close();
    });
}

A couple of notable differences:

  1. In the first implementation, we passed along some values via the closure (e.g., inputStream and outputStream). In the second, we had to declare them in the outer scope because there was no common closure.

  2. In the first implementation, we chained targetFile.openAsync and sourceFile.getThumbnailAsync, but we didn’t really need to. We made the real relationship more explicit in the second using WinJS.Promise.join. That meant the values of these two promises came to us in an array (we named it args).
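The join semantics can be sketched with standard Promise.all, which, like WinJS.Promise.join given an array, delivers the results as an array in the order the promises were supplied (the two values here are hypothetical placeholders):

```javascript
// Stand-ins for the two async operations:
var whenFileIsOpen = Promise.resolve('an output stream');
var whenThumbnailIsReady = Promise.resolve('a thumbnail');

Promise.all([whenFileIsOpen, whenThumbnailIsReady]).then(function (args) {
    // results arrive as an array, in the order the promises were listed
    console.log(args[0]); // 'an output stream'
    console.log(args[1]); // 'a thumbnail'
});
```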


Understanding how promises can be composed really helped us to make the code more readable. It can be difficult to wrap your head around the way they work, but (like it or not) promises are an essential part of writing apps with WinJS.

Fictitious Function Implementations

// an example implementation of getRandomNumberAsync

function getRandomNumberAsync() {
    // wrap the value in an already-completed promise
    return WinJS.Promise.as(Math.random());
}

// an example implementation of getRandomNumberHigherThanAsync

function getRandomNumberHigherThanAsync(minimum) {
    var someNumber = Math.random() + minimum;
    return WinJS.Promise.as(someNumber);
}

Wednesday, August 15, 2012  |  From Christopher Bennage

N.B. If you don’t know anything about WinJS, take a moment to peruse this primer. Also, the context of this post is the p&p Hilo project.

One of the first questions we’ve been struggling with is how to best test a WinJS app. (I’m going to use the term “unit test” somewhat loosely. Some of our tests would technically be classified as “integration tests”.)

Where to run the tests

Our first barrier to unit testing a WinJS app was finding a convenient way to run the tests.
The primary difficulty is that the WinRT APIs are only available in the context of a Windows 8 app (and that’s practically the case for WinJS as well). So if your tests need to touch either one, the only choice you currently have is to run the tests inside a Windows 8 app.

After some experimentation, we chose to include a second app in our solution to host the unit tests. (At one point, we had the tests embedded in the actual app itself, executing them with a hidden keyboard shortcut.) Having two apps means that we have to share the source that’s under test. Currently, we’re just manually linking the files. I also have to manually go into the default.html and add references to the scripts. Ultimately, I’d like to have this automated, but that’s a task for another day.

Notice in the screen shot of the solution explorer that the Hilo folder in the Hilo.Specifications project has a little red x. This is because the folder doesn’t physically exist there. Instead, there are just links to the corresponding files in the main Hilo project.

How to run the tests

We settled on Mocha for running our unit tests. Mocha is popular in the Node.js community, and it has (in my opinion) one of the better async test stories. This is really important when building Windows 8 apps because (much like Node.js) all the APIs are asynchronous.

We also chose to use a BDD style for the tests. However, Mocha supports several styles, including a QUnit style.

Mocha will pass a function into your tests for you to call once the asynchronous work is complete. For example:

it('test something asynchronous using a promise', function (done) {

    doSomethingAsync().then(function (result) { // `doSomethingAsync` is a placeholder

        if (!result) { // or whatever assertion is appropriate
            throw new Error('test failed');
        } else {
            done(); // we call the function after the async work is complete
        }
    });
});


If you don’t understand the call to then, take a moment to read about async programming in WinJS apps.

What’s great about Mocha is that if you omit the done parameter, then the harness automagically assumes the test is synchronous. Very nice.
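Under the hood, a harness can make this distinction because a JavaScript function's declared parameter count is observable as fn.length. A sketch of the idea (not Mocha's actual source):

```javascript
// A function's declared parameter count is available as fn.length:
var syncTest  = function () { /* assertions only */ };
var asyncTest = function (done) { /* calls done() later */ };

console.log(syncTest.length);  // 0 -> treat as synchronous
console.log(asyncTest.length); // 1 -> wait for done() to be called
```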

We did have one problem with Mocha. It has an internal recursive process that can cause a stack overflow in IE. Derick Bailey came up with a quick workaround by resetting the stack before each test with a call to setTimeout in our test helper script.

beforeEach(function (done) {
    setTimeout(done, 0);
});

As mentioned before, Mocha is primarily for Node. However, Mocha’s creator, TJ Holowaychuk, graciously allowed me to set up a NuGet package to make it easier for Windows developers to use Mocha.

Steps to install Mocha

  1. Right-click on the test project and select Manage Nuget Packages
  2. Search for “mocha”
  3. Select “mocha for browsers” and click Install
  4. Open the default.html page and reference the scripts. They are located in the \lib folder. (see below)
  5. Open the default.js file and add a call to mocha.run() somewhere after the app is ready.

In my default.html:

<link rel="stylesheet" type="text/css" href="">
<script src=""></script>
<!-- choose the style that you want for tests first -->

<!-- then reference your actual test script -->

A simplified default.js might be:

(function () {
    'use strict';

    var activation = Windows.ApplicationModel.Activation,
        app = WinJS.Application,
        nav = WinJS.Navigation;

    app.addEventListener('activated', function (args) {
        if (args.detail.kind === activation.ActivationKind.launch) {
            args.setPromise(WinJS.UI.processAll().then(function () {
                mocha.run(); // kick off the tests once the UI is ready
            }));
        }
    }, false);

    app.start();
})();


What to mock?

The next big question was about making our code “testable”. I don’t like saying that because, in general, we don’t want test concerns to bleed into the code. (I have some personal principles about these sorts of practices.)

At first, I tried to create a system that would completely mock out every WinRT API. I modeled it after CommonJS Modules. In essence, I made every “module” in my app use a require function to locate its dependencies. Using this approach you had to reference the WinRT API in the very unnatural form of:

var knownFolders = require('Windows.Storage.KnownFolders');

instead of the standard:

var knownFolders = Windows.Storage.KnownFolders;

This made it easy(ish) to mock out the WinRT call in my tests. However, there were a number of negatives to this approach. Mostly, it added an extra layer of complexity and it broke tooling (such as IntelliSense and code navigation).

Instead, we decided to take a more functional approach to our code. As much as was reasonable, we tried to write our code as functions with inputs instead of as objects with dependencies. Then in our tests we could invoke the functions, passing in “mocks” that were shaped like the necessary WinRT dependencies. This meant that we had thin layers in our app that invoked the functions and passed in the necessary bits. It also meant that in a few cases, we had to run tests against the actual WinRT objects. (Technically, I would call these “integration” tests instead of “unit” tests.)
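Here is a sketch of that style with hypothetical names. The real app would pass actual WinRT objects; a test passes plain fakes shaped like them (a standard promise stands in for a WinRT async result):

```javascript
// A function with inputs, rather than an object with dependencies.
// `picturesFolder` is whatever the caller provides.
function countPicturesAsync(picturesFolder) {
    return picturesFolder.getFilesAsync().then(function (files) {
        return files.length;
    });
}

// In a test, a fake shaped like the WinRT folder object is enough:
var fakeFolder = {
    getFilesAsync: function () {
        return Promise.resolve(['a.jpg', 'b.jpg']);
    }
};

countPicturesAsync(fakeFolder).then(function (count) {
    console.log(count); // 2
});
```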

The best example of this approach in the HiloJS project (so far) can be found in tileUpdater.js. In that file, we create a simple object that coordinates the real work using a set of functions. The major functions are defined in their own files (all inside the \Hilo\Tiles folder). We “export” these functions using WinJS.Namespace.define. Exporting them makes them available to the code in tileUpdater.js as well as to our tests.


So far this arrangement has worked really well for us. Working with Mocha has been a lot of fun. The test authoring experience isn’t quite as smooth as I’d like, but I’m sure that will come as we gain more experience.
Remember though, this project is very much a journey, so keep an eye on the project site. We’ll be writing more about it as we learn.

As always, your feedback is greatly desired. Do you have a better way? How does this approach strike you? Feel free to speak up on the project’s discussion board.

Wednesday, August 01, 2012  |  From Christopher Bennage

I’m a few weeks into my latest p&p project. We’re exploring how to build Windows 8 applications with HTML and JavaScript. I’ll refer to these apps as “WinJS apps”.

This post is a very brief overview and introduction to some terminology related to WinJS. It’s my personal take and it’s certainly not official. All of the official documentation can be found at the Dev Center.

What is a WinJS app?

In my recent experience there is often some confusion about Windows 8 apps in general, so let’s begin there.

Windows 8 apps are similar to what you would find on Windows Phone, iOS, or Android, in that they are sandboxed and they have to declare to the user when they use more advanced APIs (like location awareness, for example). The only way for users to get Windows 8 apps is through the store.

Windows 8 apps can be built with C++ and XAML, C#/VB.NET and XAML, or JavaScript and HTML. All three choices have access to the Windows Runtime. It’s the consolidated API for interacting with the OS.

When using JavaScript, the Windows Runtime is available as the global object Windows.

In addition to the Windows Runtime (which I sometimes personally call WinRT), there is the Windows Library for JavaScript, or WinJS. This is different from WinRT. It’s pure JavaScript and only available to JavaScript apps. It’s automatically referenced when you create a new project. It is available as the global object WinJS.

WinJS includes lots of helpful bits:

  • an implementation of CommonJS Promises/A.
  • some advanced UI controls
  • DOM utilities
  • navigation and xhr helpers
  • and more

Technically, you don’t have to use WinJS. If you wanted to, you could ignore it. In practice though, it can be pretty useful.

Finally, you can develop with standards-based HTML, CSS, and JavaScript without worrying about cross-browser issues. For example, I haven’t felt the need for jQuery because I can just use document.querySelector without fear.

Likewise, don’t go looking through WinJS for standard controls; just use the native HTML controls that you already know and love.

Friday, April 27, 2012  |  From Christopher Bennage

It’s common for a single web page to include data from many sources. Consider this screen shot from Project Silk. There are four separate items displayed.

The primary concern of the page is displaying a list of vehicles. However it also displays some statistics and a set of reminders. I labeled the stats and reminders as orthogonal because they are (in a sense) independent of the primary concern. Finally, there is the ambient data of the currently logged in user. I call this data ambient because we expect it to be present on all the pages in the application.

It’s a common practice in MVC-style applications to map a single controller action to a view. That is, it is the responsibility of a single action to produce everything that is needed to render a particular web page.

The difficulty with this approach is that other pages often need to render the same orthogonal data. Let’s examine the code for the action invoked by \vehicle\list.

public ActionResult List()
{
    var vehicles = Using<GetVehicleListForUser>()
        .Execute(CurrentUserId);

    var imminentReminders = Using<GetImminentRemindersForUser>()
        .Execute(CurrentUserId, DateTime.UtcNow);

    var statistics = Using<GetFleetSummaryStatistics>()
        .Execute(CurrentUserId);

    var model = new DashboardViewModel
                    {
                        User = CurrentUser,
                        VehicleListViewModel = new VehicleListViewModel(vehicles),
                        ImminentReminders = imminentReminders,
                        FleetSummaryStatistics = statistics
                    };

    return View(model);
}

Disregarding how you might feel about the Using<T> method to invoke commands and other such details, I want you to focus on the fact that the controller is composing a model. We generate a number of smaller viewmodels and then compose them into an instance of DashboardViewModel. The class DashboardViewModel only exists to tie together the four otherwise independent pieces of data.


Personally, I prefer to avoid classes like DashboardViewModel and simply rely on dynamic typing in the view. However, others feel strongly about having IntelliSense support in the view.


Project Silk had separate actions just to serve up JSON:

public JsonResult JsonList()
{
    var list = Using<GetVehicleListForUser>()
        .Execute(CurrentUserId)
        .Select(x => ToJsonVehicleViewModel(x))
        .ToList();

    return Json(list);
}

You’ll notice that both JsonList and List use the same GetVehicleListForUser command for retrieving their data. JsonList also projected the data to a slightly different viewmodel.

Reducing the Code

While reevaluating this code for Project Liike, we decided to employ content negotiation. That is, we wanted a single endpoint, such as \vehicle\list, to return different representations of the data based upon a requested format. If the browser requested JSON, then \vehicle\list should return a list of the vehicles in JSON. If the browser requested markup, then the same endpoint should return HTML.

First, we needed to eliminate the differences between the JSON viewmodel and the HTML viewmodel. Without going deep into details, this wasn’t hard to do. In fact, it revealed that we had some presentation logic in the view that should not have been there. The real problem was that I wanted the action to look more like this:

public ActionResult List()
{
    var vehicles = Using<GetVehicleListForUser>()
        .Execute(CurrentUserId);

    return new ContentTypeAwareResult(vehicles);
}

Only, the view still needed the additional data of statistics and reminders. How should the view get it?

We decided to use RenderAction. RenderAction allows a view to invoke another action and render the results into the current view.

We needed to break out the other concerns into their own actions. For the sake of example, we’ll assume they are both on the VehicleController and named Reminders and Statistics. Each of these actions would be responsible for getting a focused set of data. Then in the (imaginary) view for List we could invoke the actions like so:

// List.cshtml
@foreach (var vehicle in Model)
{
    @* render each vehicle *@
}

<section role="reminders">
@{ Html.RenderAction("Reminders", "Vehicle"); }
</section>

<section role="statistics">
@{ Html.RenderAction("Statistics", "Vehicle"); }
</section>


Note that each action has its own associated view.


The value of using RenderAction is that we were able to create very simple actions on our controllers. We were also able to reuse the actions for rendering both markup and JSON.

A secondary benefit is the separation of concerns. For example, because we moved the responsibility of composition from the controller into the view, a designer could now revise the view for the \vehicle\list without needing to touch the code. They could remove any of the orthogonal concerns or even add new ones without introducing any breaking changes.

The Downside

There are a few caveats with this approach.

First, don’t confuse RenderAction with RenderPartial. RenderAction is for invoking a completely independent action, with its own view and model. RenderPartial simply renders a view based on a model passed to it (generally derived from the main viewmodel).

Secondly, avoid using RenderAction to render a form. It likely won’t work the way you’d expect. This means that any form rendering will need to occur in your primary view.

Thirdly, using RenderAction breaks the model-view-controller pattern. What I mean is that, in MVC, it’s assumed that the view does nothing more than render a model. Controllers invoke a view, and not vice versa. Using RenderAction breaks this rule. Personally, I have no problem breaking the rule when it results in code that is more simple and more easily maintained. Isn’t that the whole point of best practices anyway?

Tuesday, February 07, 2012  |  From Christopher Bennage

Acknowledgment: This is meant to be the Windows equivalent of Anders Janmyr’s excellent post on the subject of finding stuff with Git. Essentially, I’m translating some of Anders’ examples to Powershell and providing explanations for things that many Windows devs might not be familiar with.

This is the third in a series of posts providing a set of recipes for locating sundry and diverse thingies in a Git repository.

Determining when a file was added, deleted, modified, or renamed

You can include the --diff-filter argument with git log to find commits that include specific operations. For example:

git log --diff-filter=D # delete
git log --diff-filter=A # add
git log --diff-filter=M # modified
git log --diff-filter=R # rename

There are additional flags as well. Check the documentation. By default, git log just returns the commit id, author, date, and message. When using these filters I like to include --summary so that the list of operations in the commit is included as well.

N.B. If you run a git log command and your prompt turns into a : simply press q to exit.

I don’t think that you would ever want to return all of the operations of a specific type in the log however. Instead, you will probably want to find out when a specific file was operated on.

Let’s say that something was deleted and you need to find out when and by whom. You can pass a path to git log, though you’ll need to precede it with -- and a space to disambiguate it from other arguments. Armed with this and following Anders’ post you would expect to be able to do this:

git log --diff-filter=D --summary -- /path/to/deleted/file

And if you aren’t using Powershell this works as expected. I tested it with Git Bash (included with msysgit) and good ol’ cmd as well. Both work as expected.

However, when you attempt this in PowerShell, git complains that the path is an ambiguous argument. I was able to, um, “work around” it by creating an empty placeholder file at the location. Fortunately, Jay Hill heard my anguish on Twitter and dug up this post from Ethan Brown. In a nutshell, PowerShell strips out the --. You can force it to be recognized by wrapping the argument in double quotes:

git log --diff-filter=D --summary "--" /path/to/deleted/file

That works!

I’m guessing that PowerShell considers -- to be an empty argument and therefore something to be ignored. I also assume that when the file actually exists at the path, git is smart enough to recognize the argument as a path. (Indeed, the official documentation says that “paths may need to be prefixed”.)

While we’re here, I also want to point out that you can use wild cards in the path. Perhaps you don’t know the exact path to the file, but you know that it was named monkey.js:

git log --diff-filter=D --summary -- **/monkey.js

Happy hunting!

Wednesday, February 01, 2012  |  From Christopher Bennage

Acknowledgment: This is meant to be the Windows equivalent of Anders Janmyr’s excellent post on the subject of finding stuff with Git. Essentially, I’m translating some of Anders’ examples to Powershell and providing explanations for things that many Windows devs might not be familiar with.

This is the second in a series of posts providing a set of recipes for locating sundry and diverse thingies in a Git repository.

Finding content in files

Let’s say that there are hidden monkeys inside your files and you need to find them. You can search the content of files in a Git repository by using git grep. (For all you Windows devs, grep is a kind of magical pony from Unixland whose special talent is finding things.)

# find all files whose content contains the string 'monkey'
PS:\> git grep monkey

There are several arguments you can pass to grep to modify its behavior. These special arguments make the pony do different tricks.

# return the line number where the match was found
PS:\> git grep -n monkey

# return just the file names
PS:\> git grep -l monkey

# count the number of matches in each file
PS:\> git grep -c monkey

You can pass an arbitrary number of references after the pattern you’re trying to match. By reference I mean something that’s commit-ish. That is, it can be the id (or SHA) of a commit, the name of a branch, a tag, or one of the special identifiers like HEAD.

# search the master branch, and two commits by id, 
# and also the commit two before the HEAD
PS:\> git grep monkey master d0fb0d 032086 HEAD~2

The SHA is the 40-character id of a commit. We only need enough of the SHA for Git to uniquely identify the commit. Six or eight characters is generally enough.

Here’s an example using the RavenDB repo.

PS:\> git grep -n monkey master f45c08bb8 HEAD~2

master:Raven.Tests/Storage/CreateIndexes.cs:83:         db.PutIndex("monkey", new IndexDefinition { Map = unimportantIndexMap });
master:Raven.Tests/Storage/CreateIndexes.cs:90:         Assert.Equal("monkey", indexNames[1]);
f45c08bb8:Raven.Tests/Storage/CreateIndexes.cs:82:          db.PutIndex("monkey", new IndexDefinition { Map = unimportantIndexMap });
f45c08bb8:Raven.Tests/Storage/CreateIndexes.cs:89:          Assert.Equal("monkey", indexNames[1]);
HEAD~2:Raven.Tests/Storage/CreateIndexes.cs:83:         db.PutIndex("monkey", new IndexDefinition { Map = unimportantIndexMap });
HEAD~2:Raven.Tests/Storage/CreateIndexes.cs:90:         Assert.Equal("monkey", indexNames[1]);

Notice that each line begins with the name of the commit where the match was found. In the example above where we asked for the line numbers, the results were in the pattern:

[commit ref]:[file path]:[line no]:[matching content]

N.B. I had one repository that did not work with git grep. It was because my ‘text’ files were encoded as UTF-16 and git interpreted them as binary. I converted them to UTF-8 and the world became a happy place.


Sunday, January 29, 2012  |  From Christopher Bennage

Acknowledgment: This is meant to be the Windows equivalent of Anders Janmyr’s excellent post on the subject of finding stuff with Git. Essentially, I’m translating some of Anders’ examples to Powershell and providing explanations for things that many Windows devs might not be familiar with.

This is the first in a series of posts providing a set of recipes for locating sundry and diverse thingies in a Git repository.

Finding files by name

Let’s say that you want locate all the files in a git repository that contain ‘monkey’ in the file name. (Finding monkeys is a very common task.)

# find all files whose name matches 'monkey'
PS:\> git ls-files | Select-String monkey

This pipes the output of git ls-files into the Powershell cmdlet Select-String which filters the output line-by-line. To better understand what this means, run just git ls-files.

Of course, you can also pass a regular expression to Select-String (that is, if you hate yourself).


[Next, searching for files with specific content.](/blog/2012/02/01/finding-stuff-in-your-git-repo-2/)

Monday, January 09, 2012  |  From Christopher Bennage

My interest in making software well is an accident. What I’m really interested in is living life well. Chasing that chimerical beast of software “best practices” is merely a happy side-effect.

To that end, there’s an ancient maxim: ‘know thyself’. Despite over three decades of living with myself, I am often surprised by what I do. Surprised, and many times embarrassed.

For example, last week I complained on Twitter about what I had perceived as the selfish and inconsiderate behavior of some of my fellow employees. It was quickly pointed out to me that I was wrong; that I was completely misinterpreting my observations.

Once I realized my mistake, my immediate thought was “Oh, I don’t want people to think that I’m a jerk. I wish I hadn’t said that”. Shortly afterwards though, I realized that I had been more concerned about what other people thought and not my real problem. The real problem was that I was a jerk. I had judged people I did not know with only scant evidence. This reminded me of another ancient maxim: “judge not, that ye be not judged”.

Now, here is the surprising conclusion. I’m glad that I stated my faulty opinion out loud, even though it embarrassed me, because it revealed my fault and I had to correct it. I had to confront my own prejudice and fix it. If I had kept the venom to myself, I would have gone on nursing my prejudice.

My take away: it doesn’t matter what people think about me, it matters what I am. It is better for me to surface my flaws and fix them, than it is for me to hide them and decay.

Tuesday, November 01, 2011  |  From Christopher Bennage

Working with people is a lot like working with code. New relationships are green fields. Over time they become brown fields and (just like code) they require maintenance. I’m sure that everyone reading this can identify some legacy relationships that they would describe as, well, complicated. Just like some legacy code.


I mean a lot with the word ‘relationship’. I have in mind everything from co-workers to friends to significant others. All of these variations require maintenance and I think we should deliberately structure our relationships so that maintenance is easier.

Interaction Smells

So what is the social equivalent of a switch/case statement?

We talk about code smells in software development as suggestive indicators that something is wrong. When it comes to relationships, I’ll call them interaction smells. I would consider these common emotional responses to be smells:

  • avoidance
  • irritation
  • suspicion

Personally, I have been guilty of avoiding someone because I thought I would irritate them and I didn’t want the hassle. This was in a work environment and it had a negative effect on the overall efficacy of the group. My impulse to avoid was a smell and it led to a problem that needed to be addressed.

Amicability Debt

Bad code gets worse over time. We call this technical debt. Relationships that have soured do not get better by themselves. Little fractures grow over time. If we don’t address them when we smell them, the stink only gets worse. In addition, the stinky relationship can begin affecting other parts of the design, uh, I mean, other social interactions (e.g., the team you are working with).

Refactoring the Relationship

Relationships are more difficult to work with than code for one primary reason:

You cannot revert back to a previous state if your changes fail.

Nevertheless, we often need to make changes. Refactoring code doesn’t change the exposed functionality, we just make internal changes to improve it. If you are beginning to have problems with your boss, that doesn’t necessarily mean it’s time to quit (that would be changing the function) rather you might just need some relational refactoring.

But what do I mean by refactoring a relationship? Well, there’s a lot to be said and you can find a good deal of practical advice on dealing with conflict over on “Doc” List’s blog.

In brief though, I mean this:

Be honest and humble. “Hey, Joe, I feel like you’ve been a bit on edge with me. Did I do something to frustrate you? I’d like to clear the air.” Then talk it over. Again, refer to Doc’s blog for lots of details.

One final qualification, since you cannot revert what you say and do, you must be deliberate and thoughtful about your refactoring.

This was originally posted in August 2009, but I needed a reminder myself so…

Thursday, October 27, 2011  |  From Christopher Bennage

I’ve alluded before that I did a large chunk of my development in some form of ECMAScript for the first ten years of my professional life. Now, JavaScript is cool again. Everyone wants to learn it.

So, like me, you probably already kinda maybe knew JavaScript. But times have changed and now it’s a serious language. How do you get up to speed? Here’s what I did.

Read Some Books

Eloquent JavaScript

This is probably the best book to start with if you are really rusty (or plain ol’ new). Personally, I found the book a bit tedious and I didn’t quite finish.

Did I mention that it’s free?

JavaScript: The Good Parts

An essential read for modern JavaScript development. It’s short and terse and easy to read. Douglas Crockford is highly regarded, though he can occasionally be harsh. He’s the supreme overlord author of JSLint, a nifty tool that’s useful for detecting the not-so-good parts in your own JavaScript. The information in this book is foundational and I recommend reading it soon.

JavaScript Patterns

This book is awesome. Seriously. Someone should give Stoyan a trophy. It deals with higher level patterns in your JavaScript applications. Be sure to read it after you become comfortable with core language concepts.

High Performance JavaScript

I haven’t actually read this one yet, but it’s on my list. I have however heard Nicholas C. Zakas speak and from that I suspect that the content will be excellent.

Staying in Touch

I’ve found it a little difficult to stay abreast of what’s happening in the JavaScript community.

JavaScript Weekly

The weekly podcast and its associated newsletter have been excellent. Highly recommended.

On the Interwebz

Start with Elijah Manor. Aside from just being a good guy, Elijah is a perpetual fountain of information. So, you’ll want to follow him on Twitter. Caveat: Following Elijah is like drinking from a firehose.

I also recommend:

I’m sure there are many other resources. Please add additional ones in the comments.

Some Thoughts

Here’s a few thoughts about learning JavaScript. Take them or leave them, but these are my current opinions.

Prototypes, not classes

JavaScript is not a classical language (that’s fancy talk for ‘class based’ language). Sometimes it looks classical, and may even taste a little classical, but really it’s not. Don’t try to force it. I think you’ll be happier and you’ll write happy little functions if you embrace its prototypal nature. If you don’t understand the difference, that’s okay. You will after reading the books I listed above.
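Since this distinction matters so much, here’s a minimal sketch of prototypal inheritance (the object names are my own invention, not from the books above):

```javascript
// A plain object serves as the prototype; no class declaration needed.
var animal = {
  describe: function () {
    return this.name + ' says ' + this.sound;
  }
};

// Object.create makes a new object whose prototype is `animal`.
var dog = Object.create(animal);
dog.name = 'Rex';
dog.sound = 'woof';

// `describe` is found via the prototype chain, not on `dog` itself.
console.log(dog.describe()); // → "Rex says woof"
console.log(dog.hasOwnProperty('describe')); // → false
```

Note that `dog` never copies `describe`; lookup simply walks up the chain to `animal` at call time.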

Don’t confuse the language and the environment

We mostly know JavaScript through browser development. As such, we’ve generally confused the evils of the DOM with JavaScript itself. Or at least we did before jQuery rescued us.

However the browser isn’t the only environment. For the troglodytes amongst us, you can use Node.js and write JavaScript on the server.

Leverage the natural strengths

Each of these concepts deserves a post (or more) on their own, so I won’t go into details.


Don’t mix trying to learn JavaScript with trying to learn a framework or library. My initial attempt to learn Ruby was thwarted by Rails. I know that some folks will disagree with me on this point. Here are my reasons:

  • It’s likely that you’ll encounter many new concepts just learning the language.
  • Sometimes it’s difficult to discern between a language feature and a framework feature.
  • Many frameworks embody opinions that can (unintentionally) mislead you about the language (e.g., many frameworks attempt to make JavaScript classy).

Now, having said that, I do recommend exploring the vast diversity of frameworks and libraries out there after you’ve become comfortable with JavaScript.

Some Resources

What else can you add?

Wednesday, October 19, 2011  |  From Christopher Bennage

Take this post cum grano salis. I’m trying to figure this stuff out and I’m thinking out loud.


Whenever a browser makes a request, it includes a string identifying itself to the server. We commonly refer to this as the user agent string. This string identifies the browser and the platform and the version and a great deal more such nonsense.

This sounds great in theory. We should be able to use this data to optimize what’s being sent to the (mobile) browser. However, there’s been something of a sordid history for user agent strings. In retrospect, we’ve realized that user agent sniffing is a tool that has often hurt more than it has helped.

We’ve learned to favor feature detection over browser detection (or device detection). Take a look at Modernizr for more on that front. The success of feature detection has also resulted in a shift from server logic to client logic. We detect features on the client, but we detect user agent strings on the server, before we send anything to the client.
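In code, the difference is asking “can I?” instead of “who are you?”. Here’s a hand-rolled sketch of a feature check (Modernizr bundles up many such checks; `hasFeature` is my own throwaway helper, taking the host object as a parameter so it isn’t welded to `window`):

```javascript
// Ask "can I use this feature?" rather than "which browser are you?"
function hasFeature(env, name) {
  try {
    // Some browsers throw on access when a feature is disabled,
    // hence the try/catch.
    return name in env && env[name] !== null;
  } catch (e) {
    return false;
  }
}

// In a browser you would pass `window`:
//   if (hasFeature(window, 'localStorage')) { /* use it */ }
//   else { /* fall back to cookies or server-side state */ }

// The same check works against any host object:
var fakeBrowser = { localStorage: {} };
console.log(hasFeature(fakeBrowser, 'localStorage'));     // → true
console.log(hasFeature(fakeBrowser, 'applicationCache')); // → false
```

The point is that the check names a capability, not a vendor, so it keeps working as new browsers and devices appear.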

How does all this play into the mobile web? One of the key mobile features we are interested in is screen size. Luckily for us, the W3C has blessed us with media queries. In a nutshell, media queries allow you to conditionally apply CSS based on properties of the display device. This has given rise to something known as Responsive Web Design. Responsive Web Design is about having a single set of markup whose layout can respond to the device’s display capabilities. Unfortunately, there are a few rough edges with this approach.
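As a sketch of what a media query looks like in a stylesheet (the selector and the 480px breakpoint are my own example values, not a recommendation):

```css
/* Default (wide-screen) layout: sidebar floats beside the content. */
.sidebar {
  float: right;
  width: 30%;
}

/* Below 480px wide, let the sidebar stack full-width instead. */
@media screen and (max-width: 480px) {
  .sidebar {
    float: none;
    width: auto;
  }
}
```

Both rules ship in the same stylesheet; the browser applies the second block only while the condition holds, which is what lets one set of markup respond to different displays.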

Moving Backwards…

In the mobile world, client-side feature detection has a few drawbacks. It requires extra code to be sent to the browsers and it takes additional processing on the client. It’s also likely that you’ll end up sending more than is really needed (or that you’ll need to make additional requests).

One solution to this conundrum is to use the open source “database” called WURFL. You can query WURFL with a user agent string and it will return a set of capabilities. I think of it as feature detection on the server, though admittedly it’s a bit misleading to call it that.

This means your server can ask questions like “Does this client support HTML5? If no, what do they support?” before the first response is even sent.

WURFL has commercial support and a C# API. For ASP.NET developers, 51Degrees has an open source project called Foundation that is built on top of WURFL. It uses an HttpModule to automatically query WURFL and populate Request.Browser. Setting up WURFL without Foundation takes a little more work, but not too much. Both are available on NuGet: WURFL and 51Degrees.

What should you do?

I don’t think that there is a cut and dry answer at the moment. What you do depends heavily on your target audience. If you are targeting the North American market there’s a good chance you’ll be okay with a single set of markup, going with a responsive mobile-first design. In other words, there would be no need for something like WURFL.

However, you might need a very broad reach or you might be targeting a market heavy in feature phones or something else that’s very different from North America. In those cases, it is good to understand your options.

Monday, October 17, 2011  |  From Christopher Bennage

I’ve recently discovered that I favor blocks over playsets. I’m talking about toys, and of course the canonical example of blocks is Legos. You can build nearly anything with them. They are useful, versatile, and inviting.

Now, the term ‘playset’ warrants a bit more explanation. I don’t mean the large outdoor sets with swings and sandboxes and spring-loaded ponies. No, I’m a child of the 80s and I loved me some Star Wars playsets.

So my definition of ‘playset’ is colored by my childhood. I think of ‘playset’ as a themed toy representing an environment. Like the Hoth playset pictured here. If you want to pretend you are the Imperials raining destruction upon a ragtag Rebel Alliance, the Hoth Imperial Attack playset can’t be beat.

The problem is that’s all you can do with it. I mean, you can’t use the Hoth playset to stage an epic Cybertronian showdown between Optimus and Megatron. (Well you can, but you’ll have to admit it’s just a bit awkward.)

Let’s get back to the blocks. Those puppies can be used to reconstruct a carbonite freezing chamber as well as to host a dramatic cliff-side battle between Snake Eyes and Storm Shadow. Better yet, you can construct worlds of your own invention instead of merely mirroring those of others.

In reality, it’s not so cut and dry. (Nothing is, is it?) No, in reality, there’s a spectrum. In reality, there is Lego® Star Wars®. The line between blocks and playsets is blurred.


I believe these categories apply to software as well, though we call them libraries and frameworks. Rob Conery asked a question about this on Google+ recently. Derick Bailey provides a definition attributed to Chris Eppstein:

“Frameworks call your code, you call library code.”

I began thinking about this from a different angle. I think that frameworks impose an opinion. Ruby on Rails has a strong opinion about how to create web apps. I think that makes it a framework. Or at least, closer to the framework end of the spectrum than to the library end.
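To make Eppstein’s distinction concrete, here’s a toy sketch (both `stringLib` and `miniFramework` are invented for illustration):

```javascript
// Library: your code is in control and calls in.
var stringLib = {
  shout: function (s) { return s.toUpperCase() + '!'; }
};
var greeting = stringLib.shout('hello'); // you call library code

// Framework: you hand over code and the framework calls out.
function miniFramework(handlers) {
  // The framework owns the flow; your handlers run when it decides.
  return handlers.onStart() + ' ... ' + handlers.onFinish();
}

var result = miniFramework({
  onStart: function () { return 'started'; },
  onFinish: function () { return 'finished'; }
});
```

With the library, your code owns the control flow; with the framework, you only fill in the blanks it exposes, which is exactly where its opinion gets imposed.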

Now, to be clear, I am not saying that opinionated software or frameworks are bad. In fact, they can be brilliant. I think Rails’s strong opinion has been a significant contribution to its success. What’s important to understand, though, are the limitations. When you are using a framework, the boundaries are harder to cross. The results can be strained and unnatural. The problem for me begins when our fanboy favor for a framework leads us to force its use where it doesn’t fit.


Playing with both blocks and playsets is fun. So let’s stretch the analogy even further. What does that mean to software development? My takeaway is this:

Monday, October 10, 2011  |  From Christopher Bennage

The last few weeks I’ve been trying to get a finger on the pulse of mobile web development. I wanted to identify the thought leaders, understand the big questions, and (perhaps most importantly) begin cataloging the practical considerations for building mobile experiences today.

Here’s where I’m at so far…

What is ‘the mobile web’?

The definition of mobile web is quickly evolving. Devices are varied and the distinctions are blurring. If you think it’s as simple as iOS, Android, and Windows then you’ll be surprised. (I do genuinely love my WP7.) Personally, I think the distinction between mobile and desktop is fading more and more every day. When I say mobile web I am talking about HTML-based applications and not applications that are built natively for their respective platforms. Of course, there is debate over native apps versus web apps: when is one appropriate over the other? etc, etc. This is a question we intend to address in Project Liike.

Who to follow?

I’ve been following a mishmash of people, and I must confess that my process of qualifying them has been somewhat haphazard.
I’m compiling a list on twitter. A number of folks on this list are signatories of future

Other sources I’ve been paying attention to are:

  • A List Apart – “For people who make websites.”
  • Smashing Magazine – Lots of articles on design, web, and of course mobile.
  • Cloud Four – Many recent and thorough posts exploring some of the big questions in mobile.
  • Yiibu – They have a lot of interesting ideas, and they’ve done some impressive work for Nokia.

I’ve also been reading through Programming the Mobile Web by Maximiliano Firtman. The first few chapters are pretty scary for someone like myself who did not understand how diverse and scattered the mobile world is. (It’s also funny to see how much has changed since the book was published in 2010.)

Anything you’d recommend?

The state of things

Caveat: This is just Christopher’s brain dump. Consider it merely food for thought.

  • There are many compelling reasons for developing mobile web apps. Not to the complete exclusion of native apps, but maybe?
  • You need to understand your target market and the devices that it uses. Don’t make assumptions. You might be surprised.
  • The space is changing, standards are evolving, solutions are being formulated. However, if you need to build an app today, there is still plenty of pragmatic advice.

One more thought: don’t jump to conclusions. You might read about something cool like Responsive Web Design, but such cool and innovative techniques can be deceptive. Research and testing is your friend.

Last edited Jun 10, 2009 at 10:21 PM by bennage, version 5