Revisiting Android Development

A long while back, I wrote an article which amounted to “the Android SDK sucks.” Since then I’ve had quite a few comments about it, so I’ve decided to re-evaluate Android development.

The platform

Since 2009, Android has gone through three major releases. There are literally hundreds of devices you can target, stuck running everything from 1.6 to 4.1.

In a way this is similar to PC development. You have literally no idea what sort of hardware or OS people are going to run your software on.

Thankfully nowadays Android has a fairly sophisticated set of options for specifying minimum hardware requirements in the application manifest. Starting with version 3 there are also vast improvements in support for different screen sizes and resolutions.
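For illustration, here’s roughly what those manifest options look like. The package name, API levels, and feature choices below are hypothetical examples, not the requirements of any particular app:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.myapp">
    <!-- Refuse to install on anything older than Android 2.2 (API 8) -->
    <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="16" />
    <!-- Only offer the app to devices that actually have a camera -->
    <uses-feature android:name="android.hardware.camera" android:required="true" />
    <!-- Declare which screen sizes the layouts support -->
    <supports-screens android:smallScreens="false"
                      android:normalScreens="true"
                      android:largeScreens="true" />
</manifest>
```

Google Play uses declarations like these to filter the app out of search results on devices that don’t qualify.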

Unless you are doing something incredibly weird or want to support absolutely every device, hardware diversity is the least of your problems with the current SDK.

The SDK

The Android SDK is heavily influenced by what I like to refer to as the “Java mindset”. When developing an app expect to be dealing with lots of XML files, complex Java abstractions, and strange tools.

It’s also quite large (though not as large as the iOS SDK) – I easily spent an hour downloading both the base SDK and the extra packages for Eclipse development.

Eclipse is still the recommended IDE for development, though you can run all of the tools from a terminal if you find it completely intolerable.

IDE

The most notable change I found in the current Eclipse plugin is that the GUI builder has greatly improved. Honestly, I now feel as if I could implement a proper GUI with it.

Emulator

The Android Emulator, your only way of accurately testing an app without buying a real device, is still sadly quite slow. After turning on GPU emulation (which is oddly disabled by default), performance improves slightly though it still feels quite laggy. If I was going to do Android development professionally I would definitely get a real device.

Documentation

Documentation has improved by miles. There are a lot more useful examples of how to do things, and the class documentation has greatly improved. I can actually follow it and not screw up.

Last time, I mentioned the documentation for writing OpenGL applications was a bit lacklustre. I can safely say this is no longer the case – there are now useful examples, both in the samples and the actual class documentation. Not to mention, OpenGL support on devices has improved by miles.

If your codebase relies heavily on C, Android provides a Native Development Kit (NDK). I haven’t looked much into this, as the process to use it seems overly complicated and doesn’t appear to be integrated well with Eclipse.

The Android SDK is of course not the only way to develop for Android. There are also solutions such as Mono for Android and Unity. I haven’t used any of these, but I have heard good things about them.

To Conclude

Both the Android SDK and ecosystem have definitely improved by miles since the last time I looked at them. In a way, Android has matured.

While the SDK is still a bit rough around the edges, I think if you get used to it, you should be able to make some good apps with it.

Maybe I’ll be developing some Android apps in the future…

Thoughts on Euruko 2012

Euruko Amsterdam 2012

Just over a week ago I attended the EuRuKo 2012 event in Amsterdam. I have to say, I was largely impressed.

The event was hosted in the gorgeous Pathé Tuschinski, a cinema about 30 minutes walk away from the central station. It wasn’t that difficult to find, provided you didn’t get it mixed up with the similarly named cinema down the road.

To be quite honest, I didn’t attend many of the talks this year; instead I was mainly networking around the venue. I have to say, though, the introduction to this year’s event made me laugh, and really set the tone for the rest of the event. The keynote this year was a continuation of Matz’s keynote last year regarding Rite (now mruby) and Ruby in general, which was a nice touch.

Since the venue was right in the middle of Amsterdam, there were no problems at lunch getting food. Communication was no problem either as people in Amsterdam seem to be amazing multi-linguists.

It’s also worth noting that this year’s Euruko was actually filmed by a dedicated production team, the results of which can be found on Vimeo.

After lunch I checked out the CocoaPods talk by Eloy Durán, which by all accounts turned out to be one of the most controversial talks this year. It started out as a pretty normal talk describing how CocoaPods (written in Ruby) makes incorporating third-party libraries into your Xcode project a breeze. Then all of a sudden, it turned into a musical meme fest. As someone who didn’t get the in-joke, I was a bit dumbfounded to say the least. It was a bit like seeing an obscure internet meme for the first time.

Whatever was going on inside Eloy’s head, I still have no idea. But really this shows a great side to Euruko: it’s not just about Ruby. It’s not a stuck-up corporate conference where they strictly stay on topic. Anything can happen during a talk. On the second day, there was even a talk on making the perfect coffee – something you would not find at WWDC for sure!

Sadly, the bizarreness did not continue into the next talk. ZeroMQ: Supercharged Sockets by Rick Olson did not feature dancing Octocats doing a musical number.

The rest of the talks for the day were mainly technical and covered topics ranging from Garbage Collection & JRuby to Erlang. There were also the usual set of lightning talks which provide a cool platform for attendees to pitch their problems, thoughts and products.

The second day pretty much followed the format of the first.

The second keynote was provided by Geoffrey Grosenbach, in which he discussed how people can learn from watching other developers code. I had heard this argument presented before, but watching someone like Geoffrey present a well-formed argument on its merits is a whole different experience. It was one of those talks where, if it had been the only talk at the conference, I would gladly still have paid for it.

My only criticism of the second day was that the ending felt a bit lacklustre. The previous year in Berlin there was a singalong to Ruby which was amazing. Nothing in the ending this year could compare to that.

Enough of the talks…

Well, a conference is more than talks all day. What else could a participant do?

On both days there were no problems with refreshments. Tea & Coffee were provided by an excellent team. If you really wanted a break you could simply pop out to one of the numerous restaurants near the venue.

This year there were around 600 attendees, and although the cinema screen was impressive, the room was packed for most of the more interesting talks.

Interestingly, one of the organisers created a mini-game whereby a participant could scan their RFID-enabled identity badge over one of six white boxes spread around the venue to “check in”. Prizes were awarded to those dedicated enough to check in. I felt this was a rather ingenious idea, which added a cool element to the conference.

Also of help was a Euruko Mobile App which provided a neat summary of the events and even push notifications! I’m now convinced that every conference should have an app.

The after-events

On both days there were after-events to go to.

The first was at Club Home, which was not far from the venue. It opened its doors a bit later than expected, so people stood around for a while. The club itself I could only describe as a double-layered sardine can: it was so small and packed I could hardly see where I was going. For those used to nightclubs, I’m sure it was great. Still, free drinks were provided, so it wasn’t all that bad.

The second was at Roest, which seemed like the most difficult place to find in Amsterdam. After tagging along with some Euruko attendees who happened to be passing by the central station, we walked for what seemed like hours until we stumbled across a locked gate across the canal, which stood between us and the venue. There were no tricks this time, we had to walk right around. After what seemed like an eternity, we finally arrived.

In terms of aesthetics, it was way cooler and quirkier than Club Home. It was basically a warehouse stuck in the middle of a sandpit next to the canals. I almost thought I had stumbled across a real-life Garry’s Mod map, as there were bizarrely placed props everywhere. Free drinks were again provided.

So, was it really worth attending Euruko 2012? Yes, absolutely. It’s a great magnet for Ruby developers. It also doesn’t fall into the trap of being a conference where everyone strictly talks about Ruby all day, so it actually has some much-appreciated depth to it.

Bring on Euruko 2013 in Athens!

Postmortem of a Ludum Dare 23 Game

Last weekend I decided to take part in the Ludum Dare 23 jam. Here is a short postmortem.

The theme was “Tiny World”. After a little brainstorming and inspiration I decided to have a go at making an RTS.

After coming up with a basic list of things to do, I started to work on the engine. Since I wanted the minimum of fuss when it came to actually deploying the game, I decided to try out NME, a Flash-like API written in Haxe which describes itself as “a free, open-source framework that enables development for iOS, Android, webOS, BlackBerry, Windows, Mac, Linux and Flash Player from a single codebase” – perfect!

While the “game” ended up being a complete unsubmittable disaster, I still found the experience to be quite insightful.

[Screenshot: TinyConquer]

What went right

After a few hours I managed to get a basic prototype working with a unit and walls. At this stage, the unit merely moved towards the mouse cursor and collided against the walls. Next came unit-unit collision, which turned out to be a bit more complicated since I decided to use arbitrarily sized bounding boxes.

Pathfinding ended up being the simplest thing to implement. For this I used A*. While initially I stumbled across a rather insightful article with illustrations, I eventually turned to a more helpful article on Wikipedia which filled in the gaps.

At the end of the second day, after asking myself “am I really having fun making this?”, I decided not to continue. In the end all I really got implemented was a simple map with units that moved around walls.

What went wrong

Collision detection

It’s easy to write code to determine whether a collision has occurred. It’s another thing to handle what happens after a collision has occurred. While I eventually ironed out most of the issues with the tile collision, I later found there were still problems: if a unit entered a tile incorrectly, it could get stuck.

Simulations are hard to debug, especially when you add in movement and collision. Knowing how annoying buggy collision detection is, I really wanted to get this solid, and I ended up spending far too much time resolving collision and movement issues.

[Screenshot: TinyConquer]

NME wasn’t that great

One of my biggest problems was with NME. Not only did I find drawing to be CPU intensive, I stumbled across a rather odd bug with rotations in the HTML5 target. As soon as an element was rotated, the transform for both the sprite and its children became incredibly screwed up. This did not occur in any of the other targets tested.

In addition, while it worked fine in Chrome and Firefox, Safari was a different matter: I encountered tons of scripting errors and even made it crash!

Finally, text drawing didn’t work properly in the Neko target. While I ended up not needing to draw text, it still put a damper on things.

Loss of motivation

To start off with, while I hyped myself up beforehand, I found that when it came to it I wasn’t really “in the zone”. This made progress much slower than expected. Half-way through, I started to REALLY lose motivation and ended up procrastinating a lot. Coupled with bumping into silly implementation issues, this really killed development. Really, I underestimated the effort required to implement something like an RTS from scratch.

In the end, while my game made for a nice tech demo with prototype vector graphics, it was distinctly lacking in gameplay.

To Conclude

Using something you don’t know well in a development contest is a recipe for disaster. Having not used NME before, I wasn’t quite sure how certain things worked, so I was constantly flicking back and forth through the documentation.

In comparison, had I used something like Unity, 99% of the underlying functionality would already have worked, leaving me to work more on the gameplay and art elements, which are far more important for a game.

When developing a game, getting gameplay in ASAP is a must. Without it, all you are really staring at is a tech demo, not a game.

For something like Ludum Dare, simpler is better. While an RTS is conceptually simple, the underlying framework isn’t. Looking at some of the entries, a lot of the better ones are based on simpler gameplay concepts or backed by powerful game engines.

Finally, it’s worth remembering that most of the time, a good game isn’t made in a day. In fact, a good game can take years to create. So don’t treat Ludum Dare as a way of making a cool new hit game. It probably isn’t going to happen.

For reference, I have put the code to the “game” up on GitHub.

Using OpenAL on iOS

Recently I have been tackling problems in OpenAL code for an iOS app. The trouble with OpenAL is that despite there being a spec, the underlying implementation for this audio API is mostly undefined. What can work for one system can fail miserably in another.

In the case of iOS, no source code is provided, so the underlying implementation is partly a mystery, though we can infer from the documentation that it uses the AudioUnit API.

Here are some best practices I’ve developed based on experimentation:

Do you really need to use OpenAL?

In some cases OpenAL may be overkill. For instance if you are just playing a one-off sound effect when pressing a button, it’s probably a better idea to use AVAudioPlayer.

If, however, you are making a fully immersive 3D AAA shooter, you’re probably better off using OpenAL. If you are really crazy, for ultimate control you could even try writing your own audio mixer.

Never re-use the same source twice

The implementation of OpenAL on iOS 5.x acts rather oddly when it comes to streaming sources. Let’s say you make a music manager and decide to allocate a source. You allocate some buffers, then queue them for the source. You then re-use this source to play various music tracks.

However, problems arise as soon as the source is re-used. If you simply stop it and de-allocate its buffers, then when you queue up a new set of buffers for a new music track, the OpenAL implementation seems to get confused and only plays the first buffer you queue.

Attempting to recover the source at this point is impossible. You can stop it, rewind it, throw buffers away… nothing seems to get it to work properly. Based on this I can only assume you are not meant to use the same source more than once, at least for streaming sources.

Don’t allocate sources you don’t need

Each source you allocate and play in OpenAL will add a mixing unit to the mixer, which will be mixed to produce the final stream. In addition every source you allocate uses memory.

Instead of allocating a bunch of sources up front, only allocate sources as you need them. From what I’ve been able to determine, sources aren’t that expensive to allocate. It might make sense to limit your allocations, however, as there seems to be no fixed limit on the number of sources you can allocate on iOS.

At one point, after getting a general speedup by changing my source allocation, I theorized that perhaps every source you allocate in OpenAL is mixed regardless of whether or not it’s playing. However, basic experimentation seems to indicate there is no major performance issue with merely allocating lots of sources.

Cache your buffers to disk

If, for example, you choose to encode your sound effects in something like the OGG format to save space, you might notice it takes a noticeable amount of time to decode the audio each time you buffer it, even when using Tremor. This is no fun for the end-user.

Using an easier to decode codec is one solution, but if you have long sound effects it’s still going to take time to decode all the samples.

So it makes sense to keep buffers around for sound effects. However, if you keep too many buffers around you will likely bump into low-memory alerts, forcing you to purge non-playing buffers. Then, when you need to play those sound effects again, you have to decode them all over again.

One way to solve this is by temporarily storing your decoded buffers on disk in the temporary folder (NSTemporaryDirectory). That way, when you get a memory alert you can dump your buffers, and whenever you need them again you can quickly re-load them straight from disk.

To conclude

Despite implementation-specific bugs, OpenAL is actually quite a nice and simple library for playing both 2D and 3D audio. There are no licensing fees, the specification is open, and it’s available for practically every relevant modern development platform – what more could you want?

Have fun using OpenAL!

Syncing YouTube with WebSockets

A while ago I came across SynchTube, a rather neat service which allows you to synchronise the playback of YouTube videos (as well as those from other services).

This creates a rather interesting viewing experience: you can discover and collaboratively critique videos you otherwise would never have seen.

[Screenshot: Synchtube]

After playing around with Synchtube, I was curious as to how it worked. It turns out the embeddable YouTube player allows you to query and set the time of a playing video. As long as you have some sort of central authority and a realtime messaging service, you can synchronise playback between multiple users.

In the case of Synchtube, it synchronises the video by sending commands over WebSockets. For instance, if you play around with the JavaScript code, you’ll find simple JSON messages being processed from the server:

Handling add_user with data ["11b85c64",null,null,null,false,false,0].
Handling num_votes with data {"votes":0}.
Handling remove_user with data "0282e6d7".
Sending < with data "=D".
Handling < with data ["66716fea","=d"].
Handling < with data ["55c2f7fa","cats and boots and cats and boots and boots and cats and boots and cats"].
Handling add_user with data ["bbabb3dc",null,null,null,false,false,0].
Handling num_votes with data {"votes":0}.
Handling remove_user with data "bbabb3dc".

Onto the Project

Making YouTube videos sync over the web sounded like a cool idea for a project, so I decided to make a simple synchronised playback system: play a video in one window, and it’s replicated in another via a simple WebSocket protocol.

[Screenshot: TestTube prototype]

Putting together Sinatra, ActiveRecord, and thin-websocket, I made an initial prototype using a simple JSON messaging protocol with two key commands: “video” to set the video, and “video_time” to set the video time:

[client] {"t": "subscribe", "channel_id": 1}
[server] {"t": "userjoined", "user": {"id": "anon_123", "name": "Anonymous", "anon": true}}
[server] {"t": "skip", "count": 0}
[server] {"t": "video", "time": 0, "force": true, "url": "EJ_wXOFQV3M", "provider": "youtube", "title": "STALLMANQUEST", "duration": 152.953, "playlist": false, "position": 0, "added_by": "Anonymous"}

A designated leader simply polled the YouTube player and sent updates to all the other clients.

Nice enough, but I decided to continue by adding more functionality: a playlist, video skipping, chat and moderation. All of this functionality ended up being passed around as simple JSON messages over the WebSocket connection.

All was going well until I bumped into a design crisis. Originally I wanted users to sign up in order to create and moderate channels, but this solution was becoming more and more undesirable. I wanted to try something different.

So I borrowed an idea from a certain anonymous image board: make everyone anonymous, and use tripcodes to identify users who want to be identified.

Why? Well, after using Synchtube for a while, I found the only thing I was interested in was the sharing and discovery of videos, not the excessive point scoring by the community. I also noticed a general hostility towards users without accounts, which to me detracted from the experience of watching cool videos. By making everyone anonymous by default, I hoped to emphasise the viewership aspect.

While this required rewriting half the authentication mechanism, it was well worth it. Except for the administration interface, I didn’t have to worry about implementing user account logic.

Another change I made was to use Backbone.js to help structure the front-end, as well as the administration interface. While this significantly slowed down development, I felt it really helped to keep the back-end service simple and lightweight.

Problems with WebSockets

WebSockets are undeniably great, as they solve a fundamental problem: how to push messages to clients. Unfortunately, they suffer from poor support in web servers. For instance, during development I was able to serve both the front-end and the WebSocket communication through a single port, but I found replicating such a configuration in a production environment to be practically impossible.

Using nginx, I was not able to open a WebSocket through its HTTP 1.1 backend proxy. This led to the rather undesirable solution of having to serve WebSockets directly from the app server on a separate port.

Another problem with WebSockets is that they don’t seem to work reliably with cookies. So if, for example, you want to tie a WebSocket connection to a logged-in user, you need another mechanism to authenticate the user, such as generating a user token.

Usage of Backbone

I have to admit, I hated Backbone.js. It always seemed like a rather arbitrary solution for synchronising models between a client and server. A lot of the examples I saw seemed needlessly complicated and abstracted.

Half-way through development I was getting a bit annoyed at using so many views for something as simple as a list of items, so I decided to refactor and use a different design pattern: use a single view and take advantage of element manipulation and event bubbling in jQuery. This greatly simplified my code in many places, for example:

// Instead of this:

var BanListRow = Backbone.View.extend({
  tagName: 'div',
  className: 'ban_row',
  events: {
    "click a.edit": "edit"
  },
  initialize: function() {
    this.model.bind("change", this.render, this);
  }
});

...

var BanListPanel = Backbone.View.extend({
  tagName: 'div',
  id: 'banedit',
  events: {
    "click a.add_ban": 'createBan'
  },
  initialize: function() {
    BanList.bind('add', this.addBan, this);
    BanList.bind('remove', this.removeBan, this);
    BanList.bind('reset', this.addBans, this);

    BanList.fetch();
  }
});

...

// Consolidate everything together like this:

var BanListPanel = Backbone.View.extend({
  tagName: 'div',
  id: 'banedit',
  events: {
    "click a.add_ban": 'createBan',
    "click a.edit": 'editBan' // row clicks bubble up to the panel view
  },
  initialize: function() {
    BanList.bind('add', this.addBan, this);
    BanList.bind('remove', this.removeBan, this);
    BanList.bind('reset', this.addBans, this);
    BanList.bind('change', this.updateBan, this);

    BanList.fetch();
  }
});

Generally speaking, I found it best to keep the number of view objects to a minimum and take advantage of event bubbling. After realising this, I felt a bit more comfortable using Backbone.js.

To conclude


[Screenshot: TestTube]

In the end, TestTube turned into an anonymous synchronised YouTube playlist. For those interested, I put up an instance so you can check it out:

http://testtube.cuppadev.co.uk/r/1

All in all, I felt this was a really cool project. It goes to show that if you have an idea, even if someone has already implemented it, there is nothing stopping you from having a go at implementing it yourself.

Calling It Quits on RailsCollab

One of my first Rails projects was RailsCollab. It started off as nothing more than a proof-of-concept port of ActiveCollab to Ruby, and ended up being something more feature-complete and in some ways better than the original.

Unfortunately, after nearly 5 years of sporadic development, I’m permanently calling it quits.

Over the years, I’ve had various people help out with RailsCollab and enquire about it. Even though it took a while, I kept updating it. I’ve had everything from hate mail to genuine appreciation. It was even instrumental in landing me a job at a cool startup. But I have never really felt a lasting attachment to it, and so the project has never taken off into its own entity.

Let’s not forget that even in its minimal state, an open source project is costly to maintain. Rails has a major update pretty much every year, causing problems every time the code is updated. And let’s face it: you really do need to keep Rails up to date in such a project, since that’s where people’s interest lies. It’s a never-ending cycle.

One also has to look at the state of project management apps to realise that RailsCollab really doesn’t offer anything revolutionary or compelling to the market. Basecamp-like “killer” systems have been cloned to death, so much so that I don’t even find it funny anymore.

Do I really want to maintain a boring project management app nobody really needs? The answer is of course no.

Help! I Have a Memory Leak

Recently I had a project with some of the worst C++ memory leaks I’ve ever had to deal with. It had just about every memory leak problem you could think of, all of which could have been avoided with a little bit of planning.

Using tools such as Valgrind or Instruments certainly helps, but they can only take you so far.

So if you have a nightmarish C++ project with memory leaks, here are a few ways in which you can solve them.

Stage 1: Forgetfulness

We start off with a simple case: when you make an object but never delete it. e.g.:

  Object *foo = new Object(); // foo never deleted

Which can be solved by:

  delete foo; // <<< delete the object

Stage 2: Garbage Collection

Sometimes you have a pointer to an object which is re-assigned at one point, but the old object is never deleted.

  Object *foo;

  foo = new Object();
  // ... later on ...
  foo = new Object();

Which can be solved by deleting the object before re-assigning:

  Object *foo;

  foo = new Object();
  // ... later on ...
  delete foo; // <<< delete the old object
  foo = new Object();

Stage 3: Destructors

Some people assume that if you make a couple of classes like this:

  class Foo
  {
  public:
    Foo();
    ~Foo();
  };

  class Woo : public Foo
  {
  public:
    Woo();
    ~Woo();
  };

then destroying an instance of Woo through a Foo pointer will call both ~Woo and ~Foo. Only it won’t: without a virtual destructor, only ~Foo is called (strictly speaking, deleting a derived object through a base pointer like this is undefined behaviour). Anything you free in ~Woo will never be freed.

So if you want ~Woo to be called too, the destructor of Foo needs to be virtual, i.e.:

  class Foo
  {
  public:
    Foo();
    virtual ~Foo(); // <<<
  };

Stage 4: Spaghetti

Things start getting complicated when you have objects which can be referenced by multiple objects. For example:

  Object *foo, *child1, *child2;

  foo = new Object();
  child1 = new Object();
  child1->parent = foo;
  child2 = new Object();
  child2->parent = foo;

Now, when do we delete foo? If we make both child1 and child2 delete it, we’ll get a crash from deleting foo twice. If we delete it elsewhere, how do we know child1 or child2 aren’t still using it?

One possible solution is to use a reference counting system like in Objective-C, where the object deletes itself when its count reaches 0:

  class Object
  {
  public:
    Object() : parent(NULL), retainCount(1) {}

    Object* retain()
    {
      retainCount++; // object is being used
      return this;
    }
    void release()
    {
      --retainCount; // object is no longer being used
      if (retainCount <= 0)
        delete this;
    }

    virtual ~Object()
    {
      if (parent) parent->release();
    }

    Object *parent;
    int retainCount; // starts at 1: the creator holds the first reference
  };

  // ...

  Object *foo, *child1, *child2;

  foo = new Object();
  child1 = new Object();
  child1->parent = foo->retain(); // object is being used by child1
  child2 = new Object();
  child2->parent = foo->retain(); // object is being used by child2

If you want to be more fancy you can make a smart pointer class, e.g.

  // The smart pointer (declared first, since Object holds one by value)

  class Object; // forward declaration

  class ObjectReference
  {
  public:
    // Constructor
    ObjectReference() : object(NULL) {}

    // Copy constructor, destructor and assignment are defined after
    // Object, since they need to call retain()/release()
    ObjectReference(const ObjectReference &ref);
    ~ObjectReference();
    ObjectReference& operator=(const ObjectReference &ref);
    ObjectReference& operator=(Object *obj);

    // Pointer operator
    operator Object*()        { return object; }

    Object *object; // reference to Object
  };

  // Modified Object

  class Object
  {
  public:
    Object() : retainCount(1) {}

    Object* retain()
    {
      retainCount++;
      return this;
    }

    void release()
    {
      --retainCount;
      if (retainCount <= 0)
        delete this;
    }

    virtual ~Object()
    {
      // parent is released automatically by ObjectReference's destructor
    }

    ObjectReference parent;
    int retainCount;
  };

  // Assignment initializer
  ObjectReference::ObjectReference(const ObjectReference &ref)
  {
    object = ref.object ? ref.object->retain() : NULL;
  }

  ObjectReference::~ObjectReference()
  {
    if (object) object->release();
  }

  // Assignment operators: retain the new object before releasing the
  // old one, so self-assignment is safe
  ObjectReference& ObjectReference::operator=(const ObjectReference &ref)
  {
    return *this = ref.object;
  }

  ObjectReference& ObjectReference::operator=(Object *obj)
  {
    if (obj) obj->retain();
    if (object) object->release();
    object = obj;
    return *this;
  }

  // ...

  Object *foo, *child1, *child2;

  foo = new Object();
  child1 = new Object();
  child1->parent = foo; // automagically retains foo
  child2 = new Object();
  child2->parent = foo; // automagically retains foo

Beware, however, that if you create a circular reference, your objects may never be released using this method.

Stage 5: Runaway Spaghetti

Even if you have a reference counting system, you might encounter situations where you release or retain objects too often. Typically, memory leak tools only tell you where objects were allocated, not who the offending retainer or releaser is.

One way of solving this is to log where you retain and release each object:

  class Object
  {
  public:
    Object* retain(const char *file=NULL, int line=0, const char *owner=NULL, void *addr=NULL)
    {
      retainCount++;
      if (owner)
        printf("%p: retain (%i) [%s @ %i] OWNER %s[%p]\n", (void*)this, retainCount, file ? file : "", line, owner, addr);
      else
        printf("%p: retain (%i) [%s @ %i]\n", (void*)this, retainCount, file ? file : "", line);
      return this;
    }

    void release(const char *file=NULL, int line=0, const char *owner=NULL, void *addr=NULL)
    {
      --retainCount;
      if (owner)
        printf("%p: release (%i) [%s @ %i] OWNER %s[%p]\n", (void*)this, retainCount, file ? file : "", line, owner, addr);
      else
        printf("%p: release (%i) [%s @ %i]\n", (void*)this, retainCount, file ? file : "", line);

      if (retainCount <= 0)
        delete this;
    }

    // ...
  };

  // ...

  Object *foo, *child1, *child2;

  foo = new Object();
  child1 = new Object();
  child1->parent = foo->retain(__FILE__, __LINE__, "Object", child1);
  child2 = new Object();
  child2->parent = foo->retain(__FILE__, __LINE__, "Object", child2);

Then you can simply examine your logs and spot the problematic line of code responsible for the extra release or retain.

Final boss

Of course, once you have solved all of your leaks, you might find you bump into the arch nemesis: uninitialized memory (a close relative of memory corruption). Specifically, this:

  class Entity
  {
  public:
    float mNextThink;
  
    Entity();
    void think();
  };

  Entity::Entity()
  {
  
  }

What is wrong with this? Well, say we have some code like this…

  for (int i=0; i<mEntities.size(); i++)
  {
    if (smCurrentTime >= mEntities[i]->mNextThink)
      mEntities[i]->think();
  }

Then think() may never be called, since mNextThink is never initialized, so its value is undefined. It could be 0; it could be -10000. Who knows. The solution is simple:

  Entity::Entity() :
  mNextThink(0) // set a default value
  {
  }

With all of your memory leaks solved, you should now be able to sleep better.

You can't pirate a web app

…or can you?

Since the dawn of consumer computing, people have pirated applications. Why should web apps be any different?

Ripping off website designs or application concepts is nothing new, though maintaining such a rip-off is usually costly and requires significant ongoing effort. Not to mention one is always playing cat and mouse with the original.

I’ve noticed a recent trend towards fat clients: the application is written using an MVC JavaScript framework (Backbone, SproutCore, Cappuccino, etc.), talks to a JSON API on the server, and renders all the views on the client.

A great deal of logic and assets are stored on the client. The server is merely used as a lightweight data storage and control system.

So if you can write a replacement server, it’s trivial to “pirate” a web service.

An example

Let’s take QuietWrite as an example: a rather nice web-based text editor developed by James Yu.

If we simply save the page to disk, most of the core functionality already works!


Of course, editing text is not very useful – we need to save it too.


To sum it up, within an hour I was able to replicate enough of the backend to save documents.

After finishing the working implementation, I was surprised to find out that James supplies the source code to an example app, CloudEdit, which has a similar API to QuietWrite.

I needn’t have written all that code!

Given how easy it is to deploy an app nowadays, you could potentially rip off an application, stick your own ads on it, and get it running, all within a day. You can even get free updates for the front-end straight from the original developer.

Provided this fat client trend continues, I wonder how long it will be till we start seeing “CD Keys”, SecuROM, and other elaborate anti-piracy solutions in web applications.

How to make a great conference

I received a bit of critique for my review of Euruko 2011. Granted, I was a bit more critical of a few things than I was on my Twitter feed, but well…

But what do people look for in a conference? No idea. So what do I look for in a conference?

The marketing

There’s nothing worse than hearing about a conference just after it has finished. Maybe I forgot about it, or maybe I just don’t follow the right people on Twitter…

Whatever the case, I really wanted to go, but since the marketing failed, I missed out!

Lanyrd solves this problem partially, but not every conference is listed there. I also don’t check it regularly.

Even when I hear about an event, there are still more problems that can crop up. Quite a few times I’ve noticed events listed with poor directions. Please, tell me where your event is, otherwise I can’t get to it!

The talks

A great talk is made up of several factors, notably…

  • The speaker
  • The content
  • Relevance to the event

So I would suggest a great talk has the best speaker, with the best content which is completely related to the event.

Whereas the worst talk is narrated by the worst speaker, with the most boring content, which is completely unrelated to the event.

Personally I think this is where Barcamp-style conferences shine: there are no rules. A talk could literally be about anything. Everything is relevant.

Then again, themed conferences are better in that you know what to expect when you get there. If you go to a Ruby conference you get Ruby, not PHP. Not to mention the schedule is usually predetermined.

The venue

A venue can be the breaking point of the event. It needs to be large enough to accommodate the attendees. It also needs a good layout otherwise congestion will form, making everything look too crowded.

Crowds are great, until everyone starts talking and we bump into the nightclub problem: I CAN’T HEAR WHAT ANYONE IS SAYING.

Besides the specifics of the building, the venue also needs to be in an accessible location. I don’t want to trek 10 miles to the venue, and if no food is provided, people need somewhere nearby to eat.

After-events

I think after-events are also a key part of a conference. Without them your post-conference networking opportunities are limited to the people you just happened to bump into at the conference.

It’s also a good opportunity for forming a lasting impression of the event in the heads of attendees. If the party last night was awesome, I’ll probably be thinking of it months down the line when the next conference is on.

Thoughts on Euruko 2011


Two weeks ago I attended the Euruko 2011 event in Berlin. In all, I would say it was well worth going to, despite a few flaws.

The event was hosted in the Kino International, a cinema close to the centre of Berlin, so it was quite easy to find. Connectivity was not a problem either, as the best free Wi-Fi I’ve seen in ages was provided.

The quality of the talks was extremely varied. For instance, David Calavera’s “JRuby hacking guide” talk drew me into a state of REM sleep. In fact the next time I have insomnia, I shall be sure to seek his assistance.

In contrast to this, Paolo Perrotta’s “The Revenge of method_missing()” was a stroke of genius. Great pacing, casual tone. I found the pictures accompanying the code to be quite complementary and hilarious.

Really, I could divide the main talks at Euruko into one of several categories.

“Here’s my code”

A talk in which the speaker meticulously describes code, a design pattern, or something equally boring to the outside observer. Quite often the code bears no relation to what you are doing.

“The other day…”

A talk in which the speaker tells a story with illustrations or code to accompany their points in a way which is easy to remember.

“Use my product”

The speaker describes how they solved a problem, creating a product or project in the process.

EDIT: I would also like to point out that despite my critique of the talks, overall I thought the event was great. Things such as the keynotes and the sing-along at the end were fantastic.

Enough of the talks…

Of course, a conference is more than just watching people talk in front of an impressive cinema screen. During breaks there was an opportunity to cram into the lobby area for a free drink, or to talk to one of the many developers squished against the walls.

Not to say the venue was small, but given the natural tendency of people to congregate near the seating, the doorways, and the conference room, the feeling of being in a small space was inevitable.

Predictably the free drinks and food ran out quite quickly on both days. Rather deviously someone decided to serve sparkling water next to the still water. My taste-buds still haven’t recovered.

The after-events

On both days there were events afterwards to go to. I have to say, the party at “Tante Käthe” was the most difficult thing to find, even with some helpful pointers provided by the organisers and the modern wonders of GPS.

In fact, if I had not bumped into some other attendees I would never have guessed I needed to go down a dark pathway, over a fence, round a corner and down another dark pathway.

Sadly the planned BBQ was cancelled so there was nothing to eat, though free drinks were available. To be honest though, the event itself did not live up to expectations considering you needed the skills of Indiana Jones to find it.

The second day’s after-event was the GitHub drinkup. Kudos to GitHub for the free drinks. However, I somewhat suspect that if the GitHub guys weren’t around, this would never have happened.

Still, drinking in a nice, easy-to-locate venue was a great experience.

So, was it really worth attending Euruko? Yes. It’s a great magnet for Ruby developers and business people, so networking opportunities were plentiful. And there were several inspiring gems among the talks to make it worth it.

Bring on Euruko 2012!