Hi! I’m James Darbyshire,
an Enterprise Architect
from Sydney, AUS

about me

Senior Enterprise Applications Architect at NSW Rural Fire Service. Volunteer Fire-Fighter (RAFT). Aussie Yorkshireman.


recent stuff

Fires Near Me
Providing fire information on bush fire incidents in NSW
My Fire Plan
A mobile and smartphone version of the Bush Fire Survival Plan
POCreate
Create a Purchase Order in SAP from your iDevice


XmlSerialization Considerations (on MonoTouch)

- -

I was recently making an app which needed to store data on the client. The obvious choices were:

  1. Local file storage (e.g. serialise a class to XML and store it in Documents)
  2. Use an SQLite database to store the data and read/write when needed

I have done both before, and generally find that XML storage is a quick and easy method of storing data on the client app - however it has its problems…

Why XmlSerialization is good

As stated above, XmlSerialization is quick and easy to implement.

Mono (.Net) has out of the box support for XmlSerialization and System.IO to read and write files, so it’s very simple to create multiple files to store your data in and be done with it.

XmlSerialization is a perfectly natural, acceptable and encouraged way of saving files to the file system. In fact, XmlSerialization works fine for simple data storage where objects are individual, belong to their parent object, and do not need to be referenced by any other object.

Here is a simple helper class I knocked up in 30 seconds:

using System;
using System.Xml.Serialization;
using System.IO;
using System.Xml;

public class FileHelper
{
  public void WriteToFile<T> (string filename, T item)
  {
      var serializer = new XmlSerializer (typeof (T));
      using (var stream = File.Create (filename)) // File.Create truncates; OpenWrite can leave stale bytes from a longer, older file
      {
          serializer.Serialize(stream, item);
      }
  }
  
  public T ReadFromFile<T> (string filename) where T : class
  {
      var serializer = new XmlSerializer (typeof (T));
      using (var xreader = XmlReader.Create(File.OpenRead(filename)))
      {
          T item = serializer.Deserialize(xreader) as T;
          return item;
      }
  }
}
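Usage is just as quick - a sketch (the file name and the topic variable are illustrative; Topic is the type defined below):

```csharp
var helper = new FileHelper ();
var path = Path.Combine (
    Environment.GetFolderPath (Environment.SpecialFolder.MyDocuments),
    "topic.xml");

// Serialize the object graph out to Documents...
helper.WriteToFile (path, topic);

// ...and read it back later
var restored = helper.ReadFromFile<Topic> (path);
```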

Consider the following simple forum domain model™:

public class Topic
{
    public string Subject {
        get;
        set;
    }

    public List<Post> Posts {
        get;
        set;
    }
}

public class Post
{
    public string Body {
        get;
        set;
    }
}

Basically, a Topic contains a List<Post>, in Domain language Topic.Posts - logical, huh?

We can XmlSerialize that model, and we will get a nice XML file which looks like:

<?xml version="1.0" encoding="utf-8"?>
<Topic xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Subject>My first topic</Subject>
  <Posts>
    <Post>
      <Body>Hello world!</Body>
    </Post>
  </Posts>
</Topic>

OK. So we have a nice XML file which is easy to read and write to, easy to pass around, easy to code. And we have a nice little helper method to use to save this file.

What’s wrong with XmlSerialization

As I said before, nothing is wrong with XmlSerialization. Use it when the problem you need to solve is solvable by using it, but understand the drawbacks (a very non-committal tongue twister for you).

XmlSerialization has a series of pre-requisite rules which must be met in order to serialize a file. I’m not going to go into them all, but if you are interested here is the MSDN article.

For me, the big ones have been:

  1. Properties must be implementations, not interfaces. e.g. List<T> not ICollection<T> or IList<T>.
  2. Relationships are not easily supported (without extra attributes)

Consider the following, more realistic domain model™:

public class User
{
    public string Username {
        get;
        set;
    }
}

public class Topic
{
    public string Subject {
        get;
        set;
    }

    public User Owner {
        get;
        set;
    }
    
    public List<Post> Posts {
        get;
        set;
    }
}

public class Post
{
    public User Owner {
        get;
        set;
    }
    
    public string Body {
        get;
        set;
    }
}

It’s virtually the same as before, except it’s slightly more realistic:

  1. There is a new class, User.
  2. Topic contains a new property Owner of type User.
  3. Post contains a new property Owner of type User.

What we now have is a relational data structure.

O.M.G.!!

Using the same FileHelper let’s save the model to an XML file, which results in:

<?xml version="1.0" encoding="utf-8"?>
<Topic xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Subject>My first topic</Subject>
  <Owner>
    <Username>Macropus</Username>
  </Owner>
  <Posts>
    <Post>
      <Owner>
        <Username>Macropus</Username>
      </Owner>
      <Body>Hello world!</Body>
    </Post>
  </Posts>
</Topic>

At first glance, it looks all ok. But it’s not…

Users are not the same instantiation

Even though the .Net code uses the same instance (reference) of User, when the XmlSerializer serializes the data to the XML file, an element <Owner> is created in both <Topic> and <Post>.

This will result in the deserialization process instantiating 2 different User objects, which do not share the same reference.
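You can see this with a quick round trip through the FileHelper from earlier (a sketch; topic is assumed to be a Topic whose Owner and post Owner started out as the same User instance):

```csharp
var helper = new FileHelper ();
helper.WriteToFile ("topic.xml", topic);

var restored = helper.ReadFromFile<Topic> ("topic.xml");

// false - the deserializer has instantiated two separate User objects
bool sameUser = ReferenceEquals (restored.Owner, restored.Posts[0].Owner);
```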

You can get around this by writing some logic that stores only the User ID, and peppering your domain objects with business logic - but that’s bad practice.

Performance and Disk usage

Secondly - Imagine the size of the XML files (and the time to serialize and deserialize) if you had 1,000 topics, each with 100 posts and just 1 user…

(I originally tried 10,000 topics, but got bored of waiting)

iOS Simulator on my MacBook:

  • Serialize and Write: 1671 milliseconds, filesize 15 megabytes
  • Deserialize and Read: 4679 milliseconds

iPhone 4:

  • Serialize and Write: 34400 milliseconds, filesize 15 megabytes
  • Deserialize and Read: 83712 milliseconds
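Timings like these can be captured by wrapping the helper calls in a Stopwatch - a sketch, with topics standing in for the 1,000-topic list:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// ...

var helper = new FileHelper ();

var watch = Stopwatch.StartNew ();
helper.WriteToFile ("topics.xml", topics);
watch.Stop ();
Console.WriteLine ("Serialize and Write: {0} milliseconds", watch.ElapsedMilliseconds);

watch = Stopwatch.StartNew ();
var restored = helper.ReadFromFile<List<Topic>> ("topics.xml");
watch.Stop ();
Console.WriteLine ("Deserialize and Read: {0} milliseconds", watch.ElapsedMilliseconds);
```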

Circular references

So, to extend our example further - we would now like to be able to reference all of the User object’s Post objects, and we express this in our Domain Model as:

public class User
{
    public string Username {
        get;
        set;
    }
    
    public List<Post> Posts {
        get;
        set;
    }
}

Now if we try and serialize our model:

BANG

System.InvalidOperationException: There was an error generating the XML document. ---> System.InvalidOperationException: A circular reference was detected while serializing an object of type User.

We have a circular reference, which XmlSerialization has no clue what to do with:

  • User.Posts.Owner
  • Post.Owner.Posts[i].Owner

There is no easy way to get around this using XmlSerialization without creating Id properties on your classes, and creating a pseudo-relational database in XML using [XmlIgnore] on your Domain Model. Again, this is bad practice on a domain model (I have a thing for keeping my Domain Models ‘pure’).
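For completeness, the kind of workaround I mean looks roughly like this (a sketch, not a recommendation - OwnerUsername is an illustrative name):

```csharp
using System.Xml.Serialization;

public class Post
{
    // Serialized stand-in for the relationship
    public string OwnerUsername {
        get;
        set;
    }

    // The real reference - ignored by the serializer,
    // and rebuilt by hand after deserialization
    [XmlIgnore]
    public User Owner {
        get;
        set;
    }

    public string Body {
        get;
        set;
    }
}
```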

To retain transparency, you could use a Binary Serializer to mitigate… But that’s not the title of this post.

To quote Frank Krueger (Author of iCircuit and SQLite-Net):

BinarySerialization solves this problem but breaks when you start using events (it serializes your events which results in a large part of the heap put into the binary). You can’t tell it to ignore events, instead, you have to play a game with events as fields - which takes about 8 lines of boilerplate per event.

.NET serialization seems to have been developed with the idea of message passing and that’s it. Serializing graphs is a miserable experience.

Conclusion

So, what was all of this about? Well… I was rambling. This post was originally meant to be a comparison of SQLite and using XmlSerialization to store data in XML files on your iOS device.

I failed…

Instead, it’s a dissection of the reasons why I use XmlSerialization in some projects, and not in others - the comparison will have to wait for another post in the near future!

For what it’s worth, (and just to confuse) in my latest app (due to be released soon) I use both of these methods to store data, and files… More to come!

As always, here is the source

FDI Calculator App in iOS AppStore

- -

My first (personally funded and created) app is now up in the Apple App Store.

The app is a tool for:

  1. Fire Danger Index calculator
  2. Fire Danger Rating information
  3. Drought Factor calculator

From the AppStore app description:

FDI Utility is an application to calculate the Forest Fire Danger Index (FFDI), Grass Fire Danger Index (GFDI) and Drought Index (KBDI). It displays the results as the Fire Danger Index (FDI) and also converts these indices to the corresponding Forest and Grass Fire Danger Ratings (FDR) for ease of use. The FDRs are color coded and display information about what to do, what to expect and simple fire behaviour on a day of that rating. Calculations are from the paper “McArthur’s fire-danger meters expressed as equations” published in the Australian Journal of Ecology by Noble, Bary & Gill, 1980. The app is “donationware” meaning that you may use this app for free, forever - but if you like the app please consider donating to Hornsby RFS brigade (of which I am a member) or buy me a coffee!

Check it out if you want to calculate Fire Danger Indices, Ratings or Drought Factor.

Run Tests From Networked Location

- -

I use Parallels on OSX to run Visual Studio inside a VM.

With a test project I had written, I was getting the following exception thrown:

Unit Test Adapter threw exception: URI formats are not supported.

Which was highly annoying!

After some googling, I found the solution:

  • Double click on Local.testsettings, which is under the Solution Items of Project 1
  • The Test Settings window is displayed. Click on the Deployment link.
  • You will see a checkbox, Enable Deployment. Select the checkbox and click Apply.
  • The same settings can also be found under TraceAndTestImpact.testsettings - just follow the same steps.

Compile Mono From Source on OSX

- -

The following is how I compiled the latest Mono from source:

  1. Install MacPorts from http://macports.org
  2. Install gettext
    sudo port install gettext
  3. Locate your current mono environment. It is usually at:
    /Library/Frameworks/Mono.framework/Versions/Current
  4. Download the mono source from GitHub master branch
    git clone https://github.com/mono/mono.git
  5. Run autogen with the prefix set to your current Mono location
    ./autogen.sh --prefix=/Library/Frameworks/Mono.framework/Versions/Current
  6. Run make
    sudo make
  7. Optionally, you can run the tests
    sudo make check
  8. Run make install
    sudo make install

Mercurial Branching

- -

We are currently going down the path of switching from our huge monolithic SVN repository, to a number of product based Mercurial repositories. Historically, I have used FogBugz and Kiln, but this time we have decided to give the JIRA stack a whirl - mainly for GreenHopper, as one of the project managers here wanted to give it a go.

In the old SVN repo, instead of branching and tagging, we copied files to “Release folders”, and ended up with a bunch of duplicated code, which was pretty hard to manage. One problem was making a bug fix in a release directory, then having to either manually implement that into the development directory, or do an onerous merge of the 2 directories.

To cut a long story short… We were using SVN like a file store, not a source control system.

For my personal projects in KilnHg I have always had 2 repositories per project - one for “Release” and one for “Development”. Why? Because that is how Fog Creek’s guides showed me. It works, but I wanted to take advantage of Mercurial’s named branches for our work projects. It produces some nice graphics, and (whilst I prefer the terminal to work with hg) works better with the TortoiseHg GUI - which is what my developers are used to, albeit with TortoiseSVN.

So, I armed myself with some questions:

  • How to do this?
  • What is the best way?
  • Am I mad?

And went to the holy grail of Google to have a look.

After some to-ing and fro-ing, trying and failing, I came up with the following “demo” scenario. What follows is simple, and possibly does not deserve a blog post - but it can provide some point of reference for me (and my team!)…

Initial repository

  1. Create repo in source control system
  2. Clone repo to my machine
    hg clone repoURL
  3. Add my code to the repo
    hg add
  4. Commit code to the repo
    hg commit -m "Initial commit"
  5. Push to the remote repo
    hg push

Make the release branch

  1. Make the release branch
    hg branch "Release-1.0"
  2. Commit the change to the repo
    hg commit -m "Added branch Release-1.0"
  3. Push to the remote repo
    hg push --new-branch

Bug fix on the Release-1.0 branch

  1. Switch working directory to the branch you want to edit in your repo
    hg update "Release-1.0"
  2. Add your bugfix to the repository
  3. Add your changes to the repo
    hg add
  4. Commit your changes
    hg commit -m "Added BUGFIX1 to Release-1.0"
  5. Push
    hg push

Feature added on the default branch

  1. Switch working directory to the default branch
    hg update "default"
  2. Add your feature to the repo
  3. Add your changes to the repo
    hg add
  4. Commit your changes
    hg commit -m "Added FEATURE1 to default"
  5. Push
    hg push

Merge the bugfixes on Release-1.0 into default

  1. Switch working directory to the default branch
    hg update "default"
  2. Merge the Release-1.0 branch into the default branch
    hg merge "Release-1.0"
  3. Commit your changes
    hg commit -m "Merged Release-1.0 into default"
  4. Push
    hg push

Which results in the following graphlog (from BitBucket).

Microsoft Azure DevCamp in Sydney

- -

Some may know that I went along to the Azure DevCamp in Sydney yesterday.

I have some plans for Azure, and hadn’t really had a chance to have a play with it before yesterday… Well.. I was impressed, and had a lot of fun!

The new portal is pretty schmick! And makes it easy to create:

  • Websites
  • Cloud Services
  • Storage
  • SQL DB’s

Best thing is that, during preview, it is free for 90 days - with a 30% discount on certain functions within the preview time.

My favourite Azure feature was…

Websites

Currently you can have free testing websites. After the preview, you will be allowed 10 free websites, as long as they end in .azurewebsites.net which is ok if you want a staging/testing area.

Unfortunately, the shared instances do not allow you to bring your own domain, but with a reserved instance, you can use a CNAME record on your DNS to redirect your TLD to the hosted site.

Hopefully Microsoft will lift this restriction and allow us to pay for a shared instance with our own domains (12 cents per hour isn’t justifiable for many, unless you host multiple websites on there). Until then, it’s 8 cents per hour (Calculator) during the preview.

My favourite part was…

Winning an XBox 360!

Part of the Conference involved a “Windows Azure Challenge” set up by Andrew Coates (@coatsy) from Microsoft here in Sydney. Which apparently I won the first leg of… Just goes to show that anyone can do Azure!

I know I need a haircut… And it was a cold 8 degrees in Sydney (hence the Arctic exposure suit!). Thanks to Paul from CodeRed for the pic.

Next…

Sign up for a 90 day free trial (You need a credit card, but Microsoft promise that they won’t charge you unless you say you want to pay!)

Check out the presenters’ sites for more info.

Have fun! More to come as I dive into the rabbit warren…

Modal Loading Dialog in MonoTouch

- -

Update: I was using this in a project which called the UILoadingView class from an anonymous delegate and kept coming up against the dreaded SIGSEGV error… Well it was a silly mistake on my behalf. I forgot to wrap the DismissWithClickedButtonIndex(0, true); in BeginInvokeOnMainThread (...). Code updated below.

As some of you may know, I work as a Solutions Architect at IQX in Sydney - a company specialising in SAP integration with, well, pretty much everything.

Part of our offering is mobility, and of late we have been refactoring our apps to make use of (Read: DogFood) one of our products, IQfoundation. In short, IQfoundation is an accelerator to access SAP data from anything. For those of you who are interested, the datasheet is here.

Shameless plug over…

The point of all of this is that, accessing SAP can be slow, especially if you are on an iPhone, connected by 3G into your network.

In the past we have put up a little spinner in the top right corner of the app, which tells the user when data is being accessed, and when that request has finished. All a little bit too subtle for some… That’s fine for interactions where you don’t want to block user input, but for areas of the app which rely on the data being there before the user can continue, it was a flaw in the design of our apps, making them less user friendly than they could be.

In case you missed it (don’t worry, you are not alone) it’s that little, subtle, light-footed white spinner up the top. How very quaint.

In the example screenshot above, we don’t want to block the user interaction with the app, so a spinner is great! But… What about areas of the app which we don’t want the user to continue with until they have a full set of data?

Within the same app, against Apple’s infinite wisdom, we have some in-app settings which require communication with SAP before the user should be able to continue on interacting with the app.

Obviously (or not) these are selections from data available in SAP. What if the user tried to select the Sales Organisation before it had been loaded from SAP? What if the user tried to select the Distribution Channel before the Sales Organisation had filtered them down? Would the world end?

Luckily, the world doesn’t end - but those not possessing the eyesight of an eagle may not have spotted the little spinner in the top right corner - leading to a bug report of “It doesn’t work”.

So, how to fix the problem? As always, Apple’s iOS User Experience guidelines are a good start. But I also like to look at how others have done it.

A quick Google returned a blog by Chris Small who found an interesting ObjectiveC link and noted that:

On the iPhone there is an example of a modal “loading” dialog when you set the wallpaper from one of your images.

Good find Chris! I would have checked this, but… Well my cat would be very disappointed if I changed my wallpaper, and she already scratched me this morning… I’ll take your word for it.

Chris details some code which works, however when I used it, it didn’t work quite how I wanted it to. If I had a shared UILoadingView upon which .Show () was called multiple times, the spinner was redrawn every time - which looked a bit ugly. I also played with the numbers slightly, though these really do depend upon the text in your message (maybe I will update it in the future to do some funky auto-calculations)…

I refactored it slightly, making it behave more like the UIAlertView I was used to.

Why is this good? Well it solves my problem… The user knows that the data is loading, and the UIAlertView stops them from interacting with the application until all of the data is loaded. Magic! Just remember to put in some timeout handling around your longer running tasks… We don’t want to leave the user in an endless wait… Not yet anyway…

Here is the class:

using System;
using MonoTouch.UIKit;
using System.Drawing;
using System.Linq;

// Original idea: http://mymonotouch.wordpress.com/2011/01/27/modal-loading-dialog/
// Based on:      http://mobiledevelopertips.com/user-interface/uialertview-without-buttons-please-wait-dialog.html

public class UILoadingView : UIAlertView
{
    private UIActivityIndicatorView activityIndicatorView;

    public new bool Visible {
        get;
        set;
    }

    public UILoadingView (string title, string message) : base (title, message, null, null, null)
    {
        this.Visible = false;
        activityIndicatorView = new UIActivityIndicatorView (UIActivityIndicatorViewStyle.WhiteLarge);
        AddSubview (activityIndicatorView);
    }

    public new void Show ()
    {
        base.Show ();

        activityIndicatorView.Frame = new RectangleF ((Bounds.Width / 2) - 15, Bounds.Height - 60, 30, 30);
        activityIndicatorView.StartAnimating ();
        this.Visible = true;
    }

    public void Hide ()
    {
        this.Visible = false;
        activityIndicatorView.StopAnimating ();

        BeginInvokeOnMainThread (delegate () {
            DismissWithClickedButtonIndex(0, true);
        });
    }
}

Enjoy! Let me know if you use it.

Resources in MonoTouch Assembly

- -

In my experience, I found that adding resources into a MonoTouch project can be slightly confusing - if you have not done it before.

I had a requirement to embed images into a project… easy you may say. Add a solution folder into the solution, add your images into the folder and give them a build action of ‘Content’. Duh… Why bother with a blog post? I hear the crowd mutter.

Well… On the surface it is easy. The process is:

  1. Right click on the solution, and select “Add > Add Files”
  2. Select your image and click “open”
  3. Choose to “Copy the file to the directory”
  4. Set the Build Action to “Content” (In the properties panel)

Done… You can access the image by loading it using a path which is relative to the root of the app. In this case it would be:

_Resources/Xamarin_Logo.png

Meaning I can access the image with the following code:

UIImage myImage = UIImage.FromFile ("_Resources/Xamarin_Logo.png");

A short interlude

At this point, it is probably wise to diverge from the main point and do a little explaining… You see that I have named the folder “_Resources” and not “Resources” - there is a reason, I am not just being awkward. The folder name “Resources” is a reserved name, and MonoTouch will throw the following error at compile time if you name the folder “Resources”:

Resources/Xamarin_Logo.png: Error: The folder name ‘Resources’ is reserved and cannot used for Content files (ResourcesExample)

Doh… I solve this by adding an underscore to the beginning of the folder name. You could change this to “Images” or ”Boyakasha” or, indeed, anything else you wanted, it would still work… Just don’t use a reserved name! (Boyakasha hasn’t been coined by the Mono compiler… yet)

Back to the point

My point is that I am used to embedding images as resources in my .Net projects and accessing them as objects rather than via a string path. I think this is better because:

  • Strongly typed
  • I can change the Resource file, and all associated images will be changed at the same time
  • I can transform the image before showing it (e.g. resize, remove sharp corners to please the office health and safety officer etc.)

I do this by creating a static class in my project named “Resources”, which has the definition of each image I will use in my project as a static property of type UIImage:
public static UIImage XamarinLogo {
  get {
      return UIImage.FromFile("_Resources/Xamarin_Logo.png");
  }
}

This now means that, as long as I include the namespace of the static “Resources” class in my code, I can reference that image using “Resources.XamarinLogo”. Magic!
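For example (assuming an existing UIImageView called imageView, and that the namespace of the Resources class is in scope):

```csharp
// Strongly typed - no string paths at the call site
imageView.Image = Resources.XamarinLogo;
```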

Check out the source code for an example of this being used in a MonoTouch.Dialog sample app.

jQuery Validate and MaxLength

- -

A quick one… Just had a situation where we were adding the maxlength attribute to an HTML input, and then trying to use the excellent jQuery Validate plugin to validate the form… Resulting in an invalid form! Doh!!

Seems it is a bug…

Here is the fix.

Doh! Hopefully fixed soon.

Happy coding!!

Handling .Net DateTime in jQueryUI DatePicker Using KnockoutJS

- -

In a recent project we were using KnockoutJS to bind a .Net model, via a JavaScript ViewModel, to an HTML View.

The View has a number of jQuery UI controls on it, including the infamous DatePicker.

The model had a number of ‘complex’ types on it, however the type which gave us the biggest headache was the ‘simple’ .Net CLR DateTime type.

When the JavaScriptSerializer serializes a .Net DateTime it spits out a string which looks like:

"\/Date(1000001352100)\/"

Which, from my days as a Unix programmer, I recognised to be the beloved Epoch timestamp (milliseconds since the epoch, 1st January 1970) surrounded by Date().

First suggestion was to eval the Date and be done with it… But we all know that eval === evil as it is sloooooow, and anyone can inject malicious code into the page and run it on our unsuspecting user - this was not an option.

Onwards…

After some messing about, we decided that creating a Knockout custom binding would be the solution.

I stumbled upon a jsFiddle written by rniemeyer which implemented a custom jQuery UI DatePicker binding - all we had to do was shoehorn the conversion from the ViewModel DateTime string representation, to a JavaScript Date Object.

As the JavaScriptSerializer always spits out dates in the same format, we can:

  1. Use string.replace to remove all of the crap, leaving us with the Epoch time
  2. Use parseInt(string) to get the string as an int
  3. Use the new Date(epochTime) constructor to create the proper JavaScript Date object

Initially I thought that we might need to convert back to Epoch time to send the data back to the server, however the JavaScriptSerializer is capable of reading the Date Object’s .toString() method and converting it into the CLR DateTime type. Too easy!

Here is the fiddle and the code for completeness sake:

ko.bindingHandlers.datepicker = {
    init: function(element, valueAccessor, allBindingsAccessor) {
        //initialize datepicker with some optional options
        var options = allBindingsAccessor().datepickerOptions || {};
        $(element).datepicker(options);

        //handle the field changing
        ko.utils.registerEventHandler(element, "change", function() {
            var observable = valueAccessor();
            observable($(element).datepicker("getDate"));
        });

        //handle disposal (if KO removes by the template binding)
        ko.utils.domNodeDisposal.addDisposeCallback(element, function() {
            $(element).datepicker("destroy");
        });

        //handle .Net DateTime
        var value = ko.utils.unwrapObservable(valueAccessor());
        var obs = valueAccessor();

        if (obs() !== null && !isNaN(obs().replace(/\/Date\((-?\d+)\)\//, '$1'))) {
            obs(new Date(parseInt(value.replace(/\/Date\((-?\d+)\)\//, '$1'))));
            $(element).datepicker("setDate", obs());
        }
    },
    update: function(element, valueAccessor) {
        var value = ko.utils.unwrapObservable(valueAccessor()),
            current = $(element).datepicker("getDate");

        if (value - current !== 0) {
            $(element).datepicker("setDate", value);
        }
    }
};
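The .Net date handling in the init function above boils down to a small, self-contained conversion. Here is a minimal sketch of it as a standalone function (parseDotNetDate is an illustrative name, not part of the binding):

```javascript
// Convert a JavaScriptSerializer date string, e.g. "\/Date(1000001352100)\/",
// into a native JavaScript Date object.
function parseDotNetDate(value) {
    // 1. Strip the wrapper, leaving just the milliseconds-since-epoch digits
    var epoch = value.replace(/\/Date\((-?\d+)\)\//, '$1');
    // 2. Parse the remaining string as an integer
    var ms = parseInt(epoch, 10);
    // 3. Build the Date from milliseconds since 1st January 1970 (UTC)
    return new Date(ms);
}
```

For example, parseDotNetDate("/Date(0)/") yields midnight on 1st January 1970 UTC.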