Jamie's Blog

The ramblings of a programmer with a little too much time on his hands

Category: Software

A Small Victory For My Readers

About 4 years ago, Google started a crusade to get all websites to use HTTPS.

That means that the connection between your browser and the site you’re viewing is encrypted.

They’ve even gone so far as to start penalising websites that don’t use HTTPS, which falls in line with their earlier announcements that websites delivered over HTTPS would get higher rankings.

This means that if you look at your address bar, there should be a green padlock.

Like This

I am in no way an SEO expert.

I’m not even an SEO pert.

… What? That was funny. OK, it was Fozzie Bear funny. But even so.

But Google pushing for HTTPS was always a good thing.

Why Are You Telling Us This, Jamie?

Well, it’s a weird way to announce…

If the shoe fits! Am I right?

Ignore him. Anyway, as of right now, my blog (the thing you’re reading) and my website are both being served via HTTPS. This is all thanks to the amazing folks at Let’s Encrypt.

They have an amazing post on their site about how HTTPS and Certificate Authorities (CAs) work. At the very least, you should check that out.

Pitfalls

There’s one caveat to my blog being served over HTTPS: some of my articles link to non-HTTPS content (usually old embedded images). That means that your browser might either warn you about it or just remove that content. I’m going to be taking a look at each post and fixing them where necessary, but this could take some time as there are over 170 published posts (at the time of writing) on here.

I also might have to change my theme, as my search bar doesn’t seem to be using HTTPS, so that’s causing a few issues.

.NET in new places

Over a year ago I wrote about how I was building an application with Mono, Xamarin and GTK# on MacOS (even though it wasn’t called that then). Well, since then the Microsoft .NET team have announced that they are open sourcing the .NET runtime and rebuilding it for non-Microsoft operating systems.

Because I wanted to play with .NET Core, I started trying to figure out how I could use it on my *nix machines. Because new Microsoft are awesome, they very quickly put up tutorials on how to get the runtime installed on your machine.

Shortly after that, they also started building their documentation pages for all things .NET Core.

Thankfully this doesn’t follow the dated MSDN model; it’s generated on page load from the GitHub documentation repo.

.NET Core

Time has moved on since the initial betas and RTM releases, and we’re at version 1.0 of .NET Core. So I headed over to the .NET Core website and installed it on my Mac.

After that, I was looking around for the quickest way to develop apps for it, which turned out to be pulling up the console and running a few commands:
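Something along these lines, assuming the .NET Core 1.0 command line tooling (the exact commands shifted a little between preview releases, so treat this as a sketch rather than gospel):

mkdir dotNetCoreApp
cd dotNetCoreApp
dotnet new        # scaffold an empty console application
dotnet restore    # pull down the NuGet dependencies
dotnet run        # build and run it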

This would create a directory called dotNetCoreApp containing an empty .NET Core console application.

Pretty good huh?

We Can Rebuild Him

So npm is a thing that exists.

If you’ve gotten this far, then I’m assuming that you know what npm is. If not, check out this Wikipedia article for some background information. [LINK]

Going back to the .NET documentation (linked above), there’s an article about building a .NET Core application using npm and Yeoman, which is incredibly useful.

Using Yeoman, you can template a full .NET Core application (console application or ASP.NET MVC web application stack) in seconds. So that’s what I did.
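If you want to try it yourself, the setup looks roughly like this (a sketch; the generator was called generator-aspnet when I tried it, and the package names may have moved on since):

npm install -g yo generator-aspnet    # install Yeoman and the ASP.NET/.NET Core generator
yo aspnet                             # then pick a template: console app, MVC site, and so on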

All of this is an extremely fancy way of announcing that I’m starting some cross platform .NET Core application development. The prototypes that I come up with will be hosted on my GitHub account, which can be seen here [LINK].

At the time of writing, I only have one application in that repo. It’s not that exciting either. All it does is print a message to the console.

Console Application Screenshot

It really doesn’t do very much at all.

That’s literally it. Nothing else. Exciting, huh.

What has been added to this repo so far was written entirely using Visual Studio Code. Why Visual Studio Code and why not something like, say, Atom?

Because Visual Studio Code is stupid fast and has all manner of plugins specifically for C#/.NET development (things like NuGet search and install).

The best laid schemes o’ mice an’ men, Gang aft agley

So what am I planning? Other than just learning the caveats and pitfalls of using .NET Core, I want to build a selection of applications in C# with .NET Core. Some of the things I want to build are some MVC sites, and a few quick and dirty console applications.

The big one, however, is medico. medico is an application that I’ve been designing for a long time, and I think that .NET Core is the way forward for it.

The design documentation is still in flux, and I’m still adding to it, but you can read the latest version of the medico documentation here. [LINK]

So this means that, going forward, medico’s development will be restarting.

There are a few things that I need to look up first though. Things like opening files from disk in .NET Core (I know that the FileStream class wasn’t available back in the early beta). Once I’ve managed to figure out how to do these things, medico’s development will begin again in earnest.
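For what it’s worth, basic file access did make it into 1.0, so something along these lines should work (a quick sketch rather than medico code; the file name is made up):

using System;
using System.IO;

public static class Program
{
    public static void Main()
    {
        // Read a whole text file in one go.
        Console.WriteLine(File.ReadAllText("design-notes.txt"));

        // Or stream it, which is kinder on memory for bigger files.
        using (var stream = new FileStream("design-notes.txt", FileMode.Open, FileAccess.Read))
        using (var reader = new StreamReader(stream))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}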

Watch this space, I guess.

XML, SMS, CLI, PDQ

Background

My mobile phone has SMS messages that go as far back as my first smart phone (around 2009). Each time that I upgraded, I found ways to transport them to my new phone.

Why? Well, why not? It’s always good to look back through my old messages and reminisce over stupid conversations.

In the case of a few threads, these include folks I’ve lost contact with, and a bunch are with a friend who has passed away.

The Issue

Shortly after Christmas, something happened with my phone which forced me to reflash the ROM it was running. This meant that I’d lost everything.

Luckily, I have a bunch of apps (including this one) that run backups every day at midday. The SMS backup app that I use takes a full backup of my SMS messages,

Yes, I still use SMS to communicate with some friends

stores them in an XML file, and uploads that file to my Dropbox account.

Since this happens every day, all I had to do was pull down the previous day’s message backup and restore it onto my phone.

This is where it went a little wobbly.

What Happened

For some reason during the restoration process, my phone crashed and rebooted. This happened a few times (I was doing too many restorations of different things [WhatsApp, Hangouts, etc.] at once, which caused a kernel panic).

A few days later, I noticed that the restoration process had restored around 3-5 copies of each of my SMS messages. This was no good, as I ended up having lots of superfluous messages to trawl through to find an older one.

Since it had been a few days (a few backups had been taken, and a load of messages received), I couldn’t restore the backup from when my phone crashed because my message threads would be out of date.

Possible Solution

I COULD have manually deleted each of the repeated messages, but I did a total count of the messages and it came out at 25,000.

I think that the second word out of my mouth was “that”.

I even took to Reddit to ask if anyone knew of an app that I could use to bulk remove the duplicate messages, but there were no suggestions.

Being a supremely lazy person (and apparently all good programmers are), I thought that there must be a better way to do it.

Program

Why not write a program to go through the latest backup file and remove all duplicate entries? It wouldn’t have to be beautiful, or have a user interface. It also wouldn’t need to have any command line interface. Just dump the XML file in the same folder as the binary, run it, and ‘let her rip’.

Half an hour later I’d written the program, tested it, generated a new XML file and replaced all the SMSs on my phone.

Listing

I thought that I’d post it up here. It’s not that elegant, but it’s good enough to share and for a bit of a post-mortem.

Anyway, here’s the code (hosted with my gists):
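In case the gist embed doesn’t load, the idea looks roughly like this. It’s a sketch rather than the exact listing: the XML attribute names (address, date, body) and the “same address, date and body means it’s a duplicate” rule are assumptions on my part.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml.Serialization;

[XmlRoot("smses")]
public class Smses
{
    [XmlElement("sms")]
    public Sms[] Sms { get; set; }
}

public class Sms
{
    [XmlAttribute("address")]
    public string Address { get; set; }

    [XmlAttribute("date")]
    public long Date { get; set; }

    [XmlAttribute("body")]
    public string Body { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var serialiser = new XmlSerializer(typeof(Smses));

        // Load the backup file sitting next to the binary.
        Smses smses;
        using (var input = File.OpenRead("sms-backup.xml"))
        {
            smses = (Smses)serialiser.Deserialize(input);
        }

        // Strip the duplicates, then put the survivors back in date order.
        smses.Sms = RemoveDuplicates(smses.Sms)
            .OrderBy(s => s.Date)
            .ToArray();

        // Write the cleaned-up backup out again.
        using (var output = File.Create("sms-backup-deduped.xml"))
        {
            serialiser.Serialize(output, smses);
        }
    }

    // Keeps the first message seen for each (address, date, body) combination.
    private static Sms[] RemoveDuplicates(IEnumerable<Sms> messages)
    {
        IList<Sms> unique = messages
            .GroupBy(m => new { m.Address, m.Date, m.Body })
            .Select(g => g.First())
            .ToList();

        return unique.ToArray();
    }
}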

What I Could Have Done Better

RemoveDuplicates returns an array, which is fine, but it means that the final line of RemoveDuplicates needs to convert from an IList<Sms> (which implements IEnumerable) to an array before returning. This isn’t a huge issue, but it does increase the memory footprint of the code.

It also means that the following code:
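(again a rough equivalent rather than the original excerpt, with the Date property being an assumption)

smses.Sms = RemoveDuplicates(smses.Sms)
    .OrderBy(s => s.Date)
    .ToArray();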

has to convert the array that was returned from RemoveDuplicates into an IOrderedEnumerable (to do the ordering), then back into an array (to replace the contents of the Smses object’s Sms array).

Each of those conversions allocates a new collection and copies the contents across, which is not that big a problem for my particular app because I’m running it on a very modern, powerful PC. But if this were being run on an embedded system or one with low memory, then all that extra copying could cause memory issues.

Also, there’s a chance that a null ref could be thrown during any of those conversion steps (if RemoveDuplicates returns a null array, then the ordering will break; or if the ordering returns null, then the ToArray will break).

Plus, there would be a small speed boost from skipping the conversions: each one allocates a whole new collection and copies every element into it. Again, because the code was run on a modern, powerful PC this wasn’t a huge problem, but it’s something to consider for later revisions.

There are other things that I could have done better, but I’m pretty happy with it as it stands.

My Own Columbo Moment

One last thing:

  • Before I ran my SMS XML file through this console app, it was 8 MB and contained over 25,000 messages.
  • Afterwards, the resulting XML file was around 2 MB and contained just under 10,000 messages.

I call that a win.

3.5" Hard Drive

Operating Systems – Eeek!

My Windows 7 machine was running a little slow recently. So, I decided to buy a shiny new SSD and upgrade to Windows 10.

After all, users of Windows 7 and 8.1 can get a free upgrade to Windows 10. So why not, right?

Installing an OS

My SSD arrived, and I cracked open my PC when I got home. I took out all of my current (non-SSD) drives and got to work installing Windows 10.

The reason I took out the old hard drives is an old one, a mistake I’ve been trying to avoid for years:

Accidentally wiping the wrong drive

My three hard drives have loads of important data on them. Pictures, documents, music, movies. All sorts of stuff that I’ve created and collected over the years.

Some of the things I have stored on these drives are over a decade old and important to me.

Anyway, so I ripped out the current drives and jammed my shiny new SSD in there. I fired up the machine with high hopes. My USB drive with the Windows 10 installation media was already plugged in.

It’s a 32 GB USB 3.0 drive (named Anoia), btw.

After about 20 minutes, Windows was installed and ready to run.

Issues?

After an hour or so of setting up Windows 10, I shut down to re-install my old drives. And that’s when things started getting janky.

After a few hours of using my computer I started to notice random restarts.

Blue screens were happening a lot.

Luckily, Windows 10 is pretty helpful when it restarts via a blue screen, so I got to Googling. Here are some of the issues I had:

  • 0x80070570 – 0xa003 (which is a Windows 10 media creation issue)
  • BAD_POOL_CALLER
  • KERNEL_PANIC
  • KERNEL_SECURITY_CHECK_FAILURE
  • MEMORY_MANAGEMENT_ISSUE

Unfortunately Google wasn’t that helpful. But I kept Googling.

Results?

After a few evenings wasted Googling, I found a few forum posts related to my motherboard (an ASUS P8P68) and UEFI Secure Boot (which is what Windows 10 uses to ensure that its boot loader hasn’t been edited by a malicious third party).

UEFI Secure Boot?

An extremely simplified and (not entirely correct) description of UEFI Secure Boot is this:

When your computer starts up, the BIOS fires. BIOS stands for Basic Input Output System and is used to do a bunch of things (including a POST – Power-On Self-Test); the main thing is to choose a drive to boot from.

Booting is when your computer loads a tiny piece of software, called a Boot Loader. The Boot Loader tells the computer where on the drive to go to load the rest of the Operating System.

UEFI Secure Boot is a way of making sure that the boot loader doesn’t get altered by anyone (a virus, Lenovo or Dell).

Technically Lenovo and Dell didn’t alter the Boot Loaders of affected computers, but their software was installed at the lowest level and consumers didn’t know.

The Boot Loader on a UEFI machine is cryptographically signed. At start-up the firmware checks that signature, a bunch of things are done to ensure that the Boot Loader hasn’t been altered since it was installed, and then the operating system (which the Boot Loader points to) is started.

And The Point Is?

Well, it turns out that the model of motherboard that I have has an issue with UEFI Secure Boot. I’m still piecing things together, but it looks like, under a set of very specific circumstances, my Windows kernel was screwed up. And one of those circumstances was related to UEFI Secure Boot.

The Kernel is the core of the operating system.

A Week Later

After about 8 hours of the operating system being installed, something would screw up and the kernel would get chewed up by something.

It can’t be the installer for Windows, because I’m using a completely legitimate installer – I’d paid outright for a Windows 10 install.

I was going to install an upgrade, but it turned out that my copy of Windows 7 wasn’t eligible for the free upgrade.

I don’t believe it was related to some kind of virus or malware. I was running a legitimate install (again something purchased outright) of Bit Defender 2016 each time the OS was installed.

It was the first thing that I installed after the initial boot; I let it update itself and left it in Auto Pilot mode.

Resolution

After spending a week trying to get the OS to install and run nicely, I’ve given up. For some reason, it kept screwing up and I was getting more than a little miffed.

My choices were:

  1. Buy some new hardware and hope that it fixes everything
  2. Don’t use Windows

Moving to a free operating system seemed like a great idea after a week of struggling to get a stable OS installed.

Anyway, this is all a really long winded way of saying that I’m currently running Ubuntu 14.04.

That’s all, really.

Cropped code image

Mono, Xamarin and Gtk#

Over the past few weeks I’ve been playing with cross platform development tools. With Microsoft’s announcement of the Visual Studio Code preview [LINK], the latest version of Mono (version 4.0, built against the C# 6.0 spec) [LINK] and Microsoft open sourcing their entire Core CLR for .NET [LINK], now is the time to be developing in C# (if it wasn’t already).

If you can’t tell already, this will end up being a post about programming.

One of the many great things about Microsoft open sourcing the Core CLR is that many years of research and development over at Microsoft have just become available to us developers, for free. We’re talking things like garbage collection [LINK], cryptography [LINK] and a whole host of other features.

Also, the Core CLR is so ingrained into Windows (there haven’t been any major “scrap it and rewrite it from the ground up” kernels in Windows since NT first came out) that Microsoft are basically ensuring that any code written using the Core CLR is going to run, without major issues, on all versions of Windows to come.

Not just the desktop and tablet ones, but the mobile and gaming platforms too.

It also means that code written for the Core CLR can be ported to non-Windows platforms extremely easily. As long as the C++ that the CLR is written in will compile for the target system, it’ll be available there (and the code for the CLR is really well written).

Mono

So Mono is an open source version of the Core CLR (the project began several years before Microsoft open sourced their Core CLR), amongst other things. It’s been written with cross platform development in mind, which means that there are precompiled binaries for Linux, Mac OS and Windows.

The Mono development team haven’t shied away from using the recently open sourced Microsoft code in their version of the C# compiler, too [LINK]. Mostly, they’re using the code as source material for implementing their own versions of some of the C# features that have been buggy or not fully implemented in the past.

Xamarin

Xamarin are the company who own the intellectual property rights for the Mono project. There’s a long story behind it, but the short version is:

  • Mono was developed in the early 2000s by Ximian, as a way of getting the .NET CLR onto Linux
  • Novell bought Ximian in 2003
  • Attachmate then bought Novell in 2011
  • Attachmate laid off hundreds of staff at Novell (including the Mono staff)
  • Xamarin were granted a full licence to work on all Mono products

Xamarin then went on to make MonoDevelop, which was a cross platform IDE for Mono. In early 2013, Xamarin announced Xamarin Studio, which is based on MonoDevelop but with many advanced features.

Xamarin Studio is able to read and work with Visual Studio projects and create iOS, Android, OS X and Windows applications. It also has most of the features available in Visual Studio (code completion, an advanced debugger, a UI designer, etc.).

What’s All This Got To Do With Me?

Well, since I’ve been playing around with Mono and Xamarin Studio, I thought I’d write a little about it. I’ve been using Xamarin Studio on my laptop (a mid-2010 MacBook Pro) for developing some applications and writing some throwaway code.

Why use the Mac?

Mainly because it’s small, fast to boot up and I can take it with me places.

Seriously, if I’m travelling anywhere (say I have a long journey ahead of me) then I can pull down my latest code and do some work while I’m sat around waiting to get where I’m going. It works too – I get quite a lot done on those long journeys (I used to watch a film or something, but now I’m way more productive).

The code view is very similar to that of most IDEs (it feels like it is modelled after Visual Studio).

Xamarin Studio Code View

Some filenames have been censored due to the nature of the code being worked on

The designer uses GTK# (a wrapper for the GTK+ library) version 3 for GTK enabled projects.

Xamarin Studio Designer View

Some filenames have been censored due to the nature of the code being worked on

What Do You Think of Xamarin Studio?

It’s really quite nice, and very similar to Eclipse and Visual Studio. I really do feel that users of both of those IDEs will get on well with Xamarin Studio.

However, and I’m not sure whether this is specifically my laptop or not, I’ve noticed a few crashes on opening Solution files. This tends to happen when I’ve got the GitHub GUI open at the same time as Xamarin Studio, so perhaps there’s a file lock issue?

I’ve noticed a similar issue with the GitHub GUI when I’ve got Xamarin Studio open: opening the GitHub GUI after making edits with Xamarin Studio (but having not closed Xamarin Studio), I’m told that I’ve made no edits to any files in the repository.

Plain Sailing?

Not really. I had a bit of a massive issue with compiling and running some Mono code using Xamarin.

When developing on Windows using .NET and C#, Windows does some smart things when creating an instance of the compiled program for execution. One of these is to figure out which system DLLs need to be loaded for the program to run. In non-Windows environments using Mono, this is slightly broken.

What’s meant to happen is that all of the Windows DLLs are mapped to Mono binaries that are compiled for the target OS (OS X Yosemite, in my case). However, this isn’t done automatically, which leads to a lot of instances of errors like this one:

Unhandled Exception:
System.TypeInitializationException: An exception was thrown by the type initializer for Gtk.Container ---> System.DllNotFoundException: gtksharpglue-2
at (wrapper managed-to-native) Gtk.Container:gtksharp_gtk_container_get_focus_child_offset ()
at Gtk.Container..cctor () [0x00000] in /private/tmp/source-mono-mac-4.0.0-branch/bockbuild-mono-4.0.0-branch/profiles/mono-mac-xamarin/build-root/gtk-sharp-2.12.21/gtk/generated/Container.cs:79
--- End of inner exception stack trace ---
at Gtk.Bin..ctor (IntPtr raw) [0x00000] in /private/tmp/source-mono-mac-4.0.0-branch/bockbuild-mono-4.0.0-branch/profiles/mono-mac-xamarin/build-root/gtk-sharp-2.12.21/gtk/generated/Bin.cs:15

After a LOT of Googling and a LOT of reading about how Mono handles DLL mapping, I found a really good and concise answer on Stack Overflow (one of my favourite websites, ever – seriously, there’s a Stack Exchange for every possible subject):

http://stackoverflow.com/a/15655635

The extremely short version is that there needs to be a shell script in the binaries folder which calls the mono runtime with the compiled program as an argument, while also ensuring that the correct Mono libraries can be found on the dynamic library search path (falling back to /usr/lib) before running it. The shell script needs to be run for the compiled binaries to run correctly.

In case the answer is ever removed, or the link doesn’t work for some reason, here is a copy of the shell script that needs to be written:

#!/bin/sh
export DYLD_FALLBACK_LIBRARY_PATH="/Library/Frameworks/Mono.framework/Versions/Current/lib:$DYLD_FALLBACK_LIBRARY_PATH:/usr/lib"
exec /Library/Frameworks/Mono.framework/Versions/Current/bin/mono ./binaryNameHere.exe

Swapping out binaryNameHere for the name of the binary to run, obviously.

Platypus can be used to wrap all of that up into a native .app file, too. So that’s cool.

Anything Else?

You’re quite right, I’ve rambled on for long enough as it is.

Oh, one last thing before I go: I’ve been thinking about getting one of the CODE keyboards [LINK], because I keep hearing great things about mechanical keyboards. Although I have used them in the past (all of my early computer experiences include mechanical keyboards), I’ve not had the chance to try one properly as an adult.

Anyway, I’ll leave it at that I think. We’re getting close to 1300 words, which won’t be fun for you to read I guess.

Until next time, have fun!

Run Away screen shot

Run Away – The Game

I’ve been working on a JavaScript game for a while and thought I’d show a (very early) screen shot and discuss it a little.

What does it look like?

Run Away is still in early graphical development (because I suck at designing assets), so I’ve made use of some freely available graphical assets.

Run Away screen shot

An extremely early version of Run Away, using freely available graphical assets

In the screenshot, the score can be seen in the upper left corner, the game over message in the centre of the screen, and the hero (having been caught by the monster) in the lower centre of the screen.

How do you play Run Away?

Currently, the only game mode is a survival mode, of sorts. The aim of the game is to not let the monster capture the hero. The hero becomes fatigued and slows down as he runs away from the monster, and must stop occasionally to catch his breath.

The monster will keep chasing until it catches the hero, or becomes fatigued. When the monster is fatigued, it will slow to a crawl in order to catch its breath.

That’s it, basically.

How does it work?

The game is rendered on an HTML5 canvas, and the game logic is written in JavaScript. The use of HTML5 means that support for versions of Internet Explorer before 9 is non-existent (loading the game in an earlier version of Internet Explorer will give a blank screen), but I’m OK with this as Microsoft no longer support those browser versions.
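To give a flavour of how the fatigue mechanic hangs together, here’s a heavily simplified sketch (not the actual game code; it assumes a canvas element with the id "game", and all the names and numbers are made up):

// The monster sprints but tires quickly; the hero is slower but has more stamina.
var canvas = document.getElementById('game');
var ctx = canvas.getContext('2d');

var hero    = { x: 50,  speed: 3, stamina: 100, resting: false };
var monster = { x: 300, speed: 4, stamina: 60,  resting: false };

// Running drains stamina; once it runs out, the character slows to a
// crawl until they have caught their breath.
function applyFatigue(character) {
    if (character.resting) {
        character.stamina += 2;
        if (character.stamina >= 100) { character.resting = false; }
    } else {
        character.stamina -= 1;
        if (character.stamina <= 0) { character.resting = true; }
    }
}

function currentSpeed(character) {
    return character.resting ? 0.5 : character.speed;
}

function update() {
    applyFatigue(hero);
    applyFatigue(monster);

    hero.x -= currentSpeed(hero);       // the hero runs away (leftwards)
    monster.x -= currentSpeed(monster); // the monster gives chase

    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillText('Distance survived: ' + Math.round(50 - hero.x), 10, 20);

    if (monster.x <= hero.x) {
        ctx.fillText('Game over!', canvas.width / 2, canvas.height / 2);
    } else {
        window.requestAnimationFrame(update);
    }
}

window.requestAnimationFrame(update);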

When will it be released?

As with my website redesign announcement from yesterday, it’ll be done when it’s done. However, I will be asking specific folks whether they’d be willing to user test it for me.

Once I’ve done a private beta test, I’ll post an entry announcing a public beta test. Keep your eyes peeled for an announcement folks.

Until then, stay frosty.

J

Landing Page Redesign

So I decided to redesign my landing page.

The one found over here: [LINK]

Why?

When I originally wrote it, I didn’t have a great knowledge of HTML, CSS and the like. However, since then I’ve been improving my knowledge and skills with those languages, JavaScript and responsive site design.

Responsive?

Responsive web design has been a big thing for a while now. A responsive website is able to respond to changes in the viewport used to view it.

Imagine you’re looking at a website on a mobile device (say a tablet computer): you want the website’s design to make the most of the features available on the device you’re using. You also want the website to look different in landscape and portrait views.

This is what responsive web design means.

Where is it?

I’m still working on the code and design at the moment, but I’ve made it all freely available on my GitHub profile. There should be a link to my GitHub account in the side-nav; check out the project called “Website” and you’ll see what I’ve done so far.

Depending on where you’re reading this, the side-nav might not be there. If there’s no side-nav, then here’s a handy link to my GitHub: [LINK]

It’s still early in the project – and I’m only building a landing page – so don’t expect fantastic features just yet. They’ll happen… eventually.

What’s the plan?

My blogs are responsive and so is GitHub, so it seemed silly to have a badly designed and unresponsive landing page/central hub for those things.

Plus, I want to bring together my blogs, GitHub, wiki and social links into one place. The current page has only two of those things (blogs and GitHub), provided as links via a collection of HTML5 canvas elements – meaning that the site won’t work in older browsers.

When will it be live?

I don’t really like this quote, but it fits perfectly:

It’ll be done when it’s done

– 3D Realms, Blizzard, John Carmack, Gabe Newell, etc.

Once I’m happy with the design and layout of the site I’ll push the code to the server. It’ll be fun to see whether I get any more hits (probably not). Only time will tell, I guess.

I’d better get back to it, I guess.

J

Let not the wheels of friendship rust from want of lubrication

Managing My Projects

I recently went on holiday and I spent part of my time there relaxing by the pool, taking in some sun and watching the world go by. It was extremely relaxing, and I should do it more often – my last holiday before this one was in 2008.

Let not the wheels of friendship rust from want of lubrication

Right back at ya, street face.

My time relaxing permitted me to take a look at managing my projects from a different angle: I’m going to manage them semi-professionally.

I may not be writing code all the time, but I hardly ever stop thinking about it.

It used to be that my personal projects would go one of three ways:

  1. I would work feverishly, often into the wee hours of the morning, several nights in a row until they were completed
  2. I would put in a few hours here and there, throughout the week, eventually finishing off the project.
  3. I would start them, work on them for a little while, then pause progress on them for a while. Most of my big projects [Naze Besto, MediCare, etc.] took this route

Well, in the words of a character from a TV show that I don’t watch: “Not today”.

… Err. I might have stretched that a little far.

Managing Projects?

That’s right. I’m going to be managing my projects from this point forward.

I’m modelling most of my management on the processes that exist where I currently work. We use a combination of Atlassian’s Jira (a Scrum and Agile board simulator) and 10,000 ft (a time management system). I shan’t be using either of these pieces of software, for the simple reason that they offer a LOT of functionality that I don’t require.

They’re both fantastic systems, but I’m not looking for a set up that has as many bells and whistles as they do. Plus, I’m looking to do this on the cheap as it’s an experiment to see how well I can regiment my time. That’s why, after a lot of looking around and consulting several comparison sites, I’ve settled on Target Process.

The bonus with Target Process is that I can pay to upgrade to allow more team members (I’m currently limited to 5 in the free version) if my projects ever get to a stage where it’s no longer just me working on them, and it’s used by a whole bunch of big name companies (Cisco and Vaio, to name but two).

I’ve not managed to add many of my projects at the time of writing, but I’ll post a screenshot in a later blog post once I’m off to a start. Plus, I’ll be able to summarise my weekly/monthly progress on each project in a blog post – I wouldn’t want to leave my wonderful reader base without updates on how I’m doing. I feel like I should write more anyway. I mean, looking back at my post history, there are HUGE gaps of time that I want to try and make up for – at the very least, so that there’s a log somewhere of what I’ve done with my spare time during my life.

It’ll give me a great chance to be honest with myself and see how many active projects I have (I’m not sure I could count them all if I tried; which is not to say that I don’t know how many projects I have, but more a case of them being so far-reaching that most of them can be, and are, split into smaller projects), and see which ones can take priority or which ones can be put on the shelf for the time being (if any).

Pro tip to any aspiring programmers: If you want to be taken seriously in the industry, you’ve got to be able to manage your time well, and make good estimates of how long tasks will take.

So far I’ve started adding my projects from GitHub and I’ve seen a lot of possible work piling up in front of me, but I’m not worried by it all. It’s just another challenge to overcome, you know?


It doesn’t look like much, but two of these projects are just names, and one has only just left the planning stage.

The way I see it, I’ll be able to plan my personal project time a lot better using Target Process. Whether I stick to that plan, however, is a different story. You can plan all you like, but if you don’t stick to it, things will go very wobbly very fast.

Anyway, I’d better be off; I’ve got planning to do. I’ll catch you all on the flip side.

Career Check-in: 2.5 Years

So, I’ve been working as a software developer for 2.5 years now. I’ve learnt quite a lot of things along the way, and I wanted to write about the single most important thing that every software engineer NEEDS to use throughout their career – whether that’s a “professional in an office cubicle, engineering software for commercial purposes” type or an “open source all the things” type (or anything in between).

Revision Control

To be fair this could be useful for almost any work that takes place using a computer.

We’ve all been there (and if you haven’t, then I’m sure that you will one day): you’ve just spent 7 hours writing some update to a piece of software – it’s a big update, fixing a lot of problems. You hit compile. You wait for the compilation process to finish. Then you hit debug and the compiled code runs.

Except that it falls over, massively. Errors everywhere; the screen goes blank; your computer reboots.

Damn.

Since you’ve had to save your code before building it, you’ve now lost the original (pre-changes) code.

Blast and damn.

Revision Control (or version control, or source control) is a type of application which keeps track of all the changes that are made to a given set of files – these can be code listings; documents; image files; anything really. Almost any type of file can be tracked by revision control.

To be honest, you can achieve version control manually by copying the files before you make any changes. The problem with this is that it’s manual, difficult and boring. For each backup you make, you have to come up with a new name for the backup (say a time stamp), and some description of the changes made to the code base that necessitated the backup – otherwise you have no idea why you backed up, or when you backed it up.

But, why not have a computer do that for you?

By having revision control software do that for you, all you need to do is come up with a list of changes that you’ve made and tell the revision control software to store those changes. You can even get the old versions of the files back.

I’m seriously surprised that most universities that offer computer science related courses don’t teach revision control, usually because it’s assumed that all CompSci students already know about it. I did, but MANY on the degree course I took didn’t.

But… There are lots of different ones. Which revision control system is the best?

That’s an impossible question to answer. I mean, they all do exactly the same thing, but in slightly different ways.

For instance, GitHub stores check-ins of code (the backups) on a web server. The code listing is publicly visible (unless you pay for private storage), which helps to foster open source development. GitHub actually runs Git (a revision control application originally written by Linus Torvalds) behind the scenes, and communicates with it to perform the check-ins of code.

Whereas SVN (or Apache Subversion) has the same idea, but the repository can be local or remote – the check-ins are stored either on the local machine (the one you’re using to write the software) or on a central server (perhaps in some other country). Systems like Git, where every copy of the repository carries the full history, are called Distributed Version Control Systems, or DVCS.

Almost all revision control software can be accessed via a GUI, the command line or through your IDE. Each system has its own set of commands, but the main commands will be variations on:

  • Create a repository (where the code base for a single solution will be stored)
  • Initialise a repository (create the folder structure, set a list of file types to be ignored, etc.)
  • Perform a check-in (copy the changes to the repository)
  • Perform a check-out (get the latest changes from the repository)
  • Perform a branch (create a separate copy of the current version of the code base)
  • Perform a merge (merge the differences between two branches of the same code base)

Learning how to do these things with code that already exists is a terribly important thing that all software engineers should do. You’ll have to learn to use them when you get into the industry anyway (even if you’re self-employed or release everything as open source). And without them, your résumé/CV is going to look a little weak. After all, it’s one of the basics of software engineering; right up there with polymorphism, encapsulation, recursive programming and program flow control (if..then..else, for, while, etc.).
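To make that concrete, here’s roughly how that cycle looks with Git on the command line (other systems use different commands, but the shape is the same):

git init                               # create and initialise a repository
git add .                              # stage your changes
git commit -m "Describe what changed"  # check the changes in
git pull                               # grab the latest check-ins (if you have a remote copy)
git checkout -b some-feature           # branch off to try something new
git checkout master                    # ...and back to the main line
git merge some-feature                 # merge the branch back in
git log                                # review the history of check-ins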

Here are some links to get you started on the road to revision control mastery:

  • The Wikipedia entry for Revision Control can be found here: [LINK]
  • OSS -Watch has a nice write up on revision control here: [LINK]
  • Smashing Magazine weighs up the pros and cons of 7 of the big revision control programs here: [LINK]
  • Git has a great explanation of revision control on its website, here: [LINK]
  • There’s a great explanation from the SVN book here: [LINK]

Hopefully, this will help someone. Remember: check in little and often and you’ll have no worries.

I can’t tell you how many times I’ve spent days writing some software, fixing some bug or other. Then when I’ve checked it in, the whole thing comes crashing down on account of some other part of the code base changing – colleagues kept making changes and now nothing works. Had I checked in sooner, it might have been avoided. However, the beauty of revision control is that you can get an extremely detailed view of what’s changed.

Until next time, stay frosty.

J

A Really Short Post

Morning all,

Before others point it out, I know that the RSS feed widget (over on the sidebar, there) constantly says:

An error has occurred; the feed is probably down. Try again later.

I’m trying to track down the source of this error and fix it. It seems to be a problem with the relationship between the RSS feed coming in from GitHub and the RSS feed widget for WordPress. I’ve posted a question in the support forums, but it hasn’t had an answer yet.

I know that the RSS feed from GitHub itself is fine, because FeedBurner tells me so here. [LINK]

Until I figure this out, the widget will continue to report the error. Sorry, guys.

J
