I have edited this post on the above date. However, the only change I have made is that I am now hosting the source code (found at the end of this post) on GitHub. This means that if I need to make a change to the code, I do not have to edit this post again – the post will pull the latest version from the relevant page on GitHub whenever a reader accesses it.
What of What?
A few months back, I was thinking of applying for a job relating to video technologies. The company wanted someone with “experience working with video compression techniques with regards to software development.” As with everything relating to software development, this got me curious. How does video compression work? Why do we need to compress video? What are the most widely used compression techniques?
I’m only going to answer (and partly, at that) the second question in this post:
Why do we need to compress video?
This is simple: otherwise, we would run out of storage space for digital video. I’ll explain.
OK, so storage space is slowly becoming less of a problem (I still remember my first 1GB hard drive, back in the days of 3.5″ floppy discs, well before USB memory sticks. Before USB, even). But that’s only because we’re gradually increasing the quality of the footage we shoot. It wasn’t so long ago that the first 3D cameras came out; before that it was HD cameras, and before that, digital cameras.
Of course, I’m talking about cameras that are dedicated to shooting video here, but the same principle applies to digital stills cameras, too.
Our need to take pictures, static or otherwise, at higher and higher resolutions is creating a greater need for more storage space and better, more efficient compression methods. Eventually we’ll get to a point where we can capture near-to-real-life resolution, and the storage space needed will be astronomical (by today’s standards), but that’s more than 20 years off at the very least.
Why Does Video Take Up So Much Space?
Do you remember being taught that animation (of any kind) is a bunch of still images played together in a sequence? The images are displayed, one after the other, quickly enough that our brains put them together as one moving image – usually at around 24 – 60 images every second. That means that when you’re watching a piece of video footage, you’re actually watching 24 – 60 images every second.
Animation is actually seen as a “mind hack” by those in the neuroscience (brain science) community, but since it’s so commonplace now, many laypeople (i.e. those not connected with neuroscience) don’t think of it as such.
Each of those images has to be stored somewhere and, depending on the resolution – the number of pixels (Picture Elements, the individual “dots” that make up the picture) – that can be anywhere from 20Kb to 15Mb per image.
Two things to note here:
- When talking about raw image data, sizes are often given in bits rather than bytes, because colour depth is specified in bits per pixel. For reference, when stored as text, the character “a” takes up 1 byte of memory in a computer, which is 8 bits.
- I’m not including audio here, as the audio is stored as a separate stream in video container formats.
With, on average, 25 images per second of footage, and each image taking up to 15Mb of storage space, you’re going to run out of space very quickly.
Well, suppose that you’re taking video at 640 pixels wide by 480 pixels high with a colour depth of 24 bits (that means 8 bits representing each of the colours Red, Green and Blue). That works out at 7,372,800 bits – roughly 7Mb, or about 0.9MB – per image.
As an aside, this is related to what the phrase “megapixel” means: it is a measure of how many millions of pixels a stills camera captures when you hit the shutter button. So a 640 by 480 image, at 307,200 pixels, is only about 0.3 megapixels – regardless of its colour depth. But that’s just confusing the matter.
At 25 images per second, a full minute of footage would take up about 11 billion bits – roughly 1.3GB of storage – and a full hour around 77GB. That’s more than nine dual-layer DVDs for a single hour of video footage. Again, this isn’t counting the audio.
And that’s less than standard definition, too!
Understanding is Key
In an effort to understand why video compression is so vital to film production, I did what I do best.
OK, second best.
All right, third best.
Is that even a term?
I’ve written a very short program that calculates how much storage space is required for a video taken at any resolution, colour depth, frame rate and length. All of these variables are provided by the user.
It’s a really simple bit of code, so there are no checks on what the user is entering – it’s assumed that the user is entering valid data. As long as they do, the program will calculate the amount of storage space required for video footage taken at the resolution and colour depth provided.
It’s written in C, one of the most powerful programming languages there is, and my second favourite (after C++). It should be easy to read whether you’re a programmer or not. It’ll need a little extra code to run, dependent on your OS environment, and you’ll need to compile it, too.
At the minute, I’ve commented out the code for user input, because the code is in its initial testing stages. For now, if you run the code, it’ll tell you how much storage space is required for a video taken at 640*480 pixels, at 24-bit colour depth, at 25 frames a second, with a total duration of 1 hour.
Give it a go, and tell me what you think.
For those who don’t write software or can’t program, here’s a breakdown of what it does:
- 640 (width) * 480 (height) = 307,200 (pixels per frame)
- 307,200 (pixels per frame) * 24 (colour depth) = 7,372,800 (bits per frame)
- 7,372,800 (bits per frame) * 25 (frames per second) = 184,320,000 (bits per 1 second of film)
- 184,320,000 (bits per 1 second of film) * 3600 (number of seconds in 1 hour) = 663,552,000,000 (the size of the resultant video file in bits)
- 663,552,000,000 bits ÷ 8 = 82,944,000,000 bytes, which is roughly 79,101.56 MB (about 77 GB) for 1 hour of video.
- This doesn’t take into account the audio