Recipe of the week: Freshly baked MacBook Pro

A panicked friend came to me yesterday with a broken 15″ MacBook Pro (Early 2008).

I set the thing down on my desk and hit the power button. The hard drive and optical drive made a bit of noise, the power light glowed faintly, then CLUNK – and it was off again. No glowing Apple, no hope of booting into Recovery Mode.

He repeatedly mentioned that it “must be a hard disk issue”, so I swiftly replaced it, knowing in the back of my mind that it was unlikely to fix anything. After this I reseated, then replaced, the RAM, methodically removed various other components, ensured that no connectors had come loose, reset the PRAM/SMC, and pressed all the obscure power-on key combinations I could find mentioned on the internet – but still nothing.

After a few hours, and after transferring the data from the supposedly borked hard drive onto a portable USB disk, we pronounced the machine dead and went off on an adventure to the city to find him a new work machine.

Some people would stop there, but after seeing it mentioned in a few places, and having nothing to lose, I thought it’d be worth seeing what would happen if I put the thing in the oven for a little while.

Cue the unscrewing, de-clamping montage, after which I was left with the logic board, cleanly polished of all thermal paste.

After pre-heating the oven to about 190°C, I placed the logic board in, CPU-side up, and let it sizzle for about 9 minutes, as recommended by another online source.

After letting it cool on a cake rack for around 20 minutes (my housemate laughing the whole while about how ridiculous this all seemed), I proceeded to put everything back together piece by piece.

With the battery back in, power cable connected, came the moment of truth.

Maniacal laughter ensued.

Almost 24 hours in and it’s still running like a new one.

I have no doubt in my mind that this is only a temporary fix, and if you do try this at home (which I generally wouldn’t recommend) YMMV.

Node.js vs Java Play! Framework

After a somewhat heated Facebook conversation with a friend about the ups and downs of various web frameworks, I was inspired to benchmark a simple “Hello World” to compare the speed and resource usage of Node.js and the Play! Framework (using Java).

We’re mostly Perl guys ourselves – my go-to tool for web wizardry is generally the awesome Catalyst MVC framework – though I can’t say the same about my friend.

All the tests below were completed on my laptop – an Intel Core i5 M480 CPU @ 2.67GHz (2 cores, 4 threads) with 8 GB RAM, running Ubuntu 13.10.

I’m comparing node.js v0.10.15, Play 2.2.1 built with Scala 2.10.2 (running Java 1.7.0_51) and using ApacheBench 2.3 in the following benchmarks, monitoring system resource usage using Dstat 0.7.2.

I know it’s not a very exhaustive set of benchmarks: the ApacheBench command I ran simply made 100,000 requests with a concurrency of 1,000.

~$ ab -r -n 100000 -c 1000 <url>
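For the resource-usage side of things, Dstat was left logging in another terminal. The exact invocation below is illustrative rather than gospel, but something along these lines records CPU and memory figures to a CSV once per second:

~$ dstat -cm --output dstat-results.csv 1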

Ok, now for the fun stuff.

The Play! Framework

After creating the basic application template by running

~$ play new hello

I simply changed app/controllers/Application.java to contain the following:

package controllers;

import play.*;
import play.mvc.*;

import views.html.*;

public class Application extends Controller {

    public static Result index() {
        return ok("Hello World");
    }

}
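To serve the app up for the benchmark, running something like the following from the project directory should do the trick (play run works too, though that leaves you in development mode):

~$ play start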

And the results?

Concurrency Level:      1000
Time taken for tests:   8.764 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      9100000 bytes
HTML transferred:       1100000 bytes
Requests per second:    11410.73 [#/sec] (mean)
Time per request:       87.637 [ms] (mean)
Time per request:       0.088 [ms] (mean, across all concurrent requests)
Transfer rate:          1014.04 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   72 314.7      0    7016
Processing:     0    9  23.5      5     883
Waiting:        0    9  23.3      5     882
Total:          0   81 322.1      6    7039

Percentage of the requests served within a certain time (ms)
  50%      6
  66%      8
  75%     10
  80%     11
  90%     23
  95%   1004
  98%   1011
  99%   1025
 100%   7039 (longest request)

Not too shabby if you ask me.
11,410.73 requests per second, and 8.764 seconds to complete the test.

So obviously node.js can do better, right?

This one was much simpler to get up and running. My hello.js is as follows:

var http = require('http');

http.createServer(function(req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.write('Hello World');
  res.end();
}).listen(8080);
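Getting it up and serving is a one-liner, with ab then pointed at port 8080 (the localhost URL below is just for illustration):

~$ node hello.js
~$ ab -r -n 100000 -c 1000 http://127.0.0.1:8080/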

Show me the money!

Uh oh, this isn’t quite what we’d both hoped for from the fabled node.js…

Concurrency Level:      1000
Time taken for tests:   17.967 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      11100000 bytes
HTML transferred:       1100000 bytes
Requests per second:    5565.76 [#/sec] (mean)
Time per request:       179.670 [ms] (mean)
Time per request:       0.180 [ms] (mean, across all concurrent requests)
Transfer rate:          603.32 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  124 809.2      0   15036
Processing:    14   44  34.8     45    1782
Waiting:       12   44  34.8     45    1782
Total:         19  168 816.8     45   15299

Percentage of the requests served within a certain time (ms)
  50%     45
  66%     47
  75%     51
  80%     53
  90%     59
  95%   1044
  98%   1059
  99%   3050
 100%  15299 (longest request)

So node.js handled 5,565.76 requests a second, and took a whopping (comparatively speaking) 17.967 seconds to complete the benchmark. Like I said, not the stunning, life-changing, backend-rewrite-worthy results we’d been hoping for.

Then came the brain-fart moment.

Node.js was only using one core, whereas, according to this bit of the Play! Framework documentation, “[The default Play configuration] instructs Akka to create one thread per available processor, with a maximum of 24 threads in the pool.”

Well, that’s hardly a fair test, is it?

Now for the real test – C’mon node!

After a few tweaks to the original node script, and after reading about the Cluster module, I was left with the following:

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  console.log('Created ' + numCPUs + ' processes');

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  // Workers can share any TCP connection.
  // In this case it's an HTTP server.
  http.createServer(function(req, res) {
      res.writeHead(200, {'Content-Type': 'text/html'});
      res.write('Hello World');
      res.end();
  }).listen(8080);
}

Surely now node.js can live up to those expectations of ours, right?

Concurrency Level:      1000
Time taken for tests:   9.502 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      11100000 bytes
HTML transferred:       1100000 bytes
Requests per second:    10524.19 [#/sec] (mean)
Time per request:       95.019 [ms] (mean)
Time per request:       0.095 [ms] (mean, across all concurrent requests)
Transfer rate:          1140.81 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   48 234.2      1    3011
Processing:     0   45  32.6     37     548
Waiting:        0   43  32.6     36     548
Total:          0   92 241.8     42    3302

Percentage of the requests served within a certain time (ms)
  50%     42
  66%     55
  75%     65
  80%     71
  90%     96
  95%    139
  98%   1059
  99%   1092
 100%   3302 (longest request)

Wahoo! This time it only took node 9.502 seconds to get through all of those requests, and it managed to handle a fairly decent 10,524.19 per second.

Still a teeny bit shy of the ever-impressive Play! Framework, but now let’s take a look at the system load all this testing generated.

They’re both quick, but how about their system resource usage?

The first graph here shows CPU usage throughout the testing. The Play test came first, followed by the basic node.js test and lastly the clustered node.js server test.

CPU Usage - Node.js vs Java Play! Framework

Hold up a second – it looks like I ran six tests, doesn’t it?
Ok, so I did. Play! had to compile the application the first time I ran it, so I chose to run each of the three tests twice, and to share the results from the better-scoring run. IMHO it seemed fair enough at the time; if you don’t feel that way, let me know and I’m happy to run them again.

As you can see, and somewhat surprisingly, once the Java has been compiled, Play’s CPU usage is much the same as the clustered node.js example’s.

Memory Usage - Node.js vs Java Play! Framework

In the same fashion as the first chart, this is the Play Framework test, followed by the basic node.js example and finally the clustered node.js server. As you can see – and this time not so surprisingly – Java gobbles up as much RAM as it can, while node.js gets the task done with minimal extra usage.

So the final verdict?

If memory usage is of great concern to you, it would seem that node.js is the way to go. If not, and you perhaps prefer Java development, then Play may be your pick (though from what I’ve seen so far, the documentation isn’t all it could be).

If you like being handed a basic MVC-architected application framework with which to build your application, then there’s another brownie point for Play. If rolling your own is more your style, then perhaps node.js is for you.

And before you go and hate on me…

I know that this is hardly a real-world use case. Most applications involve at least some database operations, caching, front-end proxies (whether via Varnish/nginx/NoSQL stores/a CDN or other means), as well as much more complex logic and other moving parts.
However, as a simple test of the minimum overhead of these two tools, and a way to see the bog-standard number of requests you should be able to push out of them, I feel it’s fairly fitting.

Also, if you look at this as somewhat of a guide as to how easy it is to get set up with node.js or Play, you’ll notice that Play comes with a fairly well tried and tested MVC feel about it, and gives a bit more of a predefined structure to your application – granted you may or may not want this.

And one last thing before I go: yes, this was heavily inspired by Maciej Zgadzaj’s post here.

Finally Bit the Bullet

So, it’s happened. After a long time of falling into disrepair, the spaghetti code that was once behind this website has been swept under a rug (git commit; cd ../; rm -rf blog), and replaced with WordPress.

Hopefully this means that it’ll be easier for me to manage the content on the site, and will also inspire me to write a little more. Perhaps not – only time will tell.

Busy Busy Busy; or, VPSMon got an update, Hanzi Pal is most systems go, and some other cruft

An upgrade to VPSMon, the launch of Hanzi Pal‘s beta website, and a few additions/modifications to my ever-growing, ever more complex little online infrastructure could have you saying I’ve been quite busy as of late.

You’re probably right.

Just wanted to take a moment to thank the growing community of users lurking behind VPSMon for your continued words of praise, bug reports and feature suggestions/requests. What I thought would be a quick, hacky weekend project to keep an eye on this server has turned into a bit of a saga to say the least.

The latest quest in this seemingly never-ending mission of mobile server monitoring has given birth to another little side project. She doesn’t have a name yet, but is basically a little service that runs on your Linux-based machine and responds to HTTP requests with some nice statistics about your server.

And guess what – it has a mostly-SolusVM-compatible API, which means that the next update of VPSMon hopefully won’t just be for VPS servers, but for all machines running Linux of some description.

Anyway, if you’re interested enough to find my email and request beta access, go for it – I’d love to hear from you.

I’ve also had the pleasure of working with a few of the guys from the Footscray Maker Lab, but that, my dear blog-goers, is another story.

And on that note, Peace.

LNPPCF vs. LAMP

Every day, it seems, someone is writing introductions and reviews about how awesome the so-coined LAMP (Linux, Apache, MySQL & PHP) stack is. I too was once a lover of this choice of software for getting things online; however, over the past few years I’ve slowly replaced most of those pieces with other free alternatives and haven’t looked back for a moment. This website, along with most others that I look after nowadays, is running what I guess you could call a LNPPCF stack. That is: Linux, Nginx, PostgreSQL and Perl with Catalyst and FastCGI.

My reasoning behind this? Well, for starters, Postgres is very much alive in terms of regular releases, has a huge following, and offers ample resources online for learning and working with it. The official docs are also second to none, but should you still be having trouble getting your head around why your queries are running slowly, or not at all, there’s always the #postgres channel on FreeNode, where people are generally more than willing to share their expertise and opinions (and, in the traditional IRC manner, flame your ass if you’re being an ubern00b).

Postgres also doesn’t allow you to do Stupid Shit™. A friend and I were having trouble with data in a MySQL database of ours disappearing or being truncated for no obvious reason. The culprit was a VARCHAR column with a length limit set. MySQL was simply truncating anything we tried to store in this column and then going on its merry way. Postgres in the same situation essentially blows up with a (somewhat helpful) error message and doesn’t commit the transaction to the database. IMHO this is a much better outcome – I don’t want to assume all my data is stored in full and safe, only to find that half of it has been thrown away by my DBMS. It can also point out flaws in table definitions, the database structure or the code interfacing with the DB.
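Here’s a minimal sketch of the difference, using Perl’s DBI (the table, column length and connection details are all made up for illustration):

use strict;
use warnings;
use DBI;

# Hypothetical table: CREATE TABLE users (name VARCHAR(5));
my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '', { RaiseError => 1 });

# Postgres refuses the oversized value and dies with something like
# "value too long for type character varying(5)" - nothing gets stored.
# MySQL in its (then) default non-strict mode would instead silently
# truncate the value to 'Alexa' and report success.
$dbh->do('INSERT INTO users (name) VALUES (?)', undef, 'Alexander');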

As for my choice of Nginx over Apache, it’s fairly simple: it’s powerful, fast, light and modular. The config files are laid out in a saner format (perhaps it’s the fact that they mostly resemble a big Perl-ish data structure), the docs are extremely helpful, and most, if not all, things that Apache is capable of are achievable with relative ease, perhaps more easily.

My choice of Perl over PHP was honestly the biggest hurdle to overcome. Its sometimes-cryptic syntax took a while to come to terms with, but again, Perl has an amazing community lurking behind it, and for a beginner it shouldn’t be too hard to get up and running with a little practice and some guidance and support from the veterans out there. I think the biggest benefit of Perl is CPAN. It’s a goldmine of code which you can take advantage of. With a big push towards code reuse, and not reinventing the wheel for each menial task you have to deal with in your daily programming adventures, there’s bound to be something on CPAN to help you out. It’s also great when you need to achieve task X but aren’t quite sure of the intricacies required to implement a solid solution – if you’re lucky, you’ll find a module that has been developed, tried and tested by the community (which may or may not have great documentation – YMMV there) to help you along the way. CPAN puts PHP’s PEAR to shame.

There is also an ever-growing collection of awesome web frameworks out there for Perl. Catalyst, Mojolicious, Dancer – the list goes on, as do the reasons why you’d choose one over another. Personal preference sees me using Catalyst for most projects, though I’ve dabbled with some of the others (namely Mojolicious) for smaller ones.

Then comes FastCGI, which basically ties my Perl projects nicely into Nginx and gets everything up and running online. Most of the Perl web frameworks out there come with some form of inbuilt testing and development server, but so far FastCGI seems to be the best route to go down in terms of actual production-ready serving of “stuff”.
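To give a rough idea of the wiring – MyApp and the socket path below are stand-ins, not a real deployment – the FastCGI script that Catalyst generates can daemonize a handful of worker processes listening on a Unix socket:

~$ script/myapp_fastcgi.pl -l /tmp/myapp.sock -n 5 -d

On the Nginx side, a location block pointing fastcgi_pass at that same socket (plus the usual include fastcgi_params) completes the loop.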

All in all, it really comes down to personal preference and what languages and technology you’re most comfortable and familiar with. But given the choice, why not explore something a little left-of-center for your next project? You may be pleasantly surprised by what you can achieve.

Convert bytes to readable units in Perl

I recently used code similar to that below in VPSMon, having come across a need to display bytes as a more human-friendly string in Perl. Basically, I’m building a script and want to convert the amount of available/free RAM on a machine from bytes to gigabytes for display to a user, but I figured I may as well cover all bases and have this around in case it needs to display other units/multiples in the future.

So, with a little bit of tweaking, here it is!

Pass bytes as the first argument, and optionally a truthy second argument if you’d rather multiples of 1000 than the default 1024.

use POSIX ();

sub bytes_to_human {
  my ($bytes, $si) = @_;
  my $mul = $si ? 1000 : 1024;   # SI (1000) or binary (1024) multiples
  return "$bytes B" if $bytes < $mul;

  my $exp = int(log($bytes) / log($mul));   # which prefix we need
  my @pre = qw/ K M G T P E /;
  my $pre = $pre[$exp - 1] . ($mul == 1024 ? 'i' : '');   # e.g. 'Ki' vs 'K'
  return sprintf("%.2f %sB", $bytes / POSIX::pow($mul, $exp), $pre);
}
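And a quick sanity check, with the expected output worked out by hand:

print bytes_to_human(512), "\n";          # 512 B
print bytes_to_human(1536), "\n";         # 1.50 KiB
print bytes_to_human(1_500_000, 1), "\n"; # 1.50 MB (SI multiples)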

Another year, another Android app

So what better way to kick off 2013 than releasing something to Google Play for all the other nerds out there.

The concept is simple – I wanna see live statistics about my SolusVM-based VPS servers, but the only other apps I found in the Play Store either didn’t do what I wanted, or didn’t work at all.

VPSMon is super simple, and gives you a quick overview of your VPS server’s disk usage and bandwidth, and its current memory usage. You’re also able to boot, reboot and shut down your machine at the touch of a button. This feature is kind of dangerous when you think about it, though – you let your son play with your phone for a few minutes, and he manages to shut down some mission-critical services. Ha. The next release will have optional password/PIN protection on these functions – it’s just a matter of plumbing it out.

Anyway, take a look at VPSMon on Google Play if you feel the need.

The World Didn’t End

It would seem that the world didn’t end on the 21st of this month, as predicted by the ancient Mayans, which means that tomorrow is still Christmas day, and that there is only one week left of 2012.

And man has it been a hell of a year. We’ve all seen a dramatic increase in the number of ‘memes’ scattered throughout our newsfeeds and timelines, danced way too many times to Gangnam Style, and decided that Aztec patterns are the new black. However, although these fads will eventually fade back into the nothingness from whence they came, the people I have met and relationships I’ve forged over these past 12 months will hopefully live on for years to come.

I’m not sure that I’ve achieved quite as much as I’d hoped to this year – but who does, really?

I’ve worked most weekends rather than dancing the nights away with the other lads, taught myself a little Java (and inadvertently found myself in a pickle thanks to my new skills), brought this blog to its current state from its meager beginnings as a lone, empty text file, and probably done a few other interesting things along the way.

I’ve done a little successful tech work here and there for people, applied for more jobs than ever, but mostly spent my time starting on projects and ideas that I soon forgot about or have shelved for a ‘rainy day’.

In the 12 months that follow this week, I’d like to change a few things about the way I tackle certain situations. This isn’t a New Year’s resolution – more a favour that I’m asking myself to follow through with. From here on, I need to finish what I start, I need to have more fun, and I need to let go of the things that are holding me back from achieving what it is I truly want from life.

We all have the ability to move mountains, we just need to find the determination.

On that note I’ll leave you to your own quiet contemplation about the year gone by, and wish you all the best for 2013.

J

Try not to piss off the cheeseburgers

So, as I’ve written around these parts, I released a simple app to Google’s Play store at the beginning of the month.

It was going fairly well, and starting to gain a fairly decent number of users, until I was forced to pull the plug about two weeks into its short-lived life. Unfortunately, it seems that the corporations don’t take kindly to outsider tech folk who are seemingly more competent than the ones they pay the big salaries to.

So anyway, despite being somewhat of a mood dampener, if anything it gives me the opportunity to take this awesome new knowledge I’ve found and run with it. There are a few more, less offensive, little projects in the works – the first of which I hope to have up by Christmas.

I’ve kinda given up on the Magento front for now, however I might pick it back up some rainy day soon.

Now, back to the code – it ain’t gonna write itself.

Twitter Bootstrap for Magento

I recently had a job interview for a Magento gig, as I’ve had a little experience with it in the past and thus it shows up on my CV. Sadly enough, I didn’t get the position; however, it has inspired me to get back into Magento development. As such, I’ve begun to port the awesome Twitter Bootstrap to work as a base template for any Magento themes I may create in the future, and also to remove its dependency on Prototype.js as the default JS framework, porting the existing code to jQuery where necessary.

While it’s still only in its infancy, I don’t think it will be too huge a task – I’m slowly getting my head back around the beast that is Magento, and taming its wild being.

I’ll perhaps post the finished result on BitBucket when I feel it’s in a more complete state, but for now it’s back to Vim…