Not Switching from Emacs to Vim

Following my last article about possibly making the heretical move to Vim from Emacs, I’ve been trying to use Vim as much as I can. I can now safely say that I will not be making the switch.

There are a number of reasons for this.

Firstly, and most importantly, I can’t quite make the conceptual leap to modal editing. My brain understands that I need to hit “Esc” if I want to move around, or “i” if I then want to insert something; however, my fingers completely ignore that fact. I just can’t seem to stop myself moving around and then starting to type straight away. Likewise, I can’t seem to stop trying to move whilst I’m writing. My pathetic intellect can’t cope with separating the two.

Secondly, the saying is true: “You don’t miss something till it’s gone”. I have discovered during my journey with Vim that I actually really like Emacs. I like the fact that it has a calendar in there so I can quickly see what date next Tuesday is, that I can have an IRC chat without leaving my editor, and that I can organise my life. I like the fact that there is Lisp behind it all and I can tweak it if I want, and that I’m getting pretty efficient with Emacs in my day-to-day work. And I love the fact that I’m not a newbie any more, but I still have loads to learn.

Finally, with the above two aspects in mind, I have been using Emacs in console mode exclusively since my epiphany of “stick with Emacs”. This way I get a better feel for how things would work if I decide to set up a VPS so I can work remotely. And you know what? It’s absolutely fine! Normally I use Emacs (GTK I think, not XEmacs), which is great, but in console I can’t use the mouse, and this is brilliant! (I hate rodents.) It doesn’t play so nicely with Tmux, but I’m using the shell within Emacs much more, and for what I need, it’s fine.

So, there you have it. Sorry Vim, I hardly knew ya. Hello Emacs, I’m back!

Switching from Emacs to Vim?

As my regular reader will know, I’m a heavy Emacs user (have been now for over two years). So why this heretical thinking then?

A number of reasons.

Firstly, I find I’m spending more and more time editing directly on a server. Obviously Emacs can handle this with Tramp mode.

Yes, that’s true, but it would be much nicer just to have the editor right there on the same system. I could install Emacs on the server, but for some reason I can never get the Meta keys mapping correctly, or Meta-Ctrl working, when using Emacs within a terminal.

Secondly, as I spend all day logged into servers via SSH, I’m spending an increasing amount of my time in Tmux. A result of this is that I’m starting to confuse the key bindings in Emacs with those in Tmux. This is not great.

Thirdly, and this is more to Tmux’s credit than Emacs’s failure, I love Tmux’s sessions. I currently have a central server that I connect to. On that server I have a number of Tmux sessions running. I simply love being able to open up my “live” session and carry on where I left off. This is especially useful for me as I have three workstations – home, work and laptop. The downside of this is that when I don’t have internet access, I can’t get to my sessions.

Finally: because I want to learn Vim. I may not move over to it full time, if at all, but it would be good to know what “the other side” is really like. It may be crap. I don’t know.

But I want to find out.

Looking for an UltraBook

It looks like I’m going to be doing a lot of travelling in the near future. This is all well and good, but my current laptop (a Sony Vaio VGN-Z) is starting to creak, and has quite a low battery life. This is unfortunate, as I really like my Vaio; the screen resolution is good, the weight is fantastic (roughly 1.6kg), and the keyboard is a pleasure to type/code with.

So I have a couple of options. I could hack about with my Vaio and try to increase performance and battery life. Or, I could buy a new laptop.

For the first option it looks like the 3.2 kernel will offer battery life improvements, so I could have a go at upgrading to this. I could try installing a different flavour of Linux, one designed for increased battery life, such as Fuduntu, or even a heavily tweaked Arch, but I had a nightmare last time I tried Arch on the Vaio.

If none of these work then I’ll have to look into getting a new laptop. I have some strict criteria for choosing the laptop:

  • Must weigh the same or less than my Vaio (1.6kg)
  • Must have a long battery life (6 hours+)
  • Must be able to run Linux (Ubuntu or Arch) with minimal tweaking

I’ve been doing some research and the best two seem to be the MacBook Air 13 inch and the Dell Latitude E6220.

1. Dell Latitude E6220

This comes in at the same weight as my Vaio but with a lower screen resolution. However, it is Ubuntu Certified (whatever that means—I assume it’s good), and best of all, the battery life is roughly 7.5 hours. The price is roughly £1,000 including UK VAT.

2. MacBook Air

The weight of this thing is just silly, and with a 7 hour battery life you can’t complain. However, it’s a Mac! I know I’m going against the grain, and my father would disown me if he ever saw this sentence, but I don’t really like Macs. Don’t get me wrong: they look great, and they are far superior to Windows, but there are some irritating features about them that completely throw me. However, upon some research it appears Ubuntu is installable after some tweaking.

I went through the config screens for both the Apple and the Dell to see if I could get the specs as close as possible to compare the price, initially assuming that the Mac would be far more expensive. It turns out that they are pretty much neck-and-neck.

£1.00 per month for 36 months to install an outdated version of Firefox.

However, whilst selecting the software to install on the Dell, as well as the mandatory shitload of bloatware which couldn’t be “Not Included”, there was the option to install Firefox 3.06 at a cost of £7.00! For fuck’s sake, Dell! Charging seven pounds to install an out-of-date browser?! Actually, it gets worse. You can also opt to pay by instalment: just £1.00 per month over a period of 36 months!

Dell, I’ve scratched you off my shortlist.

So, it looks like I have more research to do, some of which includes:

  • Asus Zenbook UX21E
  • Lenovo IdeaPad U300s

The search continues.

I’ve Turned into a Hipster

Hipster Dog

I’ve seen a day I never thought would come: the day I turned into a hipster!

If you’ve been following this blog, you’ll have noticed that throughout 2011 I’ve been working more and more in JavaScript, more specifically with Node.JS.

The last few months have been especially Node.JS-centric. During the day, I’ve been building architectural improvements into Jabbakam. Evenings and weekends, I’ve been hacking about on the Live Unsigned mobile site. All of my recent projects revolve around Node.JS, and I’ve been loving it. Node is incredibly interesting and powerful, and it is giving my brain a workout.

However, JavaScript itself has a few niggly bits that become more problematic the larger the project gets. Issues with the structure and maintainability of code motivate projects such as Joose and Backbone.js. One of the things that has been bugging me is having to hack in class-based inheritance. After a lot of experimentation, I settled on prototypal inheritance, which does the job to an extent, but left me dissatisfied.
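To give a flavour of the boilerplate I mean, here is a minimal sketch of hand-rolling “class-based” inheritance on top of prototypes in plain JavaScript (illustrative names only, not code from any of my projects):

```javascript
// The parent "class": a constructor function plus prototype methods.
function Animal(name) {
   this.name = name;
}
Animal.prototype.speak = function() {
   return this.name + " makes a noise";
};

// The child "class": call the parent constructor, then wire up
// the prototype chain by hand.
function Dog(name) {
   Animal.call(this, name);
}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function() {
   return this.name + " barks";
};
```

It works, but every subclass needs the same few lines of wiring, which is exactly the sort of ceremony that starts to grate on a large project.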

So, the other day I finally broke down and tried CoffeeScript.

When CoffeeScript first came out I resolved never to bother with it; it was clearly for Ruby people who didn’t quite understand JavaScript. Since then it’s been getting a lot of press, and it now even has its own book!

Anyway, I was up the other night, waiting for my daughter to wake up in screaming pain as razor-sharp pieces of enamel slowly bored through her gums. With nothing better to do, I decided to play around with CoffeeScript.

I was up and running in no time, installing the Node.JS Module and browsing the Little Book of CoffeeScript. After about an hour I had effortlessly written some additions to Live Unsigned which integrated seamlessly. To be honest, I feel like a bit of a muppet for not trying it sooner.

After just a couple of days playing around with it, CoffeeScript has now become a major part of my toolkit. The syntax is elegant, nicely lending itself to clean coding, and code can be reused easily, improving project structure. CoffeeScript compiles to pretty clean Javascript, and from what I can make out, the better you understand Javascript, the better CoffeeScript you can write.

So yes, I am now coding in CoffeeScript.

I am now a hipster.

There is nothing left for me but to rush out, buy some skinny jeans and a Macbook Air and learn Vim.

Oh, God.

NodeJS: Experiments with Middle End Part 3

Note: This is the last in a series of posts describing my experiments with constructing a Middle End. In Part 1, we discussed the concept of the middle end and its advantages. In Part 2, we constructed a very simple example.

In this post, we’re going to make that example slightly more complex by introducing dependencies within the modules. Now the code must be able to handle these dependencies for both the client side and the server side.

In the last post, we built a simple module consisting of a single function that returned a greeting:

var Greetings = function() { }
Greetings.prototype.hello = function(who) {
   return "Hello "+who;
}
module.exports = Greetings;

At this point, it would be very useful to sanitize and validate the parameter passed to the “hello” function. We could obviously build this functionality directly into the code, but this is exactly the sort of thing that should be abstracted into its own module(s).

For the purpose of this example, we’ll simply add the ability to give the passed parameter a “trim”; after some hacking and snippet-borrowing from node-validator by Chris O’Hara, we can build another simple module called Tidy:

var Tidy = {
   whitespace: '\\r\\n\\t\\s',

   trim: function(str) {
      // strip leading and trailing whitespace characters
      str = str.replace(new RegExp('^['+this.whitespace+']+|['+this.whitespace+']+$', 'g'), '');
      return str;
   }
}
module.exports = Tidy;

We can now improve our Greetings module, including the dependency and functionality:

var Tidy = require('./tidy');

var Greetings = function() { }
Greetings.prototype.hello = function(who) {
   who = Tidy.trim(who);
   return "Hello "+who;
}
module.exports = Greetings;

To see if this will work as expected, let’s add some spaces to the parameter passed in the NodeJS script:

var Greetings = require('./greetings');
var greetings = new Greetings();
console.log(greetings.hello('  Node   ')+"!");

When we run it, this is what we see:

$ node runner.js
Hello Node!

Good, so this works on the server side, but let’s test its behaviour on the client side. Again, we should add some spaces to the parameter passed to ensure that it is working:

<script type="text/javascript">
   var module = {
      exports: undefined
   };
</script>
<script src="greetings.js" type="text/javascript"> </script>
<script type="text/javascript">
   var greetings = new Greetings();
   alert(greetings.hello('  Node   ')+"!");
</script>

Unfortunately, we now see the following error:

require is not defined
[Break On This Error] var greetings = new Greetings();

Our attempt to “require” the dependency in the Greetings module has broken the client side. To fix this, we need to define our own require() function, much like we had to for the module object. We do this by creating a new front-end-only script, which we’ll call “middle-end.js”.

var require = function(module) {
   document.write('<script src="'+module+'.js" type="text/javascript"> </script>');
}
 
var module = {
   exports: undefined
};

(Obviously we should include much more error checking, but this will be enough for our example).

We modify our html as follows:

<script src="middle-end.js" type="text/javascript"> </script>
....

Now when we reload the page in the browser, all should work as expected. Huzzah!

This is just a very simple example of what can be done when JavaScript code is shared between the client and server sides—the “Middle End”. This concept becomes really exciting when you start thinking about potential approaches to form validation, session management, and even display logic. This is definitely something worth playing around with.

All of the code is available as a Gist.

NodeJS: Experiments with Middle End Part 2

Note: This is the second in a series of posts describing my experiments with constructing a Middle End. In Part 1, we discussed the concept of the middle end and its advantages.

In my previous post, I talked about Kyle Simpson’s concept of a Middle End—a layer between the front end and back end. Today, I’m going to sketch out my very simple experiments with constructing such a layer.

What I am focusing on here is simply a means of sharing JavaScript code between the client side and the server side.

Let’s start with a very simple NodeJS Module, greetings.js:

var Greetings = function() { }
Greetings.prototype.hello = function(who) {
   return "Hello "+who;
}
module.exports = Greetings;

Next, let’s create a very simple NodeJS script that imports this module, and uses it:

var Greetings = require('./greetings');
var greetings = new Greetings();
console.log(greetings.hello('Node')+"!");

When we run it, this is what we get:

$ node test.js
Hello Node!

All very simple stuff.

Now, suppose we want to use the Greetings module in our front end web app. 

<script src="greetings.js" type="text/javascript"> </script>

Well, it runs and we get the desired output. However, Firebug reports an error within the greetings.js file:

module is not defined
[Break On This Error] module.exports = Greetings;

The solution to this issue is to “mock up” the module object so the error isn’t thrown:

var module = {
   exports: undefined
};

Run it again, and now we see no errors.

Great, so it’s working. We can share simple modules between the client and the server sides.
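As an aside, a variation worth knowing about (not the approach used in these posts) is to put the guard inside the module itself, so the browser doesn’t need a mocked-up module object at all:

```javascript
var Greetings = function() { }
Greetings.prototype.hello = function(who) {
   return "Hello "+who;
}

// Only export when a CommonJS environment (such as Node) is present;
// in the browser "module" is undefined and this block is skipped.
if (typeof module !== 'undefined' && module.exports !== undefined) {
   module.exports = Greetings;
}
```

The downside is that every shared module has to carry the guard, whereas the mock keeps the modules themselves clean.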

But what if we want to have a more complex module? One that has some dependencies?

That will be discussed in the next post.

All of the code is available as a Gist.

Arch Linux VirtualBox Update

Gosh! I just found this in my list of draft posts (originally written in early 2010). I thought I might as well post it, as it could be useful to someone out there.

Again, this is one of those posts written more for me than anyone else, but hopefully other people might find it useful.

Recently I updated my Arch Linux box

# pacman -Syu

and during this update a new kernel was installed, which meant that my VirtualBox installation no longer worked, chucking out the following message:

The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Re-setup the kernel module by executing

‘/etc/init.d/vboxdrv setup’

NodeJS: Experiments with Middle End Part 1

Back in October last year (2010), a bunch of us fled Berlin and made for Warsaw. Not just for the vodka, you understand, but more specifically for the Front-Trends conference.

One of the talks, by Kyle Simpson, discussed the concept of the Middle End. This really made an impression on me, and I think I probably bored Kyle in the bar afterwards with my drunken enthusing. When we arrived back home, my aim was to start playing around with the idea of a Middle End as soon as possible. Unfortunately, as is life, events took over (such as discovering one is going to become a dad!) and these thoughts were consigned to the back of my mind.

Until now.


Note to self: Node Deployment

This is really a note to myself; since Delicious has “jumped the shark”, I don’t really know where else to put it.

I’ve been researching automated deployment of NodeJS applications, and found this excellent blog post to help me out:

Deploying Node.JS Apps, by Ben Gourley.

Plus, this question on Stack Overflow about nginx proxying should also help.

That’s it. Now I just need some time to play around with this stuff!

Also, if anyone knows of a really simple, easy-to-organise bookmarking site, please let me know!