So, setting up a basic development environment for Express is not that hard. In fact, you can really just run it from your local system if you want. However, if you need to collaborate, or if you want to do your development on a remote machine, it takes a couple more steps to get a convenient working environment set up.
The following post will explain how I set mine up and introduce a couple of the tools I used to make my life easier.
So, Git might be the single best thing to happen to software development in the last few years. It is miles ahead of every other form of version control out there and has even inspired the creation of amazing websites like GitHub.
I’ve always thought it would be great if my code would automatically deploy itself after I pushed it to the server, rather than me having to SSH into the dev box and pull it after every update. Since Git is a distributed version control system, one would think this is just a matter of pushing to a copy of the repository on your development box; however, the kind of repo that accepts push commands is called a “bare” repository, and it does not have a copy of the code in a usable form. Fortunately, Git provides access to things called “hooks,” which are scripts that execute after certain events, like receiving a push update.
Here are the steps I took to get this set up.
First, clone a bare copy of your repository onto your development box:
git clone --bare user@url:repo.git
Then, create a normal clone of that repository:
git clone repo.git repo/
Finally, under repo.git/, there should be a folder called “hooks.” If you create a file in there called “post-receive,” it will get executed after any push updates are received. (Note: you may need to chmod +x it.)
The script I use simply checks the newly pushed code out into the working clone.
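A minimal post-receive hook along those lines might look like this (the work-tree path is an assumption; point it at wherever your non-bare clone lives):

```shell
#!/bin/sh
# post-receive hook: check the pushed commits out into the working clone.
# GIT_WORK_TREE redirects the checkout away from the bare repo;
# /home/user/repo is a placeholder path -- use your own clone's location.
GIT_WORK_TREE=/home/user/repo git checkout -f
```

The -f flag forces the checkout so local modifications in the working clone never block a deploy.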
One other annoyance: node does not pick up code changes until you restart the server process. Fortunately, somebody else thought of this problem already and designed a great tool called supervisor to make our lives easier.
sudo npm install supervisor -g
Now, just run your app like normal, except instead of using node app.js, use supervisor app.js. Supervisor will watch your files and automatically restart the node process whenever they change.
And that’s how I set up my development environment. All in all, one of the easiest and most enjoyable dev environments I’ve ever worked in, and definitely a setup I’ll use in future projects.
So, you’ve written your Express.js app and you’re finally ready to push it to production. Should be easy, right? Just copy the files, turn on the web server, and bam, ready to go. Oh wait… this isn’t like other web apps you’ve written before… the app itself is actually the server. Alright, no big deal, just run it like always! But it starts on port 3000 in “development” mode… that can’t be good.
These are just a couple of the things that I ran into while deploying my site, and unfortunately the existing instructions on how to get things running in production are really not that great.
This post will describe how I went about getting things up and running in a (relatively) stable manner.
Step 1: Environment Variables
Express uses two environment variables to dictate whether something is in production or not: NODE_ENV and PORT. PORT should be fairly obvious in that it controls the port the web server will bind to. You will most likely want to set this to 80 unless you’re using another service/program to proxy traffic through to your app. NODE_ENV specifies which environment the application will run in. It defaults to “development,” which is why your program always starts in development mode. You will probably want to set this to “production” (the value is case-sensitive, so keep it lowercase), but Express allows you to define any number of different, custom runtime configurations via the app.configure('development', …) lines you have probably seen in app.js before.
In order to set up a different environment, you simply copy & paste and switch out ‘development’ for the environment of your choice. My production configuration looks like this:
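For reference, a production block in that style might look something like the following sketch (Express 3.x-era API; the exact settings are assumptions, so adjust them to your own app). Express only runs the callback when NODE_ENV matches the first argument:

```javascript
// Runs only when NODE_ENV=production; the settings below are examples,
// not a definitive list -- put your own production tweaks here.
app.configure('production', function () {
  app.set('port', process.env.PORT || 80);
  app.use(express.errorHandler());
});
```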
Telling the server to run in “production” also does some things behind the scenes, like caching templates, so I would not recommend turning it on during development.
Now, you set these variables at the command line like this:
export NODE_ENV=production
export PORT=80
I’d recommend creating a quick shell script to automatically set these before starting your app; it’ll make your life easier. The script I created is below:
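A sketch of such a launch script, built from the notes that follow (the filename and port value are assumptions):

```shell
#!/bin/sh
# start.sh -- set the production environment and launch the app.
export NODE_ENV=production
export PORT=80
# -E keeps the exported variables when escalating to root, and forever
# restarts the app automatically if the node process ever crashes.
sudo -E forever start app.js
```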
Things to note about this script:
Sudo? – yes, Ubuntu will not allow processes run by normal users to bind to ports below 1024, so you have to run this as root.
-E flag? – this tells sudo not to clear the environment variables before executing the app. I think this might be a security risk, but imo it is pretty minimal.
Forever? – this is a lovely little node.js app that will make sure that your app keeps running. If for whatever reason the node process running your application crashes, forever will restart it. Definitely a must-have for any site using Express.js. It can be installed by running sudo npm install forever -g (note: it’s important to install it globally with the -g flag)
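For completeness, these are the standard forever subcommands I end up using most:

```shell
sudo npm install forever -g   # install forever globally
forever start app.js          # launch the app under forever
forever list                  # show the processes forever is managing
forever stop app.js           # stop the app again
```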
And that’s it, right? We’re done! ---- Not quite. Check the next section for one last step before you’re good to go.
Step 2: Set Up a New Session Provider
So, up until now, you’ve been using the default, memory-based session store that ships with Express. This is great for development, but not so great for production. If you try to run an app in production mode with this still enabled, you’ll get a nasty warning about how it leaks memory, and it would be a really, really good idea to switch it out for a real provider. Fortunately, this is fairly easy to do.
I chose redis for my session store, so I’ll walk you through how to get redis setup with your express app on Ubuntu 12.04.
First, you need to install redis. You can use the following command:
sudo apt-get install redis-server
You can check that it came up with redis-cli ping (it should answer PONG).
Next, you might want to convert redis to using upstart instead of init.d, the steps to do that are outlined very well in this post: https://gist.github.com/1315952
Finally, you’ll need the redis session store module, connect-redis, which can be installed with the following command:
npm install connect-redis
Now that we have redis and all of the node modules installed, we need to actually tell our program to use them.
In app.js, you can import the redis store using a line like this (the connect-redis pattern for Express 3.x):
var RedisStore = require('connect-redis')(express);
Then, you’ll want to change the app.configure line that you had been using for session support before to look like this:
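Put together, the session setup looks roughly like this (an Express 3.x-era connect-redis sketch; the secret and store options are placeholders, not values from my actual app):

```javascript
var express = require('express');
var RedisStore = require('connect-redis')(express);

app.configure(function () {
  app.use(express.cookieParser());
  app.use(express.session({
    secret: 'your secret here',   // placeholder: pick your own secret
    store: new RedisStore()       // defaults to redis on localhost:6379
  }));
});
```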
… and that’s it! (I think… it’s been a few weeks… if it isn’t working for you, shoot me an email: ebensing @ this domain)
I am a huge fan of Amazon’s AWS platform. I can have my own small Linux VM running, never have to worry about hardware, and be reasonably sure it’ll be up, all for less than $20 a month. Pretty sweet deal. It has a nice web interface for creating new VMs, restarting, etc. All in all, something I’m very happy to pay for, and it beats the hell out of paying for hosting from a traditional provider. But it does have its problems, and as I found out, they can be a bitch to get around.
So, last night I decided that I’d upgrade my version of node.js real quick just to keep everything recent. Big mistake. At the rate that Node.js and some of the modules I use (looking at you, Express) are changing, upgrading isn’t exactly a friendly task, and backwards compatibility is not really a priority.
So, at some point along this whole process, I was editing some permissions on files. Specifically, I was going to change the owner of /usr/local/ to my account, so that I didn’t have to mess with sudo npm … when using npm. Alas, you should really not be doing stuff like that late at night, when you can accidentally type /usr/ instead of /usr/local… Long story short, my sudo binary, as well as everything under my /usr/ directory, was now owned by my local account, and to change it back I needed to use sudo… which I couldn’t do, because sudo won’t run if it isn’t owned by root… sigh.
So, how the hell do you fix that?
Well, on a normal, physical machine, it really isn’t that bad. Boot up into recovery mode, get root, change the permissions back and then you’re off to the races. However, if you’re on AWS… you can’t boot into recovery mode and things get a bit trickier. Below are the steps I had to take to get everything working again, hopefully I can save somebody else some time if they ever end up similarly screwed.
Step 1: Shutdown your VM and startup a new one
So, first, you should shut down your VM. Take note of which availability zone you are in, because the new VM you start up must be in this zone. Additionally, when starting up a new VM, I’d recommend going with the default Amazon AMI and not another Ubuntu server one. For some reason, attaching a root Ubuntu EBS volume as a secondary EBS volume on another Ubuntu VM causes problems.
Next, you are going to want to go to your EBS management page and detach your main EBS volume for your original server. You should then be able to attach it to the new VM you just created. NOTE: both VMs must be in the same availability zone for you to do this. If they are not, AWS will not let you attach the old EBS volume to your new VM.
Also, take note of the device name for the old EBS volume. It’ll probably be something like /dev/sdf or similar.
Step 2: SSH into the new VM and mount the old EBS volume
Once you are SSH’d into your new VM, simply run sudo mkdir /mnt/old and then sudo mount /dev/xvdf /mnt/old (substituting the device name you noted earlier; newer kernels may expose /dev/sdf as /dev/xvdf).
You will now be able to access the file system of your old machine at /mnt/old, and since this new VM has a working sudo, you can change the owner back or do whatever else you need to do to your old file system to get things working.
Once you’re done fixing file permissions, simply power down the new VM and re-attach your old EBS volume to your old VM. If you did what you needed to, you should be able to sudo again after you’ve booted up.
Since my primary OS is Windows (gasp, please don’t call the police), I often use an Ubuntu VM for development work. VirtualBox is a great, free, and open source piece of software that allows you to easily create and run a VM on your machine.
Most of the time, though, I just use an Ubuntu Server VM, not the desktop version. While VirtualBox does provide terminal access to the VM, I prefer connecting with PuTTY because it offers nice features like resizable windows and highlight-to-copy, not to mention it can be nice to have multiple SSH windows open.
The default settings for a VirtualBox VM will not allow you to connect to your VM via SSH though. This post will walk you through a couple methods to help get you up and running.
Method 1: Using a Bridged Adapter
Generally, the easiest method to get things working is to simply change the adapter type to “Bridged Adapter” on your VM settings. To do this, open up VirtualBox and right-click on your VM, and select “settings.” Next, open the “Network” tab and change the “Attached to” drop down list to “Bridged Adapter.”
The next time you boot up your VM, it will act just like another computer on your network, with its own IP and everything (meaning you can also SSH to it from other computers on your network). In my opinion, this is by far the best and most convenient way to get things working; however, there are times when it is not an option, and we’ll go over ways to handle those below.
Method 2: Port Forwarding
As I recently learned at my summer internship, there are times when you cannot use bridged mode due to network policies. The company I’m working at has particularly tight information security policies and will not allow any devices other than company-owned devices onto their network. So, basically, my VM looks like a user-owned device, and therefore could not use bridged mode.
The trick we can use here though is port forwarding. VirtualBox provides tools that will allow you to forward a port from your host machine to your VM. In this example, we will forward port 2222 on the host machine to port 22 on the VM, but this technique is generally applicable and can be used to forward any combination of ports.
First, open up VirtualBox again and go back to Settings > Network. Make sure the “Attached to” option is set to “NAT” and then expand the “Advanced” options. You should now see a clickable button titled “Port Forwarding.” If you can’t select it, double-check that your “Attached to” option is set to “NAT.”
This new dialog will allow you to add and remove forwarding rules for this VM by clicking the buttons on the right side. Don’t worry about the guest/host IP addresses; just put the port you want forwarded (in our example, 2222) in the “Host Port” box, and the port you want it forwarded to (i.e., 22 for SSH) in the “Guest Port” box.
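If you prefer the command line, the same rule can be added with VBoxManage while the VM is powered off (the VM name “ubuntu-dev” here is an assumption; substitute your own):

```shell
# add a NAT port-forwarding rule: host port 2222 -> guest port 22
VBoxManage modifyvm "ubuntu-dev" --natpf1 "guestssh,tcp,,2222,,22"
```

Once the VM is up, connect with ssh -p 2222 user@localhost, or point PuTTY at localhost port 2222.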
And boom! That’s it. You should be good to go now. All traffic directed at your host machine (localhost) on the port(s) you’ve entered will be forwarded to your VM.
So, I built this site using Express.js and have to say, it is a pretty cool framework.
I also really enjoyed the asynchronous nature of node.js. For those unfamiliar with node, it strongly suggests that you write asynchronous code. This allows for all sorts of optimizations and increased efficiency, but definitely takes some practice to get used to this style. Coming from a heavily imperative background, it was fun learning a new way to think about problems. And certainly, anybody who does work with distributed systems should consider node.js because of the asynchrony that is built in at a low level.
Express sticks with the tried and true MVC convention for web application design. Having worked with other MVC frameworks though, I think it has one of the cleanest implementations. Furthermore, the ability to chain functions together for each route and middleware is really powerful.
While Express supports something like 14 different template engines, the default is called Jade. Initially, I was a little skeptical about having to learn/use another templating language, but after working with Jade on this project, I’m glad I did. It supports just about everything I’d ever want my templates to do and has a very succinct syntax. With Python, I generally find myself annoyed at the significance of indentation and the lack of curly brackets, but I found that this aspect actually fits quite well with an HTML templating language. Also, I really enjoyed the CSS-style selector syntax that it incorporates.
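A tiny hypothetical Jade template shows both features at once: indentation defines nesting, and CSS-style selectors like #content.main expand to elements with ids and classes (title and posts are assumed template locals):

```jade
//- #content.main renders as <div id="content" class="main">
html
  body
    #content.main
      h1= title
      ul
        each post in posts
          li= post.name
```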
Honestly, I had heard a lot about Mongo before this project, but really didn’t understand exactly what it was or how it worked. Most of what I had heard was anecdotal or “it’s web scale.” (http://www.youtube.com/watch?v=b2F-DItXtZs watch this if you haven’t- you won’t regret it)
I think express.js is one of the slickest web frameworks out right now. Everything fits together very well, and I will definitely be using it on anything new I develop for the foreseeable future.
A word of caution: Express.js is still a young framework, and is changing a lot. I definitely ran into a few issues where the official documentation was out of date, but could generally find something online to work through it.