How to provision a local VM and a remote Amazon EC2 instance with the same Chef and Vagrant setup

In the previous article we learned how to create a local virtual machine for development and a similar live server on Amazon EC2 with Vagrant.

That helped us setup the servers and get going with Vagrant but we didn’t install anything on them. So let’s do that now!

First a recap of the tools we are using:

Vagrant – The glue that holds the whole process together. Vagrant co-ordinates virtualisation providers to manage virtual servers, and provisioners that then load apps and tweak settings on those servers.

Virtualbox – The virtualisation provider for local virtual machines. Also the default for Vagrant, but other providers can be used.

Vagrant EC2 plugin – The link to the virtualisation provider for servers on the Amazon EC2 platform.

Chef – The tool to add applications, modules and config files to the server that is controlled by Vagrant. The provisioner.

The good thing about this toolset is they all abstract their work domain well. Vagrant can work with different virtualisation providers, such as Virtualbox or VMware. It can use different provisioners, such as Chef or Puppet. Whatever the combination, you still use the same Vagrant commands to work: vagrant up, vagrant destroy, vagrant provision, vagrant ssh.

Chef abstracts the provisioning process so the same Chef configuration can be used for whatever type of server you wish to cook up (sorry!). In theory this is true, but in practice it may need a bit of OS specific config here and there. To be fair this stuff is HARD, so sometimes you have to be aware that you are running a particular strain of a certain OS. For example, in my last setup I had to ensure apt-get update was called on Ubuntu before a certain application was installed. But I could do that with Chef too, so it still keeps the philosophy of the toolset.

And the philosophy of the toolset? To be able to produce a portable and reproducible development environment.

And this is what I want to do. To be able to produce a local development server and then reproduce this server on EC2. In the previous article we created a Vagrant managed server both locally and on EC2. So here we now need to feed these servers some food – in the form of apps and config.

Our shopping list of tasks will be:

  • Install Chef
  • Setup the local chef directory structure
  • Create a git repo for the whole project (the chef tools manage cookbooks via git)
  • Add some cookbooks
  • Instruct vagrant to provision the VM with Chef to install MySQL and PHP
  • Create a custom cookbook to set the root MySQL password, create a user and database and populate from a dump file.
  • Repeat on our remote EC2 server to provision with the same setup as the development machine


Installing Chef

The first step is to follow the install instructions here:

The aim is to just install chef on your machine and go no further into configuration. We will be using chef-solo, which means all configuration will be kept and managed locally. That’s fine for this project – we can keep our config close. The other flavours of Chef are suited to managing multiple clusters of servers, which sounds like an adventure for another day.

To keep things simple we won’t even have to call chef-solo commands ourselves – Vagrant will do that. The one Chef tool we will have to use is called ‘knife’, which is used to carve up config files – or cookbooks, to use the Chef terminology.

Installing cookbooks

Before we start let’s recap the file structure of our project so far:
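Based on the paths given later in the article, the layout so far looks like this:

```
mydomain/
├── vagrant_local/
│   └── Vagrantfile
└── vagrant_ec2/
    └── Vagrantfile
```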

Let’s start by asking Chef to install PHP and MySQL for us on our VM. To do this we use knife to install the cookbooks for PHP and MySQL. We will then instruct Vagrant to tell Chef to run recipes from those cookbooks.

One thing to be aware of with using knife (hold the blunt end?) is it requires a git repo to work with. But we were going to put our server config into a repo anyway, so let’s do it now.
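From the project root (directory names as used elsewhere in the article), something like:

```shell
cd mydomain
git init
git add .
git commit -m "Initial server config"
```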

Now we can start using knives:
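Assuming knife is configured to use a chef directory in our repo as its cookbook store, the installs look something like this (cookbook names match the ones discussed below):

```shell
cd chef
knife cookbook site install apt
knife cookbook site install mysql
knife cookbook site install php
```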

Ideally we would only be installing the php and mysql cookbooks, but we need a few extras to smooth things over. After all, this stuff is tricky to do across such a wide range of platforms. The apt cookbook will ensure our Ubuntu server is up to date when we start installing, while the iis and yum-epel cookbooks keep the other cookbooks happy.

During the install your screen should show you knife doing lots of stuff. If you look in your cookbook directory you will see the downloaded cookbooks:

Cookbooks are packages that can have dependencies on other cookbooks. knife is clever enough to deal with these dependencies and load them for us, which accounts for the extras here (beyond the ones we specified ourselves).

Getting Vagrant to read cookbooks

Now we can edit the Vagrantfile for our VM. Refer to the final copy of the file at the bottom of the article to see where to fit things in. Here we tell vagrant to use chef for provisioning and which recipes to run. We also need to tell vagrant where the chef cookbooks are:
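A sketch of the provisioning section, assuming the chef directory sits alongside vagrant_local as in the layout above (recipe names are those used by the apt, php and mysql cookbooks of the time – check your cookbook versions):

```ruby
# Inside the Vagrant.configure block
config.vm.provision "chef_solo" do |chef|
  # Where the cookbooks live, relative to this Vagrantfile
  chef.cookbooks_path = "../chef/cookbooks"
  # The recipes to run, in order
  chef.add_recipe "apt"
  chef.add_recipe "php"
  chef.add_recipe "mysql::server"
end
```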

Earlier chef was installed on your host machine so cookbooks could be downloaded, but we also need chef to be installed on the virtual machine too. The chef client on the target machine is sometimes included in base boxes, so may already be there, but that is not guaranteed. Luckily there is a vagrant plugin that will ensure chef is installed on the target machine, and if not install it for us. To install the plugin run in your shell:
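One such plugin is vagrant-omnibus, which I’ll assume is the one meant here:

```shell
vagrant plugin install vagrant-omnibus
```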

And then update your Vagrantfile to use the plugin:
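With vagrant-omnibus the change is a single line:

```ruby
# Inside the Vagrant.configure block – installs Chef on the guest if it's missing
config.omnibus.chef_version = :latest
```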

Provision the local VM

Now from the vagrant_local directory tell vagrant to provision the server.
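That is:

```shell
vagrant provision
```

If the VM is not yet running, `vagrant up` will boot it and provision in one go.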

Again chef will fill your screen in green with its activity. Once completed you can login and verify it has installed MySQL:
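For example:

```shell
vagrant ssh
mysql --version
```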

Result! Chef has cooked up this stack for us.

Passing parameters to cookbook recipes

What we’ve done so far is run off the shelf cookbooks that install standard packages.

We haven’t yet told Chef anything specific about our particular install. Cookbooks often contain multiple recipes, so you can customise an install by selecting the appropriate ones. For example, if we only needed the MySQL client we would have left out the MySQL server recipe. The other way to customise Chef’s actions is to pass in cookbook parameters. There’s often a wide range of cookbook parameters, which you can find detailed in the cookbook docs. Let’s start by specifying the root password for MySQL (from a security point of view this is not a production solution, just a demo). We can do this by passing the value to the mysql cookbook in our Vagrantfile:
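A sketch – `server_root_password` is the attribute older versions of the mysql cookbook read; check your cookbook’s docs for the exact name:

```ruby
# Inside the chef_solo provision block
chef.json = {
  "mysql" => {
    "server_root_password" => "myrootpassword"   # demo value – not for production!
  }
}
```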

And ask vagrant to shake this change through to the VM
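That is another:

```shell
vagrant provision
```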

This command runs Chef on an already running VM. A core principle of Chef is that its operation is idempotent – running it multiple times will result in the same outcome. In this case the root password gets updated but everything else stays the same. This is great for developing our setup: we can make small steps and test each time.

Creating custom cookbooks

So next, something more adventurous. We will setup a database and user, and then import data into the database from a dump file so our app has an initial state. To my knowledge this isn’t possible with the default cookbook, so let’s create our own cookbook to do it.

To create a new cookbook we again use knife but first we must create a separate directory to store our custom cookbooks. This must be done as some Chef commands that manage cookbooks can delete cookbooks that have not been downloaded. It also helps organise your cookbooks clearly. So from the chef directory run
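That is simply:

```shell
mkdir site_cookbooks
```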

Then we must tell Chef about the new cookbook directory by editing the Vagrantfile to describe cookbook locations relative to the Vagrantfile:
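Inside the chef_solo block the path becomes an array:

```ruby
chef.cookbooks_path = ["../chef/cookbooks", "../chef/site_cookbooks"]
```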

Now instruct knife to create an empty cookbook for us (run from the chef directory)
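With the knife of that era the command is (-o sets the output path):

```shell
knife cookbook create dbsetup -o site_cookbooks
```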

If you look inside the site_cookbooks directory you will see a dbsetup cookbook that is far from empty. Fortunately we don’t need to worry about most of this structure for the moment, we just need to edit the default recipe (site_cookbooks/dbsetup/recipes/default.rb):
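A minimal sketch of the recipe. The `node['dbsetup']` attribute names are our own invention and must match what we set in the Vagrantfile; it also assumes the mysql recipes have already installed the client and server:

```ruby
# site_cookbooks/dbsetup/recipes/default.rb
root_pw = node['mysql']['server_root_password']
db      = node['dbsetup']['database']
user    = node['dbsetup']['username']
pw      = node['dbsetup']['password']

execute "create-database" do
  command "mysql -uroot -p#{root_pw} -e \"CREATE DATABASE IF NOT EXISTS #{db}\""
end

execute "create-user" do
  command "mysql -uroot -p#{root_pw} -e \"GRANT ALL ON #{db}.* TO '#{user}'@'localhost' IDENTIFIED BY '#{pw}'\""
end
```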

This will instruct chef to run mysql commands to create a database and then a database user. This operation requires root permissions but we can fetch that here from the config we defined earlier. Note the database name, username and password are also pulled from the config. So best define that back in the Vagrantfile:
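Extending the chef.json from the earlier step with attribute names of our own choosing, which the dbsetup recipe reads:

```ruby
# Inside the chef_solo provision block
chef.json = {
  "mysql"   => { "server_root_password" => "myrootpassword" },
  "dbsetup" => {
    "database" => "mydatabase",
    "username" => "mydbuser",
    "password" => "mydbpassword"
  }
}
```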

Also tell Chef to use the new cookbook:
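Again inside the chef_solo block:

```ruby
chef.add_recipe "dbsetup"
```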

And kick it off again with (from vagrant directory)

This time you might see some unpleasant red text in the Chef output:

As we created the new cookbook while the VM was running, the directory could not be mounted. No problem – we can switch it off and on again to fix that:
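Which is a reload, re-running the provisioner as it comes back up:

```shell
vagrant reload --provision
```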

Excellent! We can login with our new user and that user can see the new database.

Using Chef to restore a database from a dump file

If only we could fill that database with data from a database dump, so our VM has data to work with out of the box. Again Chef makes that pretty simple. First we need to generate the dump file. As we ‘backup like a boss‘ around here, use this command on whichever server contains the populated database (substituting your db credentials):
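Database and user names here are the demo values from earlier:

```shell
mysqldump -u mydbuser -p mydatabase | gzip > mydatabase.sql.gz
```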

Copy this file to site_cookbooks/dbsetup/files

Now add lines in the recipe to copy this file to the VM and restore the db (site_cookbooks/dbsetup/recipes/default.rb)
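A sketch of the extra resources (file names follow the earlier demo values):

```ruby
# Append to site_cookbooks/dbsetup/recipes/default.rb
cookbook_file "/tmp/mydatabase.sql.gz" do
  source "mydatabase.sql.gz"   # picked up from the cookbook's files/ directory
end

execute "restore-database" do
  command "gunzip < /tmp/mydatabase.sql.gz | mysql -uroot -p#{node['mysql']['server_root_password']} #{node['dbsetup']['database']}"
end
```

A real setup would guard the restore with a not_if so it doesn’t rerun on every provision.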

And again get chef to work

Now inspect the database to check your data is there.

I really like this setup. With our custom cookbook added to version control we have a setup that can, from nothing, create a VM, install core applications and also populate and configure our database. These methods can be used to setup Apache, PHP or whatever stack you require. This setup is also going to pay off for our EC2 server that we setup in the previous article. As we have done all the hard work creating the cookbook, we only need to update the EC2 Vagrantfile with the cookbooks to run and the config. What’s nice here is we can use the config to set different parameters for the different environments when required.

Here’s the completed Vagrantfile for the local VM (mydomain/vagrant_local/Vagrantfile)
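Pulling together the settings described above, it might look like this (box name and IP address are assumptions typical of the time):

```ruby
# mydomain/vagrant_local/Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", ip: "192.168.33.10"

  # Requires the vagrant-omnibus plugin
  config.omnibus.chef_version = :latest

  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = ["../chef/cookbooks", "../chef/site_cookbooks"]
    chef.add_recipe "apt"
    chef.add_recipe "php"
    chef.add_recipe "mysql::server"
    chef.add_recipe "dbsetup"
    chef.json = {
      "mysql"   => { "server_root_password" => "myrootpassword" },
      "dbsetup" => {
        "database" => "mydatabase",
        "username" => "mydbuser",
        "password" => "mydbpassword"
      }
    }
  end
end
```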

And here’s the complete Vagrantfile for the remote EC2 server (mydomain/vagrant_ec2/Vagrantfile)

So there we have the basics of a project that can create a local VM and similar instance on EC2; provision both with applications via the same Chef setup and deploy databases. Now it’s a matter of building on this structure to fill in the gaps and add the rest of the stack. For a start install a webserver and configure the virtual hosts files using Chef templates (maybe a future article). Also for production secure and then prepare methods to deploy codebases and databases. Happy dev-oping!

How to setup an EC2 instance and similar local development virtual machine with Virtualbox, Vagrant and Chef

I’ve finally done it and taken the plunge into the world of devops. I’ve been meaning to automate the build-out of my live server for a while, but recent changes to the EC2 pricing structure have given me extra motivation. Financial motivation! What I wanted to achieve was:

  • Automate creating, starting and stopping an Amazon EC2 instance using Vagrant
  • Automate creating a similar local virtual machine using Vagrant
  • Provision both with Chef to install packages such as Apache, MySQL, etc
  • Deploy base codebases and databases for all my sites

The holy grail for me would be to run one command and bang! – an EC2 instance with all my sites would appear. Then run another command and boom! – a local virtual machine would appear with all the sites running locally for development. And of course all the deployment and setup would be shared so there would be no duplication.

There were many problems found along the way pursuing this dream but in the end it turns out Virtualbox, Vagrant and Chef can deliver the goods. And deploy the goods. And provision the goods!

The benefits for this process are plenty:

  • Recreate the environment quickly and easily, live or locally.
  • Test changes to the environment locally, then deploy live.
  • Migration to another OS is simple. Where possible the tools are platform agnostic, and where this is not possible platform specific workarounds can be implemented.
  • This toolset is widely accepted, so it would be simple to migrate to another hosting platform.
  • All config is kept in one place under version control. It’s entirely possible to work on all the config files, like virtual host files and db config, in your favourite IDE locally and deploy changes via Chef, so you don’t need to fiddle around with vi inside an SSH session. (Although I do like that type of thing!)

Creating a local development virtual machine with Vagrant

So to get started we need Vagrant and also Virtualbox if you don’t already have it.

With these in place we can start to create the configuration for our local virtual machine
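Directory names as used later in the article:

```shell
mkdir -p mydomain/vagrant_local
cd mydomain/vagrant_local
vagrant init
```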

vagrant init creates a file called Vagrantfile that defines everything Vagrant needs to create the VM and later the details for Chef to know how to provision your new server. The file is heavily commented with sensible defaults to help us when we need to start tweaking.

So first thing we need to consider is the ‘base box’ to use. This is the base flavour of the OS and where our journey starts. Kinda like an install DVD. Or ISO. Normally this is a matter of choosing a base box to match the intended OS, eg CentOS, Debian, Ubuntu. However we want to create a server on Amazon EC2, so we must choose an image that is both available as a Vagrant base box and as an EC2 AMI (the EC2 equivalent of a Vagrant base box)

I was already planning to run Ubuntu, so our next job is to find a menu of base boxes and AMIs.

Luckily there are excellent online resources for Ubuntu. EC2 AMIs are listed here and Vagrant boxes are listed here.

Ubuntu Server 14.04 LTS is the best match here, so let’s configure our Vagrantfile for the local VM. Fire up your favourite editor and amend the Vagrantfile to use this box and to setup networking:
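A sketch – the box name is the official Ubuntu 14.04 box of the time, and the private address is an assumption:

```ruby
# Inside the Vagrant.configure block
config.vm.box = "ubuntu/trusty64"
# A private network address so we can reach the VM from the host
config.vm.network "private_network", ip: "192.168.33.10"
```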

Then to start the vm run this in the same directory as the Vagrantfile
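That is:

```shell
vagrant up
```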

Vagrant will now download the base box and instruct Virtualbox to create and start the VM. Once it’s completed we can login to our new local development server

So that’s our local development server created with a single command. Later we will introduce Chef to install applications like MySQL and Apache. Here’s the full Vagrantfile for the local VM:
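Which at this stage amounts to little more than (comments trimmed):

```ruby
# mydomain/vagrant_local/Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", ip: "192.168.33.10"
end
```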


Creating a remote EC2 server with Vagrant

Next to setup the live EC2 server. For this we need to start with installing a Vagrant plugin to do the talking with EC2.
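Assuming the plugin in question is mitchellh’s vagrant-aws:

```shell
vagrant plugin install vagrant-aws
```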

And then setup the Vagrant environment

And again we have to configure the new Vagrantfile. But before we can, we have to do some work in our AWS Management Console.

We need to:

  • Setup an IAM access key to allow Vagrant to talk to EC2
  • Setup a SSH key pair to use to login to the EC2 instance once it’s created
  • Choose the location to launch the instance
  • Setup a security group for the instance
  • Choose the type of instance
  • Choose the AMI

That sounds like a lot to do! And there’s more, and this was a real gotcha for me. I wanted to take advantage of the new t2.micro pricing. It’s cheaper AND better specced than the t1.micro. No brainer. However it turns out that t2 instances only run under Amazon VPC. I thought this would be the end of the road, with either VPC not working with the vagrant-ec2 plugin or it costing too much. Turns out VPC has no cost and it does work with vagrant-ec2. Phew!

So the final item for the AWS Management Console list is:

  • Setup a VPC network

So off to work. To obtain the IAM access key follow the instructions here:

You will end up with an Access key ID (example: AKIAIOSFODNN7EXAMPLE) and a Secret access key (example: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). These keys identify the vagrant-ec2 plugin with an Amazon IAM user/role/group. In the IAM console you set the permission policy to only let these keys access what is necessary to create, boot and halt an instance. However, as we are keen to get started, we can allow all actions by creating a group assigned to our user with the following policy:
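An allow-everything-on-EC2 policy looks something like this (tighten it later):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}
```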

With this in place enter the key details into your Vagrantfile (the full Vagrantfile is listed at the end of the article, refer to it to see where to insert each snippet)
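The key values shown are Amazon’s documentation examples – substitute your own:

```ruby
config.vm.provider :aws do |aws, override|
  aws.access_key_id     = "AKIAIOSFODNN7EXAMPLE"
  aws.secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
end
```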

It’s probably a good idea next to decide which EC2 region you want your server to live in. We have much to do in the EC2 console so should make sure we are setting things up in the right region. I’ve selected ‘US East’ for my example.

Next task is the ssh keys to access the instance once it’s created. This is not managed in the IAM console but the EC2 console, selecting ‘Key Pairs’ from the navigation menu. Once your keypair is setup, enter the details in the Vagrantfile
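Something like this, inside the provider block (key pair name and path are placeholders):

```ruby
aws.keypair_name = "mykeypair"
override.ssh.username         = "ubuntu"            # see the note below
override.ssh.private_key_path = "~/.ssh/mykeypair.pem"
```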

Amazon Ubuntu AMIs require ssh using the ubuntu user, which we specify with the ‘override.ssh.username’ parameter.

Now we need to setup the VPC, as this needs to be in place for the other items on our todo list. Again in the EC2 console select ‘Network Interfaces’ from the navigation menu and create a VPC network interface. Vagrantfile:
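The subnet ID here is a placeholder – use the one from your console:

```ruby
aws.subnet_id = "subnet-12345678"
```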

Then select ‘Security Groups’ from the navigation menu and create a security group for the VPC. At least add SSH, HTTP and HTTPS inbound rules for a web server. More food for our Vagrantfile. Note you must use the ‘Group ID’ value:
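Again with a placeholder Group ID:

```ruby
aws.security_groups = ["sg-12345678"]
```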

Now the instance type. I already know I want the cheap one:
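That is:

```ruby
aws.instance_type = "t2.micro"
```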

And the AMI. In this example we want 64-bit trusty tahr for us-east-1, to fit the VPC. But which type? Turns out for the t2.micro instance we must have the ‘hvm’ type. The AMI list leads us to ami-9aaa1cf2, which we can enter along with the region:
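In the provider block:

```ruby
aws.region = "us-east-1"
aws.ami    = "ami-9aaa1cf2"
```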

Then create an elastic IP (select the VPC type) for the instance and enter it here:
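The plugin accepts an existing Elastic IP address (the address here is a placeholder from the documentation range):

```ruby
aws.elastic_ip = "203.0.113.10"
```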

Finally we have to set the base box. As said before, Amazon doesn’t use Vagrant base boxes but its own AMIs, yet vagrant still needs a base box to do its stuff. So we specify a dummy box that is built to work with the Vagrant-EC2 plugin:
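From the vagrant-aws README:

```ruby
config.vm.box     = "dummy"
config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
```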

Now we are set. Time to see the vagrant-ec2 plugin work its magic (note the extra provider option telling Vagrant to talk to EC2):
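That is:

```shell
vagrant up --provider=aws
```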

Check in the EC2 console to see the instance spark into life. In my testing the elastic IP didn’t always connect so I needed to connect it by hand, but that’s a small step to put right.

Again once booted we can login

Another win – we are now half way to our objective. A live EC2 server and corresponding local development server, all controlled via Vagrant. In true devops style you can create, start, stop, destroy and recreate repeatedly from the same config. In fact I suggest you next run
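```shell
vagrant destroy
vagrant up --provider=aws
```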

just because you can

As promised the Amazon EC2 Vagrantfile in full:
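Pulling the snippets together – keys, IDs and addresses are all example values, not real ones:

```ruby
# mydomain/vagrant_ec2/Vagrantfile
Vagrant.configure("2") do |config|
  # Dummy box – the real machine definition comes from the AMI below
  config.vm.box     = "dummy"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = "AKIAIOSFODNN7EXAMPLE"
    aws.secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

    aws.keypair_name    = "mykeypair"
    aws.region          = "us-east-1"
    aws.ami             = "ami-9aaa1cf2"
    aws.instance_type   = "t2.micro"
    aws.subnet_id       = "subnet-12345678"
    aws.security_groups = ["sg-12345678"]
    aws.elastic_ip      = "203.0.113.10"

    override.ssh.username         = "ubuntu"
    override.ssh.private_key_path = "~/.ssh/mykeypair.pem"
  end
end
```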

Next article: Use Chef to provision both the local VM and the EC2 instance. It’s nice to have created these servers but they really need applications installing so they can do some work for us!

Database backup like a boss

We often need to create and restore database backups to migrate data between servers or run backups. The mysqldump command creates a series of mysql insert commands, which is good for readability but not so good for file size. It’s recommended to compress before transferring across a network, but that means another command, and then further commands to clean up afterwards. Compressing can often reduce file sizes to a tenth of the original. Sometimes, however, the uncompressed file simply won’t fit in the file system – likely for large databases when the system is running low on disk space.

In these situations it’s best to pipe the mysqldump output straight into a compression utility so there is no intermediate file. On the other end, likewise, the compressed file can be uncompressed and the output piped straight to mysql.

It’s simple to backup like a boss:
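Dump and compress in one pipe (substitute your own credentials and database name):

```shell
mysqldump -u mydbuser -p mydatabase | gzip > mydatabase.sql.gz
```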

and then
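Restore with the mirror-image pipe:

```shell
gunzip < mydatabase.sql.gz | mysql -u mydbuser -p mydatabase
```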

Breezejs navigation properties

The more I use BreezeJS the more I like it. I’m not using it with a .NET infrastructure so I’m hand rolling my metadata. Here’s a note of my findings for future reference.

key: The primary key for the entity. Can be a compound key. Required when new entities are created, loading by key and for reducing the amount of information required to describe navigation properties.

Navigation properties are the science of relating entities together. We can have 1:1, 1:many and many:many (via a mapper collection) relationships. We can have parent to child relationships and inverse relationships (child to parent).

name: Arbitrary name of the property. In knockout land this will be the observable property that contains the related entity/entities

entityTypeName: The namespaced entity name of the related entity, eg Location:#Tinder

isScalar: Single(true) or many(false). For non scalar the related entities are embedded in an array, for scalar types the entity is embedded directly.

associationName: Arbitrary name of the relationship. Convention is <root entity>_<child entity>, eg Centre_Location. Must be unique, as it is used when describing relationships that have both forward and inverse relationships. In these cases the inverse relationship associationName must match the normal (forward?) relationship.

foreignKeyNames: Array of key names. These define the property on the root entity to use when fetching the related entity. This property value will relate to the primary key of the related entity. We do not need to specify the details of the related entity primary key here as this is defined in the related entity metadata.

I’m not sure how multiple keys would work, but I guess the ordering would match the compound key ordering in the related entity.

invForeignKeyNames: Array of key names. These define the relationship the other way around. These define the property on the related entity that matches the primary key of the root entity.

For most cases we only want to define normal or forward relationships where we state the parent to child relationship. For these cases the use of foreign and inverse keys depends if the relationship is 1:1 or many.

For 1:1 isScalar = true and the foreignKeyNames field is used.

For many isScalar = false and the invForeignKeyNames field is used.
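As a sketch, a hand-rolled metadata fragment using these fields might look like this (entity and property names other than Location:#Tinder are invented):

```json
{
  "navigationProperties": [
    {
      "name": "location",
      "entityTypeName": "Location:#Tinder",
      "isScalar": true,
      "associationName": "Centre_Location",
      "foreignKeyNames": ["locationId"]
    },
    {
      "name": "locations",
      "entityTypeName": "Location:#Tinder",
      "isScalar": false,
      "associationName": "Region_Location",
      "invForeignKeyNames": ["regionId"]
    }
  ]
}
```

The first is a 1:1 from a root entity holding a locationId; the second is a 1:many where each Location carries a regionId back to the root.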

I’ve not tried defining both foreignKeyNames and invForeignKeyNames for a relationship; up to now it’s only been necessary to define one. A possible exception might be for a many relationship: if the related entity does not map to the primary key of the root entity, maybe it would be described with foreignKeyNames.

Using invForeignKeyNames can seem confusing, as it’s describing a forward relationship. It is useful though, as the alternative would be to define an inverse relationship on the related entity back to the root, which might work but may result in extra response payload.

Core audio tips

In case it ever gets removed, some Core Audio gold from

  • The Audio Unit Programming Guide is required reading for using Audio Units, though you have to filter out the stuff related to writing your own AUs with the C++ API and testing their Mac GUIs.
  • Get comfortable with pointers, the address-of operator (&), and maybe even malloc.
  • You are going to fill out a lot of AudioStreamBasicDescription structures. It drives some people a little batty.
  • Always clear out your ASBDs, like this:
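    Something like this, assuming a local variable:

```c
AudioStreamBasicDescription asbd;
memset(&asbd, 0, sizeof(asbd));   // zero every field before setting the ones you need
```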

    This zeros out any fields that you haven’t set, which is important if you send an incomplete ASBD to a queue, audio file, or other object to have it filled in.

  • Use the “canonical” format — 16-bit integer PCM — between your audio units. It works, and is far easier than trying to dick around bit-shifting 8.24 fixed point (the other canonical format).
  • Audio Units achieve most of their functionality through setting properties. To set up a software renderer to provide a unit with samples, you don’t call some sort of a setRenderer() method, you set the kAudioUnitProperty_SetRenderCallback property on the unit, providing a AURenderCallbackStruct struct as the property value.
  • Setting a property on an audio unit requires declaring the “scope” that the property applies to. Input scope is audio coming into the AU, output is going out of the unit, and global is for properties that affect the whole unit. So, if you set the stream format property on an AU’s input scope, you’re describing what you will supply to the AU.
  • Audio Units also have “elements”, which may be more usefully thought of as “buses” (at least if you’ve ever used pro audio equipment, or mixing software that borrows its terminology). Think of a mixer unit: it has multiple (perhaps infinitely many) input buses, and one output bus. A splitter unit does the opposite: it takes one input bus and splits it into multiple output buses.
  • Don’t confuse buses with channels (ie, mono, stereo, etc.). Your ASBD describes how many channels you’re working with, and you set the input or output ASBD for a given scope-and-bus pair with the stream description property.
  • Make the RemoteIO unit your friend. This is the AU that talks to both input and output hardware. Its use of buses is atypical and potentially confusing. Enjoy the ASCII art:

    Ergo, the stream properties for this unit are:

    Input scope, bus 0: set the ASBD to indicate what you’re providing for play-out.
    Input scope, bus 1: get the ASBD to inspect the audio format being received from the hardware.
    Output scope, bus 0: get the ASBD to inspect the audio format being sent to the hardware.
    Output scope, bus 1: set the ASBD to indicate what format you want your units to receive.
  • That said, setting up the callbacks for providing samples to or getting them from a unit take global scope, as their purpose is implicit from the property names: kAudioOutputUnitProperty_SetInputCallback and kAudioUnitProperty_SetRenderCallback.
  • Michael Tyson wrote a vital blog on recording with RemoteIO that is required reading if you want to set callbacks directly on RemoteIO.
  • Apple’s aurioTouch example also shows off audio input, but is much harder to read because of its ambition (it shows an oscilloscope-type view of the sampled audio, and optionally performs FFT to find common frequencies), and because it is written with Objective-C++, mixing C, C++, and Objective-C idioms.
  • Don’t screw around in a render callback. I had correct code that didn’t work because it also had NSLogs, which were sufficiently expensive that I missed the real-time thread’s deadlines. When I commented out the NSLog, the audio started playing. If you don’t know what’s going on, set a breakpoint and use the debugger.
  • Apple has a convention of providing a “user data” or “client” object to callbacks. You set this object when you setup the callback, and its parameter type for the callback function is void*, which you’ll have to cast back to whatever type your user data object is. If you’re using Cocoa, you can just use a Cocoa object: in simple code, I’ll have a view controller set the user data object as self, then cast back to MyViewController* on the first line of the callback. That’s OK for audio queues, but the overhead of Obj-C message dispatch is fairly high, so with Audio Units, I’ve started using plain C structs.
  • Always set up your audio session stuff. For recording, you must use kAudioSessionCategory_PlayAndRecord and call AudioSessionSetActive(true) to get the mic turned on for you. You should probably also look at the properties to see if audio input is even available: it’s always available on the iPhone, never on the first-gen touch, and may or may not be on the second-gen touch.
  • If you are doing anything more sophisticated than connecting a single callback to RemoteIO, you may want to use an AUGraph to manage your unit connections, rather than setting up everything with properties.
  • When creating AUs directly, you set up a AudioComponentDescription and use the audio component manager to get the AUs. With an AUGraph, you hand the description to AUGraphAddNode to get back the pointer to an AUNode. You can get the Audio Unit wrapped by this node with AUGraphNodeInfo if you need to set some properties on it.
  • Get used to providing pointers as parameters and having them filled in by function calls:
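    For example, fetching the unit out of a graph node (variable names assumed):

```c
AudioUnit remoteIOUnit;
OSStatus err = AUGraphNodeInfo(auGraph,         // the graph (in)
                               remoteIONode,    // the node to inspect (in)
                               NULL,            // component description – not needed
                               &remoteIOUnit);  // the unit comes back here (out)
```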

    Notice how the return value is an error code, not the unit you’re looking for, which instead comes back in the fourth parameter. We send the address of the remoteIOUnit local variable, and the function populates it.

  • Also notice the convention for parameter names in Apple’s functions. inSomething is input to the function, outSomething is output, and ioSomething does both. The latter two take pointers, naturally.
  • In an AUGraph, you connect nodes with a simple one-line call:
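    Assuming mixerNode and remoteIONode were created earlier:

```c
// mixer output bus 0 -> RemoteIO input bus 0
AUGraphConnectNodeInput(auGraph, mixerNode, 0, remoteIONode, 0);
```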

    This connects the output of the mixer node’s only bus (0) to the input of RemoteIO’s bus 0, which goes through RemoteIO and out to hardware.

  • AUGraphs make it really easy to work with the mic input: create a RemoteIO node and connect its bus 1 to some other node.
  • RemoteIO does not have a gain or volume property. The mixer unit has volume properties on all input buses and its output bus (0). Therefore, setting the mixer’s output volume property could be a de facto volume control, if it’s the last thing before RemoteIO. And it’s somewhat more appealing than manually multiplying all your samples by a volume factor.
  • The mixer unit adds amplitudes. So if you have two sources that can hit maximum amplitude, and you mix them, you’re definitely going to clip.
  • If you want to do both input and output, note that you can’t have two RemoteIO nodes in a graph. Once you’ve created one, just make multiple connections with it. The same node will be at the front and end of the graph in your mental model or on your diagram, but it’s OK, because the captured audio comes in on bus 1, and at some point you’ll connect that to a different bus (maybe as you pass through a mixer unit), eventually getting the audio to RemoteIO’s bus 0 input, which will go out to headphones or speakers on bus 0.

Improving playback timing with Core MIDI Network MIDI

The network MIDI feature for OS X and iOS is fantastic – a great way to connect your mobile and desktop apps together. However mileage can vary and timing can often become unacceptable depending on the mood of your local network.

This makes sense – Wifi is not designed for sending infrequent low latency small packets where each packet must be sent instantly.

Things can be massively improved by creating a private local network to connect your iOS device to your mac directly. And in a typically Apple way it’s ridiculously easy to do!

On your mac in the wifi menu select ‘Create Network…’ and follow the instructions. Then on your mobile device switch to the new network. It may complain that there is no internet connection but that’s ok – you don’t want interruptions from email/twitter/etc during your MIDI session anyway 🙂

Reducing disk bloat on the command line

Some handy nix snippets to find which directories are eating into your disk space

Show me it all

First the daddy. Show disk free (df) on all mounts (-h is for (h)uman readable disk sizes)
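That is:

```shell
df -h
```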

Show a breakdown

Show disk usage (du) for a file/directory or set

-h Again human readable

-s Value for each file/directory

-c Show a grand total for all
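For example:

```shell
du -hsc /var/log /var/tmp
```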

Rather than specify a file or path I usually cd into the directory and use a wildcard:
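Like so:

```shell
cd /var/www
du -hsc *
```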

To sort by size pipe to, err, sort. With the -h option it groups k, m, g together so not actually in true size order but it’s easy enough to see what’s what. Alternatively remove the -h and look at big numbers!
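That is:

```shell
du -hsc * | sort -h
```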

These commands will miss dot files by default. If you need to see them use
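A glob that picks up dot files (but not . and ..):

```shell
du -hsc .[!.]* *
```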

Show recent file changes

Want to see the preference files updated when you launched an app? Or can’t remember what you’ve been doing recently? Show files changed in the last 10 minutes.
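That is:

```shell
find . -mmin -10
```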

To widen the net to longer durations, say files updated in the last 10 days use ctime
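That is:

```shell
find . -ctime -10
```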

Time defaults to days but alternative units can be specified:

s: second

m: minute

h: hour

d: day

w: week
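For example, with the BSD find shipped with OS X (GNU find does not accept these suffixes):

```shell
# Files modified in the last 30 minutes
find . -mtime -30m
```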

Show recent file access

For file access use either amin or atime with similar pattern.

Do dangerous stuff

Please use with a clear head. Now you’ve found files that are possibly clogging up space (how about old backup/cache/log files?), with care you can delete them. Warning: it’s recursive. You can/should run the command without the -delete first to see what’s going to happen.

SVN Workout 1 – Branch and merge practice

RECOMMENDED TO USE LATEST SVN CLIENT, 1.8.4 (at time of writing)

Version control is only as good as how it’s used. And it can be used really badly. All the facilities are there to prevent trouble from occurring, but only if you use them.

A basic reliable version control workflow assumes that the trunk copy contains only stable production ready code.

To ensure only stable production ready code goes into the trunk, branches must be created to contain the work for each feature for a future release. Let’s say that again:

To ensure only stable production ready code goes into the trunk, branches must be created to contain the work for each feature for a future release.

It’s often tempting to just use the trunk for everything, avoid branching and merging to save time and hassle but that leads to bigger problems down the line and costs MORE time to later fix.

So to dispel the rumour that branching and merging is difficult, let’s do a little svn workout to show how easy it can be, and how much hassle it can save when real world problems occur.
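A sketch of the basic cycle, assuming a repository laid out with trunk and branches directories (using svn’s ^/ shorthand for repository-relative URLs):

```shell
# Branch: a cheap server-side copy of trunk
svn copy ^/trunk ^/branches/my-feature -m "Create branch for my-feature"

# Check out the branch and do the feature work there
svn checkout ^/branches/my-feature my-feature
cd my-feature
# ... edit, test, svn commit ...

# Periodically sync trunk changes into the branch
svn merge ^/trunk

# When the feature is stable, merge back from a trunk working copy
# (svn 1.8 reintegrates automatically)
cd ../trunk
svn update
svn merge ^/branches/my-feature
svn commit -m "Merge my-feature into trunk"
```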

Setting XDebug breakpoints for remote CLI scripts with PHPStorm

PHPStorm makes setting and using XDebug breakpoints easy. Just define the server configurations, click the phone icon to start listening, and load your site in a browser. With the correct xdebug.ini parameters it works whether your application is running on the same machine as PHPStorm or on a remote server.

Running CLI scripts on a different server to the machine PHPStorm is running on requires a bit more effort though.

First ensure the phone icon is active so PHPStorm is listening for connections.

Again ensure server configurations are setup for your project. There will be a server name and configuration name.

On the remote server define the server name that PHPStorm should listen for:

export PHP_IDE_CONFIG="serverName=yourservername"

Now execute the CLI script on the remote machine.

If the server name you specify matches the server name you have defined in your project execution will halt on your breakpoints.

Or will it?

For me this works fine in some environments. When it is not enough, export some more to get it going, and unset again to stop it. Nice!
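The exact variables needed vary with the Xdebug version; a typical pair (host IP and IDE key are placeholders, not from the original) is:

```shell
# Tell Xdebug where to connect back to (the machine running PHPStorm)
export XDEBUG_CONFIG="idekey=PHPSTORM remote_host=192.168.0.10"
php myscript.php    # breakpoints should now hit

# And to stop triggering the debugger:
unset XDEBUG_CONFIG
```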

How to rename a mongodb collection

On installing a sample node.js application, the associated mongo collections ended up with long names including ‘.’ and digit blocks.

To rename, you cannot access the collection directly as a property, since the dot-digit combination is illegal in that syntax. Instead use
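A sketch in the mongo shell (the long name is an invented example – substitute your own):

```javascript
// db.myapp.users.1380600124899 would be illegal syntax, so fetch by name instead
db.getCollection("myapp.users.1380600124899").renameCollection("users")
```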

to solve the problem.