Best oscilloscope app so far…

[Image: Audiospex oscilloscope app on iPhone 6, with App Store link]

Yes, it’s out now and it’s a beauty. It’s a very nice app, the most satisfying to date and something I’d wanted to do for a long while. I mean, oscilloscopes are great things. First, they look good. Lots of controls which look mind-bogglingly complicated – check. A dot that pings across the screen like a life support machine – check. The ability to actually see what is going on in a circuit in real time – check. Probes that look like surgical instruments – check.

I used to have two ‘scopes (as they are affectionately known) before the family arrived, but they had to go because an oscilloscope is not exactly a small thing – mainly due to the cathode ray tube and the corresponding circuitry needed to produce the extremely high voltages that fling the electrons from the gun across the length of the tube. The architecture of an analogue oscilloscope is, however, very simple. Once the correct voltages are applied to the cathode ray tube, the gun emits a stream of electrons that accelerate towards the screen at great speed due to the high voltage potential on grids inside the tube. When they hit the screen it glows green (traditionally) thanks to a phosphorescent coating on the inside of the screen.

That setup alone produces a dot in the middle of the screen. Not that exciting – we need a trace. The axes on a ‘scope are time on the horizontal and amplitude on the vertical. In the tube there are vertical and horizontal deflection plates. To achieve a trace, a sawtooth oscillator drives the horizontal plates. The amplitude of the sawtooth defines how far from the centre of the screen the trace will track (that’s one control we need to adjust) and the frequency defines the time a trace takes to travel from left to right. The only tricky part is to ensure the sawtooth oscillator provides a separate pulse during the step part, which is fed into the grid amplifiers to blank the beam during the flyback phase. Horizontal is mapped to time, and time should only go one way!

With that in place the central dot becomes a moving dot, with its speed depending on the sawtooth frequency. Due to the phosphorescent coating on the screen it continues to emit light after the beam has moved away. This means when the sawtooth frequency is increased the moving dot appears as a constant line. When televisions had cathode ray tubes they relied on exactly the same trick.

The final step is the easiest. The input signal to the scope is amplified and then fed to the vertical deflection plates. Now the beam will draw whatever signal is applied to the input on the screen. For more detail the vertical deflection amplifier gain can be increased.

That gives the basics of a scope. We can now check a signal and instantly get a feel for its content. Is there any obvious noise or distortion? Low frequency content, high frequency content, relative amplitudes, any regular patterns? Most of this is hidden from a multimeter, which has to average everything out to give a single reading.

However this basic setup has a key problem: it is very hard to view repetitive waveforms, like a basic sine wave. Due to the phosphorescent coating the screen continues to emit light after the beam has moved on. This is a good thing – it prevents flicker and makes slow timebases easier to view. However at fast timebase frequencies this persistence means several traces appear to be on the screen at the same time, masking the true shape of the signal.

[Image: Oscilloscope waveform with no trigger]
Where’s the sine wave here? Without trigger the true signal gets lost in overlapped waveforms.

The phosphorescent coating is a physical thing, so the persistence cannot be switched off for higher frequencies. Even if it could, the display would become very dim.

Instead, to get around this issue, scopes have a trigger section.

The trigger circuit freezes the sawtooth timebase oscillator in the flyback position. In other words the trace is blanked and about to begin, so nothing is seen. The trigger circuit compares the input voltage to the vertical amplifier against an adjustable trigger voltage. Only when the input crosses this level does the trigger circuit allow the timebase oscillator to continue unobstructed, until the next blanking phase where again the trigger circuit forces it to wait until the trigger threshold has been crossed. That ensures the start of the trace always occurs when the input waveform is at the same position, so it appears stationary on the screen. It’s important to realise the input signal must cross the trigger threshold to trigger the trace. Scopes will have an option to trigger on a positive edge (signal going from below the threshold to above) or negative (signal going from above the threshold to below).
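In software, the same idea is just a comparison against the previous sample. A minimal sketch (a hypothetical helper, not the app’s actual code):

    // Fire only when the signal crosses the threshold from below (positive edge).
    int positiveEdgeTrigger(float previous, float current, float threshold) {
        return previous < threshold && current >= threshold;
    }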

[Image: Oscilloscope waveform with trigger switched on]
With trigger switched on the waveform no longer moves so we can see it – a perfect sine wave!

This can be initially confusing if the signal sits entirely below the threshold level – or, due to a DC component, entirely above it. The trigger threshold is never crossed, so nothing is seen on the screen. The threshold level must be adjusted carefully to ensure the signal triggers and the trace appears.

Sometimes the trigger function is not required or does not suit the input signal. In this case the trigger section can be switched to “Auto”. This setting means the trigger circuit will automatically fire on each blanking phase ignoring the state of the threshold. In other words the trigger section is effectively switched off.

Overall this neat design means that once a scope is calibrated it is both accurate and reliable. There’s not much to go wrong, which makes it a great tool.

Now, with the advent of flat screens and capable CPUs, I thought it time to model an oscilloscope in an app. It is fairly straightforward to draw a signal on the screen, but it was really important to me that the app should behave and look like a real cathode ray tube oscilloscope in real time. This made development much trickier but in the end worthwhile. To me it has that analogue scope feel but fits in my pocket!

 

Those pesky Core Audio error codes

We’ve all been there. I was there recently with an error code from Core Audio, ‘-66748’ to be specific. On a good day the meaning of these codes can be found in the Core Audio header files, but the search must be done by hand – unless there is some framework-wide search facility in Xcode that no-one has told me about.

This time I thought I’d be slightly more automated and grep the framework header directory to search everything for me.

So, with a right click on the framework and “Show in Finder” to get the path, I tried something along these lines (the header path here is a stand-in for whatever Finder shows you):
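    grep -r "-66748" /path/to/AudioToolbox.framework/Headers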

    grep: warning: recursive search of stdin

Eh?

I’ve specified the path so why is grep reading stdin?

Turns out it’s the negative error code, despite being in quotes. Quotes only protect the string from the shell – grep still sees the leading dash and parses -66748 as an option rather than the search term.

The trick to fix this is to use a double dash -- to indicate to grep that there are no further options to parse after the -r option.

This comes up with the goods:
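    grep -r -- "-66748" /path/to/AudioToolbox.framework/Headers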

and a bit of extra grep arguments to show line numbers and context:
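    grep -rn -C 2 -- "-66748" /path/to/AudioToolbox.framework/Headers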

Or why not put it in a script?
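A sketch (the script name and header path are made up – point it at your own framework directory):

    #!/bin/sh
    # caerr.sh: look up a Core Audio error code in the framework headers
    HEADERS="/path/to/AudioToolbox.framework/Headers"
    grep -rn -C 2 -- "$1" "$HEADERS"

Then ./caerr.sh -66748 does the lot.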

Stepping into chained method calls with Xdebug and PHPStorm

Or: Shift+F7, where have you been all my life?

More and more frameworks are providing APIs that are chainable. It’s descriptive, concise and the in thing.

So, instead of code like this:
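Something like this (a sketch using the $dbAdapter and $select seen later in the post):

    $select = $dbAdapter->getSelect();
    $select->columns(array('id', 'name'));
    $select->order('name');
    $select->limit(10);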

We have this:
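    $select = $dbAdapter->getSelect()
        ->columns(array('id', 'name'))
        ->order('name')
        ->limit(10);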

Much less clutter, and easy to implement. To make this coding style work all these chainable methods need to return their instance value:
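    public function order($spec)
    {
        // ... apply the ordering ...
        return $this; // returning the instance is what keeps the chain alive
    }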

But up to now I’ve had a big problem with this syntax. Not with the implementation or appearance, but with using Xdebug breakpoints.

Say I have problems with the ‘order’ method that I want to investigate with Xdebug and PHPStorm. With the chainable syntax how do we get there?

Set a breakpoint for the line ‘$select = $dbAdapter->getSelect()’ and then use ‘Step into’.

This steps into the ‘getSelect’ method. Not what we want. So we step over till the end of that method, then step over again, which lands us in the ‘columns’ method. It’s tempting to use step out while in the ‘getSelect’ method, but that gets us out of the whole chain so we miss everything. Still, we are in ‘columns’ now. Not what we want, but closer!

So we repeat this process till we land in the ‘order’ method. Then the debugging can be done. We could of course have put the breakpoint in the ‘order’ method in the first place, but this assumes we know what object ‘getSelect’ is going to return. And with big frameworks and modular architectures this is not always obvious.

So this drove me crazy. I just wanted to be able to highlight the method to step into or choose it from a list or something.

And that’s what PHPStorm gives you, I just didn’t know where to find it. I’d always been looking at options in the debugging panel. It would be with the ‘step over’ and ‘step into’ buttons, right? Wrong!

It’s in the top ‘Run’ menu.

And I never go in any of the top menus. Ever. Well except for creating new projects.

Since PHPStorm gives me contextual menus, panel buttons and keyboard shortcuts galore, I’d never thought to look in the top menus.

And there it is:

Smart step into

[Screenshot: the Run menu with ‘Smart step into’]

This gives exactly what’s required: a popup menu allowing you to choose which method to step into.

And a keyboard shortcut to commit to muscle memory.

So thanks again PHPStorm but please add a ‘smart step into’ button on the debug panel so it’s easier for me to find!

Using git to show a quick summary of all files altered on a branch

Assuming the branch has not yet been merged and we’re working against master, this ‘lil beauty provides a neat list of all the actions:
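Perhaps something like this, run from the branch itself:

    git log --oneline master..HEAD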

And if that’s not enough this shows the files altered on that branch:
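    git diff --name-status master...HEAD

(The triple dot compares against the merge base, so only this branch’s changes show up.)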

Debugging Magento product filter counts

A handy place to inject a cheeky “error_log” to show the SQL that fetches the counts shown on product filters:

Mage_CatalogIndex_Model_Resource_Attribute::getCount

public/app/code/core/Mage/CatalogIndex/Model/Resource/Attribute.php:76
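For example (assuming the method builds its query in a local $select – check the variable name at that line):

    error_log((string) $select); // logs the filter-count SQL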

How to provision a local VM and a remote Amazon EC2 instance with the same Chef and Vagrant setup

In the previous article we learned how to create a local virtual machine for development and a similar live server on Amazon EC2 with Vagrant.

That helped us setup the servers and get going with Vagrant but we didn’t install anything on them. So let’s do that now!

First a recap of the tools we are using:

Vagrant – The glue that holds the whole process together. Vagrant co-ordinates virtualisation providers to manage virtual servers and provisioners that will then load apps and tweak settings on those servers.

Virtualbox – The virtualisation provider for local virtual machines. Also the default for vagrant, but other providers can be used.

Vagrant EC2 plugin – The link to the virtualisation provider for servers on the Amazon EC2 platform.

Chef – The tool to add applications, modules and config files to the server that is controlled by Vagrant. The provisioner.

The good thing about this toolset is that each tool abstracts its work domain well. Vagrant can work with different virtualisation providers, such as Virtualbox or VMware. It can use different provisioners, such as Chef or Puppet. Whatever the combination, you still use the same Vagrant commands – vagrant up, vagrant destroy, vagrant provision, vagrant ssh.

Chef abstracts the provisioning process so the same Chef configuration can be used for whatever type of server you wish to cook up (sorry!). In theory this is true, but in practice it may need a bit of OS-specific config here and there. To be fair this stuff is HARD, so sometimes you have to be aware that you have a particular strain of a certain OS. For example, in my last setup, to install a certain application on Ubuntu I had to ensure apt-get update was called before the app was installed. But I could do this with Chef, so it still keeps the philosophy of the toolset.

And the philosophy of the toolset? To be able to produce a portable and reproducible development environment.

And this is what I want to do. To be able to produce a local development server and then reproduce this server on EC2. In the previous article we created a Vagrant managed server both locally and on EC2. So here we now need to feed these servers some food – in the form of apps and config.

Our shopping list of tasks will be:

  • Install Chef
  • Setup the local chef directory structure
  • Create a git repo for the whole project (the chef tools manage cookbooks via git)
  • Add some cookbooks
  • Instruct vagrant to provision the VM with Chef to install MySQL and PHP
  • Create a custom cookbook to set the root MySQL password, create a user and database and populate from a dump file.
  • Repeat on our remote EC2 server to provision with the same setup as the development machine

 

Installing Chef

The first step is to follow the instructions here:

https://wiki.opscode.com/display/chef10/Installing+Chef+Client+and+Chef+Solo

The aim is to just install Chef on your machine and go no further into configuration. We will be using chef-solo, which means all configuration will be kept and managed locally. That’s fine for this project – we can keep our config close. The other flavours of Chef are suited to managing multiple clusters of servers, which sounds like an adventure for another day.

To keep things simple we won’t even have to call chef-solo commands ourselves – Vagrant will do that. The one Chef tool we will have to use is called ‘knife’, which is used to carve up config files – or cookbooks, to use the Chef terminology.

Installing cookbooks

Before we start let’s recap the file structure of our project so far:
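Reconstructed from the paths used later in the article, it looks like this:

    mydomain/
        vagrant_local/
            Vagrantfile
        vagrant_ec2/
            Vagrantfile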

Let’s start by asking chef to install php and mysql for us on our VM. To do this we have to use knife to install the cookbooks for php and mysql. We will then instruct vagrant to tell chef to run recipes from those cookbooks.

One thing to be aware of with using knife (hold the blunt end?) is it requires a git repo to work with. But we were going to put our server config into a repo anyway, so let’s do it now.
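From the project root:

    cd mydomain
    git init
    git add .
    git commit -m "Server config"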

Now we can start using knives:
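The exact commands will vary with your stack; here apt, mysql and php, run from the chef directory (knife pulls cookbooks from the community site):

    knife cookbook site install apt
    knife cookbook site install mysql
    knife cookbook site install php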

Ideally we would only be installing the php and mysql cookbooks, but we need a few extras to smooth things over. After all, this stuff is tricky to do across such a wide range of platforms. The apt cookbook will ensure our Ubuntu server is up to date when we start installing; the iis and yum-epel cookbooks keep the other cookbooks happy.

During the install your screen should show you knife doing lots of stuff. If you look in your cookbook directory you will see the downloaded cookbooks:
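    ls cookbooks
    # apt  iis  mysql  php  yum-epel  ... plus whatever else knife pulled in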

Cookbooks are packages that can have dependencies on other cookbooks. knife is clever enough to deal with these dependencies and load them for us, which accounts for the extras here (beyond the extras we specified ourselves).

Getting Vagrant to read cookbooks

Now we can edit the Vagrantfile for our VM. Refer to the final copy of the file at the bottom of the article to see where to fit things in. Here we tell vagrant to use chef for provisioning and which recipes to run. We also need to tell vagrant where the chef cookbooks are:
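A sketch (recipe names come from the cookbooks above; the relative path assumes the chef directory sits alongside vagrant_local):

    config.vm.provision :chef_solo do |chef|
      chef.cookbooks_path = "../chef/cookbooks"
      chef.add_recipe "apt"
      chef.add_recipe "mysql::server"
      chef.add_recipe "php"
    end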

Earlier, Chef was installed on your host machine so cookbooks could be downloaded, but we also need Chef installed on the virtual machine itself. The Chef client is sometimes included in base boxes, so it may already be there, but that is not guaranteed. Luckily there is a Vagrant plugin that will check Chef is present on the target machine and install it if not. To install the plugin, run in your shell:
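The plugin I’m assuming here is vagrant-omnibus, which does exactly this:

    vagrant plugin install vagrant-omnibus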

And then update your Vagrantfile to use the plugin:
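    config.omnibus.chef_version = :latest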

Provision the local VM

Now, from the vagrant_local directory, tell Vagrant to provision the server:
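    vagrant provision
    # (use vagrant up --provision if the VM isn't currently running)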

Again Chef will fill your screen in green with its activity. Once completed you can log in and verify MySQL is installed:
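    vagrant ssh
    mysql --version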

Result! Chef has cooked up this stack for us.

Passing parameters to cookbook recipes

What we’ve done so far is run off the shelf cookbooks that install standard packages.

We haven’t yet told Chef anything specific about our particular install. Cookbooks often contain multiple recipes, so you can customise an install by selecting the appropriate ones. For example, if we only needed the MySQL client we would have left out the MySQL server recipe. The other way to customise Chef’s actions is to pass in cookbook parameters. There’s often a wide range of these, detailed in the cookbook docs. Let’s start by specifying the root password for mysql (from a security point of view this is not a production solution, just a demo). We can do this by passing the value to the mysql cookbook in our Vagrantfile:
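A sketch; server_root_password was the attribute name the mysql cookbook of this era used (check your cookbook’s docs for the current one):

    chef.json = {
      "mysql" => {
        "server_root_password" => "supersecret"  # demo value only
      }
    }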

And ask vagrant to shake this change through to the VM:
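    vagrant provision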

This command runs Chef on an already running VM. A core principle of Chef is that its operation is idempotent – running it multiple times will result in the same outcome. In this case the root password gets updated but everything else stays the same. This is great for developing our setup: we can make small steps and test each time.

Creating custom cookbooks

So next, something more adventurous. We will set up a database and a user, then import data into the database from a dump file so our app has an initial state. To my knowledge this isn’t possible with the default cookbook, so let’s create our own cookbook to do it.

To create a new cookbook we again use knife, but first we must create a separate directory to store our custom cookbooks. This is needed because some Chef commands that manage downloaded cookbooks can delete cookbooks they didn’t download, and it also helps organise your cookbooks clearly. So from the chef directory run:
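    mkdir site_cookbooks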

Then we must tell Chef about the new cookbook directory by editing the Vagrantfile to describe cookbook locations relative to the Vagrantfile:
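    chef.cookbooks_path = ["../chef/cookbooks", "../chef/site_cookbooks"]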

Now instruct knife to create an empty cookbook for us (run from the chef directory)
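    knife cookbook create dbsetup -o site_cookbooks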

If you look inside the site_cookbooks directory you will see a dbsetup cookbook that is far from empty. Fortunately we don’t need to worry about most of this structure for the moment, we just need to edit the default recipe (site_cookbooks/dbsetup/recipes/default.rb):
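A sketch using execute resources; the node['dbsetup'] attributes are ones we define ourselves in a moment:

    # Create the application database and user, using the root password
    # supplied via chef.json in the Vagrantfile.
    root_pass = node['mysql']['server_root_password']
    db        = node['dbsetup']['database']

    execute "create database" do
      command "mysql -uroot -p#{root_pass} -e \"CREATE DATABASE IF NOT EXISTS #{db}\""
    end

    execute "create user" do
      command "mysql -uroot -p#{root_pass} -e \"GRANT ALL ON #{db}.* TO '#{node['dbsetup']['user']}'@'localhost' IDENTIFIED BY '#{node['dbsetup']['password']}'\""
    end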

This will instruct Chef to run mysql commands to create a database and then a database user. The operation requires root permissions, which we can fetch here from the config we defined earlier. Note the database name, username and password are also pulled from the config. So best define that back in the Vagrantfile:
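    chef.json = {
      "mysql" => {
        "server_root_password" => "supersecret"
      },
      "dbsetup" => {
        "database" => "mydb",       # example values
        "user"     => "myuser",
        "password" => "mypassword"
      }
    }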

Also tell Chef to use the new cookbook:
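    chef.add_recipe "dbsetup"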

And kick it off again with (from vagrant directory)
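    vagrant provision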

This time you might see some unpleasant red text in the Chef output:

As we created the new cookbook directory while the VM was running, it could not be mounted. No problem – we can switch it off and on again to fix:
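    vagrant halt
    vagrant up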

Excellent! We can log in with our new user and that user can see the new database.

Using Chef to restore a database from a dump file

If only we could fill that database with data from a database dump, so our VM has data to work with out of the box. Again Chef makes that pretty simple. First we need to generate the dump file. As we ‘backup like a boss’ around here, use this command on whichever server contains the populated database (substituting your db credentials):
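    mysqldump -u myuser -pmypassword mydb | gzip > dump.sql.gz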

Copy this file to site_cookbooks/dbsetup/files/default (the default subdirectory is where Chef looks for cookbook files).

Now add lines in the recipe to copy this file to the VM and restore the db (site_cookbooks/dbsetup/recipes/default.rb)
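A sketch, again keyed off the node['dbsetup'] attributes:

    # Ship the dump to the VM, then restore it into the new database.
    cookbook_file "/tmp/dump.sql.gz" do
      source "dump.sql.gz"
    end

    execute "restore database" do
      command "gunzip < /tmp/dump.sql.gz | mysql -u#{node['dbsetup']['user']} -p#{node['dbsetup']['password']} #{node['dbsetup']['database']}"
    end

(As written this restore runs on every provision; in real use you’d guard it with a not_if check so it only runs when the database is empty.)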

And again put Chef to work:
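    vagrant provision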

Now inspect the database to check your data is there.

I really like this setup. With our custom cookbook added to version control we have a setup that can, from nothing, create a VM, install core applications and also populate and configure our database. These methods can be used to set up Apache, PHP or whatever stack you require. This setup is also going to pay off for our EC2 server from the previous article. As we have done all the hard work creating the cookbook, we only need to update the EC2 Vagrantfile with the cookbooks to run and the config. What’s nice here is we can use the config to set different parameters for the different environments when required.

Here’s the completed Vagrantfile for the local VM (mydomain/vagrant_local/Vagrantfile)
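A sketch of how the pieces assemble (box, IP and credentials are example values):

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      config.vm.network "private_network", ip: "192.168.33.10"
      config.omnibus.chef_version = :latest

      config.vm.provision :chef_solo do |chef|
        chef.cookbooks_path = ["../chef/cookbooks", "../chef/site_cookbooks"]
        chef.add_recipe "apt"
        chef.add_recipe "mysql::server"
        chef.add_recipe "php"
        chef.add_recipe "dbsetup"
        chef.json = {
          "mysql" => {
            "server_root_password" => "supersecret"
          },
          "dbsetup" => {
            "database" => "mydb",
            "user"     => "myuser",
            "password" => "mypassword"
          }
        }
      end
    end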

And here’s the complete Vagrantfile for the remote EC2 server (mydomain/vagrant_ec2/Vagrantfile)
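A sketch: the aws provider block from the previous article plus the same chef_solo section as the local VM:

    Vagrant.configure("2") do |config|
      config.vm.box     = "dummy"
      config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
      config.omnibus.chef_version = :latest

      config.vm.provider :aws do |aws, override|
        # ... the aws.* and override.ssh.* settings from the previous article ...
      end

      config.vm.provision :chef_solo do |chef|
        chef.cookbooks_path = ["../chef/cookbooks", "../chef/site_cookbooks"]
        chef.add_recipe "apt"
        chef.add_recipe "mysql::server"
        chef.add_recipe "php"
        chef.add_recipe "dbsetup"
        chef.json = {
          "mysql"   => { "server_root_password" => "supersecret" },
          "dbsetup" => { "database" => "mydb", "user" => "myuser", "password" => "mypassword" }
        }
      end
    end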

So there we have the basics of a project that can create a local VM and a similar instance on EC2, provision both with applications via the same Chef setup, and deploy databases. Now it’s a matter of building on this structure to fill in the gaps and add the rest of the stack. For a start, install a webserver and configure the virtual host files using Chef templates (maybe a future article). Also, for production, secure the server and prepare methods to deploy codebases and databases. Happy dev-oping!

How to setup an EC2 instance and similar local development virtual machine with Virtualbox, Vagrant and Chef

I’ve finally done it and taken the plunge into the world of devops. I’ve been meaning to automate the build of my live server for a while, but recent changes to the EC2 pricing structure have given me extra motivation. Financial motivation! What I wanted to achieve was:

  • Automate creating, starting and stopping an Amazon EC2 instance using Vagrant
  • Automate creating a similar local virtual machine using Vagrant
  • Provisioning both with Chef to install packages such as Apache, MySQL, etc
  • Deploy base codebases and databases for all my sites

The holy grail for me would be to run one command and bang! – an EC2 instance with all my sites would appear. Then run another command and boom! – a local virtual machine would appear with all the sites running locally for development. And of course all the deployment and setup would be shared so there would be no duplication.

There were many problems found along the way pursuing this dream but in the end it turns out Virtualbox, Vagrant and Chef can deliver the goods. And deploy the goods. And provision the goods!

The benefits of this process are plentiful:

  • Recreate the environment quickly and easily, locally or live.
  • Test changes to the environment locally then deploy live.
  • Migration to another OS is simple. Where possible the tools are platform agnostic, and where this is not possible platform-specific workarounds can be implemented.
  • This toolset is widely accepted so would be simple to migrate to another hosting platform
  • All config is kept in one place under version control. It’s entirely possible to work on all the config files – virtual host files, db config – in your favourite IDE locally and deploy changes via Chef, so you don’t need to fiddle around with vi inside an ssh tunnel. (Although I do like that type of thing!)

Creating a local development virtual machine with Vagrant

So to get started we need Vagrant https://docs.vagrantup.com/v2/installation/ and also Virtualbox if you don’t already have it.

With these in place we can start to create the configuration for our local virtual machine:
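    mkdir -p mydomain/vagrant_local
    cd mydomain/vagrant_local
    vagrant init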

vagrant init creates a file called Vagrantfile that defines everything Vagrant needs to create the VM and later the details for Chef to know how to provision your new server. The file is heavily commented with sensible defaults to help us when we need to start tweaking.

So the first thing we need to consider is the ‘base box’ to use. This is the base flavour of the OS and where our journey starts. Kinda like an install DVD. Or ISO. Normally this is a matter of choosing a base box to match the intended OS, eg CentOS, Debian, Ubuntu. However we want to create a server on Amazon EC2, so we must choose an image that is both available as a Vagrant base box and as an EC2 AMI (the EC2 equivalent of a Vagrant base box).

I was already planning to run Ubuntu, so our next job is to find a menu of base boxes and AMIs.

Luckily there are excellent online resources for Ubuntu. EC2 AMIs are listed here http://cloud-images.ubuntu.com/locator/ec2/ and Vagrant boxes here https://vagrantcloud.com/ubuntu

Ubuntu Server 14.04 LTS is the best match here, so let’s configure our Vagrantfile for the local VM. Fire up your favourite editor and amend the Vagrantfile to use this box and to set up networking:
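Assuming the ubuntu/trusty64 box from Vagrant Cloud and a private network IP of your choosing:

    config.vm.box = "ubuntu/trusty64"
    config.vm.network "private_network", ip: "192.168.33.10"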

Then to start the VM, run this in the same directory as the Vagrantfile:
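    vagrant up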

Vagrant will now download the base box and instruct Virtualbox to create and start the VM. Once it’s completed we can log in to our new local development server:
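    vagrant ssh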

So that’s our local development server created with a single command. Later we will introduce Chef to install applications like MySQL and Apache. Here’s the full Vagrantfile for the local VM:

(mydomain/vagrant_local/Vagrantfile)
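A sketch at this stage – just the box and networking, Chef comes later:

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      config.vm.network "private_network", ip: "192.168.33.10"
    end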

Creating a remote EC2 server with Vagrant

Next, to set up the live EC2 server. For this we need to start by installing a Vagrant plugin to do the talking with EC2.
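The plugin I’m assuming here is vagrant-aws:

    vagrant plugin install vagrant-aws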

And then set up the Vagrant environment:
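    mkdir -p mydomain/vagrant_ec2
    cd mydomain/vagrant_ec2
    vagrant init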

And again we have to configure the new Vagrantfile. But before we can, we have to do some work in our AWS Management Console.

We need to:

  • Setup an IAM access key to allow Vagrant to talk to EC2
  • Setup a SSH key pair to use to login to the EC2 instance once it’s created
  • Choose the location to launch the instance
  • Setup a security group for the instance
  • Choose the type of instance
  • Choose the AMI

That sounds like a lot to do! And there’s more, and this was a real gotcha for me. I wanted to take advantage of the new t2.micro pricing. It’s cheaper AND better specced than the t1.micro. No brainer. However it turns out that t2 instances only run inside an Amazon VPC. I thought this would be the end of the road, with either VPC not working with the vagrant-ec2 plugin or it costing too much. Turns out VPC has no cost and it does work. Phew!

So the final item for the AWS Management Console list is:

  • Setup a VPC network

So off to work. To obtain the IAM access key follow the instructions here:

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html

You will end up with an Access key ID (example: AKIAIOSFODNN7EXAMPLE) and a Secret access key (example: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). These keys identify the vagrant-ec2 plugin with an Amazon IAM user/role/group. In the IAM console you would set the permission policy to only let these keys access what is necessary to create, boot and halt an instance. However, as we are keen to get started, we can allow all actions by creating a group assigned to our user with the following policy:
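    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "*",
          "Resource": "*"
        }
      ]
    }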

With this in place enter the key details into your Vagrantfile (the full Vagrantfile is listed at the end of the article, refer to it to see where to insert each snippet)
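Using the example keys from above (every aws.* setting that follows lives inside this provider block):

    config.vm.provider :aws do |aws, override|
      aws.access_key_id     = "AKIAIOSFODNN7EXAMPLE"
      aws.secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    end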

It’s probably a good idea next to decide which EC2 region you want your server to live in. We have much to do in the EC2 console so should make sure we are setting things up in the right region. I’ve selected ‘US East’ for my example.

Next task is the SSH keys to access the instance once it’s created. These are managed not in the IAM console but the EC2 console, selecting ‘Key Pairs’ from the navigation menu. Once your keypair is set up, enter the details in the Vagrantfile:
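The keypair name and key path are examples:

    aws.keypair_name              = "mykeypair"
    override.ssh.username         = "ubuntu"
    override.ssh.private_key_path = "~/.ssh/mykeypair.pem"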

Amazon Ubuntu AMIs require SSH access as the ubuntu user, which we specify with the ‘override.ssh.username’ parameter.

Now we need to setup the VPC, as this needs to be in place for the other items on our todo list. Again in the EC2 console select ‘Network Interfaces’ from the navigation menu and create a VPC network interface. Vagrantfile:
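In vagrant-aws terms this means pointing the instance at the VPC subnet (the ID is a placeholder):

    aws.subnet_id = "subnet-xxxxxxxx"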

Then select ‘Security Groups’ from the navigation menu and create a security group for the VPC. At least add SSH, HTTP and HTTPS inbound rules for a web server. More food for our Vagrantfile. Note you must use the ‘Group ID’ value:
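    aws.security_groups = ["sg-xxxxxxxx"]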

Now the instance type. I already know I want the cheap one:
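    aws.instance_type = "t2.micro"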

And the AMI. In this example we want 64-bit Trusty Tahr for us-east-1, to fit the VPC. But which type? Turns out for the t2.micro instance we must have the ‘hvm’ type. The list at http://cloud-images.ubuntu.com/locator/ec2/ leads us to ami-9aaa1cf2, which we can enter along with the region:
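    aws.ami    = "ami-9aaa1cf2"
    aws.region = "us-east-1"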

Then create an elastic IP (select the VPC type) for the instance and enter it here:
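    aws.elastic_ip = "203.0.113.10"  # placeholder; use your allocated elastic IP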

Finally we have to set the base box. As said before, Amazon doesn’t use Vagrant base boxes but its own AMIs, yet Vagrant still needs a base box to do its stuff. So we specify a dummy box that is built to work with the vagrant-ec2 plugin:
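    config.vm.box     = "dummy"
    config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"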

Now we are set. Time to see the vagrant-ec2 plugin work its magic (note the extra provider option telling Vagrant to talk to EC2):
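    vagrant up --provider=aws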

Check in the EC2 console to see the instance spark into life. In my testing the elastic IP didn’t always connect so I needed to connect it by hand, but that’s a small step to put right.

Again, once booted we can log in:
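    vagrant ssh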

Another win – we are now halfway to our objective: a live EC2 server and a corresponding local development server, all controlled via Vagrant. In true devops style you can create, start, stop, destroy and recreate repeatedly from the same config. In fact I suggest you next run
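    vagrant destroy
    vagrant up --provider=aws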

just because you can

As promised the Amazon EC2 Vagrantfile in full:
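A sketch assembling the snippets above:

    Vagrant.configure("2") do |config|
      config.vm.box     = "dummy"
      config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

      config.vm.provider :aws do |aws, override|
        aws.access_key_id     = "AKIAIOSFODNN7EXAMPLE"
        aws.secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
        aws.keypair_name      = "mykeypair"
        aws.region            = "us-east-1"
        aws.ami               = "ami-9aaa1cf2"
        aws.instance_type     = "t2.micro"
        aws.subnet_id         = "subnet-xxxxxxxx"
        aws.security_groups   = ["sg-xxxxxxxx"]
        aws.elastic_ip        = "203.0.113.10"

        override.ssh.username         = "ubuntu"
        override.ssh.private_key_path = "~/.ssh/mykeypair.pem"
      end
    end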

Next article: Use Chef to provision both the local VM and the EC2 instance. It’s nice to have created these servers but they really need applications installing so they can do some work for us!

Database backup like a boss

We often need to create and restore database backups, whether to migrate data between servers or as routine backups. The mysqldump command creates a series of SQL insert statements, which is good for readability but not so good for file size. It’s recommended to compress before transferring across a network, but that means another command, and then further commands to clean up afterwards. Compressing can often reduce file sizes to a tenth of the original. Sometimes the uncompressed file simply won’t fit in the file system at all – likely for large databases when the system is running low on disk space.

In these situations it’s best to pipe the mysqldump output straight into a compression utility so there is no intermediate file. On the other end, likewise, the compressed file can be uncompressed and the output piped into mysql.

It’s simple to backup like a boss:
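    mysqldump -u myuser -pmypassword mydb | gzip > mydb.sql.gz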

and then
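    gunzip < mydb.sql.gz | mysql -u myuser -pmypassword mydb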

BreezeJS navigation properties

The more I use BreezeJS the more I like it. I’m not using it with a .NET infrastructure so I’m hand rolling my metadata. Here’s a note of my findings for future reference.

key: The primary key for the entity. Can be a compound key. Required for creating new entities, for loading by key, and for reducing the amount of information needed to describe navigation properties.

Navigation properties are the science of relating entities together. We can have 1:1, 1:many and many:many (via a mapper collection) relationships. We can have parent-to-child relationships and inverse (child-to-parent) relationships. A navigation property definition has the following fields:

name: Arbitrary name of the property. In knockout land this will be the observable property that contains the related entity/entities

entityTypeName: The namespaced entity name of the related entity, eg Location:#Tinder

isScalar: Single(true) or many(false). For non scalar the related entities are embedded in an array, for scalar types the entity is embedded directly.

associationName: Arbitrary name of the relationship. Convention is <root entity>_<child entity>, eg Centre_Location. Must be unique, as it is used when describing relationships that have both forward and inverse relationships. In these cases the inverse relationship associationName must match the normal (forward?) relationship.

foreignKeyNames: Array of key names. These define the property on the root entity to use when fetching the related entity. This property value will relate to the primary key of the related entity. We do not need to specify the details of the related entity primary key here as this is defined in the related entity metadata.

Not sure how multiple keys would work but guess the ordering would match the compound key ordering in the related entity.

invForeignKeyNames: Array of key names. These define the relationship the other way around. These define the property on the related entity that matches the primary key of the root entity.

For most cases we only want to define normal or forward relationships, where we state the parent-to-child relationship. For these cases the use of foreign and inverse keys depends on whether the relationship is 1:1 or many.

For 1:1 isScalar = true and the foreignKeyNames field is used.

For many isScalar = false and the invForeignKeyNames field is used.

I’ve not tried defining both foreignKeyNames and invForeignKeyNames for a relationship; up to now it’s only been necessary to define one. A possible exception might be a many relationship where the related entity does not map to the primary key of the root entity – maybe that would be described with foreignKeyNames.

Using invForeignKeyNames can seem confusing, as it’s describing a forward relationship. It is useful though, as the alternative would be to define an inverse relationship on the related entity back to the root – which might be useful in itself, but may result in extra response payload.
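To make this concrete, here’s a sketch of a hand-rolled many-side navigation property using the Centre_Location example from above (the property and key names are illustrative):

    navigationProperties: [
        {
            name: "locations",
            entityTypeName: "Location:#Tinder",
            isScalar: false,                  // the many side, so an array
            associationName: "Centre_Location",
            invForeignKeyNames: ["centreId"]  // property on Location matching Centre's key
        }
    ]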

Core Audio tips

In case it ever gets removed, some Core Audio gold from http://www.subfurther.com/blog/2009/04/28/an-iphone-core-audio-brain-dump/

  • The Audio Unit Programming Guide is required reading for using Audio Units, though you have to filter out the stuff related to writing your own AUs with the C++ API and testing their Mac GUIs.
  • Get comfortable with pointers, the address-of operator (&), and maybe even malloc.
  • You are going to fill out a lot of AudioStreamBasicDescription structures. It drives some people a little batty.
  • Always clear out your ASBDs, like this:
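    For example (a sketch; myASBD is whatever local variable you’re filling in):

        AudioStreamBasicDescription myASBD;
        memset(&myASBD, 0, sizeof(myASBD));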

    This zeros out any fields that you haven’t set, which is important if you send an incomplete ASBD to a queue, audio file, or other object to have it filled in.

  • Use the “canonical” format — 16-bit integer PCM — between your audio units. It works, and is far easier than trying to dick around bit-shifting 8.24 fixed point (the other canonical format).
  • Audio Units achieve most of their functionality through setting properties. To set up a software renderer to provide a unit with samples, you don’t call some sort of a setRenderer() method, you set the kAudioUnitProperty_SetRenderCallback property on the unit, providing a AURenderCallbackStruct struct as the property value.
  • Setting a property on an audio unit requires declaring the “scope” that the property applies to. Input scope is audio coming into the AU, output is going out of the unit, and global is for properties that affect the whole unit. So, if you set the stream format property on an AU’s input scope, you’re describing what you will supply to the AU.
  • Audio Units also have “elements”, which may be more usefully thought of as “buses” (at least if you’ve ever used pro audio equipment, or mixing software that borrows its terminology). Think of a mixer unit: it has multiple (perhaps infinitely many) input buses, and one output bus. A splitter unit does the opposite: it takes one input bus and splits it into multiple output buses.
  • Don’t confuse buses with channels (ie, mono, stereo, etc.). Your ASBD describes how many channels you’re working with, and you set the input or output ASBD for a given scope-and-bus pair with the stream description property.
  • Make the RemoteIO unit your friend. This is the AU that talks to both input and output hardware. Its use of buses is atypical and potentially confusing. Enjoy the ASCII art:
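    (The original ASCII art is lost here; roughly, it showed RemoteIO with the mic arriving on bus 1 and the speaker fed from bus 0:)

        mic      --> [bus 1 input]  RemoteIO  [bus 1 output] --> your app
        your app --> [bus 0 input]  RemoteIO  [bus 0 output] --> speaker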

    Ergo, the stream properties for this unit are:

        Input scope,  bus 0: set the ASBD to indicate what you’re providing for play-out
        Input scope,  bus 1: get the ASBD to inspect the audio format being received from H/W
        Output scope, bus 0: get the ASBD to inspect the audio format being sent to H/W
        Output scope, bus 1: set the ASBD to indicate what format you want your units to receive
  • That said, setting up the callbacks for providing samples to or getting them from a unit take global scope, as their purpose is implicit from the property names: kAudioOutputUnitProperty_SetInputCallback and kAudioUnitProperty_SetRenderCallback.
  • Michael Tyson wrote a vital blog on recording with RemoteIO that is required reading if you want to set callbacks directly on RemoteIO.
  • Apple’s aurioTouch example also shows off audio input, but is much harder to read because of its ambition (it shows an oscilloscope-type view of the sampled audio, and optionally performs FFT to find common frequencies), and because it is written with Objective-C++, mixing C, C++, and Objective-C idioms.
  • Don’t screw around in a render callback. I had correct code that didn’t work because it also had NSLogs, which were sufficiently expensive that I missed the real-time thread’s deadlines. When I commented out the NSLog, the audio started playing. If you don’t know what’s going on, set a breakpoint and use the debugger.
  • Apple has a convention of providing a “user data” or “client” object to callbacks. You set this object when you setup the callback, and its parameter type for the callback function is void*, which you’ll have to cast back to whatever type your user data object is. If you’re using Cocoa, you can just use a Cocoa object: in simple code, I’ll have a view controller set the user data object as self, then cast back to MyViewController* on the first line of the callback. That’s OK for audio queues, but the overhead of Obj-C message dispatch is fairly high, so with Audio Units, I’ve started using plain C structs.
  • Always set up your audio session stuff. For recording, you must use kAudioSessionCategory_PlayAndRecord and call AudioSessionSetActive(true) to get the mic turned on for you. You should probably also look at the properties to see if audio input is even available: it’s always available on the iPhone, never on the first-gen touch, and may or may not be on the second-gen touch.
  • If you are doing anything more sophisticated than connecting a single callback to RemoteIO, you may want to use an AUGraph to manage your unit connections, rather than setting up everything with properties.
  • When creating AUs directly, you set up a AudioComponentDescription and use the audio component manager to get the AUs. With an AUGraph, you hand the description to AUGraphAddNode to get back the pointer to an AUNode. You can get the Audio Unit wrapped by this node with AUGraphNodeInfo if you need to set some properties on it.
  • Get used to providing pointers as parameters and having them filled in by function calls:
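    For example, fetching the RemoteIO unit from a graph (a sketch; auGraph and remoteIONode come from earlier setup):

        AudioUnit remoteIOUnit;
        OSStatus setupErr = AUGraphNodeInfo(auGraph, remoteIONode, NULL, &remoteIOUnit);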

    Notice how the return value is an error code, not the unit you’re looking for, which instead comes back in the fourth parameter. We send the address of the remoteIOUnit local variable, and the function populates it.

  • Also notice the convention for parameter names in Apple’s functions. inSomething is input to the function, outSomething is output, and ioSomething does both. The latter two take pointers, naturally.
  • In an AUGraph, you connect nodes with a simple one-line call:
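    A sketch, with the mixer and RemoteIO nodes assumed from earlier:

        AUGraphConnectNodeInput(auGraph, mixerNode, 0, remoteIONode, 0);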

    This connects the output of the mixer node’s only bus (0) to the input of RemoteIO’s bus 0, which goes through RemoteIO and out to hardware.

  • AUGraphs make it really easy to work with the mic input: create a RemoteIO node and connect its bus 1 to some other node.
  • RemoteIO does not have a gain or volume property. The mixer unit has volume properties on all input buses and its output bus (0). Therefore, setting the mixer’s output volume property could be a de facto volume control, if it’s the last thing before RemoteIO. And it’s somewhat more appealing than manually multiplying all your samples by a volume factor.
  • The mixer unit adds amplitudes. So if you have two sources that can hit maximum amplitude, and you mix them, you’re definitely going to clip.
  • If you want to do both input and output, note that you can’t have two RemoteIO nodes in a graph. Once you’ve created one, just make multiple connections with it. The same node will be at the front and end of the graph in your mental model or on your diagram, but it’s OK, because the captured audio comes in on bus 1, and some point, you’ll connect that to a different bus (maybe as you pass through a mixer unit), eventually getting the audio to RemoteIO’s bus 0 input, which will go out to headphones or speakers on bus 0.