Test remote javascript APIs with Capybara and puffing-billy »
Testing remote APIs is easy in Ruby. Libraries like webmock, vcr and
artifice give you all the tools you need to ensure that you’re sending the
right requests to the right remote endpoints.
However, it becomes a lot more difficult when it’s your Javascript code
that’s making the requests. Using request specs with Capybara gives you loads
of tools to control the browser, but it doesn’t let you stub or mock responses
to requests that originate from within the browser.
This is where puffing-billy saves the day. It’s a request stubbing
library for browsers. It spawns an HTTP(S) proxy server that it uses to
intercept requests from your browser. Using simple webmock-like syntax, you
can configure the proxy to send fake responses to requests for specific URLs.
For example, the following is a simple piece of javascript code that fetches
the temperature for Bath, UK from the openweathermap.org service.
<!-- /weather/ -->
<p>Current temperature for Bath, UK: <span id='temp'></span></p>
<script>
  $(function () {
    $.ajax({
      url: 'http://openweathermap.org/data/weather/2656173',
      dataType: 'jsonp',
      success: function (data) { $('#temp').text(data.temp + '°C'); },
      error: function () { $('#temp').text('unavailable'); }
    });
  });
</script>
And this is a request spec for that snippet. Note how it easily fakes JSONP
data and error responses.
# spec/requests/weather_spec.rb
describe 'Weather for Bath, UK', :js => true do
  it 'should fetch the temperature from openweathermap.org' do
    # fake some JSONP data
    proxy.stub('http://openweathermap.org/data/weather/2656173')
         .and_return(:jsonp => { :temp => 12.7 })
    visit '/weather/'
    page.should have_content('Current temperature for Bath, UK: 12.7°C')
  end

  it "should fail gracefully when openweathermap.org isn't available" do
    # fake a failure
    proxy.stub('http://openweathermap.org/data/weather/2656173')
         .and_return(:code => 500)
    visit '/weather/'
    page.should have_content('Current temperature for Bath, UK: unavailable')
  end
end
puffing-billy supports HTTP and HTTPS requests and all common HTTP verbs.
Go check it out on GitHub now!
Install Ruby Enterprise Edition 1.8.7 in OS X Mountain Lion »
Upgrading to Mountain Lion will wipe your old dev environment, so this is what
you’ll need to get back to work on ree-1.8.7:
1. Install Xcode command-line tools.
Available from the Preferences > Downloads panel in Xcode, or as a separate
download from the Apple Developer site.
2. Install gcc-4.2.
REE doesn’t play well with Apple’s LLVM compiler, so you’ll need to install the
old gcc-4.2 compiler. It’s available in homebrew’s homebrew/dupes
repository.
brew tap homebrew/dupes
brew install apple-gcc42
3. Install xquartz.
The OS X upgrade will also remove your old X11.app installation, so go grab
xquartz from macosforge and install it (you’ll need v2.7.2 or later for
Mountain Lion).
4. Install ree.
Remember to add the path to the xquartz X11 includes in CPPFLAGS and the path
to gcc-4.2 in CC. Here I’m using rbenv, but the same environment variables
should work for rvm.
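Something like this should do the trick (the XQuartz include path is the standard one for 2.7.x, and the exact REE version string may differ for you):

CC=/usr/local/bin/gcc-4.2 CPPFLAGS=-I/opt/X11/include rbenv install ree-1.8.7-2012.02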
There are plenty of these OBDII WiFi modules advertised for sale on eBay. I’ve
recently bought a RenaultSport Clio which has an OBDII port, so I picked up one
to experiment with.
This little gadget plugs into the OBDII port in the Clio (which is hidden
behind a removable panel just below the ignition card key slot). It creates an
ad-hoc WiFi network and listens on a TCP port for client connections.
The chip that does all the hard work of connecting to the car’s onboard
computer(s) is ELM327-compatible, which means it speaks a simple ASCII
protocol. Control messages are sent to the chip using AT commands (just like
old-school modems), and OBD comms use simple hex strings.
The ATZ command resets the interface chip (and displays its ID). The rest of
the commands all request data from the ECU using OBD mode 1.
0100 and 0120 enumerate the OBD PIDs supported by my car. The data
returned is a header (41 00 / 41 20) followed by the actual data as a
bit-encoded set of flags.
0105 queries the ECU for the current engine temperature (which, in this case,
returns a temperature of 32°C).
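A typical exchange with the module looks something like this (the flag bytes are illustrative rather than a real capture, but the 0105 response encodes the 32°C above: the formula is temperature = A − 40, and 0x48 is 72):

>ATZ
ELM327 v1.5
>0100
41 00 BE 3E B8 11
>0120
41 20 80 01 A0 01
>0105
41 05 48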
I’m going to carry on hacking with this module, and I’m hoping to build a natty
little iOS telemetry app to use with it. Stay tuned for further blog posts as
I learn more!
howmanyleft.co.uk runs its node.js workers behind supervisord. To avoid
dropping requests with 502s when restarting workers, I hook into the
SIGTERM signal and call close() on the HTTP server. This stops the server
from listening for new connections, and once all the open connections are
complete, the worker exits.
Since I’m using redis on howmanyleft, I need to close my redis connection
gracefully too. The close event on the HTTP server fires when all
connections have closed, so I close my redis connection there. node_redis
flushes all active redis commands when you call quit, so I won’t lose any
data.
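Boiled down, the whole arrangement looks something like this (a sketch; the real request handling and port are elided):

var http = require('http');
var redis = require('redis');

var db = redis.createClient();

var server = http.createServer(function (req, res) {
  // normal request handling, talking to db as needed
  res.end('ok');
});
server.listen(8000);

process.on('SIGTERM', function () {
  // stop accepting new connections; in-flight requests finish normally
  server.close();
});

server.on('close', function () {
  // fires once every open connection has completed; quit() lets
  // node_redis flush any pending commands before disconnecting
  db.quit();
});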
After months of running a hybrid sarge/wheezy installation on my Pogoplug (the
wheezy bits needed for OS X Lion Time Machine support in netatalk), a
power cut forced a reboot. Unfortunately the poor thing never came back to
life.
A prod with an FTDI cable on the Pogoplug’s serial headers retrieved the
following console grumbles:
udevd[45]: unable to receive ctrl connection: Function not implemented
udevd[45]: unable to receive ctrl connection: Function not implemented
udevd[45]: unable to receive ctrl connection: Function not implemented
udevd[45]: unable to receive ctrl connection: Function not implemented
udevd[45]: unable to receive ctrl connection: Function not implemented
udevd[45]: unable to receive ctrl connection: Function not implemented
It seems a package update had introduced a version of udevd that the poor
Pogoplug’s kernel isn’t able to support. The good news is that you don’t
really need the debian-installed udevd daemon. The initrd image that the
Pogoplug boots from has its own, older udevd which is capable enough.
Plugging the pogoplug’s root disk into a Linux laptop and disabling the udevd
daemon (insert exit 0 somewhere near the top of /etc/init.d/udev) brought
my Pogoplug back to life, and it’ll hopefully mean I don’t have to consider a
more expensive NAS for a long time yet. Yay!
Make Time Machine in OS X Lion work with Debian Squeeze (stable) netatalk servers »
All you need to do to get Time Machine running again is to install the netatalk
package from testing. It’s a really simple process. First add the wheezy
(testing) repository to your apt configuration.
# /etc/apt/sources.list
# stable repo
deb http://cdn.debian.net/debian squeeze main
# new testing repo
deb http://cdn.debian.net/debian wheezy main
Then give your squeeze repo a higher priority than your wheezy one (to prevent
wheezy packages being accidentally installed when they’re not wanted).
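One way to do that is with a couple of pins in /etc/apt/preferences (a sketch; the exact priorities don’t matter much, as long as stable outranks testing):

# /etc/apt/preferences
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=testing
Pin-Priority: 300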
Now install netatalk, making sure to tell apt-get to get it from the wheezy
repo.
sudo apt-get install netatalk/wheezy
And remember, if you’ve added an afpd service description to your avahi-daemon
configuration, remove it and restart netatalk (because netatalk 2.2beta will
register itself automatically with avahi).
If you’re a Python programmer, no doubt you’re now familiar with
virtualenv. One of its nicest features is --no-site-packages, which
isolates your virtual environment from any packages that are already installed
globally.
However, if you’re on OS X, using --no-site-packages means you can’t use the
OpenSSL library that’s installed by default. Trying to easy_install or
pip install pyopenssl into your virtualenv won’t work, since OS X doesn’t
ship with OpenSSL headers.
The solution to this little problem is to symlink the system OpenSSL library
into your virtualenv. Something like this should do the trick (the paths assume
the stock Python 2.6, so adjust for your OS X and Python versions):
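ln -s /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/OpenSSL \
    "$VIRTUAL_ENV/lib/python2.6/site-packages/"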
So, you’re trying to use JODConverter or unoconverter, you’ve set up openoffice
to launch as a service in the background somewhere, but it’s not going
anywhere, and all you’re seeing in your logs is
creation of executable memory area failed: Permission denied
creation of executable memory area failed: Permission denied
creation of executable memory area failed: Permission denied
creation of executable memory area failed: Permission denied
The solution, my friend, is to make sure that the openoffice process gets a
writable path in its HOME environment variable. In my case, the supervisord
config entry looks something like this (the command line and port here are
illustrative; the environment line is the bit that matters):
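[program:openoffice]
; illustrative command line - flags and port will vary with your setup
command=/usr/bin/soffice -headless -accept="socket,host=127.0.0.1,port=8100;urp;" -nofirststartwizard
; the crucial bit: a writable HOME for the openoffice process
environment=HOME=/tmp/openoffice
autorestart=true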
This is my latest project – a simple little web app that plays back
twitter feeds from F1 races. It’s just like a Sky+ box, but for F1 tweets. If
you miss the race, you can still watch it with all the best commentary from a
hand-picked selection of the Twitterati.
Events go live on Tweet GP the moment the coverage starts on the BBC, so even
if you’re only minutes behind the action, you don’t have to miss a thing.
It should work on all modern browsers. I’ve tested it on the current versions
of Chrome, Firefox and Safari (both on the desktop and on iOS). It also works
on IE8 (although it doesn’t quite look as sexy). If anyone can test on IE9 or
Opera, I’d be grateful for feedback.
If you’re of a very geeky persuasion, feel free to read on and get the back
story of how it was all built.
A few weeks ago I was sat at home watching the Malaysian Grand Prix. I had
woken up at 8am, which was too early for comfort on a Sunday morning and,
crucially, two hours too late to watch the coverage on the BBC live.
Thankfully, like the rest of the civilised world, we have a very smart HUMAX
set-top box that time-shifts TV for us, so I was able to watch the TV coverage
from the start (in HD – thank you, BBC).
However, I had to put my iPhone down and stay away from twitter for the
duration of the race in case I found out the result. This meant missing out on
the insight and comic talents of several people I follow who usually make the
race a much more enjoyable event.
Now, being a hacky kind of guy, I decided I could solve this problem with code
(and at the same time learn something new). In time for the Chinese Grand
Prix, I’d cobbled together a bit of Node.js code that connected to the
Twitter streaming API, and sucked up a live stream from all the
interesting F1 people, pushing it into a local Redis instance.
Using that lot, I was able to get out of bed at a sensible time to watch the
action in Shanghai, without missing out on the comedy stylings of the likes of
@sniffpetrol. Much rejoicing.
Since then, I’ve spent my bank holiday weekends packaging that lot up in a
shiny HTML5 box. Everything on the client side is done without any plugins or
proprietary extensions. As such, the code should work on all modern browsers
(I’ve tested it on Chrome 11, Safari 5, Firefox 4, IE 8 and iOS 4.3).
I’m using express to serve up the front-end pages. All the code’s written
in coffeescript, and the styling’s done in sass. Live tweets get
pushed straight to the clients using Redis pubsub and socket.io. The
server lives in the Rackspace Cloud UK.
And that’s it! Well done if you’ve read this far. I think next I’ll be signing
up for a go on Twitter’s Site Streams beta API, and seeing if I can generalise
this a bit. Fancy having a recording of all your tweets ready to play back for
any TV program? Watch this space!
Using connect-assetmanager with sass and coffee-script »
I’m a big fan of using as few curly-brackets as possible in my code, which
means I love writing my stylesheets using Sass and my javascripts using
CoffeeScript.
Because I love fast websites too, I also mash all my assets together using
connect-assetmanager. However, out of the box, connect-assetmanager
doesn’t automatically compile Sass or CoffeeScript.
It’s not difficult to get it working though. All you need to do is write a
couple of short preManipulate handlers, along these lines (a sketch assuming
the sass and coffee-script npm modules and connect-assetmanager’s
(file, path, index, isLast, callback) handler signature):
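var sass = require('sass');
var coffee = require('coffee-script');

var assetManagerGroups = {
  css: {
    route: /\/css\/all\.css/,
    path: __dirname + '/public/css/',
    dataType: 'css',
    files: ['*'],
    preManipulate: {
      // '^' matches every user-agent
      '^': [function (file, path, index, isLast, callback) {
        // compile .sass files, pass plain css straight through
        callback(/\.sass$/.test(path) ? sass.render(file) : file);
      }]
    }
  },
  js: {
    route: /\/js\/all\.js/,
    path: __dirname + '/public/js/',
    dataType: 'javascript',
    files: ['*'],
    preManipulate: {
      '^': [function (file, path, index, isLast, callback) {
        // compile .coffee files, pass plain javascript straight through
        callback(/\.coffee$/.test(path) ? coffee.compile(file) : file);
      }]
    }
  }
};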
Recently I’ve been developing a Flex application at work using Adobe’s free
Flex SDK. If you want to be able to draw transparent text labels in Flex, you
need to have the fonts embedded in the application. Embedding a font is as
simple as including it in a style block in the mxml file for the application,
like so (the path and family name here match the compiler error below):
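<mx:Style>
  /* standard Flex 3 @font-face embedding */
  @font-face {
    src: url("assets/DejaVuSansMono.ttf");
    fontFamily: DejaVuSansMonoEmbed;
  }
</mx:Style>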
However, the next bit gets a bit freaky. The compiler will invariably fail to
build the application and give you an error that’s something like:
[mxmlc] app.mxml(14): Error: exception during transcoding: Unexpected exception encountered while reading font file 'assets/DejaVuSansMono.ttf'
[mxmlc]
[mxmlc] font-family: DejaVuSansMonoEmbed;
[mxmlc]
[mxmlc] app.mxml(14): Error: unable to build font 'DejaVuSansMonoEmbed'
[mxmlc]
[mxmlc] font-family: DejaVuSansMonoEmbed;
[mxmlc]
[mxmlc] app.mxml(14): Error: Unable to transcode assets/DejaVuSansMono.ttf.
It turns out that the Flex compiler has a choice of font encoding engines to
use and the first one it tries is usually a bit rubbish. In order to avoid the
error, you need to force it to use the closed-source Adobe implementation
(which is shipped with the free Flex SDK, but not with the open-source Flex
SDK). At the command line, you need to add the option
-managers flash.fonts.AFEFontManager to your mxmlc command:
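mxmlc -managers flash.fonts.AFEFontManager app.mxml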
If you’re using ant for your builds, you can add a font element to your mxmlc
block to do the same.
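Something along these lines (the element name here is a guess at the Ant-task spelling of the compiler.fonts.managers option, so check the Flex Ant task docs before relying on it):

<mxmlc file="app.mxml" output="app.swf">
  <!-- assumed element name, mirroring the compiler.fonts.managers option -->
  <compiler.fonts.managers manager-class="flash.fonts.AFEFontManager"/>
</mxmlc>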
The clustering algorithm has been tuned and it’s working reliably—even on all
sorts of odd foreign plates!
On top of that, I’ve added a de-skewing and extraction function. These are the
results extracted from the previous image.
I’m happy with the speed too – these two were extracted in 0.2s. The only
downside at the moment is that the clustering algorithm is roughly O(n^2) in
terms of the number of contours, so on busier images it can take a second or
two to do the clustering stage (and it’ll produce an occasional false
positive).
Had a bit of a brainwave the other night — clustering connected components
using size similarity, locality and fit to a line as criteria. The image shows
the kind of results I’m getting with a really quick script thrown together late
last night.
Blue boxes outline the individual characters that have been recognised, red
boxes outline clusters.
It’s sometimes missing some of the characters at the ends of the plates, and
there are still some false positives showing up in some images, but I’m fairly
confident that a bit of tuning could fix all that.
Since the last update, I’ve been concentrating on stopping character shapes
from bleeding into other elements of the image – mostly it’s been the edges of
number plates that have been getting in the way. I’ve used a Hough transform to
find the number plate edges and eliminate them from the thresholded image
before component labelling. I’ve also discarded some of the components based on
a bit of filtering after labelling.
I’ve been fiddling with OpenCV on-and-off in my spare time for a few weeks now,
and I’m starting to get a feel for the challenges involved in number plate
recognition. There are plenty of papers and background material out there to
keep me reading for months.
The simple approach, used by a number of researchers, is to concentrate on
thresholding the image (sometimes many levels of thresholding), and then doing
a bit of connected component labelling to find candidate regions of the image
to extract and pass to a second-stage character recognition algorithm.
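Stripped right down, that first stage looks something like this in OpenCV (a sketch; the size and aspect filters are placeholder numbers to tune against a real image set):

import cv2

# load as greyscale and threshold (Otsu picks the level automatically)
img = cv2.imread('car.jpg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# connected components via contours; each one is a candidate character
# (note: OpenCV 3.x returns an extra leading value from findContours)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

candidates = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    # crude filter: characters are smallish and taller than they are wide
    if 8 < h < 100 and 0.1 < w / float(h) < 1.2:
        candidates.append((x, y, w, h))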
Here’s my first attempt. Source image, then thresholded and labelled image.
Something to distract me from the television: an open-source project!
I know nothing about the field of computer vision, so this was obviously the
most suitable area to base my project in. I’m going to attempt to write an
open-source automatic vehicle registration number recognition library.
I’m going to be coding in Python (because I need its wicked rad awesomeness in
my life), and relying a heck of a lot on OpenCV for the underlying
hardcore bit-pushing shit.
The project is on GitHub now. There’s nothing there to see yet, but a
few watchers would be very encouraging.
I tend to develop on a MacBook running OS X Leopard. In order to keep my main
Leopard system clean and tidy, I use Parallels desktop to run my development
environments in virtual machines. Parallels is jolly good at sharing data with
Windows virtual machines, but a bit lacking when it comes to Linux. Hence I’ve
started using netatalk on all my Linux virtual machines to access their disk
drives from OS X.
Lots has been written about what it takes to install and tweak netatalk to get
it to talk happily with Leopard, so I won’t go into the problems here. All you
need to know should be found at the following links.
The process below is an amalgamation of all the instructions from those pages,
put together to form the process I go through to set up netatalk on my virtual
machines.
Firstly, one needs to download the netatalk source, install all of its
dependencies and build it with ssl enabled.
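The usual Debian recipe goes something like this (a sketch; the directory and package version will differ, and the important part is DEB_BUILD_OPTIONS=ssl):

sudo apt-get build-dep netatalk
sudo apt-get install libssl-dev fakeroot
apt-get source netatalk
cd netatalk-2*
DEB_BUILD_OPTIONS=ssl dpkg-buildpackage -rfakeroot
sudo dpkg -i ../netatalk_*.deb
echo "netatalk hold" | sudo dpkg --set-selections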
That should have built and installed the netatalk package, and told the debian
packaging system to not replace it if a new version appears in the online
repositories.
The only netatalk service I’m interested in is the file sharing one, so the rest
can be turned off in /etc/default/netatalk:
# Set which daemons to run (papd is dependent upon atalkd):
ATALKD_RUN=no
PAPD_RUN=no
CNID_METAD_RUN=no
AFPD_RUN=yes
TIMELORD_RUN=no
A2BOOT_RUN=no
Finally, to enable zeroconf/bonjour discovery of the shares on the system,
install avahi-daemon and add a service definition to
/etc/avahi/services/afpd.service to advertise the afp service.
<?xml version="1.0" standalone="no"?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name>%u</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
</service-group>
And that’s it! You should now be able to reliably mount your home directory on
the Linux virtual machine from OS X.
Here’s a problem that had me scratching my head for a while this weekend. How
do I create a simple REST service using WCF (Windows Communication Foundation)?
MSDN has a great little tutorial that explains how to code the service and
get it running in a standalone application context. However, it fails to
address getting the service deployed on an IIS server.
The key point that seems to be missing from the documentation is that no matter
what you do with the web.config file, you’re never going to get a configuration
that works as a plain-old REST web service. Instead, you have to delete the web
service configuration completely from the web.config file, and add the
declaration Factory="System.ServiceModel.Activation.WebServiceHostFactory" to
the ServiceHost element in the .svc file associated with your service.
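So the .svc file ends up looking something like this (the service name is a placeholder for your own type):

<%@ ServiceHost Language="C#" Service="MyApp.MyRestService" Factory="System.ServiceModel.Activation.WebServiceHostFactory" %>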
Despite mostly using Java for the work I’m doing at the moment, there’s still
some call for a bit of C++. Visual Studio 2005* is still the daddy
of all IDEs for developing in C++ as far as I’m concerned, but all the
intermediate cruft that it generates doesn’t really need checking in to my
Subversion repository.
You can cut down the extraneous crap by excluding all debug and release
directories, and all .ncb, .suo, .vcproj.USERNAME.user and .aps files.
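If you’re using svn:ignore for that, the property looks something like this (a sketch; set it on each project directory, since svn:ignore isn’t recursive):

svn propset svn:ignore "Debug
Release
*.ncb
*.suo
*.aps
*.vcproj.*.user" .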
* Yes, I know 2008 is available, and I’ve even got it downloaded from MSDN,
but I’ve not got around to installing it yet…