REST Gradle Plugin Update

It’s been a while since I updated the REST Gradle Plugin but necessity is the mother of invention (but what the fuck have I ever invented?) so here we go!

Up until now, whatever type of REST request you made with the plugin, the response was simply printed to the build log.
If you were lucky and nosy you might’ve discovered that you can access the internal response object, but no more are you deprived of information!

Starting from version 0.4.0 you can now declare a closure to handle successful responses, yay!

Declaring A Response Handler

For example, if we'd like to download a binary file and save it to disk, we would declare:

task download(type: RestTask) {
    uri = ''
    responseHandler = { InputStream is ->
        new File('/home/user/file.bin').withOutputStream { OutputStream os ->
            os << is
        }
    }
}

Type handling

If the plugin detects that the given responseHandler closure accepts a parameter of type InputStream or String, it will pass the response body of the respective type. Otherwise, the plugin will fall back to HTTPBuilder’s response and hand the closure the data object that was created by the client.


For example, if we query a resource that returns a JSON response, we state that the expected content type is JSON, and HTTPBuilder parses and slurps the response for us:

task download(type: RestTask) {
    uri = ''
    contentType = groovyx.net.http.ContentType.JSON
    responseHandler = {
        assert it.results.size() == 50
    }
}

Comfortably Screaming Architecture

I’ve recently re-read Uncle Bob‘s Screaming Architecture blog post.

Second pass

The first time I read that post I was fairly new to professional programming, so my reaction was docile; “Sounds good” I thought to myself, and figuratively nodded in agreement.

Reading it once more, I found that I still agree with the argument, but this time my agreement came with some refinements that I think should be made.

Uncle Bob’s points

In Screaming Architecture, Uncle Bob argues:

  • Application architecture should represent the “what” rather than “how”.
  • Frameworks should not dictate the application’s architecture.
  • Application architecture should be testable regardless of the frameworks you’ve chosen.

But I think that by rejecting the involvement of frameworks in our architecture we may also reject positive additions such as:

  • Higher speed of development.
  • Reduced boilerplate code.
  • Reduced responsibilities.

Trade offs

I think there are times when the benefits of a framework outweigh its “lock-in”; these benefits can reduce the developer’s workload while conserving the organization’s resources, so the architect should consider the framework a valid possibility.

An example I like to give in this case is my “Mac vs. Linux” argument (no flame-war intended).
To put it simply and somewhat inaccurately, I view Linux as “free” and Mac as “comfortable”; each has its own pros and cons.

I trade comfort and in return I get freedom

On my development machine I run Linux and I wouldn’t dream of developing on a machine that runs anything else because I love and need the freedom it provides; in my case, it’s well worth the time I spend on maintenance, troubleshooting and configuration.

I trade freedom and in return I get comfort

At home I could set up a system aimed at media consumption and would easily go with Mac because the setup, configuration and decision making have already been done for me; that’s fine because my goal is quick and easy media consumption.

Bring it on home

So your project probably “screams” Grails and is now forever locked in with Grails, but you’ve saved a huge amount of time on coding and configuration.
I definitely believe there are cases in which “comfortable” architecture pays off and I believe it’s the responsibility of the pragmatic tech-lead to argue for it.

Disable the Optimus discrete graphics GPU in Ubuntu using bbswitch

I’ve recently made the mistake of purchasing the Asus K56VM laptop.

Asus K56VM (vicpc)

I call this purchase a mistake because alongside an excellent spec (3rd gen. i7 processor, excellent screen), this laptop also comes with nvidia’s Optimus graphics setup.

Optimus Prime (Wikipedia)

In case you’re not familiar with the Optimus setup, it basically means that your laptop contains two graphics cards – one built into the Intel processor, and one discrete standalone card. Given the proper drivers, and taking into account workload and power consumption, your OS can seamlessly toggle between the two cards.

If you use Windows as your OS, everything works nice and dandy; if you use a Linux-based OS, you’re in for a world of pain:

  • Both GPUs are on and active.
  • The HDMI output is “hardwired” to the discrete GPU.
  • The VGA output is “hardwired” to the internal GPU.
  • Only the VGA output works by default.
  • You can work with only one output at a time.
  • You can only toggle between outputs using an external project such as Bumblebee.


So I decided to just go ahead and disable the nvidia GPU and exclusively use the internal one; this saves me a lot of hassle and wasted power.

Installing bbswitch

bbswitch is a sub-module of the Bumblebee project; it allows you to easily activate and disable the discrete GPU.
Because bbswitch operates as a kernel module, we’ll install it using the DKMS framework so that the module will also survive kernel upgrades.

We will first add the Bumblebee PPA to APT’s repository list:

root@mandromeda:~# apt-add-repository "deb YOUR_UBUNTU_VERSION_HERE main"

root@mandromeda:~# apt-add-repository "deb-src YOUR_UBUNTU_VERSION_HERE main"

Update APT’s indices:

root@mandromeda:~# apt-get update

And then install bbswitch:

root@mandromeda:~# apt-get install bbswitch-dkms
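Before moving on, it's worth confirming that DKMS actually built the module for your running kernel; a quick sketch of the check (the guard for a missing dkms binary is just defensive):

```shell
# Check that DKMS built and installed the bbswitch module for this kernel;
# fall back to a notice if dkms isn't available on this machine.
command -v dkms >/dev/null && dkms status | grep bbswitch || echo "dkms not available"
```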

Permanently switching off the discrete card

Now we need to make sure that none of nvidia’s driver modules are loaded (both original and alternative), and we will then configure the bbswitch module to switch off the discrete GPU when loaded.

Edit /etc/modprobe.d/blacklist.conf by appending to it:

# Blacklist the alternative nvidia module
blacklist nouveau

# Blacklist the original nvidia module
blacklist nvidia

Then edit /etc/modules by appending to it:

# Switch off discrete GPU
bbswitch load_state=0

Applying the changes

Finally we will apply the changes we made to the kernel module configurations.

Update the initial ramdisk by running:

root@mandromeda:~# update-initramfs -u

Then restart. The discrete GPU should now be permanently disabled.
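If you want to confirm the switch actually took, bbswitch exposes the card's power state through procfs; a small sketch of the check (the PCI address in the output will differ per machine):

```shell
# bbswitch reports the discrete GPU's power state via /proc/acpi/bbswitch;
# after a successful setup it should read something like "0000:01:00.0 OFF".
if [ -f /proc/acpi/bbswitch ]; then
    cat /proc/acpi/bbswitch
else
    echo "bbswitch module not loaded"
fi
```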

Registering new Spring beans in Grails during runtime

Sit down, son; This talk’s been a long-time comin’.
There may come a day when you find yourself having to register a new Spring bean during runtime of a Grails application.
I’m gonna show you a method that’s been passed down in our family for generations.

First, get hold of a reference to the GrailsApplication bean; You can’t register no new beans if you ain’t got access to the boss-man.


import org.springframework.beans.MutablePropertyValues
import org.springframework.beans.factory.config.ConstructorArgumentValues
import org.springframework.beans.factory.support.AbstractBeanDefinition
import org.springframework.beans.factory.support.GenericBeanDefinition

//Create a definition for the new bean
def beanDef = new GenericBeanDefinition(beanClass: NewBeanClass,
    autowireMode: AbstractBeanDefinition.AUTOWIRE_BY_NAME)

//Provide the bean with any arguments required by the constructor
def argumentValues = new ConstructorArgumentValues()
beanDef.constructorArgumentValues = argumentValues

//Set additional properties such as references to other Spring beans
def propertyValues = new MutablePropertyValues()
propertyValues.add('booleanSwitch', true)
beanDef.propertyValues = propertyValues

//Register the new definition
grailsApplication.mainContext.registerBeanDefinition('newBeanClass', beanDef)

If y’all use some high-and-mighty IDE like IntelliJ IDEA, you may notice that the registerBeanDefinition method isn’t recognized by the mainContext.
This is because GrailsApplication exposes the main context with the interface of org.springframework.context.ApplicationContext, but it’s actually an instance of GrailsWebApplicationContext so this method is in fact accessible and it’s all legit.

Fight Crime with GPG

Originally posted on Blog @Bintray:

So you deliver your awesome library to hundreds of users each day, but they’re a tough bunch and they’re all like:

“Hey man, we gotta see some ID”

So you kneel to the whims of the rabble; you generate your GPG key pair and sign each artifact you deliver, because hell if you’re gonna let someone miss out on your superb code.

And let there be no mistake – this road means pain, brother.
Wanna use some organization-wide key pairs? How do you plan to safely share them around?
Wanna make sure all products are properly signed? Good luck configuring each and every of your hundred or so builds!

But this is where Bintray swoops in like The Dark Knight, man! To save you from those GPG signing street gangs!
Because unlike the technological promises made by the second millennium (flying cars and whatnot), Bintray took an oath and…


Bintray + GitHub = Synergistic Love Story

Originally posted on Blog @Bintray:

First things first – Bintray is not a competitor of GitHub. They complete each other, not compete. Here’s how (I love Venn diagrams):

Bintray is an organic next step for developing software at GitHub – once your sources are built – distribute them from Bintray.
Our job is to make it as easy as possible for you, our fellow GitHubber. Here’s what you get:

First, sign up to Bintray using GitHub:
Sign Up

Authorize Bintray for GitHub, fill the needed details, and you’re done.

Naturally, login using GitHub too:
Sign In

Next step is the only one you’ll have to do manually, without GitHub integration – creating a repository. Don’t forget to select the right type!

Once that’s done, we’re back to GitHub integration. Just click on Import From Git:

Get your stuff from GitHub to Bintray in two simple steps:

  1. Select the desired GitHub repositories to become Bintray packages


Validating MongoDB’s DBRefs

As discussed on this SO question.

Ref Marks The Spot

For various reasons MongoDB doesn’t support joins, but documents can be linked using DBRefs.

For example, we’d like to build a relation between space ships and crew members.
Our ships document looks like:

{
    "_id": "someID",
    "class": "Firefly",
    "name": "Serenity"
}

And our crew document looks like:

{
    "name": "Malcolm 'Mal' Reynolds",
    "ship": DBRef("ships", "someID")
}

We link Capt. Mal to his ship “Serenity” by adding a `ship` field to the crew document; the value of the `ship` field is a DBRef object composed of:

  1. The name of the collection which we reference.
  2. The ID of the item we reference.
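Structurally, a DBRef is just a small embedded document holding those two pieces; here's a plain-JavaScript sketch of the shape (the values are made up, and in the mongo shell you'd resolve such a reference with a second query along the lines of db[ref.$ref].findOne({_id: ref.$id})):

```javascript
// A DBRef is stored as a sub-document with the referenced collection's
// name under $ref and the referenced document's ID under $id.
var shipRef = { $ref: "ships", $id: "someID" };

console.log(shipRef.$ref); // the collection we reference: "ships"
console.log(shipRef.$id);  // the ID of the referenced document: "someID"
```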

But what happens when you encounter an inconsistency between references? Links may become invalid, typically in dev and staging environments but this could also happen in your *gasp* production environment!


There’s no official built-in way to validate DBRefs, but it’s easy to validate them manually.
MongoDB is awesome in many ways; one manifestation of this awesomeness is the ability to execute commands in the form of JavaScript.

So I wrote a small script – validateDBRefs.js:

//Create a generic function to extract the ID from a document
var returnIdFunc = function(doc) { return doc._id; };

//Map the collection of ships to a collection of ship IDs
var allShipIds = db.ships.find({}, {_id: 1}).map(returnIdFunc);

//Find all crew members with ship IDs that don't exist in the allShipIds collection
var crewWithInvalidRefs = db.crew.find({"ship.$id": {$nin: allShipIds}}).map(returnIdFunc);

print("Found the following documents with invalid DBRefs");
var length = crewWithInvalidRefs.length;
for (var i = 0; i < length; i++) {
    print(crewWithInvalidRefs[i]);
}
That can be run with:

mongo DB_NAME validateDBRefs.js

In the given form, the script will output all the crew documents that reference non-existing ships:

Found the following documents with invalid DBRefs



These IDs can now be used for validity reporting, or the offending documents can even be cleaned up as part of an automated maintenance procedure!
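The heart of the script – collecting the valid IDs and keeping only the references that fall outside that set – can be sketched in plain JavaScript with no database at all (the data below is invented for illustration):

```javascript
// Made-up stand-ins for the ships and crew collections
var ships = [{ _id: "s1" }, { _id: "s2" }];
var crew = [
    { _id: "c1", ship: { $ref: "ships", $id: "s1" } },
    { _id: "c2", ship: { $ref: "ships", $id: "s9" } } // dangling reference
];

// Equivalent of db.ships.find({}, {_id: 1}).map(returnIdFunc)
var allShipIds = ships.map(function (doc) { return doc._id; });

// Equivalent of the {"ship.$id": {$nin: allShipIds}} query:
// keep crew members whose ship ID is not in the set of valid IDs
var crewWithInvalidRefs = crew.filter(function (member) {
    return allShipIds.indexOf(member.ship.$id) === -1;
}).map(function (doc) { return doc._id; });

console.log(crewWithInvalidRefs); // [ 'c2' ]
```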