Adam Tuttle

deORM()

Here's a little UDF that I just finished writing that I think will be useful to those of you who use ColdFusion's ORM.

The root of the problem is that running a <cfdump> on an ORM entity is not really "safe", because all of its relationships will be loaded and dumped too (unless you use the "top" attribute, which is poorly implemented). For a while I got around this with a simpler version of the UDF that I'm about to share. The simpler version was this:

function deORM( obj ){ return deserializeJson( serializeJson( obj ) ); }

To avoid repeating myself, let's call this the "bad" function.

Hopefully you can see that this is terribly inefficient: we're converting the ORM entity to a JSON text representation of itself (which looks like a standard structure) and then converting that text back into a standard CF structure. It's that second half that takes the biggest hit.

This worked very well for us, until it didn't. We started getting seemingly random Java heap space exceptions, and the stack trace was completely unhelpful: just a bunch of recursion inside the implementation of serializeJson until the trace was truncated in the logs... so it was a big pain in the butt to track down, too!

It's not so bad when you're converting just one entity; but if you're trying to write a general-purpose solution that, for example, logs the entire request local scope and session scope in the event of an exception, you can imagine how easy it would be to get dozens and dozens of entities into the mix -- then memory usage skyrockets, and kablooey.

So, instead of this crazy overuse of the bad function, I decided to try to "do it right," as it were. The result is below. It returns exactly what you would expect from the bad function, but without ever converting to text and back. And it properly handles object inheritance, too. Not that the bad function had any issues here — it didn't — but it would be easy to overlook this otherwise...

I haven't done any performance benchmarks on the conversion speed of this approach, but at least it prevents the heap space errors we were seeing! (And if I had to guess, I'd say this way is faster, too...)

So here you go: the best way I've come up with so far to make any object safe to dump, even if somewhere down in the object graph there might be an ORM entity or ten:

function deORM( obj ){
    var deWormed = {};
    if (isSimpleValue( obj )){
        deWormed = obj;
    }
    else if (isObject( obj )){
        var md = getMetadata( obj );
        // walk up the inheritance chain so that inherited properties are included too
        do {
            if (md.keyExists('properties')){
                for (var prop in md.properties){
                    if (structKeyExists(obj, 'get' & prop.name)){
                        // skip relationship properties; keep simple columns and the id
                        if ( !prop.keyExists('fieldtype') || prop.fieldtype == "id" ){
                            deWormed[ prop.name ] = invoke(obj, "get#prop.name#");
                        }
                    }
                }
            }
            if (md.keyExists('extends')){
                md = md.extends;
            }
        } while(md.keyExists('extends'));
    }
    else if (isStruct( obj )){
        for (var key in obj){
            deWormed[ key ] = deORM( obj[key] );
        }
    }
    else if (isArray( obj )){
        deWormed = []; // re-assign; a second "var deWormed" would be a duplicate declaration
        for (var el in obj){
            deWormed.append( deORM( el ) );
        }
    }
    else{
        deWormed = getMetadata( obj );
    }
    return deWormed;
}

You can see I've got a catch-all at the end there that shows the object's metadata in the event that it's not a type we're set up to handle. I have yet to find anything that falls into that path, but I figured I'd leave it there to be safe.

Hope this helps some of you, too!

Book Review: Armada by Ernest Cline

I was a big fan of Ready Player One; so much so that mere weeks after finishing the book, when I learned that the audiobook version was read by Wil Wheaton, I bought and listened to that version as well (and yes, Wil's reading was well worth the extra purchase!). While I did not rank well at all in the RPO easter egg games, I did play them, and I was excited to see him giving away a DeLorean.

Given this, it should come as no surprise that when I heard that Ernest Cline had another book in the works, I started watching with, admittedly, some froth. On release day he did a Reddit AMA (which could have gone better, but apparently suffered because of some otherwise unrelated drama from Reddit HR) and I dutifully read through the entire thread, hoping to find some easter egg leads for the new book. I didn't.

I'm writing this review while listening to a Spotify playlist named Raid The Arcade Mix, created from the image of a hand-written Maxell cassette tape cover included at the end of the book; the mixtape has special significance in the story and is frequently referenced. That's a pretty neat take-away from the new book, and one I'll be coming back to for a while.

It took me a day or two to start the book, not for lack of motivation; life just got in the way. Unfortunately, I caught wind of some negative reviews in the meantime, and they cast a shadow over the majority of my own reading of the book. I spent the first 60% or so wondering when I would start to agree with those reviews. When would the pop culture references start to feel forced and overbearing? Once I realized I was waiting for that to happen, I was able to let it go: I decided it probably wasn't going to happen, because I simply disagreed with the reviewers. In doing so, I greatly increased my own enjoyment of the book. Lesson learned: screw professional reviewers, I'll form my own opinions as I go.

Does the book take too much from others in the civilians-drafted-to-save-the-world genre? I don't think so. It's been compared heavily to Ender's Game and even to movies like The Last Starfighter, but to me it read more like a mashup than a rip-off. Everything I've seen cited as source material that Armada supposedly rips off is actually referenced within the book itself. Does that make it a cited reference rather than plagiarism? I don't know. But again: it read like a genuine attempt to add to the genre, rather than paraphrasing to make a buck.

There isn't a lot of room to deviate from what you're probably expecting based on the synopsis: a video-game-playing kid dreams of living inside a Star Wars movie, sees a UFO one day, and gets drafted to save the world. (Don't worry, no spoilers here...) Either he will or he won't. There are a few other plot lines opened in Act 1, too, that can only go two, maybe three, different ways after all is said and done. But as much as Cline has painted himself into a corner plot-wise, he's done a good job of it. The entire book (less the epilogue) takes place over only a day or two, and the action is so well paced that every time I had to put my Kindle down (you know, to eat or sleep or acknowledge the existence of my family) I found myself thinking, "but it was just getting good!"

I finished the story this morning before starting work, and I was really pleased with the ending and the book as a whole. I definitely got my $11.43 worth of entertainment, and I would recommend it to anyone willing to read with an open mind, especially those of us who grew up in the '80s and '90s immersed in sci-fi, video game, and movie culture. What '80s kid hasn't played Descent and wanted to fly those ships for real?

I highlighted 12 passages on my Kindle: 1 typo (because that's how I roll), and 11 pop culture references that I didn't get. I'll be using those highlights to go look up what are probably going to be books and movies I'll really enjoy. And only now, after putting the book down, have I found a link to a playable ROM for one of the fictional games referenced in the book: Phaëton.

There you have it. I give it two enthusiastic thumbs up!

Should I Publish These Browserify-Friendly Modules to NPM?

Over the last few weeks, when not riding in the best airline seat ever (seriously! I got it for 4 consecutive flights!), I've been exploring the use of Browserify for keeping our front-end JavaScript modular, reusable, and well organized, and I am really thrilled with the results so far.

I've been wondering if I should publish some of the modules I've written — the more reusable ones — onto NPM, so that others could make use of them too. They solve what I think are some pretty common problems, with short, clean code and minimal dependencies:

  • Detect extreme AFK
  • Detect multiple tabs / browser windows
  • Disable the back button

Detect extreme AFK

Sometimes AFK is no big deal... until it is. One of our most heavily used applications is a (relatively) lengthy registration process, where we get the impression that people start their registration, get part of the way through, shut their laptop to drive home and have dinner, open it back up 8 hours later, and expect to continue on as if nothing has changed. In the long run, we hope to support that behavior, but in the short term we're settling for an alert and redirect.

var onMinutesAFK = require( '../common/detect-extreme-afk.js' );

onMinutesAFK( 30, function afkHandler() {
    alert( 'Your session has been inactive for too long. You\'ll need to start registration again from the beginning.' );
    document.location.href = '/events:register/home/';
});

This one works by storing the page render time in memory and then using a safely throttled window mousemove event listener to compare that render time to the current time. When your threshold has been crossed, the callback is called. Now that I think about it more, this one should take a third argument to determine whether it should stop checking after the first time the callback is called, or continue to listen for mousemove events in perpetuity (as it currently does).
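
For illustration, here's a minimal sketch of that approach (not the actual module source; the one-second throttle interval and the variable names are my own assumptions):

// detect-extreme-afk.js (sketch, not the actual module)
module.exports = function onMinutesAFK( minutes, callback ) {
    var renderedAt = Date.now();
    var threshold = minutes * 60 * 1000;
    var throttled = false;

    window.addEventListener( 'mousemove', function() {
        // safely throttled: at most one comparison per second
        if ( throttled ) { return; }
        throttled = true;
        setTimeout( function() { throttled = false; }, 1000 );

        // compare the render time to the current time
        if ( Date.now() - renderedAt >= threshold ) {
            callback();
        }
    });
};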

Detect multiple tabs

Having multiple tabs open is usually just an annoyance, but there are times when it can cause extreme confusion and should be avoided if at all possible.

var onBonusTab = require( '../common/detect-multiple-tabs.js' );

onBonusTab( function bonusTabDetected() {
    var warning = $( '<div ...></div>' );
    $( 'body' ).append( warning ).css( 'padding-top', '40px' );
});

This one lets you warn the user of the multiple-tabs situation when it's detected. It relies on cross-tab communication via localStorage events, which is simple and almost universally available. It accepts a callback argument, which is called when multiple tabs are detected. In the case above, I'm adding a big red bar to the top of the page to alert the user to this fact... but the implementation of the warning (or alert, closing the tab, etc.) is entirely up to you.
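
Here's a minimal sketch of how such a localStorage handshake can work (the key names here are my own invention, not the module's). The trick is that storage events fire only in other tabs, so a new tab can announce itself and any existing tab can answer:

// detect-multiple-tabs.js (sketch, not the actual module)
module.exports = function onBonusTab( callback ) {
    // announce this tab; the storage event fires in every *other* open tab
    localStorage.setItem( 'tab-ping', Date.now() );

    window.addEventListener( 'storage', function( e ) {
        if ( e.key === 'tab-ping' ) {
            // another tab just opened: answer it, and report the duplicate here too
            localStorage.setItem( 'tab-pong', Date.now() );
            callback();
        }
        else if ( e.key === 'tab-pong' ) {
            // an existing tab answered our ping: this tab is the duplicate
            callback();
        }
    });
};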

Disable the back button

I expect this to be the most controversial one. Some people ardently believe that you should not alter the way that browsers work, and 99% of the time I would even agree with them. For that 1% of the time, there's this module:

var disableBackButton = require( '../common/disable-back-button.js' );

disableBackButton();

This one works by using HTML5 pushState, which is widely, though not universally, available (not in IE8 or IE9). There is a polyfill for those older browsers, but I haven't made any attempt to use it yet.
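
The technique itself fits in a few lines. Here's a rough sketch (again, my own approximation rather than the module's actual source):

// disable-back-button.js (sketch, not the actual module)
module.exports = function disableBackButton() {
    // add a dummy history entry so the first Back press has somewhere harmless to go
    history.pushState( null, document.title, location.href );

    window.addEventListener( 'popstate', function() {
        // the user pressed Back; immediately push them forward again
        history.pushState( null, document.title, location.href );
    });
};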


So my question is: is it common for modules like these — built specifically for front-end code via Browserify (or some other CommonJS-consuming front-end architecture) — to be published to NPM? If so, is there an established naming convention?

LinkedIn API: All Take, No Give

I've written in the past about how LinkedIn doesn't seem to really understand the mobile experience; honestly, that takes no explanation for any frequent mobile user who has tried visiting LinkedIn (intentionally or otherwise) on their phone. Their mobile experience is horrible at best, and non-existent at worst.

If you've got reason to be paying attention to LinkedIn, you may have heard that they recently announced some changes to the policies governing their APIs, mostly placing additional limits and restrictions on the data available through the API and the actions that API consumers (apps) can perform on behalf of their users. I looked around for the earliest rumblings of these changes, and the farthest back I can find is this post from February 2015 — about 3 months ago. The details in that blog post are pretty scant and vague. Very hand-wavy. They list the use cases that will remain supported as follows:

  • Allowing members to represent their professional identity via their LinkedIn profile using our Profile API.
  • Enabling members to post certifications directly to their LinkedIn profile with our Add to Profile tools.
  • Enabling members to share professional content to their LinkedIn network from across the Web leveraging our Share API.
  • Enabling companies to share professional content to LinkedIn with our Company API.

It does give a deadline, though: 11 days from today, May 12th. So they gave all developers using their APIs (except those deemed special enough to be granted access to their Partnership Programs) twelve weeks to figure out the changes, update their apps, and get through the app store approval process, which can take as much as two weeks in some cases.

It is unclear what will happen if an app fails to update in time: Will entire API requests fail because they requested a field that is now restricted, or will that field simply be excluded? I don't know.

Incentives: Adding Value

The way you convince users to jump through extra hoops for you is by providing a carrot at the end of the stick. "If you log in with your LinkedIn account, then we'll make it easy for you to join our official alumni group and to connect with other alumni." In fact, that is precisely the value that my company used as incentive for LinkedIn account connections in our apps. (We work with Universities and associated businesses.)

So that's what the user gets: simplified networking and peace of mind that they've joined the right alumni group. What do we get in exchange for the user jumping through the login hoop? Data. Not a lot, but some.

Technically, LinkedIn used to provide access to quite a lot of data, assuming the user had given it to them and you had requested permission to access it; but we have always kept our permissions usage low. We requested email address, basic address and (current) employment information, as well as phone number. You might realize that this is almost exactly the type of information you're likely to find in the alumni directory of your alma mater — and that unless you've moved since graduation, they probably already have it all. You're right, and that's precisely why it's collected. Universities go to great lengths to keep alumni contact information up to date, because donations are a large part of their operating budget.

Now, we're working entirely above-board here. LinkedIn authentication is entirely optional, and we disclose what information we collect and why. Nothing changes with the alumni directories: they still won't share your information without your permission. But by collecting this information, the universities can make sure they have your latest address and phone number, as well as your current employment information.

Ghost of LinkedIn Future: Just the Stick

Given the complete lack of detail in that original blog post, I was kind of freaking out about the prospect of additional restrictions at first. Once I had the opportunity to sit down and review the transition guide — which is still not great; you have to infer quite a bit, and I hope my assumptions are correct! — I was able to calm down a little bit. The only data they're taking away from us is phone numbers. The address and employment information we were previously collecting is apparently so low-threat that it will remain available. There is quite a bit of data that will now be restricted, but none of it is data we were collecting for our purposes anyway.

But they're also taking away our ability to provide value to our users in exchange for that LinkedIn login. You can no longer view group memberships or send membership requests. You can no longer send messages on behalf of users, which means you can no longer send connection requests.

The one bit of value remaining, then, is that we can automatically fill out or update your school profile information from your latest LinkedIn details. If you're on a phone or tablet, that's a good amount of typing saved, and you'd probably rather use that feature than not, but it's not a big motivator.

Once again, LinkedIn is showing that they have no regard for the end user in an increasingly mobile world. They don't care what you get out of your LinkedIn experience, as long as you keep giving them your data.