
The Turing Test of Feature Assessment

If the results of an advanced system are indistinguishable from fake results, is it worth investing in the system in the first place? It's my observation that the brightest marketing minds struggle with this question every day. From cross-selling strategies to data visualization, inspired creative and technical teams waste precious resources devising complex systems that perform no better than their mock counterparts.

How can agencies avoid falling into this trap? By applying this simple variant of the famous Turing Test:

“If a neutral third party is unable to distinguish an advanced interaction from a static or randomly generated alternative, prefer the simpler alternative or revisit your approach.”

If a Magic Eight Ball can answer the question as well as an elaborate recommendation engine, it may be wise to take a shortcut, or to provide context so the user better understands the reason for the recommendation. But taking shortcuts is not what creative people are wired to do. It can often be difficult to objectively judge your own work, especially when there are a lot of clever things happening under the surface. Inner knowledge and familiarity with how something works greatly changes a person's perception of its output. Like good parents, designers and developers see beauty and genius in their creations that others would not.
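The test itself can be sketched as a blind trial: show a neutral judge randomly drawn outputs from both the real system and its mock stand-in, and see whether they can tell which is which. This is a minimal illustration, not a real evaluation harness; the function names, the canned Eight Ball answers, and the guessing judge are all hypothetical.

```python
import random

def feature_turing_test(real_outputs, fake_outputs, judge, trials=100):
    """Show a judge randomly drawn outputs and ask: 'real' or 'fake'?

    Returns the judge's accuracy. A score near 0.5 means the advanced
    system is indistinguishable from its mock counterpart and fails the
    test; a score near 1.0 means the complexity is earning its keep.
    """
    labeled = [(o, "real") for o in real_outputs] + \
              [(o, "fake") for o in fake_outputs]
    correct = 0
    for _ in range(trials):
        output, truth = random.choice(labeled)
        if judge(output) == truth:
            correct += 1
    return correct / trials

# Stand-ins for the article's example: an "elaborate" engine whose
# answers look no different from a Magic Eight Ball's.
ANSWERS = ["Signs point to yes", "Ask again later", "Outlook not so good"]
engine_outputs = [random.choice(ANSWERS) for _ in range(50)]
eight_ball_outputs = [random.choice(ANSWERS) for _ in range(50)]

# With nothing to distinguish the outputs, a judge can only guess,
# so accuracy hovers near 0.5 and the engine fails the test.
neutral_judge = lambda output: random.choice(["real", "fake"])
print(feature_turing_test(engine_outputs, eight_ball_outputs, neutral_judge))
```

The same harness passes a system that genuinely shows its work: give the judge any honest signal to latch onto and accuracy climbs well above chance.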

This test can steer technology and creative teams away from potentially wasteful approaches. A good example of where the test can be applied is a map visualization showing real-time global activity. I've seen many examples of this type of visualization that are indistinguishable from pre-rendered alternatives. Observers will usually be skeptical and have short attention spans, so the burden is on the visualization to quickly and easily distinguish itself from a random or static alternative.
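That burden of proof can be put in numbers. A minimal sketch, assuming we have asked a panel of observers to label the map as live or pre-rendered: a one-sided binomial test (Python standard library only; the 20-observer panel and counts are hypothetical) tells us whether they beat a coin flip.

```python
from math import comb

def p_beats_chance(correct, trials):
    """One-sided binomial p-value: the probability of getting at least
    `correct` labels right if observers were purely guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 20 observers each guessed whether the map was live or pre-rendered.
print(round(p_beats_chance(11, 20), 3))  # 0.412: indistinguishable from guessing
print(round(p_beats_chance(17, 20), 3))  # 0.001: observers can clearly tell
```

If the p-value stays high, the visualization has failed Noel's Test and the pre-rendered version would serve just as well at a fraction of the cost.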

Teams that apply the Turing Test of Feature Assessment will quickly realize when they need to apply further effort to make their projects successful or when to choose a simpler version of an experience at a much lower cost.

Editor’s Note: In recognition of a long track record of creative insight, “The Turing Test of Feature Assessment” will hereafter be referred to as simply “Noel’s Test.”

Posted on September 5, 2012.

Categories: Technology

  • http://twitter.com/felixturner Felix Turner

    I get your point that data-visualizations need to actively show they are using real live data rather than fake canned data. Does the Noel test apply more broadly also?

  • http://twitter.com/noelbillig Noel Billig

    I definitely believe so. Another place you could apply the test is to recommendation engines (i.e., any algorithm that asks you to buy, listen, watch, etc. based on data analysis).

    I think everyone has a story about a product that was recommended to them that seemed inappropriate ( http://www.amazon.com/Strange-Amazon-Recommendations/lm/R15W3LLZV2GF0V ). In these cases, the recommendation engines would fail the test by showing just the product or result alone. Many companies, like Amazon, Foursquare, and Netflix, have added insight into why a recommendation is given (“recommended because of your interest in…”). This helps an otherwise confusing recommendation pass the test, although if this line of copy itself seems random, perhaps less so.
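    The explanation copy Noel describes is cheap to attach if the engine records which interest drove each pick. A hypothetical sketch, assuming a simple co-purchase-count recommender (the function, data, and item names are all invented for illustration):

```python
from collections import Counter

def recommend_with_reason(history, co_purchases):
    """Pick the item most often co-purchased with the user's history,
    and attach copy explaining which interest drove the pick."""
    votes = Counter()
    reason_for = {}
    for owned in history:
        for candidate in co_purchases.get(owned, []):
            if candidate not in history:
                votes[candidate] += 1
                # Remember the first owned item that suggested this candidate.
                reason_for.setdefault(candidate, owned)
    if not votes:
        return None
    item, _ = votes.most_common(1)[0]
    return f"{item} (recommended because of your interest in {reason_for[item]})"

# Hypothetical co-purchase data.
co = {
    "sci-fi novel": ["space opera", "robot anthology"],
    "robot anthology": ["space opera"],
}
print(recommend_with_reason(["sci-fi novel", "robot anthology"], co))
# prints "space opera (recommended because of your interest in sci-fi novel)"
```

    Shown alone, "space opera" might look random; with the reason attached, a neutral judge can see the recommendation is grounded in real data, which is exactly what lets it pass the test.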

  • Thai Le

    Agree with the recommendation engine aspect…though with data visualization that is done correctly, it's supposed to enlighten you with information you didn't already know. So whether or not it looks canned shouldn't matter. The point is to show you what is or what exists, not to validate what you may already think. If you look at the visualization above and frame it as though it were showing global debt, it makes sense with what we already know. If, however, it were 10 years prior and I saw this data viz, I would be surprised, and if it were true some people should start looking at their global investment portfolios. With data viz you just don't know what you will get until you run real numbers. The litmus test is whether or not what you see is interesting or informative, not really whether or not it looks canned, IMO. Further, a static or random data viz is almost always that much more interesting when an expert analyzes what they see in the visualization, which is almost never what a lay person sees. That is the success of the NYT data viz group: they simplify, interpret, and focus on the things we would normally miss or dismiss as general or canned-looking data.

  • http://twitter.com/noelbillig Noel Billig

    @Thai Le – I agree with almost all of what you wrote, but I don’t think it necessarily relates to the main point of this article.

    The test can’t determine if a technically complex feature (like “realtime”-ness) is appropriate for your project, or whether your project is good or bad. It will only help you determine if you are failing to take advantage of a complex feature once you’ve decided to apply it (or help you determine if that complexity is even necessary).





About R/GA Techblog

More than thirty years ago agency founders Bob and Richard Greenberg experimented with Academy Award-winning optical printing techniques. Today R/GA programmers continue the legacy of using technology to experiment and inspire. Tech Blog is the most recent account of this evolutionary process.