Tuesday, September 17, 2013

Benchmarks Aren't Enough, But Neither Are Experiences



User Experience Lab Tactile Robot
SAN FRANCISCO—A year ago, after attending the 2012 Intel Developer Forum (IDF), I asked a question inspired by things I'd been hearing from the various presenters and PR folks at the show: "How do you benchmark experiences?" The notion people were floating then was that what really matters about a system is how well it functions, not how well it scores on synthetic performance tests. Today, apparently, not much has changed.
Since immersing myself once again in the world of Intel this week, I've faced an ongoing barrage of insistence that benchmark tests are passé at best and deceptive at worst, and that focusing on the usage of the final product is what's most valuable to consumers—or at least what should be. One presentation was particularly vocal, going so far as to dissect the code of some major pieces of benchmarking software (no, we were never told exactly which) to explain why they couldn't be trusted in the first place.
Strangely, time after time during the show, Intel representatives have touted how this processor is so-and-so percent faster than that processor, and how much better the scores should be this time around. And I was invited to an event at the Intel campus in Santa Clara specifically for the purpose of running tests on the company's new Bay Trail tablet processing platform. Apparently benchmarking results are still important once in a while.
On one level, Intel is absolutely correct in this line of thinking. No, benchmark scores don't tell you everything about a product, and they alone should never be the reason anyone buys this device rather than that one. No one, from tech companies to tech reviewers to tech consumers, should rely on them exclusively, even when they know how to interpret them properly.
But benchmark scores are useful, perhaps even vital, precisely where questions of "experiences" stop being helpful. Not everyone knows whether the result they're seeing from a given task is actually good or merely okay, or whether a certain game looks unreasonably jerky or is simply the best they can expect for the money they want to pay.
Scores from an objective—or, heck, even an admittedly non-objective—third party provide the crucial final piece of the purchasing puzzle. If two tablets appear to play video in exactly the same way, and that's what you care about, which should you choose? If you know you want to play games but don't personally know FRAPS from a frappuccino, isn't seeing a list of comparable frame rates the best way for you to get the best system for the best price?
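That "list of comparable frame rates" is worth unpacking, because a single average can hide exactly the jerkiness a buyer cares about. Below is a minimal sketch of how per-frame render times might be boiled down into two comparison numbers: average frames per second and a worst-case (99th-percentile) frame time. The function name and the sample data are hypothetical, not taken from FRAPS or any real benchmark.

```python
# Minimal sketch: summarizing per-frame render times (milliseconds) into
# the kinds of numbers a frame-rate comparison reports. Sample data is
# hypothetical, not from any real benchmark run.

def summarize(frame_times_ms):
    """Return (average FPS, 99th-percentile frame time in ms)."""
    n = len(frame_times_ms)
    avg_ms = sum(frame_times_ms) / n
    # Sort to find the slowest frames -- the stutter an average hides.
    worst_ms = sorted(frame_times_ms)[int(n * 0.99) - 1]
    return 1000.0 / avg_ms, worst_ms

# Mostly smooth 60 fps frames (16.7 ms) with a few 30 fps hitches (33.4 ms).
frame_times = [16.7] * 95 + [33.4] * 5
avg_fps, p99_ms = summarize(frame_times)
print(f"average: {avg_fps:.1f} fps, 99th-percentile frame: {p99_ms:.1f} ms")
```

The point of reporting both numbers is the article's point in miniature: the average alone looks close to 60 fps, while the percentile figure reveals the occasional hitch a player would actually feel.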
Ultimately, experiences don't tell you everything, either. Focusing on those, just like focusing on benchmarks, provides an incomplete picture that may inspire more confusion than clarity. Paying attention to a finalized product makes sense for a lot of reasons, foremost among them that it makes a company's fairly esoteric products easier for the company to sell and easier for the consumer to understand.
While in Santa Clara for the Bay Trail benchmarking event, I was also afforded an exciting behind-the-scenes glimpse at Intel's "User Experience Lab," where a series of tests and measurements, involving everything from computers to robots to a hemi-anechoic chamber, is deployed to judge the utility of a system's components. The quality of audio, width of viewing angles, and touch-screen sensitivity are important issues that affect day-to-day usage, and after seeing the Lab I had a much clearer understanding of the logic behind Intel's push toward experiences these days.
Of course, what happens in a testing lab and what happens in your living room aren't always (if ever) the same thing; many more variables come into play once you get the system home, and no company can test for everything you may do with your computer. Intel's rigorous lab testing is a good place to start but, like the benchmark numbers people at Intel so frequently decry, not a good place to stop.
In a computing landscape that is changing every day and, more important, becoming more and more mainstream with each passing generation, the move from benchmarks to experiences is a good idea. But until there's a repeatable way to literally benchmark those experiences, so that the interested consumer knows not only what matters but why it matters and how it can help, experiences and benchmarks must work together to help consumers get the information, the answers, and the systems they need.
