And people accuse me of offensively characterizing people who reach different conclusions than I have. It is to laugh. In reply, and attempting to remain more civil than Robin had been, I said this.
You misread the article, Robin. If you look at the power-consumption results on page 14, you’ll see that the Hitachi drive drew more power *at idle* than the SanDisk SSD did *under load*, and that doesn’t even count the difference in cooling needs. The MemoRight SSD also used less power at idle than the Hitachi did, and idle is where most notebook drives spend most of their time. Those results are starkly at odds with the traffic-generating headline, and until the inconsistency is resolved I wouldn’t jump to any conclusions.

What problems do exist with SSD power consumption are also more easily solved than you let on. Some functions can be moved back to the host, others to dedicated silicon which can do them very efficiently. It’s not like hard drives don’t have processors in them drawing power too, y’know.

When somebody does a head-to-head comparison where the drives are idle 90% of the time and doing reads for 90% of the remainder, and properly accounts for the fact that whole-system I/O performance might not scale perfectly with drive performance, then it’ll be worth paying attention to.
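For the curious, here’s roughly what that duty-cycle argument looks like as a back-of-the-envelope sketch in Python. Every wattage below is a placeholder I made up to show the shape of the calculation, not a number from the Tom’s Hardware article; swap in measured values if you want a real answer.

```python
# Back-of-the-envelope duty-cycle model for notebook drive power.
# All wattages below are hypothetical placeholders, not measured values.

def average_power(idle_w, read_w, write_w,
                  idle_frac=0.90, read_frac_of_active=0.90):
    """Weighted average power for a drive that is idle most of the time
    and mostly reading when it is active."""
    active_frac = 1.0 - idle_frac
    return (idle_frac * idle_w
            + active_frac * read_frac_of_active * read_w
            + active_frac * (1.0 - read_frac_of_active) * write_w)

# Hypothetical figures: a 2.5" notebook hard drive vs. an SSD.
hdd_avg = average_power(idle_w=0.9, read_w=2.2, write_w=2.4)
ssd_avg = average_power(idle_w=0.1, read_w=1.0, write_w=2.5)

print(f"HDD average: {hdd_avg:.2f} W")   # ~1.03 W with these assumptions
print(f"SSD average: {ssd_avg:.2f} W")   # ~0.21 W with these assumptions
# With a notebook-like duty cycle, idle power dominates the average, so the
# drive with the lower idle draw wins even if its peak draw is higher.
```

The particular numbers don’t matter; the point is that with a notebook-like duty cycle the idle column on page 14 counts for far more than the load numbers in the headline.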
Of course, Robin tried to poison the well by preemptively dismissing any methodological criticism as “denial and obfuscation,” but I’d like to expand on that last point a bit. At the low end of the scale, a slightly improved I/O rate might prevent a processor from entering its power-saving sleep modes. At the high end of the scale, a slightly improved I/O rate could push a multi-threaded benchmark past the point where context switches or queuing problems degrade performance. In these cases and many others, the result can be more instructions executed and more power drawn on the host side per I/O, yielding a worse-than-deserved result for a faster device on benchmarks such as the one Tom’s Hardware used. Since the power consumed by CPUs, chipsets, and other host-side components can be up to two orders of magnitude more than that of the devices under test, it doesn’t take long at all before these effects make such test results meaningless or misleading.
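To put some (entirely invented) numbers behind that, here’s a tiny sketch of the effect. Nothing below is measured; it only shows how the host term dominates the arithmetic, and how a small change in host behavior can swamp the difference in the drives’ own draw.

```python
# Rough illustration of how host-side power distorts per-I/O energy results.
# Every number below is a made-up assumption, not a measurement.

def joules_per_io(device_w, host_w, iops):
    """Whole-system energy charged to each I/O."""
    return (device_w + host_w) / iops

# Slower drive: low enough throughput that the CPU spends time in sleep states.
slow = joules_per_io(device_w=2.0, host_w=15.0, iops=100)

# Faster drive: slightly higher throughput keeps the host out of its
# power-saving states, so host draw rises far more than device draw falls.
fast = joules_per_io(device_w=1.0, host_w=25.0, iops=130)

print(f"slower drive: {slow:.3f} J per I/O")   # ~0.170
print(f"faster drive: {fast:.3f} J per I/O")   # ~0.200
# The faster, lower-power device gets blamed for the extra host-side energy.
# That's the "worse-than-deserved" effect described above.
```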
I’m sure Robin knows a thing or two about storage benchmarketing, which is not to say that he has engaged in it himself, but that he must be aware of it. Workloads matter, and any semi-competent benchmarker can mis-tune or mis-apply a benchmark so that it shows something other than the useful truth. Starting from an assumption that Tom’s Hardware ran the right benchmark and demanding that anyone else explain its flaws is demanding that people reason backwards. Instead we should reason forwards, starting with what we know about I/O loads on the class of systems we’re studying, going from there to benchmarks, results, and conclusions in that order. That’s where the “idle 90% and reading 90% of the rest” comes from.

It’s true that on back-room systems, including the ones I work on, host caches absorb most of the reads, so writes predominate at the device level. Patterson, Seltzer, et al. showed that decades ago; it has remained true, and I’ve pointed it out to people many times. However, what’s true in the data center is not true on the desktop, and even less so for notebooks. In that context, Tom’s Hardware ran exactly the wrong benchmark and got exactly the wrong results.

They screwed up, but that didn’t stop those whose self-interest predisposed them toward the same conclusion from jumping all over the story. For shame. Yet again, wannabes and marketroids have given hard-working and competent implementors an undeserved black eye, and that will have a real effect on the pace of innovation. Thanks a lot, you cretins, and don’t you dare complain about my tone unless you’re willing to condemn the “Paris Hilton” remark first.
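As a postscript on the host-cache point above, here’s a quick sketch of why writes end up dominating at the device even when applications mostly read. The application mix and cache hit rate are assumptions chosen purely for illustration, not data from Patterson or Seltzer.

```python
# Sketch of how a host cache shifts the read/write mix seen at the device.
# The application mix and cache hit rate below are assumptions for illustration.

def device_read_fraction(app_read_frac, read_hit_rate):
    """Fraction of device-level operations that are reads, assuming the host
    cache absorbs read hits but every write eventually reaches the device."""
    device_reads = app_read_frac * (1.0 - read_hit_rate)
    device_writes = 1.0 - app_read_frac
    return device_reads / (device_reads + device_writes)

# Application issues 70% reads, but the host cache absorbs 90% of them.
print(f"device-level read fraction: {device_read_fraction(0.70, 0.90):.0%}")
# Prints roughly 19%: reads dominate at the application level, yet writes
# dominate at the device. That's the back-room pattern described above; a
# notebook, with less RAM for caching and a different workload, looks nothing
# like it.
```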