And people accuse me of offensively characterizing people who reach different conclusions than I have. It is to laugh. In reply, and attempting to remain more civil than Robin had been, I said this:
You misread the article, Robin. If you look at the power-consumption results on page 14, you’ll see that the Hitachi drive drew more power *at idle* than the SanDisk SSD did *under load*, and that doesn’t even count the difference in cooling needs. The MemoRight SSD also used less power at idle than the Hitachi did, and idle is where most notebook drives spend most of their time. Those results are starkly at odds with the traffic-generating headline, and until the inconsistency is resolved I wouldn’t jump to any conclusions.

What problems do exist with SSD power consumption are also more easily solved than you let on. Some functions can be moved back to the host, others to dedicated silicon which can do them very efficiently. It’s not like hard drives don’t have processors in them drawing power too, y’know.

When somebody does a head-to-head comparison where the drives are idle 90% of the time and doing nothing but reads for 90% of the remainder, and properly accounts for the fact that whole-system I/O performance might not scale perfectly with drive performance, then it’ll be worth paying attention to.
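The duty-cycle point above is easy to make concrete with a back-of-the-envelope calculation. The sketch below weights idle and active power by how much of the time a notebook drive actually spends in each state; every wattage figure in it is an invented placeholder for illustration, not a measurement from the review being discussed.

```python
# Back-of-the-envelope duty-cycle power model. All wattages here are
# hypothetical placeholders, not measurements from the review at issue.

def weighted_power(idle_w, active_w, idle_frac):
    """Average power draw for a drive that idles idle_frac of the time."""
    return idle_frac * idle_w + (1 - idle_frac) * active_w

# Suppose the drive is idle 90% of the time, as argued above for the
# typical notebook workload.
IDLE_FRAC = 0.90

hdd = weighted_power(idle_w=2.0, active_w=3.5, idle_frac=IDLE_FRAC)
ssd = weighted_power(idle_w=0.4, active_w=2.5, idle_frac=IDLE_FRAC)

print(f"HDD average: {hdd:.2f} W")  # 0.9*2.0 + 0.1*3.5 = 2.15 W
print(f"SSD average: {ssd:.2f} W")  # 0.9*0.4 + 0.1*2.5 = 0.61 W
```

With idle dominating the duty cycle, a drive's idle draw swamps its under-load draw in the average, which is exactly why a headline built on sustained-load numbers tells you little about battery life.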
Of course, Robin tried to poison the well by preemptively dismissing any methodological criticism as “denial and obfuscation,” but I’d like to expand on that last point a bit. At the low end of the scale, a slightly improved I/O rate might prevent a processor from entering its power-saving sleep modes. At the high end, a slightly improved I/O rate could push a multi-threaded benchmark past the point where context switches or queuing problems degrade performance. In these cases and many others, the result can be more instructions executed, and more power drawn, on the host side per I/O, yielding a worse-than-deserved result for a faster device on benchmarks such as the one Tom’s Hardware used. Since the power consumed by CPUs, chipsets, and other host-side components can be up to two orders of magnitude greater than that of the devices under test, it doesn’t take long at all before these effects make such test results meaningless or misleading.
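To see how host-side effects can produce that worse-than-deserved result, consider whole-system energy per I/O. In the sketch below, every number is invented purely for illustration: the SSD draws a quarter of the hard drive's device power, but its higher I/O rate keeps the CPU and chipset out of their sleep states, so average host power more than doubles and the faster device loses on the system-level metric.

```python
# Illustration of how host-side power can make a faster, more efficient
# device look worse on a whole-system power benchmark. All numbers are
# invented for illustration; none come from the review under discussion.

def millijoules_per_io(device_w, host_w, iops):
    """Whole-system energy cost per I/O, in millijoules."""
    return (device_w + host_w) / iops * 1000

# Slower hard drive: the host spends much of its time asleep between
# I/Os, so average host power stays low.
hdd = millijoules_per_io(device_w=2.0, host_w=15.0, iops=10_000)

# Faster SSD: a quarter of the device power, but the higher I/O rate
# keeps the CPU and chipset awake, raising average host power.
ssd = millijoules_per_io(device_w=0.5, host_w=35.0, iops=12_000)

print(f"HDD: {hdd:.2f} mJ per I/O")  # (2.0 + 15.0)/10000 -> 1.70 mJ
print(f"SSD: {ssd:.2f} mJ per I/O")  # (0.5 + 35.0)/12000 -> 2.96 mJ
```

Because the host terms dominate both numerators, small host-side perturbations caused by the device's speed can easily outweigh the device's own power savings in a measurement like this.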
I’m sure Robin knows a thing or two about storage benchmarketing, which is not to say that he has engaged in it himself but that he must be aware of it. Workloads matter, and any semi-competent benchmarker can mis-tune or mis-apply a benchmark so that it shows something other than the useful truth. Starting from an assumption that Tom’s Hardware ran the right benchmark and demanding that anyone else explain its flaws is demanding that people reason backwards. Instead we should reason forwards, starting with what we know about I/O loads on the class of systems we’re studying, going from there to benchmarks, results, and conclusions in that order. That’s where the ...