[geeks] Disks: recommendations?
Mouse
mouse at Rodents-Montreal.ORG
Fri Oct 30 14:09:52 CDT 2020
>>> On pretty much any decent modern SSD, wear leveling really isn't an
>>> issue any more, [...]
>> What _is_ "the entire rated service life of the drive"? [...]
> The seemingly-accelerated wear rate of modern SSDs is exaggerated by
> how fast they are. In active use, an SSD under an extreme
> write-heavy load will fail faster than a mechanical disk in terms of
> calendar time, but generally not in terms of blocks written.
So, since my use case isn't redlining the disk's ability to accept
writes (except relatively briefly, and then for the same number of
blocks written regardless of speed), that's not relevant to me.
> Unlike spinning media, though, SSDs do now have mechanical
> degradation at rest, as there's no lubrication to dry up.
s/now/not/ I assume?
Maybe not, but they do have stored charge to leak. It's not
*mechanical* degradation, but it amounts to something similar
operationally.
> SSD lifetimes are rated on total-device-writes-per-day over a given
> number of years, assuming an end-to-end wear pattern.
...which is completely unrealistic in almost every real scenario.
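(For scale, with purely illustrative numbers: a 1TB drive rated at one
drive-write-per-day over a five-year warranty works out to roughly
1TB x 365 x 5 ~= 1.8PB of total writes - but only if those writes are
spread evenly across the whole device, which is exactly the end-to-end
assumption.)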
> If you totally fill a device, and then rewrite the last block
> infinitely, you'll exhaust the spare block pool faster, but that's a
> pathological use case.
Not all that pathological; it's a lot like what will happen if it's
used with a filesystem that doesn't do TRIM.
> Remember to issue discard/dsm (a.k.a. "trim") commands for the unused
> portions of the media, and they'll last ages.
This sounds like "you need SSD-aware filesystem code to get decent
lifetime out of them". If so, that's another reason for me to avoid
them; FFS *long* predates TRIM. I've been working, off and on, on a
program to identify portions of an FFS filesystem which are not
important; the goal was different, but the resulting data could be used
to TRIM those areas. But that's not an in-live-use thing. Besides,
the steady state of disks is full.
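(For concreteness, here is a minimal sketch of what "TRIM those areas"
could look like as an offline pass over a list of extents.  It assumes
Linux's BLKDISCARD ioctl - whatever NetBSD exposes for this will
differ - and the device path and the extent are made up:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* BLKDISCARD */

    /* Tell the device a byte range no longer holds useful data. */
    static int discard_range(int fd, uint64_t offset, uint64_t length)
    {
        uint64_t range[2] = { offset, length };
        return ioctl(fd, BLKDISCARD, range);
    }

    int main(void)
    {
        /* Hypothetical: one extent a scanner decided is unimportant.
           Offset and length must be multiples of the sector size. */
        int fd = open("/dev/sdX", O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }
        if (discard_range(fd, 1048576, 65536) < 0)
            perror("BLKDISCARD");
        close(fd);
        return 0;
    }

An in-use filesystem would issue the same thing as it frees blocks,
which is what "SSD-aware filesystem code" buys you.)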
> There are lots of low-end players in the SSD space (Adata, PNY,
> etc.), and my experience with those drives is that they are about as
> reliable as cheap thumbdrives. Do not trust them for data you care
> about.
And...is there any reasonably simple way someone like me, not in the
storage industry in any form, can tell whether I'm looking at something
like that or something worth putting data on?
>> I would hope they'd instead flip from "working fine" to "read-only",
>> but I have little faith such hopes would be realized.
> That'd only be possible if wear was totally predictable (either
> detectably or artificially so).
I was, naïvely (and apparently unrealistically), expecting that the
firmware would be able to tell whether it's got good blocks and, when
it runs out of good space, would know it.
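(The nearest thing I'm aware of: the drive does keep wear counters and
reports them over SMART, so you can at least watch it approach the
cliff.  With smartmontools - device name hypothetical -

    smartctl -a /dev/nvme0

the NVMe health log includes "Percentage Used" and "Available Spare"
fields; SATA SSDs usually expose a vendor-specific wear attribute
instead.  Whether the firmware does anything graceful - like going
read-only - when those run out is another matter.)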
> For what it's worth, I off-site with LTO, but that's only because of
> the price of SSDs and speed doesn't matter for disaster-recovery in
> my use case.
LTO? Isn't that a tape technology?
I'd be tempted, but my (severely limited, of course) experience is that
tape media is even more likely to lose bits than disk drives - and is
substantially more expensive in dollars-per-byte as well.
> I'm slowly migrating from spinning rust to SSDs for active data
> because it's hard to argue with a seek time of nearly zero.
Oh, I don't argue with it. It's just that, for me, performance isn't
important enough to override the factor-of-over-two price difference.
/~\ The ASCII Mouse
\ / Ribbon Campaign
X Against HTML mouse at rodents-montreal.org
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B