[geeks] Solaris 10 / OpenSolaris bits to be in next version of OSX
Charles Shannon Hendrix
shannon at widomaker.com
Thu Aug 10 10:22:50 CDT 2006
Thu, 10 Aug 2006 @ 21:44 +1000, Scott Howard said:
> On Wed, Aug 09, 2006 at 08:35:17PM -0400, Charles Shannon Hendrix wrote:
> > But what about this required integration of the two abstractions?
>
> The fact that the two-level method was a hack from the start?
That's not a fact, that's your opinion.
What about the fact, a real one this time, that if you want several
filesystems there are a lot of advantages to having them sit on one good
platform?
The ZFS way means reinventing the volume manager, or at least carrying
a redundant copy of that functionality around.
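To make concrete what I mean by the two-level split, here's a toy Python
sketch (every name in it is mine and hypothetical, not any real API):
redundancy lives once, in the volume manager, and any number of different
filesystems sit on top of it for free.

class VolumeManager:
    """Owns the physical disks; mirroring/striping is implemented
    here exactly once, for every filesystem above it."""
    def __init__(self, disks):
        self.disks = disks

    def create_volume(self, size):
        return LogicalVolume(self, size)

class LogicalVolume:
    """Looks like a plain block device to any filesystem."""
    def __init__(self, vm, size):
        self.vm, self.size = vm, size

class UFS:
    def __init__(self, volume):
        self.volume = volume

class VxFS:
    def __init__(self, volume):
        self.volume = volume

vm = VolumeManager(disks=["c0t0d0", "c0t1d0"])
fs_a = UFS(vm.create_volume(2**30))   # two different filesystems,
fs_b = VxFS(vm.create_volume(2**30))  # one shared redundancy platform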
> There are a number of advantages in having the filesystem not just being
> aware of, but also controlling, the actual layout of data on disk.
There are also disadvantages.
Has anyone actually tested whether the advantages and shortcuts taken
by ZFS outweigh maintaining the abstraction and the benefits it gives
you?
> > I've *never* had a good drive that did that. If a write fails, it
> > knows, and it tells me.
>
> How do you know? Because it didn't tell you otherwise? If you trust
> your storage that much, then you probably don't need ZFS for its
> availability features. You can probably also do without insurance for
> your house/car/etc too... :)
You are making stupid assumptions.
I said nothing about how much I trust storage. If I trusted storage as
much as you imply/assume, I wouldn't have backups or run filesystems
that do a good job of making up for hardware limitations.
I didn't say ZFS doesn't do useful things; I just question whether it
is new, and whether losing the abstractions was necessary or worth
giving up the benefits they provide.
Also, some people are implying it does things that would be very
dangerous to rely on, which is another reason for asking questions
about it.
> What it does differently is that instead of storing the checksum with
> the data itself, it stores one rung higher up the filesystem metadata
> tree. ie, at each level you have the metadata plus the checksum of the
> data that it points to. This means that as the filesystem walks down
> the tree, it can check at each step of the way that everything is in
> order. If, for example, a block didn't get written to disk half-way down
> the tree (eg, the disk didn't write it for some reason) then it knows not
> to trust anything below that point and it will go to the redundant copy.
There has been userland software to do this for years in UNIX, and
operating systems like MVS have done this for decades now.
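The core trick is easy to sketch, too. Here's the idea in a few lines
of Python (the names and structure are mine, not ZFS's actual on-disk
format): the parent records the child's checksum at write time, so a
reader can tell a bad copy from a good one and fall back to the mirror.

import hashlib

def checksum(data):
    return hashlib.sha256(data).hexdigest()

class BlockPtr:
    """Parent-level pointer: the child's address plus the checksum
    of the child, recorded when the child was written."""
    def __init__(self, addr, csum):
        self.addr, self.csum = addr, csum

def read_verified(mirrors, ptr):
    """A copy is trusted only if it matches the checksum held one
    rung up the tree; otherwise try the next redundant copy."""
    for disk in mirrors:                 # each disk: {addr: bytes}
        data = disk.get(ptr.addr)
        if data is not None and checksum(data) == ptr.csum:
            return data                  # known-good copy
    raise IOError("no trustworthy copy of block %d" % ptr.addr)

# A silently corrupted primary is caught, and the mirror used instead:
payload = b"some block"
ptr = BlockPtr(addr=7, csum=checksum(payload))
assert read_verified([{7: b"garbage"}, {7: payload}], ptr) == payload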
So it looks to me like ZFS is just a smooth integration of a lot of
different features into one.
Kind of like Apple's "new" Time Machine software: it's a GUI on top of
file versioning that is decades old, but happens to be done more
smoothly than what came before (on paper, anyway; I haven't seen it yet).
> > If this is true, then I definitely would not trust ZFS, because you just
> > described them trying to handle a situation that should not be handled.
>
> Why not? It handles it by using redundancy - and in particular a
> redundant copy that we _know_ we can trust due to the checksum.
>
> > When the drive starts having errors, a properly working RAID should kick
> > it out immediately.
> >
> > Modern drives automatically remap. When you start having errors, the
> > drive is toast.
>
> _IF_ it can detect it. That's what RAID is for. ZFS is for when the disk
> can't detect it.
No, you are arguing a point that wasn't made.
The statement was that ZFS detected and fixed errors on failing drives,
and seemed to imply that it kept on using them.
Yes, it's nice that it detects those errors, but not good if it doesn't
kick that drive out of production. At the very least, it should kick it
out until an admin can independently verify the drive error was an
anomaly.
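If I were writing that policy, it would look something like the sketch
below (this is what I'm asking for, not a claim about what ZFS actually
does): heal the block from the good copy, but fault the drive until
someone clears it.

class Drive:
    def __init__(self, name):
        self.name = name
        self.faulted = False

def on_checksum_error(bad_drive, addr, good_data, write_block):
    """Repair from redundancy, then kick the drive out of production
    pending independent verification by an admin."""
    write_block(bad_drive, addr, good_data)  # heal the bad block
    bad_drive.faulted = True                 # stop trusting the drive
    print("%s faulted at block %d; verify before returning to service"
          % (bad_drive.name, addr))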
--
shannon "AT" widomaker.com -- ["Meddle not in the affairs of Wizards, for
thou art crunchy, and taste good with ketchup." -- unknown]