[geeks] Cheap Dell Servers
Sandwich Maker
adh at an.bradford.ma.us
Fri Feb 8 09:58:07 CST 2008
" From: der Mouse <mouse at Rodents.Montreal.QC.CA>
"
" > But I thought the whole point of FFS's method of minimizing
" > fragmentation was that when it goes to allocate the next sector it
" > needs to allocate, it calculates what would be the optimal sector to
" > allocate from based on the drive parameters, and if that sector is
" > not free, it picks a free sector to allocate which is as close to
" > optimum as possible (probably still in the same cylinder but on a
" > different head or something)?
"
" That's something FFS does, or at least tries to, yes, but it doesn't
" have anything to do with fragmentation, as I understand fragmentation.
"
" My understanding of FFS's way of dealing with fragmentation is just to
" try to allocate files contiguously, and try to keep some 5-10 percent
" of the space free so that this can usualy be done, or close enough to
" help.
"
" >> []
" > My main concern in this exercise is to decode compressed HD video,
" > which can be a bit sensitive to fragmentation at times, according to
" > my own experience. If the difference in streaming video performance
" > will be negligible, I won't worry too much about it.
"
" []
"
" > So, on ZBR drives, does it become wholly irrelevant? If so, I won't
" > worry about what parameters are in my disklabels.
"
" I'd hesitate to go so far as to say _wholly_ irrelevant, at least not
" without actually trying it. (These things are complicated even further
" by drives auto-sparing flaky sectors and the like, which can, from the
" host poin of view, reorder sectors in a track; spare tracks mean that
" similarly hidden seeks can be introduced beyond the host's control.)
the only settings that i think could materially affect this - depending
on how much power modern drives have over reordering - would be ffs
maxcontig and maxbpg, which you'd want large if you're primarily
dealing with large files. i also set cgsize as large as i can, for no
good reason except that it seems like the right thing to do on disks
orders of magnitude larger than what the algorithms were written for.
these ffs algorithms were created for unbuffered flat-geometry disks;
i don't know of any drive since the st1480 that matches that
description.
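for illustration, on a 4.4bsd-style newfs that tuning might look
roughly like this [the flag letters and defaults vary between the
bsds, and the device name is made up, so check your own newfs(8) and
tunefs(8) man pages before believing any of it]:

    # lay out up to 32 blocks contiguously [-a maxcontig], let a
    # single file take up to 8192 blocks from one cylinder group
    # before moving on [-e maxbpg], and use big cylinder groups [-c]
    newfs -a 32 -e 8192 -c 32 /dev/rsd0e

    # or retune an existing [unmounted] fs
    tunefs -a 32 -e 8192 /dev/rsd0e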
on single disks i try to match the label's heads to the drive's real
head count [again for no good reason], but i jigger cyls and secs to
minimize inaccessible space. i even wrote a ksh prog that generates
possible geometries given the disk size in raw blocks.
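the idea fits in a few lines of ksh [this is a fresh sketch, not the
original prog, and the candidate head/sec counts are just plausible
guesses]:

    #!/bin/ksh
    # given a disk size in raw 512-byte blocks, print candidate
    # c/h/s geometries sorted by how many blocks each one strands
    # past the last whole cylinder
    total=${1:?usage: $0 total-blocks}
    for heads in 8 16 32 64 128 255
    do
        for secs in 32 63 64 128 255
        do
            (( bpc = heads * secs ))        # blocks per cylinder
            (( cyls = total / bpc ))        # whole cylinders that fit
            (( waste = total - cyls * bpc ))
            print "$waste\tc $cyls  h $heads  s $secs"
        done
    done | sort -n

feed it the raw block count from the drive and pick a line with zero
[or near-zero] waste.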
btw it's been a while since i've touched one, but iirc since at least
hpux9, hp has pretty much abandoned c/h/s, though their fs is still
based on ffs.
________________________________________________________________________
Andrew Hay                  the genius nature
internet rambler            is to see what all have seen
adh at an.bradford.ma.us    and think what none thought