[rescue] The State of SunHELP (new server build, etc)
Bob Darlington
rdarlington at gmail.com
Sun Jun 16 12:34:26 CDT 2019
How would you like me to test? I'm happy to get actual numbers for you any
way you like.
SAS-attached, all the same disks, probably an LSI SAS card but I'd have
to log in and check to be sure. And I haven't tried these with other
OSes; they're all attached to RHEL 6 or RHEL 7. Of course I have much
faster disk around, but that's not ZFS-based (NetApp, IBM DS-whatever
over the SAN).
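
If a plain streaming test is good enough, something like this (untested
as written; path and size are just placeholders) should give comparable
numbers:

  # sequential write, bigger than RAM so the ARC doesn't hide the disks
  dd if=/dev/zero of=/stg01/ddtest bs=1M count=65536 conv=fdatasync
  # sequential read back
  dd if=/stg01/ddtest of=/dev/null bs=1M

Bear in mind zeros compress away under lz4, so an incompressible source
file (or fio) is more honest on the write side. Happy to run fio with
whatever block sizes and queue depths you want instead.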
-Bob
On Sun, Jun 16, 2019 at 10:05 AM Tim Nelson <tim.e.nelson at gmail.com> wrote:
> I'd be interested to know what your specific values of "fairly quick"
> are. And am I right to assume your disks, controllers, expanders, etc.
> are all SATA 6.0Gbps?
>
> Also, I'm curious whether you've seen the same performance with other OSes (BSD, etc.)?
>
> --Tim
>
> On Fri, Jun 14, 2019 at 6:51 PM Bob Darlington <rdarlington at gmail.com>
> wrote:
>
> > ZFS raidz specifically eliminates the write-hole issue. On the smallest
> > of my systems I'm running 20 8TB disks in raidz3 with a hot spare, times
> > 4 for 84 disks in the JBOD, and typically 4 of these craptastic Seagate
> > JBODs to a server. I can't vouch for smaller systems, but I will say my
> > stuff is fairly quick. Here are some of the settings...
> >
> > History for 'stg01':
> > 2016-02-22.15:15:34 zpool create -o ashift=12 stg01 raidz3 /dev/sdb
> > /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
> > /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr
> > /dev/sds /dev/sdt /dev/sdu spare /dev/sdv -f
> > 2016-02-22.15:39:59 zpool export -a
> > 2016-02-22.15:44:57 zpool import -d /dev/disk/by-id stg01
> > 2016-02-29.13:48:31 zfs set compression=lz4 stg01
> > 2016-06-17.14:50:36 zfs set reservation=20G stg01
> > 2016-10-21.10:20:31 zfs set recordsize=8M stg01
> > 2016-10-21.10:24:14 zfs set relatime=on stg01
> >
> > Pretty stock stuff. Watch out for the -f's (force).
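> >
> > If you ever want to sanity-check what actually took effect on a pool
> > (pool name here is just mine), something along these lines works:
> >
> >   zpool get ashift stg01
> >   zfs get compression,recordsize,reservation,relatime stg01
> >   zpool status -v stg01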
> >
> > -Bob
> >
> > On Mon, Jun 10, 2019 at 2:14 PM Jonathan Patschke <jp at celestrion.net>
> > wrote:
> >
> > > On Mon, 10 Jun 2019, Bob Darlington wrote:
> > >
> > > > Raidz3 all the way. There are other under-the-hood tweaks we use,
> > > > and I'll get you the list when I am in the office and remember.
> > >
> > > Benchmark your load before going with raidz3. In order to avoid the
> > > parity "write hole", the I/Os will tend to serialize, which means
> > > you'll usually get throughput limited to the slowest disk in the batch.
> > >
> > > For the same reason, tuning recordsize on raidz can make a huge
> > > difference (partial stripe writes with parity make everyone have a
> > > rough day).
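> > >
> > > As a sketch (pool and dataset names made up), matching recordsize to
> > > the application's I/O size per dataset is usually the first knob:
> > >
> > >   zfs create -o recordsize=16K tank/db     # small random I/O
> > >   zfs create -o recordsize=1M tank/media   # large streaming files
> > >
> > > then re-run the same benchmark against each dataset.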
> > >
> > > I've had excellent experience with a zpool where each vdev is a
> > > mirror and the devices in each mirror are split between manufacturers
> > > (i.e., WDC on the "left" side of each mirror and Seagate on the
> > > "right").
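> > >
> > > Roughly (device names invented for illustration):
> > >
> > >   zpool create tank \
> > >     mirror /dev/disk/by-id/ata-WDC_AAAA /dev/disk/by-id/ata-ST_BBBB \
> > >     mirror /dev/disk/by-id/ata-WDC_CCCC /dev/disk/by-id/ata-ST_DDDD
> > >
> > > so no single mirror depends on two drives from the same batch.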
> > >
> > > For my "back up the world" box, it's raidz2 over slow disks. It works
> > > fine, but performance is pretty bad for modern hardware. For the
> > > machine I run data-crunching jobs on, I have two zpools, each with
> > > four disks arranged into two mirrors.
> > >
> > > For additional tweaks and tunables, please refer to the quote in my
> > > signature. :)
> > >
> > > --
> > > Jonathan Patschke | "The more you mess with it, the more you're
> > > Austin, TX | going to *have* to mess with it."
> > > USA | --Gearhead Proverb
> _______________________________________________
> rescue list - http://www.sunhelp.org/mailman/listinfo/rescue
More information about the rescue mailing list