[Sunhelp] Fault tolerant system.
Fletcher, Joe
joe.fletcher at Metapack.com
Thu Sep 21 11:25:07 CDT 2000
I'm aware of the initiator id issue. We're using Solaris 8.
-----Original Message-----
From: Jarrett Carver [mailto:solarboyz1 at hotmail.com]
Sent: 21 September 2000 16:57
To: sunhelp at sunhelp.org
Subject: RE: [Sunhelp] Fault tolerant system.
Note that if you are connecting your storage arrays via SCSI you need to
ensure that no two hosts on the same bus share the same scsi-initiator-id.
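
For reference, the initiator ID lives in OBP NVRAM. A minimal sketch of
checking and changing it on one of the two hosts (the value 6 here is just
an example; pick any target ID not used by a device on the shared bus):

  # from a running Solaris system, show the current setting
  eeprom scsi-initiator-id
  # change it so the two hosts no longer clash (takes effect after reboot)
  eeprom scsi-initiator-id=6

  ok setenv scsi-initiator-id 6   \ same thing from the ok prompt
  ok reset-all
  ok probe-scsi-all               \ confirm all targets still answer

Note this changes the ID globally for the host; if you only want it on one
adapter you'd set it per-device in nvramrc instead.
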
Other Fault tolerance Notes:
If you are using Solaris <= 7 and enable dynamic reconfiguration, ensure
that all disk packs are connected to different boards, and that none are
connected to board 1. This allows hot-swapping an I/O board should one go
down. I tested the hot swapping and it seemed to work well, although I never
actually hot-swapped on a production system ;-).
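
Purely as a rough illustration (this applies to the board-based Enterprise
boxes rather than the 420Rs, and assumes the Solaris 8 cfgadm interface;
the attachment point name below is just what an E4500-class machine might
show, not something I've checked on this config):

  # list attachment points and their state
  cfgadm -l
  # take a failed I/O board out of service before pulling it
  cfgadm -c unconfigure sysctrl0:slot2
  cfgadm -c disconnect sysctrl0:slot2
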
At the last place I worked we used Sun Cluster, and these were two of the
things I noted when setting it up. Other than that your wiring diagram
looked correct.
>From: "Fletcher, Joe" <joe.fletcher at metapack.com>
>Reply-To: sunhelp at sunhelp.org
>To: "'sunhelp at sunhelp.org'" <sunhelp at sunhelp.org>
>Subject: RE: [Sunhelp] Fault tolerant system.
>Date: Thu, 21 Sep 2000 16:15:16 +0100
>
>Hi,
>
>My diagram didn't quite make it intact I see.
>
>
>-----Original Message-----
>From: Gregory Leblanc [mailto:GLeblanc at cu-portland.edu]
>Sent: 21 September 2000 15:52
>To: 'sunhelp at sunhelp.org'
>Subject: RE: [Sunhelp] Fault tolerant system.
>
>
>Disclaimer: I don't have enough money to actually purchase a system like
>this, but I've gone over it and diagramed it a couple of times, "just in
>case".
>
>Budget for tranquilizers and pain-killers if you do buy into this sort of
>thing.
>Alternatively use VMS! :-)
>
>
>
> > -----Original Message-----
> > From: Fletcher, Joe [mailto:joe.fletcher at Metapack.com]
> >
 > > I'm looking for some hints on putting together the following
 > > hardware rig: 2x E420Rs, each with two dual SCSI adapters added,
 > > and two A1000 disk arrays. The system will be running RSF-1
 > > "cluster" software. We will have Veritas VM installed as well.
 > > Initially I'd like confirmation of just how the whole lot should
 > > be cabled together. This really needs a few diagrams to help
 > > explain the options, but I'm assuming some people aren't using
 > > MIME-enabled mailers. Let's call the servers S1 and S2. The
 > > arrays can be Ar1 and Ar2. Each array has two SCSI connectors,
 > > ports 1 and 2. Assume we will be working with controllers in
 > > slot 4 on each server. Each controller has ports A and B. Still
 > > with me so far?
>
>Yeah, pretty straightforward.
>
>You'd think so, but since it's not entirely obvious how the ports on the
>A1000 behave, it's not quite. The trick is going to be making sure that if
>a failover occurs, the standby server sees the same disk set as the
>primary. Likewise, if a disk set fails, the standby set has to have
>identical contents to the primary.
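>
>As a rough sketch of what that failover boils down to at the Veritas level
>(whether RSF-1 scripts it or you do it by hand; the disk group name
>"datadg", volume "vol01" and mount point "/data" are made up here purely
>for illustration):
>
>  # on the primary node, if it is still up:
>  umount /data
>  vxdg deport datadg
>
>  # on the standby node:
>  vxdg -C import datadg      # -C clears the old host's import locks
>  vxvol -g datadg startall   # start all volumes in the group
>  fsck -y /dev/vx/rdsk/datadg/vol01
>  mount /dev/vx/dsk/datadg/vol01 /data
>
>If the standby can't see exactly the same LUNs over its own SCSI path, the
>import simply won't find the whole group, which is why the cross-cabling
>matters.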
>
>
 > > What I think we need to do is connect:
 > >   S1 port A to Ar1 port 1, and S1 port B to Ar2 port 2
 > >   S2 port B to Ar1 port 2, and S2 port A to Ar2 port 1
>
>Sounds good to me.
>
>So how about S1-A to Ar1-2 with S1-B to Ar2-1,
>and S2-A to Ar2-2 with S2-B to Ar1-1?
>
>There is a difference. It's further complicated by the fact that they want
>to use the standby servers for Q/A testing, so they will need a disk set of
>their own to play on.
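>
>A quick way to sanity-check whichever cabling you settle on, before the
>cluster software comes into it (run on both S1 and S2; the controller
>numbers will of course differ per host):
>
>  format </dev/null      # both hosts should list the same set of LUNs
>  vxdisk list            # once VxVM is installed, same story here
>
>Keeping the Q/A disks in a Veritas disk group of their own, separate from
>the failover group, is probably the simplest way to stop the testing work
>treading on the production volumes.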
>
 > > We then use RAID Manager to carve up the disks within the arrays
 > > (12 disks in each) and manage the resulting virtual disks through
 > > Veritas at the operating system level.
> >
> > S1 S2
> > | \ / |
> > | \/ |
> > | / \ |
> > Ar1 Ar2
>
>Yeah, that's how I'd do the cabling. Since you're aiming for redundancy,
>if any one piece fails, you still have the other to pick up.
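>
>For the carve-up step quoted above, a rough sketch of how the pieces hang
>together (the device, disk group and volume names here are invented for
>illustration; the LUNs themselves are created first with RAID Manager,
>whose own syntax I won't guess at here):
>
>  # each RM-created LUN shows up to Solaris as an ordinary disk, e.g. c2t5d0
>  /etc/vx/bin/vxdisksetup -i c2t5d0       # put it under VxVM control
>  vxdg init datadg disk01=c2t5d0          # first LUN starts the disk group
>  vxdg -g datadg adddisk disk02=c2t5d1    # add further LUNs
>  vxassist -g datadg make vol01 50g       # carve volumes out of the group
>  newfs /dev/vx/rdsk/datadg/vol01
>
>From there it's the deport/import dance above whenever the service moves
>between S1 and S2.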
>
 > > An alternative is to chain the two arrays together and single-port
 > > each server, thus:
> >
> > S1----Ar1----Ar2 ----S2
>
>I wouldn't go this way, as it's likely to hamper performance (the servers
>don't have a bus dedicated to each array), and it takes a bit away from the
>fault tolerance as well. Later,
> Greg
>
>
>This last bit is a stopgap if we can't get the planned config working
>within the timescale allowed by the client. Did I mention this has to be
>up and running by the end of next week?
_______________________________________________
SunHELP maillist - SunHELP at sunhelp.org
http://www.sunhelp.org/mailman/listinfo/sunhelp