[rescue] 280R questions
Jonathan C. Patschke
jp at celestrion.net
Fri Dec 15 05:32:35 CST 2006
On Fri, 15 Dec 2006, Bill Bradford wrote:
> I had a pair of 280Rs with dual 750s.
> They were POSSESSED. CURSED. I expected them to start spewing pea
> soup out the Ethernet ports.
>
> Sun Service replaced everything in them but the chassis metal/plastic,
> power supplies (which tested fine), and hard drives. Some parts like
> RAM and CPUs were replaced MULTIPLE TIMES.
>
> They would STILL randomly panic, hang, reboot, etc.
I have the same experience with two or three Blade 1000 systems at $ork.
They're split between 750MHz and 900MHz systems. One of the 900MHz
systems even has copper CPUs, so it's just that the whole platform is
crap.
> I repeat, we replaced everything but the chassis, hard drives, and
> power supplies, and they were still unstable as hell (this is after 2+
> years of flawless operation).
Exactly. We've been through numerous CPUs (although three of them were
due to a MADDENING experience with a Northrop Grumman tech) and some
number of system boards. They also love to eat DIMMs.
They'd be -wonderful- machines (fast, FC on the motherboard, USB, UPA
and PCI64) if they weren't such money sinks.
> Ended up replacing them with a pair of 1.8GHz P4 1U rackmount boxes
> running CentOS 3. Other than a dead CPU heatsink fan, those boxes
> have had no problems.
Ours are slowly being replaced with v210s, which are mostly stable and
trouble-free, aside from the ALOM modules bricking themselves if we tell
them to get their IPs from DHCP (Sun -still- hasn't been able to
duplicate that problem).
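(For reference, the trigger is nothing exotic, just ALOM's own network
setup at the sc> prompt. Variable names below are from memory, so take
the exact spelling as approximate:

    sc> setsc if_network true     <- enable the SC's network interface
    sc> setsc netsc_dhcp true     <- use DHCP instead of a static IP
    sc> resetsc                   <- restart the SC to apply

On our boxes, that sequence is all it takes to lose the SC.)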
--
Jonathan Patschke ) "Some people grow out of the petty theft of
Elgin, TX ( childhood. Others grow up to be CEOs and
USA ) politicians." --Forrest Black