[rescue] E10k systems coming down in price
Skeezics Boondoggle
skeezics at q7.com
Fri Jul 18 11:42:46 CDT 2003
On Fri, 18 Jul 2003, "Joshua D. Boyd" wrote:
>
> On Fri, Jul 18, 2003 at 10:14:51AM -0400, Curtis H. Wilbar Jr. wrote:
> > On a machine like an E10K, it would be nearly pointless to run anything but
> > Solaris.
> >
> > Currently even if Linux, FreeBSD, and NetBSD ran on the E10K (I do not
> > believe any of them do yet), none of them would be anywhere near as
> > efficient in MP/MT as Solaris. Solaris's scheduling of threads within
> > the kernel alone makes a big difference in MT performance as your
> > processor count goes up.
Actually, I recall an email from a friend and Linux kernel banger who
showed the output of Linux booting on an E10k, so they must have at least
rudimentary support for the Gigaplane...
> > Don't get me wrong, I like all the BSDs, and Linux, and pretty much most
> > UNIX, but for a box this big, the money you would save by running non-Solaris
> > is going to be lost in the performance penalty you will pay for not really
> > "using" the hardware to its fullest.
>
> That theory only holds for people hoping to use the E10k for commercial
> reasons. For non-commercial reasons, all the money you save is money
> you save, since the operating inefficiency doesn't result in lost
> profits anyway.
Well, you also lose a lot of the nifty features that make machines like
the E10k so cool - hot swapping system boards, for example. I think the
idea behind machines of that scale is that you plug it in, turn it on, and
let it run forever - having to reboot for something as mundane as a board
or CPU failure is unacceptable. :-) Other things, like support for
domains, inter-domain networking, environmental monitoring, etc. probably
aren't there yet, either.
Those are the things that differentiate Solaris, along with other features
in the kernel that allow for massive scalability. Sun abandoned the
"uniprocessor mode" vs. "multiprocessor mode" thing quite some time ago,
so the Linux guys like to beat up on the fact that on single-CPU boxes or even
1-4 way small SMPs Linux is faster - and in some tests it certainly
may be (context switch times, faster fork(), etc.). However, get beyond
12-16 CPUs and it falls over, while Solaris is just starting to hit its
stride. It just doesn't make sense to cram all those high-end features
into the typical Linux distribution, but I expect that in time there will
be a Linux optimized for larger systems.
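(For the benchmark-curious: the "faster fork()" bragging usually comes from
tiny microbenchmarks along these lines - this is just a rough sketch of my
own, not any particular published test, and the iteration count is an
arbitrary number I picked:

    /*
     * Minimal sketch of a fork() latency microbenchmark - hypothetical
     * example only.  Times n fork()/waitpid() round trips and prints
     * the average cost per iteration.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        struct timeval start, end;
        int i, n = 1000;        /* arbitrary iteration count */
        double usec;

        gettimeofday(&start, NULL);
        for (i = 0; i < n; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                _exit(0);                   /* child exits immediately */
            } else if (pid > 0) {
                waitpid(pid, NULL, 0);      /* parent reaps the child */
            } else {
                perror("fork");
                exit(1);
            }
        }
        gettimeofday(&end, NULL);

        usec = (end.tv_sec - start.tv_sec) * 1e6
             + (end.tv_usec - start.tv_usec);
        printf("avg fork+wait: %.1f usec over %d iterations\n", usec / n, n);
        return 0;
    }

Numbers like that tell you plenty about a 1-2 CPU desktop and nothing at all
about how the scheduler holds up with 64 CPUs and a few thousand runnable
threads, which is the part Solaris is actually good at.)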
Personally, though, I would find a Linux monoculture just as odious and
unpleasant to contemplate as a Windows monoculture; nobody has a monopoly
on good ideas, and the narrowing of choices is not only personally
grievous but, I think, dangerous to the health of the industry as a whole.
This may be unfair to the more passionate Linux advocates out there, but
Linux without commercial Unices to emulate/copy/steal from would be just
as dead as Windows without Apple to emulate/copy/steal from.
But hey, if "the market decides" our computing future is 100% Wintel and
the last remaining OS and microprocessor choices narrow to just the latest
commodity crap pumped out of Redmond and Santa Clara, well, I'll just play
with my old computers and go do something else for a living. Be a plumber
or something. Which is very similar to system administration, if you
think about it.
> Besides, how are the various free OSs supposed to get better if they
> don't have machines to work on?
Yeah. The OSDL was supposed to do that... I'll just refrain from comment
since I've had a couple of very talented friends who worked there and
bailed on the place.
<peeve> Besides, large systems are "dead," remember? The so-called Top 500
Supercomputers list is now full of Wintel clusters (and clusters in
general). Am I the only one who finds that virtually useless? If you
have a problem to solve that isn't particularly well suited to
partitioning over a grid or cluster and you just want to find out which
_machine_ is the biggest/best/baddest for the task, the Top 500 rankings
don't tell you that anymore. If you want to add up all 30 machines in
your basement and your cell phone, toaster, PDA, your microwave and the
Postscript interpreter in your printer and publish a list of Top 500
Supercomputing _sites_, based on the aggregate computing power, then
that's fine - but just call it what it is. I'd still like to know how the
large systems rank in terms of running one single OS image in
non-clustered, non-grid, single standalone system configurations.
Y'know, who's waving one big willy instead of 4,096 little ones. :-) Maybe
it's irrelevant, nowadays. But it seems silly to be ranking who can pile
up the most boxes, rather than which individual boxes can do the most work
(so that you can then pile up a bunch of those! :-) </peeve>
La la la, it's Friday, only 7 more hours 'til beer:30.
-- Chris