-
Great White Shark
Originally Posted by Nick_B
I wasn't suggesting this was a solution. Just that gcc and OpenMPI run on OSX. I'm not attempting to convince you to do so. I said 8 cores because that's the biggest OSX box I know exists, lol.
Ah... my bad, then... I don't know of any other OS that could support such a large level of parallel computing other than Linux... In my mind, MacOS and Windows just have too much memory overhead, and things like PBSPro just won't be practical on them...
-
Originally Posted by proxops-pete
Ah... my bad, then... I don't know of any other OS that could support such a large level of parallel computing other than Linux... In my mind, MacOS and Windows just have too much memory overhead, and things like PBSPro just won't be practical on them...
Well, AIX and other Unixes, I suppose. Linux owns the market share, though.
I'm kind of curious what Windows HPC Server looks like. I don't think I'd care to use it, though.
Virginia Tech built an HPC from Mac G5s called System X. I believe it used OSX on the nodes. They still have a Mac hardware powered HPC, but it runs Linux now.
-
Great White Shark
Originally Posted by Nick_B
Virginia Tech built an HPC from Mac G5s called System X. I believe it used OSX on the nodes. They still have a Mac hardware powered HPC, but it runs Linux now.
I bet they paid close to nothing since they are an educational institution... for commercial companies like us at Boeing, I bet we'd pay way too much for both the Mac hardware and the software! >.<
-
Originally Posted by proxops-pete
I bet they paid close to nothing since they are an educational institution... for commercial companies like us at Boeing, I bet we'd pay way too much for both the Mac hardware and the software! >.<
Yeah, at the time in 2003, I guess System X was a top-10 HPC and cost a fifth as much as the next most inexpensive HPC on the top-10 list (this did come from Apple's website/marketing, so take that into account). When I read that, I could only think "how could Mac be cheaper?"
-
Because nodes in HPCs tend to be really expensive compared to off-the-shelf systems, even when those systems are PowerMac G5s. I doubt Apple gave VT much more than the standard higher-education discount. That's not usually something they do.
edit: also the cost of an HPC is way more than (cost per node x number of nodes)
Last edited by Steven P Jobs; 05-14-2012 at 06:03 PM.
-
Great White Shark
Oh I know it's more than Macs... while I can't disclose how much we paid for ours, it's a LOT... but do take into account things like 48 GB of memory per node, 12 cores per node, a diskless setup, and InfiniBand connections; it's no slouch... and I doubt that Mac version came remotely close to that...
-
^ When did you buy it, though? At the time the PowerMac G5 was on sale, the only system I know of that could have done that amount of cores/memory per node would have been some outrageously expensive Itanium 2 cluster, or possibly some kind of SPARC-based system, though I don't know too much about those.
Nowadays you can easily get these kinds of specs from x86, and other architectures have moved on to even more outrageous specs. Things have changed greatly over the last couple of years in that regard. I'd predict even more speed gains over the next few years, especially if people finally figure out how to really use GPUs to their full potential.
Another trend I see over the next 5 years or so is a drastic decrease in power consumption. After all, building a supercomputer is one thing, but then you have to keep it running. IIRC the one in Tianjin is water-cooled; just imagine the potential for failure with over 30k water-cooled CPUs.
The RIKEN/K Computer in Japan is 10x as fast as the fastest system from 3 years ago, and the trend seems to be continuing exponentially.
Last edited by Steven P Jobs; 05-15-2012 at 08:58 AM.
-
Great White Shark
Originally Posted by Steven P Jobs
^ When did you buy it, though? At the time the PowerMac G5 was on sale, the only system I know of that could have done that amount of cores/memory per node would have been some outrageously expensive Itanium 2 cluster, or possibly some kind of SPARC-based system, though I don't know too much about those. Nowadays you can easily get these kinds of specs from x86, and other architectures have moved on to even more outrageous specs. Things have changed greatly over the last couple of years in that regard. I'd predict even more speed gains over the next few years, especially if people finally figure out how to really use GPUs to their full potential.
The RIKEN/K Computer in Japan is 10x as fast as the fastest system from 3 years ago, and the trend seems to be continuing exponentially.
Oh, that price is as of last year... of course specs were worse back in '03, but still, things like InfiniBand were still around... that alone runs six figures, as I'm sure you know...
-
Of course there are other expenses besides the CPUs, not the least being that you need a place to actually put the thing and get the thermodynamics figured out. Tens of thousands of CPUs put out a good bit of heat, after all.
-
Great White Shark
Yeah... I hope more and more of these computationally expensive codes get ported to GPU parallel computing... 'cause that would be orders of magnitude cheaper!!! o.O