Approaching a singularity on multiple technological fronts

Sharky Forums


Results 1 to 14 of 14

Thread: Approaching a singularity on multiple technological fronts

  1. #1
    Great White Shark
    Join Date
    Nov 2000
    Location
    Alpharetta, Denial, Only certain songs.
    Posts
    9,925

    Approaching a singularity on multiple technological fronts

    When I say singularity, I mean, a point where something will radically shift, and things won't be the same anymore.

    As standard (and not-so-standard) lithography approaches single digits, something will soon have to change. Our current generation of lithography is at 22nm. The next generation is 18nm, followed by either 13nm or 11nm. After that the future is murky, because you start to have a huge problem with electron tunneling. The gaps become so small they are no longer sufficient to act as a proper transistor.
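The tunneling problem can be made concrete with a rough estimate: quantum transmission through a rectangular barrier falls off exponentially with barrier width, so every process shrink buys an enormous increase in leakage. A minimal sketch — the 1 eV rectangular barrier is an assumed, illustrative figure, not a model of any real gate:

```python
import math

def tunneling_probability(barrier_nm, barrier_ev=1.0):
    """Rough transmission probability for an electron tunneling through a
    rectangular potential barrier. Illustrative only, not a device model."""
    m_e = 9.109e-31    # electron mass, kg
    hbar = 1.055e-34   # reduced Planck constant, J*s
    ev = 1.602e-19     # joules per eV
    width = barrier_nm * 1e-9
    kappa = math.sqrt(2 * m_e * barrier_ev * ev) / hbar  # decay constant, 1/m
    return math.exp(-2 * kappa * width)

for nm in (22, 11, 5, 1):
    print(f"{nm:>2} nm barrier -> T ~ {tunneling_probability(nm):.3e}")
```

At 22nm the leakage is astronomically small; by 1nm it is large enough to matter, which is the qualitative point about why sub-10nm gaps stop behaving like proper switches.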

    Time and again, people far smarter than me in both applied sciences and materials science have solved the problem in some way. But it seems they are just prolonging the inevitable. What can come next? Photonic computing? Quantum computing? Neither seems to be in a take-over position in the next 4 years or so.

    Then there are other technologies: hard drives (for mass-nearline/offline storage), tape drives, batteries.

    Batteries in particular are just odd. Over the last 5 years I have seen roughly half a dozen new and easily commercialized solutions to radically increase battery life: silicon-nanowire-based anodes, carbon nanotube solutions, and so on. (I can find specific examples if you wish.)

    As we fight the constant struggle of packing more technology into less space, doing more with less energy, and storing more energy so we can do things longer, what's next?

    I'd like to hear some feedback from my fellow sharks on anything they've heard about that they think could bring about the next radical shift in technology and the way we think about and use it.

    Crusader for the 64-bit Era.
    New Rule: 2GB per core, minimum.

    Intel i7-9700K | Asrock Z390 Phantom Gaming ITX | Samsung 970 Evo 2TB SSD
    64GB DDR4-2666 Samsung | EVGA RTX 2070 Black edition
    Fractal Arc Midi |Seasonic X650 PSU | Klipsch ProMedia 5.1 Ultra | Windows 10 Pro x64

  2. #2
    I don't roll on Shabbos! Timman_24's Avatar
    Join Date
    Aug 2004
    Location
    Urbana, IL
    Posts
    12,648
    I think there is a lot of room for improvement on the software front. Multiple cores still aren't utilized. Look at Bulldozer; Windows can't even manage that properly. I think the next thing will be scaling up cores to postpone the singularity. I've heard of all kinds of next-generation computing tech, such as DNA/biological computers, quantum computing, and so on. My former physics professor is working on quantum computing. He said that five years ago they made a large discovery that rekindled his interest. It may be closer than we think.

    Once Intel/AMD hit the wall on the modern process, massive funding will be spent on these newer techs to keep people buying.
    PC: Corsair 550D
    4280k | Asus Rampage Gene | Mushkin 4x4GB | EVGA 780
    Intel 120GB SSD + 2TB Seagate | Seasonic 660 Plat
    2x Alphacool XT45 | Laing DDC | Bitspower

    Currently playing: Civ 5
    Last Game Beaten: Walking Dead

  3. #3
    Great White Shark proxops-pete's Avatar
    Join Date
    Feb 2003
    Location
    Houston, we have lift off!
    Posts
    10,316
    I read about a logic gate the size of atoms! o.O Let me find it...

    http://arstechnica.com/science/news/...ingle-atom.ars
    Last edited by proxops-pete; 03-05-2012 at 09:15 AM.

  4. #4
    LOLWUT ImaNihilist's Avatar
    Join Date
    Nov 2001
    Location
    San Francisco
    Posts
    14,034
    I would agree with the software front. We have so much compute power right now we don't even know what to DO with it. There is no code. We need a new generation of computer scientists. Most people, including myself, are just interested in making bucks off of simple projects for the web. There aren't a lot of people these days really pushing interesting code. The incentive really isn't there.

    The most interesting project I've seen in a long time is Watson, and the interesting thing about Watson is that if you can get input latency low enough, the compute power doesn't have to be local; it can be on a server somewhere else, even ephemeral. Watson uses something like 3,000 cores of compute power, but it took a team of geniuses to figure out how to make it do anything at all. I think Watson and simple things like Siri show that we have the technology, we just don't have the software. There is no NASA for computer programming.

    If tomorrow Intel found out that they could make a Core i7 that was 10x faster across the board, I'm not sure it would really make that much of a difference. Sure, encoding times would drop…but really that's the only thing I can think of these days that sees real world gains from faster CPUs.
    Last edited by ImaNihilist; 03-05-2012 at 02:20 PM.

  5. #5
    I don't roll on Shabbos! Timman_24's Avatar
    Join Date
    Aug 2004
    Location
    Urbana, IL
    Posts
    12,648
    Quote Originally Posted by ImaNihilist View Post
    I would agree with the software front. We have so much compute power right now we don't even know what to DO with it. There is no code. We need a new generation of computer scientists. Most people, including myself, are just interested in making bucks off of simple projects for the web. There aren't a lot of people these days really pushing interesting code. The incentive really isn't there.

    The most interesting project I've seen in a long time is Watson, and the interesting thing about Watson is that if you can get input latency low enough, the compute power doesn't have to be local; it can be on a server somewhere else, even ephemeral. Watson uses something like 3,000 cores of compute power, but it took a team of geniuses to figure out how to make it do anything at all. I think Watson and simple things like Siri show that we have the technology, we just don't have the software. There is no NASA for computer programming.

    If tomorrow Intel found out that they could make a Core i7 that was 10x faster across the board, I'm not sure it would really make that much of a difference. Sure, encoding times would drop…but really that's the only thing I can think of these days that sees real world gains from faster CPUs.
    The only people pushing code are game developers and the scientific community. Most high-end supercomputer software is highly proprietary, though. I saw a presentation last week about a company that uses night-vision cameras coupled with a lens module to take high-resolution "X-rays" of objects by detecting gamma rays. That's right: it uses night-vision cameras and special lenses combined with software filters to detect gamma rays. It uses data collected from multiple detectors to reconstruct a fully 3D image of the internals of an object.

    It uses GPUs to do this calculation, two GTX 295s actually. The guy was saying this type of calculation needed $250k computers 10 years ago, but now it only takes two $500 GPUs. The project is aimed at the medical field.
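The reconstruction step described above is, at heart, back-projection: each detector's 1-D projection is smeared back across the volume and the results are summed, which is exactly the per-voxel arithmetic GPUs excel at. A toy 2-D sketch with two orthogonal views and a single point source (a real system uses many angles plus filtering — this is only the core idea):

```python
import numpy as np

def backproject_2d(proj_rows, proj_cols):
    """Toy unfiltered back-projection: smear each 1-D projection back
    across the image plane and sum. Two orthogonal views are enough
    to locate a single point source."""
    return np.add.outer(proj_rows, proj_cols)

# simulate a point source inside a 2-D "object"
obj = np.zeros((8, 8))
obj[5, 2] = 10.0
rows = obj.sum(axis=1)   # projection onto the row axis
cols = obj.sum(axis=0)   # projection onto the column axis

recon = backproject_2d(rows, cols)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(recon), recon.shape))
print(peak)  # -> (5, 2): the source location is recovered
```

Every output pixel is independent of every other, which is why the workload maps so cleanly onto a pair of consumer GPUs.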

    It is stuff like that that will push the envelope.
    PC: Corsair 550D
    4280k | Asus Rampage Gene | Mushkin 4x4GB | EVGA 780
    Intel 120GB SSD + 2TB Seagate | Seasonic 660 Plat
    2x Alphacool XT45 | Laing DDC | Bitspower

    Currently playing: Civ 5
    Last Game Beaten: Walking Dead

  6. #6
    Great White Shark
    Join Date
    Nov 2000
    Location
    Alpharetta, Denial, Only certain songs.
    Posts
    9,925
    I am constantly amazed at how the efforts of the HPC crowd are not being backported to the rest of business. Things like InfiniBand, external PCIe, heterogeneous computing (GPU + CPU), RAM-only storage, etc. All of these things that were originally tested, designed and implemented in the HPC realm are now the watchwords of the day for high-performance business platforms.

    On the software side, I think there needs to be a radical shift in how people perceive the problems that they write software to solve. We have single systems with 40 cores and 80 threads, and yet there isn't really a workload that could take advantage of all of that, especially when access to the storage solution backing it usually ends up single-threaded (for example, Linux and irqd).
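That single-threaded storage path is an Amdahl's-law ceiling: however many of the 80 threads you use, the serial fraction caps the speedup. A quick sketch — the 30% serial fraction is an assumed number, purely for illustration:

```python
def amdahl_speedup(serial_fraction, threads):
    """Maximum speedup when `serial_fraction` of the work
    (e.g. a single-threaded storage/interrupt path) cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

# assume 30% of a workload is serialized behind storage I/O
for n in (2, 8, 80):
    print(f"{n:>2} threads -> {amdahl_speedup(0.3, n):.2f}x")
```

With 30% serialized, 80 threads buy barely more than a 3x speedup, and the ceiling at infinite threads is 1/0.3 ≈ 3.33x — which is why fixing the storage back end matters more than adding cores.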

    Crusader for the 64-bit Era.
    New Rule: 2GB per core, minimum.

    Intel i7-9700K | Asrock Z390 Phantom Gaming ITX | Samsung 970 Evo 2TB SSD
    64GB DDR4-2666 Samsung | EVGA RTX 2070 Black edition
    Fractal Arc Midi |Seasonic X650 PSU | Klipsch ProMedia 5.1 Ultra | Windows 10 Pro x64

  7. #7
    Great White Shark proxops-pete's Avatar
    Join Date
    Feb 2003
    Location
    Houston, we have lift off!
    Posts
    10,316
    Having used all that HPC-related tech, I can attest to how awesome it all is... but it does come at a (very) high cost. And I'm guessing that's been the limiting factor... InfiniBand tech alone costs six figures...

  8. #8
    LOLWUT ImaNihilist's Avatar
    Join Date
    Nov 2001
    Location
    San Francisco
    Posts
    14,034
    Quote Originally Posted by James View Post
    On the software side, I think there needs to be a radical shift in how people perceive the problems that they write software to solve. We have single systems with 40 cores and 80 threads, and yet there isn't really a workload that could take advantage of all of that, especially when access to the storage solution backing it usually ends up single-threaded (for example, Linux and irqd).
    It's a bit of a mathematics issue as well. It's difficult to multithread a lot of applications, especially those you want to respond in real time. Cutting up video to be rendered or compressed is easy—it's a fixed length and you can divide it any number of ways.

    That's what was so interesting to me about Watson. They throw 3,000 cores at something you wouldn't think could be heavily multithreaded, and it responds in near real time. I'm not quite sure how they did it. What, exactly, was each core doing? Some processing audio, some looking for keywords, others doing probability…but how do you scale that over 3,000 threads?
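The "fixed length, divide any number of ways" case is the easy one precisely because the chunks are independent. A minimal sketch of that pattern, with a trivial stand-in function playing the role of the encoder (real encoders split on independent GOPs the same way):

```python
from concurrent.futures import ThreadPoolExecutor

def encode_chunk(frames):
    """Stand-in for encoding one segment of video."""
    return [f * 2 for f in frames]  # pretend "encoding" doubles each value

def parallel_encode(frames, workers=4):
    # Split the fixed-length job into roughly equal, independent chunks.
    size = max(1, len(frames) // workers)
    chunks = [frames[i:i + size] for i in range(0, len(frames), size)]
    out = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() yields results in submission order, so reassembly is trivial
        for encoded in pool.map(encode_chunk, chunks):
            out.extend(encoded)
    return out

print(parallel_encode(list(range(10))))  # -> [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

An interactive question-answering pipeline has no such natural split, which is what makes the Watson case interesting: the parallelism has to come from evaluating many candidate answers at once rather than from cutting one stream into pieces.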
    Last edited by ImaNihilist; 03-08-2012 at 02:24 PM.

  9. #9
    I don't roll on Shabbos! Timman_24's Avatar
    Join Date
    Aug 2004
    Location
    Urbana, IL
    Posts
    12,648
    Quote Originally Posted by ImaNihilist View Post
    It's a bit of a mathematics issue as well. It's difficult to multithread a lot of applications, especially those you want to respond in real time. Cutting up video to be rendered or compressed is easy—it's a fixed length and you can divide it any number of ways.

    That's what was so interesting to me about Watson. They throw 3,000 cores at something you wouldn't think could be heavily multithreaded, and it responds in near real time. I'm not quite sure how they did it. What, exactly, was each core doing? Some processing audio, some looking for keywords, others doing probability…but how do you scale that over 3,000 threads?
    I believe Watson was fed text files of the questions. I'm pretty sure the enormous number of computer programmers coding these days are not doing it at the machine level. I bet there are only a handful of people who actually write code at that basic level, which is what's needed to progress the software front. I'd venture to guess 90% or more of programmers only know high-level languages such as C and Java. There is no way to tackle unique hardware issues using those.
    PC: Corsair 550D
    4280k | Asus Rampage Gene | Mushkin 4x4GB | EVGA 780
    Intel 120GB SSD + 2TB Seagate | Seasonic 660 Plat
    2x Alphacool XT45 | Laing DDC | Bitspower

    Currently playing: Civ 5
    Last Game Beaten: Walking Dead

  10. #10
    Great White Shark
    Join Date
    Nov 2000
    Location
    Alpharetta, Denial, Only certain songs.
    Posts
    9,925
    There needs to be more focus in multithreading the storage back end. That's where the major problem lies.

    If they used GPU type subprocessors for things like parity calculations, hardware RAID could actually be a performance contender. As it stands now, they are so far behind software RAID solutions it's not even funny.
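The parity calculation in question is just XOR across the stripe, which is exactly the kind of wide, branch-free arithmetic a GPU-style subprocessor is built for. A minimal sketch of RAID-5-style parity and single-drive rebuild, with NumPy arrays standing in for what would be a GPU kernel:

```python
import numpy as np

def parity(stripes):
    """XOR parity block across the data stripes (RAID-5 style)."""
    p = np.zeros_like(stripes[0])
    for s in stripes:
        p ^= s
    return p

def recover(surviving, parity_block):
    """Rebuild one lost stripe: XOR the parity with all surviving stripes."""
    missing = parity_block.copy()
    for s in surviving:
        missing ^= s
    return missing

rng = np.random.default_rng(0)
stripes = [rng.integers(0, 256, 8, dtype=np.uint8) for _ in range(3)]
p = parity(stripes)

# pretend the drive holding stripes[1] died
rebuilt = recover([stripes[0], stripes[2]], p)
assert np.array_equal(rebuilt, stripes[1])
```

Since every byte of parity is independent, the work parallelizes perfectly across however many lanes the hardware offers — the same property that makes XOR engines on RAID ASICs and GPU compute both viable.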

    Crusader for the 64-bit Era.
    New Rule: 2GB per core, minimum.

    Intel i7-9700K | Asrock Z390 Phantom Gaming ITX | Samsung 970 Evo 2TB SSD
    64GB DDR4-2666 Samsung | EVGA RTX 2070 Black edition
    Fractal Arc Midi |Seasonic X650 PSU | Klipsch ProMedia 5.1 Ultra | Windows 10 Pro x64

  11. #11
    LOLWUT ImaNihilist's Avatar
    Join Date
    Nov 2001
    Location
    San Francisco
    Posts
    14,034
    Quote Originally Posted by James View Post
    There needs to be more focus in multithreading the storage back end. That's where the major problem lies.

    If they used GPU type subprocessors for things like parity calculations, hardware RAID could actually be a performance contender. As it stands now, they are so far behind software RAID solutions it's not even funny.
    Multithreaded storage is kind of Fusion IO's thing too. Their new PCIe x16 card has some kind of absurd bandwidth, like 4GB/s writes and 7GB/s reads.

  12. #12
    Great White Shark
    Join Date
    Nov 2000
    Location
    Alpharetta, Denial, Only certain songs.
    Posts
    9,925
    Quote Originally Posted by ImaNihilist View Post
    Multithreaded storage is kind of Fusion IO's thing too. Their new PCIe x16 card has some kind of absurd bandwidth, like 4GB/s writes and 7GB/s reads.
    They must have done a radical redesign then, because their earlier cards (FusionIO, FusionIO Duo, etc.) were horribly slow and could only reach their marketing claims when striped across multiple cards.

    Even then, if you put a filesystem across them (getting back to something like irqd), your performance tanks. Big time. All filesystems were written to hide the latency of slow storage. This is an inherent design flaw when they're paired with something that is high speed.

    For raw device access, I have yet to find anything that comes close to the price/performance ratio of something like the RamSan. It's that far ahead of a FusionIO card. (To be fair, I haven't tested their ultra high end.)

    Crusader for the 64-bit Era.
    New Rule: 2GB per core, minimum.

    Intel i7-9700K | Asrock Z390 Phantom Gaming ITX | Samsung 970 Evo 2TB SSD
    64GB DDR4-2666 Samsung | EVGA RTX 2070 Black edition
    Fractal Arc Midi |Seasonic X650 PSU | Klipsch ProMedia 5.1 Ultra | Windows 10 Pro x64

  13. #13
    Invisible Modfish Vindir's Avatar
    Join Date
    Dec 2000
    Location
    Georgia
    Posts
    2,689
    Quote Originally Posted by Timman_24 View Post
    I believe Watson was fed text files of the questions. I'm pretty sure the enormous number of computer programmers coding these days are not doing it at the machine level. I bet there are only a handful of people who actually write code at that basic level, which is what's needed to progress the software front. I'd venture to guess 90% or more of programmers only know high-level languages such as C and Java. There is no way to tackle unique hardware issues using those.
    You might be disappointed there. Over the last ten years or so the web programming side of things has become such a huge industry that it's more likely the majority don't even know C or Java.

    The numbers are abysmal if you're looking for people who've taken the time to really understand assembly well enough to read through a common disassembly, or even those who are into some of the cooler, more esoteric languages like Haskell, OCaml, Lisp, or Smalltalk.
    Insert ancient Sharky sig here
    [
    Prince Vindir of the OC Crusaders
    Holding Boundaries and Breaking Barriers

    ]

  14. #14
    I don't roll on Shabbos! Timman_24's Avatar
    Join Date
    Aug 2004
    Location
    Urbana, IL
    Posts
    12,648
    Quote Originally Posted by Vindir View Post
    You might be disappointed there. Over the last ten years or so the web programming side of things has become such a huge industry that it's more likely the majority don't even know C or Java.

    The numbers are abysmal if you're looking for people who've taken the time to really understand assembly well enough to read through a common disassembly, or even those who are into some of the cooler, more esoteric languages like Haskell, OCaml, Lisp, or Smalltalk.
    Yes, so it doesn't surprise me that machine-level advances are very slow and take a long time to implement. I can't imagine how long it would take to start over with a new computing technology such as quantum.
    PC: Corsair 550D
    4280k | Asus Rampage Gene | Mushkin 4x4GB | EVGA 780
    Intel 120GB SSD + 2TB Seagate | Seasonic 660 Plat
    2x Alphacool XT45 | Laing DDC | Bitspower

    Currently playing: Civ 5
    Last Game Beaten: Walking Dead
