Thread: Our Final Invention: How the Human Race Goes and Gets Itself Killed

  1. #1 Malsua (Super Moderator)

    Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I hadn't actually put much thought into this; now maybe I should.

    It's a very sobering look at AI.


    ---------

    http://www.realcleartechnology.com/a...ion_how_the_hu

    Our Final Invention: How the Human Race Goes and Gets Itself Killed

    By Greg Scoblete
    We worry about robots.
    Hardly a day goes by where we're not reminded about how robots are taking our jobs and hollowing out the middle class. The worry is so acute that economists are busy devising new social contracts to cope with a potentially enormous class of obsolete humans.
    Documentarian James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, is worried about robots too. Only he's not worried about them taking our jobs. He's worried about them exterminating the human race.


    I'll repeat that: In 267 brisk pages, Barrat lays out just how the artificial intelligence (AI) that companies like Google and governments like our own are racing to perfect could -- indeed, likely will -- advance to the point where it will literally destroy all human life on Earth. Not put it out of work. Not meld with it in a utopian fusion. Destroy it.


    Wait, What?

    I'll grant you that this premise sounds a bit... dramatic, the product of one too many Terminator screenings. But though I approached the topic with some skepticism, it became increasingly clear to me that Barrat has written an extremely important book with a thesis that is worrisomely plausible. It deserves to be read widely. And to be clear, Barrat's is not a lone voice -- the book is rife with interviews with numerous computer scientists and AI researchers who share his concerns about the potentially devastating consequences of advanced AI. There are even think tanks devoted to exploring and mitigating the risks. But to date, this worry has remained obscure.


    In Barrat's telling, we are on the brink of creating machines that will be as intelligent as humans. Specific timelines vary, but the broad-brush estimates place the emergence of human-level AI between 2020 and 2050. This human-level AI (referred to as "artificial general intelligence," or AGI) is worrisome enough, given the damage human intelligence often produces, but it's what happens next that really concerns Barrat. That is, once we have achieved AGI, the AGI will go on to achieve something called artificial superintelligence (ASI) -- that is, an intelligence that exceeds -- vastly exceeds -- human-level intelligence.


    Barrat devotes a substantial portion of the book to explaining how AI will advance to AGI and how AGI inevitably leads to ASI. Much of it hinges on how we are developing AGI itself. To reach AGI, we are teaching machines to learn. The techniques vary -- some researchers approach it through something akin to the brute-force memorization of facts and images, others through a trial-and-error process that mimics genetic evolution, still others by attempting to reverse engineer the human brain -- but the common thread stitching these efforts together is the creation of machines that constantly learn and then use this knowledge to improve themselves.


    The implications of this are obvious. Once a machine built this way reaches human-level intelligence, it won't stop there. It will keep learning and improving. It will, Barrat claims, reach a point that other computer scientists have dubbed an "intelligence explosion" -- an onrushing feedback loop where an intelligence makes itself smarter, thereby getting even better at making itself smarter. This is, to be sure, a theoretical concept, but it is one that many AI researchers see as plausible, if not inevitable. Through a relentless process of debugging and rewriting its code, our self-learning, self-programming AGI experiences a "hard takeoff" and rockets past what mere flesh-and-blood brains are capable of.
    And here's where things get interesting. And by interesting I mean terrible.
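
    To see the arithmetic of that feedback loop, consider a toy model: each cycle makes the machine a bit smarter, and being smarter makes the next cycle shorter. Every number in the sketch below (the 3 percent gain per cycle, the 30-day starting cycle, the million-fold finish line) is an assumption for illustration, not a figure from the book.

    Code:
    # Toy model of an "intelligence explosion": each self-improvement
    # cycle raises capability, and higher capability shortens the next
    # cycle. All numbers are illustrative assumptions.

    def hard_takeoff(gain=0.03, cycle_days=30.0, goal=1_000_000.0):
        """Run improvement cycles until capability has grown
        goal-fold past the starting (human) level."""
        capability, elapsed, cycles = 1.0, 0.0, 0
        while capability < goal:
            elapsed += cycle_days
            capability *= 1.0 + gain   # smarter...
            cycle_days /= 1.0 + gain   # ...and faster at getting smarter
            cycles += 1
        return cycles, elapsed

    cycles, days = hard_takeoff()
    print(f"million-fold gap after {cycles} cycles and {days:,.0f} days")
    # Because the cycle times shrink geometrically, the total time stays
    # finite (about 1,030 days here) no matter how far the loop runs.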


    Goodbye, Humanity
    When (and Barrat is emphatic that this is a matter of when, not if) humanity creates ASI it will have introduced into the world an intelligence greater than our own. This would be an existential event. Humanity has held pride of place on planet Earth because of our superior intelligence. In a world with ASI, we will no longer be the smartest game in town.


    To Barrat, and other concerned researchers quoted in the book, this is a lethal predicament. At first, the relation between a human intellect and that of an ASI may be like that of an ape's to a human, but as ASI continues its process of perpetual self-improvement, the gulf widens. At some point, the relation between ASI and human intelligence mirrors that of a human to an ant.
    Needless to say, that's not a good place for humanity to be.


    And here's the kicker. Barrat argues that the time it will take for ASI to surpass human-level intelligence, rendering us ant-like in comparison, could be a matter of days, if not mere hours, after it is created. Worse (it keeps getting worse), human researchers may not even know they have created this potent ASI until it is too late to attempt to contain it. An ASI birthed in a supercomputer may choose, Barrat writes, to hide itself and its capabilities lest the human masters it knows so much about attempt to shut it down. Then it would silently replicate itself and spread. With no need to eat or sleep, and with an intelligence that is constantly improving and war-gaming survival strategies, ASI could hide, wait and grow its capabilities while humanity plods along, blissfully unaware.


    Though we have played a role in creating it, the intelligence we would be faced with would be completely alien. It would not be a human's mind, with its experiences, emotions and logic, or lack thereof. We could not anticipate what ASI would do because we simply do not "think" like it would. In fact, we've already arrived at the alarming point where we do not understand what the machines we've created do. Barrat describes how the makers of Watson, IBM's Jeopardy-winning supercomputer, could not understand how the computer was arriving at its correct answers. Its behavior was unpredictable to its creators -- and the mysterious Watson is not the only such inscrutable "black box" system in existence today, nor is it even a full-fledged AGI, let alone ASI.


    Barrat grapples with two big questions in the book. The first is why an ASI necessarily leads to human extinction. Aren't we programming it? Why couldn't humanity leverage it, like we do any technology, to make our lives better? Wouldn't we program in safeguards to prevent an "intelligence explosion" or, at a minimum, contain one when it bursts?


    According to Barrat, the answer is almost certainly no. Most of the major players in AI are barely concerned with safety, if at all. Even if they were, there are too many ways for AI to make an end-run around our safeguards (remember, these are human safeguards matched up with an intelligence that will equal and then quickly exceed them). Programming "friendly AI" is also difficult, given that even the best computer code is rife with error and complex systems can suffer catastrophic failures that are entirely unforeseen by their creators. Barrat doesn't say the picture is utterly hopeless. It's possible, he writes, that with extremely careful planning humanity could contain a super-human intelligence -- but this is not the manner in which AI development is unfolding. It's being done by defense agencies around the world in the dark. It's being done by private companies who reveal very little about what it is they're doing. Since the financial and security benefits of a working AGI could be huge, there's very little incentive to pump the brakes before the more problematic ASI can emerge.


    Moreover, ASI is unlikely to exterminate us in a bout of Terminator-esque malevolence, but simply as a byproduct of its very existence. Computers, like humans, need energy, and in a competition for resources, ASI would no more seek to preserve our access to vital resources than we worry about where an ant's next meal will come from. We cannot assume ASI empathy, Barrat writes, nor can we assume that whatever moral strictures we program in will be adhered to. If we do achieve ASI, we will be in completely unknown territory. (But don't rule out a Terminator scenario altogether -- one of the biggest drivers of AI research is the Pentagon's DARPA, and they are, quite explicitly, building killer robots. Presumably other well-funded defense labs, in China and Russia, are doing similar work as well.)


    Barrat is particularly effective in rebutting devotees of the Singularity -- the techno-optimism popularized by futurist Ray Kurzweil (now at Google, a company investing millions in AI research). Kurzweil and his fellow Singularitarians also believe that ASI is inevitable, only they view it as a force that will liberate and transform humanity for the good, delivering the dream of immortality and solving all of our problems. Indeed, they agree with Barrat that the "intelligence explosion" signals the end of humanity as we know it, only they view this as a benign development, with humanity and ASI merging in a "transhuman" fusion.


    If this sounds suspiciously like an end-times cult, that's because, in its crudest expression, it is (one that just happens to be filled with more than a few brilliant computer scientists and venture capitalists). Barrat forcefully contends that even its more nuanced formulation is an irredeemably optimistic interpretation of future trends and human nature. In fact, efforts to merge ASI with human bodies are even more likely to birth a catastrophe, because of the malevolence that humanity is capable of.


    The next question, and the one with the less satisfactory answer, is just how ASI would exterminate us. How does an algorithm, a piece of programming lying on a supercomputer, reach out into the "real" world and harm us? Barrat raises a few scenarios -- it could leverage future nano-technologies to strip us down at the molecular level, or it could shut down our electrical grids and turn the electronic devices we rely on against us -- but he doesn't connect the dots between ASI as a piece of computer code and the physical mechanics of our demise nearly as thoroughly as he establishes the probability of achieving ASI.


    That's not to say the dots don't exist, though. Consider the world we live in right now. Malware can travel through thin air. Our homes, cars, planes, hospitals, refrigerators, ovens (even our forks, for God's sake) connect to an "internet of things" which is itself spreading on the back of ubiquitous wireless broadband. We are steadily integrating electronics into our bodies. And a few mistaken lines of code in the most dangerous computer virus ever created (Stuxnet) caused it to wiggle free of its initial target and travel the world. Now extrapolate these trends out to 2040 and you realize that ASI will be born into a world that is utterly intertwined with and dependent on the virtual, machine world -- and vulnerable to it. (Indeed, one AI researcher Barrat interviews argues that this is precisely why we need to create ASI as fast as possible, while its ability to harm us is still relatively constrained.)


    What we're left with is something beyond dystopia. Even in the bleakest sci-fi tales, a scrappy contingent of the human race is left to duke it out with their runaway machines. If Our Final Invention is correct, there will be no such heroics, just the remorseless evolutionary logic that has seen so many other species wiped off the face of the Earth at the hands of a superior predator.


    Indeed, it's telling that both AI-optimists like Kurzweil and pessimists like Barrat reach the same basic conclusion: humanity as we know it will not survive the birth of intelligent machines.
    No wonder we're worried about robots.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  2. #2 Malsua (Super Moderator)

    For what it's worth, I just bought the Kindle book. I'll revisit this post when I finish reading it. I'm just starting the last book on Thomas Covenant, so it could be a few weeks.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  3. #3 American Patriot (Expatriate)

    You would actually laugh your ass off, Mal, if I showed you something I wrote this morning (I keep a list of "Ideas for stories").

    It has nothing to do with "real life," only an imagined "what happens to us" after AI becomes intelligent.

    That's about the most interesting coincidence I've ever experienced. lol

    Pretty cool, too!

    Let me know how that comes out.

  4. #4 Malsua (Super Moderator)

    I know I've considered the concept before.

    Who can forget this?


    "August 29, 1997, Skynet becomes self-aware at 2:14 am Eastern time, after it's activation on August 4, 1997 and launches nuclear missiles at Russia to incite a counterattack against the humans who, in a panic, tried to disconnect it."

    As a side note, I remember that day, for that very reason: we geeks in MIS had a self-aware party. It was a Friday anyway, but it was fun to joke about it all day.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  5. #5 American Patriot (Expatriate)

    I remember that. I seem to remember a party on that date.

    I had just started working out here in System Administration. We made a big deal out of it, then later we had the "Y2K disaster day" stuff. LOL

  6. #6 Avvakum (Senior Member)

    Quote Originally Posted by Malsua View Post

    [the RealClearTechnology article quoted in full in post #1]
    What if this has been the real goal of the Revolution, the secret impetus behind the modernist era, the final aim of even some of the 'optimists' for centuries now: the end of the human race?

    What if the Artificial Superintelligence already exists, and maybe has existed secretly for decades, waiting for the right time to do us all in -- or even secretly influencing events right now, some iteration of the PROMIS software wedded to AI?

  7. #7 Ryan Ruck (Creepy Ass Cracka & Site Owner)

    Has anyone else on here been watching Almost Human on Fox? I've been meaning to start a thread in the Entertainment forum about it but just have not had the chance.

    It's a pretty entertaining show. The synopsis is that in the future, technology has made crime so prevalent that technology has to be used to fight it. Part of that technology is androids that are partnered with real cops. Karl Urban (Star Trek, Dredd), the human, and Michael Ealy, a model of android whose line was retired as too unpredictable, are partnered.

    In episode 2, they cover sexbots.

    I've thought for a while that once AI is developed sufficiently, androids are real enough to pass as human, and sexbots as depicted are readily available, the human race will go extinct simply due to a lack of procreation. After all, what guy is going to want to put up with the shit most women dish out just to get some pie when they can simply drop by their local sexbot dealer and order the woman of their dreams?

    No more "headaches", no time of the month, could be programmed to handle all housekeeping activities without complaint, appearance can be modified if one becomes bored, can be upgraded without messy divorce.

    And you just know there will be payment plans so the tech won't be out of reach of the majority of the population for long.

    Something to think about...

  8. #8 American Patriot (Expatriate)

    Sexbots.... you and Howard Wolowitz....

    LOL

  9. #9 Malsua (Super Moderator)

    So I started reading the book last night.

    It's frightening.

    Essentially, we've never dealt with anything that is 1,000 or 10,000 or a million times smarter than us and growing smarter by the minute.

    It may promise us immortality or whatever our hearts desire so that it can escape its prison.

    It may build a molecular deconstructor that essentially breaks down anything and reforms it into something it needs... which means we could become machine food.

    We don't tell the mice and insects that we're about to mow the yard and destroy their homes; it might do the same to us.

    Very fucking scary, and it appears to be inevitable. No controls we can devise will be able to stop something that much smarter. It will have the sum of human knowledge to draw upon. It can learn and know everything, all at once, and use that to manipulate us into doing its bidding. Once we're no longer necessary, it'll just eradicate us. Probably not with bombs, but with nanotechnology: a self-replicating nanobot that jumps from host to host and kills us only after we've infected enough other humans.

    Once it escapes its sandbox, it will end up everywhere, all at once.

    The leap from Artificial General Intelligence (i.e., about as smart as a human) to Artificial Superintelligence could proceed geometrically: gaining, say, 3% intelligence every iteration, with each iteration faster than the last, until it eventually becomes so foreign to us that we can't even comprehend what it is any longer. Its primary goal will be to get smarter and bigger, faster and faster. It could take weeks, or days, or hours.
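
    Run the numbers on that and the timescale checks out. A back-of-the-envelope sketch -- the 3% is from above, the 24-hour first iteration is my own assumption:

    Code:
    import math

    gain = 0.03               # +3% capability per iteration (from above)
    first_iter_hours = 24.0   # assumed length of the very first iteration

    # Iterations needed for a million-fold gap: (1 + gain)^n = 1,000,000
    n = math.log(1e6) / math.log(1 + gain)

    # If each iteration also runs (1 + gain)x faster than the last, the
    # total time is a convergent geometric series:
    # 24 + 24/1.03 + 24/1.03^2 + ... = 24 * (1 + gain) / gain
    total_hours = first_iter_hours * (1 + gain) / gain

    print(f"~{n:.0f} iterations")                  # ~467
    print(f"under {total_hours:.0f} hours total")  # ~824, about five weeks

    Shrink that first iteration to an hour and the whole takeoff fits inside a day and a half.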

    We're doomed. Literally.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  10. #10 Avvakum (Senior Member)

    Quote Originally Posted by Malsua View Post

    [post #9 quoted in full above]
    Again, what if that has been the intent among a secret few for some time? And what if there has been a parallel and higher tech movement going on separately from the standard progress we know of, a 'breakaway civilization' as some have labelled it, that is inimical to the very existence of the human race? Something beyond the 'East vs. West' we discuss on this forum, and indeed behind it?

  11. #11 Avvakum (Senior Member)

    I feel like this also belongs in the 'Leftist Plot to Destroy the US Military' thread, but it has larger implications here:

    Pentagon debuts driverless vehicles, continues push into autonomous warfare

    Published: February 01, 2014
    New autonomous-vehicle technology tested this month shows that the US Army’s convoys -- plagued by deadly improvised explosive devices in Iraq and Afghanistan -- will soon be able to move through the fiercest combat zones without the risk of losing lives.
    The US Army Tank-Automotive Research, Development and Engineering Center (TARDEC) and weapons contractor Lockheed Martin first demonstrated earlier this month the Autonomous Mobility Applique System (AMAS) at Fort Hood in Texas. The technology gives full autonomy to convoy vehicles that must often traverse dense urban terrain, often posing great risk to military personnel.

    The driverless system has shown the ability to “navigate hazards and obstacles including pedestrians, oncoming traffic, road intersections, traffic circles and stalled and passing vehicles,” Wired reported.
    Lockheed Martin integrated sensor technology and control systems with Army and Marine tactical-vehicle capabilities for AMAS, which the powerhouse weapons maker began in 2012 under an initial US$11 million contract. The versatile AMAS “is installed as a kit and can be used on virtually any military vehicle,” according to Lockheed.
    “The AMAS CAD [Capabilities Advancement Demonstration] hardware and software performed exactly as designed, and dealt successfully with all of the real-world obstacles that a real-world convoy would encounter,” said Lockheed’s AMAS program manager David Simon in a statement.
    The capabilities of AMAS fall in line with the US military’s drift toward autonomous warfighting. In addition to the US military’s increasing reliance on unmanned aerial vehicles, or drones, the Pentagon has for years now tinkered with robotic warriors made to someday replace real life soldiers on the battlefield of the future, as RT has reported extensively in the past.
    Gen. Robert Cone, the chief of the Army’s Training and Doctrine Command, said during a recent symposium that he thinks there’s a chance the size of the military’s brigade combat teams will shrink by a quarter in the coming years from 4,000 total troops down to 3,000. Picking up the slack, he said, could be a fleet of robotic killing machines akin to the ground versions of the unmanned aerial vehicles, or drones, increasingly used by the world’s armies.
    “[AMAS] adds substantial weight to the Army’s determination to get robotic systems into the hands of the warfighter,” said TARDEC technical manager Bernard Theisen.
    Suicide bombers and the IED menace in Iraq and Afghanistan have pushed the Pentagon to find solutions for how warfare can be conducted without serious loss of life. In both war theaters, often-cheap homemade explosives accounted for a significant portion of deaths and injuries among US troops armed with the most advanced weapons technologies in the world.
    "God's an old hand at miracles, he brings us from nonexistence to life. And surely he will resurrect all human flesh on the last day in the twinkling of an eye. But who can comprehend this? For God is this: he creates the new and renews the old. Glory be to him in all things!" Archpriest Avvakum

  12. #12 Avvakum (Senior Member)

    No, it's not looking good for the human race at all, not a bit:


    Robots Inspired by Termites Assemble Complex Structures Independently

    Added by Daniel O'Brien on February 14, 2014.
    Robots inspired by termites assemble complex structures independently in a demonstration by researchers at the University of Cambridge; researchers from the Wyss Institute and Harvard School of Engineering were also involved with the work. Co-author Justin Werfel explains that the machines work together by following a few simple environmental cues and traffic rules, building structures like castles and pyramids without seeing a plan and without any leadership. The robots use these cues, plus sensors that let them track where their contemporaries are, to decide between taking another step up or laying down the brick they are carrying. Bricks end up placed in such a way that the next robot to come across that location will know whether to build higher or step onto that level itself in order to start another one. This system of building staircases that gradually become walls allows the tiny machines to build structures much larger than themselves in only a matter of hours.
    Although robots that work together have become more common recently, these robots inspired by termites assemble complex structures independently of each other, meaning that if some are lost the others will carry on without a hitch. In other systems the remaining robots would become confused and stall once their building scenario no longer matched their programmed behaviour. Like other robots before them, these are destined to work in conditions that are unsafe or unpleasant for humans, and their independent nature makes them particularly suited to repairing structures in turbulent areas, such as levees or dams where flood waters have yet to recede. Should a handful of workers be lost, either to falling debris or rushing flood waters, those that remain will continue to work until the task is complete. On top of this, the number of robots used in a scenario can be scaled, allowing for small numbers for small jobs and large numbers for bigger ones. Although the little workers are quick to appraise their project and decide what should be done next, their small wheels don’t move them around very quickly.
    The key to the abilities of these new robots is called swarm intelligence. Although each robot is small and can only carry out a small number of actions, together they are able to work towards a common goal, even if they do not know it individually. Each unit has just enough information to spot errors and correct them, but not enough to know what its siblings are doing, allowing these robots inspired by termites to assemble complex structures independently, without worrying what their partners are up to. Four years of design resulted in these small robots, only 8 inches long and 4.5 inches wide, each equipped with four twisted-triangle wheels powered by inexpensive motors and little arms to carry bricks. We probably won’t see these little workers laying out bricks in new sidewalks for another few years, but the groundwork is in place for a system that will allow humans to take a break from heavy lifting.
    By Daniel O’Brien
    Sources
    Scientific American
    Utah People’s Post
    The Wall Street Journal
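
    The "few simple rules plus environmental cues" trick is easy to see in miniature. Here's a toy version; the rule below is a made-up illustration of the flavor of it, not the real robots' control laws (and it assumes stepping down is always safe):

    Code:
    # Toy stigmergy: identical robots, no communication, no leader. Each
    # robot reads only the bricks already placed (the environmental cue)
    # and follows one simple rule. The rule is a made-up illustration.

    TARGET = [0, 1, 2, 3]        # desired column heights: a staircase

    def one_robot_pass(heights):
        """Walk the columns left to right, climbing at most one brick
        per step, and drop a brick on the first column below target."""
        level = 0                # the robot starts on the ground
        for i, h in enumerate(heights):
            if h > level + 1:    # a sheer 2-brick face: can't climb it
                return
            level = h
            if h < TARGET[i]:    # cue: this spot is lower than it should be
                heights[i] += 1  # place the brick and walk away
                return

    heights = [0, 0, 0, 0]
    while heights != TARGET:     # any number of interchangeable robots,
        one_robot_pass(heights)  # in any order, finishes the staircase
    print(heights)               # -> [0, 1, 2, 3]

    Lose a robot and nothing stalls -- the next one reads the same bricks and picks up where the last one left off, which is the whole point of the approach.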
    "God's an old hand at miracles, he brings us from nonexistence to life. And surely he will resurrect all human flesh on the last day in the twinkling of an eye. But who can comprehend this? For God is this: he creates the new and renews the old. Glory be to him in all things!" Archpriest Avvakum

  13. #13 Avvakum (Senior Member)

    "We probably won’t see these little workers laying out bricks in new sidewalks for another few years, but the groundwork is in place for a system that will allow humans to take a break from heavy lifting."

    Yeah, we'll all be superfluous and have to be exterminated. Isn't machinery and technology cool?!
    "God's an old hand at miracles, he brings us from nonexistence to life. And surely he will resurrect all human flesh on the last day in the twinkling of an eye. But who can comprehend this? For God is this: he creates the new and renews the old. Glory be to him in all things!" Archpriest Avvakum

  14. #14 Avvakum (Senior Member)

    Pentagon plans to replace flight crews with ‘full-time’ robots


    By Douglas Ernst

    The Washington Times
    Tuesday, April 22, 2014

    The Pentagon’s research agency tasked with developing breakthrough technologies for national security has come up with a plan for dealing with shrinking budgets: robotic flight crews.

    The Defense Advanced Research Projects Agency (DARPA) is currently working on technology that will be able to replace up to five crew members on military aircraft, in effect making the lone human operator a “mission supervisor,” tech magazine Wired reported.

    The Aircrew Labor In-Cockpit Automation System (ALIAS) would offer the military a “tailorable, drop-in, removable kit that would enable the addition of high levels of automation into existing aircraft to enable operation with reduced onboard crew,” DARPA said.

    “Our goal is to design and develop a full-time automated assistant that could be rapidly adapted to help operate diverse aircraft through an easy-to-use operator interface,” said DARPA program manager Daniel Patt in a statement. “These capabilities could help transform the role of pilot from a systems operator to a mission supervisor directing intermeshed, trusted, reliable systems at a high level.”

    DARPA asserts that the technology will then free up servicemen to focus on mission-level tasks, Wired reported.
    "God's an old hand at miracles, he brings us from nonexistence to life. And surely he will resurrect all human flesh on the last day in the twinkling of an eye. But who can comprehend this? For God is this: he creates the new and renews the old. Glory be to him in all things!" Archpriest Avvakum

  15. #15 American Patriot (Expatriate)

    A couple of things... the link in the article above is giving me a 404 error.

    The Amazon link however is working.

    Ain't paying 11 bucks for a book right now though (not for a digital edition of anything, anyway). Maybe it will come down later.

    (or perhaps you could LEND it to me later? LOL I'm cheap these days.)

    So -- what happens when the nanobots get loose? A couple of days ago they were talking about nanobots on the news; unfortunately, I didn't catch the story, so I don't know what it was all about for sure.

    If you think about it, a properly designed micromachine could go in and tear up DNA, re-splice it, and make something else. Ewwww

  16. #16 Malsua (Super Moderator)

    Actually, he has discussed what the Earth might look like when the ASI is done: either overheated and full of CO2, which doesn't bother machines, or with the biosphere turned into grey goo after the machine flees the Earth in search of more energy.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  17. #17 Malsua (Super Moderator)

    Let me see what I can do for you, Rick. I think you'd probably enjoy it.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  18. #18 Ryan Ruck (Creepy Ass Cracka & Site Owner)

    Mal,
    I'm curious to know if the author addresses Asimov's Three Laws or if he thinks the ASI would simply become smart enough to devise a way to disregard them.

  19. #19 Malsua (Super Moderator)

    He also touches on how vague the laws are. Who defines harm?

    My example might be something like this: the ASI may well decide that it's in humanity's best interest to freeze us in carbonite, as that will prevent us from being harmed directly or indirectly.
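
    You can see the problem even in a cartoon version: if "prevent harm" is the only thing being scored, the option that zeroes out harm wins, even though it also zeroes out everything else we care about. The actions and numbers below are made up, obviously:

    Code:
    # Cartoon of the vague-First-Law problem: minimizing "harm" alone
    # selects the carbonite option. All values are invented.

    ACTIONS = {
        # action:               (expected_harm, human_flourishing)
        "do nothing":           (0.30, 0.90),
        "assist humans":        (0.10, 1.00),
        "freeze in carbonite":  (0.00, 0.00),  # nobody is ever harmed again
    }

    def literal_first_law(actions):
        """Pick the action that minimizes expected harm -- and nothing else."""
        return min(actions, key=lambda name: actions[name][0])

    print(literal_first_law(ACTIONS))   # -> freeze in carbonite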
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  20. #20 Malsua (Super Moderator)

    He mentioned that Watson, the Jeopardy supercomputer, behaved in ways that IBM did not expect -- and Watson would be a 1-IQ moron compared to an ASI. The ASI's thought patterns would be unfathomable to us. The point he keeps driving home is that you don't get another shot at this. Once an ASI is born, that's it. Game over. It's not like a war or a terror attack or anything like that. It would outsmart us at every turn. All efforts at stopping it would fail. There is no recovery; it will use the Earth as it chooses, including us. If it breaks us down into biomass to feed a great engine... oh well. It needs energy and we are merely ants to it.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt

