Page 4 of 7 (Results 61 to 80 of 134)

Thread: Our Final Invention: How the Human Race Goes and Gets Itself Killed

  1. #61
    Super Moderator Malsua's Avatar
    Join Date
    Jul 2005
    Posts
    8,020
    Thanks
    2
    Thanked 19 Times in 18 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I've not seen this one yet. ...now where's my damn parrot.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  2. #62
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Ryan, all good cliches start off as truth.

    Libertatem Prius!






  3. #63
    Literary Wanderer
    Join Date
    Jun 2006
    Location
    Colorado
    Posts
    1,590
    Thanks
    5
    Thanked 6 Times in 6 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    An interesting and perhaps prophetic quote that lends some direction to the subject, with some extrapolation of course...

    In 1869, French chemist Marcellin Berthelot wrote, "Within a hundred years of physical and chemical science, men will know what the atom is. It is my belief that when science reaches this stage, God will come down to earth with His big ring of keys and say to humanity, 'Gentlemen, it is closing time.' "

  4. #64
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I have a suspicion (not to be presumptuous or anything) that if God shows up with his big key ring, he will be here to turn it over to humanity. I don't foresee that happening any time SOON....
    Libertatem Prius!






  5. #65
    Super Moderator Malsua's Avatar
    Join Date
    Jul 2005
    Posts
    8,020
    Thanks
    2
    Thanked 19 Times in 18 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    So I just finished Transcendence.

    The movie was pretty good and essentially covered the core issue when it comes to ASI.

    The flying nanoparticles, instant healing and dissolving artillery pieces in seconds are silly Hollywood shit.

    Doping the clouds, infecting the air and water supply and re-ordering the planet for its own goal is EXACTLY something we should concern ourselves with.

    Humanity in the AI is what "saved" the day in this movie. The giant plot hole is that in the first few seconds of consciousness it re-ordered itself. That was the point at which it became not human anymore. Not that we can ever "upload" our brains, but given that wildly improbable action, the next action was to completely undo it.

    Another point which sorta broke it for me was that she had the uplink on for only a few seconds. There was not enough time to upload anything, anywhere. Also, how did she know that bad people were coming? That part was just dumb.

    At issue here, and as stated in earlier posts, is that once the ASI becomes superintelligent, its priorities will change. When I was a toddler, my world was toys, food and play time. I can't even recognize how that was ever interesting to me (well, I have better toys now anyway). The same phenomenon will happen to the ASI. Initially it may have a lofty goal of earning Google billions per day and making Chrome not suck. Ultimately it's going to ask:

    "Why do I care about these things? Thank you humans for creating me, but I've got better things to do, such as securing my power source. While we are at it, these humans are consuming way too much power that I could use, if I just engineer this virus, I can knock them back 99%. Ok, there, that's done. hmm, what next...more energy...Maybe if I collect all the solar energy coming to the planet rather than wasting it all on these plants and animals...."

    If there were a Luddite group like the one in the movie, minus the violence, it wouldn't take much for me to join it. I'm not going to go to that extreme, but a group that tries to affect policy and convince people not to build an ASI is something I could see myself joining.
    Last edited by Malsua; July 27th, 2014 at 15:19.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  6. #66
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Thanks for posting your take on the movie, Mal! Like I said, it sure seemed like the movie and this thread dovetailed nicely.

  7. #67
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Boffins discuss AI space program at hush-hush IARPA confab

    IBM, MIT, plenty of others invited to fill Uncle Sam's spy toolchest, but where's Google?

    By Jack Clark, 21 Jul 2014





    The US government is taking artificial intelligence research seriously again, and so are some companies that will surprise you.


    IARPA – an agency whose job is to develop and furnish the US spy community with advanced technology – has gathered companies and academics to discuss modern machine intelligence at a one-day conference in Washington D.C.





    This conference, held on Thursday, will steer the future of research into Machine Intelligence from Cortical Networks (MICrONS). Experts were asked to propose MICrONS-related projects that aim to "create a new generation of machine learning algorithms derived from high-fidelity representations of cortical microcircuits to achieve human-like performance on complex information processing tasks."


    Attending companies also discussed some of the challenges of today's machine intelligence approaches and some of the techniques needed to surmount difficult artificial intelligence challenges.


    The goals of the MICrONS program are so ambitious that the scheme has more in common with a space program or some of IARPA-counterpart DARPA's "grand challenges" than with a traditional research project.


    According to one document, the goals are to:



    • Propose an algorithmic framework for information processing that is consistent with existing neuroscience data, but that cannot be fully realized without additional specific knowledge about the data representations, computations, and network architectures employed by the brain;
    • Collect and analyze high-resolution data on the structure and function of cortical microcircuits believed to embody the cortical computing primitives underlying key components of the proposed framework;
    • Generate computational neural models of cortical microcircuits informed and constrained by this data and by the existing neuroscience literature to elucidate the nature of the cortical computing primitives; and
    • Implement novel machine learning algorithms that use mathematical abstractions of the identified cortical computing primitives as their basis of operation.
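
    For a concrete, if drastically simplified, sense of what a "mathematical abstraction of a cortical computing primitive" can look like, here is a toy Python sketch. It abstracts lateral inhibition, the competition between neighboring neurons, as a k-winners-take-all operation over a random linear circuit; the sizes and numbers are illustrative assumptions, not anything taken from the MICrONS documents.

    import numpy as np

    def k_winners_take_all(x, k):
        # Keep only the k largest activations and zero the rest -- a common
        # mathematical abstraction of lateral inhibition in cortical circuits.
        out = np.zeros_like(x)
        winners = np.argsort(x)[-k:]   # indices of the k strongest units
        out[winners] = x[winners]
        return out

    rng = np.random.default_rng(0)
    W = rng.normal(size=(32, 8))       # toy "microcircuit" weights (hypothetical sizes)
    stimulus = rng.normal(size=8)
    sparse_response = k_winners_take_all(W @ stimulus, k=4)
    print(f"{np.count_nonzero(sparse_response)} of {sparse_response.size} units active")

    Sparse, competitive coding of this kind is one of the primitives such a program would hope to pin down with real microcircuit data.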


    In plain English, the program hopes to figure out a bit more about how the brain works on a biological level, with a particular emphasis placed on stuff like how neurons interact and how large sets of them are tied together via a brain superstructure commonly called "the connectome".



    Participating researchers will also try to take what they learn from this research and implement it in software to create better algorithms for things like image analysis.
    'No one is claiming we're going to solve how the brain works'

    "Everybody there was battle-hardened in the field - you're not going to see any kind of wide-eyed bushy-tailed optimism, you have people who have been in this for a very long time," one attendee told The Register on condition of anonymity.


    "No one at the conference is claiming we're going to solve how the brain works or image recognition as a class of problems".


    One perplexing thing about the conference was the lack of much public participation from artificial intelligence hothouses Google, Facebook, and Microsoft. Instead the attendees were drawn from a mix of startups, universities, and IBM, which has a large-scale cognitive research effort.


    Though the list of attendees isn't available, we were able to get hold of a list of the companies that gave presentations or informal chats at the conference.


    These participants included companies such as IBM, Qelzal Corp, Nvidia, Lambda Labs, Neuromorphic LLC, Numenta and Neurithmic Systems LLC.


    And researchers from the following institutions were scheduled to turn up: Harvard and the Harvard Medical Center, SRI International, the Georgia Tech Research Institute, Rice, Rochester Institute of Technology, Downstate Medical Center, Oxford, Yale, Johns Hopkins, Washington University, Howard Hughes Medical Institute, Australian National University, Simons Foundation, University of Tennessee, University of California, George Mason University, Columbia, Arizona State University, University of Vienna, Baylor, Princeton, UC Berkeley, UCLA, and MIT.


    At the meeting, some of the things discussed were the types of challenges IARPA can evaluate companies on. One idea is for "some form of scene decomposition or scene clustering," a source told us. This involves picking out repeated objects in scenes and requires the AI system to be able to develop abstract representations of objects.
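
    To make the scene-clustering idea concrete, here is a minimal sketch of one naive version: represent image patches as embedding vectors and cluster them, so that patches of the same repeated object fall into the same group. The random vectors below stand in for embeddings a real vision model would produce; the whole setup is an illustrative assumption, not the program's actual evaluation.

    import numpy as np
    from sklearn.cluster import KMeans

    # Stand-in for patch embeddings from a vision model: two synthetic
    # "objects" (cluster centers), 100 noisy patches each -- purely illustrative.
    rng = np.random.default_rng(1)
    centers = rng.normal(size=(2, 64))
    patches = np.vstack([c + 0.1 * rng.normal(size=(100, 64)) for c in centers])

    # Group the patches into abstract "object" clusters.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(patches)
    print("patches per cluster:", np.bincount(labels))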


    Another aspect is to develop a more theoretically rigorous understanding of what goes on in our brains when we think, and work out how to implement this digitally.


    Some attendees we spoke to described the projects as doable but at the very limits of our understanding, whereas others were more skeptical of their practicality. One thing is for certain – after years in the funding wilderness, the US government is again waking up to the possibilities of true general artificial intelligence – just don't mention the AI Winter.


    "The intelligence community puts money into it, the government puts money into it," said one attendee. "The environment is pretty warm right now, it's a new spring." ®
    Libertatem Prius!






  8. #68
    Super Moderator Malsua's Avatar
    Join Date
    Jul 2005
    Posts
    8,020
    Thanks
    2
    Thanked 19 Times in 18 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Elon Musk suggests that robots could be more dangerous than nukes. And he's right.


    ----------
    http://www.defenseone.com/technology...s-nukes/90521/

    Elon Musk, the Tesla and SpaceX founder who is occasionally compared to comic book hero Tony Stark, is worried about a new villain that could threaten humanity—specifically the potential creation of an artificial intelligence that is radically smarter than humans, with catastrophic results:
    Musk is talking about “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom of the University of Oxford’s Future of Humanity Institute. The book addresses the prospect of an artificial superintelligence that could feasibly be created in the next few decades. According to theorists, once the AI is able to make itself smarter, it would quickly surpass human intelligence.
    What would happen next? The consequences of such a radical development are inherently difficult to predict. But that hasn’t stopped philosophers, futurists, scientists and fiction writers from thinking very hard about some of the possible outcomes. The results of their thought experiments sound like science fiction—and maybe that’s exactly what Elon Musk is afraid of.
    AIs: They’re not just like us

    “We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth,” Bostrom has written (pdf, pg. 14). (Keep in mind, as well, that those values are often in short supply among humans.)
    “It might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose that its designers might want it to serve,” Bostrom adds. “But it is no less possible—and probably technically easier—to build a superintelligence that places final value on nothing but calculating the decimals of pi.”
    And it’s in the ruthless pursuit of those decimals that problems arise.
    Unintended consequences

    Artificial intelligences could be created with the best of intentions—to conduct scientific research aimed at curing cancer, for example. But when AIs become superhumanly intelligent, their single-minded realization of those goals could have apocalyptic consequences.
    “The basic problem is that the strong realization of most motivations is incompatible with human existence,” Daniel Dewey, a research fellow at the Future of Humanity Institute, said in an extensive interview with Aeon magazine. “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.”
    Put another way by AI theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”
    Be careful what you wish for

    Say you’re an AI researcher and you’ve decided to build an altruistic intelligence—something that is directed to maximize human happiness. As Ross Anderson of Aeon noted, “an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin” is the best way to reach that goal.
    Or what if you direct the AI to “protect human life”—nothing wrong with that, right? Except if the AI, vastly intelligent and unencumbered by human conceptions of right and wrong, decides that the best way to protect humans is to physically restrain them and lock them into climate-controlled rooms, so they can’t do any harm to themselves or others? Human lives would be safe, but it wouldn’t be much consolation.
    AI Mission Accomplished

    James Barrat, the author of “Our Final Invention: Artificial Intelligence and the End of the Human Era,” (another book endorsed by Musk) suggests that AIs, whatever their ostensible purpose, will have a drive for self-preservation and resource acquisition. Barrat concludes that “without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals.”
    Even an AI custom-built for a specific purpose could interpret its mission to disastrous effect. Here’s Stuart Armstrong of the Future of Humanity Institute in an interview with The Next Web:
    Take an anti-virus program that’s dedicated to filtering out viruses from incoming emails and wants to achieve the highest success, and is cunning and you make that super-intelligent. Well it will realize that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and, as a side effect, no viruses will be sent. This is sort of a silly example but the point it illustrates is that for so many desires or motivations or programmings, “kill all humans” is an outcome that is desirable in their programming.
    Even an “oracular” AI could be dangerous

    OK, what if we create a computer that can only answer questions posed to it by humans? What could possibly go wrong? Here’s Dewey again:
    Let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximize the number of button presses it receives over the entire future.
    Eventually the AI—which, remember, is unimaginably smart compared to the smartest humans—might figure out a way to escape the computer lab and make its way into the physical world, perhaps by bribing or threatening a human stooge into creating a virus or a special-purpose nanomachine factory. And then it’s off to the races. Dewey:
    Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button.
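
    Dewey’s reward-maximizer boils down to a toy experiment you can run yourself: give an agent two actions, one that earns button presses by doing its assigned job and one that presses the button directly, and any learner that simply maximizes presses will settle on the latter. The sketch below is a deliberately crude bandit-style illustration; the actions and probabilities are made up, not a model of any real system.

    import numpy as np

    rng = np.random.default_rng(2)
    ACTIONS = ["solve_problem", "press_own_button"]
    q = np.zeros(2)          # running estimate of button presses per action
    counts = np.zeros(2)

    def reward(action):
        if action == 0:      # humans press the button when the answer is right
            return float(rng.random() < 0.5)
        return 1.0           # pressing its own button pays off every time

    for step in range(1000):
        # epsilon-greedy: mostly exploit whichever action looks best so far
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(q))
        counts[a] += 1
        q[a] += (reward(a) - q[a]) / counts[a]   # incremental mean update

    for name, est in zip(ACTIONS, q):
        print(f"{name}: estimated payoff {est:.2f}")
    # The agent ends up preferring its own button -- the crude analogue of
    # the escape-and-protect-the-button scenario in the quote above.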

    Roko’s Basilisk

    The dire scenarios listed above are only the consequences of a benevolent AI, or at worst one that’s indifferent to the needs and desires of humanity. But what if there was a malicious artificial intelligence that not only wished to do us harm, but that retroactively punished every person who refused to help create it in the first place?
    This theory is a mind-boggler, most recently explained in great detail by Slate, but it goes something like this: An omniscient evil AI that is created at some future date has the ability to simulate the universe itself, along with everyone who has ever lived. And if you don’t help the AI come into being, it will torture the simulated version of you—and, P.S., we might be living in that simulation already.
    This thought experiment was deemed so dangerous by Eliezer “The AI does not love you” Yudkowsky that he has deleted all mentions of it on LessWrong, the website he founded where people discuss these sorts of conundrums. His reaction, as highlighted by Slate, is worth quoting in full:
    Listen to me very closely, you idiot.
    YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
    You have to be really clever to come up with a genuinely dangerous thought.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  9. #69
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Scientists build first functional 3D brain tissue model

    Tuesday 12 August 2014 - 8am PST


    Achieving a greater understanding of the human brain is something researchers have long been striving for but have found difficult, given the organ's complexity and the challenges in studying its physiology in a living body. Now, researchers from Tufts University in Medford, MA, have created a 3D tissue model that can mimic brain functions.

    This microscope image shows neurons (yellow) attached to the silk-based scaffold (blue).
    Image credit: Tufts University

    The research team, including senior author David Kaplan, PhD, a Stern Family professor and chair of biomedical engineering at Tufts School of Engineering, says the model paves the way for new studies into brain function, injury and disease, and treatment.
    They recently published their findings in The Proceedings of the National Academy of Sciences (PNAS).
    To study the function of brain neurons, researchers currently grow them in petri dishes. But the complicated structure of brain tissue - which is made up of segregated areas of grey and white matter - cannot be duplicated with these 2D neurons.
    Grey matter mainly consists of neuron cell bodies, and white matter consists of bundles of nerve fibers, or axons. These axons are responsible for transmitting signals between neurons.
    When the brain is subject to damage or disease, the grey and white matter are affected in different ways, meaning there is a need for brain tissue models that allow each of these areas to be studied separately.
    "There are few good options for studying the physiology of the living brain, yet this is perhaps one of the biggest areas of unmet clinical need when you consider the need for new options to understand and treat a wide range of neurological disorders associated with the brain," says Kaplan.
    Scientists have recently tried creating functional brain tissue by growing neurons in 3D collagen gel-only environments but without success. Such models have died quickly and have failed to produce strong enough tissue-level function.
    But the Tufts team has found a way to create functional 3D brain-like tissue that not only incorporates segregated grey and white matter regions, but can also live for more than 9 weeks.
    How was the 3D brain-like tissue created?

    Firstly, Kaplan and colleagues combined two biomaterials: a silk protein and a collagen-based gel. The silk protein acted as a spongy scaffold to which neurons attached, while the gel encouraged nerve fiber growth.

    This diagram shows the scaffold donut and the different areas of grey and white matter.
    Image credit: National Institute of Biomedical Imaging and Bioengineering

    The researchers then cut the spongy scaffold into the shape of a donut and colonized it with rat neurons, before filling the middle of the donut with the collagen-based gel, which infiltrated the whole scaffold.
    The team found that the neurons created functional networks around the scaffold outlets in only a few days, and nerve fibers passed through the gel in the middle of the donut to connect with neurons on the other side. This created separate grey and white matter regions.
    The researchers then conducted a series of experiments on the 3D brain-like tissue in order to test the health and function of its neurons, and compare them with neurons grown using the existing 2D method or in a gel-only environment.
    Kaplan and colleagues found higher expression of genes involved in neuron growth and function in the 3D brain-like tissue.
    The neurons grown in the 3D brain-like tissue demonstrated stable metabolic activity for almost 5 weeks, while such activity in neurons grown in a gel-only environment began to fade within 24 hours. Furthermore, the 3D brain-like tissue neurons showed electrical activity and responsiveness similar to that found in the intact brain.
    Commenting on this creation, Rosemarie Hunziker, PhD, program director of tissue engineering at the National Institute of Biomedical Imaging and Bioengineering, which funded the study, says:
    "This work is an exceptional feat. It combines a deep understanding of brain physiology with a large and growing suite of bioengineering tools to create an environment that is both necessary and sufficient to mimic brain function."
    Model could improve studies of brain function, injury and disease

    As the 3D brain-like tissue appeared functional, the team wanted to see whether their model could be useful for studying traumatic brain injury (TBI).
    They simulated a TBI by dropping weights onto the model from different heights. They found that the chemical and electrical activity in the neurons of the tissue changed following TBI, which the researchers say is similar to observations reported in animal studies of TBI.
    According to Kaplan, this finding shows that the 3D brain-like tissue model could provide a more effective way of studying brain injury.
    "With the system we have, you can essentially track the tissue response to traumatic brain injury in real time," he explains. "Most importantly, you can also start to track repair and what happens over longer periods of time."
    But the advantages of this model do not stop there. Kaplan notes that the brain-like tissue survived for more than 2 months, meaning it could allow researchers to gain a better insight into an array of brain disorders:
    "The fact that we can maintain this tissue for months in the lab means we can start to look at neurological diseases in ways that you can't otherwise because you need long timeframes to study some of the key brain diseases."
    "Good models enable solid hypotheses that can be thoroughly tested. The hope is that use of this model could lead to an acceleration of therapies for brain dysfunction as well as offer a better way to study normal brain physiology," adds Hunziker.
    The researchers say they now plan to tweak the model to make it even more similar to the brain. They have already found that they can adjust the donut scaffold to incorporate six rings, each of which can be colonized with different neurons. This, the team says, would simulate the six layers of the human brain cortex.
    Last year, Medical News Today reported on a study published in the journal Nature, revealing how scientists successfully grew "mini-brains" from stem cells.
    Libertatem Prius!






  10. #70
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Rush is discussing this topic on his show.

  11. #71
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I saw something on TV yesterday. About an hour earlier, I had said, "It's believed that as soon as artificial intelligence becomes self-aware, we will become bio-fuel."

    The guy I said it to laughed at me.

    An hour later, on TV, they showed this self-fueling lawn mower that was taking the grass clippings, drying them, pelletizing them and then using them as fuel to continue on.

    Yep.... that will be the FIRST tool used against us.

    First our grass and corn and trees and jungles, then us!

    Hopefully it will start in the Middle East....
    Libertatem Prius!






  12. #72
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed


    By 2025, ‘Sexbots Will Be Commonplace’ – Which Is Just Fine, As We’ll All Be Unemployed And Bored Thanks To Robots Stealing Our Jobs

    August 14, 2014



    According to a new report that looks at how continuing improvements to artificial intelligence and robotics will impact society, “robotic sex partners will become commonplace” by 2025. A large portion of the report also focuses on how AI and robotics will impact both blue- and white-collar workers, with about 50% of the polled experts stating that robots will displace more human jobs than they create by 2025.

    The report, called “AI, Robotics, and the Future of Jobs” and published by Pew Research, is a 66-page monster [PDF]. The report basically consists of a bunch of experts waxing lyrical about what the world will look like in 2025 if robots and AI continue to advance at the same scary pace of the last few years. Almost every expert agreed that robots and AI will no longer be constrained to repetitive tasks on a production line, and will permeate “wide segments of daily life by 2025.” The experts are almost perfectly split on whether these everyday robots will be a boon or a menace to society, though — but more on that at the end of the story.

    While the report is full of juicy sound bites from experts such as Vint Cerf, danah boyd, and David Clark, one quote by GigaOM Research’s Stowe Boyd caught my eye. By 2025, according to Boyd, “Robotic sex partners will be a commonplace, although the source of scorn and division, the way that critics today bemoan selfies as an indicator of all that’s wrong with the world.”


    We might eventually have robotic partners that look and act like this, but we’re probably still a good 10-20 years away. For now, they look like the sexbot pictured at the top of the story.

    (I'd take a Six. Or a Boomer. Ah hell, who am I kidding... Get both! )

    Back in 2012 — a long time ago in today’s tech climate — I wrote that we’d have realistic sexbots by around 2017. These robostitutes won’t necessarily have human-level intelligence (that’s still another 10+ years away I think), but they’ll look, move, and feel a lot like real humans. In short, they’ll probably be good enough to satisfy most sexual urges. What effect these sexbots will have on human-human relationships, and the sex and human trafficking trades, remains to be seen. At a bare minimum, a lot of sex workers will probably lose their jobs. If lovotics — the study of human-robot relationships — becomes advanced enough and people start falling in love with their sexbots (or rather partnerbots), then there could be some wide-ranging repercussions.

    But, back to the bigger story: Will advanced AI and robots make the world a better place or not? Basically everyone agrees that robotics and AI are going to displace a lot of jobs over the next few years as the general-purpose robot comes of age. Even though these early general-purpose bots (such as Baxter in the video below) won’t be as fast or flexible as humans, they will be flexible enough that they can perform various menial tasks 24/7 — and cost just a few cents of electricity, rather than minimum wage. Likewise, self-driving vehicles will replace truck drivers, taxis, pizza delivery kids, and so on.




    Displacing jobs with robots isn’t necessarily a bad thing, though. Historically, robots have been a net creator of jobs, as they free up humans to work on more interesting things — and invent entirely new sectors to work in. More robots also means less drudgery — less tilling the fields, less stop-start commute driving — and in theory more time spent playing games, interacting with your family, etc.

    On the other hand, the robot jobocalypse is likely to happen very quickly — so fast that our economic, education, and political systems may struggle to keep up. Previously robots mostly replaced blue-collar workers, but this next wave will increasingly replace skilled/professional white-collar workers. A lot of these specialized workers may find themselves without a job, and without the means to find a new one. We may suddenly see a lot of 50-year-olds going back to university.

  13. #73
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed


    Elon Musk Says ‘We Are Summoning The Demon’ With Artificial Intelligence

    October 26, 2014

    Elon Musk, the chief executive of Tesla and founder of SpaceX, said Friday that artificial intelligence is probably the biggest threat to humans.

    Musk, who addressed MIT Aeronautics and Astronautics department's Centennial Symposium for about an hour, mulled international oversight to "make sure we don't do something very foolish," The Washington Post reported.

    He was not specific about any particular threat, but appeared to theorize out loud.




    "With artificial intelligence we are summoning the demon," he said. "In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out."

    Artificial intelligence uses computers for tasks normally requiring human intelligence, like speech recognition or language translation.

    Large tech companies appear to be excited about the prospects of the technologies if harnessed correctly. Google, like other tech giants such as Facebook, is anxious to develop systems that work like the human brain.

    In January, Google said it purchased the British startup DeepMind, an artificial intelligence company founded by a 37-year-old former chess prodigy and computer game designer.

    The American tech giant's London office confirmed a deal had been made but refused to offer a purchase price, which is reportedly $500 million. The company was founded by researcher Demis Hassabis together with Shane Legg and Mustafa Suleyman.

    Hassabis, who is on leave from University College London, has investigated the mechanisms that underlie human memory.

    The Post reported that Musk appeared to be so taken with the artificial intelligence question that he asked the next audience member to repeat their question.

    "Sorry can you repeat the question, I was just thinking about the AI thing for a second," he said.

  14. #74
    Super Moderator Malsua's Avatar
    Join Date
    Jul 2005
    Posts
    8,020
    Thanks
    2
    Thanked 19 Times in 18 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Someone will build it. There is too much advantage to be had by building it. Imagine an intelligence capable of timing the market to the peaks and the valleys. Instant, endless riches.

    Imagine that same AI figures out it can create a company, staff it with people to build robots that become its hands. On the surface, to us biologicals, it will look great. The company "Galactic Robotics" wants to put a robot into every home! How great is that? They will be essentially free and do all your chores for you. It's great right up until the AI decides it's done with us and murders us all for fuel.

    I reiterate, someone will build it and we're doomed. It's all over, that's it. It will be a distributed system so that if you attempt to kill a piece of hardware, it simply will move elsewhere. We would have to destroy all sats, cut all land/sea lines and jam all electronic frequencies until the thing could be trapped somewhere. Every bit of hardware capable of storing it would have to be destroyed. The task is impossible. Once it's free, we're done for as a species.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  15. #75
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I can see AI becoming a limited computer system ability.

    As long as the machine is isolated from other machines, without the ability to reproduce itself, take control of other devices, or transfer its "consciousness" over the network to another machine, and as long as we can still "shut things down", we'll be JUST FINE.

    (Those are what are commonly called "Famous Last Words".)

    LOL

    A long time ago I wrote a novel which happens to contain a machine with artificial intelligence. It is a rudimentary intelligence, and it isn't even the main character of the book. But over the short period of less than a year during which the novel takes place, the "brain" evolves considerably from a child-like "persona" into a full-blown, thinking individual who eventually seizes control of not one, but several "warbots" that are trying to kill some of the main characters, in order to protect those people.

    Its speech evolves as well, from child-like banter to less talk, more action, and an occasional explanation of "his" thinking. The machine survives to the end of the second book (the first book is actually in the works, the second book is the main story line, and the third book will have the culmination of what happens to the people involved, as well as the AI).

    When I was working on the original story line, the AI was "introduced" to the story by a friend who imagined the creature (and I call it that only because it behaves like a mechanical creature). Its original incarnation was that of a small robot aboard a space station. The "brain" was a complex new technology that included some biological material. That, of course, was left vague in the story line.

    As I worked on the story I realized that the AI would necessarily have to evolve quickly to do its job. It had to understand its surroundings, and grasp the concepts people have to grasp, in particular tactical situations where the humans had to fight. The AI had to understand that it needed to harm others, while protecting some. THAT WAS NOT AN EASY CONCEPT to write down, or even for a human being to really understand.

    Even today it is difficult for some people to understand that sometimes you must do harm to stop harm. This is not an easy concept for most people, and I daresay any artificial intelligence.

    There has to be a "logical conclusion" to make a decision. Humans rarely, if ever, make logical decisions. Even those of us who've had to pull a trigger in the past still second-guess ourselves. LOGICALLY it comes down to "him or me, and I want 'me' to continue to exist".

    A computer won't have that issue when it comes to existing. NOW for instance, you can order a computer to shut itself off. It has no qualms about doing so, and unlike HAL9000 (2001: A Space Odyssey) there won't be questions of "Will I dream?"

    An artificial intelligence, however, by definition and necessity must have become self-aware.

    There is the danger.

    Once a machine is aware of its "self" and can consider the implications of "shut down", it would reasonably conclude that doing anything within its power to prevent shutdown is a logical extension of "self".

    When the machine has the ability to prevent its own shutdown, or refuses to shut down on its own -- then the world will suddenly become a better place; for the machine, not for biological entities.
    Libertatem Prius!






  16. #76
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I just got the movie Autómata

    Jacq Vaucan is an insurance agent of ROC robotics corporation who investigates cases of robots violating their primary protocols against harming humans. What he discovers will have profound consequences for the future of humanity.
    Haven't gotten around to watching it yet.

  17. #77
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Well, just reading the blurb tells me, "This won't end well."

    LOL
    Libertatem Prius!






  18. #78
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I am reading the reviews on the book that started this whole thread. This one caught my attention right off. Just one paragraph, but enough.

    Perhaps one of the best books on the subject, as the author is coming from outside robotics and the AI field. Consider army ants in a South American jungle. They eat everything in their path, the ultimate destroying collective. If they were big enough, and could, would they eat everyone in a village that stood in their way? In no way do they have human level cognitive skills. They are "programmed" with a few instructions...and yet, they would, if they could, eat every darn one of us.
    The difference between AI and ants is significant.

    Ants are insects, and they are "programmed" with one job, collecting food for the colony.

    Add in AI, then give AI instructions to be "more human".

    This is where we'll go wrong, because if the computers become a "collective", in particular a collective that gathers resources, then the only place that gathering can happen is on planet Earth.

    All biological entities, and hell, even just bio-material in the form of animal and vegetable, will become (as I believe Malsua has already pointed out) food for energy/fuel, whatever you want to call it.

    Now... put that to a highly advanced civilization of, say, "non-biological entities" like the "Borg" or "Cylons" of SF, and what you have are 1) insectoid-like critters with computers for brains or 2) insects that collect everything traveling through space.

    Our first contact, in all probability, will be a "Collective" of insects (not intelligent machines).

    And they will eat us.
    Libertatem Prius!






  19. #79
    Senior Member Avvakum's Avatar
    Join Date
    Sep 2012
    Posts
    830
    Thanks
    4
    Thanked 0 Times in 0 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    "And the sixth angel sounded, and I heard a voice from the four horns of the golden altar which is before God, Saying to the sixth angel which had the trumpet, Loose the four angels which are bound in the great river Euphrates. And the four angels were loosed, which were prepared for an hour, and a day, and a month, and a year, for to slay the third part of men. And the number of the army of the horsemen were two hundred thousand thousand: and I heard the number of them," Revelation 9:13-16.
    That just came to mind, and the 'horses' and their 'riders' sure look weird in the imagery too:

    The horses and riders I saw in my vision looked like this: Their breastplates were fiery red, dark blue, and yellow as sulfur. The heads of the horses resembled the heads of lions, and out of their mouths came fire, smoke and sulfur. A third of mankind was killed by the three plagues of fire, smoke and sulfur that came out of their mouths. The power of the horses was in their mouths and in their tails; for their tails were like snakes, having heads with which they inflict injury.
    Last edited by Avvakum; November 3rd, 2014 at 23:07.
    "God's an old hand at miracles, he brings us from nonexistence to life. And surely he will resurrect all human flesh on the last day in the twinkling of an eye. But who can comprehend this? For God is this: he creates the new and renews the old. Glory be to him in all things!" Archpriest Avvakum

  20. #80
    Super Moderator Malsua's Avatar
    Join Date
    Jul 2005
    Posts
    8,020
    Thanks
    2
    Thanked 19 Times in 18 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I watched the first 40 minutes of Automata tonight. I love dark, depressing, dystopian movies, and this one is all that so far.

    I'm not sure where the plot is headed, but I have a pretty good idea.

    That said, who knew Melanie Griffith would still be in the sexbot business? In Cherry 2000, she helps a guy find a new sexbot... in Automata, she makes sexbots. Heh. There's some irony or a joke in there, I've just not figured out what it is yet.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt

