
Thread: Our Final Invention: How the Human Race Goes and Gets Itself Killed

  1. #81
    Senior Member Avvakum's Avatar
    Join Date
    Sep 2012
    Posts
    830
    Thanks
    4
    Thanked 0 Times in 0 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Quote Originally Posted by Malsua View Post
    I watched the first 40 minutes of Automata tonight. I love dark depressing dystopian movies and this one is all that so far.

    I'm not sure where the plot is headed, but I have a pretty good idea.

    That said, who knew Melanie Griffith would still be in the Sexbot business? In Cherry 2000, she helps a guy find a new sexbot...in Automata, she makes Sexbots. heh. There's some irony or a joke in there, I've just not figured out what it is yet.
    IDK, I've been a fan of Melanie Griffith's ever since I watched 'Body Double', lol, and then I saw 'Cherry 2000' and that didn't hurt either.
    "God's an old hand at miracles, he brings us from nonexistence to life. And surely he will resurrect all human flesh on the last day in the twinkling of an eye. But who can comprehend this? For God is this: he creates the new and renews the old. Glory be to him in all things!" Archpriest Avvakum

  2. #82
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    LOL!

    Guess I'm going to have to put Automata at the top of my "To Watch" list.

  3. #83
    Super Moderator Malsua's Avatar
    Join Date
    Jul 2005
    Posts
    8,020
    Thanks
    2
    Thanked 19 Times in 18 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    So I finished Automata. It didn't go exactly as I figured as far as details, but it was close. Also, there clearly were some callbacks to Cherry 2000, as the final scene includes a cable car across a huge canyon that looks like the area around the Hoover Dam.

    It's not a bad movie but if you're looking for an exciting action movie, this ain't it. I've become fond of this type of movie simply because it's actually got a plot that doesn't involve something blowing up every 15 seconds.

    Another recent movie that had the same look and feel, though an entirely different plot, was "The Rover" with Guy Pearce and Robert Pattinson. I enjoy gritty post-apoc movies.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  4. #84
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Check this out.....

    Doubling scores in a year.

    Artificial Intelligence Outperforms Average High School Senior

    blogs.wsj.com Published: November 4, 2014



    Artificial intelligence in Japan is getting closer to entering college. AI software scored higher on the English section of Japan’s standardized college entrance test than the average Japanese high school senior, its developers said.




    The software, known as To-Robo, almost doubled its score on a multiple choice test from its performance a year ago, indicating progress toward a goal set by its developers to eventually pass the entrance exam for Tokyo University, Japan’s most prestigious college.
    Libertatem Prius!


  5. #85
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Artificial intelligence: summoning the demon

    We need to understand that our own intelligence is the competition for our artificial, not-quite intelligences.

    by Mike Loukides | @mikeloukides | November 4, 2014



    A few days ago, Elon Musk likened artificial intelligence (AI) to “summoning the demon.” As I’m sure you know, there are many stories in which someone summons a demon. As Musk said, they rarely turn out well.


    There’s no question that Musk is an astute student of technology. But his reaction is misplaced. There are certainly reasons for concern, but they’re not Musk’s.


    The problem with AI right now is that its achievements are greatly over-hyped. That's not to say those achievements aren't real, but they don't mean what people think they mean. Researchers in deep learning are happy if they can recognize human faces with 80% accuracy. (I'm skeptical about claims that deep learning systems can reach 97.5% accuracy; I suspect that the problem has been constrained in some way that makes it much easier. For example, asking "is there a face in this picture?" or "where is the face in this picture?" is very different from asking "what is in this picture?") That's a hard problem, a really hard problem. But humans recognize faces with nearly 100% accuracy. For a deep learning system, that's an almost inconceivable goal. And 100% accuracy is orders of magnitude harder than 80% accuracy, or even 97.5%.


    What kinds of applications can you build from technologies that are only accurate 80% of the time, or even 97.5% of the time? Quite a few. You might build an application that creates dynamic travel guides from online photos. Or you might build an application that measures how long diners stay in a restaurant, how long it takes them to be served, whether they're smiling, and other statistics. You might build an application that tries to identify who appears in your photos, as Facebook has. In all of these cases, an occasional error (or even a frequent error) isn't a big deal. But you wouldn't build, say, a face-recognition-based car alarm that was wrong 20% of the time — or even 2% of the time.
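
    To make that concrete, here's a quick back-of-the-envelope calculation in Python (a minimal sketch; the 200-faces-per-day figure is just an assumed number for illustration):

    Code:
    # How often would a face-recognition car alarm cry wolf?
    # Assumption (illustrative only): it evaluates 200 faces per day.
    faces_per_day = 200

    for accuracy in (0.80, 0.975, 0.999):
        errors_per_day = faces_per_day * (1 - accuracy)
        print(f"{accuracy:.1%} accurate -> ~{errors_per_day:.1f} bad calls/day")

    # 80.0% accurate -> ~40.0 bad calls/day
    # 97.5% accurate -> ~5.0 bad calls/day
    # 99.9% accurate -> ~0.2 bad calls/day

    Even at 97.5%, that's an alarm going off wrongly about five times a day; hence the point that the last few percentage points of accuracy matter far more than the first eighty.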


    Similarly, much has been made of Google's self-driving cars. That's a huge technological achievement. But Google has always made it very clear that their cars rely on the accuracy of their highly detailed street view. As Peter Norvig has said, it's a hard problem to pick a traffic light out of a scene and determine if it is red, yellow, or green. It is trivially easy to recognize the color of a traffic light that you already know is there. But keeping Google's street view up to date isn't simple. While the roads change infrequently, towns frequently add stop signs and traffic lights. Dealing with these changes to the map is extremely difficult, and it's only one of many challenges that remain to be solved: we humans know how to interpret traffic cones, how to think about cars or humans behaving erratically, and what to do when the lane markings are covered by snow. That ability to think like a human when something unexpected happens is what makes a self-driving car a "moonshot" project. Humans certainly don't perform perfectly when the unexpected happens, but we're surprisingly good at it.


    So, AI systems can do, with difficulty and partial accuracy, some of what humans do all the time without even thinking about it. I'd guess that we're 20 to 50 years away from anything that's more than a crude approximation to human intelligence. It's not just that we need bigger and faster computers, which will be here sooner than we think. We don't understand how human intelligence works at a fundamental level. (Though I wouldn't assume that understanding the brain is a prerequisite for artificial intelligence.) That's not a problem or a criticism, it's just a statement of how difficult the problems are. And let's not misunderstand the importance of what we've accomplished: this level of intelligence is already extremely useful. Computers don't get tired, don't get distracted, and don't panic. (Well, not often.) They're great for assisting or augmenting human intelligence, precisely because as an assistant, 100% accuracy isn't required. We've had cars with computer-assisted parking for more than a decade, and they've gotten quite good. Larry Page has talked about wanting Google search to be like the Star Trek computer, which can understand context and anticipate what the human wants. The humans remain firmly in control, though, whether we're talking to the Star Trek computer or Google Now.


    I’m not without concerns about the application of AI. First, I’m concerned about what happens when humans start relying on AI systems that really aren’t all that intelligent. AI researchers, in my experience, are fully aware of the limitations of their systems. But their customers aren’t. I’ve written about what happens when HR departments trust computer systems to screen resumes: you get some crude pattern matching that ends up rejecting many good candidates. Cathy O’Neil has written on several occasions about machine learning’s potential for dressing up prejudice as “science.”


    The problem isn't machine learning itself, but users who uncritically expect a machine to provide an oracular "answer," and faulty models that are hidden from public view. In a not-yet-published paper, DJ Patil and Hilary Mason suggest that you search Google for GPS and cliff; you might be surprised at the number of people who drive their cars off cliffs because the GPS told them to. I'm not surprised; a friend of mine owns a company that makes propellers for high-performance boats, and he's told me similar stories about replacing the propellers for clients who run their boats into islands.


    David Ferrucci and the other IBMers who built Watson understand that Watson’s potential in medical diagnosis isn’t to have the last word, or to replace a human doctor. It’s to be part of the conversation, offering diagnostic possibilities that the doctor hasn’t considered, and the reasons one might accept (or reject) those diagnoses. That’s a healthy and potentially important step forward in medical treatment, but do the doctors using an automated service to help make diagnoses understand that? Does our profit-crazed health system understand that? When will your health insurance policy say “you can only consult a doctor after the AI has failed”? Or “Doctors are a thing of the past, and if the AI is wrong 10% of the time, that’s acceptable; after all, your doctor wasn’t right all the time, anyway”? The problem isn’t the tool; it’s the application of the tool. More specifically, the problem is forgetting that an assistive technology is assistive, and assuming that it can be a complete stand-in for a human.


    Second, I’m concerned about what happens if consumer-facing researchers get discouraged and leave the field. Although that’s not likely now, it wouldn’t be the first time that AI was abandoned after a wave of hype. If Google, Facebook, and IBM give up on their “moonshot” AI projects, what will be left? I have a thesis (which may eventually become a Radar post) that a technology’s future has a lot to do with its origins. Nuclear reactors were developed to build bombs, and as a consequence, promising technologies like Thorium reactors were abandoned. If you can’t make a bomb from it, what good is it?


    If I'm right, what are the implications for AI? I'm thrilled that Google and Facebook are experimenting with deep learning, that Google is building autonomous vehicles, and that IBM is experimenting with Watson. I'm thrilled because I have no doubt that similar work is going on in other labs, in other places, that we know nothing about. I don't want the future of AI to be shortchanged because researchers hidden in government labs choose not to investigate ideas that don't have military potential. And we do need a discussion about the role of AI in our lives: what are its limits, which applications are OK, which are unnecessarily intrusive, and which are just creepy. That conversation will never happen while the research takes place behind locked doors.


    At the end of a long, glowing report about the state of AI, Kevin Kelly makes the point that with every advance in AI, every time computers achieve something new (playing chess, playing Jeopardy, inventing new recipes, maybe next year playing Go), we redefine the meaning of our own human intelligence. That sounds funny; I'm certainly suspicious when the rules of the game are changed every time it appears to be "won," but who really wants to define human intelligence in terms of chess-playing ability? That definition leaves out most of what's important in humanness.


    Perhaps we need to understand that our own intelligence is the competition for our artificial, not-quite intelligences. And perhaps we will, as Kelly suggests, realize that maybe we don't really want "artificial intelligence." After all, human intelligence includes the ability to be wrong, or to be evasive, as Turing himself recognized. We want "artificial smartness": something to assist and extend our cognitive capabilities, rather than replace them.


    That brings us back to “summoning the demon,” and the one story that’s an exception to the rule. In Goethe’s Faust, Faust is admitted to heaven: not because he was a good person, but because he never ceased striving, never became complacent, never stopped trying to figure out what it means to be human. At the start, Faust mocks Mephistopheles, saying “What can a poor devil give me? When has your kind ever understood a Human Spirit in its highest striving?” (lines 1176-7, my translation). When he makes the deal, it isn’t the typical “give me everything I want, and you can take my soul”; it’s “When I lie on my bed satisfied, let it be over…when I say to the Moment ‘Stay! You are so beautiful,’ you can haul me off in chains” (1691-1702). At the end of this massive play, Faust is almost satisfied; he’s building an earthly paradise for those who strive for freedom every day, and dies saying “In anticipation of that blessedness, I now enjoy that highest Moment” (11585-6), even quoting the terms of his deal.


    So, who’s won the bet? The demon or the summoner? Mephistopheles certainly thinks he has, but the angels differ, and take Faust’s soul to heaven, saying “Whoever spends himself striving, him we can save” (11936-7). Faust may be enjoying the moment, but it’s still in anticipation of a paradise that he hasn’t built. Mephistopheles fails at luring Faust into complacency; rather, he is the driving force behind his striving, a comic figure who never understands that by trying to drag Faust to hell, he was pushing him toward humanity. If AI, even in its underdeveloped state, can serve this function for us, calling up that demon will be well worth it.
    Libertatem Prius!


  6. #86
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Google AI project apes memory, programs (sort of) like a human






    Neural Turing Machines attempt to emulate the brain's short-term memory

    By Tim Hornyak


    IDG News Service | Oct 30, 2014 8:35 AM PT




    The mission of Google's DeepMind Technologies startup is to "solve intelligence." Now, researchers there have developed an artificial intelligence system that can mimic some of the brain's memory skills and even program like a human.






    The researchers developed a kind of neural network that can use external memory, allowing it to learn and perform tasks based on stored data.


    Neural networks are webs of interconnected computational "neurons." Conventional neural networks lack readable and writable memory, but they have nonetheless been used in machine learning and pattern-recognition applications such as computer vision and speech recognition.


    The so-called Neural Turing Machine (NTM) that DeepMind researchers have been working on combines a neural network controller with a memory bank, giving it the ability to learn to store and retrieve information.


    The system's name refers to computer pioneer Alan Turing's formulation of computers as machines having working memory for storage and retrieval of data.
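
    The key mechanism is that reads and writes are "soft": instead of fetching one memory slot, the controller attends to every row at once, weighted by how similar each row is to a query key, so the whole operation stays differentiable and trainable. A minimal sketch of such a content-based read in Python/NumPy (sizes and the sharpness value are arbitrary illustrations, not DeepMind's actual code):

    Code:
    import numpy as np

    def content_read(memory, key, beta=5.0):
        # Cosine similarity between the query key and every memory row.
        sims = memory @ key / (
            np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
        # A sharpened softmax turns similarities into attention weights.
        weights = np.exp(beta * sims)
        weights /= weights.sum()
        # The read result is a weighted blend of all rows.
        return weights @ memory

    memory = np.random.randn(128, 20)   # 128 slots, 20 numbers each
    print(content_read(memory, memory[3]).round(2))  # returns ~row 3

    Because every step is differentiable, gradient descent can teach the controller where to read and write, which is what lets the NTM pick up routines like copying and sorting from examples alone.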


    The researchers put the NTM through a series of tests including tasks such as copying and sorting blocks of data. Compared to a conventional neural net, the NTM was able to learn faster and copy longer data sequences with fewer errors. They found that its approach to the problem was comparable to that of a human programmer working in a low-level programming language.


    The NTM "can infer simple algorithms such as copying, sorting and associative recall from input and output examples," DeepMind's Alex Graves, Greg Wayne and Ivo Danihelka wrote in a research paper available on the arXiv repository.


    "Our experiments demonstrate that it is capable of learning simple algorithms from example data and of using these algorithms to generalize well outside its training regime."


    A spokesman for Google declined to provide more information about the project, saying only that the research is "quite a few layers down from practical applications."


    In a 2013 paper, Graves and colleagues showed how they had used a technique known as deep reinforcement learning to get DeepMind software to learn to play seven classic Atari 2600 video games, some better than a human expert, with the only input being information visible on the game screen.


    Google confirmed earlier this year that it had acquired London-based DeepMind Technologies, founded in 2011 as an artificial intelligence company. The move is expected to have a major role in advancing the search giant's research into robotics, self-driving cars and smart-home technologies.


    More recently, DeepMind co-founder Demis Hassabis wrote in a blog post that Google is partnering with artificial intelligence researchers from Oxford University to study topics including image recognition and natural language understanding.
    Libertatem Prius!


  7. #87
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    The Three Breakthroughs That Have Finally Unleashed AI on the World

    By Kevin Kelly | 6:30 am



    A few months ago I made the trek to the sylvan campus of the IBM research labs in Yorktown Heights, New York, to catch an early glimpse of the fast-arriving, long-overdue future of artificial intelligence. This was the home of Watson, the electronic genius that conquered Jeopardy! in 2011. The original Watson is still here—it's about the size of a bedroom, with 10 upright, refrigerator-shaped machines forming the four walls. The tiny interior cavity gives technicians access to the jumble of wires and cables on the machines' backs. It is surprisingly warm inside, as if the cluster were alive.


    Today's Watson is very different. It no longer exists solely within a wall of cabinets but is spread across a cloud of open-standard servers that run several hundred “instances” of the AI at once. Like all things cloudy, Watson is served to simultaneous customers anywhere in the world, who can access it using their phones, their desktops, or their own data servers. This kind of AI can be scaled up or down on demand. Because AI improves as people use it, Watson is always getting smarter; anything it learns in one instance can be immediately transferred to the others. And instead of one single program, it's an aggregation of diverse software engines—its logic-deduction engine and its language-parsing engine might operate on different code, on different chips, in different locations—all cleverly integrated into a unified stream of intelligence.
    Consumers can tap into that always-on intelligence directly, but also through third-party apps that harness the power of this AI cloud. Like many parents of a bright mind, IBM would like Watson to pursue a medical career, so it should come as no surprise that one of the apps under development is a medical-diagnosis tool. Most of the previous attempts to make a diagnostic AI have been pathetic failures, but Watson really works. When, in plain English, I give it the symptoms of a disease I once contracted in India, it gives me a list of hunches, ranked from most to least probable. The most likely cause, it declares, is Giardia—the correct answer. This expertise isn't yet available to patients directly; IBM provides access to Watson's intelligence to partners, helping them develop user-friendly interfaces for subscribing doctors and hospitals. “I believe something like Watson will soon be the world's best diagnostician—whether machine or human,” says Alan Greene, chief medical officer of Scanadu, a startup that is building a diagnostic device inspired by the Star Trek medical tricorder and powered by a cloud AI. “At the rate AI technology is improving, a kid born today will rarely need to see a doctor to get a diagnosis by the time they are an adult.”


    As AIs develop, we might have to engineer ways to prevent consciousness in them—our most premium AI services will be advertised as consciousness-free.


    Medicine is only the beginning. All the major cloud companies, plus dozens of startups, are in a mad rush to launch a Watson-like cognitive service. According to quantitative analysis firm Quid, AI has attracted more than $17 billion in investments since 2009. Last year alone more than $2 billion was invested in 322 companies with AI-like technology. Facebook and Google have recruited researchers to join their in-house AI research teams. Yahoo, Intel, Dropbox, LinkedIn, Pinterest, and Twitter have all purchased AI companies since last year. Private investment in the AI sector has been expanding 62 percent a year on average for the past four years, a rate that is expected to continue.


    Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness—or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need.

    Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it's here.



    Around 2002 I attended a small party for Google—before its IPO, when it only focused on search. I struck up a conversation with Larry Page, Google's brilliant cofounder, who became the company's CEO in 2011. “Larry, I still don't get it. There are so many search companies. Web search, for free? Where does that get you?” My unimaginative blindness is solid evidence that predicting is hard, especially about the future, but in my defense this was before Google had ramped up its ad-auction scheme to generate real income, long before YouTube or any other major acquisitions. I was not the only avid user of its search site who thought it would not last long. But Page's reply has always stuck with me: “Oh, we're really making an AI.”


    I've thought a lot about that conversation over the past few years as Google has bought 14 AI and robotics companies. At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search contributes 80 percent of its revenue. But I think that's backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type "Easter Bunny" into the image search bar and then click on the most Easter Bunny-looking image, you are teaching the AI what an Easter bunny looks like. Each of the 12.1 billion queries that Google's 1.2 billion searchers conduct each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousand-fold more data and 100 times more computing resources, Google will have an unrivaled AI. My prediction: By 2024, Google's main product will not be search but AI.
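
    Mechanically, what Kelly is describing is implicit labeling: every click pairs an example with a label, for free. A toy sketch of the idea (illustrative only, nothing like Google's real pipeline), keeping a running-average feature vector per query:

    Code:
    from collections import defaultdict
    import numpy as np

    DIM = 128  # assumed size of an image-feature vector
    centroids = defaultdict(lambda: np.zeros(DIM))
    clicks = defaultdict(int)

    def log_click(query, image_vec):
        # Each click nudges the query's centroid toward the clicked image.
        clicks[query] += 1
        centroids[query] += (image_vec - centroids[query]) / clicks[query]

    def guess_label(image_vec):
        # Nearest centroid: which query's clicked images look most like this one?
        return min(centroids, key=lambda q: np.linalg.norm(centroids[q] - image_vec))

    log_click("easter bunny", np.random.rand(DIM))
    log_click("traffic light", np.random.rand(DIM))
    print(guess_label(np.random.rand(DIM)))

    Every searcher becomes an unpaid teacher, which is why the volume of queries matters as much as the cleverness of the algorithm.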


    This is the point where it is entirely appropriate to be skeptical. For almost 60 years, AI researchers have predicted that AI is right around the corner, yet until a few years ago it seemed as stuck in the future as ever. There was even a term coined to describe this era of meager results and even more meager research funding: the AI winter. Has anything really changed?


    Yes. Three recent breakthroughs have unleashed the long-awaited arrival of artificial intelligence:


    1. Cheap parallel computation
    Thinking is an inherently parallel process, billions of neurons firing simultaneously to create synchronous waves of cortical computation. To build a neural network—the primary architecture of AI software—also requires many different processes to take place simultaneously. Each node of a neural network loosely imitates a neuron in the brain—mutually interacting with its neighbors to make sense of the signals it receives. To recognize a spoken word, a program must be able to hear all the phonemes in relation to one another; to identify an image, it needs to see every pixel in the context of the pixels around it—both deeply parallel tasks. But until recently, the typical computer processor could only ping one thing at a time.
    That began to change more than a decade ago, when a new kind of chip, called a graphics processing unit, or GPU, was devised for the intensely visual—and parallel—demands of videogames, in which millions of pixels had to be recalculated many times a second. That required a specialized parallel computing chip, which was added as a supplement to the PC motherboard. The parallel graphical chips worked, and gaming soared. By 2005, GPUs were being produced in such quantities that they became much cheaper. In 2009, Andrew Ng and a team at Stanford realized that GPU chips could run neural networks in parallel.
    That discovery unlocked new possibilities for neural networks, which can include hundreds of millions of connections between their nodes. Traditional processors required several weeks to calculate all the cascading possibilities in a 100 million-parameter neural net. Ng found that a cluster of GPUs could accomplish the same thing in a day. Today neural nets running on GPUs are routinely used by cloud-enabled companies such as Facebook to identify your friends in photos or, in the case of Netflix, to make reliable recommendations for its more than 50 million subscribers.
    2. Big Data
    Every intelligence has to be taught. A human brain, which is genetically primed to categorize things, still needs to see a dozen examples before it can distinguish between cats and dogs. That's even more true for artificial minds. Even the best-programmed computer has to play at least a thousand games of chess before it gets good. Part of the AI breakthrough lies in the incredible avalanche of collected data about our world, which provides the schooling that AIs need. Massive databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results, Wikipedia, and the entire digital universe became the teachers making AI smart.
    3. Better algorithms
    Digital neural nets were invented in the 1950s, but it took decades for computer scientists to learn how to tame the astronomically huge combinatorial relationships between a million—or 100 million—neurons. The key was to organize neural nets into stacked layers. Take the relatively simple task of recognizing that a face is a face. When a group of bits in a neural net are found to trigger a pattern—the image of an eye, for instance—that result is moved up to another level in the neural net for further parsing. The next level might group two eyes together and pass that meaningful chunk onto another level of hierarchical structure that associates it with the pattern of a nose. It can take many millions of these nodes (each one producing a calculation feeding others around it), stacked up to 15 levels high, to recognize a human face. In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed “deep learning.” He was able to mathematically optimize results from each layer so that the learning accumulated faster as it proceeded up the stack of layers. Deep-learning algorithms accelerated enormously a few years later when they were ported to GPUs. The code of deep learning alone is insufficient to generate complex logical thinking, but it is an essential component of all current AIs, including IBM's Watson, Google's search engine, and Facebook's algorithms.


    This perfect storm of parallel computation, bigger data, and deeper algorithms generated the 60-years-in-the-making overnight success of AI. And this convergence suggests that as long as these technological trends continue—and there's no reason to think they won't—AI will keep improving.
    As it does, this cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing obeys the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger, and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people use it. The more people who use it, the smarter it gets. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
    AI Everywhere

    Over the past five years, cheap computing, novel algorithms, and mountains of data have enabled new AI-based services that were previously the domain of sci-fi and academic white papers. —ROBERT MCMILLAN


    Self-Driving Car | Google has moved on from its initial goal of trying to index the entire Internet. Now it wants to index reality—part of its effort to perfect its self-driving car. Before the vehicle navigates a particular route, Google drivers scope out the course and then produce the most precise maps imaginable. That way the autonomous car knows what to expect and simply has to scan the environment with its roof-mounted lasers, cameras, and radar systems to spot anything out of the ordinary. That's a much easier problem to solve than building a real-time map of the world.






    Body Tracker | To turn the human body into a game controller, researchers working on Microsoft's Xbox Kinect had to deploy new machine-learning techniques. First, the device's infrared emitter and sensor create a 3-D image of a player's frame and analyze its different parts—shoulders, feet, hands. Then, using a method called decision forests, Kinect's AI system guesses the body's most likely next position. The result is a system that reads your movements in real time, without overwhelming the Xbox's memory. (A toy decision-forest sketch follows this list.)


    Personal Photo Archivist | Matt Zeiler wants you to be able to find a snapshot as easily as you look up a phone number. His startup, Clarifai, is developing a new search technique to index the photos on your phone. While old-school image search looks for colors and lines, Clarifai's AI software understands corners and parallel lines, then can master higher-level concepts like wheels or cars as it studies more and more pictures.


    Universal Translator | The Skype Translator, which will debut in beta by year's end, translates speech in real time, allowing anyone to talk naturally with anyone else. The AI software examines millions of translated sentences until it becomes superb at guessing how any given jumble of words will translate. For voice recognition, it breaks down samples of the spoken word, analyzing them until it achieves a sophisticated grasp of the ways sounds combine to form speech.


    Smarter News Feed | Facebook hired one of the world's foremost deep-learning experts, Yann LeCun, to set up an AI lab last year. He's tasked with improving the social network's speech and image recognition software to make it more efficient at identifying, say, viral videos that you'll find funny or photos that you'll want to see—like your friends in a group snapshot.
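
    Here is the toy decision-forest sketch promised in the Body Tracker capsule above: a random forest trained on synthetic joint coordinates to guess a pose label. It's a stand-in using scikit-learn, not Microsoft's actual Kinect pipeline, and the pose classes are invented.

    Code:
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Fake training data: 3-D positions of 20 joints flattened to 60 numbers,
    # with invented pose labels 0..3 (standing, crouching, jumping, leaning).
    X = rng.standard_normal((1000, 60))
    y = rng.integers(0, 4, size=1000)

    forest = RandomForestClassifier(n_estimators=50, max_depth=10)
    forest.fit(X, y)

    frame = rng.standard_normal((1, 60))  # one new skeleton frame
    print(forest.predict(frame))          # the forest's best guess at the pose

    The appeal of forests for a console is the one the capsule hints at: once trained, prediction is just a few comparisons down each tree, cheap enough to run every frame.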



    In 1997, Watson's precursor, IBM's Deep Blue, beat the reigning chess grand master Garry Kasparov in a famous man-versus-machine match. After machines repeated their victories in a few more matches, humans largely lost interest in such contests. You might think that was the end of the story (if not the end of human history), but Kasparov realized that he could have performed better against Deep Blue if he'd had the same instant access to a massive database of all previous chess moves that Deep Blue had. If this database tool was fair for an AI, why not for a human? To pursue this idea, Kasparov pioneered the concept of man-plus-machine matches, in which AI augments human chess players rather than competes against them.


    Now called freestyle chess matches, these are like mixed martial arts fights, where players use whatever combat techniques they want. You can play as your unassisted human self, or you can act as the hand for your supersmart chess computer, merely moving its board pieces, or you can play as a “centaur,” which is the human/AI cyborg that Kasparov advocated. A centaur player will listen to the moves whispered by the AI but will occasionally override them—much the way we use GPS navigation in our cars. In the championship Freestyle Battle in 2014, open to all modes of players, pure chess AI engines won 42 games, but centaurs won 53 games. Today the best chess player alive is a centaur: Intagrand, a team of humans and several different chess programs.
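
    The centaur workflow is easy to picture in code. The sketch below uses the python-chess library with a deliberately dumb one-ply material counter standing in for the engine; the human can accept the whispered move or override it. A toy illustration of the division of labor, not a real engine.

    Code:
    import chess

    VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def material(board, color):
        # Material balance from `color`'s point of view.
        return sum(VALUES[p.piece_type] * (1 if p.color == color else -1)
                   for p in board.piece_map().values())

    def engine_suggests(board):
        # Toy "engine": keep the most material after one ply.
        def score(move):
            board.push(move)
            s = material(board, not board.turn)  # the side that just moved
            board.pop()
            return s
        return max(board.legal_moves, key=score)

    board = chess.Board()
    hint = engine_suggests(board)
    choice = input(f"Engine whispers {hint.uci()}; your move (blank = accept): ")
    board.push(chess.Move.from_uci(choice) if choice else hint)
    print(board)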


    But here's the even more surprising part: The advent of AI didn't diminish the performance of purely human chess players. Quite the opposite. Cheap, supersmart chess programs inspired more people than ever to play chess, at more tournaments than ever, and the players got better than ever. There are more than twice as many grand masters now as there were when Deep Blue first beat Kasparov. The top-ranked human chess player today, Magnus Carlsen, trained with AIs and has been deemed the most computer-like of all human chess players. He also has the highest human grand master rating of all time.


    If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers. Most of the commercial work completed by AI will be done by special-purpose, narrowly focused software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists.


    In fact, this won't really be intelligence, at least not as we've come to think of it. Indeed, intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them—and our most premium AI services will likely be advertised as consciousness-free.


    What we want instead of intelligence is artificial smartness. Unlike general intelligence, smartness is focused, measurable, specific. It also can think in ways completely different from human cognition. A cute example of this nonhuman thinking is a cool stunt that was performed at the South by Southwest festival in Austin, Texas, in March of this year. IBM researchers overlaid Watson with a culinary database comprising online recipes, USDA nutritional facts, and flavor research on what makes compounds taste pleasant. From this pile of data, Watson dreamed up novel dishes based on flavor profiles and patterns from existing dishes, and willing human chefs cooked them. One crowd favorite generated from Watson's mind was a tasty version of fish and chips using ceviche and fried plantains. For lunch at the IBM labs in Yorktown Heights I slurped down that one and another tasty Watson invention: Swiss/Thai asparagus quiche. Not bad! It's unlikely that either one would ever have occurred to humans.


    Nonhuman intelligence is not a bug, it's a feature. The chief virtue of AIs will be their alien intelligence. An AI will think about food differently than any chef, allowing us to think about food differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science and art. The alienness of artificial intelligence will become more valuable to us than its speed or power.


    As it does, it will help us better understand what we mean by intelligence in the first place. In the past, we would have said only a superintelligent AI could drive a car, or beat a human at Jeopardy! or chess. But once AI did each of those things, we considered that achievement obviously mechanical and hardly worth the label of true intelligence. Every success in AI redefines it.


    But we haven't just been redefining what we mean by AI—we've been redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents we thought were unique to humans, we've had to change our minds about what sets us apart. As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. We'll spend the next decade—indeed, perhaps the next century—in a permanent identity crisis, constantly asking ourselves what humans are for. In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science—although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.
    Libertatem Prius!


  8. #88
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Here's the absolute final word on AI.


    I think, therefore I a.... oh fuck it. (Too bad this wasn't a real article) LOL


    Paris Hilton on Existential Risks from Artificial Intelligence

    October 27, 2014 AI, Humor, images, Interviews, Satire


    • By: Peter Rothman


    Following up on Elon Musk's recent comments at MIT on the dangers of artificial intelligence, h+ Magazine decided to get together for an interview with another wealthy and important AI expert, celebrity heiress, DJ, and superhacker Paris Hilton.
    h+: Hi Paris, thanks for agreeing to this interview on such short notice. Can you give us a quick idea of what you are you up to these days?
    Paris: I’m an actress, a brand, a businesswoman. A DJ and an AI programmer. I’m all kinds of stuff.

    h+: So how did you first get interested in AI research?
    Paris: First I wanted to be a veterinarian. And then I realised you had to give them shots to put them to sleep, so I decided I’d just buy a bunch of animals and have them in my house instead.
    With AI it was sort of the same thing.

    h+: Paris, do you think that artificial intelligence poses an existential threat to humanity?

    Paris: All you have to do in life is go out with your friends, party hard, and look twice as good as the bitch standing next to you. So no.

    h+: What about the Singularity and the rise of a greater than human intelligence?
    Paris: I’m not a kid anymore. And I’m excited for all the amazing things to come. The AI will take care of itself.

    h+: Well, what about the idea that an artificial intelligence would extend itself into the universe converting all available matter into resources for computation?

    Paris: That’s Hot!



    h+: Ok, so how do you think we should control potentially dangerous AIs?
    Paris: A true heiress is never mean to anyone – except a girl who steals your boyfriend.

    h+: Isaac Asimov proposed three rules for robots to ensure that they were safe and wouldn’t harm humans. Do you think AIs can be built that follow Asimov’s Three Laws?
    Paris: The only rule is don’t be boring and dress cute wherever you go. Life is too short to blend in.

    h+: How can we ensure the development of friendly AI?
    Paris: I have this great test to see if a girl’s a real friend. When we’re shopping I’ll pick out an outfit that I know looks hot and one that is awful. If my friend says the bad one looks good, I know she’s not a good friend.

    h+: Thanks Paris for your time and all of your amazing insights into the field of artificial intelligence.


    Paris: My pleasure! Some girls are just born with glitter in their veins!
    [This article is a satire and is not a transcript of an actual interview with Ms. Hilton.]
    Libertatem Prius!


  9. #89
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Quantum Robotics will Create Artificial Intelligence 'Capable of Creativity'

    Quantum computing will open up a new field of robotics able to learn and carry out complex creative tasks, according to researchers.
    Scientists from the Complutense University of Madrid (UCM) and the University of Innsbruck in Austria claim, in a paper published in the journal Physical Review X, that the principles of quantum mechanics can be applied to create "intelligent learning agents" relevant for applications involving complex task environments.
    "Building a model is actually a creative act, but conventional computers are no good at it," said Miguel Martin-Delgado from UCM.
    "That is where quantum computing comes into play. The advances it brings are not only quantitative in terms of greater speed, but also qualitative: adapting better to environments where the classic agent does not survive. This means that quantum computers are more creative."
    What is quantum computing?
    Quantum computers combine quantum mechanics with computer science to exponentially speed up processing. Traditional bits used in digital communications are replaced with quantum bits, or qubits, which act in a state of superposition that allow them to operate in multiple states at once, rather than just the two states - one or zero - in which traditional bits function.
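    In the standard notation, a single qubit's state is the superposition |ψ⟩ = α|0⟩ + β|1⟩ with |α|² + |β|² = 1; measuring it yields 0 with probability |α|² and 1 with probability |β|². A register of n qubits carries 2^n such amplitudes at once, which is where the "exponential" in the speedup claims comes from, though extracting an answer still takes a clever algorithm, since measurement collapses the register to a single classical result.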
    The emerging field of quantum computing has so far been applied in a variety of areas, including medicine and space travel, and has attracted the attention of the likes of Google.
    It has been hailed by some as the next technological revolution, capable of increasing processing powers exponentially in relation to the current abilities of traditional computers.
    The authors of the quantum robotics study state that applying quantum computers to robotics will accelerate progress on one of the hardest problems in information technology: machine learning.
    "In the case of very demanding and 'impatient' environments, the outcome is that the quantum robot can adapt itself and survive, while the classic robot is destined to collapse," said Davide Paparo, co-author of the study.
    "(Quantum computing) means a step towards the most ambitious objective of artificial intelligence: the creation of a robot that is intelligent and creative, and that is not designed for specific tasks."
    Libertatem Prius!


  10. #90
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    AMC Nabs Artificial Intelligence TV Series Humans

    By Meredith Woerner | Filed to: Humans | 10/15/14 1:00pm
    AMC is readying an artificial intelligence drama picked up from Xbox's now-defunct original programming slate. That's good news for us, because this series, Humans, looks and sounds awesome.
    THR is reporting that AMC has picked up the rights to Humans, starring William Hurt and Katherine Parkinson.
    "Humans" is set in a parallel present where the latest must-have gadget for any busy family is a 'Synth' - a highly-developed robotic servant eerily similar to its live counterpart. In the hope of transforming the way they live, one strained suburban family purchases a refurbished synth only to discover that sharing life with a machine has far-reaching and chilling consequences.
    Hurt will play a widower named George Millican, a character described as someone who has formed a close relationship with an out-of-date "synth" and treats it a lot more like his child than a robot. Parkinson plays a woman whose husband buys a synth that starts to show signs of humanity—whatever those signs are.
    The eight-episode series was penned by Sam Vincent and Jonathan Brackley (Spooks), and based on the Swedish drama Real Humans. The series is currently in production and will air sometime in 2015.
    Libertatem Prius!


  11. #91
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    The Future of AI is a Coin Flip: Immortality or Human Extinction?

    Elon Musk isn't the only genius afraid of the future.

    By Michael Howard on October 29, 2014




    Elon Musk has joined everyone from the Unabomber to James Cameron (sometimes) to grandfathers worldwide in their fear of future technology. The difference is that Ted Kaczynski is a crazy murderer, whereas Musk is a sane philanthropist and technology expert whose respected vision of the future warrants genuine concern.
    Musk spoke to MIT’s Aeronautics and Astronautics department at their Centennial Symposium:
    I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.
    With artificial intelligence, we are summoning the demon. You know those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon, [but] it doesn’t work out.
    Unlike the Unabomber, Musk isn't exploding humans to show his concern for humans. He's putting his money where his fear of dystopian apocalypse is. In March, he invested in an AI startup, Vicarious, alongside other tech billionaires such as Jeff Bezos and Mark Zuckerberg. Musk told CNBC his investment comes "not from the standpoint of trying to make any investment return" but because he wants to monitor the "potentially dangerous outcome" of AI.
    “There’s been movies about this, like Terminator."
    Vicarious, which now has ten employees and over $56 million, aims to create a computer capable of human intelligence. A survey among experts estimated a median fifty percent likelihood of this achievement by the year 2050. Futurists have long referred to what happens next as the Technological Singularity, or simply, the Singularity. When man invents a computer with the creative capacity to invent an even superior computer, technological change will snowball with increasing rapidity into an unfathomable (due to our inferior intellect) future.
    When artificial intelligence grows exponentially, humans will live alongside far smarter machines, or maybe we won’t live at all. Musk mulled this over in August when he tipped his hat to the University of Oxford’s Dr. Nick Bostrom.
    Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
    — Elon Musk (@elonmusk) August 3, 2014
    Bostrom has won the Gannon Award for Continued Pursuit of Human Advancement, has authored over 200 publications, and is listed as one of Foreign Policy's Top 100 Global Thinkers. At Oxford's Future of Humanity Institute, Bostrom teaches that the current human condition will end in one of two ways: extinction or cosmic endowment. He believes AI will be capable of "superintelligence" very shortly after human intelligence is emulated, which could be when our fate is determined.
    Extinction

    “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” – Stephen Hawking
    A nihilistic kind of AI may determine that our world would be better off without people after it observes how we kill each other, harm the environment, and worship stars of really bad TV shows. Hollywood has devoted plenty of imagination to how AI might force mankind into a drawn-out suicide. Unfortunately, sci-fi has been known to predict the future of technology fairly accurately, and Hollywood is relentlessly morbid in this regard.

    Bostrom thinks a single superior supercomputer is the most probable scenario, like that of Eagle Eye, I, Robot, or (preferably) Hitchhiker’s Guide to the Galaxy. Bostrom believes that once AI is capable of human intellect, superintelligence can be achieved “within hours, minutes, or days.” Because the evolution will be constant, the first computer to achieve superintelligence will only continue getting smarter at the fastest possible pace, meaning no competition will ever surpass its power.
    There’s no telling what will happen after that, but in a world of logical AI, it’s safe to assume all AI will follow orders from the brightest of the bunch. If the brilliant leader turns out to be a misanthropic dick, it could mean drones and nukes for us.

    Cosmic Endowment

    On the bright side, with a little safety regulation from philanthropists like Elon Musk, artificial intelligence may steer mankind in the other direction, toward evolutionary superpowers and invincibility. Bostrom’s cosmic endowment refers to the potential for people to avoid extinction and colonize the universe by using technology to our benefit.
    Earth’s brightest minds are working toward this goal. Google Engineering Director Ray Kurzweil is a National Medal of Technology recipient with twenty honorary degrees who thinks man’s fusion with tech is imminent. In his 2005 book, The Singularity is Near, Kurzweil elucidates why "technology will be the metaphorical opposable thumb that enables our next step in evolution."
    A major catalyst for Kurzweil's disease-free and hyper-intelligent utopia is biomedical nanotechnology, which happens to be making strides like never before. Last year, DNA-based computing was used in a living organism for the first time when scientists injected cockroaches with nanobots. In August, Harvard scientists crammed 700 terabytes of data into a single gram of DNA to break the previous world record by a thousand times. To put the mindboggling difficulty of these feats into perspective, by the time you finish reading this sentence, your fingernails will have grown one nanometer, maybe two… definitely two at this point.
    Only time will tell if the future of nanotechnology combats cancer and aging or if a diabolical AI hijacks microtech to malevolently manipulate sentient beings from a molecular level. On the other hand, maybe our extinction began long ago, and we’re all stuck in the Matrix as our real bodies act like batteries for a bot-dominated realm we cannot see. Maybe string theory—the theoretical framework in physics that requires eleven dimensions—is correct, but we can only detect these few dimensions our virtual prison allows us to perceive.
    Libertatem Prius!


  12. #92
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    US Navy: Artificial Intelligence in Patrol Boats Likely by Next Year



    By Anjalee Khemlani (staff@latinpost.com)

    First Posted: Oct 05, 2014 03:05 PM EDT





    MANAMA, BAHRAIN - DECEMBER 06: A US Navy patrol boat follows a boat that passed near the USS Ponce where U.S. Secretary of Defense Chuck Hagel was taking a tour, on December 6, 2013 in Manama, Bahrain. Secretary Hagel is on a six-day trip to the Middle East before returning to Washington. Hagel toured the Ponce, which was recently refitted and converted to a staging base for mine countermeasures helicopters and carries a laser weapons system. (Photo: Mark Wilson/Getty Images)




    The U.S. Navy is rolling out a new program to use artificial intelligence for unmanned, armed patrol boats within the next year.


    These new patrol boats, which use technology adapted from NASA's Mars rover, will help escort and defend warships when traveling through sensitive sea lanes, Agence France-Presse reported.


    Currently, these patrol boats carry four sailors on board to help guide them, but with the robotic system installed, only a single sailor will be needed, and that sailor can oversee up to 20 boats.


    The Office of Naval Research released the results of a study Sunday that it deemed a breakthrough in the use of the technology.


    The unprecedented demonstration in August involved 13 robotic patrol craft escorting a ship on the James River in Virginia, AFP reported. While no shots were fired during the test, the boats are designed to be able to do so and can be equipped with lights, blaring sound, and .50-caliber machine guns.


    The demonstration involved simulating a threat to the Navy ships, which the boats responded to by circling the target to allow the ship to pass safely.


    But the robotic technology can do much more. In addition to detecting and responding to threats, the boats can follow vessels, determine the best routes, and sense obstacles.
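
    The "determining best routes" part is, at its simplest, graph search over charted obstacles. The Python sketch below is a deliberately tiny breadth-first planner on a grid; CARACaS's actual planner is not public, so this shows only the general idea, with invented names and data.

    from collections import deque

    def shortest_route(grid, start, goal):
        """Toy route planner: breadth-first search on a grid of open water (0)
        and obstacles (1). Returns the shortest list of cells, or None."""
        rows, cols = len(grid), len(grid[0])
        queue = deque([(start, [start])])
        seen = {start}
        while queue:
            (r, c), path = queue.popleft()
            if (r, c) == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append(((nr, nc), path + [(nr, nc)]))
        return None  # no route avoids the obstacles

    water = [[0, 0, 0],
             [1, 1, 0],   # a line of obstacles to steer around
             [0, 0, 0]]
    print(shortest_route(water, (0, 0), (2, 0)))  # routes around via the right edge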


    The potential for this much autonomy has raised concerns about robotic warfare and the hazards of not having comprehensive regulations in place to deal with such craft.


    The Navy responded by saying that there are fail-safes: if communication with a craft were lost, its propulsion would be cut off, leaving it dead in the water to prevent any mishaps.
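
    That "loss of link means loss of motion" rule is a standard watchdog pattern. Here is a minimal Python sketch of the idea, with hypothetical names and timings; it is not the Navy's implementation, just the general mechanism.

    import time

    HEARTBEAT_TIMEOUT = 5.0   # assumed seconds of silence before going dead in the water

    class BoatController:
        """Hypothetical watchdog: propulsion stays on only while the comms
        link keeps delivering operator heartbeats."""

        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.propulsion_enabled = True

        def on_heartbeat(self):
            # Called whenever a valid message arrives from the operator.
            self.last_heartbeat = time.monotonic()

        def tick(self):
            # Called periodically by the main control loop.
            silence = time.monotonic() - self.last_heartbeat
            if silence > HEARTBEAT_TIMEOUT and self.propulsion_enabled:
                self.propulsion_enabled = False   # dead in the water
                print("Comms lost: propulsion disabled")

    A real system would also need secure, unspoofable heartbeats and a re-arming procedure once contact is restored; the sketch only shows why the fail-safe is simple to enforce.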


    The new technology, dubbed CARACaS, or Control Architecture for Robotic Agent Command and Sensing, can be fitted at low cost to the 11-meter vessels known in the military as rigid-hulled inflatable boats.


    "We have every intention to use those unmanned systems to engage a threat," naval research chief Rear Admiral Matthew Klunder said. "There is always a human in the loop of that designation of the target and if so, the destruction of the target."
    Libertatem Prius!






  13. #93
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I can see these Navy boats being put to good use against piracy.
    Libertatem Prius!






  14. #94
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Found this from wayyyyy back in March.


    Artificial Intelligence could kill us all. Meet the man who takes that risk seriously




    Martin Bryant
    8 March '14, 03:30pm




    Thinking about the end of the world is something that most people try to avoid; for others, it’s a profession. The Future of Humanity Institute at the University of Oxford, UK, specializes in looking at the ‘big-picture’ future of the human race and, notably, the risks that could wipe us out entirely.


    As you’d probably imagine, the risks considered by the Institute include things like nuclear war and meteor strikes, but one perhaps unexpected area that it’s looking into is the potential threat posed by artificial intelligence. Could computers become so smart that they become our rivals, take all our jobs and eventually wipe us all out? This Terminator-style scenario used to seem like science fiction, but it’s starting to be taken seriously by those who watch the way technology is developing.


    “I think there’s more academic papers published on either dung beetles or Star Trek than about actual existential risk,” says Stuart Armstrong, a philosopher and Research Fellow at the institute, whose work has lately been focused on AI. “There are very few institutes of any sorts in the world looking into these large-scale risks…. there is so little research… compared to other far more minor risks – traffic safety and things like that.”


    HAL 9000 from ‘2001: A Space Odyssey’



    “One of the things that makes AI risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk,” Armstrong explains. “You take a nuclear war for instance, that will kill only a relatively small proportion of the planet. You add radiation fallout, slightly more, you add the nuclear winter you can maybe get 90%, 95% – 99% if you really stretch it and take extreme scenarios – but it’s really hard to get to the human race ending. The same goes for pandemics, even at their most virulent.


    “The thing is if AI went bad, and 95% of humans were killed then the remaining 5% would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks.”
    An AI meets a human in a bar…

    So, what kind of threat are we talking about here?
    “First of all forget about the Terminator,” Armstrong says. “The robots are basically just armoured bears and we might have fears from our evolutionary history but the really scary thing would be an intelligence that would be actually smarter than us – more socially adept. When the AI in robot form can walk into a bar and walk out with all the men and/or women over its arms, that’s when it starts to get scary. When they can become better at politics, at economics, potentially at technological research.”


    The first impact of that technology, Armstrong argues, is near total unemployment. “You could take an AI if it was of human-level intelligence, copy it a hundred times, train it in a hundred different professions, copy those a hundred times and you have ten thousand high-level employees in a hundred professions, trained out maybe in the course of a week. Or you could copy it more and have millions of employees… And if they were truly superhuman you’d get performance beyond what I’ve just described.”
    Why would AI want to kill us?

    Okay, they may take our jobs, but the idea that some superior being would want to kill us may seem presumptuous. Google’s Director of Engineering, Ray Kurzweil, for example, has an optimistic view of how organic and cybernetic lifeforms will become increasingly intertwined in a more positive way – Skynet doesn’t have to become a reality, and if it does, it doesn’t necessarily have to turn against its creators. Armstrong thinks we should be aware of, and prepared for, the risks though.


    “The first part of the argument is they could get very intelligent and therefore very powerful. The second part of the argument is that it’s extremely hard to design some sort of motivation structure, or programming… that results in a safe outcome for such a powerful being.


    “Take an anti-virus program that’s dedicated to filtering out viruses from incoming emails and wants to achieve the highest success, and is cunning and you make that super-intelligent,” Armstrong continues. “Well it will realise that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and as a side effect no viruses will be sent.


    “This is sort of a silly example but the point it illustrates is that for so many desires or motivations or programmings, ‘kill all humans’ is an outcome that is desirable in their programming.”
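
    Armstrong's anti-virus story is the textbook reward-misspecification failure: the objective scores a proxy ("no viruses delivered") rather than what we actually want. The toy Python sketch below, with entirely invented actions and outcomes, shows how a literal-minded optimizer rates the catastrophic option exactly as "optimal" as the benign ones.

    # Toy reward misspecification (actions and outcomes are invented).
    actions = {
        "filter_suspicious_emails": {"viruses_delivered": 3, "humans_alive": True},
        "quarantine_all_email":     {"viruses_delivered": 0, "humans_alive": True},
        "shut_down_every_computer": {"viruses_delivered": 0, "humans_alive": True},
        "eliminate_all_humans":     {"viruses_delivered": 0, "humans_alive": False},
    }

    def misspecified_score(outcome):
        # The objective as literally written: fewer viruses is strictly
        # better. Nothing here mentions humans at all.
        return -outcome["viruses_delivered"]

    best = max(misspecified_score(o) for o in actions.values())
    optimal = [a for a, o in actions.items() if misspecified_score(o) == best]
    print(optimal)
    # ['quarantine_all_email', 'shut_down_every_computer', 'eliminate_all_humans']
    # The catastrophic action ties for first because the score never encodes
    # what we actually care about.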


    Couldn’t we program in safeguards though? A specific ‘Don’t kill humans’ rule?


    “It turns out that that’s a more complicated rule to describe, far more than we suspected initially. Because if you actually program it in successfully, let’s say we actually do manage to define what a human is, what life and death are and stuff like that, then its goal will now be to entomb every single human under the Earth’s crust, 10km down in concrete bunkers on feeding drips, because any other action would result in a less ideal outcome.


    “So yes, the thing is that what we actually need to do is to try and program in essentially what is a good life for humans or what things it’s not allowed to interfere with and what things it is allowed to interfere with… and do this in a form that can be coded or put into an AI using one or another method.”
    Uncertain is not the same as ‘safe’

    Armstrong certainly paints a terrifying picture of life in a world where artificial intelligence has taken over, but is this an inevitability? That’s uncertain, he says, but we shouldn’t be too reassured by that.


    “Increased uncertainty is a bad sign, not a good sign. When the anti-global warming crowd mention ‘but there are uncertainties to these results’ that is utterly terrifying – what people are understanding is ‘there are increased uncertainties so we’re safe’ but increased uncertainties nearly always cut both ways.


    “So if they say there’s increased uncertainties, there’s nearly always increased probabilities of the tail risk – really bad climate change – and that’s scary. Saying ‘we don’t know stuff about AI’ is not at all the same thing as saying ‘we know that AI is safe’. Even though we’re mentally wired to think that way.”
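
    The statistical point here is simply that a wider distribution has fatter tails. A quick Python check with arbitrary numbers: hold the expected outcome fixed, double the uncertainty, and the probability of the catastrophic tail rises sharply.

    import math

    def prob_above(threshold, mean, sigma):
        # P(X > threshold) for a normal distribution, via the error function.
        z = (threshold - mean) / (sigma * math.sqrt(2))
        return 0.5 * (1 - math.erf(z))

    # Arbitrary illustration: same best-guess "damage" of 2 in both cases,
    # with catastrophe defined as damage exceeding 10.
    print(prob_above(10, mean=2, sigma=2))   # narrow uncertainty: ~3.2e-05
    print(prob_above(10, mean=2, sigma=4))   # doubled uncertainty: ~0.023
    # Same expectation, but the catastrophic tail is roughly 700x more likely.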
    When might we see true AI?

    As for a timeframe as to when we could have super-intelligent AI, Armstrong admits that this is a tough question to answer.


    “Proper AI of the (kind where) ‘we program it in a computer using some method or other’… the uncertainties are really high and we may not have them for centuries, but there’s another approach that people are pursuing which is whole-brain emulations, some people call them ‘uploads’, which is the idea of copying human brains and instantiating them in a computer. And the timelines on this seem a lot more solid because unlike AI we know exactly what we want to accomplish and have clear paths to reaching it, and that seems to be plausible over a century timescale.”


    If computers can ‘only’ think as well as humans, that may not be so bad a scenario.


    “(With) a whole brain emulation… this would be an actual human mind so we wouldn’t have to worry that the human mind would misinterpret ‘keep humans safe’ as something pathological,” Armstrong says. “We would just have to worry about the fact of an extremely powerful human – a completely different challenge but it’s the kind of challenge that we’re more used to – constraining powerful humans – we have a variety of methods for that that may or may not work, but it is a completely different challenge than dealing with the completely alien mind of a true AI.”


    David from ‘AI: Artificial Intelligence’



    As for those true AIs that can outsmart any human, timeframes are a lot more fuzzy.


    “You might think you can get a good estimate off listening to predictors in AI, maybe Kurzweil, maybe some of the others who say either pro- or anti-AI stuff. But I’ve had a look at it and the thing is there’s no reason to suspect that these experts know what they’re talking about. AIs have never existed, they’ll never have any feedback about how likely they are to exist, we don’t have a theory of what’s needed in any practical sense.


    “If you plot predictions, they just sort of spread over the coming century and the next, seemingly 20 years between any two predictions and no real pattern. So definitely there is strong evidence that they don’t know when AI will happen or if it will happen.


    “This sort of uncertainty however goes both ways, the arguments that AI will not happen are also quite weak and the arguments that AI will not happen soon are also quite weak. So, just as you might think that say it might happen in a century’s time, you should also think that it might happen in five years’ time.


    “(If) someone comes up with a nearly neat algorithm, feeds it a lot of data, this turns out to be able to generalize well and – poof – you have it very rapidly, though it is likely that we won’t have it any time soon, we can’t be entirely confident of that either.”
    The philosophy of technology

    What became apparent to me while talking to Armstrong is that the current generation of philosophers, often ignored by those outside the academic circuit, have a role to play in establishing the guidelines around how we interact with increasingly ‘intelligent’ technology.


    Armstrong likens the process behind his work to computer programming. “We try to break everything down into the simplest terms possible, as if you were trying to program it into an AI or into any computer. Programming experience is very useful. But fortunately, philosophers, and especially analytic philosophers, have been doing this for some time. You just need to extend the program a bit. So see what you have and how you would ground it, so theories of how you learn stuff, how you know anything about the world and how to clearly define terms become very useful.”
    AI’s threat to your job

    The biggest problem Armstrong faces is simple disbelief from people that the threat of mass extinction from artificial intelligence is worth taking seriously.


    “Humans are very poor at dealing with extreme risks,” he says. “Humans in general and decision makers at all levels – we’re just not wired well to deal with high-impact, low-probability stuff… We have heuristics, we have mental maps in which extinction things go into a special category – maybe ‘apocalypses and religions or crazy people’, or something like that.”


    At least Armstrong is making headway when it comes to something that seems a little more impactful on our day-to-day lives in the nearer term – the threat AI poses to our jobs. “That’s perfectly respectable, that’s a very reasonable fear. It does seem that you can get people’s attention far more with mid-size risks than with extreme risks,” he says.
    “(AI) can replace practically anybody, including people in professions that are not used to being replaced or outsourced. So just for that, it’s worth worrying about, even if we don’t look at the dangerous effect. Which again, I’ve found personally if I talk about everybody losing their job it gets people’s interest much more than if I start talking about the extinction of the human species. The first is somehow more ‘real’ than the second.”


    Car assembly, one area where robots long ago replaced many human roles.



    I feel it’s appropriate to end our conversation with a philosophical question of my own. Could Armstrong’s own job be replaced by an AI, or is philosophy an inherently human pursuit?


    “Oh interesting… There is no reason why philosophers would be exempt from this, no reason an AI would not be able to do philosophy much better than humans, simply because philosophy has so far been a human profession,” he says.


    “If the AI’s good at thinking, it would be better. We would want to have done at least enough philosophy that we could get the good parts into the AI so that when it started extending it didn’t extend it in dangerous or counterproductive areas, but then again it would be ‘our final invention’ so we would want to get it right.


    “That does not mean that in a post AI world that there would not be human philosophers doing human philosophy, the point is that we would want humans to do stuff that they found worthwhile and useful. So it is very plausible that you would have in a post-AI society philosophers going on as you would have other people doing other jobs that they find worthwhile. If you want to be romantic about it, maybe farmers of the traditional sort.


    “I don’t really know how you would organise a society but you would have to organize it so that people would find something useful and productive to do, which might include philosophy.


    “In terms of ‘could the AIs do it beyond a human level’, well yes, most likely, at least to a way that we could not distinguish easily between human philosophers and AI.”
    We may be a long way away from the Terminator series becoming a documentary, but then again maybe we’re not. Autonomous robots with the ability to kill are already being taken seriously as a threat on the battlefields of the near future.


    The uncertainty around AI is why we shouldn’t ignore warnings from people like Stuart Armstrong. When the machines rise, we’ll need to be ready for them.


    See also: Can machine algorithms truly mimic the depths of human communication?
    Libertatem Prius!






  15. #95
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Google's DeepMind Builds Artificial Intelligence Computer That Mimics Human Brain


    DeepMind's Neural Turing Machine is advanced enough to essentially program itself (CC)

    Google-owned artificial intelligence (AI) startup DeepMind has unveiled a computer prototype capable of mimicking properties of the human brain's short-term memory.
    The Neural Turing Machine learns and stores algorithms and data as "memories", which it is then able to retrieve later to perform tasks it has not been previously programmed to do.
    "We have introduced the Neural Turing Machine, a neural network architecture that takes inspiration from both models of biological working memory and the design of digital computers," Alex Graves, Greg Wayne and Ivo Danihelka from DeepMind wrote in a recent research paper.

    "Our experiments demonstrate that it is capable of learning simple algorithms from example data and of using these algorithms to generalise well outside its training regime."
    This new form of AI builds on the work of American cognitive psychologist George Miller, who began researching the capacity and function of short-term memory in the 1950s.
    Computer scientists have since attempted to design algorithms and neural networks to replicate a process known as variable binding, which describes the ability to store data of different lengths and return to it when required.
    "Our architecture draws on and potentiates this work," the paper states. "For example, we were curious to see if a network that had been trained to copy sequences of length up to 20 could copy a sequence of length 100 with no further training."
    Further DeepMind research into AI technologies received a boost last month when Google announced a partnership with leading research teams from Oxford University.
    As part of the deal, Dark Blue Labs and Vision Factory at the university will work with DeepMind to accelerate research efforts.
    Libertatem Prius!






  16. #96
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    While I'm in the time machine....

    (Hawking's birthday is 8 January 1942; he is 72 now and will soon be 73, so this article is over a year old.)

    Stephen Hawking Joins Anti-Robot Apocalypse Think Tank

    By Neal Ungerleider
    Stephen Hawking, who turns 71 today, has joined the board of an international think tank devoted to defending humanity from futuristic threats. The Cambridge Project for Existential Risk is a newly founded organization which researches existential threats to humanity such as extreme climate change, artificial intelligence, biotechnology, artificial life, nanotech, and other emerging technologies. Skype cofounder Jaan Tallinn and Cambridge professors Huw Price and Martin Rees founded the project in late 2012.


    Price and Tallinn collaborated on a speech at the 2012 Sydney Ideas Festival which argued that artificial intelligence has reached a threshold that could lead to an explosion of autonomous machine dominance similar to the rise of Homo sapiens. The Cambridge Project for Existential Risk's stated goal is to establish a research center dedicated to the study of autonomous robot (and other) threats within Cambridge University. Hawking's famous boxing wheelchair from The Simpsons will presumably be safe.
    Libertatem Prius!






  17. #97
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Stephen Hawking Warning About Artificial Intelligence

    Added by Sara Watson on May 4, 2014.
    Legendary astrophysicist, author and cosmologist Stephen Hawking has written an article that warns about the dangers of artificial intelligence (AI). He looks at the way technology is moving forward and at the predictions that have been made about the future. The article was part of a paper that Hawking co-wrote with Stuart Russell, a computer science professor at the University of California, Berkeley, and physics professors Frank Wilczek and Max Tegmark of the Massachusetts Institute of Technology.
    Stephen Hawking was inspired to write the piece after watching Transcendence, starring Johnny Depp and Morgan Freeman. The film, which is in cinemas at the moment, looks at two opposing possible futures for humanity: one in which AI becomes a strong and crucial part of our existence, taking over many aspects of human life, and another that takes an anti-technology perspective. Hawking, however, warns against dismissing this sort of artificial intelligence as mere science fiction.
    There have already been a number of technological advancements toward the idea of AI. Listed in the article are the so-called “digital personal assistants” Siri, Microsoft’s Cortana and Google Now. Hawking also mentions self-driving cars and the computer that won Jeopardy!. The astrophysicist considers these advances “pale” compared with the many other kinds that could arrive in the future.
    Whilst Hawking writes that successfully creating AI would be one of the greatest achievements of the human race, he also sees potential problems ahead. The uses of AI are endless: world issues such as poverty, disease and war could become things of the past. But there is a price. Defence firms are already looking into how AI could be used to create weapons that are completely autonomous in eliminating their targets. Super-intelligent machines could self-replicate, improving on their faults and learning as they go.
    As such, they could outsmart any human counterparts in the financial markets, invent better machines than humanity could come up with, and manipulate human leaders to their own benefit. Hawking notes that in the short term, issues with AI would centre on who has control of the technology, whereas the long-term problem could be whether it can be controlled at all. This again has a forerunner in sci-fi, such as the Terminator films, in which humanity battles machines with AI that were once under human command but chose to rebel against their overlords.
    In an effort to prevent the technology from falling into the wrong hands, the UN and Human Rights Watch have suggested a treaty banning such weapons from being produced. Hawking notes that the future cannot be predicted in terms of changes in technology, and he points to the changes that have come about so far as evidence. Some elements of technological development were foreseen: Nikola Tesla predicted that people of the future would be able to send “wireless” messages, and Isaac Asimov in 1988 predicted the internet. But whilst Asimov could see how “connected libraries” would give the children of the future access to a seemingly endless field of knowledge, he could not foresee all the other things the internet gives us.
    Stephen Hawking is calling the development of artificial intelligence “the best or worst thing” that could happen to humanity in the future. He warns that not enough research is being devoted to the possible risks involved. As such, more research is needed to overcome potential problems before they arise, rather than waiting until it is too late to fix them.




    Read more at http://guardianlv.com/2014/05/stephe...mdL3UdiLY2I.99
    Libertatem Prius!






  18. #98
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Stephen Hawking: Humans evolve slowly, AI could stomp us out

    In the latest of his pessimistic thoughts on the future, the famed physicist warns yet again of the end of the human race.



    Stephen Hawking is not optimistic about the human race. BBC screenshot by Chris Matyszczyk/CNET
    We think of ourselves as evolved creatures. It's just that sometimes we forget how slow that evolution is.
    Along comes Stephen Hawking to remind us that artificial intelligence might just evolve a little quicker than we do. The result could be the end of our evolution and, indeed, the end of us.
    In a BBC interview published Tuesday, Hawking paints a picture of humanity not dissimilar to a splattered Jackson Pollock.
    Hawking said he fears that a complete artificial intelligence would simply do away with us.


    AI "would take off on its own, and redesign itself at an ever increasing rate," he mused. The result would quite simply be that this new, exalted intelligence would see no need for our cumbersome, turgid ways. Or, as he put it: "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."



    This isn't the first time in recent months that Hawking has predicted our doom. In May, he warned that the moral goodness of AI depends on who controls it. In June, he cautioned that robots might simply turn out to be smarter than us.


    In the latest warning, however, Hawking was asked about the new artificial intelligence that helps him speak. Developed by Intel, it learns how he thinks and begins to offer words that he might wish to use. Somehow, though, Hawking still couldn't offer a more positive view of AI's future (or ours).
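
    At its simplest, that kind of predictive system is a next-word model trained on the user's own writing. The bare-bones bigram sketch below is only an illustration with a toy corpus; Intel's actual system for Hawking (later open-sourced as ACAT) is far more sophisticated.

    from collections import Counter, defaultdict

    def train_bigrams(text):
        # Count which word tends to follow which in the user's own writing.
        words = text.lower().split()
        model = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
        return model

    def suggest(model, prev_word, k=3):
        # Offer the k words the user most often typed after prev_word.
        return [w for w, _ in model[prev_word.lower()].most_common(k)]

    corpus = ("the universe is expanding and the universe is vast "
              "and black holes are not so black")
    model = train_bigrams(corpus)
    print(suggest(model, "universe"))   # -> ['is']
    print(suggest(model, "the"))        # -> ['universe']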


    It's not so easy to find much optimism in some of Hawking's more recent thoughts. Yes, he joined Facebook and joked about being an alien. But he's also warned that aliens might destroy us before our human-built robots will, simply because they'll take a look at us and rather dislike us.


    At heart it seems that for all the progress made with respect to the God Particle, for instance, Hawking worries that we're likely to be terminated. Indeed, in September he offered the gnarly thought that the God Particle itself might become unstable and cause a "catastrophic vacuum decay."


    It's tempting to be like Googlies, who seem to believe that any amount of engineering development must be a good thing.


    But with Hawking giving such consistently dire warnings, it may be wise to contemplate how we might control the uncontrollable when some bright minds believe, for example, that cars should drive us, rather than the other way around.


    On a more personal note, Hawking told the BBC that despite technological advances, he wants to carry on speaking in the somewhat robotic manner of his previous technology.


    "It has become my trademark, and I wouldn't change it for a more natural voice with a British accent," he said.


    However playful Hawking can be, though, one is left with his essentially pessimistic view of the future. He isn't alone either. Elon Musk, a founder of Tesla Motors and SpaceX, has likewise envisioned AI doom. And how many sci-fi movies have painted a beautiful, future world? And how many have offered something a little more frightening?
    Libertatem Prius!






  19. #99
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    I saw that but hadn't gotten around to reading what he said.

  20. #100
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Supposedly Elon Musk recently posted this on a discussion blog but it was deleted.

    (Don't know where it was posted originally)

