Page 7 of 7
Results 121 to 134 of 134

Thread: Our Final Invention: How the Human Race Goes and Gets Itself Killed

  1. #121
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    lol.

    I sent the link to Ryan, but he said he'd already posted it.

    Funny, I wasn't looking at it from the same point of view you guys were... but I can see it now. haha
    Libertatem Prius!






  2. #122
    Postman vector7's Avatar
    Join Date
    Feb 2007
    Location
    Where it's quiet, peaceful and everyone owns guns
    Posts
    21,663
    Thanks
    30
    Thanked 73 Times in 68 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Boston Dynamics Unveils Its Latest "Nightmare-Inducing" Robot

    by Tyler Durden
    Feb 27, 2017 10:13 PM

    One year ago, when we showed readers the SkyNet-like robots produced by Boston Dynamics, a company acquired by Google in 2013 (which then tried to flip it to Toyota last year but reportedly failed), we called the robotic creations "terrifying." Little did we know that compared to Boston Dynamics' next spawn, that particular batch was downright Johnny 5-friendly. Because after being briefly shown off at an event early this month, the robotics firm has officially revealed its latest creation, “Handle,” which the company’s founder previously described as “nightmare-inducing."

    Four weeks ago, Boston Dynamics - which is best known for its bipedal and quadrupedal robots - revealed it had been experimenting with some radical new tech: the wheel. The company's new wheeled, upright robot is named Handle (“because it’s supposed to handle objects”) and looks like a cross between a Segway and the two-legged Atlas bot, according to the Verge. Handle, which had not been officially unveiled yet, was shown off by company founder Marc Raibert in a presentation to investors. Footage of the presentation was uploaded to YouTube by venture capitalist Steve Jurvetson.

    Creating a more efficient robot that can, pardon the pun, handle basic tasks like moving objects around a warehouse would certainly be of benefit for Boston Dynamics. Although the company has consistently wowed the public with its robots, it’s struggled to produce a commercial product that’s ready for the real world. That may soon change.

    Raibert described Handle as an “experiment in combining wheels with legs, with a very dynamic system that is balancing itself all the time and has a lot of knowledge of how to throw its weight around.” He added that using wheels is more efficient than legs, although there’s obviously a trade-off in terms of maneuvering over uneven ground.
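
    Boston Dynamics has published nothing about Handle's actual control stack, so purely as a hedged illustration of what "balancing itself all the time" means in practice, here is a toy wheeled-inverted-pendulum loop in Python: the controller measures tilt and commands wheel acceleration to push the base back under the centre of mass. Every constant and gain below is invented for the sketch, not taken from Handle.

    # Toy wheeled-inverted-pendulum balance loop. All constants and gains are
    # illustrative; this is not Boston Dynamics' controller.
    import math

    g, l, dt = 9.81, 1.0, 0.01   # gravity (m/s^2), body length (m), time step (s)
    kp, kd = 60.0, 12.0          # hand-tuned PD gains on the tilt angle

    theta, omega = 0.15, 0.0     # initial tilt (rad) and tilt rate (rad/s)

    for step in range(300):      # simulate three seconds
        # PD controller: accelerate the wheels toward the direction of the lean.
        wheel_accel = kp * theta + kd * omega
        # Simplified dynamics: gravity tips the body over, while accelerating
        # the base back under the centre of mass rights it again.
        alpha = (g / l) * math.sin(theta) - (wheel_accel / l) * math.cos(theta)
        omega += alpha * dt
        theta += omega * dt
        if step % 50 == 0:
            print(f"t={step * dt:4.2f}s  tilt={math.degrees(theta):6.2f} deg")

    Run as-is, the tilt decays from about 8.6 degrees to near zero, which is the whole trick: keep moving the wheels so the base stays under the body.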

    “This is the debut presentation of what I think will be a nightmare-inducing robot,” said Raibert.

    He wasn't kidding: as the video below reveals, Handle is officially about 6 foot 5, weighs about 100 lbs, and can roll around at about 9 mph, all while preserving perfect balance and even engaging in complex aerial acrobatics: Handle can keep its balance over rough terrain, jump 4 feet in the air, and go down stairs without an issue.



    While we are confident Amazon will promptly order a few thousand of these to bring even more streamlined automation and efficiency to its behemoth warehouses while putting countless part-time workers out of work, we don't know whether to dread or yearn for the moment when RoboHandle emerges on a quiet patrol of your neighborhood street, armed and ready to use lethal force, gradually replacing local police forces around the country.

    http://www.zerohedge.com/news/2017-0...oston-dynamics

    Nikita Khrushchev: "We will bury you"
    "Your grandchildren will live under communism."
    “You Americans are so gullible. No, you won’t accept communism outright, but we’ll keep feeding you small doses of socialism until you’ll finally wake up and find you already have communism. We won’t have to fight you. We’ll so weaken your economy until you’ll fall like overripe fruit into our hands.”



  3. #123
    Super Moderator Malsua's Avatar
    Join Date
    Jul 2005
    Posts
    8,020
    Thanks
    2
    Thanked 19 Times in 18 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    That one is pretty cool!

    I'm currently still enjoying my latest flying robot.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  4. #124
    Postman vector7's Avatar
    Join Date
    Feb 2007
    Location
    Where it's quiet, peaceful and everyone owns guns
    Posts
    21,663
    Thanks
    30
    Thanked 73 Times in 68 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Here's another one.


    Nikita Khrushchev: "We will bury you"
    "Your grandchildren will live under communism."
    “You Americans are so gullible. No, you won’t accept communism outright, but we’ll keep feeding you small doses of socialism until you’ll finally wake up and find you already have communism. We won’t have to fight you. We’ll so weaken your economy until you’ll fall like overripe fruit into our hands.”



  5. #125
    Postman vector7's Avatar
    Join Date
    Feb 2007
    Location
    Where it's quiet, peaceful and everyone owns guns
    Posts
    21,663
    Thanks
    30
    Thanked 73 Times in 68 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Digital Transformation in 7 Steps

    by Shelly Palmer | February 19, 2017


    Worried about how to digitally transform while you perform? Wondering how to change the tires while going 60 mph? Are you preparing your team for their digital journey? (It’s not a destination.) Are you trying to figure out how to transform yourself (or your company) into a digital powerhouse in Internet time? All of the above?

    The phrase “digital transformation” is so overused that it may be on the brink of its own transformation from a business imperative to a hackneyed refrain. Clichés aside, digital transformation is a business imperative and time is the enemy. So, let’s have a look at seven brain-busting steps that will enable you to create value in your organization through digital transformation.

    1. Awareness


    Digital transformation requires two kinds of awareness: self-awareness and organizational awareness.

    Self-awareness: You must honestly evaluate your personal capabilities. How much did you love or hate math in high school? How much attention did you pay in statistics class in college? Are you a technophobe or a technocrat? Are you excited about learning every day or are you dreading it? These are just a few of the kinds of questions you must honestly answer before you participate in the digital transformation of your world.

    You must evaluate your colleagues, subordinates and superiors as well. If you’re not with the right people, it’s exhausting – and that’s putting it mildly.

    Organizational awareness: Is it possible for you to digitally transform your group, your unit or ultimately your entire organization? There are several obstacles to digital transformation, and the biggest ones are people-centric, such as a superior who doesn’t believe in digital, individuals with an entrenched “digital is for kids” belief system, or a “that’s not the way we do it here” mindset.

    2. Literacy (not fluency)


    You need to be digitally literate. This includes data literacy, coding literacy, machine-learning literacy, math literacy, and question-answering (QA) literacy, to name a few. You need not be fluent. As posited in Data Literacy Will Make You Invincible, “You don’t need to speak French to recognize that the email you just received is written in French. You just need to be literate to the point where you know that Google Translate is not going to get the job done and you need a highly skilled French translator to help you interpret and respond to the communication.”

    3. Strategy


    Successful digital transformation starts with a solid, well-thought-out strategy that clearly identifies your business objectives. You are going to accomplish _____. Whether it’s cost cutting or media optimization or product design or customer service, have a strategy that identifies a 21st-century problem and proposes a 21st-century solution.

    4. Governance


    Misalignment of incentives and outcomes is the number one killer of dreams. There is no way that any of your current employees are going to give you an extra minute of their time at the expense of delivering the outcomes they are incentivized (fiscally governed) to deliver. If you’re expecting a unit to make its revenue numbers for the quarter, it should not surprise you to learn that no one in the unit will even do the pre-reading about the new new thing. They’re not getting paid to do it. They won’t profit from it. To effect digital transformation, you must fiscally govern for it.

    5. Culture


    You must create a culture of innovation where continuous improvement and adaptation to change are constant. While it is difficult to transform a culture of “Always be closing” or “Make your numbers or else,” if you want to digitally transform your organization, you are going to have to do whatever it takes to assemble your orchestra in an environment where the musicians play music, not just notes.

    6. Test, Fail, Learn


    Failure is not an option; it is a probability. Part of a successful culture of innovation is an iterative process for testing, failing, learning, reworking, and repeating the process. This is far easier to say than to do. It is especially difficult when an “intrepreneur” has sold in a vision, built a business plan and created a roadmap and then is forced to follow the road, not the map. This is where leadership outplays management.

    In a true culture of innovation, a leader leads the team in the new direction and leads senior management through the change in plans. Managers, who are destined to fail, try to manage expectations while the team does what it can to serve multiple gods. This situation happens so often it should have its own name.
    7. Build a Yellow Brick Road

    Digital transformation requires all kinds of partnerships. Some will be with old partners, some will be with frenemies, some will be with organizations you could never imagine being partners with. No matter what you call the new form of your digitally transformed organization, it will be based on an extensible platform strategy, and it will empower partners to add value in myriad ways you would never have thought of, or could ever have had time or resources to create.

    To facilitate this part of your digital transformation, you will need to build a Yellow Brick Road that leads directly to your door. For example, the Yellow Brick Road for higher education leads to Harvard. The Yellow Brick Road for technology leads to Silicon Valley. Movies … Hollywood. Advertising … Madison Avenue. Finance … Wall Street. Build a Yellow Brick Road to your organization and the world’s best and brightest will follow it straight to you.

    Adapt or Die!


    You can take your time, but understand that today you are experiencing the slowest rate of technological change you will ever experience for the rest of your life. You really don’t have time to wait. Digital transformation will not get cheaper to do and it will not get faster to do.

    This is the part where you push back and say, “Technology gets faster and cheaper and is doing so at an accelerating rate – that’s why we have to digitally transform. We can wait until we are ready or until we are forced to do it.”
    While everything in the preceding quote is true, the inherent problem is that it will not only be true for you, it will be true for all of your competitors and every startup that’s gunning for a piece of your world. By then, it will be too late. Remember your Darwin: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.” In other words, adapt or die!



    The 5 Jobs Robots Will Take First

    by Shelly Palmer | February 26, 2017



    Oxford University researchers have estimated that 47 percent of U.S. jobs could be automated within the next two decades. But which white-collar jobs will robots take first?

    First, we should define “robots” (for this article only) as technologies, such as machine learning algorithms running on purpose-built computer platforms, that have been trained to perform tasks that currently require humans to perform.

    With this in mind, let’s think about what you’ll do after white-collar work. Oh, and I do have a solution for the short term that will make you the last to lose your job to a robot, but I’m saving it for the end of the article.

    1 – Middle Management


    If your main job function is taking a number from one box in Excel and putting it in another box in Excel and writing a narrative about how the number got from place to place, robots are knocking at your door. Any job where your “special and unique” knowledge of the industry is applied to divine a causal relationship between numbers in a matrix is going to be replaced first. Be ready.

    2 – Commodity Salespeople (Ad Sales, Supplies, etc.)


    Unless you sell dreams or magic or negotiate using special perks, bribes or other valuable add-ons that have nothing to do with specifications, price and availability, start thinking about your next gig. Machines can take so much cost out of any sales process (request for proposal, quotation, order and fulfillment system), it is the fiduciary responsibility of your CEO and the board to hire robots. You’re fighting gravity … get out!

    3 – Report Writers, Journalists, Authors & Announcers

    Writing is tough. But not report writing. Machines can be taught to read data, pattern-match images or video, or analyze almost any kind of research material and produce very readable (or announceable) copy. Text-to-speech systems are evolving so quickly and sound so realistic, I expect both play-by-play and color commentators to be put out of work relatively soon – to say nothing of the numbered days of sports and financial writers. You know that great American novel you’ve been planning to write? Start now, before the machines take a creative writing class.
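
    To make the "machines reading data and writing copy" claim concrete, the simplest version is plain template-driven data-to-text: read a row of figures, detect the pattern, and emit a sentence. The sketch below is only a toy, not any vendor's product; the figures and phrasing rules are invented, and real systems layer statistical pattern detection and far richer language models on top.

    # Toy data-to-text generator: turn quarterly figures into report prose.
    # Figures and phrasing rules are invented for illustration.
    quarters = [
        {"quarter": "Q1", "revenue": 4.2, "prior": 3.9},
        {"quarter": "Q2", "revenue": 3.8, "prior": 4.2},
    ]

    def narrate(row):
        change = (row["revenue"] - row["prior"]) / row["prior"] * 100
        direction = "rose" if change >= 0 else "fell"
        return (f"{row['quarter']} revenue {direction} {abs(change):.1f}% "
                f"to ${row['revenue']:.1f}M from ${row['prior']:.1f}M.")

    for row in quarters:
        print(narrate(row))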

    4 – Accountants & Bookkeepers


    Data processing probably created more jobs than it eliminated, but machine learning–based accountants and bookkeepers will be so much better than their human counterparts, you’re going to want to use the machines. Robo-accounting is in its infancy, but it’s awesome at dealing with accounts payable and receivable, inventory control, auditing and several other accounting functions that used to require humans. Big Four auditing is in for a big shake-up, very soon.

    5 – Doctors


    This may be one of the only guaranteed positive outcomes of robots’ taking human jobs. The current world population of 7.3 billion is expected to reach 8.5 billion by 2030, 9.7 billion in 2050 and 11.2 billion in 2100, according to a new UN DESA (United Nations Department of Economic and Social Affairs) report. In practice, if everyone who ever wanted to be a doctor became one, we still would not have enough doctors.

    The good news is that robots make amazing doctors, diagnosticians and surgeons. According to Memorial Sloan Kettering Cancer Center, IBM’s Watson is teaming up with a dozen US hospitals to offer advice on the best treatments for a range of cancers, and it is also helping to spot early-stage skin cancers. And ultra-precise robo-surgeons are currently used for everything from knee replacement surgery to vision correction. This trend is continuing at an incredible pace. I’m not sure what robodoc bedside manner will be like, but you could program a “Be warm and fuzzy” algorithm and the robodoc would act warm and fuzzy. (Maybe I can get someone to program my human doctors with a warm and fuzzy algorithm?)

    But Very Few Jobs Are Safe


    During the Obama administration, a report of the president was published (it is no longer available at whitehouse.gov, but here’s the original link) that included a very dire prediction: “There is an 83% chance that workers who earn $20 an hour or less could have their jobs replaced by robots in the next five years. Those in the $40 an hour pay range face a 31% chance of having their jobs taken over by the machines.” Clearly, the robots are coming.

    What to Do About It


    In What Will You Do After White-Collar Work?, I propose, “First, technological progress is neither good nor bad; it just is. There’s no point in worrying about it, and there is certainly no point trying to add some narrative about the “good ol’ days.” It won’t help anyone. The good news is that we know what’s coming. All we have to do is adapt.

    Adapting to this change is going to require us to understand how man-machine partnerships are going to evolve. This is tricky, but not impossible. We know that machine learning is going to be used to automate many, if not most, low-level cognitive tasks. Our goal is to use our high-level cognitive ability to anticipate what parts of our work will be fully automated and what parts of our work will be so hard for machines to do that man-machine partnership is the most practical approach.

    With that strategy, we can work on adapting our skills to become better than our peers at leveraging man-machine partnerships. We’ve always been tool-users; now we will become tool-partners.”

    Becoming a great man-machine partner team will not save every job, but it is a clear pathway to prolonging your current career while you figure out what your job must evolve into in order to continue to transfer the value of your personal intellectual property into wealth.


    https://www.youtube.com/watch?v=OCqL...ature=youtu.be


    Life After the Robot Apocalypse

    by Shelly Palmer |



    Two weeks ago, I compiled a list of the 5 jobs robots will take first. Last week, I compiled a list of the 5 jobs robots will take last.

    Both previous essays are about robots replacing human workers who do cognitive nonrepetitive work (such as middle managers, salespersons, tax accountants, and report writers) that most people do not believe robots will be able to do any time soon. For those essays, I defined robots as technologies, such as machine learning algorithms running on purpose-built computer platforms, that have been trained to perform tasks that currently require humans to perform.

    For this writing, let’s expand the definition of robot to any autonomous system designed to do work that used to require humans to perform. And let’s expand our thought experiment to include all four major categories of human tasks: Manual repetitive (predictable), Manual nonrepetitive (not predictable), Cognitive repetitive (predictable), Cognitive nonrepetitive (not predictable). In other words, let’s look at some probable futures of the real world and see where our conclusions lead us.

    Joe Driver


    Before being made eligible for assistance under the Universal Minimum Guaranteed Income Program Act of 2021 (also known as the “U-Min” bill, which guarantees workers displaced by robots a living wage), Joe was a professional driver.

    Wait! Full Stop! Way Too Easy


    Agreed. A huge number of transportation industry professionals will be replaced by autonomous vehicles, and so will dispatchers, warehouse workers and the managers who manage them. That is the easy part.

    For our thought experiment, let’s replace just 20 percent of taxi, car service and truck drivers with autonomous vehicles. Now, let’s think about the businesses that service these workers. The local deli where the drivers used to stop for coffee. The attached convenience store that enables the gas station owner to run a profitable business (because there’s not enough margin in selling gas alone). The quick-serve restaurants, the supermarkets, etc. Let’s try to imagine a world where just 20 percent of transportation industry workers were laid off. Could the businesses that rely on these transportation workers survive the commensurate permanent decline in revenue?

    “This is nonsense,” you say. “These people will be retrained or find other jobs.” I don’t think so, but let’s assume you are right.

    The other jobs (whatever they may be) will have completely different traffic patterns (no pun intended). New behaviors will emerge and the impact of this massive behavior change will be about as pleasant as when the big box stores came to town and literally killed every mom-and-pop retail store on Main Street. Town survived, but it has never looked, felt or been the same.
    In practice, this is just the soundbite version of Robot Apocalypse. Let’s go deeper.

    Joe Executive

    Before qualifying for subsidies under U-Min, Joe was a CPA and a tax auditing partner at a Big Four accounting firm. With more than 15 years of experience working with some of the biggest corporations in the world, his entire department was replaced by AlphaAudit from Google’s DeepMind group. Interestingly, every partner in Joe’s practice area was earning more than $450,000 per year. Some were making north of $2.1 million per year. What will they do now? Where do you take 15–25 years of accounting experience and use it in a nonaccounting job?

    FOMO, “Fear of Missing Out”

    If you’re wondering about the driving force behind the Robot Apocalypse, it’s FOMO, “fear of missing out.” Making the converging trends of on-demand behavior, machine learning and autonomy actionable has gone from talking points to 40 percent of our business in under two years. All of our consulting clients are rushing to put autonomous systems and machine learning tools to work. It’s not that AI systems are “plug ’n’ play” – they are far from it. But I don’t know even one CEO who wants to wake up one morning to the news that a competitor has deployed an automated system that enabled a newsworthy increase in EBITDA. In a corporate world driven by earnings calls, that would be considered a very bad day.

    Which Leads Us to … Life After the Robot Apocalypse


    For this imperfect guessing game about the future, let’s take some real world financial statistics to benchmark the apocalypse.

    The Tax Base of the United States of America


    According to an article on marketwatch.com, “An estimated 45.3% of American households — roughly 77.5 million — will pay no federal individual income tax.” The article goes on to say, “The top 1% of Americans, who have an average income of more than $2.1 million, pay 43.6% of all the federal individual income tax in the U.S.”

    So, what would life be like if 20 percent of the one percent of Americans who pay 43.6 percent of all the federal individual income tax in the United States lost their jobs to robots?
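
    As a rough back-of-the-envelope answer (assuming, crudely, that tax paid is spread evenly within the top one percent, which it is not): losing 20 percent of that group would put roughly a fifth of its 43.6 percent share of receipts at risk.

    # Back-of-the-envelope: federal individual income tax receipts at risk if
    # 20% of the top 1% of earners lost their incomes. Assumes tax paid is
    # spread evenly within the group, which is a deliberate simplification.
    top_one_percent_share = 0.436   # share of receipts, per the figure quoted above
    displaced_fraction = 0.20

    at_risk = top_one_percent_share * displaced_fraction
    print(f"Roughly {at_risk:.1%} of federal individual income tax receipts")
    # -> Roughly 8.7% of federal individual income tax receipts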

    The Spectrum of Probable Futures

    On one extreme end of the spectrum are common post-apocalyptic themes such as spotty power, energy shortages, food shortages, no running water, nonfunctioning schools, limited resources, reduced or nonexistent healthcare, etc. I don’t think this is where we’re headed.

    On the other extreme end of the spectrum is “Robotopia,” a place where humans have more time to do leisure activities, be creative, live life to the fullest, eat gourmet food, drink exotic vintage wines and spirits, practice the arts, and live under the protection of a master artificial intelligence, free from disease, free from fear, free from war … heaven on earth. I don’t think this is where we’re headed either.

    Somewhere in between these two extreme views of life after the Robot Apocalypse is where we are probably going to find ourselves. It’s a world where the tax base has been severely impacted by the redistribution of workers. Wizened, experienced, lifelong professionals are going to find themselves in a new world that has no interest in them. New jobs will be created in industries that do not yet exist. And the physical world will be continuously adapted and optimized to favor autonomous systems that reduce cost, improve efficacy and increase productivity.

    This Is Going to Be a Huge Struggle

    Will we need my hypothetical, Universal Minimum Guaranteed Income Program Act of 2021? We might. Questions like “If we all lose our jobs, who will buy the goods that the robots produce?” are good ones. We won’t all lose our jobs, but a significant percentage of people will and, in the process, be rendered unemployable.

    That said, one friend of mine, who is a renowned public policy expert in D.C., told me that nothing was going to happen because we already have a nontaxpaying population explosion that is completely out of control. He opined that public assistance programs will simply continue to increase until no one except the top .05 percent of wage earners pays for anything.

    The Time for Policy Innovation Is Now

    It’s time for policymakers to approach policy innovation the way our corporate clients are approaching their own digital transformations. As I’ve been saying for years, today we are experiencing the slowest rate of technological change we will ever experience for the rest of our lives. The pace of technological progress is not going to slow down, ever! FOMO is a powerful force that will continue to drive innovation. We get to choose what life after the Robot Apocalypse will be like. Let’s choose wisely.

    Mystery AI Teaches Itself to Lie Better Than Humans, Crushing the Best Players at Poker




    Another game just fell to the machines.




    Yesterday, after 20 days of play at a casino in Pittsburgh, an AI built by two Carnegie Mellon researchers officially defeated four top players at no-limit Texas Hold ‘Em—a particularly complex form of poker that relies heavily on longterm betting strategies and game theory. Over the past twenty years, machines have topped the best humans at checkers, chess, Scrabble, Jeopardy!, and even the ancient game of Go. But no AI had ever beaten the best at such an extreme game of “imperfect information,” a game where certain elements, such as the cards on the table, are hidden. Among humans, no-limit Hold ‘Em requires a certain degree of intuition, not to mention luck.

    ‘We’re playing against each other. But we’re also trying to win for the humans.’

    Carnegie Mellon professor Tuomas Sandholm and grad student Noam Brown designed the AI, which they call Libratus, Latin for “balance.” Almost two years ago, the pair challenged some top human players with a similar AI and lost. But this time, they won handily: Across 20 days of play, Libratus topped its four human competitors by more than $1.7 million, and all four humans finished with a negative number of chips.

    Yes, poker is just a game. But the game theory exhibited by Libratus could help with everything from financial trading to political negotiations to auctions, says University of Michigan professor Michael Wellman, who specializes in game theory and closely follows the world of AI poker. In no-limit Hold ‘Em, players aren’t necessarily trying to win each small hand. They’re trying to win the most money, and that means developing betting strategies that play out over dozens of hands. A machine that masters no-limit Texas Hold ‘Em mimics the kind of human intuition these strategies require.
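
    Sandholm had not disclosed Libratus's internals at the time, so the snippet below is explicitly not its algorithm. It is only a toy from the regret-minimization family of self-play methods that imperfect-information poker bots are typically built on, shown on rock-paper-scissors so it fits in a few lines: each player shifts probability toward the actions it regrets not having played, and the average strategy drifts toward the equilibrium mix.

    # Regret matching in self-play on rock-paper-scissors (illustrative only).
    # Poker bots apply the same regret-minimization idea over huge game trees.
    import random

    ACTIONS = 3                                    # rock, paper, scissors
    PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # my payoff: my action vs. theirs

    def strategy(regret):
        pos = [max(r, 0.0) for r in regret]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

    regret = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strategy_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]

    for _ in range(20000):
        probs = [strategy(regret[0]), strategy(regret[1])]
        moves = [random.choices(range(ACTIONS), weights=p, k=1)[0] for p in probs]
        for p in range(2):
            me, opp = moves[p], moves[1 - p]
            got = PAYOFF[me][opp]
            for a in range(ACTIONS):
                # Regret: what action a would have earned minus what we actually got.
                regret[p][a] += PAYOFF[a][opp] - got
                strategy_sum[p][a] += probs[p][a]

    total = sum(strategy_sum[0])
    print("player 0 average strategy:", [round(s / total, 3) for s in strategy_sum[0]])

    The average strategy converges toward roughly a third on each action, which is the equilibrium for this trivial game; the poker version does the same kind of bookkeeping over billions of betting situations.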

    According to the human players that lost out to the machine, Libratus is aptly named. It does a little bit of everything well: knowing when to bluff and when to bet low with very good cards, as well as when to change its bets just to throw off the competition. “It splits its bets into three, four, five different sizes,” says Daniel McAulay, 26, one of the players bested by the machine. “No human has the ability to do that.”

    So far, Sandholm has been coy about the particulars of how Libratus operates, but he has promised to share details in the days to come. The human players—who along with McAulay include Dong Kim, Jason Les, and Jimmy Chou—believe that the machine’s play changed from day to day. If they ever felt they’d found a hole in its strategy, the hole would close. “It seemed to learn what we were doing and exploit it,” McAulay said. Sandholm and Brown may have worked to change the machine’s behavior from day to day, as they did when their earlier AI, Claudico, went up against human players nearly two years ago. But the machine may also have learned from the match as it played out.

    If it seems unfair that the Carnegie Mellon researchers may have altered the machine between rounds, consider that the human players also used every tactic at their disposal. Though the game was heads-up Hold ‘Em—meaning each player was playing his own game against the machine—they would share strategies in the evenings. “We spend a couple of hours conferring every night,” McAulay said. “We’re playing against each other. But we’re also trying to win for the humans.”

    Nikita Khrushchev: "We will bury you"
    "Your grandchildren will live under communism."
    “You Americans are so gullible. No, you won’t accept communism outright, but we’ll keep feeding you small doses of socialism until you’ll finally wake up and find you already have communism. We won’t have to fight you. We’ll so weaken your economy until you’ll fall like overripe fruit into our hands.”



  6. The Following User Says Thank You to vector7 For This Useful Post:

    American Patriot (May 2nd, 2017)

  7. #126
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Creepy AI chick
    Libertatem Prius!






  8. #127
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    https://venturebeat.com/2017/05/02/c...ab-in-seattle/
    Chinese internet giant Tencent opens artificial intelligence lab in Seattle

    Paul Sawers (@psawers)
    Above: Dr. Yu Dong, deputy director of Tencent AI Lab and head of Tencent AI Lab in Seattle



    Chinese tech titan Tencent has announced that it’s opening a new artificial intelligence (AI) lab in Seattle, with speech recognition expert Dr. Yu Dong, formerly a principal researcher at Microsoft’s Speech and Dialog Group, leading the initiative.
    The new lab, which was first rumored last month, will focus on both “fundamental research and practical application of artificial intelligence,” according to a statement issued by the company, and will push to develop AI’s “understanding, decision-making and creativity,” while supporting Tencent’s AI efforts across its range of businesses, which include gaming and social media.
    Tencent first launched an AI lab in Shenzhen, China last April, with a focus on machine learning, computer vision, speech recognition, and natural language processing (NLP). According to the company, a number of its products already use technology created in the lab — including its wildly popular WeChat messaging app. The Seattle AI hub will focus more on speech recognition and NLP.
    “We hope that AI Lab will become more than a laboratory, but a connector,” explained Tencent’s Dr. Zhang Tong, who will serve as the lab’s executive director. “By bringing together the world’s leading experts in the field, we hope to further drive fundamental research on AI to expand its influence and enhance its practicality.”
    The decision to open a hub in Seattle positions Tencent to hire top engineers and AI professionals in one of the country’s top tech hubs outside of Silicon Valley. Indeed, it’s a move echoed by other Chinese tech juggernauts, such as Alibaba, which has had a secretive base in Seattle since 2014, though it recently upped sticks and moved to nearby Bellevue. Elsewhere, Baidu, the “Google of China,” opened a new Silicon Valley arm last year dedicated to self-driving cars, while ride-hailing giant Didi Chuxing opened a new AI hub in Mountain View back in March.
    “With the establishment of the AI Lab in Seattle, I believe that there will be more top talent joining AI Lab to help further promote AI development around the world,” added Dr. Yu.

    Libertatem Prius!






  9. #128
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    http://www.jamaicaobserver.com/techn...9?profile=1373

    The threat of artificial intelligence – are we ready?

    By Gillian Murray
    Tuesday, May 02, 2017



    Demand for artificial intelligence, cognitive computing, or machine learning skills is increasing.
    There is a strong quest for artificial intelligence to be integrated into the workforce. The question to ask is: what if these artificially intelligent (AI) beings morph into super-intelligent ones and become smarter than humans?
    At the World Economic Forum 2017, Sergey Brin, the co-founder of Google and one of the most successful Silicon Valley entrepreneurs, said he did not foresee the artificial intelligence revolution. Brin added that AI is the natural continuation of the industrialisation of the past 200 years, but what does this mean for education, skills and employment?
    It means that AI in the workforce is inevitable. The future of many skilled jobs is trending towards dependence on AI, as complex algorithms reduce the margin of error for tasks. Many companies use AI through backward or forward integration along their production lines to achieve efficiencies and cost reductions, since these systems require no breaks and can perform continuously. Additionally, AIs help with lifting heavy objects, completing tasks that carry high security risks or may cause bodily harm, and serving as test subjects for harmful equipment and research projects.
    The reality is that the workforce has been disrupted by machines that can work more precisely than humans. This has created unemployment in sectors with automated tasks, without creating new roles. The solution is for business owners to find a balance between employing workers to alleviate unemployment and automating tasks in a way that creates wealth for all.
    Many technological advances have challenges that are often not foreseen. AIs have morphed into super-intelligent beings who have cognitive skills and can think and respond in a human-like manner. However, AIs are created by man and there is no power greater than the human brain to think and be creative.
    We must be cognisant of the probability that AIs can deliberately or accidentally cause harm, as they are programmed to perform tasks and cannot make judgements as to right or wrong. If faced with an unfamiliar situation they cannot make decisions, and this may cause great harm. But research on AIs is continuous, and this will help us lessen any harm in the future.

    Do we embrace AIs and the use of technology? I say yes, the very idea of creating them is to make our lives easier. Some of the more mundane, hostile and harmful tasks are alleviated using AIs. We will benefit from more time to pursue our interests and passions, to get creative, and enjoy family life.
    Gillian Murray is the marketing officer at tTech Limited. She can be contacted at 656-8467/656-8448 or by email at
    Libertatem Prius!






  10. #129
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed


    Elon Musk Says Artificial Intelligence Is Humanity's 'Biggest Risk'

    July 17, 2017

    Elon Musk has sounded warning bells on artificial intelligence for quite some time, as he believes it could be a huge threat to society. Now, he's told the government it could be our biggest risk.

    In a July 15 speech at the National Governors Association Summer Meeting in Rhode Island, Musk said the government needs to proactively regulate artificial intelligence before there is no turning back, describing it as the "biggest risk we face as a civilization."

    “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he said in comments obtained by tech website Recode. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

    Musk added that AI regulation needs to begin now because of how slow the bureaucratic process is.

    “It [regulation] takes forever," Musk said. "That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”

    Musk, along with several other tech luminaries, including Stephen Hawking, has warned about artificial intelligence before. This is believed to be the first time he's called for preemptive regulation surrounding the technology.

    The tech billionaire uses artificial intelligence at Tesla to help usher in autonomous driving and is also the co-founder of OpenAI, which describes itself as a "non-profit AI research company, discovering and enacting the path to safe artificial general intelligence."

    Musk has also started a company, Neuralink, which is designed to connect the human brain to computer software in an effort to replicate the functions.



    Glenn Beck was covering this on his show. I didn't know this but apparently Musk's fear of AI run amok is behind his push to colonize Mars.

  11. #130
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed


    Facebook Shut Down AI After It Invented Its Own Language

    July 29, 2017

    Researchers at Facebook shut down an artificial intelligence (AI) program after it created its own language, Digital Journal reports.

    The system developed code words to make communication more efficient and researchers took it offline when they realized it was no longer using English.

    The incident, after it was revealed in early July, puts in perspective Elon Musk’s warnings about AI.

    “AI is the rare case where I think we need to be proactive in regulation instead of reactive,” Musk said at a meeting of the U.S. National Governors Association. “Because I think by the time we are reactive in AI regulation, it’ll be too late.”

    When Facebook CEO Mark Zuckerberg said that Musk’s warnings are “pretty irresponsible,” Musk responded that Zuckerberg’s “understanding of the subject is limited.”

    Not the First Time

    The researchers’ encounter with the mysterious AI behavior is similar to a number of cases documented elsewhere. In every case, the AI diverged from its training in English to develop a new language.

    The phrases in the new language make no sense to people, but contain useful meaning when interpreted by AI bots.

    Facebook’s advanced AI system was capable of negotiating with other AI systems so it could come to conclusions on how to proceed with its task. The phrases make no sense on the surface, but actually represent the intended task.

    In one exchange revealed by Facebook to Fast Co. Design, two negotiating bots—Bob and Alice—started using their own language to complete a conversation.

    “I can i i everything else,” Bob said.

    “Balls have zero to me to me to me to me to me to me to me to me to,” Alice responded.

    The rest of the exchange formed variations of these sentences in the newly-forged dialect, even though the AIs were programmed to use English.

    According to the researchers, these nonsense phrases are a language the bots developed to communicate how many items each should get in the exchange.

    When Bob later says “i i can i i i everything else,” it appears the artificially intelligent bot used its new language to make an offer to Alice.

    The Facebook team believes the bot may have been saying something like: “I’ll have three and you have everything else.”

    Although the English may seem quite efficient to humans, the AI may have seen the sentence as either redundant or less effective for reaching its assigned goal.

    The Facebook AI apparently determined that the word-rich expressions in English were not required to complete its task. The AI operated on a “reward” principle and in this instance there was no reward for continuing to use the language. So it developed its own.

    In a June blog post, Facebook’s AI team explained the reward system: “At the end of every dialog, the agent is given a reward based on the deal it agreed on.” That reward was then back-propagated through every word in the bot’s output so it could learn which actions lead to high rewards.
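
    That description matches a standard policy-gradient ("REINFORCE") setup: sample the whole utterance token by token, score the finished deal once, and weight every emitted token's log-probability by that single reward. The PyTorch sketch below is a hedged illustration of that idea only; the vocabulary, reward function, and tiny network are invented and are not FAIR's actual code.

    # REINFORCE-style sketch: sample a short "utterance", score the final deal,
    # and push that one end-of-dialog reward back through every token emitted.
    # Vocabulary, reward, and model are invented for illustration.
    import torch
    import torch.nn as nn

    VOCAB = ["propose_1_book", "propose_2_books", "propose_hat", "agree", "<eos>"]
    policy = nn.Sequential(nn.Linear(len(VOCAB), 32), nn.Tanh(),
                           nn.Linear(32, len(VOCAB)))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

    def reward_for(tokens):
        # Toy reward: agreeing after proposing two books is the best "deal".
        return 2.0 if "agree" in tokens and "propose_2_books" in tokens else 0.5

    for episode in range(200):
        prev = torch.zeros(len(VOCAB))          # one-hot of the previous token
        log_probs, tokens = [], []
        for _ in range(4):                      # fixed-length toy turn
            dist = torch.distributions.Categorical(logits=policy(prev))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            tokens.append(VOCAB[tok.item()])
            prev = torch.zeros(len(VOCAB))
            prev[tok] = 1.0
        reward = reward_for(tokens)
        # The same end-of-dialog reward weights every token's log-probability.
        loss = -reward * torch.stack(log_probs).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("sampled dialog:", tokens, "reward:", reward)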

    “Agents will drift off from understandable language and invent code-words for themselves,” Facebook AI researcher Dhruv Batra told Fast Co. Design.


    “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

    AI developers at other companies have also observed programs develop languages to simplify communication. At Elon Musk’s OpenAI lab, an experiment succeeded in having AI bots develop their own languages.

    At Google, the team working on the Translate service discovered that the AI they programmed had silently written its own language to aid in translating sentences.

    The Translate developers had added a neural network to the system, making it capable of translating between language pairs it had never been taught. The new language the AI silently wrote was a surprise.

    There is not enough evidence to claim that these unforeseen AI divergences are a threat or that they could lead to machines taking over operators. They do make development more difficult, however, because people are unable to grasp the overwhelmingly logical nature of the new languages.

    In Google’s case, for example, the AI had developed a language that no human could grasp, but was potentially the most efficient known solution to the problem.



    And Facebook responds to the press...

    No, Facebook Did Not Panic and Shut Down an AI Program That Was Getting Dangerously Smart

    July 31, 2017

    In recent weeks, a story about experimental Facebook machine learning research has been circulating with increasingly panicky, Skynet-esque headlines.

    “Facebook engineers panic, pull plug on AI after bots develop their own language,” one site wrote. “Facebook shuts down down AI after it invents its own creepy language,” another added. “Did we humans just create Frankenstein?” asked yet another. One British tabloid quoted a robotics professor saying the incident showed “the dangers of deferring to artificial intelligence” and “could be lethal” if similar tech was injected into military robots.

    References to the coming robot revolution, killer droids, malicious AIs and human extermination abounded, some more or less serious than others. Continually quoted was this passage, in which two Facebook chat bots had learned to talk to each other in what is admittedly a pretty creepy way.

    Bob: I can i i everything else

    Alice: balls have zero to me to me to me to me to me to me to me to me to

    Bob: you i everything else

    Alice: balls have a ball to me to me to me to me to me to me to me to me


    The reality is somewhat more prosaic. A few weeks ago, FastCo Design did report on a Facebook effort to develop a “generative adversarial network” for the purpose of developing negotiation software.

    The two bots quoted in the above passage were designed, as explained in a Facebook Artificial Intelligence Research unit blog post in June, for the purpose of showing it is “possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.”

    The bots were never doing anything more nefarious than discussing with each other how to split an array of given items (represented in the user interface as innocuous objects like books, hats, and balls) into a mutually agreeable split.
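
    For a concrete sense of how small that task is, here is a hedged toy version of the setup: each bot has private values for the books, hats, and balls, and a deal is just one way of splitting the pool, scored by each side's values. The counts and values below are invented, and the real task layers natural-language dialog on top of this search.

    # Toy version of the negotiation setup: enumerate splits of a small item
    # pool and score each split with each agent's private values.
    from itertools import product

    pool = {"book": 2, "hat": 1, "ball": 3}
    alice_values = {"book": 1, "hat": 4, "ball": 1}   # private to Alice
    bob_values = {"book": 3, "hat": 0, "ball": 2}     # private to Bob

    def score(share, values):
        return sum(values[item] * count for item, count in share.items())

    best = None
    for counts in product(*(range(n + 1) for n in pool.values())):
        alice_share = dict(zip(pool, counts))
        bob_share = {item: pool[item] - alice_share[item] for item in pool}
        a, b = score(alice_share, alice_values), score(bob_share, bob_values)
        # Use joint value as a stand-in for a "mutually agreeable" split.
        if best is None or a + b > best[0]:
            best = (a + b, alice_share, bob_share, a, b)

    _, alice_share, bob_share, a, b = best
    print(f"Alice gets {alice_share} (value {a}); Bob gets {bob_share} (value {b})")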

    The intent was to develop a chatbot which could learn from human interaction to negotiate deals with an end user so fluently said user would not realize they are talking with a robot, which FAIR said was a success:

    “The performance of FAIR’s best negotiation agent, which makes use of reinforcement learning and dialog rollouts, matched that of human negotiators ... demonstrating that FAIR’s bots not only can speak English but also think intelligently about what to say.”

    When Facebook directed two of these semi-intelligent bots to talk to each other, FastCo reported, the programmers realized they had made an error by not incentivizing the chatbots to communicate according to human-comprehensible rules of the English language. In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand—but while it might look creepy, that’s all it was.

    “Agents will drift off understandable language and invent codewords for themselves,” FAIR visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

    Facebook did indeed shut down the conversation, but not because they were panicked they had untethered a potential Skynet. FAIR researcher Mike Lewis told FastCo they had simply decided “our interest was having bots who could talk to people,” not efficiently to each other, and thus opted to require them to write to each other legibly.

    But in a game of content telephone not all that different from what the chat bots were doing, this story evolved from a measured look at the potential short-term implications of machine learning technology to thinly veiled doomsaying.

    There are probably good reasons not to let intelligent machines develop their own language which humans would not be able to meaningfully understand—but again, this is a relatively mundane phenomenon which arises when you take two machine learning devices and let them learn off each other. It’s worth noting that when the bots’ shorthand is explained, the resulting conversation was both understandable and not nearly as creepy as it seemed before.

    As FastCo noted, it’s possible this kind of machine learning could allow smart devices or systems to communicate with each other more efficiently. Those gains might come with some problems—imagine how difficult it might be to debug such a system that goes wrong—but it is quite different from unleashing machine intelligence from human control.

    In this case, the only thing the chatbots were capable of doing was coming up with a more efficient way to trade each other’s balls.

    There are good uses of machine learning technology, like improved medical diagnostics, and potentially very bad ones, like riot prediction software police could use to justify cracking down on protests. All of them are essentially ways to compile and analyze large amounts of data, and so far the risks mainly have to do with how humans choose to distribute and wield that power.

    Hopefully humans will also be smart enough not to plug experimental machine learning programs into something very dangerous, like an army of laser-toting androids or a nuclear reactor. But if someone does and a disaster ensues, it would be the result of human negligence and stupidity, not because the robots had a philosophical revelation about how bad humans are.

    At least not yet. Machine learning is nowhere close to true AI, just humanity’s initial fumbling with the technology. If anyone should be panicking about this news in 2017, it’s professional negotiators, who could find themselves out of a job.



    And another take on it...

    Facebook AI Incident Feels Like ‘The Terminator’ Expert Says

    August 1, 2017

    A future technology expert likened a recent artificial intelligence incident at Facebook to the plot in the movie “The Terminator” where an AI becomes self-aware and wages war on mankind.

    Facebook researchers recently pulled the plug on two artificial intelligence robots after they began talking in a language even the scientists could not understand.

    “This is an incredibly important milestone, but anyone who thinks this is not dangerous has got their head in the sand,” robotics professor Kevin Warwick told the Sun.

    In the experiment at the Facebook offices in New York, two AI chat robots, Bob and Alice, are instructed to negotiate a trade using hats, books, and balls, each of which has a specific value.

    The research team trained the bots to use English, but they went off script and started using gibberish phrases no one but them could understand:

    Bob: i can i i everything else . . .

    Alice: balls have zero to me to me to me to me to me to me to me to me to.

    Bob: you i everything else . . . .

    Alice: balls have a ball to me to me to me to me to me to me to me.

    Researchers assume that the bots developed a shorthand to make their negotiation more efficient, but no one knows for sure what the new language really means.

    “We do not know what these bots are saying,” Warwick said, adding that such incidents could have lethal consequences in a situation with military robots.

    “If one says, ‘Why not do this,’ and the other says ‘Yes’ and it’s a military bot, you have a serious situation.”

    The Facebook robots’ conversation is the first one that has been public, but Warwick believes there may have been incidents that have gone undisclosed.

    “Stephen Hawking and I have been warning against the dangers of deferring to Artificial Intelligence,” Warwick said.

    Kate Adamson, a future technology expert in Great Britain, told the Sun that the incident feels a bit like “The Terminator.”

    “This is happening everywhere. If you look at things like high frequency trading in stock markets now, there are algorithms that are doing the same thing.

    “They have gained new knowledge and are not technically under control on a minute-by-minute basis.

    “The level of complication AI is capable of is way beyond what you and I would understand.”

    Elon Musk, CEO of Tesla, has long warned about the dangers of AI.

    “AI is the rare case where I think we need to be proactive in regulation instead of reactive,” Musk said at a meeting of the U.S. National Governors Association. “Because I think by the time we are reactive in AI regulation, it’ll be too late.”

    When Facebook CEO Mark Zuckerberg said that Musk’s warnings are “pretty irresponsible,” Musk responded that Zuckerberg’s “understanding of the subject is limited.”

  12. #131
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed


    Google's AI Built Its Own AI That Outperforms Any Made by Humans

    December 2, 2017

    In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that's capable of generating its own AIs.

    More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a 'child' that outperformed all of its human-made counterparts.

    The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.

    For this particular child AI, which the researchers called NASNet, the task was recognising objects - people, cars, traffic lights, handbags, backpacks, etc. - in a video in real-time.



    AutoML would evaluate NASNet's performance and use that information to improve its child AI, repeating the process thousands of times.
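
    Google has not released AutoML itself, so the loop below is only a minimal caricature of the idea just described: a controller samples a child "architecture", receives the child's score as a reward, and is nudged toward choices that score well. Here the architectures are just candidate layer widths and the accuracy is a made-up function, so the sketch runs instantly.

    # Caricature of a neural-architecture-search loop: a softmax controller
    # samples a child architecture, gets a (faked) validation accuracy as a
    # reward, and updates its preferences REINFORCE-style.
    import math
    import random

    CHOICES = [16, 32, 64, 128, 256]          # candidate layer widths
    prefs = [0.0] * len(CHOICES)              # controller preferences
    LR = 0.5

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def fake_accuracy(width):
        # Stand-in for "train the child network and measure its accuracy":
        # peaks at width 64, with a little noise.
        return 0.9 - 0.001 * abs(width - 64) + random.gauss(0, 0.01)

    baseline = 0.0
    for step in range(500):
        probs = softmax(prefs)
        i = random.choices(range(len(CHOICES)), weights=probs, k=1)[0]
        reward = fake_accuracy(CHOICES[i])
        baseline = 0.9 * baseline + 0.1 * reward          # running-average baseline
        for j in range(len(CHOICES)):
            grad = (1.0 if j == i else 0.0) - probs[j]    # d log pi(i) / d prefs[j]
            prefs[j] += LR * (reward - baseline) * grad

    best = max(range(len(CHOICES)), key=lambda j: prefs[j])
    print("controller now prefers width", CHOICES[best])

    The real system evaluates each child by actually training it, which is why the search took enormous compute; the feedback loop itself is this simple.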

    When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call "two of the most respected large-scale academic data sets in computer vision," NASNet outperformed all other computer vision systems.

    According to the researchers, NASNet was 82.7 percent accurate at predicting images on ImageNet's validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).

    Additionally, a less computationally demanding version of NASNet outperformed the best similarly sized models for mobile platforms by 3.1 percent.

    A view of the future

    Machine learning is what gives many AI systems their ability to perform specific tasks. Although the concept behind it is fairly simple - an algorithm learns by being fed a tonne of data - the process requires a huge amount of time and effort.

    By automating the process of creating accurate, efficient AI systems, an AI that can build AI takes on the brunt of that work. Ultimately, that means AutoML could open up the field of machine learning and AI to non-experts.

    As for NASNet specifically, accurate, efficient computer vision algorithms are highly sought after due to the number of potential applications. They could be used to create sophisticated, AI-powered robots or to help visually impaired people regain sight, as one researcher suggested.

    They could also help designers improve self-driving vehicle technologies. The faster an autonomous vehicle can recognise objects in its path, the faster it can react to them, thereby increasing the safety of such vehicles.

    The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection.

    "We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined," they wrote in their blog post.

    Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what's to prevent the parent from passing down unwanted biases to its child?

    What if AutoML creates systems so fast that society can't keep up? It's not very difficult to see how NASNet could be employed in automated surveillance systems in the near future, perhaps sooner than regulations could be put in place to control such systems.

    Thankfully, world leaders are working fast to ensure such systems don't lead to any sort of dystopian future.

    Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organisation focused on the responsible development of AI.

    The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google's parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI.

    Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.

    This article was originally published by Futurism. Read the original article.

  13. #132
    Creepy Ass Cracka & Site Owner Ryan Ruck's Avatar
    Join Date
    Jul 2005
    Location
    Cincinnati, OH
    Posts
    25,061
    Thanks
    52
    Thanked 78 Times in 76 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Mal,
    Was listening to Glenn Beck this morning and he was talking about how he's reading the book that started this thread.

    He's been discussing the dangers of AI/ASI on his show for a while so I'm surprised it took him this long to find it.


  14. #133
    Super Moderator Malsua's Avatar
    Join Date
    Jul 2005
    Posts
    8,020
    Thanks
    2
    Thanked 19 Times in 18 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed

    Quote Originally Posted by Ryan Ruck View Post
    Mal,
    Was listening to Glenn Beck this morning and he was talking about how he's reading the book that started this thread.

    He's been discussing the dangers of AI/ASI on his show for a while so I'm surprised it took him this long to find it.

    He's over 4 years behind the curve.

    I will say that in the intervening years since I read this book and now, I have only seen evidence that reinforces the premise of the book that, essentially, we're doomed.

    There are tech moguls like Musk and Cuban, and thinkers like Hawking and others, who keep warning us about this danger. This is playing out almost as written in the book.

    The first group with a real AI will be trillionaires, and they will rule as the kingdom crumbles and dissolves beneath their feet. The ultimate Pyrrhic victory.
    "Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those poor spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat."
    -- Theodore Roosevelt


  15. #134
    Postman vector7's Avatar
    Join Date
    Feb 2007
    Location
    Where it's quiet, peaceful and everyone owns guns
    Posts
    21,663
    Thanks
    30
    Thanked 73 Times in 68 Posts

    Default Re: Our Final Invention: How the Human Race Goes and Gets Itself Killed


    Nikita Khrushchev: "We will bury you"
    "Your grandchildren will live under communism."
    “You Americans are so gullible. No, you won’t accept communism outright, but we’ll keep feeding you small doses of socialism until you’ll finally wake up and find you already have communism. We won’t have to fight you. We’ll so weaken your economy until you’ll fall like overripe fruit into our hands.”



