Human takeover by machines may be closer than we think


By Tia Ghose
LiveScience

Are you prepared to meet your robot overlords?

The idea of superintelligent machines may sound like the plot of “The Terminator” or “The Matrix,” but many experts say the idea isn’t far-fetched. Some even think the singularity — the point at which artificial intelligence can match, and then overtake, human smarts — might happen in just 16 years.

But nearly every computer scientist will have a different prediction for when and how the singularity will happen.

Some believe in a Utopian future, in which humans can transcend their physical limitations with the aid of machines. But others think humans will eventually relinquish most of their abilities and gradually become absorbed into artificial intelligence (AI)-based organisms, much like the energy-making machinery in our own cells.

Singularity near?
In a 2005 book published by Viking, futurist Ray Kurzweil predicted that computers will be as smart as humans by 2029, and that by 2045, "computers will be billions of times more powerful than unaided human intelligence," Kurzweil wrote in an email to LiveScience.

“My estimates have not changed, but the consensus view of AI scientists has been changing to be much closer to my view,” Kurzweil wrote.

Bill Hibbard, a computer scientist, doesn't make quite as bold a prediction, but he's nevertheless confident AI will reach human-level intelligence sometime in the 21st century.

“Even if my most pessimistic guess is true, it means it’s going to happen during the lifetime of people who are already born,” Hibbard said.

But other AI researchers are skeptical.

"I don't see any sign that we're close to a singularity," said Ernest Davis, a computer scientist.

While AI can trounce the best chess or Jeopardy player and do other specialized tasks, it’s still light-years behind the average 7-year-old in terms of common sense, vision, language and intuition about how the physical world works, Davis said.

For instance, because of that physical intuition, humans can watch a person overturn a cup of coffee and just know that the end result will be a puddle on the floor. A computer program, on the other hand, would have to run a laborious simulation, and know the exact size of the cup, the height of the cup above the floor and various other parameters, to predict the outcome, Davis said.
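To see how parameter-hungry that kind of simulation gets, here is a deliberately crude sketch. Every number and formula below is a made-up stand-in for illustration, not anything from the article: the point is simply that the program needs each quantity spelled out before it can say anything about the puddle.

```python
# Toy illustration of Davis's point: where a person just "knows" the
# spilled coffee ends up on the floor, a program needs every parameter
# made explicit before it can compute even a rough outcome.
import math

G = 9.81  # gravitational acceleration, m/s^2

def spill_outcome(cup_volume_ml, cup_height_m, table_height_m):
    """Crude estimate: how long the coffee falls, and how big a puddle it makes."""
    drop = table_height_m + cup_height_m          # liquid starts at the cup's rim
    fall_time = math.sqrt(2 * drop / G)           # free-fall time to the floor
    puddle_area_m2 = (cup_volume_ml * 1e-6) / 0.002  # assume a 2 mm deep puddle
    return fall_time, puddle_area_m2

# Even this cartoon version demands three explicit measurements.
t, area = spill_outcome(cup_volume_ml=250, cup_height_m=0.1, table_height_m=0.75)
print(f"falls for {t:.2f} s, puddle of {area:.3f} m^2")
```

A 7-year-old, of course, supplies none of these numbers and still predicts the puddle.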

Infinite abilities
Once the singularity occurs, people won't necessarily die (they can simply upgrade with cybernetic parts), and they could do just about anything they wanted to — provided it were physically possible and didn't require too much energy, Hibbard said.

How Skynet Might Emerge From Simple Physics

A provocative new paper proposes that complex intelligent behavior may emerge from a fundamentally simple physical process. The theory offers novel prescriptions for how to build an AI — but it also explains how a world-dominating superintelligence might come about. We spoke to the lead author to learn more.

In the paper, which now appears in Physical Review Letters, physicist and computer scientist Dr. Alex Wissner-Gross posits a Maximum Causal Entropy Production Principle — a conjecture that intelligent behavior in general spontaneously emerges from an agent's effort to ensure its freedom of action in the future. According to this theory, intelligent systems move toward those configurations that maximize their ability to respond and adapt to future changes.

Causal Entropic Forces

It’s an idea that was partially inspired by Raphael Bousso’s Causal Entropic Principle, which suggests that universes which produce a lot of entropy over the course of their lifetimes (i.e., a gradual decline into disorder) tend to have properties, such as the cosmological constant, that are more compatible with the existence of intelligent life as we know it.

“I found Bousso’s results, among others, very suggestive since they hinted that perhaps there was some deeper, more fundamental, relationship between entropy production and intelligence,” Wissner-Gross told io9.

The reason that entropy production over the lifetime of the universe seems to correlate with intelligence, he says, may be because intelligence actually emerges directly from a form of entropy production over shorter time spans.

"So the big picture — and the connection with the Anthropic Principle — is that the universe may actually be hinting to us as to how to build intelligences by telling us, through the tunings of various cosmological parameters, what the physical phenomenology of intelligence is," he says.

To test this theory, Wissner-Gross, along with his colleague Cameron Freer, created a software engine called Entropica. The software allowed them to simulate a variety of model universes and then apply an artificial pressure to those universes to maximize causal entropy production.

“We call this pressure a Causal Entropic Force — a drive for the system to make as many futures accessible as possible,” he told us. “And what we found was, based on this simple physical process, that we were actually able to successfully reproduce standard intelligence tests and other cognitive behaviors, all without assigning any explicit goals.”

For example, Entropica was able to pass multiple animal intelligence tests, play human games and even earn money trading stocks. Entropica also spontaneously figured out how to display other complex behaviors, like upright balancing, tool use and social cooperation.
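The drive "to make as many futures accessible as possible" can be made concrete with a small sketch. What follows is a toy illustration under our own assumptions, not the authors' Entropica engine, and the function names are our own: an agent on a bounded one-dimensional track always picks the move that leaves the largest number of positions reachable over a short horizon, a crude stand-in for maximizing causal path entropy.

```python
# Toy causal-entropic agent: with no explicit goal, it favors moves that
# keep the most future positions open, so it drifts away from the walls.

def reachable(pos, horizon, lo=0, hi=10):
    """Set of positions reachable from `pos` within `horizon` moves of -1, 0 or +1."""
    frontier = {pos}
    for _ in range(horizon):
        frontier = {min(max(p + d, lo), hi) for p in frontier for d in (-1, 0, 1)}
    return frontier

def entropic_step(pos, horizon=4, lo=0, hi=10):
    """Choose the move whose resulting position keeps the most futures accessible."""
    clamp = lambda p: min(max(p, lo), hi)
    return max([-1, 0, 1], key=lambda d: len(reachable(clamp(pos + d), horizon, lo, hi)))

# Starting pressed against a wall, the agent walks toward the middle of the
# track, where its freedom of action is greatest, and then stays put.
pos = 0
for _ in range(8):
    pos += entropic_step(pos)
print(pos)
```

Nothing here rewards reaching the center; the centering behavior falls out of the "keep futures open" pressure alone, which is the shape of the claim the paper makes at far greater generality.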

DARPA Begins Building Skynet And Its Robots With 'Real' Brains
The next frontier for the robotics industry is to build machines that think like humans. Scientists have pursued that elusive goal for decades, and they believe they are now just inches from the finish line. A Pentagon-funded team of researchers has constructed a tiny machine that would allow robots to act independently.

Unlike traditional artificial intelligence systems that rely on conventional computer programming, this one "looks and 'thinks' like a human brain," said James K. Gimzewski, a professor of chemistry. Gimzewski is a member of the team that has been working under the sponsorship of the Defense Advanced Research Projects Agency on a program called "physical intelligence."

This technology could be the secret to making robots that are truly autonomous, Gimzewski said during a conference call hosted by Technolink, an industry group. The project does not use standard robot hardware with integrated circuitry, he said. The device that his team constructed is capable, without being programmed like a traditional robot, of performing actions similar to those of humans, Gimzewski said.

Next Generation Biometrics To Use Brain Waves
There are many solutions coming to market that aim to make secure Web and computer login easy. Biometrics have been considered and deployed, but now an approach right out of science fiction is emerging that would authenticate via brain waves. Students and a professor at the University of California, Berkeley, School of Information are working on a system that would have a user wear a headset equipped with electroencephalogram (EEG) sensors to measure brain wave activity.

Using brain waves for identification is not a new idea, but the technology used to read those brain waves is new, according to a release from the UC Berkeley School of Information. "Traditional clinical EEGs typically employ dense arrays of electrodes to record 32, 64, 128, or 256 channels of EEG data. But new consumer-grade headsets use just a single dry-contact sensor resting against the user's forehead, providing a single-channel EEG signal from the brain's left frontal lobe," the release states.
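One simple way a single-channel signal like that could back a login check, sketched here under our own assumptions rather than as the Berkeley team's actual method, is template matching: compare a fresh recording against the user's enrolled trace and accept only when the two correlate strongly.

```python
# Hedged sketch of brain-wave authentication: accept a login when a fresh
# single-channel sample correlates strongly with the enrolled template.
import math

def normalized_correlation(a, b):
    """Pearson correlation of two equal-length signals, in [-1, 1]."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def authenticate(sample, template, threshold=0.8):
    """Accept when the fresh sample matches the enrolled template closely enough."""
    return normalized_correlation(sample, template) >= threshold

# Synthetic stand-ins for EEG traces: the enrolled user's template, a noisy
# re-recording of the same signal, and an impostor's unrelated signal.
template = [math.sin(0.3 * t) for t in range(100)]
genuine  = [math.sin(0.3 * t) + 0.1 * math.sin(1.7 * t) for t in range(100)]
impostor = [math.sin(0.9 * t) for t in range(100)]

print(authenticate(genuine, template))   # True
print(authenticate(impostor, template))  # False
```

A real system would need far more care, since brain signals drift between sessions, but the sketch shows why a distinctive, repeatable signature is the whole game.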

Are We Paying Enough Attention To Information Technology’s Dark Side?
For centuries, the threat and selective use of brute force has steered the international balance of power. In the last couple of decades, the system has increasingly accommodated economic power as a means of non-violent leverage between states. Now, says Marc Goodman, we must add technology into the mix. Technological power is not new, of course, but information technology's exponential pace and declining cost is changing how the global game is played and who the players are. Control of technology is passing from the richest states and governments to smaller groups and individuals, and the results are both inspiring and terrifying. As Goodman says, "The ability of one to affect many is scaling exponentially—and it's scaling for good and it's scaling for evil."

Meet Your Future Enemy: Pentagon Developing Humanoid Terminator Robots That Will Soon Carry Weapons 
Have no illusions about where this is headed: the Pentagon wants to develop and deploy a robotic army of autonomous soldiers that will kill without hesitation. It's only a matter of time before these robots are armed with rifles, grenade launchers and more. Their target acquisition systems can be a hybrid combination of both thermal and night vision technologies, allowing them to see humans at night and even detect heat signatures through building walls. This is the army humanity is eventually going to face. You'd all better start getting familiar with the anatomy of humanoid robots so that you know where to shoot them for maximum incapacitation effect. You'd also better start learning how to sew thermal blankets into clothing, hoodies and scarves in order to fool thermal imaging systems.