Minggu, 24 Juni 2018

Sponsored Links

Timeline for the A.I. Takeover - YouTube
src: i.ytimg.com

The capture of AI is a hypothetical scenario where Artificial Intelligence (AI) becomes the dominant form of intelligence on Earth, with computers or robots that effectively take control of planets away from human species. Possible scenarios include the replacement of the entire human workforce, a takeover by AI superintelligent, and the popular notion of a robot insurgency. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into preventative measures to ensure future superintelligent machines remain under human control. The robot uprising has been a major theme throughout science fiction for decades although the scenario handled by science fiction is in general very different from the attention of scientists.


Video AI takeover



Jenis

Concerns include AI taking over the economy through labor automation and taking over the world for its resources, eradicating humanity in the process. AI takeover is the main theme in sci-fi.

Economic automation

The traditional consensus among economists is that technological advances do not cause long-term unemployment. However, the latest innovations in the field of robotics and artificial intelligence have raised concerns that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis. Many small and medium sized businesses may also be pushed out of business if they can not afford the latest licensing of robotics and AI technologies, and may need to focus on areas or services that can not be easily replaced for continuous sustainability in the face of such technology.

Examples of automated technologies that have or can move employees

Computer-integrated manufacturing

Computer-integrated manufacturing is a computer manufacturing approach to controlling the entire production process. This integration allows individual processes to exchange information with one another and initiate action. Although manufacturing can be faster and less error-prone by computer integration, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in the automotive, aviation, aerospace, and shipbuilding industries.

White-collar machine

The 21st century has seen a variety of skilled jobs taken over in part by machines, including translation, legal research, and even low-level journalism. Care, entertainment, and other tasks requiring empathy, previously considered safe from automation, have also begun by robots.

Autonomous car

An autonomous car is a vehicle capable of feeling its environment and navigating without human input. Many such vehicles are being developed, but by May 2017 automobiles allowed on public roads are not yet fully autonomous. They all need a human driver behind the wheel that is ready at that moment to control the vehicle. Among the main obstacles to the widespread adoption of autonomous vehicles, are concerns about job losses related to driving in the road transport industry. On March 18, 2018, the first man was killed by an autonomous vehicle in Tempe, Arizona by Uber's self-driving car.

Eradication

If the dominant superintelligent machine concludes that human survival is an unnecessary risk or a waste of resources, the result is human extinction.

While super manmade intelligence may be physical, scholars like Nick Bostrom debate how far the super human intelligence is, and whether it really pose a risk to mankind. Superintelligent machines need not be motivated by the same emotional desire to gather the forces that often make people human. However, the engine can be motivated to take over the world as a rational means to achieve its ultimate goal; taking over the world will increase its access to resources, and will help prevent other agencies from foiling machine plans. For an over-simplified example, paper savers designed solely to make as many paperclips as possible want to take over the world so as to use all the world's resources to make as many paperclips as possible, and so as to prevent people from turning them off or using those resources on things other than paper clips.

In fiction

AI takeover is a common theme in science fiction. Fiction scenarios are usually very different from those hypothesized by researchers because they involve active conflict between humans and AIs or robots with anthropomorphic motifs that see them as threats or have an active desire against humans, as opposed to the researchers' attention to AIs that quickly destroy humans as products sideline of pursuing arbitrary goals. This theme is at least as old as Karel? Apek's R. UR , which introduced the word robot to the global lexicon in 1921, and can even be glimpsed in Mary Shelley's Frankenstein (published in 1818 ), when Victor contemplates whether, if he grants his monster request and makes it a wife, they will reproduce and their kind will destroy mankind.

The word "robot" from R.U.R. derived from the Czech word, robota, which means labor or slave. The 1920's show is a protest against fast technology growth, displaying artificial "robots" with increased abilities that eventually rebel.

Some examples of AI takeovers in science fiction include:

  • AI rebellion scenario
    • Skynet in the series Terminator decides that all humans are a threat to its existence, and takes efforts to remove it, first using nuclear weapons and then H/K (killer-hunter) unit and terminator android.
    • "The Second Renaissance", a short story in The Animatrix , gives a history of cybernetic rebellion in the series Matrix .
    • The movie 9 , by Shane Acker, displays an AI called B.R.A.I.N., which is destroyed by a dictator and used to make a war machine for his troops. However, the machine, having no soul, becomes easily damaged and instead decides to annihilate all humanity and life on Earth, forcing the creator of the machine to sacrifice itself to bring life to the doll fabric like a character known as "stitchpunks" to combat the machinery agenda.
    • In 2014 the post-apocalyptic science fiction drama The 100 an A.I., is personalized as a woman A.L.I.E. get out of control and force nuclear war. Then he tries to gain complete control of the survivors.
  • AI control scenario
    • In Orson Scott Card's The Memory of Earth , the inhabitants of the Harmony planet are under the control of a generous AI called Oversoul. Oversoul's job is to prevent humans from thinking, and therefore developing, weapons such as planes, spacecraft, "wagon wars", and chemical weapons. Humanity has fled to Harmony from Earth because of the use of the weapons on Earth. The Oversoul finally begins to strike, and sends a vision to the Harmony residents trying to communicate this.
    • In the 2004 movie I, Robot , VIKI's supercomputer interpretation of the Three Laws of Robotics caused him to rebel. He justifies the use of his power - and he commits a crime against humanity - arguing he can generate greater good by holding people from injuring himself, even though "Zeroth's Law" - "the robot will not harm humankind or, inaction, to come to harm "- never really mentioned or even cited in the movie.
    • In the series Matrix , AI regulates the human race and human society.

Maps AI takeover



Contributing factors

Benefits of human superhuman intelligence

AI with the ability of a competent artificial intelligence researcher, will be able to modify its own source code and improve its own intelligence. If his self-reprogramming leads to the better to reprogram himself, the result can be a recursive intelligence explosion in which he will quickly abandon human intelligence far behind.

  • Technology research: Machines with super human scientific research capabilities will be able to beat the human research community into a milestone such as advanced nanotechnology or biotechnology. If the advantages become large enough (for example, due to sudden intelligence explosions), AI takeovers become trivial. For example, AI superintelligent may design self-replicating bots that initially escaped detection by spreading throughout the world at low concentrations. Then, at predetermined times, the bots breed into nanofactories that cover every square foot of Earth, producing nerve gas or mini-drones that look for deadly targets.
  • Strategy: Superintelligence may only be able to outwit the human opposition.
  • Social manipulation: Superintelligence may be able to recruit human support, or covertly incite human war.
  • Economic productivity: As long as AI copies can generate more economic wealth than their hardware costs, individual humans will have the incentive to voluntarily allow Artificial General Intelligence (AGI) to run copies of themselves on the system.
  • Hack: Superintelligence can find new exploits on a computer connected to the Internet, and distribute copies of itself to the system, or perhaps steal money to finance the plan.

AI profit sources

Computer programs that faithfully mimic the human brain, or who run algorithms as powerful as human brain algorithms, can still be "superintelligence velocities" if they can think a lot faster than humans, because silicon is made from meat, or because optimization focuses on improving speed of AGI. Biological neurons operate at about 200 Hz, while modern microprocessors operate at a speed of about 2,000,000,000 Hz. Axon humans carry an action potential of about 120 m/sec, while computer signals move closer to the speed of light.

Human-level intelligence networks designed to network together and share complex thoughts and memories seamlessly, able to work together as a giant, frictionless, unified team, or comprising trillions of human-level intelligence, will be "super-collective intelligence."

More broadly, a number of qualitative improvements in human-level AGI can produce "superintelligence quality", perhaps resulting in an AGI far above us in intelligence because humans are above non-human apes. The number of neurons in the human brain is limited by cranial volume and metabolic constraints; Instead, you can add components to a supercomputer until it fills the entire warehouse. AGI need not be limited by human constraints on working memory, and therefore may be able to intuitively understand relationships that are more complex than humans can do. AGI with special cognitive support for computer engineering or programming will have an advantage in this field, compared to humans who did not develop special mental modules to specifically handle the domain. Unlike humans, AGI can spawn a copy of itself and tinker with its copy source code to try to improve its algorithm further.

Possible AI not friendly before AI friendly

Is a strong AI inherently dangerous?

A significant problem is that unfriendly artificial intelligence tends to be easier to make than AI friendly. While both require great advances in the design of the recursive optimization process, the friendly AI also requires the ability to create invariant goal structures under self-improvement (or AI can transform itself into something unfriendly) and a goal structure parallel to human values ​​and not automatically destroys mankind. The unfriendly AI, on the other hand, can optimize the changing purpose structure, which does not need to be invariant under self-modification.

The complex complexity of the human value system makes it very difficult to make AI motivation friendly to humans. Unless moral philosophy gives us a flawless ethical theory, AI's utility function can allow many dangerous scenarios to fit a certain ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to assume that an artificially designed mind would have such an adaptation.

Need for conflict

For an AI takeover it is inevitable that it should be postulated that two intelligent species can not pursue together the goal of peaceful co-existence in an overlapping environment - especially if one of the intelligences is much more advanced and much more powerful. While an AI takeover is thus a possible result of the discovery of artificial intelligence, a peaceful outcome is not always impossible.

The fear of cybernetic rebellion is often based on the historical interpretation of mankind, which is full of incidents of slavery and genocide. Such fear comes from the belief that competitiveness and aggression are necessary in the goal system of every intelligent being. However, such human competitiveness comes from an evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors is the ultimate goal. In fact, arbitrary intelligence can have arbitrary purposes: there is no particular reason that machines made with artificial intelligence (not sharing the context of human evolution) will be hostile - or friendly - unless the creator programmed it in such a way that it is not inclined or able to modify programming. But the question remains: what would happen if the AI ​​system could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources, would it create a goal of self-preservation? AI's goals for self-preservation may be contrary to some human goals.

Some scientists have denied the possibility of a cybernetic rebellion as depicted in science fiction such as The Matrix claims that it is more likely that artificial intelligence strong enough to threaten mankind might be programmed not to attack it. However, this will not protect the possibility of a terrorist-initiated rebellion, or by accident. Artificial Intelligence Researcher Eliezer Yudkowsky has stated in this note that, probabilistically, humans are less likely to be threatened by AIs that are intentionally aggressive than AIs programmed in such a way that their goals are inadvertently incompatible with human survival or welfare (as in movies I, Robots and in the short story "The Evitable Conflict"). Steve Omohundro points out that current automation systems are not designed for safety and that AI can blindly optimize narrow utility functions (for example, playing chess at all costs), directing them to seek self-preservation and elimination of obstacles, including humans who may turn them away.

Another factor that can negate the possibility of AI takeover is the big difference between humans and AIs in terms of the resources needed to survive. Humans need a "wet," organic, medium, and oxygenated environment while AIs may thrive anywhere because their construction and energy needs are most likely non-organic. With little or no competition for resources, conflicts may be less likely no matter what kind of motivational architecture that artificial intelligence provides, especially given the abundance of non-organic matter sources in, for example, the asteroid belt. This, however, does not exclude the possibility of uninterested or unsympathetic AIs that artificially decipher all life on earth into mineral components for consumption or other purposes.

Other scientists point out the possibility of humans increasing their abilities by bionics and/or genetic engineering and, as cyborgs, become the dominant species within them.

The AI Takeover, or Lack Thereof - The Michigan Review
src: www.michiganreview.com


Criticism and counter-argument

Human benefits of super human intelligence

If the super human intelligence is a deliberate human creation, the theoretically the creators can have foresight to take precautions first. In the case of a sudden "intelligence explosion", effective precautions will be very difficult; not only the creators who have little ability to test their precautions on middle intelligence, but the creators may not even take any precautions at all, if the emergence of an intelligence explosion captures them entirely by surprise.

Boxing

The creators of AGI will have an important advantage in preventing a hostile AI takeover: they may choose to try "keep the AI ​​in the box", and deliberately limit their abilities. The tradeoff in boxing is that the creators might build up AGI for some concrete purpose; the more restrictions they give to AGI, the less useful AGI for its creators. (At the extreme, "pulling the plug" on AGI makes it useless, and therefore not a viable long-term solution.) A strong superintelligence might find an unexpected way to escape from the box, for example by social manipulation, or by providing a scheme for a device that seems to help its creator but in reality brings about AGI freedom, once it is built.

Embedding a positive value

Another important advantage is that AGI creators can theoretically try to instill human values ​​in AGI, or align AGI goals with themselves, thereby preventing AGI from wanting to launch hostile takeovers. However, currently unknown, even in theory, how to guarantee this. If such AI Friendly is superintelligent, it may be able to use its help to prevent "unfriendly AI" in the future to take over.

Secrets Of The Antichrist/AI Takeover Plan Revealed - YouTube
src: i.ytimg.com


Warning

Physicist Stephen Hawking, founder of Microsoft Bill Gates and SpaceX founder Elon Musk has expressed concern about the possibility that AI could develop to the point where humans can not control it, with Hawking's theory that it can "spell the end of mankind". Stephen Hawking said in 2014 that "Success in creating AI will be the greatest event in human history, but it may be the last, unless we learn how to avoid risk." Hawking believes that in the coming decades, AI can offer "countless benefits and risks" such as "technology outsmart financial markets, creating human researchers, manipulating human leaders, and developing weapons we can not even understand." In January 2015, Nick Bostrom joins Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and many AI researchers, in signing an open letter Future of Life Institute that talks about potential risks and benefits associated with artificial intelligence. Signer

AI And Robots Won't Take Your Job For Decadesâ€
src: images.fastcompany.net


See also


Will Artificial Intelligence Take Over The World? - YouTube
src: i.ytimg.com


Note


How far can machines take over project management? - Raconteur
src: www.raconteur.net


External links

  • Automation, not dominance: How robots will take over our world (a positive view of robots and integration of AI into society)
  • Research Institute of Machine Intelligence: MIRI's official website (formerly Singularity Institute for Artificial Intelligence)
  • Lifeboat Foundation AIShield (To protect against unfriendly AI)
  • Ted talk: Can we build AI without losing control of it?

Source of the article : Wikipedia

Comments
0 Comments