Theoretical physicist Stephen Hawking, arguably one of the smartest people in history, warned in an interview with the BBC that “the development of full artificial intelligence (AI) could spell the end of the human race”.
Hawking went on to say, speaking at the Web Summit technology conference in Lisbon, Portugal, "AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. This could significantly disrupt our economy."
In 2015, dozens of brainiac scientists and tech experts, including famous physicists like Hawking and Elon Musk, signed a letter warning that while AI could be used for great good, it could also have potentially devastating, dangerous and unintended uses.
Then, in 2018, thousands of scientists and AI leaders, seeing where governments were heading with the military use of AI, signed a pledge stating, "we will not participate in or support the development, manufacture, trade or use of lethal autonomous weapons."
The Future of Life Institute's pledge on lethal autonomous weapons stated that "lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and every individual." Thousands of AI researchers agree that "by removing the risk, accountability and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to monitoring and data systems."
In other words, just say “No” to killer robots.
Other equally dangerous uses of AI exist.
Although it does not use bullet or death ray technology, AI is also used for information creation and dissemination, and potentially, propaganda and disinformation.
Did you know that newspaper articles and columns are written by robots?
It’s true: robots really do write newspaper articles and columns.
Robot-writing technology has been around for a while.
In 2015, Digiday.com reported that the Washington Post’s robot reporter, named “Heliograf”, had written hundreds of published articles covering everything from the Olympics and political elections to football games.
The site went on to describe how the Associated Press used AI bots for its earnings coverage, and USA Today used the technology to create videos, all under the harmless-sounding banner of “automated journalism.”
In 2019, Jaclyn Peiser of The New York Times reported, “About one-third of content published by Bloomberg News uses some form of automated technology. The company’s system, Cyborg, is capable of helping journalists produce thousands of articles on company earnings reports each quarter.”
She also reported that the Australian version of The Guardian had published its first “machine-assisted” article, a story about political donations, and noted the Associated Press’ use of “Automated Insights”, a company that produces billions of robotic stories every year.
Also known as “algorithmic journalism” or “robotic journalism,” automated journalism refers to computers that gather information from a variety of sources and, using specialized programming with fake (ahem, sorry) “artificial” intelligence, attempt to piece together that information in a way that will trick human beings into believing other human beings actually wrote the newspaper article or magazine column.
The results vary, from frankly laughable to downright convincing.
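At its simplest, this kind of system fills a prewritten template with structured data. Here is a toy sketch of that approach; the template, company name, and numbers are all made up for illustration, and real systems like Cyborg or Heliograf are far more sophisticated:

```python
# Toy sketch of template-based "automated journalism": a prewritten
# sentence pattern is filled in with structured data, no human writer
# involved. (Hypothetical template and figures, for illustration only.)

TEMPLATE = (
    "{company} reported quarterly earnings of ${eps:.2f} per share, "
    "{direction} analysts' estimate of ${estimate:.2f}. Revenue came "
    "in at ${revenue:.1f} billion."
)

def write_earnings_story(company, eps, estimate, revenue):
    """Turn a row of earnings data into a short news blurb."""
    direction = "beating" if eps > estimate else "missing"
    return TEMPLATE.format(company=company, eps=eps, estimate=estimate,
                           direction=direction, revenue=revenue)

story = write_earnings_story("Acme Corp", eps=1.42, estimate=1.30, revenue=9.8)
print(story)
```

Feed it a quarter's worth of earnings data and you get thousands of such blurbs, which is exactly the scale the wire services are after.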
How can you tell something was written by a robot?
Well, the good folks at the MIT-IBM Watson AI Lab and Harvard University have built a tool, called GLTR, that guesses whether something was written by a robot or a human.
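The idea behind GLTR is that machine-generated text leans heavily on highly predictable words: GLTR ranks each word against what a large language model (GPT-2) would have predicted next. As a very rough, purely illustrative stand-in for that model, the sketch below scores text by how much of it comes from a tiny list of the most common English words:

```python
# Crude sketch of the intuition behind GLTR: robot-written text tends
# to be built from highly predictable words. GLTR ranks each token
# under a real language model; this tiny hand-made frequency list is
# only a hypothetical proxy for that predictability signal.

COMMON_WORDS = {
    "the", "of", "and", "a", "to", "in", "is", "it", "that", "was",
    "for", "on", "are", "with", "as", "be", "at", "by", "this",
}

def predictability_score(text):
    """Fraction of words drawn from a short list of very common words."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    return sum(w in COMMON_WORDS for w in words) / len(words)

# A higher score means more "predictable" wording -- one weak hint,
# not a verdict, that the text might be machine-written.
print(round(predictability_score("The report is on the table"), 2))
```

The real tool does this with actual model probabilities and color-codes each word, which is far more telling than any frequency list.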
Check it out at gltr.io/dist/index.html, then test some text for yourself, like some of the writing in this column.
Let me know what you discover.