WASHINGTON, March 5, 2017 — Laws govern us. Humans. People. There are also laws that govern the machines that humans use. There are not yet, however, any laws to govern machines that think on their own, an unexpected new legal frontier that is already upon us.
A discussion of the legal ramifications of such machines, driven by what is termed “artificial intelligence” (AI), would include an examination of civil laws of responsibility, addressing, for example, whether and when it can be shown that a machine, say a robot, caused harm. The discussion would also include criminal laws that would address crimes committed by robots and other forms of artificial intelligence.
A programmable digital computer was invented back in the 1940s. That machine applied concepts of mathematical reasoning to solve problems. Scientists soon began to discuss the possibility of building an electronic brain.
In 1956, at Dartmouth College, research on what we now call AI took off. Scientists who attended a workshop there were given millions of dollars to create a machine as intelligent as a human being. Their prediction of success within a generation did not contemplate the intricate difficulties involved with the project.
Fast forward to the current pace of scientific research, which includes the creation of machines that have conquered various levels of reasoning, natural language, games and symbolic logic. The world as we know it today has computers that drive cars without drivers, beat Jeopardy! champion Ken Jennings, and provide “predictive” interactions based on our prior actions. Amazon and other websites now suggest books, products and services; travel sites offer individualized vacation suggestions; Netflix learns our reactions to films and suggests others of similar interest; Nest, the learning thermostat, adjusts our household temperature throughout the day; and Pandora selects our favorite music.
Humans have always tried to improve their lives in every imaginable spectrum. Today, the use of technology has become the method of choice in these endeavors. The past 100 years saw more dramatic technological change than all of prior recorded human history. The next 100 years will surpass the past exponentially.
AI Research Today: Legal Ramifications
Artificial intelligence researchers are now trying to tackle what is known as reinforcement learning. This is a training method that allows AI creations to learn from past experience, correct for errors, and try again. Through this process, models are created in which the AI entity determines the best course of action and then implements it.
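That try-fail-adjust loop can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration of reinforcement learning (a simple Q-learning-style update), not any real traffic system; the actions, rewards and numbers are invented for the example, anticipating the traffic-light scenario discussed next.

```python
import random

# Minimal reinforcement-learning sketch: an agent learns which of two
# traffic-light green durations earns more simulated reward.
# All values here are illustrative, not real traffic data.
actions = [27, 30]               # candidate green-light durations (seconds)
q = {a: 0.0 for a in actions}    # the agent's estimated value of each action
alpha = 0.1                      # learning rate

def reward(duration):
    # Hypothetical environment: the shorter cycle pays slightly more on
    # average, with random noise standing in for real-world traffic.
    base = 1.0 if duration == 27 else 0.8
    return base + random.uniform(-0.1, 0.1)

random.seed(0)
for step in range(1000):
    # Epsilon-greedy choice: usually exploit the best-known action,
    # occasionally explore the other one.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(q, key=q.get)
    # Learn from the outcome: nudge the estimate toward what was observed,
    # then try again on the next step.
    q[a] += alpha * (reward(a) - q[a])

best = max(q, key=q.get)
print(best, {k: round(v, 2) for k, v in q.items()})
```

After enough trials the agent's estimates converge and it settles on whichever timing the simulated environment rewards more, without anyone telling it the answer in advance. That autonomy is exactly what raises the liability questions below.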
But imagine, for example, using AI for traffic control. The AI determines it is more efficient to cycle the traffic light every 27 seconds instead of 30. This could theoretically cause drivers to run the light and cause more accidents. Beyond the obvious fact that a driver running a light would be responsible if an accident occurred in that scenario, would the human developers who created the AI system also be responsible? Traditional civil law, or “tort” law, would say “no” for many reasons.
AI design is, after all, artificial. Thus, ideas about assigning liability to, or empaneling “a jury of peers” for, such a defendant seem ridiculous. Similarly, a criminal prosecution would seem equally absurd, unless it could somehow be shown that the responsible AI developer intended to create harm.
Cars are currently being designed and programmed to “talk” to each other, the better to avoid accidents. But take this one step further. Imagine your AI personal assistant “talking” to another person’s AI assistant. Imagine these assistants “like” each other. Will they date? Break up? Who gets the engagement ring?
Law does not yet regulate non-human behavior. Bees are not liable if they sting. Beehive masters might be if they intentionally set the bees on you.
AI systems are edging closer to human-like thought processing. Self-awareness was a scientific research bastion believed to be far off; that is, until 2015, when researchers conducted testing in this area and demonstrated that their robot creations exhibited a measure of “self-awareness.”
Three robots were created and programmed to believe that two of them had been given a “dumbing pill” which would make them mute. Two robots were silenced. When asked which of them hadn’t received the pill, only one was able to say “I don’t know” out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn’t received the pill. The programmers said that to prove the self-awareness concept, the robot must have understood the rules, recognized its own voice and been aware of the fact that it is a separate entity from the other robots.
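The reasoning chain in that test can be modeled in a short program. The sketch below is a toy reconstruction of the “dumbing pill” puzzle as described above; the class, names and behavior are invented for illustration and do not represent the researchers' actual software.

```python
# Toy model of the "dumbing pill" self-awareness test described above.
# Two robots are silenced; all three try to answer aloud. Only the robot
# that hears its own voice can rule itself out as a pill recipient.
class Robot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced

    def answer(self):
        # Every robot attempts to say "I don't know"; a silenced robot
        # produces no sound at all.
        spoken = None if self.silenced else "I don't know"
        if spoken is None:
            return None
        # Hearing its own reply, the robot revises its answer: it must
        # be the one that did not receive the pill.
        return f"{self.name}: Sorry, I know now. I was not given the pill."

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
replies = [r.answer() for r in robots]
audible = [rep for rep in replies if rep is not None]
print(audible)
```

The point of the exercise, as the programmers framed it, is that the final step requires the robot to connect the voice it hears to itself as a distinct entity, which is precisely what this toy loop cannot fake away.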
AI systems, however, are not people and therefore cannot currently be taken into a courtroom to stand trial or answer for accidents.
But that notion—namely, that AI systems are not people—is currently being challenged.
Alexa, the voice-activated assistant created by Amazon, is now the subject of a courtroom controversy set to answer the question “where do free speech rights apply?”
Amazon, Echo, Alexa: AI and First Amendment Rights
Freedom of speech is a foundational right we enjoy as Americans, as set out in the First Amendment to our Constitution. In 1791, when the First Amendment was ratified and our right to free speech guaranteed, the idea of AI entities like Alexa, Siri, Cortana and Google Assistant was clearly not contemplated.
In November 2015, Victor Parris Collins was in the home of, and was allegedly murdered by, James Bates, who owned an Amazon Echo. Bates has been charged with first-degree murder after police found Collins face down in Bates’ hot tub. Police requested the Echo recordings, hoping to hear conversations and anything Alexa may have “said” that would be stored by the device.
Amazon has resisted turning over the recordings, clearly fearing a huge hit to sales if the Echo becomes known as a spy device installed in customers’ homes. Amazon’s argument is that Alexa has a First Amendment right to freedom of speech.
The speech Amazon asserts is protected consists of the recordings stored by Amazon for Bates’ Echo device: first, his own speech in the form of a request for information or a command to Alexa; and second, the response of the Alexa Voice Service itself, conveying the information it determined most responsive to Bates’ inquiries and commands. Amazon claims both are protected speech under the First Amendment.
Amazon’s argument is plausible according to Toni Massaro, a professor at the University of Arizona College of Law:
“Free speech arguments that favor machine speech are surprisingly plausible under current doctrine and theory. Of course, Amazon itself has free speech rights. As long as Alexa can be seen as Amazon, there is a protected speaker here.”
The EU Weighs in on AI
The European Union is currently considering whether robots should be granted legal status with rights like human beings. The finding of the Committee on Legal Affairs for the European Parliament is that “robots, androids, and other forms of artificial intelligence are poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched.” Proposed principles, closely echoing science fiction author Isaac Asimov’s famous Three Laws of Robotics, include the following:
- A robot may not injure a human being, or through inaction, allow a human being to come to harm;
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The EU committee’s report also addresses legal liabilities, noting that liability should be proportionate to the instructions given the robot and its autonomy.
Conclusion: To Be Continued
By necessity, the “instructions” in the concept of liability just described would have to reach all the way back to the initial programming of an AI device or software, since the evolution of AI must include the assumption that the AI entity “thinks.” Liability, then, could attach only if the instructions failed to foresee, but should have foreseen, negative consequences.
Warning, Will Robinson! Warning!
Paul A. Samakow is an attorney licensed in Maryland and Virginia, and has been practicing since 1980. He represents injury victims and routinely battles insurance companies and big businesses that will not accept full responsibility for the harms and losses they cause. He can be reached at any time by calling 1-866-SAMAKOW (1-866-726-2569), via email, or through his website.
His book “The 8 Critical Things Your Auto Accident Attorney Won’t Tell You” can be instantly downloaded, for free, on his website: http://www.samakowlaw.com/book.
Samakow has also started a small business consulting firm. His new book “Step By Step, Achieve Small Business Success” is available at www.thebusinessanswer.com.
Copyright 2017 Communities Digital News