AI Now Report 2018
2018
The AI Now Report 2018 by the AI Now Institute revealed unsafe and harmful practices by IBM Watson, U.S. Immigration and Customs Enforcement (ICE), authorities in the Xinjiang Autonomous Region, and Amazon’s Rekognition tool.
The interdisciplinary work of the AI Now Institute recommended that governments expand the powers of sector-specific agencies to regulate artificial intelligence. Its positions include safeguards for face recognition to protect the public interest, measures to address discrimination, expanding artificial intelligence education to include social issues and ethics, and greater accountability, fairness, and transparency in artificial intelligence. The institute warned of the growing power of AI-driven surveillance and automated decision systems, and of the bias and discrimination those systems can produce.
The 2018 report cited documents from July showing that the computer system IBM Watson produced “unsafe and incorrect” cancer treatment recommendations. A September investigation showed IBM was building an “ethnicity detection” feature using non-consensual, secret police camera footage of thousands of people. Working with the New York City Police Department, the question-answering computer system would scan faces for race-based features.
As these issues raise public concern, the accountability gap in artificial intelligence widens. The culture of artificial intelligence research, which prizes innovation and efficiency while tolerating recklessness and amorality, has produced these harms, especially for members of subjugated groups. ICE deliberately tampered with its risk assessment algorithm so that it recommended detaining every immigrant in custody. Meanwhile, those who study social, political, and ethical issues may not be versed enough in the science of artificial intelligence to answer the questions that arise. Accountability and transparency would help mitigate many of these issues.
The American Civil Liberties Union and University of California, Berkeley researchers found that Amazon’s Rekognition tool produced inaccurate results. The algorithm falsely matched U.S. Congress members against thousands of photos of arrested people. The false positives occurred disproportionately among Congress members of color, with an error rate of nearly 40%, compared with 5% for white members.
Most egregiously oppressive and unethical is the police state in the Xinjiang Autonomous Region. The government has backed surveillance cameras, spyware, and covert monitoring to detain and “re-educate” the predominantly Muslim Uyghur minority. Machine learning technology is used to generate lists of suspects, who are then forcibly disciplined in detainment camps. The power of data-driven social control magnifies this blatant misuse and threatens basic individual rights.
The full report can be read here…