International Law Weekend 2019: A Glance into Tech & Globalization
By: Alicia Xue
The American Branch of the International Law Association’s (ABILA’s) 2019 International Law Weekend was a great experience. It was inspiring to see people of different nationalities, ethnicities, and cultural and social backgrounds committed to the same mission – facilitating communication and cooperation across borders. Despite recent discouraging news in the international sphere, international law has shown its resilience, and it will endure. International law faces unprecedented challenges – the rise of populist nationalism, climate change, the refugee crisis – and all of them call for revisiting and reforming the field.
Nowadays, one of the hottest topics in international law is technology. Issues related to technology – such as intellectual property, privacy, and cybersecurity – are becoming increasingly relevant in our globalized world. In this blog, I will briefly discuss one panel from the conference: International Intellectual Property Law in the Age of Smart Technology and Intelligent Machines. The panel comprised four talks: “Can Copyright Cope with the Challenges of New Technology in the 21st Century?”, “The Communication Right in Copyright: Infringement and Enforcement Issues”, “Robotic Collective Memory” and “The Algorithmic Divide and Inequality in the Age of Artificial Intelligence.”
The Communication Right in Copyright: Infringement and Enforcement Issues
In this talk, Dr. Cheryl Foong from Curtin University discussed automated/platform infringement. In ABC v. Aereo (2014), broadcasters sued Aereo – a company whose service let viewers watch live and time-shifted streams of broadcast television on internet-connected devices – for copyright infringement. The U.S. Supreme Court ruled for the plaintiffs, holding that Aereo was not merely “an equipment provider” but, given its “overwhelming likeness to cable companies,” “performs petitioners’ works ‘publicly.’” Justice Scalia dissented, arguing that the Court should not pass judgment on novel technologies and that copyright law should instead be amended to address such issues. Dr. Foong then discussed two similar EU cases: Svensson v. Retriever (2014), which held that a website linking users to protected works already freely available online does not infringe the copyright in those works, and GS Media v. Sanoma (2016), which held that linking to content posted online without the rights holder’s consent can constitute a communication to the public and thus potentially infringe copyright.
Robotic Collective Memory
Next, Prof. Michal Shur-Ofry from The Hebrew University of Jerusalem talked about the application of social robots to collective memory. Collective memory refers to a socially constructed group identity, usually preserved in remembrance institutions such as museums and archives. Using social robots to convey collective memory helps present this valuable information to the public more vividly, since interacting with a robot feels like interacting with a real human. Robotic collective memory, however, presents two major challenges. First, it is more susceptible to editorial choices than traditional collective memory, since the underlying information (data and code) in a social robot is invisible to viewers. Second, it is more vulnerable to manipulation (e.g., hacking). Prof. Shur-Ofry offered several possible responses to these challenges: letting the audience know that the presenter is a robot, explaining how the technology functions, enhancing data security, and ensuring data integrity.
The Algorithmic Divide and Inequality in the Age of Artificial Intelligence
This talk was given by Prof. Peter K. Yu from Texas A&M University School of Law. The term “digital divide” describes the inequality between people who have internet access and those who have little or none, and this divide has taken on new dimensions in the age of artificial intelligence. Professor Yu discussed three facets of the resulting algorithmic divide, namely algorithmic deprivation, algorithmic discrimination, and algorithmic distortion. Algorithmic deprivation affects those without access to the internet. Algorithmic discrimination refers to discrimination embedded in technology: for example, Microsoft’s Twitter chatbot Tay learned discriminatory language from online posts, and HP’s face-tracking software could not recognize the faces of people of color. Eliminating such biases requires larger – and more inclusive – datasets for training these algorithms. Algorithmic distortion refers to how improperly designed algorithms or inadequate data distort people’s understanding of the world. While algorithmic deprivation and discrimination mostly affect the underprivileged side of the algorithmic divide, algorithmic distortion affects all internet users.
To sum up, technological development has created new legal – as well as socioeconomic and political – problems globally, and solving them will require transdisciplinary approaches and international cooperation.