by Davar Ardalan and Kee Malesky

We are the storytelling species. It’s natural, then, to want to discuss the future of storytelling in artificial intelligence and the cultural implications that come with it.

To consider “Artificial Intelligence, Culture, and Storytelling,” more than 50 thought leaders from a variety of disciplines gathered for a symposium co-sponsored by IVOW and Morgan State University in Baltimore on April 23. Representatives from the United Nations, the Australian government’s innovationXchange program, and Management Systems International, as well as renowned machine learning experts, educators, journalists, technologists, and business leaders from across the US and Mexico, engaged in conversations about AI as a tool for culturally rich and socially conscious storytelling.

By early afternoon, our focus turned to the need for a future “Declaration of Citizen, Machine, and Culture” that could guide engineers, designers, machine learning experts, and users to understand and protect human rights, and to help ensure inclusivity and diversity in the digital world.

We considered the fact that we’ve been modern civilized human beings for about 10,000 years, with evolving levels of self-awareness that have allowed us to ask essential questions, experience individual consciousness, and share it with others.

So we asked ourselves: How do we bring the Machine into this discussion of human rights? What issues and concerns are specific to culture-related AI applications? What does human-centered AI look like? What are the rights and privileges of human beings in the digital universe?

As with any new technology, it’s important to create guidelines for the proper use of these new tools. Do we need machines to hold other machines accountable and accurate, or a responsible third party to review new products before launch?

One participant pointed out early on that we need to identify specific issues and current inadequacies: the problem isn’t in the algorithms; it comes from people and society. Data is agnostic and amoral, and diverse datasets do exist, but people have biases. A multidisciplinary approach is essential to finding balanced solutions, and systems will need to be trained to be aware of cultural context. Dominant biases have considerable power to negatively affect the lives of others, so we have to keep humans accountable too. AI expert Mark Riedl of the Georgia Tech School of Interactive Computing suggested that we look to Europe and its new laws around AI accountability.
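The point about diverse datasets invites a concrete illustration. Below is a minimal Python sketch of one way a team might audit how well different cultural groups are represented in a training corpus; the records, group labels, and the 20% threshold are all hypothetical, and a simple count of records per group is only a crude proxy for the kind of review the participants called for.

```python
# A minimal sketch of a dataset-representation audit. All records,
# culture labels, and the MIN_SHARE threshold are hypothetical.
from collections import Counter

# Hypothetical training records, each tagged with a cultural origin.
records = [
    {"story": "A wedding tale", "culture": "Persian"},
    {"story": "A harvest song", "culture": "Persian"},
    {"story": "A border ballad", "culture": "Mexican"},
    {"story": "A creation story", "culture": "Navajo"},
    {"story": "A tea ceremony", "culture": "Persian"},
    {"story": "A festival memoir", "culture": "Mexican"},
]

MIN_SHARE = 0.20  # assumed fairness threshold, for illustration only

counts = Counter(r["culture"] for r in records)
total = sum(counts.values())

# Flag any group whose share of the corpus falls below the threshold.
for culture, n in counts.most_common():
    share = n / total
    flag = "  <- underrepresented" if share < MIN_SHARE else ""
    print(f"{culture}: {n}/{total} ({share:.0%}){flag}")
```

Run on the toy records above, this flags the Navajo group (1 of 6 records, about 17%), the kind of signal that would prompt a team to gather more data before training.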

Pictured: Ellen Yount of Management Systems International, Nisa McCoy of IVOW, Louise Stoddard of the United Nations, and AI expert Rafael Pérez y Pérez.

AI expert Mark Finlayson of Florida International University urged us to pause and first consider what the problem is before making any declarations.

Lisha Bell of Pipeline Angels brought up the point that some biases have more power than others, and that we need to hold humans accountable for making AI balanced and diverse. An AI system must be interrogatable: we should be able to understand why a system made a decision.
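Bell’s call for interrogatable systems can also be made concrete. The sketch below, with hypothetical features and toy data, trains a small decision tree with scikit-learn and prints the exact comparisons behind one prediction; real systems need far richer explanation tools, but the principle, that a decision can be traced and questioned, is the same.

```python
# A minimal sketch of an "interrogatable" model: a small decision tree
# whose individual decisions can be traced step by step. The features,
# data, and labels are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier

feature_names = ["story_length", "num_cultural_refs", "source_diversity"]

# Toy rows: each is a hypothetical story; the label says whether a
# (hypothetical) recommender surfaced it to readers.
X = [
    [800, 0, 1],
    [1200, 5, 4],
    [300, 1, 1],
    [1500, 7, 5],
    [600, 0, 2],
    [900, 4, 3],
]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain(sample):
    """Print each comparison the tree made on the way to its prediction."""
    path = model.decision_path([sample]).indices  # node ids along the path
    leaf = model.apply([sample])[0]
    for node in path:
        if node == leaf:
            print(f"-> predicted class: {model.predict([sample])[0]}")
            break
        f = model.tree_.feature[node]    # feature tested at this node
        t = model.tree_.threshold[node]  # split threshold
        op = "<=" if sample[f] <= t else ">"
        print(f"{feature_names[f]} = {sample[f]} {op} {t:.1f}")

explain([1000, 6, 4])
```

Whatever the underlying model, the requirement the group articulated is the one this toy makes visible: for any decision, there should be a path a human can read and dispute.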

Louise Stoddard, who works with the United Nations Office for the Least Developed Countries, asked, “What are the barriers to access, and how can we remove them? Who owns this process?” She stressed the need to listen to stories “from the ground.”

Ellen Yount, Vice President of Management Systems International (MSI), liked the idea of having a charter that encourages us to “Do no harm.” AI developers should consider the social implications of telling stories, as well as any unintended consequences.

A Lesson From History

Ahead of our conversation, we looked at some highlights in the history of human rights, and the granting or asserting of those rights, which have evolved over time and been extended to women, minorities, workers, children, the disabled, immigrants, and refugees:

  • Cyrus Cylinder, 6th century BCE (“I permitted all to dwell in peace”)
  • Magna Carta, 1215 (“No Freeman shall be taken or imprisoned but by lawful judgment of his Peers”)
  • US Declaration of Independence, 1776 (“All men are endowed by their Creator with certain unalienable rights”)
  • French Declaration of the Rights of Man and of the Citizen, 1789 (“the natural and imprescriptible rights of man…are liberty, property, safety and resistance against oppression”)
  • US Bill of Rights, 1791 (freedom of religion and the press, and to petition and assemble peaceably)
  • UN Universal Declaration of Human Rights, 1948 (“Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind”)
  • Beijing Declaration, 1995 (“women’s rights are human rights”)

And one idea from 1950s fiction that resonates today: Isaac Asimov’s Laws of Robotics, the first of which says, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Students and faculty from the University of Maryland and Morgan State University were among those participating in or documenting the conversations at the symposium.

We didn’t intend to produce an actual Declaration of Citizen, Machine, and Culture at the symposium; we wanted to begin a conversation between our IVOW team and a wide variety of experts. We are well aware that others are actively debating how to balance human needs and rights with the challenges of the digital universe, including these current endeavors:

  • Microsoft released a book in January called The Future Computed, on the effects of artificial intelligence on society, which argued that perhaps coders should take a “Hippocratic oath” as physicians do: “First, do no harm.”
  • Al Jazeera held a Future Media Leaders’ Summit last month to discuss how to frame the ethics behind AI in a human-centric field like journalism. Robots may be more efficient, but can they empathize and make human judgments?
  • At MIT Technology Review’s EmTech Digital conference, the Partnership on AI presented its guiding principles: “working to protect the privacy and security of individuals; striving to respect the interests of all parties that may be affected by AI advances; helping keep AI researchers socially responsible; ensuring that AI research and technology is robust and safe; and creating a culture of cooperation, trust, and openness among AI scientists to help achieve these goals.” The conclusion of the conference: For better AI, diversify the people building it.
  • AI4ALL is a nonprofit that runs summer programs teaching AI to students from underrepresented groups. Its tagline asks, “AI will change the world; who will change AI?” “Our vision is for AI to be developed by a broad group of thinkers and doers advancing AI for humanity’s benefit.”
  • From The Future Computed: “Artificial Intelligence can serve as a catalyst for progress in almost every area of human endeavor. But, as with any innovation that pushes us beyond current knowledge and experience, the advent of AI raises important questions about the relationship between people and technology, and the impact of new technology-driven capabilities on individuals and communities. We are the first generation to live in a world where AI will play an expansive role in our daily lives.”

As the declaration workshop came to a close, IVOW’s product designer Nisa McCoy pointed out that no design is perfect: we need to look at the initial intention of any technology and then review its effects. We need to know more about a product and its impact before it launches. Apps and products must be safe, reliable, private, secure, inclusive, transparent, and accountable, and we need a cross-cultural understanding of the audience and users.

After the session, one of the symposium attendees, Paris Adkins-Jackson, founder and CEO of DataStories by Seshat, a research, evaluation, and data analysis company, compiled this summary of the highlights of our discussion, putting it in the form of a declaration:

We declare that citizen, machine, and culture are inherently and essentially connected and in communication with each other; and

We declare that those connections produce inequities and amplify biases;

Thus, we declare that when engaging in research, product development, or other actions related to Artificial Intelligence and Machine Learning, there must be a thorough examination, prior to development, of the potential impact of any product on people, including access, appropriation, and amplification of injustice; and

We declare that such information will be a large component of the criteria in the decision to develop the product;

Also, we declare that a system of accountability be developed to mitigate any unforeseen challenges that arise and indicate an adverse impact on people;

Lastly, we declare we will make all efforts possible to work with diverse and non-traditional disciplines to investigate impact, and develop and implement accountability platforms as well as to assist with product development.

About the Authors:

Davar Ardalan is the Founder and Storyteller in Chief of IVOW. She has been a journalist in public media for 25 years, most of them at NPR News, where she designed stories anchored in multiculturalism and steeped in historical context. Her last position at NPR, in 2015, was senior producer of the Identity and Culture Unit. Realizing that there is a gaping hole in the AI algorithms that will define our future stories, Davar created IVOW, whose AI software sifts through data on world cultures, traditions, and history for modern consumer storytelling applications. IVOW’s intent is the effective fusion of AI with cultural storytelling, to help diminish bias in algorithmic identification and to train AI software to be much more inclusive.

Kee Malesky is IVOW’s Director of Research. Her 30-year career as a news librarian for National Public Radio prepared her for quick research and fact-checking on deadline, which she continues to find fascinating and fulfilling work. Kee’s copy-editing abilities come from her experience as the author of two books (All Facts Considered and Learn Something New Every Day), as well as from working on book projects with former NPR colleagues Scott Simon, Davar Ardalan, Jacki Lyden, and others. Before her retirement in 2013, Kee was the recipient of several awards from international library organizations.