COMMIT/: Cultural bias in AI
Track theme: "How to deal with cultural bias in AI?"
Programme: Tuesday 17 March 2020
Public keynotes about Cultural bias in AI
13h50 - 14h20 Keynote Martijn Kleppe, KB
14h20 - 14h50 Keynote Stefania Milan, UvA
15h00 - 17h30 Session by invitation only
Aim of the track
Do you want to discover where the cultural biases are in your organisation's data and systems?
Cultural AI is the study, design and development of socio-technological AI systems that are implicitly or explicitly aware of the subtle and subjective complexity of human culture. It is as much about using AI for understanding human culture as it is about using knowledge and expertise from the humanities to analyse and improve AI technology. It studies how to deal with cultural bias in data and technology and how to build AI technology that is optimised for cultural and ethical values.
Currently, there is a need for such systems. While the big tech companies admit this, they still focus on the larger markets and ignore much of the cultural and linguistic variation found even in a small market like the Netherlands. If big tech develops a voice assistant for Dutch, for instance, it will be trained to recognise standard Dutch, not any of its dialects. Cultural biases cause user exclusion, undermining the efforts of governments to leverage digitalisation in order to be inclusive, resilient and democratic.
In this session we invite you to discuss, and perhaps discover, your organisational use cases where AI fails, performs sub-optimally, or causes exclusion because it is not aware of the socio-cultural context in which it operates. Think of a voice assistant that cannot deal with dialects, or an automatic classification system for job ads and CVs that, because it was trained on a predominantly male work domain like ICT, misclassifies many CVs from other domains.
The goal of the session is to draw up plans to address issues arising from the lack of cultural awareness in AI systems, and to put culturally aware AI on the map among institutes, companies and policy makers.
Antal van den Bosch (Director Meertens Institute)
Jacco van Ossenbruggen (VU Amsterdam/ Centrum voor Wiskunde en Informatica)
Mieke van den Berg (Director COMMIT/)
Marieke van Erp (Lead Digital Humanities Lab, KNAW Humanities Cluster)
KNAW Humanities Cluster huc.knaw.nl
Centrum voor Wiskunde en Informatica www.cwi.nl
Programme with keynote speakers
13:50-14:20 Keynote speaker
|Title: Responsible use of AI within the library and digital heritage field||Speaker: Martijn Kleppe|
Over the last decade, millions of heritage objects have been digitised, including newspapers, books, photographs, TV programmes and archival documents. This enables humanities scholars to conduct computer-assisted research to recognise patterns in huge amounts of historical text. Recently, computer scientists have discovered these datasets as well, using them to develop and train algorithms from the Artificial Intelligence domain. For them, both the size of the data and the scientific challenges are interesting. The data is not always perfectly digitised, which allows them to experiment with techniques to improve it. The historical nature of the data also brings in new perspectives, and new biases, for understanding evolving human cultural values.
Consequently, digital heritage data has become both a source for AI research and a stimulus for creating new techniques to explore and search digital heritage collections. For example, at the KB, National Library of the Netherlands, we experiment with computer vision techniques to analyse the contents of historical newspaper photos. The Netherlands Institute for Sound and Vision is even further along and uses speech recognition to make the content of television programmes discoverable. We therefore expect that in the coming years, AI techniques will be used even more to make digital heritage collections more accessible through their different modalities.
At the same time, the use of these techniques raises questions. As a library and heritage field, we want to use algorithms in a responsible manner. We think the user should be in control and that the diverse background of the data needs to be taken into account. At the KB, we have therefore formulated seven principles against which we check our work with and for AI. In this keynote I will reflect on the current applications of AI techniques within the heritage sector and present the seven principles as a starting point for debate.
|Info Martijn Kleppe
Martijn Kleppe, Head of Research at KB, National Library of the Netherlands.
14:20-14:50 Keynote speaker
|Title: Beyond the AI universalism: Towards a roadmap for cultural AI||Speaker: Stefania Milan|
Artificial Intelligence (AI) is set to have unparalleled consequences for human life. Like many other technologies of our times, it is surrounded by an aura of inevitability and, to some extent, infallibility. Yet it may reproduce inequalities and amplify discrimination. What’s more, today’s understanding of and plans for intelligent systems tend to “universalize” goals and needs, ignoring cultural diversity and local specificities, to the point that they present technology as operating outside of history and of specific sociopolitical, cultural, and economic contexts (cf. Milan and Treré, 2019). This talk offers some building blocks towards a roadmap for Cultural AI, able to interpret, leverage and address plurality.
|Info Stefania Milan
Dr Stefania Milan (University of Amsterdam) is Associate Professor of New Media and Digital Culture at the Department of Media Studies, University of Amsterdam. Her work explores the interplay between digital technology, activism and governance. Stefania is the Principal Investigator of two projects financed by the European Research Council exploring data- and algorithmic-mediated forms of civic engagement (see data-activism.net and algorithms.exposed), and co-principal investigator in the Marie Curie Innovative Training Network “Early language development in the digital age” (e-ladda.eu). As of May 2020, she will be coordinating the project “Making the hidden visible: Co-designing for public values in standard-making and governance”, funded by the Dutch Research Council. Stefania is the author of Social Movements and Their Technologies: Wiring Social Change (Palgrave Macmillan, 2013/2016) and co-author of Media/Society (Sage, 2011). She enjoys experimenting with digital and action-oriented research methods and finding ways to bridge research with policy and action.
Programme with use cases
15:00-17:30 Discussion session
|Title: Use cases – RTL||Speaker: Daan Odijk (Lead data scientist, RTL)|
RTL, as the largest commercial broadcaster of the Netherlands, has left its stamp on Dutch culture for over 30 years. In a declining TV market, RTL is refocusing from targeting the masses to delivering content more personally. Daan will share how RTL uses data science, machine learning and AI to help its users find the right content for them, and which challenges around cultural bias this raises.
|Title: Use cases – NEWSGAC||Speaker: Kim Smeenk (Digital Humanities researcher, RuG)|
In the NEWSGAC project, AI was used to automatically classify articles from digitised newspapers by genre, to support journalism scholars in studying trends in journalistic discourse over time. Classifying genre, however, is a challenging task that requires considerable interpretation. When "black box" AI techniques are used for this task on a large news corpus, cultural bias and other machine errors tend to go unnoticed. Kim will demonstrate how AI transparency can be improved through data visualisations that show performance per genre, by offering article-level and classifier-level explanations, and by enabling comparison between alternative machine learning pipelines.
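The idea of reporting performance per genre rather than as a single overall score can be sketched as follows. This is a minimal illustration, not NEWSGAC's actual code: the genre labels, articles and pipeline outputs are invented, as is the helper `per_genre_accuracy`.

```python
# Hypothetical sketch: break accuracy down by gold genre so that systematic
# errors against a particular genre become visible, and compare alternative
# classification pipelines side by side. All data below is invented.
from collections import defaultdict

def per_genre_accuracy(gold, predicted):
    """Accuracy per gold genre label, as a dict {genre: accuracy}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for g, p in zip(gold, predicted):
        total[g] += 1
        if g == p:
            correct[g] += 1
    return {genre: correct[genre] / total[genre] for genre in total}

# Gold genre labels for six articles (invented).
gold = ["news", "news", "opinion", "opinion", "interview", "interview"]
# Outputs of two alternative (hypothetical) machine-learning pipelines.
pipeline_a = ["news", "news", "opinion", "news", "news", "interview"]
pipeline_b = ["news", "opinion", "opinion", "opinion", "interview", "interview"]

for name, preds in [("pipeline A", pipeline_a), ("pipeline B", pipeline_b)]:
    print(name, per_genre_accuracy(gold, preds))
```

Both pipelines get four of six articles right, yet the breakdown shows pipeline A is strong only on news while pipeline B fails on news but handles the other genres — exactly the kind of systematic, genre-specific error that a single aggregate score would hide.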
In the coming years, COMMIT/ will undertake various activities to strengthen, link and make visible the public-private partnership in the field of ICT science. For six years, knowledge institutions (general and technical universities), start-ups, SMEs, corporate companies and non-profit organisations from across the entire field of ICT science worked together in the COMMIT/ ICT research programme. More than 130 partners collaborated in this public-private partnership until 2017, on topics ranging from security to e-health and from human-machine interaction to e-food.
Cultural AI Lab
The Cultural AI Lab initiative is aimed at the analysis of the inherent cultural bias in data, and the influence of this bias on the computer programmes operating on these data. The lab is a unique collaboration of humanities scholars from the KNAW Humanities Cluster, computer scientists from CWI and TNO, and the cultural heritage sector (NDE), represented by the KB, National Library of the Netherlands, the Netherlands Institute for Sound & Vision, and the Rijksmuseum.
Registration website for ICT.OPEN2020
Marloes van den Heuvel, ictopen2020@nwo.nl
MartiniPlaza, Leonard Springerlaan 2, 9727 KB Groningen, Netherlands