Online Workshop 2: Reimagining Industry / Academic / Cultural Heritage Partnerships in AI
The workshop will be held online and hosted by Auburn University (Alabama, USA), on Monday, 25 October, and Tuesday, 26 October, 2021.
Times for both days are:
– 8:00 am to 12:00 pm Pacific Daylight (UTC -7)
– 10:00 am to 2:00 pm Central Daylight (UTC -5)
– 11:00 am to 3:00 pm Eastern Daylight (UTC -4)
– 4:00 pm to 8:00 pm UK (UTC +1)
The workshop is free of charge, but registration is required at https://forms.gle/9USeZdciNRZT8ETB9
Please register by 23rd October. Attendees will then be sent the Zoom link to attend the workshop.
This workshop, the second in a series, focuses on innovative AI research methods and on collaborations among industry, academia, and cultural institutions: how they work now, and what they might look like in the future. The workshop invites critique, visions, and revisions of how these relations might grow with equity and social justice interwoven from the design process onwards. We will explore both the synergies and the differences in the ethics, motivations, and practices implicated in such industry / cultural heritage partnerships, reimagining them for a thoughtful and intentional future.
Please join us for a stimulating set of presentations followed by robust and thoughtful discussion.
- David De Roure, Oxford University
- Eun Seo Jo, Stanford University
- Katherine McDonough, The Alan Turing Institute
- Mutale Nkonde, AI for the People
- Caroline Sinders, Convocation Design+Research; London College of Communication
- Jeff Steward, Harvard Art Museums
- Glen Worthey, HathiTrust Research Center / University of Illinois Urbana-Champaign
- Samantha Shorey, The University of Texas at Austin
- Kanta Dihal, Leverhulme Centre for the Future of Intelligence, University of Cambridge
- Erinma Ochu, Manchester Metropolitan University
(Times are Central Daylight, so please adjust accordingly).
Monday 25th October
10:00 – 10:15: Opening
10:15 – 12:00: Katherine McDonough (Turing Institute), Jeff Steward (Harvard Art Museums), Glen Worthey (HathiTrust Research Center).
Chair: Dr. Chris Loughnane (Auburn University)
12:00 – 12:30: BREAK
12:30 – 1:30: Caroline Sinders (Independent/London College of Communication), Kanta Dihal (Leverhulme Centre for the Future of Intelligence, University of Cambridge).
Chair: Samantha Deutch (Frick Collection)
1:30 – 2:00: Roundtable with team and presenters.
Chair: Prof. Claire Warwick (Durham University)
2:00 – Closing
Tuesday 26th October
10:00 – 10:15: Welcome (summary of day 1)
10:15 – 12:00: Eun Seo Jo (Stanford University), David De Roure (Oxford University), Samantha Shorey (University of Texas at Austin).
Chair: Nicole Coleman (Stanford)
12:00 – 12:30: BREAK
12:30 – 1:30: Mutale Nkonde (Columbia University), Erinma Ochu (Manchester Metropolitan University).
Chair: Annalina Caputo (Dublin City University)
1:30 – 2:00: Wrap-up / Roundtable with team and presenters.
Chairs: Glen Worthey (HathiTrust Research Center), Dr. Chris Loughnane (Auburn University)
2:00 – Closing
Presentation Abstracts & Speaker Biographies
David De Roure:
Title: Emerging Scholarly Practice and Scholarly Primitives: a Case Study in Music and AI
Abstract: Our knowledge infrastructure has evolved to support digital scholarship, as we adopt new methods which realise the affordances of the digital – including computation, but also socio-technical engagement at scale. The digital musicology community has been “an early adopter”, establishing new research practices which illuminate possible futures in other fields. Today we see the music community increasingly adopting artificial intelligence techniques in both analysis and composition, and a growing symbiosis of human and machine. This talk explores what insights this might bring for our future knowledge infrastructures.
Bio: David De Roure is Professor of e-Research at University of Oxford and Director of the Oxford e-Research Centre. He has strategic responsibility for Digital Humanities at Oxford within The Oxford Research Centre in the Humanities, collaborates in Oxford’s WSTNet laboratory with the Oxford Internet Institute, and is a member of the Oxford Cyber Security Centre. He is a Strategic Advisor to the Economic and Social Research Council in the area of Social Media Data.
Focused on advancing digital scholarship, David works closely with multiple disciplines including social sciences (studying social machines), digital humanities (computational musicology), computer science (large scale distributed systems and social computing) and previously bioinformatics, chemistry, environmental science and social statistics. He has extensive experience in hypertext, Web and Linked Data. Drawing on this broad interdisciplinary background he is a frequent speaker and writer on digital scholarship and the future of scholarly communications.
David was closely involved in the UK e-Science programme and held a national role from 2009-2013 as the UK National Strategic Director for Digital Social Research. He is a UK representative on the European e-Infrastructure Reflection Group, one of the UK PIs for the Square Kilometre Array telescope, a partner in the UK Software Sustainability Institute and from 2011-2013 was a Research Fellow at the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign.
He is a Fellow of the British Computer Society, a Member of the Institute of Mathematics and its Applications, a Supernumerary Fellow of Wolfson College and a member of the Wolfson College Digital Research Cluster.
Eun Seo Jo:
Title: Digital Archives at the Intersection of New Archival History and AI
Abstract: The digitization of historical archives in troves has ushered in a new era of data science in history, changing the scholarship and preservation of the past. But more pressingly, digitized archives have also revolutionized developments in commercial technology by feeding the training models that we interact with every day. In this talk, I explore the consequences of, and paths imagined by, digital historical materials, both in historical research and in tech deployment, and the new interconnections forged among archives, data science, and AI. In doing so, I introduce a perspective of History, called New Archival History, a paradigm of historical thinking situating History in the broader context of data science. I also discuss the urgency of wide collaboration among historians, archivists, librarians, and technologists for a more informed, effective, and harm-free production of knowledge and technology.
Bio: Eun Seo Jo is a PhD student in History (international) supervised by Professor Zephyr Frank in History and Professor Gavin Wright in Economics. Her dissertation project is a computational analysis of the language of American diplomacy using the Foreign Relations of the United States (FRUS) series, a select collection of diplomatic correspondence in the State Department from the mid-nineteenth century. She has also worked on U.S.-East Asian relations during the Cold War. As a Stanford Data Science Scholar (2018-2020), Jo is interested in applications of machine learning to historical data and the ethical concerns of using socio-cultural data for AI research and systems. Jo’s research languages are Korean, Japanese, French, and Portuguese. She has a bachelor’s in Economics and History from Brown University and a master’s in Computer Science (Artificial Intelligence) from Stanford.
Katherine McDonough:
Title: Maps as [Open] [Humanities] Data: From Access to Analysis
Abstract: Creating data from maps means transforming visual and text content from images of scanned map sheets so that it can be processed computationally. Automatic, rather than manual, ‘datafication’ makes it possible to work with large numbers of maps. In this talk I will share experiments using MapReader, the pipeline created by Living with Machines (at The Alan Turing Institute in London) for asking questions of thousands of maps. Along the way, I reflect on what it means for historical maps to be open data, why creating a humanistic approach to computer vision matters, and how new interdisciplinary collaborations make it all possible.
Bio: Dr. Katherine McDonough is a historian of eighteenth-century France working at the intersection of political culture and the history of science and technology. She completed her PhD in History at Stanford in 2013. She has taught at Bates College and was a postdoctoral researcher in digital humanities at Western Sydney University (Australia). Before joining the Turing Institute, Katie was the Academic Technology Specialist in the Department of History/Center for Interdisciplinary Digital Research at Stanford University.
Her first book manuscript, ‘Public Works Laboratory: Building a Province in Eighteenth-Century France’, is a spatial history of the corvée, the forced labor regime used from the 1730s until the Revolution on highway construction sites.
At the Turing, Katie works on the Living with Machines project. Her research will focus on 1) developing methods for geographic information retrieval from text and visual sources such as census records and Ordnance Survey maps and 2) examining how the expansion of transportation infrastructure changed 19th century communities.
Mutale Nkonde:
Title: How Storytelling can Combat Online Disinformation
Abstract: This talk examines the role narrative persuasion plays in advancing disinformation narratives. Our social media feeds are governed by recommendation algorithms that use AI software programs to decide which information we see. These algorithms are optimized to spread messages that include anger and hate because such messages result in high engagement rates. This talk looks to the role that storytelling for social good can play in combating algorithmically mandated communication networks. Nkonde’s remarks center on a campaign targeting Oscar-winning director Barry Jenkins around his movie Harriet, asking why writers and directors are not being engaged in our quest to find the truth on our timelines.
Bio: Mutale Nkonde is the leader and founder of AI for the People, a communications firm whose mission is to use art and culture to empower general audiences to combat racial bias in technological design.
Prior to starting AI for the People, Nkonde worked in AI governance. During that time, she was part of the team that introduced the Algorithmic Accountability Act, the DEEP FAKES Accountability Act, and the No Biometric Barriers to Housing Act (reintroduced in 2021) to the US House of Representatives in 2019.
Nkonde started her career as a news producer at the BBC in London and is a much sought-after commentator on race. In 2019 she published a report on Advancing Racial Literacy in Tech, and her work has been featured in Wired, the Washington Post, and the New York Times. In 2021 Nkonde was part of a news report, on facial recognition and shareholder activism, that was nominated for a New York News Emmy.
Nkonde is currently studying how to identify disinformation at Columbia University. She is a fellow at Stanford University’s Digital Civil Society Lab, and formerly held fellowships at the Berkman Klein Center for Internet & Society at Harvard and the Institute for Advanced Study at Notre Dame.
Caroline Sinders:
Title: In Defense of Useful Art: How art allows for confrontation, exploration, and systematic problem solving
Abstract: Can artwork be useful, can it be productive, and can it be a work of activism? Sinders’ artistic output can take the shape of a white paper, a civil society action, a design that solves a problem, a social justice workshop, an article, or an artwork artifact. However, she considers all of these outputs to be a form of artistic practice and research practice. For the past few years, Sinders has been looking at the impacts of artificial intelligence in society. Some of this work has taken the shape of lectures and workshops on data, surveillance, and AI, numerous articles on the harms of AI, the Feminist Data Set arts research project, and a new project recognizing human labor behind artificial intelligence systems. Her current project, named TRK or Technically Responsible Knowledge, is an open source project that examines wage inequality and creates open source alternatives to data labeling and training in AI. TRK is an alternative, open source tool for dataset training and labeling, a time-consuming but integral aspect of machine learning that must be completed in part by a human. The tool offers a wage calculator that helps visualize a livable wage for those who will then be responsible for completing the tasks. TRK is a part of the Feminist Data Set Project, using intersectional feminism as a framework to investigate each part of the machine-learning pipeline for bias, inequity, and harm.
Bio: Caroline Sinders is a critical designer, researcher, and artist. For the past few years, she has been examining the intersections of artificial intelligence, intersectional justice, systems design, harm, and politics in digital conversational spaces and technology platforms. She has worked with the United Nations, Amnesty International, IBM Watson, the Wikimedia Foundation, and others. Sinders has held fellowships with the Harvard Kennedy School, Google’s PAIR (People and Artificial Intelligence Research group), Ars Electronica’s AI Lab, the Weizenbaum Institute, the Mozilla Foundation, Pioneer Works, Eyebeam, Ars Electronica, the Yerba Buena Center for the Arts, the Sci Art Resonances program with the European Commission, and the International Center of Photography. Her work has been featured in the Tate Exchange in Tate Modern, the Contemporary Art Center of New Orleans, Telematic Media Arts, Victoria and Albert Museum, MoMA PS1, LABoral, Wired, Slate, Hyperallergic, Clot Magazine, Quartz, the Channels Festival, and others. Sinders holds a Masters from New York University’s Interactive Telecommunications Program.
Jeff Steward:
Title: Elephants on Parade or: A Cavalcade of Discoveries from Five CV Systems
Abstract: In 2014 the Harvard Art Museums (HAM) started exploring third-party AI systems as part of an effort to create new paths into the museum’s collection of 250,000 art objects. That exploration led HAM to integrate not just one but five computer vision services into the museum’s data processing pipeline. To date the services have generated over 33 million descriptions, tags, and annotations, all of which are available for exploration and research via the Harvard Art Museums’ APIs. In this talk I will share what we’ve discovered while pitting five black-box computer vision services against each other.
Bio: Jeff Steward is Director of Digital Infrastructure and Emerging Technology at the Harvard Art Museums. He guides the museums’ use of a wide range of digital technologies and oversees the collections database, API, and photography studio. For the opening of the new Harvard Art Museums in November 2014, he helped launch the Lightbox Gallery, a public research and development space. Steward has worked at museums with museum data since 1999. His areas of research include visualization of cultural datasets; open access to metadata and multimedia material; and data interoperability and sustainability.
Glen Worthey:
Title: Does Information Want to Be Free?
Abstract: Whether or not we actually believe it, and regardless of both its intended and its accepted interpretations, Stewart Brand’s famous 1984 quip that “information wants to be free” suggests an economic approach to one of the crucial questions of AI research: in our new information economy, who should provision and control the raw materials, the means of production, the product, and the profits of artificial intelligence? We who gather here as the AEOLIAN Network — a network focused specifically on the applications of AI in the not-for-profit academic and cultural heritage sectors — inevitably find ourselves both dependent upon and implicated in the activities, data, code, practices, and economies of the commercial sector. Shoshana Zuboff has characterized this sector, both helpfully and provocatively, as “surveillance capitalism,” but there are many other useful (and less negatively charged) characterizations of these inevitable, dangerous, desirable, comfortable cultural-commercial interactions as well. How should we act, and how should we be as actors, in this complex web of relationships? And what are some of the consequences of our so acting and so being?
Bio: Glen Worthey is the Associate Director for Research Support Services in the HathiTrust Research Center, based in the University of Illinois at Urbana-Champaign School of Information Sciences. The HathiTrust Research Center enables and supports computational access to the HathiTrust digital library, and Glen oversees research and tool development in that area. He was the Digital Humanities Librarian in the Stanford University Libraries from 1997 through 2019, and was the founding head of the Libraries’ Center for Interdisciplinary Digital Research (CIDR).
Long active in the international DH community, he hosted the international “Digital Humanities 2011” conference at Stanford, and was co-chair of the Program Committee for “Digital Humanities 2018” in Mexico City. Glen has also served on the executive boards of the Association for Computers in the Humanities (ACH), the Text Encoding Initiative (TEI), and the Alliance of Digital Humanities Organizations (ADHO), of which he is Chair of the Executive Board, and a co-convener of its “DH in Libraries” Special Interest Group.
Glen’s graduate work focused on Russian children’s literature at the University of California, Berkeley, and he retains an active interest in this and related topics: multilingual DH, Russian literature and culture, children’s literature, and poetry translation.
Dr. Samantha Shorey:
Title: Working Futures for AI in Organizations
Abstract: The promise (or the threat) that artificial intelligence will eliminate human labor is a recurring theme in narratives about innovation. Yet, every automated technology begets new types of work that are required for its success. Situated at the meeting of human and machine, these are jobs that the founding editor of Wired optimistically calls the “jobs that machines dream up.” Here, human collaboration and supervision increases, not decreases, in value. This supervision can come at a grueling cost. Dystopian futures are already present in seemingly automated places: in the production facilities where human hands still assemble the iPhone and on the social media platforms where human eyes still moderate online content. When we look more closely at automated processes, we begin to see both the glimmering and the dark possibilities of technology work.
How cultural organizations will be transformed by AI isn’t predetermined by the intentions of designers or the anticipations of critics. Rather, technology will become meaningful through use. Implementation is replete with moments of intervention, as technologies are reinvented and resisted. The practices and perspectives of those who use and work alongside AI are an important but often unseen aspect of this technology – these insights are essential to our imaginings of its future.
Bio: Dr. Samantha Shorey (Ph.D., University of Washington) is a design researcher who studies how narratives about innovation create opportunities or build barriers for people to become innovators. Her work engages overlooked stories of innovation to recognize the contributions of women to technology design—both presently and in the past. Dr. Shorey is currently leading an NSF-funded project examining how Artificial Intelligence (AI) technologies are being adopted and adapted by essential workers during the Covid-19 pandemic. She uses methods of ethnography and critical making to learn about various technology design processes, ranging from prototyping with 3D printing to engineering with e-textiles. Dr. Shorey’s teaching is dedicated to helping students understand how workplace cultures impact the development and diffusion of new technology.
Dr. Shorey’s work has been published in communication journals such as New Media & Society, as well as in proceedings of the Association for Computing Machinery (ACM) conferences on Human-Computer Interaction (CHI) and Designing Interactive Systems. Before coming to UT, she was a fellow at the Smithsonian’s Lemelson Center for the Study of Invention and Innovation, where she investigated the women who handmade computer hardware for the Apollo moon missions. She was a research associate in the Tactile and Tactical Design Lab in the Department of Human Centered Design and Engineering at the University of Washington. She has also worked with collaborative research teams at the University of Oxford and MIT, and as a pre-doctoral intern at Airbnb.
Dr. Kanta Dihal:
Title: Global AI Narratives and Decolonizing AI: Collaborative, Cross-Sectoral Research for Social Justice
Abstract: Millennia-old dreams of intelligent machines have shaped the hopes, fears, and expectations for contemporary technology. Yet this influence has not been singularly positive. From the twentieth century onwards, both those who have imagined fictional AI and those who have attempted to build it have been part of a narrow elite of mostly white male Americans.
Yet, although much AI technology is developed in Silicon Valley, the West is not the only place to ever have imagined the existence of intelligent machines. Different religious, linguistic, philosophical, literary and cinematic traditions have led to different conceptions of AI. Many of these worldviews are currently not given the attention they deserve, both within cultures and between them. It is with this in mind that in 2018 I co-founded the Global AI Narratives and Decolonizing AI research projects.
In this talk, I will show how these projects make use of partnerships between academia, cultural institutions, industry, and activist organizations in order to disrupt the status quo of Western AI narratives. I will share best practices and suggestions for future work, as well as examples of the kind of collaborative research outputs this project has made possible.
Bio: Dr Kanta Dihal is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. She leads two research projects, Global AI Narratives and Decolonizing AI, in which she explores intercultural public understanding of artificial intelligence as constructed by fictional and nonfictional narratives. Kanta’s work intersects the fields of science communication, literature and science, and science fiction. She has a PhD in science communication from the University of Oxford: in her thesis, ‘The Stories of Quantum Physics,’ she investigated the communication of conflicting interpretations of quantum physics to adults and children. She is co-editor of the new book AI Narratives: A History of Imaginative Thinking About Intelligent Machines (Oxford University Press, 2020) and has co-authored a series of papers on AI narratives with Dr Stephen Cave, including ‘The Whiteness of AI’ (Philosophy and Technology, 2020). She is currently writing the book Stories in Superposition.
Kanta has advised the World Economic Forum, the UK House of Lords, and the United Nations on portrayals and perceptions of AI. She has been an invited speaker on national and international TV and radio and at events including CogX (2018 and 2019), TEDx, and New Scientist Live.
Dr. Erinma Ochu:
Title: Stewarding the AI Commons
Abstract: Industry 4.0 (hardware, software, data, AI) has the potential to shape daily lives and cultures, rapidly changing the way we learn, socialise and work. Whilst new forms of participation, value creation, and governmentality are clearly emerging in health and social care, what role might cultural AI collaborations play in destabilising centralised power and reimagining knowledge generation and cultural learning? Drawing on existing curated AI exhibits and AI arts practice, which will inform the new project Patterns in Practice, this talk asks what shared values, stewardship and pluriversal worlds can emerge, through collaboration, beyond narrow definitions of AI.
Bio: Erinma is a biotechnologist working at the intersection of the arts, emerging technologies and future media in the iSchool at Manchester Metropolitan University. Erinma is Senior Lecturer in Digital Media and Communications and on research leave from Sept 2021. They are founder of #OpenLight, a climate and culture platform for collaborative inquiry emerging from knowledge exchange with culturally diverse artists and cultural entrepreneurs, experimenting with emerging technologies. #OpenLight is funded by Wellcome.
This transdisciplinary research is concerned with inventing immersive and interactive life forms from which to develop culturally nuanced informational literacies to navigate a warming world. The ambition is to contribute to considerations of collective consciousness that disrupt notions of intelligence and life informed by scientific racism and the legacies of imperial biotechnology tied to industry. This work seeks to decolonise neuroscience and biology en route, drawing on queer and black feminist epistemologies.
Erinma gained a PhD in Applied Neuroscience from The University of Manchester, then trained and worked in story development, film production and distribution as an EAVE graduate at B3 Media, setting up its shorts-to-feature programme, which continues today. They have led, collaborated on, and acted as critical friend on a range of culture change research initiatives, including the Beacons for Public Engagement, Creative People and Places and, most recently, the Community for Engaging Environments, which is now led by cultural geographer Professor Hilary Geoghegan.
Erinma is Co-I on Patterns In Practice, led by critical data studies researcher Dr Jo Bates, which examines the values, beliefs and cultures where machine learning is employed in cheminformatics, higher education and arts practice. Erinma is co-founder of Squirrel Nation and visiting racial justice fellow at The Ada Lovelace Institute, critiquing the ethics of AI informed by Sylvia Wynter’s ‘The Ceremony Must Be Found: After Humanism’. Squirrel Nation were Manchester International Fellows in 2017/18 and in residence with Iniva/Stuart Hall Library, and were invited jury members presiding over the 2021 Neuhaus Institute Open Fellowships. Erinma served as interim chair on BBSRC’s Science in Society Panel, as co-chair of UKRI’s Citizen Science programme, and is a current steering group member of MMU’s Network for Research into Race and Racism, led by Professor Farida Vis.