  • Datafied Ontologies
  • April 28, 2022 | 5:30 PM EST
  • A facilitated discussion of recent articles on the onto-epistemological dimensions of human data assemblages and the “ghost workers” behind the curtain.

    Co-facilitators: Gavin J.D. Smith (Sociology, ANU) and Lori Moon (PhD, NLP Researcher, Elemental Cognition, NYC)

    Primary Readings: Lupton, “How Do Data Come to Matter? Living and Becoming with Personal Data,” and the “Introduction,” Ch. 1, and Ch. 3 from Gray and Suri’s Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass

    Register here for the interview at 5:30 PM EST.

    Register here for the workshop discussion to follow.

    Critical AI’s main focal point for Fall 2021 is our Ethics of Data Curation workshop (to be held over Zoom), the product of an international collaboration between Rutgers and the Australian National University sponsored by the National Endowment for the Humanities and Rutgers Global. The lead organizers for the series are Katherine Bode and Baden Pailthorpe at ANU and Lauren M.E. Goodlad at Rutgers. All of the workshops and associated talks are free and open to the public, but space is limited, so please register well in advance (see schedule and registration links below).

    “Artificial Intelligence” (AI) today centers on the technological affordances of data-centric machine learning. While talk of making AI ethical, democratic, human-centered, and inclusive abounds, it suffers from a lack of interdisciplinary collaboration and public understanding.

    At the heart of AI’s social impact is the determinative power of data: the leading technologies derive their “intelligence” from mining huge troves of data (often the product of unconsented surveillance) through opaque and resource-intensive computation.

    The Big Tech tendency to favor ever-larger models that use data “scraped” from the internet creates complications of many kinds, including the under-representation of women, people of color, and people in the developing world; the mistaken belief that stochastic text-generating software like GPT-3 truly “understands” natural language; the misguided haste to uphold this technology as the “foundation” on which the future of all AI will be built; and the environmental and social impact of privileging ever-larger models that emit tons of carbon and cost millions of dollars to train.

    Our Ethics of Data Curation workshop invites you to join a network of cross-disciplinary scholars, including leading thinkers on data curation and data-centric machine learning technologies. Please join the discussion, or, if the time doesn’t work for you, watch the recordings of our workshop meetings and join us on Critical AI’s blog for asynchronous conversations.