  • Submarine Ontologies/Ontologies of Judgement
  • March 10, 2022 | 5:30 PM EST
  • This meeting focuses on chapters from The Promise of Artificial Intelligence: Reckoning and Judgment (2019) by philosopher and information scientist Brian Cantwell Smith. Useful for thinking about “ontology” more generally, Smith’s notion of “submarine” ontologies enables us to imagine the world as machines perceive it. His distinctions between reckoning and judgment elucidate the respective affordances of machine and human “intelligence.”


    Our co-facilitators are Katherine Bode (a DH scholar from ANU), Christopher Newfield (MLA Vice President & Director of Research, ISRF), and Mark S.D. Sammons, PhD (a researcher in Natural Language Processing).

    Register here for the interview at 5:30 PM EST.

    Register here for the workshop discussion to follow.

    Critical AI’s main focal point for Fall 2021 is our Ethics of Data Curation workshop (to be held over Zoom), the product of a National Endowment for the Humanities and Rutgers Global sponsored international collaboration between Rutgers and the Australian National University. The lead organizers for the series are Katherine Bode and Baden Pailthorpe at ANU and Lauren M.E. Goodlad at Rutgers. All of the workshops and associated talks are free and open to the public but space is limited so please register well in advance (see schedule and registration links below).

    “Artificial Intelligence” (AI) today centers on the technological affordances of data-centric machine learning. While talk of making AI ethical, democratic, human-centered, and inclusive abounds, such efforts suffer from a lack of interdisciplinary collaboration and public understanding.

    At the heart of AI’s social impact is the determinative power of data: the leading technologies derive their “intelligence” from mining huge troves of data (often the product of unconsented surveillance) through opaque and resource-intensive computation.

    The Big Tech tendency to favor ever-larger models that use data “scraped” from the internet creates complications of many kinds, including the under-representation of women, people of color, and people in the developing world; the mistaken belief that stochastic text-generating software like GPT-3 truly “understands” natural language; the misguided haste to uphold this technology as the “foundation” on which the future of all AI will be built; and the environmental and social impact of privileging ever-larger models that emit tons of carbon and cost millions of dollars to train.

    Our Ethics of Data Curation workshop invites you to join a network of cross-disciplinary scholars, including leading thinkers on the questions of data curation and data-centric machine learning technologies. Please join the discussion; if the time doesn’t work for you, watch the recordings of our workshop meetings and join us on Critical AI’s blog for asynchronous conversations.