The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two.
(…)
We now ask the question, ‘What will happen when a machine takes the part of A in this game?’
— Alan Turing, ‘Computing Machinery and Intelligence’ (1950)
Symbolic AI
1966: ELIZA
Rule-based AI
Until the late 1980s: Expert Systems & Machine Translation
Statistical AI
Late 1990s/early 2000s: Topic Modeling & Data Mining
Neural AI
2010s: Deep Learning & Large Language Models
Generative AI
Today: Generative AI & Foundation Models
Known problems of Generative AI
Bias in training data
Lack of explainability
Lack of transparency
Lack of accountability
Lack of reproducibility
Environmental impact
Ethical issues
Legal issues
Social issues
Epistemological issues
…
What can we do about it?
We can
Understand the technology and its history
Understand the limitations and problems of AI
Critical AI Studies
Teaching AI Literacy
Critical AI Literacy
Technical literacy: Understanding how AI systems work, their capabilities and limitations
Epistemological awareness: Questioning what counts as knowledge and how AI shapes it
Ethical evaluation: Considering consent, privacy, transparency, and accountability
Social impact assessment: Examining power structures, equity, and broader implications
Practical application: Developing workflows that maintain scholarly rigor
Continuous learning: Staying informed as technology evolves rapidly
Decoding Inequality (UniBe)
ChatGPT and Beyond (UZH)
What can we do about it?
We can
Understand the technology and its history
Understand the limitations and problems of AI
Make better use of AI tools
DH in Action: Swiss Projects Using LLMs (Tools & Platforms)
Re-Experiencing History with AI (UZH)
Data visualization to access cultural archives (SUPSI & ETH Library)
Generating alt text for historical sources and objects (Stadt.Geschichte.Basel, University of Basel)
LLM benchmarking for humanities tasks (RISE, UNIBAS)
Bibliography
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜." In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT '21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.
Long, Duri, and Brian Magerko. "What Is AI Literacy? Competencies and Design Considerations." In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. CHI '20. New York, NY, USA: Association for Computing Machinery, 2020. https://doi.org/10.1145/3313831.3376727.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. First edition. New York: Crown Publishing Group, 2016.