Calendar

Note: For full bibliographic details on the readings and viewings listed below, see the Readings and Viewings page.

Each session below lists its readings and viewings, followed by study questions.
I: Introduction to the Philosophy, Ethics, and Geopolitics of AI
1: Introduction to Ethical Issues in CS/AI
  • Kissinger, “How the Enlightenment Ends”
  • Bryson, “The Future of AI’s Impact on Society”
  • Jordan, “Artificial Intelligence—The Revolution Hasn’t Happened Yet”
  1. What are the key ethics issues involving AI?
  2. How should we as a society balance risks versus benefits of the pursuit of AI?  
2: Can AI Be Intelligent?
  • Kissinger, “How the Enlightenment Ends”
  • Kissinger et al., “The Metamorphosis”
  • Aristotle, Nicomachean Ethics, Book I.1–5
  • Turing, “Computing Machinery and Intelligence”
  1. What are Kissinger’s (and co-authors’) key arguments and the evidence for them?
  2. What criteria does Turing lay out about what intelligence means? Do you agree or not? Why or why not?
  3. What is Aristotle’s argument about happiness, and how would you judge the objectives of AI on that basis?
II: “Welcome to the Second Machine Age”
3: Humans Need Not Apply
  • Grey, “Humans Need Not Apply”
  • Freedman, “Basic Income: A Sellout of the American Dream” 
  • Rousseau, Discourse on Inequality, Part Two (in First and Second Discourses)
  • Schumpeter on “creative destruction” in Capitalism, Socialism and Democracy
  1. What is really different about AI in terms of the desire for utopia?
  2. How do the AI readings bolster your argument, and where do you disagree with them?
  3. What is Rousseau’s main point and the reasoning behind it?
  4. How does Schumpeter’s concept of creative destruction fit (or not) with the utopianism of AI, particularly with respect to what will happen to humans as AI flourishes?
  5. Do you think we should strive for greater independence from machines or accept greater dependence on them, and why?
4: Who Lives and Who Drives
  • Lin, “The Ethical Dilemma of Self-Driving Cars”
  • Rahwan, “What Moral Decisions Should Driverless Cars Make?”
  • Emerging Technology from the arXiv, “Why Self-Driving Cars Must Be Programmed to Kill”
  • Grush, “Fatalities Associated with Crash-Induced Fuel Leakage and Fires”
  • Bentham, Principles, Chapters 1 and 4
  1. What are the central tenets of utilitarianism?
  2. Is utilitarianism compelling?
  3. What makes it ethical?
  4. Apply it to the question of programming self-driving vehicles. How is it helpful?
  5. Are you a utilitarian? Why or why not?
  6. How would you program the autonomous vehicle algorithm and why?
5: Strategic Competition and AI
  • Conger, “Google Plans Not to Renew its Contract for Project Maven”
  • Horowitz et al., “Strategic Competition in an Era of Artificial Intelligence”
  • Goldman, “Inside China’s Plan for Global Supremacy”
  • Churchill, “Mankind is Confronted with One Supreme Task”
  1. What is the one supreme task according to Churchill?
  2. Drawing on Churchill, how do you understand the relationship between foreign policy and technological progress?
  3. Does cyber technology change the practice of foreign policy and the meaning of war? If so, how, and if not, why not?
6: Ethical and Legal Aspects of Autonomous Weapons (Special Guest Prof. Jeremy Rabkin)
  • Human Rights Watch, “Losing Humanity”
  • Future of Life Institute, “Lethal Autonomous Weapons Pledge”
  • Rabkin and Yoo, “‘Killer Robots’ Can Make War Less Awful”
  • UN News, “Autonomous weapons that kill must be banned”
  • Møller, “Secretary-General's Message”
  • European Union External Action, “EU Statement Group of Governmental Experts”
  • Law Library of Congress, “Regulation of Artificial Intelligence”  
  • Busby and Cuthbertson, “‘Killer Robots’ Ban Blocked”
  • Hague Convention (VII) 
  • U.S. Uniform Code of Military Justice, Art. 133
  • Lieber, General Orders No. 100
  • Hague Convention (IV) on the Laws and Customs of War on Land (1907), excerpts
  • U.S. Army Judge Advocate General’s School, “Law of Land Warfare”
  • Geneva Conventions of 1949, Additional Protocol I
  1. Are there good reasons to worry that weapons with recognition capacities will be more harmful than “automatic” weapons with no such capacity, such as contact land mines and sea mines? Why aren't “smart weapons” better than the “dumb weapons” that preceded them?
  2. Are there good reasons to think robots operating on the ground would be more destructive than weapons used in past wars, such as aerial bombardment or artillery shelling (with notoriously poor accuracy)? Or is there some ethical objection to loss of human control, apart from the likely scale of casualties or destruction?
  3. Are there good reasons to think “autonomous weapons” are more objectionable than autonomous passenger vehicles, given that both may cause injury if they malfunction? Are mistakes of machines more objectionable than mistakes of human operators?
  4. Historically (and still to some extent today), the law of war has claimed more trust for—because it attributed more honor or sound judgement to—officers than lower ranks. If we worry about relegating decisions to machines, should we emphasize the importance of “human control” or rather focus on the kinds of humans at the controls?
  5. Do you believe “autonomous” “killer robots” in U.S. arsenals will pose more risk to the world than atomic weapons under the personal control of Supreme Leader Kim Jong-un of North Korea or Ayatollah (and “Supreme Leader”) Ali Khamenei of Iran? If you were guiding the foreign policy of the European Union, which risk would you focus on?
III: Liberal Democracy and AI
7: AI and Free Speech
  • Tufekci, “It’s the (Democracy-Poisoning) Golden Age of Free Speech”
  • Wu, “Is the First Amendment Obsolete?” 
  • Zuckerberg, address given at Georgetown on free speech
  • Hughes, “It’s Time to Break Up Facebook” 
  • US Constitution, First Amendment
  • Hamilton, Federalist No. 84
  • Franklin, “On Freedom of Speech and the Press”
  • Jefferson, First Inaugural Address
  • Madison, letter to W.T. Barry
  1. According to the AI articles, what are the challenges to free speech and what should be done about them?
  2. What is the key message of Federalist 84 and what are the arguments behind it?
  3. What are Franklin, Jefferson, and Madison’s key points about free speech?
IV: Politics, Ethics, and Economics of Private Platforms
8: Life Without Privacy, or Who Owns Your Data?
  • Central Intelligence Agency, “Background to ‘Assessing Russian Activities’”
  • Bayrasli and McNamee, “Facing Up to Facebook”
  • Lynch, “Face Off”
  • O’Neil, Weapons of Math Destruction, Introduction and Chapter 1
  • Barbaro, “The Chinese Surveillance State”
  • Zuboff, “You Are Now Remotely Controlled”
  • Plato, The Republic, Book I: “Thrasymachus,” 336b1–354c3, and Book II: 357a1–382c5
  1. Why and how are algorithms biased?
  2. What are some of the effects of these systems in authoritarian regimes?
  3. In liberal democratic regimes, how do they transform capitalism and labor?
  4. What is Thrasymachus’ critique of justice?
  5. How does Socrates tame Thrasymachus?
  6. Can you see a connection between Thrasymachus’ teaching and the use of AI for surveillance?
V: Algorithmic Judgment and Humanity
9: Algorithmic Decision Making: Bias and Fairness
  • Hao, “This Is How Gender Bias Really Happens”
  • Cossins, “Discriminating Algorithms”
  • Crawford, “The Trouble with Bias”
  • Locke, Second Treatise of Government, on “natural rights and civil society”
  1. What is the problem with bias? What is the best way to mitigate it?
  2. What does Locke mean by “natural rights”? What is his argument for them?
  3. What would Locke say about algorithmic bias and how would he argue about what to do?
10: Lover AI
  • Devlin, “Sex Robots”
  • Garland, Ex Machina
  • Plato, Symposium, speeches of Eryximachus and Aristophanes
  • Shakespeare, Sonnet 130
  • Cranach, Adam and Eve
  1. What is love according to Eryximachus? According to Aristophanes? Which do you agree with more and why?
  2. What do you learn from Shakespeare’s sonnet?
  3. What do you learn from the Cranach painting?
  4. Can we love an AI machine and can the AI machine love us back?
11: AI and Friendship
  • Zuckerberg, “Building Global Community”
  • Turkle, Alone Together, Introduction and Chapter 3
  • Shakya and Christakis, “Association of Facebook Use with Compromised Well-Being”
  • Schaub, “Unfriending Friendship”
  • Konnikova, “The Limits of Friendship”
  • Aristotle, Nicomachean Ethics, Book VIII
  1. What are the kinds of friendship according to Aristotle?
  2. Which kind are most of your friendships?
  3. How does social media affect his analysis? What would he say about it?
  4. Do you agree with Turkle, Schaub, Zuckerberg, and Konnikova? Why or why not?
VI: Philosophical Ethics and Anthropology of AI
12: Unartificial Intelligence
  • Plato, The Republic, Book VII, “Simile of the Cave”
  • Aristotle, Nicomachean Ethics, Book IX, X.6-8
  • Leonardo, images, notebooks
  • Michelangelo, Sistine Chapel
  1. According to Plato, what is the relation of thinking and genuine education to freedom?
  2. What is great about Leonardo and Michelangelo, and can AI mimic them? Why or why not?
  3. How does AI contribute to or hinder these endeavors?
VII: Wrap-Up
13: Final Presentations and Discussion