1. Tae Wan Kim, Dokyun Lee, and David Danks, “Explainable AI, right to explanation, the goodness criteria, counterfactuals, and the grasp-ability test” (draft) (slides).
2. Tae Wan Kim, Thomas Donaldson, and John Hooker, “Mimetic vs. Anchored Value Alignment in Artificial Intelligence (AI)” (working paper).
3. Tae Wan Kim and Bryan Routledge, “Algorithmic transparency, a right to explanation, and placing trust” (working paper; extended abstract for IEEE) (slides). In 2016, the European Parliament adopted the General Data Protection Regulation (GDPR) to regulate automated algorithmic decision systems. Researchers debate whether GDPR, in addition to “a right to be forgotten,” grants data subjects another novel kind of protection, a so-called “right to explanation.” If GDPR grants such a right, companies (e.g., Facebook) that process the personal data of EU residents have a duty to provide a meaningful explanation to involved parties (e.g., users, customers, or employees) about how their automated algorithmic systems reach decisions. The debate itself assumes that there ought to be such a right, yet the justification for the right remains underexplored. My coauthor and I search for its ethical foundations. We contend that the ethical foundation of informed consent—autonomy and intelligent trust—justifies a right to explanation.
4. Tae Wan Kim and Alan Strudler, “Team production, opportunism, and governance: A Confucian approach” (completed working paper; under review—please email me if you would like to read it). Various researchers propose authority as a solution to the team production problem (shirking and wasteful rent-seeking). Existing theories commonly rely on a market-based understanding of authority, but that model is limited in capturing the nature of authority in team contexts. We develop a Confucian, role-obligation-based model of authority. In short, we argue that a team production theory of the firm should be more deeply team-based.