1. Tae Wan Kim & Bryan Routledge, "Algorithmic transparency, a right to explanation, and placing trust" (working paper; extended abstract for IEEE PAC 2018). (Slides) In 2016, the European Parliament adopted the General Data Protection Regulation (GDPR) to regulate automated algorithmic decision systems. Researchers debate whether the GDPR, in addition to a “right to be forgotten,” grants data subjects another novel kind of protection: a so-called “right to explanation.” If the GDPR grants such a right, companies (e.g., Facebook) that process the personal data of EU residents have a duty to provide a meaningful explanation to affected parties (e.g., users, customers, or employees) about how their automated algorithmic systems reach decisions. The debate assumes that there ought to be such a right, but the justification for it remains underexplored. My coauthor and I search for the ethical foundations of the right. We contend that the ethical foundation of informed consent (autonomy and intelligent trust) justifies a right to explanation.


2. "Redefining autonomous agents" (collaborating with John Hooker). Researchers, companies, and governmental bodies are racing to develop more autonomous machines, such as self-driving vehicles, robot caregivers, and autonomous weapons. But a majority of Americans are worried about emerging autonomous technologies (Smith and Anderson 2017). The concern may seem groundless, given that using autonomous vehicles, for instance, will reduce the total number of accidents. Yet the public reaction is not entirely unreasonable. We argue that the dominant notion of an autonomous agent (e.g., Franklin and Graesser 1996) is itself the reason for the concern. According to that definition, autonomous machines are agents that can make decisions independently of exogenous forces (e.g., human intervention). It follows, as a logical matter, that autonomous machines can make decisions inconsistent with human interests. The dominant model, once fully realized, would be dangerous because, we explain, it is indifferent to rationality. We offer a rationality-responsive notion of autonomous agents, by which single-agent autonomous systems are simultaneously understood as multiagent systems.

3. "Autonomy, reasons-responsiveness, and artificial intelligence: A moral case for explainable AIs." When an AI gives a response to a medical doctor (“Cancer”), a lawyer (“Guilty”), or an investment banker (“Buy”), the machine gives the professional the most likely answer, but it does not provide a rationale. There is a fundamental mismatch between the ways some kinds of machine learning algorithms (e.g., neural networks) and humans think.[1] The mismatch challenges an aspect of autonomy, namely “reasons-responsiveness”: one’s reasons-responsiveness is respected only if she is in a condition in which she is capable of acting for reasons that she endorses.[2] Given that autonomy is a central aspect of humanity, the mismatch challenges humanity itself. I aim to develop this argument to support the idea of explainable/interpretable AIs.



4. "What if Jack Ma is a Confucian CEO?" It is often believed that the business practice of hereditary CEO succession in East Asian firms is in accordance with Confucianism. This article draws upon Confucian classical texts to critically explore this popular belief. The analysis reveals why Confucianism does not justify the practice and explains that the Confucian account of governance and leadership change is fundamentally based upon worthiness rather than pedigree. The analysis urges management researchers, policymakers, and public media to overcome the ideology of “vulgar Confucianism” by carefully examining whether what they call “Confucian” practices are justifiably Confucian, especially when the term “Confucian” is used to explain and rationalize Chinese or broadly Asian economic models and business practices.

6. "G.A.A.P and Fairly Presented: Toward Accounting Constitutionalism" (slides) (with Pierre Jinghong Liang)

7. "Fair Liver Transplantation" (with Sridhar Tayur and David Danks)

8. "Algorithmic authority and organizational legitimacy" (with Kiwon Han).


[1] Jenna Burrell, “How the machine ‘thinks’: Understanding opacity in machine learning algorithms,” Big Data & Society (2016, January–June), pp. 1–12.

[2] Elizabeth Anderson, “Practical reason and incommensurable goods,” in Incommensurability, Incomparability, and Practical Reason, Ruth Chang (ed.) (Cambridge, MA: Harvard University Press, 1997).