On autonomous machines and machine ethics

Major definitions of autonomous agents in engineering characterize them as agents that can set their own goals. Thus, in principle, fully autonomous machines can act in ways inconsistent with human interests. A promising solution is to teach codes of ethics to autonomous machines, but this does not fully resolve the problem: in theory, fully autonomous agents can choose whether to follow those codes. This possibility fuels public anxiety about the singularity, a point at which super artificial intelligence destroys human civilization.

The fundamental problem is the prevailing definition of autonomous agents in robotics itself. A very different definition of autonomy is possible and has been studied in moral philosophy at least since Kant. For Kant, autonomous actions are actions based on generalizable action plans, so autonomy is the ability to act in a generalizable manner; roughly, an action plan is generalizable if it would still achieve its purpose in a world where every agent with the same reasons adopted it. If we use this notion of autonomy, an autonomous super artificial intelligence will be super ethical, because acting autonomously just is acting on generalizable plans. I am currently working on this thought with John Hooker; a toy sketch of the generalizability test follows below. To be continued.
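To make the idea concrete, here is a minimal Python sketch of the generalizability test, under a drastically simplified model. The `ActionPlan` class and its `works_when_universal` flag are stand-ins I am introducing purely for illustration; in practice, deciding whether a plan survives universalization is precisely the hard philosophical question, which the boolean flag assumes away. This is a picture of the idea's shape, not our formal framework.

```python
# Toy sketch of a Kantian generalizability test. Everything here is a
# hypothetical illustration: the ActionPlan model and its
# works_when_universal flag simply stipulate the answer to the hard
# question of whether a plan survives universalization.

from dataclasses import dataclass


@dataclass(frozen=True)
class ActionPlan:
    name: str
    # True iff the plan would still achieve its purpose in a world
    # where every agent with the same reasons adopted it.
    works_when_universal: bool


def is_generalizable(plan: ActionPlan) -> bool:
    """A plan is generalizable iff the agent could consistently
    will that all similarly situated agents act on it."""
    return plan.works_when_universal


def act_autonomously(plans: list[ActionPlan]) -> list[ActionPlan]:
    """On the Kantian definition, an autonomous agent is one that
    restricts itself to generalizable plans."""
    return [p for p in plans if is_generalizable(p)]


# Lying for advantage fails the test: if everyone lied whenever it
# helped them, no one would be believed, and the lie could not work.
plans = [
    ActionPlan("lie for advantage", works_when_universal=False),
    ActionPlan("tell the truth", works_when_universal=True),
]
print([p.name for p in act_autonomously(plans)])  # ['tell the truth']
```

On this picture, an ethical constraint is not an external code the agent may set aside; it falls out of the definition of autonomous action itself.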