FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
REMOTE ON THURSDAY, JULY 19TH


09:00-10:30 Session 130F: Opening and First Presentation Round
09:00
Welcome and Introduction
09:15
Understanding Ethical Reasoning
09:45
Human morality, robots’ moral competence, and the deepest kind of trust

ABSTRACT. As robots increasingly engage in society as educators, rescuers, and caretakers, they face the uniquely human domain of morality. Could robots ever master this domain and acquire moral competence? In the first part of this talk I offer theoretical arguments and empirical evidence to propose that moral competence consists primarily of a massive web of norms, decisions in light of these norms, judgments when such norms are violated, and a vocabulary for moral communication. Affect and emotion—so central in many other models of morality—may be important, but not in the way people commonly assume. In the second part I examine how formal system verification fits into this model of moral competence and offer an optimistic view: If verification provides justification and is properly communicated, it can provide convincing evidence of a machine’s moral competence. Finally, I propose that trust is multi-dimensional and that humans can have different kinds of trust in a robot (e.g., in its reliability or capacity) but that the deepest kind is trust in an agent’s moral integrity. Robots that provably and convincingly have moral competence would deserve such deep moral trust.

10:15
Elements of a Model of Trust in Technology

ABSTRACT. Trust is discussed in the context of other factors influencing the decision to utilize a technology and the overt and covert costs, risks and side effects incurred by that decision. We outline possible steps towards the quantification of trust in artificial autonomous systems and discuss some implications regarding the design and verification of such systems.

10:30-11:00 Coffee Break
11:00-12:30 Session 132F: Second Presentation Round
11:00
How to Free Yourself from the Curse of Bad Equilibria

ABSTRACT. A standard problem in game theory is the existence of equilibria with undesirable properties. To pick a famous example, in the Prisoner's Dilemma, the unique dominant-strategy equilibrium leads to the only Pareto inefficient outcome in the game, which is strictly worse for all players than an alternative outcome. If we are interested in applying game-theoretic solution concepts to the analysis of distributed and multi-agent systems, then these undesirable outcomes correspond to undesirable system properties. So, what can we do about them? In this talk, I describe work we have initiated on this problem and discuss three approaches to it in multi-agent systems: the design of norms or social laws; the use of communication to manipulate the beliefs of players; and, in particular, the use of taxation schemes to incentivise desirable behaviours.
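The abstract's opening example can be made concrete with a minimal sketch (illustrative only, not material from the talk): standard Prisoner's Dilemma payoffs, showing that Defect is a dominant strategy for each player even though mutual defection is Pareto dominated by mutual cooperation.

```python
# Minimal Prisoner's Dilemma sketch (illustrative payoffs, not from the talk).
# Payoffs are (row player, column player); C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_action):
    """Row player's best response to a fixed opponent action."""
    return max(("C", "D"), key=lambda a: payoffs[(a, opponent_action)][0])

# Defect is a best response to either opponent action, i.e. dominant,
# so (D, D) is the unique dominant-strategy equilibrium.
assert best_response("C") == "D" and best_response("D") == "D"

# Yet (D, D) is Pareto inefficient: both players strictly prefer (C, C).
assert payoffs[("C", "C")][0] > payoffs[("D", "D")][0]
assert payoffs[("C", "C")][1] > payoffs[("D", "D")][1]
```

A taxation scheme of the kind the talk mentions would, in this picture, levy a charge on defection large enough that cooperation becomes each player's best response.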

11:30
Specification and Verification for Robots that Learn
12:00
Moral Permissibility of Actions in Smart Home Systems

ABSTRACT. With this poster, we present ongoing work on operating a smart home via a Hybrid Ethical Reasoning Agent (HERA). This work is part of the broader scientific effort, known as machine ethics, to implement ethics on computer systems. We showcase an everyday example involving a mother and a child living in the smart home. Our formal theory and implementation allow us to evaluate actions proposed by the smart home from different ethical points of view, i.e., utilitarianism, Kantian ethics, and the principle of double effect. We discuss how formal verification, in the form of model checking, can be used to check that the modeling of a problem for reasoning by HERA conforms to our intuitions about ethical action.
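To illustrate one of the ethical views the abstract lists, here is a hypothetical sketch of a utilitarian permissibility check for a proposed smart-home action. All names, utilities, and the scenario are invented for illustration and are not the HERA API or the authors' actual model.

```python
# Hypothetical sketch: utilitarian evaluation of smart-home actions.
# Names, utility numbers, and the scenario are illustrative only.

def utilitarian_permissible(action, alternatives):
    """Under a simple utilitarian criterion, an action is permissible
    iff no alternative yields strictly higher total utility summed
    over everyone affected."""
    total = lambda a: sum(a["utilities"].values())
    return all(total(action) >= total(alt) for alt in alternatives)

# Toy scenario: the smart home may mute a morning alarm (the child
# sleeps on, but the mother misses a meeting) or let it ring.
mute = {"name": "mute_alarm", "utilities": {"mother": -2, "child": 1}}
ring = {"name": "let_ring", "utilities": {"mother": 2, "child": -1}}

assert utilitarian_permissible(ring, [mute, ring])       # total 1 vs -1
assert not utilitarian_permissible(mute, [mute, ring])
```

A Kantian or double-effect evaluation would replace the utility sum with rule-based conditions on the action itself; the abstract's point is that the same modeled scenario can be judged under each view.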

12:30-14:00 Lunch Break
14:00-15:30 Session 134F: Third Presentation Round
Chair:
14:00
TBA
14:30
The computational neuroscience of human-robot relationships
15:00
Trust, Failure, and Self-Assessment During Human-Robot Interaction
15:30-16:00 Coffee Break