Symposium on a Framework for Responsible Research and Innovation in Artificial Intelligence, organized by the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB).
Despite the plethora of ethical issues related to or arising from artificial intelligence, there is little guidance on what it would mean to undertake research in the area in a responsible way. Ethical issues currently under discussion include whether, and under what conditions, artificial agents are capable of being moral subjects; which rights or obligations they have or should have; and how moral codes and ethical theories will have to change in the light of potentially autonomous artificial intelligence, to name but a few. Absorbed by these highly interesting and very complex questions, researchers interested in ethics and AI have tended to pay much less attention to the types of responsibility issues that research on AI may raise and how they can be addressed.
Important questions that need to be answered include: how ethical issues can be identified in the first place; who is responsible for addressing them; to whom they are responsible; what the consequences of such responsibility are; and which roles the different stakeholders play in the ensuing network of responsibility. In addition, it is unclear to what degree recent attempts to promote responsible research and innovation in other disciplines (Kjolberg & Strand, 2011; Owen & Goldberg, 2010) are pertinent to ICT and AI. Similarly, it remains unclear whether generic attempts to provide guidance on ethics in ICT (Harris, Jennings, Pullinger, Rogerson, & Duquenoy, 2011; Wright, 2011) are appropriate for the types of problems to be discussed in AI. The principles of robotics published by the EPSRC include guidance for designers, builders and users of robots, but it is not yet clear how these principles have been applied in practice or in what way they can facilitate a responsible approach to AI.
The proposed symposium will draw on work undertaken in two important research projects. The first is the European FP7 research project ETICA (Ethical Issues of Emerging ICT Applications, http://www.etica-project.eu), which has identified particular ethical issues that can be expected to arise from AI in the medium-term future. The second is the UK EPSRC project on a “Framework of Responsible Research and Innovation in ICT”, which aims to provide a community-owned account of responsibility within the ICT area. AI presents core research questions for both of these projects, which motivates the proposal for this symposium specifically on responsible innovation in AI.
A key aim of this symposium is to foster discourse between philosophers and social scientists who are interested in computer ethics and AI researchers and practitioners who are fluent in the processes and practices of AI. Providing governance arrangements that give suitable attention to the ethics of AI will require an equal and overarching understanding of both of these perspectives. The symposium will therefore aim specifically to attract case studies and comparable accounts of ethical issues in AI. It will solicit contributions on the identification of such issues, their resolution, their context and the ways in which such experiences are of broader interest. It is envisaged that the symposium will be highly interactive, allowing space for the discovery of as yet underdeveloped areas in need of research aimed at understanding ethical issues in AI.
A further outcome of the symposium will be the development of case studies to be shared among the community. The EPSRC project has set itself the task of developing an Observatory for Responsible Research and Innovation in ICT which will be a community-owned resource fostering increased understanding and discussion of pertinent issues. The project has a significant budget to be allocated to AI researchers interested in carrying out case studies that focus on ethical issues encountered in the design and development of such systems. The proposed symposium will contribute to identifying interested AI researchers who may want to participate in carrying out case studies.
Submissions will be invited on topics such as:
- Responsibilities of the individual researcher / developer in AI
- Collective responsibility in AI; how is it defined and enforced?
- Unforeseen consequences and side effects in AI: how can they be addressed?
- Limitations of responsibility in AI
- Responsibility, liability and accountability: is the legal framework for AI sufficient?
- Responsibility and relativism: how can responsibilities be defined in a context of cultural and disciplinary pluralism?
Submissions / Timeline
Extended abstracts (up to 1500 words) are invited. Submissions should be sent by email to bstahl(at)dmu.ac.uk.
- submissions should be in by 1 February 2012
- acceptance/rejection decisions will be made by 1 March 2012
- final versions of abstracts, for inclusion in the proceedings, delivered by authors to the symposium chairs by 30 March 2012.
Read more on the conference website: http://www.aisb.org.uk/convention/aisb12/