Abstract
The ethical aspects of data science and artificial intelligence have become a major issue. Organisations that deploy data scientists and operational research (OR) practitioners must address the ethical implications of their use of data and algorithms. We review the OR and data science literature on ethics and find that this work is pitched at the level of guiding principles and frameworks and fails to provide a practical, grounded approach that practitioners can use as part of the analytics development process. Further, with the advent of the General Data Protection Regulation (GDPR), an ethical dimension is likely to become an increasingly important aspect of analytics development. Drawing on the business analytics methodology (BAM) developed by Hindle and Vidgen (2018), we tackle this challenge through action research with a pseudonymous online travel company, EuroTravel. The method that emerges uses an opportunity canvas and a business ethics canvas to explore value creation and ethical aspects jointly. The business ethics canvas draws on the Markkula Center’s five ethical principles (utility, rights, justice, common good, and virtue), to which explicit consideration of stakeholders is added. A contribution of the paper is to show how an ethical dimension can be embedded in the everyday exploration of analytics development opportunities, as distinct from being applied as a stand-alone ethical decision-making tool or as an overlay of a general set of guiding principles. We also propose that value and ethics should not be viewed as separate entities; rather, they should be seen as inseparable and intertwined.
Introduction
Business analytics is playing an ever-greater role in our daily lives, affecting job applications, medical treatment, parole eligibility, and loans and financial services. There are undoubted benefits to algorithmic decision-making in general, and artificial intelligence (AI) in particular. For example, AI is being used to detect the early stages of colorectal cancer, achieving 86% accuracy (Mukherjee, 2017). Such is the interest in AI for healthcare that the UK Government is pledging millions to AI applications for the early diagnosis of cancer and other chronic diseases, using patient data and lifestyle information to highlight patients at risk (Perkins, 2018). However, algorithmic decision-making is not without its dark side. Mann and O’Neill (2016) question the use of algorithms in hiring decisions, while d’Alessandro, O’Neil, and LaGatta (2017) raise concerns about predictive policing. The Cambridge Analytica case has thrust data analytics squarely into the public domain. It is alleged that Cambridge Analytica collected data from more than 50 million Facebook users without their permission and used that data to build a system to target US voters with personalised political advertisements, with the aim of influencing the US election outcome (Greenfield, 2018). Unsurprisingly, algorithmic decision-making is attracting the interest of researchers as well as practitioners and regulators (e.g., Kitchin, 2017; Newell & Mirabelli, 2015).