Abstract
1 Introduction
2 Organisations underappreciate workplace risks of AI
3 AI can optimise workplaces, but also burden and harm workers
4 Gaps and challenges in WHS practices to identify and manage AI risks
5 Human dignity and autonomy in the AI using workplace
6 AI ethics frameworks
7 The AI Canvas
8 Conceptual integration of AI adoption and WHS viewpoints
9 Conclusion
Declarations
References
Abstract
Artificial Intelligence (AI) is taking centre stage in economic growth and business operations alike. Public discourse about the practical and ethical implications of AI has mainly focussed on the societal level. There is an emerging knowledge base on AI risks to human rights around data security and privacy concerns. A separate strand of work has highlighted the stresses of working in the gig economy. This prevailing focus on human rights and gig impacts has been at the expense of a closer look at how AI may be reshaping traditional workplace relations and, more specifically, workplace health and safety. To address this gap, we outline a conceptual model for developing an AI Work Health and Safety (WHS) Scorecard as a tool to assess and manage the potential risks and hazards to workers resulting from AI use in a workplace. A qualitative, practice-led research study of AI adopters was used to generate and test a novel list of potential AI risks to worker health and safety. Risks were identified after cross-referencing Australian AI Ethics Principles and Principles of Good Work Design with AI ideation, design and implementation stages captured by the AI Canvas, a framework otherwise used for assessing the commercial potential of AI to a business. The unique contribution of this research is the development of a novel matrix itemising currently known or anticipated risks to the WHS and ethical aspects at each AI adoption stage.
1 Introduction
Artificial intelligence (AI) has given rise to a new type of machine, a prediction machine, which can automate cognitive tasks traditionally associated with white-collar workers. These smart machines are lowering the cost and effort of making accurate predictions in operational processes. AI technology thus offers potential benefits such as increased productivity, streamlined processes, and integrated value chains. Advocates of AI maintain that companies that do not take advantage of these benefits risk being driven out of the marketplace by those that do. As AI begins to outperform human expertise, companies may prefer to deliver their products and services, internally as well as externally, with its help. The impact that such operational and logistical change in workplaces may have on workers, however, remains uncertain.
Sociologists of work have long argued that workplace technologies are a key instrument of social change, reshaping labour relations and introducing new systems of worker discipline and control (Barley 1988; Braverman 1974; Kellogg et al. 2020; Zuboff 1988). From this critical sociological perspective, AI technologies in the workplace represent a new, contested domain for workers, raising questions around job security, worker autonomy, and worker status for which there is as yet no clear social consensus. There is emerging research evidence on the qualitative impacts of AI on workers in specific industries, particularly insecure workers in the gig economy (Myhill et al. 2020; Convery et al. 2020). While the COVID-19 pandemic of 2020 has, at least temporarily, normalised flexible and remote work, many firms had already adopted complex, networked technologies incorporating AI before the pandemic, using them to deliver products and services across borders and time zones and to deploy skilled workers more effectively. The rise of networked technologies and "digital first" business strategies played out in a global economic context of increasingly precarious forms of employment, unequal wage growth, the transformation of manufacturing, and a growing income and skills divide in the workforce, leading to reduced economic and social mobility. These outcomes are not inevitable; they are the result of personal, economic and political decisions (Srnicek and Williams 2016; Benanav 2020). Whether the new technologies adversely affect the security and wellbeing of employees in a workplace is above all a question of whether they are permitted to do so, or whether these risks are instead contained by regulation and oversight.
9 Conclusion
A growing use of AI will change economies, the manufacturing of goods and the delivery of services. It will also affect human relations at a societal level, including how we collect, fairly use, and keep safe the data that drive AI. Whilst much of the debate about the ethical use of AI has focussed on the societal level, AI also has profound implications in and for the workplaces where it is used. In most industrial and late-industrial nations, these workplaces are currently regulated by WHS guidelines, rules and prescriptions, which rarely address the specific impact of AI on processes, job roles, and communication among employees, and between employees and managers.
In this paper, we have outlined a model for detecting risks and hazards that AI may bring to workplaces, using the set of AI Ethics Principles endorsed by the Australian Government and the AI Canvas, a technical stages model of AI implementation. The explorative process commenced with a review of the literature on AI risks and hazards as they may apply to workplaces. These potential risks and hazards were further explored in consultation with AI experts, professionals and users, a process that also helped to populate the risk assessment matrix with examples of workplace-specific AI risks and hazards. These risk examples were further developed and validated during consultations with AI adopters and WHS professionals. Each of these risks and hazards was linked to statutory WHS principles as advocated by Safe Work Australia (SWA), the Australian national WHS policy and oversight agency.
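The matrix described above can be pictured as a simple two-dimensional structure: AI Canvas adoption stages along one axis, ethics principles along the other, with each cell holding the WHS risks identified for that combination. The following sketch illustrates that structure only; it is not the authors' implementation, the stage and principle names are abbreviated, and the risk entries are hypothetical examples.

```python
# Illustrative sketch of the AI WHS risk matrix: AI Canvas stages
# cross-referenced with (abbreviated) Australian AI Ethics Principles.
# Cell entries are hypothetical example risks, not findings from the study.

AI_CANVAS_STAGES = [
    "Prediction", "Judgement", "Action", "Outcome",
    "Input", "Training", "Feedback",
]
ETHICS_PRINCIPLES = [
    "Human wellbeing", "Fairness", "Privacy",
    "Transparency", "Accountability",
]

# Matrix as nested dict: stage -> principle -> list of identified risks
risk_matrix = {
    stage: {principle: [] for principle in ETHICS_PRINCIPLES}
    for stage in AI_CANVAS_STAGES
}

def log_risk(stage: str, principle: str, risk: str) -> None:
    """Record a workplace risk at a given adoption stage under a principle."""
    risk_matrix[stage][principle].append(risk)

# Hypothetical example entries
log_risk("Prediction", "Privacy",
         "Worker surveillance data reused beyond original consent")
log_risk("Action", "Human wellbeing",
         "Algorithmic pacing increases physical and cognitive strain")

# A scorecard view: number of identified risks per adoption stage
scorecard = {
    stage: sum(len(risks) for risks in cells.values())
    for stage, cells in risk_matrix.items()
}
```

Stages with a non-zero count would flag where, in the adoption lifecycle, an organisation should concentrate its WHS risk controls.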