Developments in algorithmic technologies are bringing high-impact changes to social and organizational life, and in particular to work. The resulting issues and challenges are complex, and unravelling them requires interdisciplinary approaches. To provoke new research and enquiry in this area, the KIN Center for Digital Innovation hosted an interactive parallel session at NWO’s Synergy conference on February 7th at ‘t Spant in Bussum.
A highly diverse group of researchers took part, from disciplines such as linguistics, anthropology, psychology, law, and philosophy, alongside experts from industry and policy advisors. In interactive roundtables they joined forces to formulate the most pressing research issues from their perspectives and to draft ideas for collaborative projects that harness diverse expertise and research approaches to tackle the challenges related to the impact of AI on work. It was a fruitful exercise, and we have synthesized the discussion into three overarching research questions here:
RQ1: What are the implications of AI for how expertise is developed and applied?
E.g. How will junior professionals, such as lawyers, learn their craft when the simple tasks of their profession are performed by AI?
Issues that need interdisciplinary attention:
– What are the current limits of the much-hyped technologies, and can we realistically map what they are capable of and where we still require human judgement?
– How does AI, when embedded into certain professions, work practices, and organizations, change our trust in those professions, practices, and organizations (e.g., do patients trust doctors more when they are supported by algorithms)?
– What types of activities and responsibilities do we leave to the machine, and what remains under the jurisdiction of humans?
RQ2: Can AI make organizational processes more efficient and effective, and if so, how?
E.g. Can AI be used to evaluate and synthesise evidence admitted to court, to process hearings more quickly? Would it be possible to use AI to rank quality and eliminate the bottom percentile of grant applications or CVs, freeing time to review the applications with the best chance of success?
Issues that require exploration:
– How can we define and measure ‘quality’?
– How do we identify unusual, creative, outlier cases, and ensure they are treated differently from the ‘mass’?
– How do we make sure we don’t lose the human dimension in our decision-making processes?
– What is being done to ensure that algorithms and the processes around them are auditable?
– Who do we hold accountable for the decisions that are made based on ‘advice’ from AI?
– To what extent can human “wisdom” (and/or intuition and emotions) be built into algorithmic technologies, and if it cannot, how do we enable humans and machines to collaborate?
– If technologies change our understanding of what is right and real, how would a proper study of their effects even be possible?
RQ3: How will our work and workplaces adapt to the presence of new digital colleagues?
E.g. As digital assistants (such as Alexa and Siri) become part of knowledge work, what kind of interfaces are appropriate, and where do we draw the line between human and technology?
Issues that need interdisciplinary attention:
– To what extent do automated technologies affect human workers and the meaning they attach to their work?
– What could we learn from human-animal studies about the way humans relate to machines in the workplace?
– What can we learn from studies of language acquisition that research how babies learn to recognise patterns, as a basis for understanding the limits of language recognition technologies?
– Can we still distinguish between technology and humans, and how, when the two are becoming so intertwined (the technology is designed by humans, but it also changes humans: how they behave, what norms they adhere to, etc.)? Is it still meaningful to make this distinction?
All these pressing questions would benefit from the integration of expertise, skills, and research methods from across disciplines, including:
1. Philosophers and historians addressing questions related to how people and machines relate to one another, what is “right” and who decides.
2. Organizational and behavioral scholars, able to understand how people are affected by these technologies in their day-to-day work practices, but also how humans shape the design of these technologies.
3. Linguists and literary researchers who can offer advice and insights on how language works and how it is parsed and understood, as well as the challenges and limitations of translation across contexts.
4. Social scientists who understand the limitations and opportunities of working with the data sets that are a key ingredient in training and improving AI; they can offer both a critical perspective and advice on how to avoid foreseeable issues.
5. Data scientists, unfortunately not present at the conference, who are needed for their deep understanding of how these systems are built and develop over time. They are also needed to study to what extent, and how, the insights generated by the scholars above can be incorporated into the design of the technology.
What do you think are the most pressing questions? How can your discipline contribute to understanding this phenomenon? Continue the conversation by following us on Twitter (@kinresearch) or contact us at email@example.com.
Subscribe to our newsletter to keep up to date with our new activities.