The title is a bit too general, in my opinion: I expected a discussion of theories and corresponding experiments, contrasted to draw conclusions on the topic. Instead, the content is written as an essay elaborating the author's interpretation of the historical development of AI and the influence of science fiction and futurists on risk perception.
The author proposes that experts dominate the risk-perception narrative around AI. However, people do not build a risk attitude from expert opinions alone, but also from their lived experiences with AI and with everything they understand AI to be. AI is already here, and critical, damaging events from its use are already evident to the public, e.g., the Cambridge Analytica, Facebook, and Twitter scandals. People rely on such past events to build a risk attitude towards AI, independent of whether they are exposed to expert opinions. Moreover, not only do experts spread risk perceptions of AI; laypeople also do so effectively, through the amplification dynamics described by the Social Amplification of Risk Framework (SARF) playing out on social media.
At times the line between risk perception and the actual risks of AI was blurred. It is important to consistently distinguish the two, since they are indeed different concepts. In addition, the author assumes that people actively and consciously seek out expert opinions to build a risk perception, which I doubt is the case. In general, risk-perception information is fluid and transfers best within close, culturally related groups and their sustained perceptions; it is not suspended in time, waiting for experts to set the nature and intensity of a risk attitude. In other words, laypeople and experts construct risk perceptions in parallel, and given the uncertainty, the unknowns, and the interdisciplinary nature and applications of AI, preconceptions and heuristics play a role in shaping the risk perception of both groups; see Pestalozzi et al. (2019).
In summary, I loved the reading and the refreshing contact with the risk-perception literature as it specifically addresses AI. I appreciated the clarity of the concepts presented and the intellectual reflection, which helped me build a bridge between the fields of risk perception, user experience, and human factors in AI.