This case set is part of the Giving Voice to Values (GVV) curriculum. To see other material in the GVV curriculum, please visit http://store.darden.virginia.edu/giving-voice-to-values.

Dax Enzo is the director of the ethical machine learning and responsible artificial intelligence (AI) team at SOCIALCORP, a large US-based social media company. Early in their tenure, Enzo must decide whether to release research that their team conducted on the algorithmic amplification of political content on SOCIALCORP. The research shows that SOCIALCORP's content recommendation algorithm amplifies political content from right-leaning individuals and news outlets more than it amplifies content from the political left, but it does not (yet) explain why. The research is also difficult for laypeople to understand because it is highly technical, complex, and multilayered. The team submits the research to a peer-reviewed journal, but the review process is slow, in part because the journal can work only with an aggregated data set: the complete data set cannot be made accessible due to user privacy concerns. Enzo decides it is their responsibility to publish the research as soon as possible, even without peer review and at the risk that the research could be perceived as unsubstantiated or superficial. In this A case, Enzo's challenge is to devise a strategic action plan that secures SOCIALCORP's support for releasing the research without peer review.

This case set is intended for use at the graduate level in business and science, technology, engineering, and math (STEM) classrooms — for example, in courses on ethics and technology; responsible data and computer science (including machine learning and AI); or technology management. It could also be taught to advanced undergraduates who have a foundation in GVV or ethics and technology.
This case set is designed to help students do the following:

1. Understand the social and political implications of technology products that include content recommendation algorithms.
2. Learn about corporate responsibility vis-à-vis continually evolving technologies.
3. Anticipate public-interest concerns about content recommendation algorithms.
4. Learn about the relationship between corporate research and reputational risks.
5. Identify relevant internal stakeholders, understand their information needs, and map out their different approaches to both risk and responsibility.
6. Develop strategies for involving stakeholders in relevant organizational processes to pursue a values-driven approach.