Call for Papers: The Digital Undertow of Generative AI
The Digital Undertow of Generative AI: Hidden Currents Reshaping Organizations, Work, Education and Public-Sector Innovation Agendas
Guest Editors:
Marta Choroszewicz, Senior Researcher, Department of Social Sciences, University of Eastern Finland, Joensuu Campus, Finland.
Tracey L. Adams, Professor, Department of Sociology, The University of Western Ontario, London, Ontario, Canada.
Theme Overview
Generative AI technologies are portrayed as transformative forces that carry promises, risks and threats to the current foundations of work, education and society at large. Among these, the most visible promises relate to efficiency, productivity and well-being for individuals who learn to master these tools. As organizations develop, experiment with, and integrate various AI assistants and agents, they face challenges related to change management, regulatory compliance, and numerous ethical concerns. Employers, educational institutions and especially public-sector organizations encounter complex dilemmas around the development, purchase and use of generative AI tools, including the allocation of resources and responsibility for their functioning and outputs. These technologies are also expected to deepen existing social divisions and create new hierarchies, raising profound questions about building resilient futures for our societies. Yet, the multifaceted societal impacts of generative AI remain largely underexplored, even as these technologies proliferate across workplaces and institutions.
Current debates on generative AI often emphasize its instrumental use and value, highlighting benefits such as increased productivity, improved efficiency, the elimination of routine tasks, and the simplification of work (e.g., Brynjolfsson et al., 2023). The adoption of these technologies is largely driven by a widely shared belief in their transformative potential. These technologies are expected to reshape professional practices by enabling new forms of human–AI collaboration that integrate diverse types of expertise and knowledge across boundaries. Yet these ideals do not always translate straightforwardly into practice (e.g., Alon-Barkat & Busuioc, 2023; Boulus-Rødje et al., 2024; Lebovitz et al., 2022).
However, these optimistic narratives are increasingly counterbalanced by critical concerns. Scholars point to persistent issues such as errors, inaccuracies, and hallucinations in AI-generated outputs, as well as biases rooted in training data and technical development processes (e.g., Bender et al., 2021; Hannigan et al., 2024). Privacy risks and questions of accountability further complicate the ethical landscape (Raji et al., 2020; Wachter et al., 2024). Beyond technical, legal, and ethical challenges, researchers warn that overreliance on generative AI may erode essential cognitive capacities, reducing individuals’ engagement in critical and reflective thinking – skills that are often foundational for professional judgment and societal decision-making (Gerlich, 2025).
Call for Papers
In this Issue, we are particularly interested in exploring the dynamic processes of less visible and indirect changes – the digital undertows (Orlikowski & Scott, 2023) – associated with the current wave of digital transformation driven by the integration of generative AI into various areas of social life. The concept of “digital undertow” shifts our focus beyond visible disruptions caused by digital technologies toward hidden, indirect changes that occur quietly, without explicit recognition or debate, yet whose cumulative impact may profoundly alter the foundations of professional practices, governance structures, and societal values.
Thus, this Issue aims to examine how the increasing use of generative AI across social domains may subtly reconfigure institutional norms, redistribute authority, and reshape what counts as legitimate expertise and knowledge.
We invite contributions that explore the hidden institutional and societal transformations associated with generative AI. We encourage research that examines how individuals, organizations, and policymakers negotiate and respond to these shifts—both anticipated and emergent. Key questions include, but are not limited to:
- How does the adoption of generative AI reshape institutional norms, values, and authority structures?
- What forms of authority and responsibility emerge as generative AI systems mediate knowledge production and work practices?
- What forms of expertise and legitimacy are displaced or reconstituted through generative AI-mediated practices?
- How do these hidden currents influence inclusion, trust, and fairness in social and organizational life?
Submission Guidelines
We welcome empirical and conceptual papers from sociology, communication and media studies, public administration, education studies, and related fields.
Articles should be 6,000–7,000 words, including references, and should include an abstract (150–200 words) and 3–6 keywords.
All submitted manuscripts will undergo double-blind peer review according to the journal’s editorial policy.
Articles must be written in UK English and follow APA 7th edition style.
Timeline for the Issue:
- Full paper submission: 20th May 2026
- Article reviews to authors: 1st August 2026
- Revised articles submitted: 15th September 2026
- End of review process and final submission of revised articles: 30th November 2026
- Publication of the issue: December 2026
Contact: For inquiries, please contact marta.choroszewicz@uef.fi.
References
Alon-Barkat, S., & Busuioc, M. (2023). Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153–169. https://doi.org/10.1093/jopart/muac007
Bender, E.M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21) (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
Boulus-Rødje, N., Cranefield, J., Doyle, C., & Fleron, B. (2024). GenAI and me: The hidden work of building and maintaining an augmentative partnership. Personal and Ubiquitous Computing, 28, 861–874. https://doi.org/10.1007/s00779-024-01810-y
Brynjolfsson, E., Li, D., & Raymond, L.R. (2023). Generative AI at work (NBER Working Paper No. 31161). National Bureau of Economic Research. https://www.nber.org/system/files/working_papers/w31161/w31161.pdf
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
Hannigan, T.R., McCarthy, I.P., & Spicer, A. (2024). Beware of botshit: How to manage the epistemic risks of generative chatbots. Business Horizons, 67(5), 471–486. https://doi.org/10.1016/j.bushor.2024.03.001
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis. Organization Science, 33(1), 126–148. https://doi.org/10.1287/orsc.2021.1549
Orlikowski, W.J., & Scott, S.V. (2023). The digital undertow and institutional displacement: A sociomaterial approach. Organization Theory, 4(2). https://doi.org/10.1177/26317877231180898
Raji, I.D., Smart, A., White, R.N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* 2020) (pp. 33–44). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372873
Wachter, S., Mittelstadt, B., & Russell, C. (2024). Do large language models have a legal duty to tell the truth? Royal Society Open Science, 11, 240197. https://doi.org/10.1098/rsos.240197




