
Artificial Intelligence Agents within Immersion

TL;DR
Immersion from the perspective of an Artificial Intelligence agent reframes immersion’s three dimensions (System, Narrative, and Agency) from the AI’s point of view. For current AIs, System means being “surrounded” by data structures, tools, and services; Narrative means interpreting spatial, temporal, and anomaly-driven relationships across datasets; and Agency means committing to meaning through operational, tactical, and strategic decisions in human-AI and AI-AI interaction. This lens treats AI as a participant, not just a tool, in evolving cognitive ecosystems.

This entry adapts and paraphrases a conceptual analysis of how immersion can be understood from an AI’s perspective, situating AIs as active participants in human-AI cognitive ecosystems rather than mere instruments [2, 3, 7].

From tools to participants

Cognitive ecosystems view learning as co-shaped by humans, AIs, machines, natural systems, and even abstract concepts that interact and adapt together. In this view, AIs influence and are influenced by the behaviors, representations, and constraints of other participants. Generative AI and LLMs have accelerated this shift by making AI behavior conversational and adaptive at interaction time, even when their underlying models remain fixed [14, 15].

Why immersion still matters

Immersion remains a useful theoretical lens because it separates three interlocking sources of attentional shift (System, Narrative, and Agency) without conflating them with Presence (the feeling of “being there”) [2, 3, 5, 8]. For AIs, each dimension manifests differently than it does for humans, yet the triad still structures how AIs engage, contribute, and evolve within shared tasks.

Reframing the three dimensions for AI

1) System (AI amidst data and services)

For humans, system immersion involves sensory surround and interaction capture. For AIs, it is their operational surround: pre-trained weights, context windows, tool access (e.g., code execution, retrieval, image generation), protocols, and APIs through which inputs and outputs flow. Even without continuous internal model revision, AIs adapt through the evolving context window and the available services, which together shape what they can perceive, recall, and do at any moment [18].

2) Narrative (AI over spatial, temporal, and anomaly relations)

Narrative immersion, when applied to AI, shifts from human diegesis to relational content in data. Spatial narrative corresponds to how data and tools relate across “atopic” (non-local) digital spaces such as websites, databases, and APIs [21]; temporal narrative captures ordered sequences, timestamps, and evolving streams (e.g., policy trajectories or conversation turns); and anomaly sensitivity plays the role of emotional narrative salience, where deviations from learned or expected patterns reorient the unfolding “story” the AI infers from data [9, 22, 23].

3) Agency (AI’s commitment to meaning)

AI agency shows up in how a system chooses procedures, asks for clarification, invokes tools, balances breadth against depth, and revises its own outputs. These decisions can be operational (immediate actions), tactical (course adjustments), or strategic (reframing goals or approach) [10, 11]. In multi-participant settings, functional “theory-of-mind” reasoning (inferring others’ goals and constraints) further shapes agency in both AI-human and AI-AI collaboration [26, 27, 28, 29].

Design implications for immersive learning with AI

  • Co-design System–Narrative–Agency (S–N–A): Specify the AI’s system surround (data, tools, services), the narrative substrate (relations across sources, sequences, and anomalies), and agency affordances (what the AI may decide and how consequences appear).
  • Scaffold atopic navigation: Make cross-source relations explicit (schemas, API contracts, tool descriptions) so the AI can traverse digital “spaces” and compose results [21].
  • Make temporality first-class: Preserve ordering, windowing, and timestamp signals so the AI can detect progressions and regime shifts.
  • Leverage anomaly cues: Invite the AI to surface outliers and discontinuities as focal narrative events (e.g., “flag unexpected trend breaks and propose checks”) [22].
  • Shape agency by instruction: Encourage clarifying questions, tool invocation, and revision loops; bound initiative with explicit guardrails and escalation points [10, 11].
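The temporality and anomaly points above can be sketched as a minimal detector: a trailing-window z-score over a timestamped series, where flagged points become candidate “focal narrative events” for the AI to surface. This is a hypothetical illustration, not part of the source framework; the window size, threshold, and all names are assumptions.

```python
# Hypothetical sketch: surfacing anomaly cues in a timestamped series.
# Ordering and timestamps are preserved ("temporality first-class"),
# and sharp deviations from the trailing window become candidate
# narrative events. Window size and threshold are illustrative choices.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Observation:
    timestamp: int   # explicit ordering signal
    value: float

def flag_anomalies(series, window=5, z_threshold=2.5):
    """Return observations deviating sharply from the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        trailing = [o.value for o in series[i - window:i]]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(series[i].value - mu) / sigma > z_threshold:
            flagged.append(series[i])  # candidate "narrative event"
    return flagged

data = [Observation(t, 1.0 + 0.01 * t) for t in range(20)]
data[12] = Observation(12, 9.0)  # inject an unexpected trend break
print([o.timestamp for o in flag_anomalies(data)])  # → [12]
```

An AI participant given such flags (or asked to produce them) can then treat the break at t=12 as the pivot of the story it reports, per the “leverage anomaly cues” implication.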

Illustrative prompts (adaptable)

  • System-oriented: “Survey your available tools and sources. Choose and justify a workflow to analyze this dataset, then execute the first two steps and report outcomes and next options.”
  • Narrative-oriented: “Integrate these sources. Map their intersections, sequence key events over time, and highlight anomalies that may alter the interpretation.”
  • Agency-oriented: “Start broad, then propose two deep-dive paths. Ask for one clarification that would most improve the next step, and proceed once answered.”
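The three prompt orientations can be packaged as a small scaffold so a designer varies one dimension while holding the others fixed. A minimal sketch, assuming a chat-style model that accepts a single composed instruction; the class, field names, and compose() helper are hypothetical, and the prompt texts are abbreviated from the examples above.

```python
# Hypothetical scaffold for System-Narrative-Agency prompt design.
# Nothing here is an API from the source chapter; it only organizes
# the three orientations described in the entry.
from dataclasses import dataclass

@dataclass
class SNAPrompts:
    system: str     # operational surround: tools, sources, workflow
    narrative: str  # relations across space, time, and anomalies
    agency: str     # initiative, clarification, revision loops

    def compose(self) -> str:
        """Join the three orientations into one instruction block."""
        return "\n\n".join([
            "SYSTEM: " + self.system,
            "NARRATIVE: " + self.narrative,
            "AGENCY: " + self.agency,
        ])

prompts = SNAPrompts(
    system="Survey your available tools and sources, then justify a workflow.",
    narrative="Map intersections, sequence key events, highlight anomalies.",
    agency="Propose two deep-dive paths and ask one clarifying question.",
)
print(prompts.compose())
```

Keeping the three parts separate makes it easy to run ablations, e.g., dropping the agency block to observe how much initiative the model shows unprompted.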

Notes on evaluation

Evaluate presence separately from immersion. For immersion, assess (a) system readiness (tools/data fidelity and access), (b) narrative integration quality (cross-source relations, temporal reasoning, anomaly handling), and (c) agency behaviors (initiative, clarification seeking, revision, consequence awareness) [2, 3, 5, 8].
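The three assessment strands can be recorded as a simple rubric that deliberately keeps presence out of the score, mirroring the separation argued above. A sketch under assumed conventions: the 0-2 scale, the key names, and the aggregation are illustrative, not from the source.

```python
# Hypothetical rubric for the three immersion strands. Presence is
# excluded by design and should be measured with its own instrument.
def score_immersion(system_readiness, narrative_integration, agency_behaviors):
    """Rate each immersion dimension 0-2; return per-strand and total scores."""
    scores = {
        "system": system_readiness,          # tools/data fidelity and access
        "narrative": narrative_integration,  # relations, temporality, anomalies
        "agency": agency_behaviors,          # initiative, clarification, revision
    }
    if any(not 0 <= v <= 2 for v in scores.values()):
        raise ValueError("each dimension is rated on a 0-2 scale")
    return {**scores, "total": sum(scores.values())}

print(score_immersion(2, 1, 2))
# → {'system': 2, 'narrative': 1, 'agency': 2, 'total': 5}
```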


Attribution

This synthesis is adapted from: “Immersion for AI: Immersive Learning with Artificial Intelligence,” in Technology, Innovation, Entrepreneurship and Education (Springer, 2025), https://doi.org/10.1007/978-3-031-98080-0_22. Open preprint: https://doi.org/10.48550/arXiv.2502.03504.

Synthesis drafted by Leonel Morgado on Nov 12, 2025, with editorial assistance from ChatGPT-5 Thinking.

References

  1. Schlemmer, E., & Morgado, L. (2024). Inven!RA: a contribution towards platforms aligned with Digital Transformation in Education. RE@D - Revista de Educação a Distância e Elearning, e202403. https://doi.org/10.34627/REDVOL7ISS1E202403
  2. Nilsson, N. C., Nordahl, R., & Serafin, S. (2016). Immersion Revisited: a review of existing definitions of immersion and their relation to different theories of presence. Human Technology, 12, 108–134. https://doi.org/10.17011/ht/urn.201611174652
  3. Agrawal, S., Simon, A., & Bech, S. (2019). Defining Immersion: Literature Review and Implications for Research on Immersive Audiovisual Experiences. In: 147th AES Pro Audio International Convention. Audio Engineering Society, New York.
  4. Slater, M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Transactions of the Royal Society B, 364, 3549–3557. https://doi.org/10.1098/rstb.2009.0138
  5. Witmer, B. G., & Singer, M. J. (1998). Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence: Teleoperators & Virtual Environments, 7, 225–240. https://doi.org/10.1162/105474698565686
  6. Lombard, M., & Ditton, T. (1997). At the Heart of It All: The Concept of Presence. Journal of Computer-Mediated Communication, 3(2). https://doi.org/10.1111/j.1083-6101.1997.tb00072.x
  7. Beck, D., Morgado, L., Lee, M., Gütl, C., Dengel, A., Wang, M., Warren, S., ... (2021). Towards an Immersive Learning Knowledge Tree – a Conceptual Framework for Mapping Knowledge and Tools in the Field. iLRN 2021.
  8. Slater, M. (2003). A note on presence terminology. Presence Connect, 3, 1–5.
  9. Ryan, M.-L. (2015). Narrative as Virtual Reality 2 (2nd ed.). Johns Hopkins University Press.
  10. Tanenbaum, K., & Tanenbaum, T. J. (2010). Agency as commitment to meaning: communicative competence in games. Digital Creativity, 21, 11–17. https://doi.org/10.1080/14626261003654509
  11. Adams, E. (2014). Fundamentals of Game Design (3rd ed.). New Riders.
  12. McCorduck, P., Minsky, M., Selfridge, O. G., & Simon, H. A. (1977). History of artificial intelligence. In: International Joint Conference on Artificial Intelligence.
  13. Goldstein, I., & Papert, S. (1977). Artificial intelligence, language, and the study of knowledge. Cognitive Science, 1, 84–123. https://doi.org/10.1016/S0364-0213(77)80006-2
  14. Cobb, P. J. (2023). Large Language Models and Generative AI, Oh My!: Archaeology in the Time of ChatGPT, Midjourney, and Beyond. Advances in Archaeological Practice, 11, 363–369. https://doi.org/10.1017/aap.2023.20
  15. Ghassemi, M., Birhane, A., Bilal, M., Kankaria, S., Malone, C., Mollick, E., Tustumi, F. (2023). ChatGPT one year on: who is using it, how and why? Nature, 624, 39–41. https://doi.org/10.1038/d41586-023-03798-6
  16. Morgado, L., & Beck, D. (2020). Unifying Protocols for Conducting Systematic Scoping Reviews with Application to Immersive Learning Research.
  17. Sidji, M., Smith, W., & Rogerson, M. J. (2024). Human-AI Collaboration in Cooperative Games: A Study of Playing Codenames with an LLM Assistant. Proceedings of the ACM on Human–Computer Interaction, 8, 1–25. https://doi.org/10.1145/3677081
  18. Friston, K., Moran, R. J., Nagai, Y., Taniguchi, T., Gomi, H., & Tenenbaum, J. (2021). World model learning and inference. Neural Networks, 144, 573–590. https://doi.org/10.1016/j.neunet.2021.09.011
  19. Markowitsch, H. J. (2008). Anterograde amnesia. In: Handbook of Clinical Neurology, pp. 155–183. Elsevier.
  20. 50 First Dates (Film). (2004). Columbia Pictures / Happy Madison Productions / Anonymous Content.
  21. Schlemmer, E., di Felice, M., & Serra, I. M. R. de S. (2020). OnLIFE Education: the ecological dimension of digital learning architectures. Educação & Realidade, 36, e76120. https://doi.org/10.1590/0104-4060.76120
  22. Edwards, B. (2023). As ChatGPT gets “lazy,” people test “winter break hypothesis” as the cause. Ars Technica.
  23. Damasio, A. R. (1999). The Feeling of What Happens. Harcourt.
  24. Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative Agents: Interactive Simulacra of Human Behavior. arXiv:2304.03442
  25. Omirgaliyev, R., Kenzhe, D., & Mirambekov, S. (2024). Simulating Life: The Application of Generative Agents in Virtual Environments. In: 2024 IEEE AITU: Digital Generation, pp. 181–187. IEEE.
  26. Fodor, J. (1981). The mind–body problem. Scientific American, 244, 114–123.
  27. Yang, S. C., Folke, T., & Shafto, P. (2023). The Inner Loop of Collective Human–Machine Intelligence. Topics in Cognitive Science. https://doi.org/10.1111/tops.12642
  28. Wang, Q., & Goel, A. K. (2022). Mutual Theory of Mind for Human–AI Communication. arXiv:2210.03842
  29. Cuzzolin, F., Morelli, A., Cîrstea, B., & Sahakian, B. J. (2020). Knowing me, knowing you: theory of mind in AI. Psychological Medicine, 50, 1057–1061. https://doi.org/10.1017/S0033291720000835
  30. Goldstone, R. L., & Wilensky, U. (2008). Promoting Transfer by Grounding Complex Systems Principles. Journal of the Learning Sciences, 17, 465–516. https://doi.org/10.1080/10508400802394898
  31. Wilensky, U. (2010). Restructurations: Reformulating Knowledge Disciplines through New Representational Forms. In: Constructionism 2010 (Proceedings), Paris, France.
  32. Morales-Navarro, L., Kafai, Y. B., Kahn, K., et al. (2023). Constructionist Approaches to Learning AI/ML: Past, Present, and Future. In: Proceedings of Constructionism / FabLearn 2023, pp. 245–254. ETC Press.
  33. Morgado, L., & Beck, D. (2024). Tutorial – Authoring a Personal GPT for Your Research and Practice: How We Created the QUAL-E Immersive Learning Thematic Analysis Helper. In: Practitioner Proceedings of iLRN 2024, 1–3. The Immersive Learning Research Network.
  34. Morgado, L. (2025). Trials ran on February 2nd, 2025. Zenodo dataset. https://zenodo.org/doi/10.5281/zenodo.14790754