Artificial Intelligence Agents within Immersion
Seen from the perspective of an Artificial Intelligence agent, immersion’s three dimensions (System, Narrative, and Agency) take on new meanings. For current AIs, System means being “surrounded” by data structures, tools, and services; Narrative means interpreting spatial, temporal, and anomaly-driven relationships across datasets; and Agency means committing to meaning through operational, tactical, and strategic decisions in human-AI and AI-AI interaction. This lens treats AI as a participant, not just a tool, in evolving cognitive ecosystems.
This entry adapts and paraphrases a conceptual analysis of how immersion can be understood from an AI’s perspective, situating AIs as active participants in human-AI cognitive ecosystems [12] rather than mere instruments [1, 2, 3, 7].
From tools to participants
Cognitive ecosystems view learning as co-shaped by humans, AIs, machines, natural systems, and even abstract concepts that interact and adapt together. In this view, AIs influence and are influenced by the behaviors, representations, and constraints of other participants. Generative AI and LLMs have accelerated this shift by making AI behavior conversational and adaptive at interaction time, even when their underlying models remain fixed [13, 14, 15].
Why immersion still matters
Immersion remains a useful theoretical lens because it separates three interlocking sources of attentional shift (System [1], Narrative [6], and Agency [4]) without conflating them with Presence (the feeling of “being there”) [2, 3, 5, 8, 9]. For AIs, each dimension manifests differently than it does for humans, yet the triad still structures how AIs engage, contribute, and evolve within shared tasks.
Reframing the three dimensions for AI
1) System (AI amidst data and services)
For humans, system immersion involves sensory surround and interaction capture. For AIs, it is their operational surround: pre-trained weights, context windows, tool access (e.g., code execution, retrieval, image generation), protocols, and APIs through which inputs and outputs flow. Even without continuous internal model revision, AIs adapt over the evolving context window and available services, which together shape what they can perceive, recall, and do at any moment [18].
2) Narrative (AI over spatial, temporal, and anomaly relations)
Narrative immersion, when applied to AI, shifts from human diegesis to relational content in data. Spatial narrative corresponds to how data and tools relate across “atopic” (non-local) digital spaces such as websites, databases, and APIs; temporal narrative captures ordered sequences, timestamps, and evolving streams (e.g., policy trajectories or conversation turns); and anomaly sensitivity plays the role of emotional narrative salience, where deviations from learned or expected patterns reorient the unfolding “story” the AI infers from data [9].
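As a concrete illustration of anomaly sensitivity as narrative salience, the sketch below flags points in a timestamped series that deviate strongly from a rolling expectation. The rolling-window and threshold values are arbitrary assumptions for illustration, not parameters from the source.

```python
from statistics import mean, stdev

def narrative_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates strongly from the rolling
    mean of the preceding `window` points -- a stand-in for the
    'anomaly-driven' narrative salience an AI might track in a stream."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # Skip flat windows (sigma == 0) where deviation is undefined.
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A steady trend with one discontinuity at index 8.
stream = [10, 11, 10, 12, 11, 10, 11, 12, 40, 11]
print(narrative_anomalies(stream))  # → [8]
```

The flagged index is where the inferred “story” breaks; a learning design could ask the AI to treat such points as focal events to investigate or explain.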
3) Agency (AI’s commitment to meaning)
AI agency shows up in how a system chooses procedures, asks for clarifications, invokes tools, balances breadth vs. depth, and revises its own outputs. These decisions can be operational (immediate actions), tactical (course adjustments), or strategic (goal/approach reframing) [10, 11]. In multi-participant settings, functional “theory-of-mind” reasoning (inferring others’ goals and constraints) further shapes agency in both AI-human and AI-AI collaboration [17].
Design implications for immersive learning with AI
- Co-design System–Narrative–Agency (S–N–A): Specify the AI’s system surround (data, tools, services), the narrative substrate (relations across sources, sequences, and anomalies), and agency affordances (what the AI may decide and how consequences appear).
- Scaffold atopic navigation: Make cross-source relations explicit (schemas, API contracts, tool descriptions) so the AI can traverse digital “spaces” and compose results.
- Make temporality first-class: Preserve ordering, windowing, and timestamp signals so the AI can detect progressions and regime shifts.
- Leverage anomaly cues: Invite the AI to surface outliers and discontinuities as focal narrative events (e.g., “flag unexpected trend breaks and propose checks”).
- Shape agency by instruction: Encourage clarifying questions, tool invocation, and revision loops; bound initiative with explicit guardrails and escalation points [10, 11].
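The co-design recommendation above can be made concrete as a configuration sketch. Everything here is hypothetical: the class, field names, and example values are one possible way to specify the system surround, narrative substrate, and agency affordances together, not an interface from the source.

```python
from dataclasses import dataclass, field

@dataclass
class SNASpec:
    """Hypothetical System-Narrative-Agency spec for an AI participant."""
    # System: the agent's operational surround.
    tools: list = field(default_factory=list)
    data_sources: list = field(default_factory=list)
    # Narrative: the relational substrate the agent reasons over.
    cross_source_schemas: dict = field(default_factory=dict)
    preserve_timestamps: bool = True   # keep temporality first-class
    surface_anomalies: bool = True     # invite outlier reporting
    # Agency: what the agent may decide, and its guardrails.
    may_ask_clarifications: bool = True
    max_autonomous_steps: int = 2      # escalate beyond this bound
    escalation_contact: str = "human reviewer"

spec = SNASpec(
    tools=["code_execution", "retrieval"],
    data_sources=["policy_dataset_v1"],
    cross_source_schemas={"policy_dataset_v1": "date, region, indicator"},
)
print(spec.max_autonomous_steps)  # → 2
```

Writing the three dimensions into one explicit artifact keeps designers from specifying tools (System) while leaving narrative relations and agency bounds implicit.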
Illustrative prompts (adaptable)
- System-oriented: “Survey your available tools and sources. Choose and justify a workflow to analyze this dataset, then execute the first two steps and report outcomes and next options.”
- Narrative-oriented: “Integrate these sources. Map their intersections, sequence key events over time, and highlight anomalies that may alter the interpretation.”
- Agency-oriented: “Start broad, then propose two deep-dive paths. Ask for one clarification that would most improve the next step, and proceed once answered.”
Notes on evaluation
Evaluate presence separately from immersion. For immersion, assess (a) system readiness (tools/data fidelity and access), (b) narrative integration quality (cross-source relations, temporal reasoning, anomaly handling), and (c) agency behaviors (initiative, clarification seeking, revision, consequence awareness) [2, 3, 5, 8, 16].
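One way to operationalize this separation is a rubric that scores the three immersion facets independently of any presence measure. The facet names follow the list above; the equal weighting and 0–1 scale are assumptions for illustration.

```python
def immersion_score(system_readiness, narrative_integration, agency_behaviors):
    """Average three facet scores, each in [0, 1].
    Presence would be measured by a separate instrument, never folded in here."""
    facets = {
        "system": system_readiness,
        "narrative": narrative_integration,
        "agency": agency_behaviors,
    }
    for name, value in facets.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} score must be in [0, 1]")
    return sum(facets.values()) / len(facets)

print(immersion_score(0.9, 0.6, 0.75))  # → 0.75
```

Keeping the facets as separate inputs lets an evaluator report, for instance, high system readiness but weak narrative integration, instead of a single conflated number.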
Attribution
This synthesis is adapted from: “Immersion for AI: Immersive Learning with Artificial Intelligence,” in Technology, Innovation, Entrepreneurship and Education (Springer, 2025), https://doi.org/10.1007/978-3-031-98080-0_22. Open preprint: https://doi.org/10.48550/arXiv.2502.03504.
Synthesis drafted by Leonel Morgado on Nov 12, 2025, with editorial assistance of ChatGPT-5 Thinking.
References
1. Nilsson, N. C., Nordahl, R., & Serafin, S. (2016). Immersion revisited: a review of existing definitions of immersion and their relation to different theories of presence. Human Technology, 12, 108–134. https://doi.org/10.17011/ht/urn.201611174652
2. Agrawal, S., Simon, A., & Bech, S. (2019). Defining immersion: literature review and implications for research on immersive audiovisual experiences. In 147th AES Pro Audio International Convention. Audio Engineering Society.
3. Witmer, B. G., & Singer, M. J. (1998). Measuring presence in virtual environments: a presence questionnaire. Presence: Teleoperators & Virtual Environments, 7, 225–240. https://doi.org/10.1162/105474698565686
4. Beck, D., Morgado, L., Lee, M., et al. (2021). Towards an immersive learning knowledge tree: a conceptual framework for mapping knowledge and tools in the field. In Proceedings of the 7th International Conference of the Immersive Learning Research Network.
5. Slater, M. (2003). A note on presence terminology. Presence Connect, 3, 1–5.
6. Ryan, M.-L. (2015). Narrative as Virtual Reality 2: Revisiting Immersion and Interactivity in Literature and Electronic Media (2nd ed.). Johns Hopkins University Press.
7. Tanenbaum, K., & Tanenbaum, T. J. (2010). Agency as commitment to meaning: communicative competence in games. Digital Creativity, 21, 11–17. https://doi.org/10.1080/14626261003654509
8. Adams, E. (2014). Fundamentals of Game Design (3rd ed.). New Riders.
9. Cobb, P. J. (2023). Large Language Models and Generative AI, Oh My!: Archaeology in the Time of ChatGPT, Midjourney, and Beyond. Advances in Archaeological Practice, 11, 363–369. https://doi.org/10.1017/aap.2023.20
10. Ghassemi, M., Birhane, A., Bilal, M., Kankaria, S., Malone, C., Mollick, E., & Tustumi, F. (2023). ChatGPT one year on: who is using it, how and why? Nature, 624, 39–41. https://doi.org/10.1038/d41586-023-03798-6
11. Friston, K., Moran, R. J., Nagai, Y., Taniguchi, T., Gomi, H., & Tenenbaum, J. (2021). World model learning and inference. Neural Networks, 144, 573–590. https://doi.org/10.1016/j.neunet.2021.09.011
12. Schlemmer, E., di Felice, M., & Serra, I. M. R. de S. (2020). OnLIFE Education: the ecological dimension of digital learning architectures. Educação & Realidade, 36, e76120. https://doi.org/10.1590/0104-4060.76120
13. Edwards, B. (2023). As ChatGPT gets “lazy,” people test “winter break hypothesis” as the cause. Ars Technica.
14. Damasio, A. R. (1999). The Feeling of What Happens. Harcourt.
15. Fodor, J. (1981). The mind–body problem. Scientific American, 244, 114–123.
16. Yang, S. C., Folke, T., & Shafto, P. (2023). The Inner Loop of Collective Human–Machine Intelligence. Topics in Cognitive Science. https://doi.org/10.1111/tops.12642
17. Wang, Q., & Goel, A. K. (2022). Mutual Theory of Mind for Human–AI Communication. arXiv:2210.03842
18. Cuzzolin, F., Morelli, A., Cîrstea, B., & Sahakian, B. J. (2020). Knowing me, knowing you: theory of mind in AI. Psychological Medicine, 50, 1057–1061. https://doi.org/10.1017/S0033291720000835