How can explainability be integrated into AI-driven system interaction, and how does it influence user trust, understanding, and control in dynamic OS environments?
PhAI is a speculative operating system (OS) concept that challenges the legacy of app-based computing. Instead of relying on static containers like files or folders, it builds task-specific interfaces on the fly, based on user intent, context, and workflow. The system replaces app invocation with modular function orchestration, integrates explainable AI, and adapts to the user’s way of thinking. PhAI envisions operating systems as cognitive partners—formed by users, not just used by them—and shifts the burden of adaptation from human to machine.
For decades, operating systems have been built around the desktop metaphor, structuring digital interaction through documents, folders, and static applications. This once helped users understand computers by mirroring physical office logic. However, today's workflows are fluid, cross-functional, and driven by goals rather than containers. Yet OS environments still require users to string tools together manually, assuming they recall the right app, format, and path. This reinforces fragmented tool use, increases cognitive overhead, and breaks down in non-linear tasks. From a UX perspective, it violates key heuristics: systems fail to match real-world thinking, overload short-term memory, and restrict user flow. The result is a persistent misalignment between system structure and human problem-solving.
I shaped the conceptual foundation of the project, bridging academic research and system design. Drawing from over 200 sources in HCI and AI interaction, I translated theoretical insight into usable architecture. Throughout the process, I matched methods to each project phase, from foundational framing to structural prototyping. I ensured the project remained navigable and grounded, managing direction, maintaining coherence, and keeping ideas actionable. My role combined deep content expertise with systemic thinking: ensuring that PhAI wasn't just visionary, but buildable.
PhAI was developed through a research-led, speculative design process. We began with an extensive literature review — exploring paradigms in HCI, HCAI, system interaction, and operating system design. Synthesising these insights, we mapped the architectural tensions between intent recognition, function orchestration, and explainability.
Rather than building a visual prototype, we deliberately scoped the project as a speculative design model, focusing on system logic, interaction principles, and structural clarity. The work resulted in a conceptual architecture and critical framing, not a production-ready interface.
How can explainability be integrated into AI-driven system interaction, and how does it influence user trust, understanding, and control in dynamic OS environments?
Desk research on human-centred XAI and interaction design. Explored layered explanation models and how user-facing transparency affects trust and learnability in adaptive systems.
Users don’t need complete system transparency — they need timely, relevant, and controllable explanations. Explainability is effective when it’s interactive, layered, and readily available at the point of doubt.
PhAI integrates a 3-layer explanation model: quick feedback, assistant dialogue, and structured breakdown (XUI). An embedded evaluation layer tracks clarity, trust, and user adaptation over time.
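To make the three layers concrete, here is a minimal TypeScript sketch of how such an explanation model and its embedded evaluation layer could be represented. PhAI is a speculative concept with no reference implementation, so every name and shape below is an illustrative assumption:

```typescript
// Illustrative sketch of a three-layer explanation model.
// All names and shapes are hypothetical, not a specification.

type ExplanationDepth = "feedback" | "dialogue" | "breakdown"; // "breakdown" = the XUI layer

interface SystemDecision {
  action: string;     // what the system did, e.g. "composed export module"
  trigger: string;    // the detected intent that caused it
  evidence: string[]; // signals used: input phrases, context, history
}

interface Explanation {
  depth: ExplanationDepth;
  text: string;
}

// Return an explanation at the depth the user asks for: quick feedback
// inline, conversational detail on request, and a full structured
// breakdown at the point of doubt.
function explain(d: SystemDecision, depth: ExplanationDepth): Explanation {
  switch (depth) {
    case "feedback":
      return { depth, text: `Did "${d.action}" because you asked to ${d.trigger}.` };
    case "dialogue":
      return { depth, text: `I chose "${d.action}" for the goal "${d.trigger}". Want me to adjust it?` };
    case "breakdown":
      return {
        depth,
        text: [`Action: ${d.action}`, `Intent: ${d.trigger}`, `Evidence: ${d.evidence.join("; ")}`].join("\n"),
      };
  }
}

// Embedded evaluation layer: log how users rate each explanation,
// so clarity and trust can be tracked over time.
interface ExplanationRating {
  decisionAction: string;
  depth: ExplanationDepth;
  clear: boolean;
  trusted: boolean;
  at: number;
}

const ratings: ExplanationRating[] = [];

function rate(d: SystemDecision, depth: ExplanationDepth, clear: boolean, trusted: boolean): void {
  ratings.push({ decisionAction: d.action, depth, clear, trusted, at: Date.now() });
}
```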
How can modular function architectures be used to generate user interfaces dynamically, and how does this affect usability, cognitive load, and long-term adaptability in operating systems?
Desk research on FaaS architectures, modular composition, and adaptive interface systems. Focus on how task-driven UI generation supports flexibility, reuse, and structural personalisation.
App containers impose static workflows. Function-level UI enables the system to respond directly to intent, reducing friction and allowing interfaces to adapt over time to individual usage patterns.
PhAI assembles interfaces on demand using modular functions. Depending on the task, they are persistent or transient and adapt structurally to user behaviour, routines, and task logic.
How can natural language interfaces support real-time intent clarification and user control in systems that parse ambiguous, goal-based input?
Desk research on input mediation in HCI and dialogue systems. Explored how feedback loops, system initiative, and disambiguation protocols shape interaction in adaptive interfaces.
One-shot input often fails when goals are vague. Systems require a dialogic layer that refines input through clarification, confirmation, or guided reformulation, without disrupting the task flow.
PhAI’s Prompt Line Interface parses open input, surfaces detected intent, and lets users confirm or adjust it. This creates a negotiation layer for intent, resolving ambiguity before execution begins.
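A minimal sketch of how such a negotiation layer might work, with a toy keyword matcher standing in for a real intent model; all names and goals are hypothetical:

```typescript
// Hypothetical sketch of the PLI's negotiation layer: parse free input
// into ranked intent candidates, surface the best guess, and let the
// user confirm or adjust before anything executes.
interface IntentCandidate {
  goal: string;
  confidence: number;
}

function parseInput(raw: string): IntentCandidate[] {
  // Toy heuristic: score known goals by keyword overlap with the input.
  const known = ["prepare a workshop", "publish a product", "plan a trip"];
  const words = raw.toLowerCase().split(/\s+/);
  return known.map(goal => ({
    goal,
    confidence: goal.split(" ").filter(w => words.includes(w)).length / goal.split(" ").length,
  }));
}

async function negotiateIntent(
  raw: string,
  confirm: (c: IntentCandidate) => Promise<boolean>, // UI hook: "Did you mean...?"
): Promise<IntentCandidate | null> {
  const ranked = parseInput(raw).sort((a, b) => b.confidence - a.confidence);
  for (const candidate of ranked) {
    if (candidate.confidence === 0) break;          // nothing plausible left
    if (await confirm(candidate)) return candidate; // user confirms, or we try the next
  }
  return null; // no confirmed intent: invite the user to reformulate
}
```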
How can user intent be recognised and processed as a central unit of interaction, and how does this shift affect the cognitive model of human–AI interaction in operating systems?
Desk research on intent modelling, goal-based interfaces, and cognitive offloading. Focused on limitations of command-based interaction and advantages of interpreting user input as high-level goals, not tool-specific steps.
Systems that force users to translate goals into sequences create friction. Recognising intent as the unit of interaction aligns system behaviour with human thinking and reduces the need for procedural planning.
PhAI replaces command logic with intent parsing. User input is treated as a goal in itself, not as a procedural step toward one. The system assembles functional modules around that intent, enabling purpose-driven interaction instead of app-structured workflows.
PhAI reimagines the operating system as an adaptive, intent-centred platform. Instead of launching fixed apps, the system assembles modular functions into purpose-built interfaces, based on user goals, context, and behaviour. This shifts the burden of orchestration from the user to the system. Interactions can be initiated via natural language or visual navigation. A built-in explainability layer provides insight into system behaviour and allows users to refine it. PhAI reframes the OS as a cognitive environment: not one users must conform to, but one that learns, adapts, and responds to how they actually think and work.
PhAI replaces command-based interaction with goal recognition. Instead of selecting apps or triggering features, users articulate high-level intents, such as preparing a workshop or publishing a product. These inputs are interpreted as structured goals through natural language parsing, behavioural context, and system memory.
The system assembles only the functions required to achieve the goal, reducing unnecessary complexity and cognitive load. This shifts interaction from procedural execution to an outcome-oriented approach. Tasks are no longer initiated through application logic but through purpose-driven orchestration, grounded in the user's mental model. Intent becomes the new entry point for the system.
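As an illustration of purpose-driven orchestration, the sketch below maps a recognised goal to the minimal set of function modules it requires. The registry entries and the goal table are invented for the example; a real system would infer needs from behavioural context and memory rather than a static lookup:

```typescript
// Illustrative mapping from a recognised goal to the minimal set of
// function modules needed to pursue it. All entries are hypothetical.
interface FunctionModule {
  id: string;
  provides: string;
}

const registry: FunctionModule[] = [
  { id: "doc.outline", provides: "outline" },
  { id: "media.slides", provides: "slides" },
  { id: "calendar.invite", provides: "scheduling" },
  { id: "web.publish", provides: "publishing" },
];

// What each goal needs; a stand-in for inference from context and memory.
const goalNeeds: Record<string, string[]> = {
  "prepare a workshop": ["outline", "slides", "scheduling"],
  "publish a product": ["publishing"],
};

function assembleFor(goal: string): FunctionModule[] {
  const needs = goalNeeds[goal] ?? [];
  // Pull only the functions the goal requires, nothing else.
  return registry.filter(m => needs.includes(m.provides));
}

// assembleFor("publish a product") -> [{ id: "web.publish", ... }]
```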
The Prompt Line Interface (PLI) is more than just an input channel. It enables natural language as a primary mode of interaction, allowing users to express what they aim to do, rather than how to do it. By combining classic command-line logic with intent interpretation, it acts as an interpretative interaction layer that dynamically maps goals to actions within the system.
At its core lies the principle of interaction as dialogue: PLI structures user interaction as a feedback-driven loop. The system not only parses intent, but also asks for clarification, suggests refinements, and explains its reasoning. This enables users to refine their goals iteratively, making complex system behaviour legible and adjustable.
As a hybrid interface logic, PLI facilitates seamless transitions between language and GUI. By externalising reasoning and accommodating ambiguity, it repositions the OS as a responsive partner in task negotiation, rather than a reactive command receiver.
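One way to picture the feedback-driven loop is as slot filling: when a parsed goal arrives with gaps, the system takes the initiative and asks one focused question per missing detail rather than guessing. A hedged sketch, with all names assumed:

```typescript
// Sketch of the dialogue loop: refine an underspecified goal through
// targeted clarification questions, without derailing the task flow.
interface Goal {
  intent: string;
  slots: Record<string, string | null>; // null = still unknown
}

function missingSlots(g: Goal): string[] {
  return Object.keys(g.slots).filter(k => g.slots[k] === null);
}

async function clarify(g: Goal, ask: (q: string) => Promise<string>): Promise<Goal> {
  // One focused question per missing detail, so clarification stays
  // lightweight and the user remains in control of the refinement.
  for (const slot of missingSlots(g)) {
    g.slots[slot] = await ask(`For "${g.intent}", what should I use for ${slot}?`);
  }
  return g;
}
```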
PhAI replaces monolithic applications with modular function blocks that dynamically assemble into temporary user interfaces. Instead of loading entire apps, the system pulls only the functions relevant to the user’s current goal, reducing surface complexity and eliminating redundant features.
These functions are not bound to persistent containers. They exist as composable units that form task-specific UI shells, tailored to each intent and disposed of once the task is completed. This marks a shift from static interface paradigms to ephemeral, goal-driven compositions.
For UX, this enables precision and efficiency. Interfaces are stripped to their operational core—what’s needed now, not what might be helpful later. This supports progressive disclosure: only showing deeper options when contextually appropriate. Visual and functional hierarchy is restructured around task flow rather than tool logic.
Technically, this architecture mirrors modular thinking in programming, where software is built from reusable, loosely coupled components. But in PhAI, this logic is applied to the interface itself. The result is a UI that scales with complexity when needed, yet collapses back into simplicity when possible.
By reimagining software as function-first rather than container-bound, PhAI transforms how interaction is shaped: not by software boundaries, but by human goals and needs.
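A brief sketch of what such an ephemeral, task-specific UI shell with progressive disclosure could look like in code. The shapes are speculative, not part of any PhAI specification:

```typescript
// Sketch of an ephemeral, task-specific UI shell. Core functions are
// shown immediately; deeper options stay hidden until the task state
// makes them relevant (progressive disclosure). Names are hypothetical.
interface TaskState {
  step: number;
  done: boolean;
}

interface UiBlock {
  moduleId: string;
  core: boolean;                               // part of the operational core?
  visibleWhen?: (state: TaskState) => boolean; // contextual disclosure rule
}

class TaskShell {
  constructor(private blocks: UiBlock[]) {}

  // Only what's needed now, not what might be helpful later.
  visible(state: TaskState): string[] {
    return this.blocks
      .filter(b => b.core || (b.visibleWhen?.(state) ?? false))
      .map(b => b.moduleId);
  }

  // The shell is transient: once the goal is met it is disposed,
  // leaving no residual app container behind.
  disposeIfDone(state: TaskState): boolean {
    return state.done;
  }
}
```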
At the heart of PhAI lies the orchestration layer: a system-level logic that translates recognised intent into coordinated interaction structures. It neither parses input (that is the PLI's role) nor defines functional modules (that is the FaaS layer's role). Instead, it dynamically arranges what appears, when, and in what structure, based on the user's goal, context, and interaction history.
This layer orchestrates UI generation across time and tasks. It considers where the user is in the task flow, what steps are typical for similar goals, and how different functions should be sequenced or combined to achieve the desired outcome. Interface shells are assembled not through static rules, but via an adaptive, stateful logic engine.
By doing so, the OS becomes elastic. It contracts to reduce friction and expands when needed, eliminating the need for the user to manually manage transitions between apps or interface states. The orchestration layer ensures that every element—function, layout, flow—aligns with human reasoning and reduces the need for meta-navigation.
Where traditional systems leave users responsible for stitching together various tools, PhAI takes on this burden. It automates the logic of composition, letting users stay focused on their goals. In short, it’s not the user navigating the system. It’s the system navigating around the user.
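To ground the idea, the sketch below models the orchestration layer as a small stateful engine that releases function modules only once their ordering constraints are met. This is one possible reading of the concept, with hypothetical names throughout:

```typescript
// Sketch of the orchestration layer: given a goal and the current task
// state, decide which function modules appear next and in what order.
// A minimal rule engine stands in for the adaptive, stateful logic.
interface Step {
  moduleId: string;
  after?: string; // optional ordering constraint
}

interface Plan {
  goal: string;
  steps: Step[];
}

class Orchestrator {
  private completed = new Set<string>();

  constructor(private plan: Plan) {}

  // Modules whose ordering constraints are satisfied and not yet done.
  next(): string[] {
    return this.plan.steps
      .filter(s => !this.completed.has(s.moduleId))
      .filter(s => !s.after || this.completed.has(s.after))
      .map(s => s.moduleId);
  }

  complete(moduleId: string): void {
    this.completed.add(moduleId);
  }
}

// Usage: the system, not the user, sequences the composition.
const o = new Orchestrator({
  goal: "prepare a workshop",
  steps: [
    { moduleId: "doc.outline" },
    { moduleId: "media.slides", after: "doc.outline" },
    { moduleId: "calendar.invite", after: "media.slides" },
  ],
});
o.next(); // ["doc.outline"]
```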
PhAI marks a conceptual leap in rethinking how users interact with operating systems in AI-mediated environments. The outcome is not a polished interface, but a structured model for intent-based interaction, modular UI generation, and transparent orchestration. By aligning design research with HCI and HCAI discourse, the project offers a transferable framework for future OS architectures, grounded in UX principles and system logic.
The absence of visual mockups is intentional: our focus lies on interaction structure and system behaviour, not stylistic expression. This scope let us explore cognitive alignment, interface adaptability, and explainability at a depth that visual surface design alone could not convey.
PhAI challenged me early on — its scope felt overwhelming. Redesigning an OS is conceptually vast; finding a meaningful focus was critical. The project taught me the importance of deliberate scoping and how it enables depth without losing direction. I also learned to balance conceptual exploration with the discipline of keeping a clear project vision.
Staying on the architectural and interaction-principle level, without rushing into UI screens, was a key learning. It required resisting the pull toward visual output and instead framing a coherent system model. Finally, the project deepened my understanding of how to structure research rigorously: ensuring that findings, design tensions, and outcomes align cleanly, so that each design decision stands on solid ground.