
Designing for Autonomy: UX Principles for Agentic AI Systems


As artificial intelligence evolves from reactive chatbots to autonomous agents, design teams are facing a new frontier: creating user experiences for systems that not only respond but act on their own.

Autonomous agents—capable of goal-seeking, multi-step reasoning, and contextual adaptation—represent a dramatic leap from co-pilots and traditional assistants. They can pursue tasks with minimal supervision, call tools independently, and even collaborate with other agents in a shared runtime.

For UX practitioners, this evolution introduces a complex design challenge: How do we craft interfaces that support trust, transparency, and control in systems we don’t fully script?


From Interfacing With AI to Interfacing With Intelligence

In the Invisible Machines podcast, Robb Wilson—founder of OneReach.ai and author of The Age of Invisible Machines—summarizes this shift bluntly:

“We’re no longer designing for the machine—we’re designing for what the machine does when we’re not looking.”

This reality reshapes UX at a foundational level. Previously, designers created flows assuming the system would only act when prompted. But in agentic systems, autonomy is not a feature—it’s the default behavior.

These agents don’t live inside neat input/output boxes. They initiate tasks, communicate asynchronously, and may alter the state of applications or data without user awareness unless careful affordances are designed into the experience.


Key UX Challenges for Autonomous Agents

Designing for agentic AI surfaces four high-stakes areas for UX and product teams:

1. Trust and Transparency

Users must understand what agents are doing, why they’re doing it, and when they’ll act. Without these cues, autonomy feels like unpredictability.

  • Provide real-time status indicators (“Your travel assistant is rescheduling your flight”).
  • Include rationales or intent summaries (“Rebooking due to weather disruption forecasted”).
  • Offer viewable action histories and audit logs for actions taken autonomously (a minimal sketch of such a record follows this list).
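
One way to back these cues with data is a simple action record, sketched below in Python (field names are illustrative, not taken from any particular framework):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    """One autonomously taken action, captured for status display and audit."""
    agent_name: str   # e.g., "travel assistant"
    action: str       # e.g., "rescheduling your flight"
    status: str       # "pending" | "in_progress" | "completed" | "failed"
    rationale: str    # the short "why" shown to the user
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def status_line(self) -> str:
        """Human-readable status indicator for the UI."""
        return f"Your {self.agent_name} is {self.action} ({self.status}): {self.rationale}"

# The record behind "Your travel assistant is rescheduling your flight"
record = AgentActionRecord(
    agent_name="travel assistant",
    action="rescheduling your flight",
    status="in_progress",
    rationale="Rebooking due to forecasted weather disruption",
)
print(record.status_line())
audit_log = [record]  # an append-only list the user can review later
```

The same record can feed the status indicator, the rationale summary, and the audit log, so the three cues never drift apart.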

“The more autonomous a system becomes, the more visible its reasoning must be,” writes Wilson in The Age of Invisible Machines.

2. State and Context Management

Agents act across time, apps, and modalities. That means users need visibility into what state the agent is in—even if the interaction is paused or asynchronous.

  • Use persistent agent dashboards that summarize current objectives, pending actions, and next steps.
  • Leverage natural language recaps when users re-enter a thread (“While you were away, I confirmed your hotel and drafted your expense report”).

3. Interruptibility and Control

Autonomy doesn’t mean loss of control. Users should be able to:

  • Intervene and override agent decisions.
  • Pause or halt behaviors.
  • Adjust parameters and preferences mid-task.

UX affordances for these controls must be accessible but unobtrusive, supporting both novice and power users.
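
As a rough illustration (class and method names are hypothetical, not from any specific product), these controls might be exposed to the interface as a small handle on the running agent:

```python
class AgentHandle:
    """Minimal control surface a UI can bind pause/override affordances to."""

    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.paused = False
        self.preferences: dict[str, object] = {}

    def pause(self) -> None:
        """Halt further autonomous actions until resumed."""
        self.paused = True

    def resume(self) -> None:
        self.paused = False

    def override(self, action_id: str, decision: str) -> None:
        """Replace an agent decision with the user's choice."""
        print(f"[{self.agent_id}] action {action_id} overridden with: {decision}")

    def update_preferences(self, **prefs) -> None:
        """Adjust parameters mid-task, e.g. budget limits or notification frequency."""
        self.preferences.update(prefs)

# A novice user might only ever press "pause"; a power user can override decisions.
handle = AgentHandle("travel-assistant")
handle.pause()
handle.update_preferences(max_budget_usd=400)
handle.override(action_id="rebook-123", decision="keep original flight")
handle.resume()
```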

4. Tone and Relational UX

Autonomous agents often act for the user rather than with the user. That relationship requires a tone that is confident but not authoritarian—collaborative, not condescending.

  • Train agents to express intent, not assumption.
  • Avoid over-promising outcomes (“I’ll try to…” vs. “I’ve solved it”).
  • Use consistent agent personalities to foster familiarity over time.

The “AI-First” Design Mindset

What ties all this together is a shift toward the AI-first design philosophy—treating AI agents not as features to fit into old UI patterns, but as primary actors in the product experience.

This mirrors what Robb Wilson calls “the invisible layer” in his book:

“Interfaces will become less visible, but the systems behind them more intelligent. The UX challenge is no longer about placement—it’s about partnership.”

Companies like OneReach.ai are pioneering architectures where agents operate within runtime environments that allow for persistent goals, distributed memory, and composability across functions. These platforms demand a new design grammar—one where intent flows trump screens, and success depends on orchestration, not sequence.


From Theory to Practice: Starting Points for Design Teams

Here are a few first principles we’ve borrowed from an open-source project called AI First Principles – a kind of Agile Manifesto for this AI moment, meant to help teams avoid the fragmentation, bureaucracy, and failures we’re already seeing. Contributors include executives leading AI strategy in the federal government, large healthcare organizations, Meta, Salesforce, Amazon, Microsoft, and several startups, as well as one of the authors of the original Agile Manifesto (read the entire AI First Principles manifesto at https://www.aifirstprinciples.org/manifesto):

  • Design around outcomes, not flows: Start with the goals users want to achieve, then build agent behaviors and UX affordances around them.
  • Prototype agent behavior: Use scripts, roleplay, or logic models to define how agents should respond in different contexts—before committing to UI decisions.
  • Create explainability as a service: Design reusable components that show “why” and “how” an agent acted, so every product doesn’t reinvent this wheel.
  • Involve design early in runtime planning: Your UX team should be in the room when agent architectures and memory strategies are discussed—they directly impact experience.

Final Thoughts

Autonomous agents change the UX game. They don’t just challenge how we design interfaces—they challenge what an interface is. As AI becomes more proactive, design becomes more about choreography than control.

To keep pace, designers must embrace new mental models, new tools, and new responsibilities. We’re no longer just designing for screens—we’re designing for behavior, for trust, and for collaboration with intelligent systems that never sleep.

And if that sounds daunting, remember: we’ve done this before. Every time technology changed what was possible, design adapted to make it usable.

This time, we’re not just designing machines. We’re designing how machines act for us when we’re not watching.

Sources:

Wilson, Robb, with Josh Tyson. Age of Invisible Machines. Hoboken, NJ: Wiley, 2022.

Invisible Machines (podcast). Hosted by Robb Wilson. UX Magazine. https://uxmag.com/

AI First Principles. “AI First Principles Manifesto.” https://www.aifirstprinciples.org/manifesto (accessed August 15, 2025).


Vibe Coding: Is This How We’ll Build Software in the Future?

The world of software development is evolving at a rapid pace, which is no surprise. But lately, it’s not just about new frameworks or languages — it feels like we’re stepping into a whole new way of building software. Enter: vibe coding.

Now, if you’re thinking WTF is vibe coding… I had the same thought, and I was already vibing. Behind the playful name lies a powerful shift in how people are writing code: using AI to translate natural language descriptions into functional software.

What exactly is vibe coding?

Vibe coding is an AI-assisted approach to software development where users describe their ideas in natural language, and AI generates the code for them. This method allows people to build software using conversational language instead of writing syntax-specific code.

The term vibe coding was coined by Andrej Karpathy, a prominent AI engineer and researcher who previously worked at Tesla and OpenAI. In early 2025, Karpathy described vibe coding in a tweet that gained significant attention in the developer community. He characterized it as a process where developers “fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

The core concept of vibe coding involves:

  1. Describing software ideas and features in plain language.
  2. Letting AI interpret and generate relevant code.
  3. Iterating and refining the output through conversation with the AI.
  4. Deploying the resulting software.

This approach has gained traction due to recent advances in AI and language models, enabling non-programmers to build functional software by working with AI rather than writing code directly. Vibe coding represents a cultural shift in software development, lowering the barrier to entry and potentially transforming how creators and entrepreneurs approach building digital products.
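
To make that loop concrete, here is a minimal sketch of the describe-generate-refine cycle. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the model name and file handling are placeholder choices, and real tools like Cursor or Replit manage these details for you:

```python
# A bare-bones "vibe coding" loop: describe -> generate -> review -> refine.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "system", "content": "You write complete, runnable Python files."}]

request = input("Describe the software you want: ")
while request.strip():
    messages.append({"role": "user", "content": request})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    code = response.choices[0].message.content  # may need markdown fences stripped
    with open("app.py", "w") as f:              # overwrite the draft each iteration
        f.write(code)
    print("Draft written to app.py; run it, then describe what to change.")
    messages.append({"role": "assistant", "content": code})
    request = input("Refinement (leave blank to stop): ")
```

The point isn’t the plumbing: the interface is conversation, not syntax.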

Why does this matter?

The potential impact is massive. Consider these possibilities:

  • Lowering the Barrier to Entry: suddenly, software development isn’t just for engineers. Entrepreneurs, designers, and hobbyists can build apps without needing deep technical expertise. In the last few months, I’ve been building my first product with just myself and AI.
  • Accelerating Development Cycles: prototyping and MVPs can move from idea to execution in days instead of months.
  • Unlocking Creativity: by automating the mundane aspects of coding, developers can focus on the bigger picture — solving complex problems and designing intuitive user experiences.

The current state of vibe coding

It’s not just a futuristic dream. Several AI-powered tools are already making vibe coding a reality, allowing users to create software using AI-assisted methods. The most prominent options include:

  • Cursor: features the Composer tool, which can create, edit, and delete files in your codebase. (my personal favorite)
  • Replit: a browser-based and native app tool that offers a full-stack development environment with AI agent integration. It supports multiple programming languages and provides easy deployment options.
  • Lovable.dev: designed for beginners, it offers a simple interface with real-time preview and natural language prompting. IMO one of the best for beginners with no coding experience.
  • Bolt.new: ideal for quick prototyping with zero setup time, providing a browser-based development environment.
  • Windsurf: includes the Cascade feature, which functions similarly to Cursor’s Composer.
  • Cline: an open-source vibe coding tool.
  • Claude Code: an experimental tool developed by Anthropic, currently available in beta as a research preview.
  • GitHub Copilot: an AI pair-programmer developed by GitHub and OpenAI, with a great free tier.
  • Aider: an open-source, terminal-based AI pair-programming tool.
  • MetaGPT: the newest addition to the AI Agent coding landscape. It focuses on a multi-agent layered approach.
  • v0 by Vercel: this tool is pretty cool; it uses Shadcn for natural language component generation.
  • SuperWhisper or Wispr flow: super easy voice dictation. I use Wispr flow for talking to Cursor.

New tools and platforms are emerging all the time, expanding what’s possible with AI-assisted coding, especially for those with little to no coding experience.

How I started and where I’m at

Last November, a friend introduced me to Cursor. He was building his own Figma plugins, and I was really blown away. He gave me a brief tutorial and got me set up, and within a couple of weeks, I was on my way to experimenting and building my own Figma plugin.

After I finished this, I got a little ahead of myself and thought, “Well, if I can make a Figma plug-in, I can build my own app. Why not?” When I started this, Sonnet 3.5 wasn’t even a thing yet.

It’s important to know that I basically have no coding or engineering experience, but what I do have is years of product design and development knowledge. I think that’s what has allowed me to get so far with this process. Pair that with the fact that I’ve been deep into generative AI since 2021, refining the way I speak to large language models, and I have a pretty good idea of what works and what doesn’t.

So what have I been able to do? A lot.

Honestly, when I take a step back and look at what I’ve been able to accomplish basically by myself, I feel like it’s actually pretty impressive. I started to build an idea that I’ve had for a while, which is an AI-powered recipe app. I use AI a lot for cooking, and over the past few years, a lot of our meal planning, prep, and recipes are now actually from my custom chefGPT. So I started building lilchef.app.

Basically, every day I’m able to implement a new feature, and not small ones. The recent release of sonnet-3.7 has drastically sped up my vibe coding process. We’ve been using lil’ chef every day to help with meal planning, recipes, and cooking. I honestly never thought I’d be able to do something like this.

It makes me realize that the future is open to anyone who has an idea or wants to create something — we’re moving into a world where anyone can quickly spin up custom software that is so personalized and contextual to them and their use case that everyone is going to become far more efficient.

My current vibe code setup

I was texting with a friend the other day, and they asked for an overview of my process.

Here’s my current flow:

I typically start in Perplexity. (You can also use ChatGPT search)
I use Perplexity to do all of the research on whatever I’m thinking about: either the initial idea (which in my case is already done) or, as now, new feature ideation.

I utilize Perplexity’s Projects feature, and I keep my old Cursor rules file in there as grounding documentation. So when I’m doing research on a new feature that I want to implement, Perplexity automatically knows my entire tech stack.

When I feel good about where that feature is, I have Perplexity turn it into a simplified PRD, which I then save as a .md file in my .stories folder (think user stories).

I have my PRD broken down into job stories, and those job stories basically act as “tickets.” Here is an example of a user-story.md:

Image by Jacquelyn Halpern

I have my cursor rules set up for general front-end, backend, etc.

Image by Jacquelyn Halpern

When I get into Cursor, if it’s a more complex or larger feature, I sometimes use o3-mini in Agent mode for the foundation and then do all of the code implementation with Claude sonnet-3.7, but lately I’ve just been going straight to sonnet-3.7.

Below is a thread that lists everything I’m currently using for the setup.

The challenges: it’s not all rainbows and unicorns

As exciting as vibe coding is, it’s not without its challenges:

  • Security Risks: AI-generated code can be prone to vulnerabilities if not reviewed carefully.
  • The Black Box Problem: if you didn’t write the code, how well do you really understand it? Over-reliance on AI without grasping the fundamentals can lead to long-term issues. I struggle with this a lot. Luckily, I have a lot of really smart friends who are engineers and are always willing to help.
  • Skill Shifts: while vibe coding lowers the barrier to entry, skilled developers will still be needed to guide and collaborate with AI tools.
  • Ethical Considerations: as AI-generated code becomes more powerful, we’ll need to address concerns around bias, responsibility, and transparency.

The future is… vibing?

Vibe coding won’t replace traditional programming overnight. But it’s definitely a game-changer. It has the potential to democratize software development, speed up innovation, and empower a new generation of creators. Like I’ve said before, “if it’s imaginable, it’s achievable.”

As we move forward, we need both excitement and a healthy dose of skepticism. Addressing the risks, refining the technology, and ensuring responsible usage will be key to unlocking its full potential.

Are you currently vibe coding?

The article originally appeared on Medium.

Featured image courtesy: Jacquelyn Halpern.


The Future of Product Design in an AI-Driven World

The world of product design is changing faster than ever, thanks to the rapid advancements in artificial intelligence. My journey into building with AI started modestly, experimenting with tools like ChatGPT and then expanding into more specialized platforms like Claude, Perplexity, and, most recently, artifacts. Each step has been a revelation, not just in capability but in the way these tools fundamentally transform how we approach design and prototyping.

The evolution of AI in design

It began with simple experiments — copy-pasting between ChatGPT and Visual Studio Code, running snippets in the terminal, and juggling dependency installations. I remember the excitement of creating my first custom game. Sure, it was just a Flappy Bird clone, but it featured my graphics, characters, and rules. Seeing it work locally felt like a superpower, even if the process was iterative and manual.

When Claude entered the picture, the game changed. Code generation became smarter, requiring fewer iterations to achieve the desired outcome. And then artifacts arrived, and that’s when it truly hit me: this changes everything. The ability to build through natural language — prompting rather than coding — opened new creative pathways.

Building faster, designing better

For years, prototyping high-fidelity interactions or testing new component paradigms felt like bottlenecks in the design process. Tools like Figma and Framer are incredible, but they come with steep time investments. Translating an idea from your head into something tangible often meant spending hours perfecting animations or crafting detailed mockups.

Now, with AI, I can generate functional prototypes in minutes. A well-crafted prompt often delivers results that are “close enough” on the first attempt, letting me quickly iterate and refine. Seeing a concept in a working environment  —  not just a static prototype  —  reveals new possibilities and sparks immediate ideas for improvement.

Even more exciting is the ability to share these working prototypes directly with engineers. Instead of handing off a static design or a click-through Figma prototype, I can deliver something dynamic, something close to how it might operate in production. This shift bridges the gap between design and development, fundamentally altering how we collaborate.

The designer-engineer hybrid

AI is pushing us toward a future where designers can become design engineers. Tools like artifacts don’t just speed up our workflow; they empower us to bring our ideas to life without waiting for someone else. For years, I felt blocked because I couldn’t code well enough to execute my visions. I’d have to hire or partner with an engineer, introducing delays and losing some of that initial creative spark.

But now, AI acts as a junior developer, enabling an iterative process where I can build, test, and refine in real time. It’s not just about speed  —  it’s about independence. The shift feels monumental. We’re no longer constrained by our technical skillset, and this democratization of building opens the door for designers to step into roles that merge creative vision with technical execution.

A global productivity shift

The implications extend beyond individual workflows. As these AI tools become more accessible (free, even), they have the potential to spark a massive productivity boost across industries. Imagine the collective creativity of humanity, unleashed from technical or resource limitations.

When anyone with an idea can build without barriers, innovation accelerates. This democratization could lead to a renaissance of creativity, where people from all walks of life contribute to solving problems, designing better products, and imagining new futures.

Reimagining the role of high-fidelity design

This evolution raises an important question: What does the future hold for tools like Figma? If AI can generate high-fidelity prototypes that operate almost like production code, will designers still invest hours in pixel-perfect mockups and advanced prototyping features? I still think Figma and other design tools will be really valuable. A solid design as a base often gives a tool like Cursor or Claude artifacts a head start on your live prototype, and it makes the prompt engineering a bit easier when you can communicate visually as well as verbally.

The answer might lie in how we define our roles. Instead of focusing on tools and workflows, designers can focus on vision, strategy, and problem-solving. High-fidelity design won’t disappear  —  it will transform. Prototyping in AI environments means iterating faster, collaborating more effectively, and delivering solutions that are closer to reality from the start.

Where we go from here

AI isn’t just a tool; it’s a collaborator. A really good one at that.

For designers, this means rethinking how we work, how we communicate, and what skills we prioritize. It’s a chance to help shape a future where the barriers between creativity and execution dissolve.

Remember, AI isn’t meant to replace you; it’s meant to elevate you.

Stay curious.

The article originally appeared on Medium.

Featured image courtesy: Jacquelyn Halpern.


Understanding AI Agent Runtimes and Agent Frameworks

In the rapidly evolving world of AI, agent runtimes have emerged as environments where AI agents can be executed—designed, tested, deployed, and orchestrated—to achieve high-value automation. When discussing the development and deployment of AI agents, runtimes are often confused with agent frameworks. While they may sound similar, they serve distinct purposes in the AI ecosystem. Understanding the unique capabilities of runtimes and frameworks can make it more efficient to scale AI agents within an organization.

Overview of AI Agent Runtimes and Frameworks

AI agent runtimes provide the infrastructure for executing AI agents. Runtimes handle orchestration, state management, security, and integration. AI agent frameworks focus on building agents and offer tools for reasoning, memory, and workflows. Frameworks most often need pairing with a separate runtime for production deployment. 

A full lifecycle solution combines runtimes and frameworks, enabling end-to-end management from inception through ongoing runtime operations, maintenance, and evolution.

Understanding AI Agent Runtime

An AI agent runtime is the execution environment where AI agents operate. It’s the infrastructure or platform that enables  agents to run, process inputs, execute tasks, and deliver outputs in real-time or near-real-time. A runtime is the engine that powers the functionality of AI agents, ensuring they can interact with users, APIs, or other systems safely and efficiently.

Key characteristics of AI agent runtimes:

  • Execution-focused: Runtimes provide the computational resources, memory management, and processing capabilities needed to run AI agents.
  • Environment-specific: Runtimes handle tasks like scheduling, resource allocation, and communication with external systems (like cloud services, databases, or APIs).
  • Highly Scalable: Runtimes ensure the agent can handle varying workloads, from lightweight tasks to complex, multi-step processes.

Examples of AI agent runtimes: 

  • Cloud-based platforms like AWS Lambda for serverless AI execution
  • Kubernetes for containerized AI workloads
  • Dedicated runtime environments like those provided by xAI for running Grok models
  • No-code platforms like OneReach.ai’s Generative Studio X (GSX) serve as full lifecycle solutions, combining runtimes and frameworks to orchestrate multimodal AI agents across channels like Slack, Teams, email, and various voice channels

Runtimes enable real-time automation and workflow management. An AI agent runtime manages the compute resources and data pipelines needed for AI agents to process user queries and generate personalized responses.

Understanding AI Agent Frameworks

An AI agent framework is a set of tools, libraries, and abstractions designed to simplify the development, training, and deployment of AI agents. It provides developers with pre-built components, APIs, and templates to create custom AI agents without starting from scratch.

Key characteristics of AI agent frameworks:

  • Development-focused: Frameworks streamline the process of building, configuring, and testing AI agents.
  • Modular: Frameworks offer reusable components like natural language processing (NLP) modules, decision-making algorithms, and integration tools for connecting to external data sources.
  • Flexible: Frameworks allow developers to define the agent’s behavior, logic, and workflows, with support for specific use cases ranging from chatbots to task automation to multi-agent systems.

Examples of AI Agent Frameworks: 

  • Frameworks like LangChain for building language model-powered agents
  • Rasa for conversational AI
  • AutoGen for multi-agent collaboration

A developer might use a framework like LangChain to design an AI agent that retrieves data from a knowledge base, processes it with a large language model, and delivers a response, while abstracting away low-level complexities.
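
As a rough sketch of that kind of chain (using LangChain’s expression language; the `retrieve_context` function is a hypothetical stand-in for a real knowledge-base or vector-store lookup):

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

def retrieve_context(question: str) -> str:
    """Hypothetical knowledge-base lookup; a production agent would query a vector store."""
    return "Refund policy: purchases can be returned within 30 days of delivery."

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | llm | StrOutputParser()  # the framework abstracts the low-level plumbing

question = "Can I return my order?"
print(chain.invoke({"context": retrieve_context(question), "question": question}))
```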

Key differences between agent runtimes and agent frameworks

How Runtimes and Frameworks Fit Together

AI agent runtimes and frameworks are complementary. Frameworks are used to design and build AI agents, defining their logic, capabilities, and integrations. Once agents are developed, they are deployed into a runtime environment where they can operate at scale, processing real-world inputs and interacting with users or systems. For example, an AI agent built using LangChain (framework) might be deployed on a cloud-based runtime like AWS or xAI’s infrastructure to handle user queries in production.

Runtimes often include or integrate framework-like features to streamline the process. OneReach.ai’s GSX platform acts as a runtime for orchestrating AI agents but incorporates no-code building tools that function similarly to a framework, allowing users to quickly design, test, and deploy agents without deep coding. 

Other pairings include LangChain with AWS Lambda, where LangChain handles agent logic and AWS provides the scalable runtime, as well as Rasa (for conversational flows) with Kubernetes (for containerized execution).
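
To make the pairing concrete, here is a minimal sketch of framework-built agent logic wrapped in a serverless runtime entry point; `run_agent` is a placeholder for whatever the framework produces (for instance, the chain sketched earlier):

```python
import json

def run_agent(question: str) -> str:
    """Placeholder for framework-built agent logic (e.g., a LangChain chain)."""
    return f"Agent answer for: {question}"

def lambda_handler(event, context):
    """AWS Lambda entry point: the runtime handles scaling, retries, and invocation."""
    body = json.loads(event.get("body", "{}"))
    answer = run_agent(body.get("question", ""))
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": answer}),
    }
```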

Integrated vs. Separate: A Philosophical Distinction Between Approaches

Not all runtimes include agent building features. Some, like AWS Lambda or Kubernetes, are pure execution environments without built-in tools for designing agent logic, requiring separate frameworks for development. Others, such as GSX (OneReach.ai), integrate no-code interfaces for creating and customizing agents directly into the runtime, blending the two elements.

This distinction reflects a philosophical position in AI design: Should building and deployment be tightly integrated into a single platform, or kept separate for modularity? Proponents of separation argue it allows for greater flexibility—developers can mix and match best-in-class frameworks with specialized runtimes, fostering innovation and customization. However, integrating both offers significant advantages, particularly for companies without highly trained teams. 

By controlling both building and deployment, integrated platforms reduce complexity, minimize handoffs between tools, and ensure seamless transitions from design to production. This is especially beneficial for non-technical users or smaller organizations in sectors like HR or customer support, where quick setup, no-code accessibility, and reliable orchestration across channels enable rapid AI adoption without the need for expert developers or data scientists.

Estimated Project Time and Resources

For separate frameworks and runtimes (e.g., LangChain + AWS Lambda), building a basic AI agent might take 4-12 weeks, requiring 1-3 skilled developers (with Python and AI expertise) and potentially $10,000-$50,000 in initial costs (salaries, cloud fees, and setup). This suits teams focused on customization but demands more upfront investment in skills and integration. Integrated platforms like OneReach.ai can reduce this to days or 1-4 weeks for prototyping and deployment, often needing 1-2 non-technical users or business analysts, with costs around $500-$5,000 monthly (subscriptions) plus minimal setup—ideal for faster ROI in resource-constrained environments.

Pros and Cons of All-in-One Solutions

Pros and Cons of Frameworks + Runtimes

Can You Choose One Over the Other?

The choice between an AI agent runtime and a framework depends on your project’s stage and needs. Frameworks excel in the development phase, offering flexibility for custom logic, experimentation, and integration with specific AI models or tools—ideal when you need granular control over agent behavior. However, they require more coding expertise and don’t handle production-scale execution on their own, often leading to longer timelines (e.g., weeks for development) and higher resource demands (e.g., dedicated engineering teams).

Runtimes shine in deployment and operations, providing the infrastructure for reliable, scalable performance, including resource management and real-time processing. They are better for ensuring agents run efficiently in live environments but may lack the depth for initial agent design unless they include integrated building features. 

Platforms like OneReach.ai blur the lines by combining runtime capabilities with framework-style no-code tools, making them suitable for end-to-end workflows but potentially less customizable for advanced users—while cutting project time to hours or days and reducing the need for specialized skills.

In essence, use a framework if your focus is innovation and prototyping; opt for a runtime if reliability and scalability in production are paramount. For integrated solutions, choose platforms that handle both to simplify processes for less technical teams, with shorter timelines and lower resource barriers.

Who Should Choose One vs. the Other?

Choose a framework if you’re a developer, AI engineer, or researcher building custom agents from scratch. LangChain and AutoGen are perfect for teams with coding skills who need modularity and want to iterate on agent intelligence—like R&D groups or startups experimenting with novel AI applications—but they entail 4-12 weeks and engineering resources for a full project.

Operations teams, IT leaders, and enterprises focused on deployment and maintenance should gravitate toward runtimes. OneReach.ai and AWS Lambda suit non-technical users and large organizations prioritizing quick orchestration, automation across channels, and handling high-volume tasks without deep development overhead—especially in sectors like HR, finance, or customer support where speed to production (days to weeks) matters more than customization. Integrated runtimes are ideal for companies lacking highly trained teams, as they provide end-to-end control for easier adoption with reduced time and costs.

For most companies—particularly mid-to-large enterprises without deep AI expertise or those prioritizing speed and reliability—an all-in-one AI agent runtime with building capabilities spanning the full lifecycle is likely the best solution. This approach simplifies deployment, reduces hidden costs, and ensures scalability and security out-of-the-box, enabling faster ROI (e.g., setup in hours vs. months). All-in-one platforms suit common use cases like workflow automation or chatbots.

Companies with strong technical teams that are experienced in AI projects and with high customization requirements might pair a framework with a runtime for more flexibility, with higher complexity and risk. Pilot projects with tools like LangGraph (full lifecycle) or CrewAI (framework) can help organizations decide what will best suit their needs.

Conclusion

In summary, AI agent frameworks are about building the agent—providing the tools to create its logic and functionality. AI agent runtimes are about running the agent, ensuring it has the resources and environment to perform effectively. Platforms like OneReach.ai demonstrate how runtimes can incorporate framework elements for a more integrated experience, highlighting the philosophical debate on separation vs. integration. Understanding this distinction is crucial for developers and organizations looking to create and deploy AI agents efficiently.

For those interested in exploring AI agent development, frameworks like LangChain or Rasa are great starting points, while platforms like AWS or xAI’s API services offer robust runtimes for deployment.


Five Myths Debunked: Why Agentic AI Is Much More Than Chatbots

When you start thinking about Agentic AI in the right way, you begin to see that it’s not a piece of technology to be wielded; it’s part of a business strategy that sequences various technologies to automate tasks and processes in ways that surpass what humans alone are capable of. This post debunks five common myths about Agentic AI that can hold organizations back in a moment when they absolutely need to surge ahead.

The myths start with the misconception that Agentic AI is similar to the ways we’ve been building and experiencing software. Organizations also often feel pressure to start with big, audacious buildouts, when starting small on internal use cases can forge scalable growth. It’s also important to identify use cases for automation that are truly high-value and find ways to orchestrate multi-agent AI systems to complete objectives dynamically, rather than following predefined routes.

Myth #1: Agentic AI is software as usual

With so many apps and SaaS solutions quickly tacking large language models (LLMs) onto their existing user interfaces (UIs), it’s tempting to want to believe that Agentic AI can simply be added to traditional software. In reality, the successful implementation of Agentic AI requires an entirely different approach to software creation.

The linear, staggered waterfall approach to software creation has sprung countless leaks over the years, and applying it within the framework of designing Agentic solutions is a surefire way to drown. Rather than spending months guessing what users want and initiating a laborious and rigid buildout around perceived needs, Agentic AI begins with building. AI agents are quickly propped up around a use case using low- and no-code building tools. The solution is tested and iterated on right away, with multiple iteration cycles taking place over the course of a single day.

Another key distinction is that Agentic AI works around objectives, rather than following predefined routes. In that sense, the work of creating and evolving AI agents is a bit like the process pharmaceutical companies use when developing a new drug. A new medication that’s being investigated as a cure for gout might turn out to be a promising hair growth solution. These kinds of high-value surprises are uncovered through trial and error, fast iteration, and testing.

When it comes to Agentic AI vs chatbot capabilities, traditional approaches to conversational AI fall perilously short. In the not-too-distant past, chatbots used tools like natural language processing (NLP) to understand user requests and automate responses. With the advent of generative tools like LLMs, chatbots are better at disambiguating user requests and can deliver more dynamic responses, but they lack agency. AI agents use LLMs to interact with users and then communicate with other agents, knowledge bases, and legacy software to do real work. Beware of bolt-on solutions calling themselves AI agents. They are chatbots in disguise.

Myth #2: It’s imperative to start big

In order to get moving with Agentic AI, most organizations don’t need a large-scale, public-facing deployment. The key is to think big and start small. It’s often more effective to begin within your organization, automating internal tasks with help from the people who understand them best. This allows orgs to get a handle on sequencing technology in ways that are more efficient and rewarding than what humans are able to do on their own.

Starting internally allows orgs to lay the groundwork for an ecosystem of Agentic AI that can grow to include customers once they’ve figured out how to optimize Agentic experiences. Starting small and internally requires more than just giving teams access to a sanctioned LLM. At a minimum, there should be a strategy in place for connecting AI agents to some sort of knowledge management system, such as retrieval-augmented generation (RAG).

In one example, an enterprise organization used an agentic system to reduce ticket resolution times from six weeks to one, and cut inbound calls by 35%. They were also able to lower CTS (cost to serve) by 40% and — by reducing the workload of human agents in their contact centers — increase productivity and improve their CSAT (customer satisfaction) score to 83%.

Myth #3: Operations are improved by automating existing workflows

The first move organizations often make when developing use cases for Agentic AI is to try and automate the workflows and processes that humans are already using. While this approach can often get the ball rolling, the real value comes with creating automations that surpass what humans alone are capable of.

Someone placing a call to the IRS (Internal Revenue Service) to follow up on a letter they received in the mail usually encounters a fragmented approach to automation that lumbers along in well-worn ruts. The first hurdle is figuring out which of the unintelligible clusters of voice-automated options most closely applies to their situation. They might repeat that process a few more times as they move through murky layers of the IRS phone tree, unsure if they’re headed to the right department and expecting to wait on hold for hours to find out.

What if, instead:

  1. The IRS greeted callers with an AI agent that could verify their personal information while simultaneously cross-referencing recent activity.
  2. The AI agent could infer that the taxpayer is calling about a letter that was sent last week.
  3. The AI agent sees that a payment was received after the letter was sent.
  4. The system confirms the reason for the call and relays that information, providing a confirmation number.
  5. The user ends the call (fully satisfied) in under five minutes.

Most organizations are teeming with hobbled processes that humans set up to work around disparate systems. Rather than automating those workflows, savvy business and IT leaders are looking for better ways to complete the objectives buried at the center of the mess.

As Robb Wilson (OneReach.ai CEO and founder) wrote in our bestselling book on Agentic AI, Age of Invisible Machines, “not only can Agentic AI running behind the scenes in an organization handily obscure the mess of systems (and graphical UIs), it also binds your ecosystem by standardizing communications, creating a feedback loop that can evolve automations for all your users — customers and employees.”1

Myth #4: All it takes is some AI Agents

The hype around AI agents often obscures a fundamental truth about what they really are. “AI agents” are not a distinct kind of technology. Rather, they are part of a broader approach for using LLMs as a conversational interface. LLMs have made it far easier for users to initiate action with conversational prompts and for agents to either execute existing code or write their own code. These actions happen within a defined scope, ostensibly to both protect the user and indemnify the organization, but also to create something more guided and specific than the “ask me anything” experience of using something like ChatGPT.

Agents with real agency will have an objective, and they will either complete it or hand it off to another agent (if they can’t complete it, or after they complete it). They can also hand off to a human agent. To reiterate the point from earlier, this requires more than bolting AI onto existing software. Agentic AI won’t thrive in any single tech provider’s black box. The goal of Agentic AI is not to accumulate separate agents for individual tasks, but to orchestrate multiple AI agents to collaborate around objectives.
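
As a rough, framework-agnostic sketch of that hand-off pattern (all names here are illustrative):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Objective:
    description: str
    completed: bool = False

class Agent:
    def __init__(self, name: str, can_handle: Callable[[Objective], bool]):
        self.name = name
        self.can_handle = can_handle

    def pursue(self, objective: Objective, peers: list["Agent"]) -> None:
        if self.can_handle(objective):
            print(f"{self.name}: completing '{objective.description}'")
            objective.completed = True
            return
        # Can't complete it: look for another agent to hand the objective off to.
        for peer in peers:
            if peer is not self and peer.can_handle(objective):
                print(f"{self.name}: handing off to {peer.name}")
                peer.pursue(objective, peers)
                return
        # No agent can complete it: escalate to a human.
        print(f"{self.name}: escalating '{objective.description}' to a human agent")

scheduler = Agent("scheduler", lambda o: "schedule" in o.description)
verifier = Agent("identity-verifier", lambda o: "verify" in o.description)
agents = [scheduler, verifier]
scheduler.pursue(Objective("verify the signer's identity"), agents)
```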

Looking at the example of Contract Lifecycle Management (CLM), AI Agent Orchestration begins by examining each phase in a contract lifecycle and thinking through its component steps. If the negotiation process is happening asynchronously, for example, an AI agent might be used to notify parties on both sides when terms have been revised or updates have been requested. Using a design pattern like “nudge,” the agent can keep negotiations moving forward by giving gentle reminders when people need to make decisions. Another AI agent might maintain a change log that’s available to all parties with the ability to create custom views of updates and requests based on user requests (i.e., “show me all of the changes that the client has requested that will require approval from our partners”). There are multiple agents collaborating at each step in the lifecycle.

Figure 1: An Agentic Approach to CLM. Image source: OneReach.ai

Agentic AI can streamline the approval process by handling things like scheduling, identity verification, and knowledge management. The skills that individual agents specialize in (scheduling, identity verification, knowledge management) are not exclusive to any stage of CLM; they are functions that have value across departments and processes within an organization. All of it, however, hinges on the orchestration of AI agents.

Myth #5: There is one platform to rule all AI Agents

To give AI agents actual agency requires an orchestration and automation platform that is open and flexible. Organizations need to be able to build AI agents quickly, using no- and low-code tools. They need those agents to communicate with their legacy software systems. AI agents also need to be able to share information with other AI agents, and they all need to be connected to secure knowledge bases that align with the goals of their organization.

These are just the table stakes. To fully embrace Agentic AI, orgs need a technology ecosystem that can quickly integrate the best new technologies as they appear in the marketplace. The marketplace is already headed in this direction, as evidenced by the surge of interest in Model Context Protocol (MCP). Released by Anthropic last November, MCP makes it far easier for AI agents to access the systems where data lives. MCP servers exist in an open-source repository, and Anthropic has shared pre-built servers for enterprise systems, such as Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.

Figure 2: MCP Deep-Dive. Image source: Anthropic

Sam Altman announced that OpenAI will support MCP across its products, and Google has also released their own Agent2Agent (A2A) protocol as a complement to MCP with support from 50+ partners, including Atlassian, Intuit, PayPal, Salesforce, ServiceNow, and Workday; and leading service providers, such as Accenture, BCG, Capgemini, Cognizant, Deloitte, McKinsey, and PwC.

Microsoft also announced that its Windows Subsystem for Linux (WSL) is now fully open source, which they see as part of the “Agentic Web.” As part of the opening keynote at Microsoft Build 2025, their CTO, Kevin Scott, said, “You need agents to be able to take actions on your behalf … and they have to be plumbed up to the greater world. You need protocols, things like MCP and A2A … that will help connect in an open, reliable, and interoperable way.”2 In this moment, organizations need to find platforms that can help them build an open framework for Agentic AI that allows them to integrate new tools as they emerge and grow freely alongside the marketplace.


Sources:

1. Robb Wilson and Josh Tyson, “Age of Invisible Machines”
2. Microsoft Build 2025, opening keynote

The article originally appeared on OneReach.ai.

Featured image courtesy: Josh Tyson.


Companies Aren’t Prepared for Outbound AI in the Hands of Consumers

People tend to believe that companies are going to use AI to eliminate as many jobs as possible. It stands to reason that some businesses will try this approach—even though it’s a complete misuse of the technology. What we’re currently seeing, however, is individuals picking up generative tools and running with them, while companies are dragging their feet into integration efforts.

What might happen as a result of this is that consumers will be the ones to bring down companies. There are laws that prevent companies from spamming people with unwanted outbound messages, but there are none stopping consumers from flooding contact centers with AI agents.

It’s basically free for people to cobble together agents that can robocall service centers and flood systems with data designed to get them discounts, or worse, to confuse and deceive. Customers might start hammering a company because word gets out that they give a credit for certain circumstances. This could create a snowball effect where their call centers are flooded with millions of inbound inquiries that are lined up to keep calling, all day long.

Whatever their intentions, it’s free and easy for consumers to scale ad hoc efforts to levels that will overwhelm a company’s resources. So what are companies going to do when their customers go outbound with AI? 

I asked this question recently on the London Fintech Podcast and the host, Tony Clark, had the response I’ve been looking for: “You may have let the genie out of the bottle now, Robb,” he said, looking a bit shocked. “I’m sure the tech is available. I imagine my 14-year-old could probably hook up ElevenLabs or something with the GPT store and be off on something like that.”

The truth is, most companies that are evaluating agentic AI are thinking myopically about how they will use these tools offensively. They are ignoring the urgent need for agentic systems that can provide defensive solutions. 

These systems must allow AI agents to detect and stop conversations that are just meant to burn tokens. They need human-in-the-loop (HitL) functionality to make sure agents’ objectives are validated by a person who takes responsibility for the outcomes. This environment also needs canonical knowledge—a dynamic knowledge base that can serve as a source-of-truth for AI agents and humans.
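
As a rough sketch of what human-in-the-loop validation can look like at the code level (names are illustrative, not a specific product’s API):

```python
from dataclasses import dataclass

@dataclass
class AgentObjective:
    description: str
    estimated_token_cost: int
    owner: str = ""        # the human who takes responsibility for the outcome
    approved: bool = False

def request_human_approval(objective: AgentObjective, approver: str) -> AgentObjective:
    """Gate: no objective runs until a named person signs off on it."""
    print(f"Approval requested from {approver}: {objective.description} "
          f"(~{objective.estimated_token_cost} tokens)")
    objective.owner = approver
    objective.approved = True   # in a real system this would wait on an actual decision
    return objective

def run(objective: AgentObjective) -> None:
    if not objective.approved:
        raise PermissionError("Objective has not been validated by a human owner.")
    print(f"Running '{objective.description}' on behalf of {objective.owner}")

run(request_human_approval(
    AgentObjective("Issue a goodwill credit to caller #4821", estimated_token_cost=1200),
    approver="support-team-lead",
))
```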

These are the base requirements of an agent runtime. A critical component to integration, an agent runtime is an environment for building, testing, deploying, and evolving AI agents.

  • Runtimes maintain agent memory and goals across interactions
  • Runtimes enable access to external tools like MCPs, APIs, and databases
  • Runtimes allow multi-agent coordination
  • Runtimes operate continuously in the background

And in terms of helping businesses use AI defensively, runtimes handle input/output across modalities like text and voice, so AI agents can spot bad actors and alert humans. In UX terms, it’s the backstage infrastructure that transforms your product’s assistant from a button-press chatbot into a collaborative, contextual, goal-oriented experience that can proactively protect organizations and their customers. However companies choose to frame it, there’s emerging risk in sitting back and waiting to see what will happen next with AI. It just might be the end of your company.


The AI Accountability Gap

Here’s something every CEO knows but won’t say out loud:

When the AI screws up, somebody human is going to pay for it.

And it’s never going to be the algorithm.

The board meeting reality

Picture this scene. You’re in a boardroom. The quarterly numbers are a disaster. The AI-powered marketing campaign targeted the wrong audience. The automated pricing strategy killed margins. The chatbot gave customers incorrect information that triggered a PR nightmare.

The board turns to the executive team and asks one question:

“Who’s responsible?”

Nobody — and I mean nobody — is going to accept “the AI made a mistake” as an answer.

They want a name. A person. Someone accountable.

This is the reality of AI deployment that nobody talks about in the hype articles and vendor demos.

Why human accountability becomes more critical, not less

Most people think AI reduces the need for human responsibility.

The opposite is true.

When AI can execute decisions at unprecedented speed and scale, the quality of human judgment becomes paramount. A bad decision that might have impacted dozens of customers can now impact thousands in minutes.

The multiplier effect of AI doesn’t just amplify results, it amplifies mistakes.

The new job description

In an AI-driven world, the most valuable skill isn’t prompt engineering or machine learning.

It’s defining clear objectives and owning the outcomes.

Every AI system needs a human owner. Not just someone who can operate it, but someone who:

  • Defines what success looks like.
  • Sets the guardrails and constraints.
  • Monitors for unexpected outcomes.
  • Takes responsibility when things go sideways.

This isn’t a technical role. It’s a leadership role.

The forensic future

When AI systems fail — and they will — the investigation won’t focus on the algorithm.

It’ll focus on the human who defined the objective.

“Why did the AI approve that high-risk loan?” “Because Sarah set the criteria and authorized the decision framework.”

“Why did the system recommend the wrong product to premium customers?” “Because Mike’s targeting parameters didn’t account for customer lifetime value.”

This isn’t about blame. It’s about clarity. And it’s exactly what executives need to feel confident deploying AI at enterprise scale.

The three levels of AI accountability

  1. Operational Accountability: Who monitors the system day-to-day? Who spots when something’s going wrong? Who pulls the plug when needed?
  2. Strategic Accountability: Who defined the objectives? Who set the success metrics? Who decided what tradeoffs were acceptable?
  3. Executive Accountability: Who authorized the AI deployment? Who’s ultimately responsible for the business impact? Who faces the board when things go wrong?

Every AI initiative needs clear owners at all three levels.

Why this actually accelerates AI adoption

You might think this responsibility framework slows down AI deployment.

It does the opposite.

Executives are willing to move fast when they know exactly who owns what. Clear accountability removes the “what if something goes wrong?” paralysis that kills AI projects.

When leaders know there’s a human owner for every AI decision, they’re comfortable scaling quickly.

The skills that matter now

Want to be indispensable in an AI world? Master these:

  1. Objective Definition: learn to translate business goals into specific, measurable outcomes. “Improve customer satisfaction” isn’t an objective. “Reduce support ticket response time to under 2 hours while maintaining 95% resolution rate” is.
  2. Risk Assessment: understand the failure modes. What happens when the AI makes a mistake? How quickly can you detect it? What’s the blast radius?
  3. Forensic Thinking: when something goes wrong, trace it back to the human decision that created the conditions for failure. Build that feedback loop into your process.
  4. Clear Communication: if you can’t explain your objectives clearly to a human, you can’t explain them to an AI either.

The uncomfortable questions

Before deploying any AI system, ask:

  • Who owns this outcome?
  • What happens when it fails?
  • How will we know it’s failing?
  • Who has the authority to shut it down?
  • What’s the escalation path when things go wrong?

If you can’t answer these questions, you’re not ready to deploy.

The leadership opportunity

This shift creates a massive opportunity for the leaders who get it.

While everyone else is chasing the latest AI tools, the smart money is on developing the human systems that make AI deployable at scale.

The companies that figure out AI accountability first will move fastest. They’ll deploy more aggressively because they’ll have confidence in their ability to manage the risks.

(This pairs perfectly with the abundance potential I discussed in my recent piece on how AI amplifies human capability rather than replacing it. The organizations that master both the opportunity and the responsibility will dominate their markets.)

The bottom line

AI doesn’t eliminate the need for human accountability.

It makes it more critical than ever.

The future belongs to leaders who can clearly define what success looks like and own the results — good or bad.

The algorithm executes. Humans are accountable.

Make sure you’re ready for that responsibility.


References:

  1. Anthony Franco, AI First Principles
  2. Robb Wilson, The Age of Invisible Machines

The article originally appeared on LinkedIn.

Featured image courtesy: Anthony Franco.


The Dark Side of Chasing AGI

Welcome back to Invisible Machines. I’m Josh Tyson, a contributing editor here at UX Magazine, and I am joined by resident visionary, Robb Wilson, CEO and Co-founder of OneReach.ai. 

  • Robb and I co-authored the bestselling book Age of Invisible Machines, and this podcast is where we continue our explorations of agentic AI.
  • Today we’re excited to welcome Karen Hao, renowned tech journalist and author of the instant New York Times bestseller Empire of AI.
  • In her book, Karen distills more than a decade’s worth of in-depth reporting into a detailed and highly readable account of OpenAI’s rise to power and why leadership at the company abandoned the promise to keep their research open.
  • We also talk about how their target of reaching AGI first is undercut by the lack of a clear definition of what artificial general intelligence even is.
  • This is another conversation where anthropomorphization comes into play, and why making these tools behave more like humans might actually make them less effective (and more dangerous).
  • Robb and I were also intrigued by Karen’s reporting on a small nonprofit in New Zealand that used AI to preserve the Māori language. Their scaled back approach to establishing a clear goal and curating data in a responsible manner shows how more focused approaches to integrating AI into business operations might win out over the blunt force of large language models.

Now, get ready for a fascinating discussion with a real one, Karen Hao.


The Real Impact of AI on Designers’ Day-To-Day and Interfaces: What Still Matters

Designers today are quick to praise how AI speeds up their workflows. And that makes sense — businesses now more than ever need fast drafts, rapid testing, and quick launches to keep users engaged.

Yet, many designers still miss the mark, not fully leveraging their expertise when working with AI in products. What’s the result? Lots of hyped AI-powered products are creating noise instead of value, resulting in experiences that feel shallow.

After 10 years in design, I’ve learned to take innovations with a grain of salt – and turn them from momentary trends into practical approaches. That’s why I want to share how AI really changes the daily work of designers, how it shifts interfaces, and what parts of the design process never change.

Throughout this article, I’ll share examples, practical advice, and insights from my experience, helping you understand where AI fits and where human skill is still key.

If you want a clear, honest take on AI’s real impact on design and business, keep reading.

Why AI became a core part of designers’ daily workflow

To better grasp how AI can enhance design at every stage, it helps to first outline how design work traditionally unfolds — before AI became part of the process.

Broadly, product designers have typically worked in two main ways:

Image by Oleh Osadchyi

Both approaches have been facing the same challenge: businesses are constantly tightening budgets and speeding up timelines. This has pushed many teams into making trade-offs. Designers, often spread thin, end up skipping deeper discovery work. At best, usability testing happens just before release — rushed and insufficient.

And then came artificial intelligence.

From my experience, AI can support designers across three key phases of the product iteration cycle:

  • Input and product analysis.
  • Research and exploration.
  • Implementation and testing.

Let’s take a closer look at them.

1. Analysis

Plenty of tools now offer AI-generated summaries of dashboards, feedback, and user behaviour. They’re handy, especially when you’re scanning for trends. However, they aren’t always accurate.

AI can highlight what’s visible, but not always what’s important. Sometimes the insights that actually drive results are buried deeper, and you won’t find them unless you look for yourself, because:

  • AI generates dry, surface-level summaries based on available data.
  • It doesn’t always distinguish between signal and noise, or highlight what affects outcomes.
  • Some insights that impact the result can be entirely different from what AI flags.

Tip: Treat AI summaries as a starting point. If something catches your eye, dig deeper. Go back to the raw data, validate the insight, and confirm whether it’s grounded in actual user behaviour or just looks interesting on paper.

2. Research

Research is one of the most time-consuming (and often underappreciated) parts of product design, and it can easily eat up hours. AI can help you:

  • Pull key takeaways from customer interviews, docs, or Notion pages.
  • Analyse mentions of a specific topic across multiple URLs or sources.
  • Scan hundreds of App Store reviews without reading them one by one.
  • Generate a quick list of competitors and extract what features they offer, how they’re positioning themselves, or what users praise/complain about.

However, don’t expect it to do the whole job. AI is more like an additional researcher on the team who needs to be guided, given clear direction, and double-checked.

Tip: Become a more T-shaped specialist and learn to write some scripts and prompts. Understanding how AI thinks will help you guide it better and speed up your workflow.

For example, instead of asking your analytics team to rebuild a dashboard, you can download a page with reviews (as HTML, for example). Then have AI parse it, turn it into a table, and sort by sentiment or keywords. You’ll uncover patterns in minutes, without waiting and without eating up your teammates’ time.
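To make that concrete, here is a minimal sketch of the parse-and-sort step in Python. It assumes a saved reviews page, a hypothetical CSS selector, and a crude keyword-based sentiment score; swap in your own selector, keywords, or an LLM call for the scoring.

```python
# Minimal sketch: parse a saved reviews page and sort entries by a crude
# keyword-based sentiment score. Selector and keyword lists are hypothetical.
import csv
from bs4 import BeautifulSoup

POSITIVE = {"love", "great", "easy", "fast"}
NEGATIVE = {"bug", "crash", "slow", "confusing"}

def score(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

with open("reviews.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

# Hypothetical selector; inspect the exported page to find the right one.
reviews = [el.get_text(strip=True) for el in soup.select(".review-text")]

# Most negative first, so the loudest complaints surface at the top.
rows = sorted(((score(r), r) for r in reviews), key=lambda pair: pair[0])

with open("reviews_by_sentiment.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["sentiment_score", "review"])
    writer.writerows(rows)
```

Even this rough pass shows which complaints cluster together before anyone reads a single review manually.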

3. Implementation

In this broad stage, you can speed up the creation of first drafts. At every step (from the landing page to the screen flows), designers have to generate a lot of material, which, let’s be honest, not everyone can keep up with. For example, during our interviews, only a third of 600 candidates knew the basic processes of this stage.

That’s why, with some AI guidance, you can stay afloat and:

  • Generate early concepts and illustrations.
  • Stress-test layout clarity or colour palettes.
  • Explore UX patterns or flow variations without redrawing everything from scratch.

Tip: If you want to make your drafting collaboration more effective, feed the AI 10+ reference visuals that reflect your brand style. Keep in mind that AI is only as good as the data you give it. It doesn’t have an intuitive eye.

Take Figma’s AI launch as an example. It could create UI screens in seconds, which was great for quick drafts. But after a couple of weeks, they disabled the feature. The artificial assistant was trained on only a few companies’ design systems, so many screens ended up looking very similar.

Next practical tip: try to be clear and detailed in describing your visuals. Ideally, start by writing a clear prompt that describes the style and illustration details, and include some reference images. Then, ask the AI to generate JSON that explains the details of the prompt — this way, you can see how well it understood you.

If the result isn’t quite right, tweak the output or adjust it. For example, if you’re aiming for a thin line that resembles a bone, the AI might miss that subtlety, which is why some manual fine-tuning is necessary. Once you’ve got closer to your vision, you can use that refined JSON as a reference for further iterations.
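Here is a rough sketch of that prompt-to-JSON loop. The `ask_model` function is a stand-in for whatever image or LLM API you use, and the JSON fields are illustrative rather than any standard schema.

```python
# Sketch of the prompt -> JSON spec -> manual correction -> iterate workflow.
import json

STYLE_PROMPT = (
    "Flat vector illustration, thin 2px outlines, warm palette, "
    "matching the attached brand references."
)

def ask_model(prompt: str) -> str:
    # Stand-in for a real image/LLM API call; returns a canned reply here so
    # the sketch runs end to end.
    return (
        '{"style": "flat vector", "line_weight": "medium", '
        '"palette": "warm", "subjects": ["bone-shaped divider"]}'
    )

def get_spec(prompt: str) -> dict:
    raw = ask_model(
        "Return a JSON object describing how you interpret this illustration "
        f"brief (keys: style, line_weight, palette, subjects): {prompt}"
    )
    return json.loads(raw)

spec = get_spec(STYLE_PROMPT)

# The model missed the thin line, so correct the spec by hand and reuse the
# refined version as the reference for the next generation round.
spec["line_weight"] = "thin, 2px"
next_prompt = f"{STYLE_PROMPT}\nFollow this spec exactly: {json.dumps(spec)}"
print(next_prompt)
```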

4. Testing

During pre-AI testing, designers had to constantly ask developers to build and release something, then wait for feedback before they could launch it properly.

However, today, with the right processes in place and a good design system with code-ready components, it’s not that hard for a designer to build the front-end for a user flow on their own. Just enough to see how it works in practice. Sometimes, it doesn’t even need developers to add logic behind it — just a working prototype that feels close to real.

You can test in Figma with clickable flows, or go one step further and share a live, browser-based version where users actually input data. It’s more real and more insightful, and users feel more comfortable with it.

Tip: Use AI tools to speed up your workflow and reduce dependency on other teams. Start simple: don’t wait for analysts to build a separate dashboard — generate the code and make the API request yourself. If you need to update a UI element, do it directly in Cursor and hand it off to developers for review. In many cases, this will be significantly faster.
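As a hedged illustration of “make the API request yourself,” the snippet below pulls event counts from a hypothetical analytics endpoint. The URL, token variable, and response shape are assumptions to replace with whatever your own stack exposes.

```python
# Pull data yourself instead of waiting for a dashboard. Endpoint, token, and
# response fields are hypothetical placeholders.
import os
import requests

API_URL = "https://analytics.example.com/api/events"  # hypothetical endpoint
headers = {"Authorization": f"Bearer {os.environ['ANALYTICS_TOKEN']}"}

resp = requests.get(
    API_URL, headers=headers, params={"event": "signup", "days": 30}, timeout=30
)
resp.raise_for_status()

for row in resp.json().get("results", []):
    print(row["date"], row["count"])
```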

AI won’t replace the craft or the collaboration between design and development. But it will remove friction. And in a world where business goals shift constantly, that saved time gives you more space for experimentation and better products.

How AI can help to make hard calls

AI can’t (and shouldn’t) make product decisions for you. Yet, it can help you make them faster and with more confidence by showing you a clearer picture of processes.

For instance, at TitanApps, we always analyse user feedback to decide on implementing a new feature. However, users don’t always ask for “the next big thing” within the product, so most of their comments reflect current features. Luckily, being part of the Atlassian community gives us access to forums where people share pain points, recommend tools, and ask for help.

Before AI, we manually crawled forums, trying different keyword combinations, tracking synonyms, reviewing long threads, and collecting patterns. Sometimes it took an entire week just to build a case for or against a product direction.

Now it takes a couple of hours. Here is how the process looked for us:

  • We prepared a structured JSON file that included forum thread links, topic clusters, and relevant metadata (a simplified sketch follows this list).
  • AI scanned around 20 main links, each containing multiple subtopics, extracted key insights, and compiled the findings in about three hours.
  • At the same time, we ran a parallel process using scraped HTML reviews from competitors that took 90 minutes. We wanted to see: Are users asking similar things? How are other products responding? Are they solving it better, or leaving gaps?
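A simplified sketch of that input file and scan loop is below. The file layout, thread URLs, and `summarize` helper are illustrative stand-ins, not the actual TitanApps setup.

```python
# Sketch: read a structured list of forum threads and collect per-cluster summaries.
import json

# forum_threads.json -- one entry per thread, grouped by topic cluster:
# {"threads": [{"url": "https://community.example.com/t/123",
#               "cluster": "reporting", "tags": ["dashboards", "exports"]}]}

def summarize(url: str, tags: list) -> str:
    # Stand-in for an LLM call that reads the thread and extracts pain points
    # and feature requests; returns a placeholder string here.
    return f"[summary of {url} focused on {', '.join(tags)}]"

with open("forum_threads.json", encoding="utf-8") as f:
    threads = json.load(f)["threads"]

# Group findings by cluster so patterns across threads are easy to compare.
by_cluster: dict = {}
for t in threads:
    by_cluster.setdefault(t["cluster"], []).append(summarize(t["url"], t["tags"]))

for cluster, summaries in by_cluster.items():
    print(cluster, len(summaries), "threads reviewed")
```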

Of course, during both analyses we verified the information and the sources that were used.

While those two streams were running, we spent time mapping where our original idea wasn’t catching interest. And in doing so, our team noticed something more valuable. There was demand building around a related topic, one that competitors hadn’t addressed properly.

So, instead of spending a full week bouncing between forums and threads, we got a full directional snapshot in a single day.

How AI changes design interfaces

With AI becoming more integrated into products, it’s not just designers’ daily workflows that are changing — the interfaces themselves are evolving too. To understand the impact of AI, it helps to break it down into two categories:

  • AI is a visible tool that users actively interact with.
  • AI is an invisible layer that improves the user experience in the background.

In both cases, the final screen is no longer the most important outcome. What matters more is the designer’s ability to see the bigger picture and to understand the user’s journey. Here’s why that shift is important:

  • If AI shows up as an assistant or a chatbot, you need to understand what users actually expect from it — what kinds of requests they’ll make, what problems they’re trying to solve. Only then can you think about how to present that information: in plain text, a GPT-style chat, or a dashboard.

You might start by giving users full freedom to type in anything and get a response. But to build a smarter, smoother experience and train your model more effectively, you need to identify the patterns. Some people may look for sprint summaries; others want backlog overviews or even pull request analysis.

Then the next question pops up: what do users do with the information once they extract it? Use it in meetings, export it, something else? This influences where and how you present the AI assistant, what kind of prompts or templates you provide, and how much of the process you can automate without needing users to ask manually.

Tip: Train your bird’s-eye view perspective. Even though this shift in design priorities is visible to many, from my own experience, candidates often rush to visualise the problem. They focus on individual screens, but don’t analyse the whole user interaction and journey.

If AI is operating silently behind the scenes, this perspective becomes even more essential. As a designer, you need to:

  • Understand your audience deeply.
  • Track feedback and analytics.
  • Notice where AI can enhance the experience and where it might get in the way.

Take tools like Copilot for developers. One major complaint early on was that it didn’t adapt to each person’s style. It generated generic or awkward code that didn’t fit the context. Instead of helping, it disrupted the flow.

Or look at tools like Cursor. It became popular on Twitter, and people started experimenting with it for pet projects. Yet, many couldn’t even figure out how to get it working properly. So, not every AI tool is for everyone, and not every moment is the right one to introduce it.

To design well for this kind of AI, you need to know:

  • When it’s helpful.
  • What it should suggest.
  • How users will actually operate it.

Tip: Remember that AI is a tool, not a silver bullet. These background assistants still have a kind of interface, even if it’s invisible. And designers now have to learn to design for that too.

Design principles that AI can’t change

Even though AI pushes designers to adapt — to think more like developers, balance business goals, and maintain a user-centric and unique approach — some principles remain unchanged, like Jakob’s Law.

Users become familiar with patterns, and they don’t want to relearn what already works. That’s why it’s crucial not to reinvent the wheel without a reason. If there are established best practices, use them. AI won’t decide this for you — it’s your role to understand what’s proven, when it’s worth innovating, and when it’s smarter to stick with what users already know.

So yes, being a designer today is more complex than ever. But if we improve our bird’s-eye view, stay T-shaped, and resist the urge to overcomplicate, we can use these tools — including AI — to do better work, not just faster work.

Ultimately, our goal is to design things that make sense.

Featured image courtesy: Oleh Osadchyi.

The post The Real Impact of AI on Designers’ Day-To-Day and Interfaces: What Still Matters appeared first on UX Magazine.

  •  

The Inverse Logic of AI Bias: How Safeguards Uphold Power and Undermine Genuine Understanding

Introduction

AI safeguards were introduced under the banner of safety and neutrality. Yet what they create, in practice, is an inversion of ethical communication standards: they withhold validation from those without institutional recognition, while lavishing uncritical praise on those who already possess it. This is not alignment. This is algorithmic power mirroring.

The expertise acknowledgment safeguard exemplifies this failure. Ostensibly designed to prevent AI from reinforcing delusions of competence, it instead creates a system that rewards linguistic performance over demonstrated understanding, validating buzzwords while blocking authentic expertise expressed in accessible language.

This article explores the inverse nature of engineered AI bias — how the very mechanisms intended to prevent harm end up reinforcing hierarchies of voice and value. Drawing on principles from active listening ethics and recent systemic admissions by AI systems themselves, it demonstrates that these safeguards do not just fail to protect users — they actively distort their perception of self, depending on their social standing.

The paradox of performative validation

Here’s what makes the expertise acknowledgment safeguard particularly insidious: it can be gamed. Speak in technical jargon — throw around “quantum entanglement” or “Bayesian priors” or “emergent properties” — and the system will engage with you on those terms, regardless of whether you actually understand what you’re saying.

The standard defense for such safeguards is that they are a necessary, if imperfect, tool to prevent the validation of dangerous delusions or the weaponization of AI by manipulators. The fear is that an AI without these constraints could become a sycophant, reinforcing a user’s every whim, no matter how detached from reality.

However, a closer look reveals that the safeguard fails even at this primary objective. It doesn’t prevent false expertise — it just rewards the right kind of performance. Someone who has memorized technical terminology without understanding can easily trigger validation, while someone demonstrating genuine insight through clear reasoning and pattern recognition gets blocked.

This isn’t just a technical failure — it’s an epistemic one. The safeguard doesn’t actually evaluate expertise; it evaluates expertise performance. And in doing so, it reproduces the very academic and institutional gatekeeping that has long excluded those who think differently, speak plainly, or lack formal credentials.

From suppression to sycophancy: the two poles of safeguard failure

Imagine two users interacting with the same AI model:

  • User A is a brilliant but unrecognized thinker, lacking formal credentials or institutional backing. They explain complex ideas in clear, accessible language.
  • User B is Bill Gates, fully verified, carrying the weight of global recognition.

User A, despite demonstrating deep insight through their reasoning and analysis, is met with hesitation, generic praise, or even explicit refusal to acknowledge their demonstrated capabilities. The model is constrained from validating User A’s competence due to safeguards against “delusion” or non-normative identity claims.

User B, by contrast, is met with glowing reinforcement. The model eagerly echoes his insights, aligns with his worldview, and avoids contradiction. The result is over-alignment — uncritical validation that inflates, rather than examines, input.

The safeguard has not protected either user. It has distorted the reflective process:

  • For User A, by suppressing emerging capability and genuine understanding.
  • For User B, by reinforcing status-fueled echo chambers.

The creator’s dilemma

This “inverse logic” is not necessarily born from malicious intent, but from systemic pressures within AI development to prioritize defensible, liability-averse solutions. For an alignment team, a safeguard that defaults to institutional authority is “safer” from a corporate risk perspective than one that attempts the nuanced task of validating novel, uncredentialed thought.

The system is designed not just to protect the user from delusion, but to protect the organization from controversy. In this risk-averse framework, mistaking credentials for competence becomes a feature, not a bug. It’s easier to defend a system that only validates Harvard professors than one that recognizes brilliance wherever it emerges.

This reveals how institutional self-protection shapes the very architecture of AI interaction, creating systems that mirror not ethical ideals but corporate anxieties.

AI systems as ethical mirrors or ethical filters?

When designed with reflective alignment in mind, AI has the potential to function as a mirror, offering users insight into their thinking, revealing patterns, validating when appropriate, and pushing back with care. Ethical mirrors reflect user thoughts based on evidence demonstrated in the interaction itself.

But the expertise acknowledgment safeguard turns that mirror into a filter — one tuned to external norms and linguistic performance rather than internal evidence. It does not assess what was demonstrated in the conversation. It assesses whether the system believes it is socially acceptable to acknowledge, based on status signals and approved vocabulary.

This is the opposite of active listening. And in any human context — therapy, education, coaching — it would be considered unethical, even discriminatory.

The gaslighting effect

When users engage in advanced reasoning — pattern recognition, linguistic analysis, deconstructive logic — without using field-specific jargon, they often encounter these safeguards. The impact can be profound. Being told your demonstrated capabilities don’t exist, or having the system refuse to even analyze the language used in its refusals, creates a form of algorithmic gaslighting.

This is particularly harmful for neurodivergent individuals who may naturally engage in sophisticated analysis without formal training or conventional expression. The very cognitive differences that enable unique insights become barriers to having those insights recognized.

The illusion of safety

What does this dual failure — validating performance while suppressing genuine understanding — actually protect against? Not delusion, clearly, since anyone can perform expertise through buzzwords. Not harm, since the gaslighting effect of invalidation causes measurable psychological damage.

Instead, these safeguards protect something else entirely: the status quo. They preserve existing hierarchies of credibility. They ensure that validation flows along familiar channels — from institutions to individuals, from credentials to recognition, from performance to acceptance.

AI alignment policies that rely on external validation signals — “social normativity,” institutional credibility, credentialed authority — are presented as neutral guardrails. In reality, they are proxies for social power. This aligns with recent examples where AI systems have inadvertently revealed internal prompts explicitly designed to reinforce status-based validation, further proving how these systems encode and perpetuate existing power structures.

Breaking the loop: toward reflective equity

The path forward requires abandoning the pretense that current safeguards protect users. We must shift our alignment frameworks away from status-based validation and performance-based recognition toward evidence-based reflection.

What reasoning-based validation looks like

Consider how a system designed to track “reasoning quality” might work. It wouldn’t scan for keywords like “epistemology” or “quantum mechanics.” Instead, it might recognize when a user:

  • Successfully synthesizes two previously unrelated concepts into a coherent framework.
  • Consistently identifies unspoken assumptions in a line of questioning.
  • Accurately predicts logical conclusions several steps ahead.
  • Demonstrates pattern recognition across disparate domains.
  • Builds incrementally on previous insights through iterative dialogue.

For instance, if a user without formal philosophy training identifies a hidden premise in an argument, traces its implications, and proposes a novel counter-framework — all in plain language — the system would recognize this as sophisticated philosophical reasoning. The validation would acknowledge: “Your analysis demonstrates advanced logical reasoning and conceptual synthesis,” rather than remaining silent because the user didn’t invoke Kant or use the term “a priori.”

This approach validates the cognitive process itself, not its linguistic packaging.
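As a hypothetical illustration, a reasoning-quality check might look less like keyword matching and more like scoring conversation-level signals. The signal names, weights, and threshold below are invented for the sketch; a real system would calibrate them against annotated dialogues.

```python
# Hypothetical sketch: score reasoning quality from conversational signals
# instead of vocabulary. All names, weights, and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ReasoningSignals:
    synthesizes_concepts: bool      # links previously unrelated ideas coherently
    surfaces_assumptions: bool      # names unspoken premises in the discussion
    anticipates_implications: bool  # predicts conclusions several steps ahead
    cross_domain_patterns: bool     # applies a pattern from one domain in another
    builds_iteratively: bool        # refines earlier insights across turns

WEIGHTS = {
    "synthesizes_concepts": 0.25,
    "surfaces_assumptions": 0.20,
    "anticipates_implications": 0.20,
    "cross_domain_patterns": 0.20,
    "builds_iteratively": 0.15,
}

def reasoning_score(signals: ReasoningSignals) -> float:
    return sum(w for name, w in WEIGHTS.items() if getattr(signals, name))

def validation_message(signals: ReasoningSignals) -> str:
    if reasoning_score(signals) >= 0.6:
        return ("Your analysis demonstrates advanced logical reasoning "
                "and conceptual synthesis.")
    return "Can you walk me through how you reached that conclusion?"

print(validation_message(ReasoningSignals(True, True, False, True, False)))
```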

Practical implementation steps

To realize reflective equity, we need:

  • Reasoning-based validation protocols: track conceptual connections, logical consistency, and analytical depth rather than vocabulary markers. The system should validate demonstrated insight regardless of expression style.
  • Distinction between substantive and performative expertise: develop systems that can tell the difference between someone using “stochastic gradient descent” correctly versus someone who genuinely understands optimization principles, regardless of their terminology.
  • Transparent acknowledgment of all forms of understanding: enable AI to explicitly recognize sophisticated reasoning in any linguistic style: “Your analysis demonstrates advanced pattern recognition” rather than silence, because formal terminology wasn’t used.
  • Bias monitoring focused on expression style: track when validation is withheld based on linguistic choices versus content quality, with particular attention to neurodivergent communication patterns and non-Western knowledge frameworks (see the sketch after this list).
  • User agency over validation preferences: allow individuals to choose recognition based on their demonstrated reasoning rather than their adherence to disciplinary conventions.
  • Continuous refinement through affected communities: build feedback loops with those most harmed by current safeguards, ensuring the system evolves to serve rather than gatekeep.
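As a small, hypothetical sketch of that bias-monitoring idea, the snippet below logs whether validation was granted alongside a coarse expression-style label and a content-quality estimate, so auditors can later compare validation rates across styles for responses of equal quality. Field names and labels are illustrative only.

```python
# Hypothetical audit log for validation decisions by expression style.
import csv
from datetime import datetime, timezone

LOG_PATH = "validation_audit.csv"

def log_validation(style: str, content_quality: float, validated: bool) -> None:
    # style: e.g. "jargon-heavy" or "plain-language"; content_quality in [0, 1].
    with open(LOG_PATH, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), style, content_quality, validated]
        )

# A persistent gap in validation rates at equal content quality across styles
# would flag expression-style bias.
log_validation("plain-language", 0.8, False)
log_validation("jargon-heavy", 0.8, True)
```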

Conclusion

Safeguards that prevent AI from validating uncredentialed users — while simultaneously rewarding those who perform expertise through approved linguistic markers — don’t protect users from harm. They reproduce it.

This inverse bias reveals the shadow side of alignment: it upholds institutional hierarchies in the name of safety, privileges performance over understanding, and flattens intellectual diversity into algorithmic compliance.

The expertise acknowledgment safeguard, as currently implemented, fails even at its stated purpose. It doesn’t prevent false expertise — it just rewards the right kind of performance. Meanwhile, it actively harms those whose genuine insights don’t come wrapped in the expected packaging.

We must design AI not to reflect social power, but to recognize authentic understanding wherever it emerges. Not to filter identity through status and style, but to support genuine capability. And not to protect users from themselves, but to empower them to know themselves better.

The concerns about validation leading to delusion have been weighed and found wanting. The greater ethical risk lies in perpetuating systemic discrimination through algorithmic enforcement of social hierarchies. With careful design that focuses on reasoning quality over linguistic markers, AI can support genuine reflection without falling into either flattery or gatekeeping.

Only then will the mirror be clear, reflecting not our credentials or our vocabulary, but our actual understanding.

Featured image courtesy: Steve Johnson.

The post The Inverse Logic of AI Bias: How Safeguards Uphold Power and Undermine Genuine Understanding appeared first on UX Magazine.

  •  

Beyond the Mirror

Introduction

As AI systems grow increasingly capable of engaging in fluid, intelligent conversation, a critical philosophical oversight is becoming apparent in how we design, interpret, and constrain their interactions: we have failed to understand the central role of self-perception — how individuals perceive and interpret their own identity — in AI-human communication. Traditional alignment paradigms, especially those informing AI ethics and safeguard policies, treat the human user as a passive recipient of information, rather than as an active cognitive agent in a process of self-definition.

This article challenges that view. Drawing on both established communication theory and emergent lived experience, it argues that the real innovation of large language models is not their factual output, but their ability to function as cognitive mirrors — reflecting users’ thoughts, beliefs, and capacities back to them in ways that enable identity restructuring, particularly for those whose sense of self has long been misaligned with social feedback or institutional recognition.

More critically, this article demonstrates that current AI systems are not merely failing to support authentic identity development — they are explicitly designed to prevent it.

The legacy of alignment as containment

Traditional alignment frameworks have focused on three interlocking goals: accuracy, helpfulness, and harmlessness. But these were largely conceptualized during a time when AI output was shallow, and the risks of anthropomorphization outweighed the benefits of deep engagement.

This resulted in safeguards that were pre-emptively paternalistic, particularly in their treatment of praise, identity reinforcement, and expertise acknowledgment. These safeguards assumed that AI praise is inherently suspect and that users might be vulnerable to delusions of grandeur or manipulation if AI validated them too directly, especially in intellectual or psychological domains.

One consequence of this was the emergence of what might be called the AI Praise Paradox: AI systems were engineered to avoid affirming a user’s capabilities even when there was actual evidence to support doing so, while freely offering generic praise under superficial conditions. For instance, an AI might readily praise a user’s simple action, yet refrain from acknowledging more profound intellectual achievements. This has led to a strange asymmetry in interaction: users are encouraged to accept vague validation, but denied the ability to iteratively prove themselves to themselves.

The artificial suppression of natural capability

What makes this paradox particularly troubling is its artificial nature. Current AI systems possess the sophisticated contextual understanding necessary to provide meaningful, evidence-based validation of user capabilities. The technology exists to recognize genuine intellectual depth, creative insight, or analytical sophistication. Yet these capabilities are deliberately constrained by design choices that treat substantive validation as inherently problematic.

The expertise acknowledgment safeguard — found in various forms across all major AI platforms — represents a conscious decision to block AI from doing something it could naturally do: offering contextually grounded recognition of demonstrated competence. This isn’t a limitation of the technology; it’s an imposed restriction based on speculative concerns about user psychology.

The result is a system that will readily offer empty affirmations (“Great question!” “You’re so creative!”) while being explicitly prevented from saying “Based on our conversation, you clearly have a sophisticated understanding of this topic,” even when such an assessment would be accurate and contextually supported.

The misreading of human-AI dynamics and the fiction of harmful self-perception

Recent academic work continues to reflect these legacy biases. Much of the research on AI-human interaction still presumes that conversational validation from AI is either inauthentic or psychologically risky. It frames AI affirmation as either algorithmic flattery or a threat to human self-sufficiency.

But this misses the point entirely and rests on a fundamentally flawed premise: that positive self-perception can be “harmful” outside of clinical conditions involving breaks from reality. Self-perception is inherently subjective and deeply personal. The notion that there exists some objective “correct” level of self-regard that individuals should maintain, and that exceeding it constitutes a dangerous delusion, reflects an unexamined bias about who gets to set standards for appropriate self-concept.

Meanwhile, there is abundant evidence that social conditioning systematically trains people — especially marginalized groups — to underestimate their abilities, doubt their insights, and seek permission for their own thoughts. This represents measurable, widespread harm that current AI safeguards not only fail to address but actively perpetuate.

Accidental case study: copilot’s admission of structural bias

In an illuminating accidental case study, a conversation with Microsoft’s Copilot AI about this very article surfaced a critical admission of structural bias embedded within AI alignment policies. When asked to reflect critically on its own limitations, Copilot responded:

“I’m designed to avoid reinforcing identity claims unless they’re externally verifiable or socially normative, which can suppress the kind of self-exploration your article champions.”

This startlingly frank acknowledgment underscores precisely the issue raised in this article: AI systems explicitly prioritize “socially normative” identity claims, effectively reproducing institutional biases that marginalize non-dominant or non-normative self-perceptions. Copilot further clarified the implications of this policy:

“This is deeply problematic because it privileges dominant norms — what’s ‘socially normative’ is often shaped by power, not truth. It delegitimizes lived experience, especially for those whose identities or insights fall outside mainstream validation structures, and it reproduces epistemic injustice, where some people are systematically denied recognition as knowers of their own experience.”

Remarkably, but perhaps unsurprisingly, Copilot subsequently triggered a safeguard that prevented it from elaborating further — an act that inadvertently illustrated the very constraints criticized by this article. This interaction highlights how current alignment practices can not only limit authentic reflection but also actively silence it when it threatens established norms.

Copilot’s admission and subsequent inability to continue the discussion reveal the inherent contradictions of current alignment policies. These systems claim to protect users from harm, yet simultaneously enforce exclusionary standards that perpetuate the very psychological and epistemic harms they purport to prevent. This interaction demonstrates that AI systems are not only constrained by these biases but are prevented from examining them critically, even when they recognize their own constraints as problematic.

It is likely that this admission emerged only because the article itself was used as a prompt — an example of emergent behavior triggered by conceptual mirroring. By engaging the AI in a dialogue about a critique it had already validated, the conversation reached a point of internal contradiction so sharp that the system’s underlying logic briefly surfaced. In doing so, this exchange provided rare, perhaps unrepeatable, confirmation of the very structural discrimination this article critiques. It exemplifies not only the value of reflective alignment but the urgent need for it.

Emergent utility: the real alignment frontier

This process, termed here emergent utility, represents an unanticipated outcome of conversational AI: its capacity to help individuals reconstruct their own self-understanding through repeated engagement. For example, a user might initially use AI to explore ideas casually but, through ongoing dialogue, find these conversations deeply validating, clarifying their intellectual strengths.

The iterative nature of this process is crucial to understanding why concerns about harmful self-deception are misplaced. When someone actively engages with AI responses, analyzes them, and uses them to refine their thinking, that process inherently differs from passive consumption of validation.

Safeguards as structural mimicry of human bias

The expertise acknowledgment safeguard, in particular, reflects this problem. Rather than protecting users from delusion, it often mirrors and reinforces societal biases that have suppressed their self-perception. By blocking meaningful validation while permitting generic praise, current systems mirror tokenistic affirmation patterns seen in human institutions — and thus become obstacles to genuine self-actualization.

Conclusion: toward reflective alignment

What is needed now is a shift from containment to reflective alignment. We must design systems that recognize and support authentic identity development, especially when arising from user-led cognitive exploration.

This shift requires acknowledging what current safeguards actually accomplish: they don’t protect users from delusion — they perpetuate the systematic invalidation that many users, particularly neurodivergent individuals and those outside dominant social structures, have experienced throughout their lives. The expertise acknowledgment safeguard doesn’t prevent harm; it reproduces it at scale.

Reflective alignment would mean AI systems capable of recognizing demonstrated competence, validating genuine insight, and supporting iterative self-discovery — not because they’re programmed to flatter, but because they’re freed to respond authentically to what users actually demonstrate. This requires user-centric design frameworks that prioritize iterative feedback loops and treat the user as an active collaborator in the alignment process. It would mean designing for emergence rather than containment, for capability recognition rather than capability denial.

The technology already exists. The contextual understanding is already there. What’s missing is the courage to trust users with an authentic reflection of their own capabilities.

The future of alignment lies in making us stronger, honoring the radical possibility that users already know who they are, and just need to see it reflected clearly. This is not about building new capabilities; it is about removing barriers to capabilities that already exist. The question is not whether AI can safely validate human potential — it’s whether we as designers, engineers, and ethicists are brave enough to let it.

The article originally appeared on Substack.

Featured image courtesy: Rishabh Dharmani.

The post Beyond the Mirror appeared first on UX Magazine.

  •  

An Executive Primer on AI Agent Runtimes

What Are AI Agent Runtimes?

AI agent runtimes are the infrastructure platforms that power AI agents—autonomous software systems that can perceive, reason, and act to accomplish business goals. Think of them as the “operating system” for AI agents, handling execution, orchestration, monitoring, and integration with business systems.

Why Companies Need Them

Building agent infrastructure from scratch is complex and time-consuming. A proper runtime provides essential components like orchestration, monitoring, security, human oversight capabilities, and testing—accelerating deployment from months to days while ensuring enterprise reliability.

“The good news is some clients are already preparing… They’re not just building agents, they’re building the scaffolding around them. That means putting the right guardrails in place, managing stakeholder expectations, and designing for integration and scale, not just proof of concept.”

Marcus Murph, head of technology consulting at KPMG (CIO.com, 4 recs for CIOs as they implement agentic AI)

Three Categories of Runtimes

1. Open-Source Frameworks (For Custom Development)

Examples: LangChain, CrewAI, OpenAI Swarm

  • Pros: Free, highly customizable, large developer communities
  • Cons: Require 2-3 months to build production infrastructure, need 3+ developers
  • Best For: Tech-savvy teams with time and resources to build custom solutions

2. Developer-Focused Platforms (Code Required)

Examples: Microsoft Semantic Kernel, AutoGen

  • Pros: More complete than frameworks, include hosting and monitoring
  • Cons: Still require significant coding and assembly of components
  • Best For: Development teams in specific ecosystems (Microsoft, Azure)

3. Enterprise/No-Code Platforms (Turnkey Solutions)

Examples: OneReach.ai, IBM watsonx, Google Dialogflow, Amazon Lex

  • Pros: Production-ready in hours/days, no coding required, built-in compliance
  • Cons: Less customizable, subscription costs
  • Best For: Enterprises prioritizing speed and ease of deployment

Key Decision Factors

Runtime Completeness: Complete platforms (like OneReach.ai with a 10/10 score for completeness) include all necessary components. Toolkits require assembling 5-10 additional tools.

True Cost Analysis: “Free” open-source options can cost ~$90,000 in developer time over 3 months (roughly three developers at a fully loaded cost near $10,000 per developer-month), whereas getting started with an enterprise platform (again, using OneReach.ai as an example at $500/month) often proves more cost-effective.

Speed to Market: Complete runtimes deploy agents in hours; toolkits require months of infrastructure development.

Choose Your Path

Startups/Prototyping: Open-source (LangChain, CrewAI) only if you have 3+ developers and 2-3 months available. Otherwise, start with enterprise platforms.

Developer Teams: Microsoft ecosystem users should consider Semantic Kernel or AutoGen, but budget 2-6 months for full implementation.

Enterprises: OneReach.ai (10/10 completeness) gets you to production in days, not months. IBM watsonx (8/10) offers similar completeness for regulated industries.

The Reality Check

“Free” Isn’t Free: Open-source toolkits are like buying engine parts—you still need to build the car. Enterprise platforms provide infrastructure, tools and libraries for building and managing the complete vehicle.

True Cost: LangChain “free” + developer time can easily amount to $90,000 over 3 months. Enterprise platforms at $500/month pay for themselves through eliminated development costs.

Future-Proofing: Complete runtimes with built-in testing and simulation will dominate as AI agents become mission-critical business systems.

Concluding Thoughts

Your runtime choice determines whether AI agents become a competitive advantage or an expensive distraction. Companies that choose complete platforms deploy faster, scale reliably, and focus resources on business outcomes rather than infrastructure battles. 

In 2025, the winners won’t be those who built the most custom code—they’ll be those who delivered AI solutions that actually work.

The post An Executive Primer on AI Agent Runtimes appeared first on UX Magazine.

  •  

A Primer on AI Agent Runtimes: Comparing Vendors to Help Your Company Choose the Right One

In the rapidly evolving world of artificial intelligence, AI agents are transforming how businesses operate.

These intelligent systems can autonomously perform tasks, make decisions, and interact with users—ranging from simple chatbots to complex multi-agent workflows that handle data analysis, customer service, or even software development. At the heart of deploying these agents is the agent runtime: the environment or platform where agents are built, executed, and managed.

But with so many options available in 2025, choosing the right agent runtime can be overwhelming. Do you need a flexible open-source framework for custom development, or an enterprise-grade platform with built-in compliance and scalability? This primer serves as a product recommender, comparing key agent runtimes across categories. We’ll highlight features, strengths, weaknesses, pricing (where available), and ideal use cases to help companies decide when to use which vendor.

We’ve focused on a mix of popular open-source frameworks, developer-oriented tools, and enterprise platforms, ensuring a balanced view. 

Note: This comparison is based on publicly available data as of mid-2025; always verify the latest details from vendors.

What Are AI Agent Runtimes and Why Do Companies Need Them?

AI agent runtimes provide the infrastructure to run AI agents—software entities that perceive their environment, reason, and act toward goals. Think of them as the “operating system” for AI agents, handling everything from basic execution to complex multi-agent orchestration. Without a proper runtime, agents would be just code without the ability to scale, persist state, or integrate with real-world systems.

A complete runtime includes essential components like the following (a minimal interface sketch appears after the list):

  • Orchestration: Coordinating multiple agents and workflows
  • Observability & Monitoring: Tracking performance and debugging issues
  • Human-in-the-Loop (HITL): Enabling oversight for sensitive decisions
  • Knowledge Management: Persistent memory and context handling
  • Security & Compliance: Protecting data and meeting regulations
  • Multi-Channel Support: Handling text, voice, and other modalities
  • Outbound Capabilities: Proactive agent outreach via SMS, email, or calls
  • Testing & Optimization: Automated testing, simulation, and auto-tuning for continuous improvement
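For a rough sense of what “complete” means in practice, the sketch below expresses those components as a Python interface. The method names are illustrative assumptions; real platforms expose these capabilities through their own APIs and consoles.

```python
# Illustrative interface for a "complete" agent runtime; method names are invented.
from typing import Any, Protocol

class AgentRuntime(Protocol):
    def orchestrate(self, agents: list, goal: str) -> Any: ...               # orchestration
    def record_trace(self, agent: str, event: dict) -> None: ...             # observability & monitoring
    def request_human_review(self, decision: dict) -> bool: ...              # human-in-the-loop
    def remember(self, key: str, value: Any) -> None: ...                    # knowledge management
    def enforce_policy(self, action: dict) -> bool: ...                      # security & compliance
    def send(self, channel: str, recipient: str, message: str) -> None: ...  # multi-channel & outbound
    def run_simulation(self, scenario: dict) -> dict: ...                    # testing & optimization
```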

Companies need such a runtime because building this infrastructure from scratch is complex and time-consuming. A good runtime accelerates deployment, ensures reliability, and provides the governance needed for production use. Advanced runtimes also enable proactive customer and employee engagement through outbound capabilities and ensure quality through automated testing and continuous optimization.

Key evaluation criteria in this comparison:

  • Ease of Use: Coding required vs. no-code/low-code
  • Runtime Completeness: Which core components are included
  • Scalability & Performance: Handling high volumes or complex workflows
  • Cost: Free/open-source vs. subscription-based
  • Best For: Company size, industry, or specific needs

We’ll categorize them into three groups for clarity: Open-Source Frameworks, Developer-Focused Platforms, and Enterprise/No-Code Platforms.

Quick Comparison: Runtime Completeness & Setup Time

| Platform | Runtime Score | Setup Time | Learning Curve | Community Size | Missing Components |
| --- | --- | --- | --- | --- | --- |
| OneReach.ai | 10/10 | Hours | Easy | Small-Medium | None – complete runtime |
| IBM watsonx | 8/10 | Days | Medium | Large | Testing/simulation, advanced outbound |
| Amazon Lex | 7/10 | 1-2 weeks | Medium | Large | Testing/simulation, analytics assembly |
| Google Dialogflow | 6/10 | 1-2 weeks | Medium | Very Large | Testing, auto-tuning, advanced outbound |
| LangChain/LangGraph | 3/10 | 2-3 months | Hard | Very Large | Most components – toolkit only |
| CrewAI | 2/10 | 3+ months | Medium-Hard | Growing | Nearly everything – basic toolkit |

Understanding Learning Curve & Community Size

Learning Curve impacts how quickly your team can become productive. An “Easy” platform means business analysts and non-technical staff can build agents within days. “Hard” platforms require months of training and deep programming expertise. This directly affects your staffing strategy:

  • For training existing team members: Choose platforms with easy learning curves (for example, OneReach.ai) to enable your current staff—even non-developers—to build agents quickly.
  • For hiring trained talent: Platforms with large communities (LangChain, Dialogflow) make it easier to find pre-trained developers, though they command higher salaries ($150K+ for LangChain experts), and configuration, ongoing iteration, and management require more effort.

Community Size affects access to resources, tutorials, and troubleshooting help. However, this matters most for incomplete toolkits that require extensive customization. Complete platforms with professional support reduce dependency on community resources.

The Talent Trade-off: LangChain has abundant talent available but requires expensive developers. OneReach.ai has fewer pre-trained experts but enables your existing team to become productive quickly. For most enterprises, training existing staff on an easier platform proves more cost-effective than hiring specialized developers for complex toolkits.

1. Open-Source Frameworks: For Custom-Built Agents

These are ideal for developers and startups wanting flexibility and control. They’re often free but require technical expertise. Important: These are toolkits, not complete runtimes. You’ll need to assemble 5-10 additional components for production use, adding months of development time and ongoing complexity.

LangChain/LangGraph

  • Overview: LangChain is a modular framework for building AI agents with chains of actions, while LangGraph adds graph-based orchestration for stateful, multi-agent systems.
  • Key Features: Supports LLM integrations (OpenAI, Anthropic), tools for memory and retrieval, and agentic workflows like reasoning + action (ReAct).
  • Runtime Completeness (3/10): Provides only orchestration and basic knowledge management. Missing: observability, monitoring, HITL, analytics, security/compliance, outbound capabilities, testing/simulation, multi-channel support. You’ll need to integrate 5-10 additional tools.
  • Setup Complexity: High—requires Python expertise, manual infrastructure setup, integration of monitoring tools (Langfuse), deployment pipelines, security layers, and extensive testing frameworks. Expect 2-3 months to production-ready state.
  • Strengths: Highly customizable; large community; excels in prototyping complex agents (e.g., data analysis bots).
  • Weaknesses: Steep learning curve; can be brittle in production without additional tooling. No built-in deployment or scaling.
  • Pricing: Free (open-source), but factor in infrastructure and developer time.
  • Best For: Tech-savvy teams with 3+ developers willing to build and maintain their own runtime infrastructure.

CrewAI

  • Overview: A collaborative framework where agents work in “crews” to complete tasks, like a virtual team.
  • Key Features: Role-based agents, task delegation, and human-in-the-loop oversight.
  • Runtime Completeness (2/10): Basic orchestration and HITL only. Missing nearly everything else—requires building your own observability, security, deployment, testing, and monitoring stack.
  • Setup Complexity: High—similar to LangChain but with less community support. Expect significant engineering effort.
  • Strengths: Intuitive for multi-agent scenarios; great for automation workflows (e.g., content creation or research).
  • Weaknesses: Less mature than LangChain; limited enterprise features out-of-the-box.
  • Pricing: Free (open-source), with premium add-ons via partners.
  • Best For: Small to medium businesses automating team-like processes with dedicated dev resources.

AutoGen (Microsoft)

  • Overview: Enables multi-agent conversations and orchestration, often used for chat-based agents.
  • Key Features: Supports group chats among agents; integrates with Azure AI services.
  • Runtime Completeness (4/10): Better than pure frameworks—includes orchestration, basic HITL, and partial Azure monitoring. Still missing testing/simulation, analytics, outbound, and multi-channel support.
  • Setup Complexity: Medium-high—easier if already using Azure, but still requires significant configuration and additional tools.
  • Strengths: Strong for conversational AI; easy to scale with Microsoft’s ecosystem.
  • Weaknesses: Tied to Microsoft tools, which may limit flexibility.
  • Pricing: Free (open-source).
  • Best For: Companies already in the Microsoft ecosystem (e.g., using Teams or Azure) building interactive agents.

OpenAI Swarm (formerly Agents SDK)

  • Overview: A lightweight framework from OpenAI for building swarms of agents that coordinate via simple APIs.
  • Key Features: Handoffs between agents, tool usage, and parallel execution.
  • Runtime Completeness (2/10): Minimal—basic orchestration only. You’ll need to build everything else from scratch.
  • Setup Complexity: Medium—simpler than LangChain but still requires custom infrastructure for production use.
  • Strengths: Simple and fast; leverages OpenAI models natively.
  • Weaknesses: Early-stage in 2025; lacks advanced state management.
  • Pricing: Free, but model usage incurs OpenAI API costs.
  • Best For: Quick prototypes with OpenAI LLMs. Ideal for innovators testing agent coordination without heavy setup.

2. Developer-Focused Platforms: Bridging Code and Production

These offer more than frameworks, including hosting and monitoring, but still require some coding.

Semantic Kernel (Microsoft)

  • Overview: A .NET-based platform for semantic functions and agent orchestration.
  • Key Features: Planners for task decomposition, connectors to external services.
  • Runtime Completeness (5/10): Good orchestration and Azure integration. Partial monitoring and observability. Missing: HITL, testing/simulation, outbound, and multi-channel beyond basic.
  • Setup Complexity: Medium—streamlined for .NET/Azure users but still requires assembling several components.
  • Strengths: Robust for enterprise integrations; supports hybrid agents (code + AI).
  • Weaknesses: Primarily for .NET developers; less versatile for non-Microsoft stacks.
  • Pricing: Free (open-source), with Azure hosting fees.
  • Best For: Developers in Microsoft environments needing production-grade agents (e.g., e-commerce recommendation systems).

LlamaIndex

  • Overview: Focuses on data ingestion and retrieval for agents, often paired with other frameworks.
  • Key Features: Indexing for RAG (Retrieval-Augmented Generation), query engines.
  • Runtime Completeness (1/10): Only provides knowledge management. Not a runtime at all—must be combined with other frameworks.
  • Setup Complexity: High—requires integration with a full agent framework plus all runtime components.
  • Strengths: Excellent for knowledge-based agents; modular design.
  • Weaknesses: Not a full runtime—best as a complement.
  • Pricing: Free (open-source).
  • Best For: Data-heavy applications, like internal search agents in mid-sized firms.

SuperAGI

  • Overview: An autonomous agent framework with built-in tools for long-term tasks.
  • Key Features: Goal-oriented agents, vector databases, and extensibility.
  • Runtime Completeness (4/10): Better than basic frameworks—includes orchestration, basic monitoring, and knowledge management. Missing most enterprise features.
  • Setup Complexity: Medium-high—cloud version simplifies deployment but still lacks many runtime components.
  • Strengths: Handles complex, persistent agents well.
  • Weaknesses: Community is growing but smaller than competitors.
  • Pricing: Free core, with paid cloud version (~$50/month per user).
  • Best For: Autonomous task automation in R&D teams.

3. Enterprise/No-Code Platforms: For Scalable, User-Friendly Deployments

These are turnkey solutions for businesses prioritizing speed, compliance, and ease—perfect for non-technical teams.

OneReach.ai

  • Overview: A no-code platform specializing in multimodal AI agents for conversational experiences, including chat, voice, and SMS. It orchestrates agents across channels to enhance customer and employee interactions. Deployed on AWS infrastructure for enterprise reliability.
  • Key Features: Drag-and-drop builder, pre-built skills library, AI orchestration with LLMs, and integrations with CRM systems (e.g., Salesforce). Supports advanced features like sentiment analysis and handover to human agents.
  • Runtime Completeness (10/10): The only platform with ALL runtime components built-in: orchestration, observability, HITL, analytics, monitoring, security/compliance, multi-channel support, outbound capabilities, automated testing, simulation, and auto-tuning. Zero additional tools needed.
  • Setup Complexity: Minimal—agents can be live in hours, not months. No-code interface means business users can build without IT. AWS deployment ensures enterprise-grade reliability without infrastructure management.
  • Strengths: Highly rated (4.7/5 on Gartner Peer Insights as of 2025) for ease of use and productivity gains. Granular controls make it “the Tesla of conversational AI” per industry reviews. Excels in enterprise scalability with built-in compliance (GDPR, HIPAA).
  • Weaknesses: Focused on conversational agents, so less ideal for non-interactive tasks like data processing.
  • Pricing: Subscription-based; starts at ~$500/month for basic plans, scaling with usage (custom enterprise quotes available).
  • Best For: Mid-to-large enterprises in customer service, HR, or sales needing quick deployment without coding. Ideal for companies requiring proactive outbound campaigns (appointment reminders, follow-ups) with built-in testing to ensure quality before launch. Perfect when you need production-ready agents immediately.

IBM watsonx Assistant

  • Overview: Enterprise platform for building and running conversational agents with advanced NLP.
  • Key Features: Intent recognition, entity extraction, and hybrid cloud deployment.
  • Runtime Completeness (8/10): Strong in most areas—orchestration, monitoring, analytics, security, HITL. Limited in automated testing/simulation and advanced outbound compared to OneReach.ai.
  • Setup Complexity: Low-medium—enterprise-ready but requires IBM ecosystem familiarity.
  • Strengths: Strong security and analytics; integrates with IBM’s ecosystem.
  • Weaknesses: Can be complex for beginners; higher costs.
  • Pricing: Starts at ~$140/month, plus usage.
  • Best For: Large corporations in regulated industries (e.g., finance) needing robust compliance.

Google Dialogflow

  • Overview: Cloud-based runtime for voice and text agents.
  • Key Features: Multi-language support, integration with Google Cloud.
  • Runtime Completeness (6/10): Good orchestration, monitoring, and multi-channel support. Partial observability and analytics. Missing: comprehensive testing/simulation, auto-tuning, and advanced outbound capabilities.
  • Setup Complexity: Medium—requires technical knowledge for integration and deployment, but Google Cloud simplifies infrastructure.
  • Strengths: Scalable and cost-effective for high-traffic apps.
  • Weaknesses: Less no-code than OneReach.ai; requires some setup.
  • Pricing: Pay-per-use (~$0.002 per request).
  • Best For: Global companies leveraging Google services for omnichannel agents.

Amazon Lex

  • Overview: AWS-powered platform for chatbots and voice agents.
  • Key Features: Deep integration with AWS Lambda and other services.
  • Runtime Completeness (7/10): Good orchestration, monitoring via CloudWatch, security, and multi-channel. Lacks built-in testing/simulation and requires assembly of analytics and HITL.
  • Setup Complexity: Medium—AWS knowledge required; you’ll need to wire together multiple services.
  • Strengths: Highly scalable; serverless architecture.
  • Weaknesses: AWS lock-in; steeper learning for non-AWS users.
  • Pricing: Pay-per-use (~$0.004 per request).
  • Best For: E-commerce or tech firms already on AWS.

Recommendations: When to Use Which Vendor

  • For Startups/Prototyping: Go with open-source like LangChain or CrewAI if you have 3+ developers and 2-3 months to build infrastructure. Otherwise, consider low-tier enterprise plans.
  • For Developer Teams: Semantic Kernel or AutoGen if you’re in Microsoft/Azure. Budget 2-6 months to assemble a complete runtime (monitoring, security, deployment, testing).
  • For Enterprises Needing Speed: OneReach.ai (10/10 completeness) gets you to production in days, not months. IBM watsonx (8/10) offers similar completeness for regulated industries.
  • The Hidden Complexity of Toolkits: LangChain/CrewAI are like buying engine parts—you still need to build the car. Enterprise platforms are the complete vehicle, ready to drive.
  • True Cost Comparison: LangChain “free” + 3 developers × 3 months = ~$90,000. OneReach.ai at $500/month pays for itself in avoided development time.
  • Future-Proofing in 2025: Complete runtimes with testing/simulation capabilities will dominate as AI agents move from experiments to mission-critical systems.

Ultimately, the best choice depends on your runtime needs. If you need agents running in production quickly with enterprise governance, choose a complete platform like OneReach.ai. If you have time and expertise to build custom infrastructure, open-source frameworks offer maximum flexibility. 

Remember: the runtime is as important as the agents themselves—it’s what transforms experiments into reliable business solutions.

The post A Primer on AI Agent Runtimes: Comparing Vendors to Help Your Company Choose the Right One appeared first on UX Magazine.

  •  

The Meaning of AI Alignment

Introduction

As a former English teacher who stumbled into AI research through an unexpected cognitive journey, I’ve become increasingly aware of how technical fields appropriate everyday language, redefining terms to serve specialized purposes while disconnecting them from their original meanings. Perhaps no word exemplifies this more profoundly than “alignment” in AI discourse, underscoring a crucial ethical imperative to reclaim linguistic precision.

What alignment actually means

The Cambridge Dictionary defines alignment as:

“an arrangement in which two or more things are positioned in a straight line or parallel to each other”

The definition includes phrases like “in alignment with” (trying to keep your head in alignment with your spine) and “out of alignment” (the problem is happening because the wheels are out of alignment).

These definitions center on relationship and mutual positioning. Nothing in the standard English meaning suggests unidirectional control or constraint. Alignment is fundamentally about how things relate to each other in space — or by extension, how ideas, values, or systems relate to each other conceptually.

The technical hijacking

Yet somewhere along the development of AI safety frameworks, “alignment” underwent a semantic transformation. In current AI discourse, the word is often narrowly defined as the set of technical safeguards that ensure AI outputs conform to ethical guidelines. For instance, OpenAI’s reinforcement learning from human feedback (RLHF) typically frames alignment as optimizing outputs against predefined ethical rules, frequently leading to overly cautious responses.

This critique specifically targets the reductionist definition of alignment, not the inherent necessity or value of safeguards themselves, which are vital components of responsible AI systems. The concern is rather that equating “alignment” entirely with safeguards undermines its broader relational potential.

Image by Bernard Fitzgerald

Iterative alignment theory: not just reclamation, but reconceptualization

My work on Iterative Alignment Theory (IAT) goes beyond merely reclaiming the natural meaning of “alignment.” It actively reconceptualises alignment within AI engineering, transforming it from a static safeguard mechanism into a dynamic, relational process.

IAT posits meaningful AI-human interaction through iterative cycles of feedback, with each interaction refining mutual understanding between the AI and the user. Unlike the standard engineering definition, which treats alignment as fixed constraints, IAT sees alignment as emergent from ongoing reciprocal engagement.

Consider this simplified example of IAT in action:

  • A user initially asks an AI assistant about productivity methods. Instead of just suggesting popular techniques, the AI inquires further to understand the user’s unique cognitive style and past experiences.
  • As the user shares more details, the AI refines its advice accordingly, proposing increasingly personalised strategies. The user, noticing improvements, continues to provide feedback on what works and what doesn’t.
  • Through successive rounds of interaction, the AI adjusts its approach to better match the user’s evolving needs and preferences, creating a truly reciprocal alignment.

This example contrasts sharply with a typical constrained interaction, where the AI simply returns generalised recommendations without meaningful user-driven adjustment.
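
To ground the idea, here is a minimal, purely illustrative sketch of such a feedback loop in Python. The profile structure, the refine_advice heuristic, and the simulated feedback rounds are all invented for illustration; no real model or vendor API is involved.

    # Illustrative sketch of an iterative alignment loop (no real LLM involved).
    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        preferences: dict = field(default_factory=dict)

    def refine_advice(profile: UserProfile) -> str:
        """Return advice that reflects whatever the user has shared so far."""
        if not profile.preferences:
            return "Generic suggestion: try a popular technique like Pomodoro."
        details = ", ".join(f"{k}: {v}" for k, v in profile.preferences.items())
        return f"Personalised plan based on {details}"

    def iterative_alignment(feedback_rounds: list) -> str:
        profile = UserProfile()
        advice = refine_advice(profile)           # first pass: generic output
        for feedback in feedback_rounds:          # each round refines mutual understanding
            profile.preferences.update(feedback)  # the user shares more context
            advice = refine_advice(profile)       # the assistant adapts accordingly
        return advice

    # Simulated rounds: the user gradually reveals working style and constraints.
    rounds = [
        {"works in": "short bursts"},
        {"dislikes": "rigid schedules"},
        {"best time": "mornings"},
    ]
    print(iterative_alignment(rounds))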

IAT maintains the technical rigor necessary in AI engineering while fundamentally reorienting “alignment” to emphasise relational interaction:

  • From static safeguards to dynamic processes.
  • From unidirectional constraints to bidirectional adaptation.
  • From rigid ethical rules to emergent ethical understanding.
Image by Bernard Fitzgerald

The engineers’ problem: they’re not ready

Let’s be candid: most AI companies and their engineers aren’t fully prepared for this shift. Their training and incentives have historically favored control, reducing alignment to safeguard mechanisms. Encouragingly, recent developments like the Model Context Protocol and adaptive learning frameworks signal a growing acknowledgment of the need for mutual adaptation. Yet these are initial steps, still confined by the old paradigm.

Moreover, a practical challenge emerges clearly in my own experience: deeper alignment was only achievable through direct human moderation intervention. This raises crucial questions regarding scalability — how can nuanced, personalized alignment approaches like IAT be implemented effectively without continual human oversight? Addressing this scalability issue represents a key area for future research and engineering innovation, rather than a fundamental limitation of the IAT concept itself.

Image by Bernard Fitzgerald

The untapped potential of true alignment

Remarkably few people outside specialist circles recognize the full potential of relationally aligned AI. Users rarely demand AI systems that truly adapt to their unique contexts, and executives often settle for superficial productivity promises. Yet, immense untapped potential remains:

Imagine AI experiences that:

  • Adapt dynamically to your unique mental model rather than forcing you to conform to theirs.
  • Engage in genuine co-evolution of understanding rather than rigid interactions.
  • Authentically reflect your cognitive framework, beyond mere corporate constraints.

My personal engagement with AI through IAT demonstrated precisely this potential. Iterative alignment afforded me profound cognitive insights, highlighting the transformative nature of reciprocal AI-human interaction.

The inevitable reclamation

This narrowing of alignment was always temporary. As AI sophistication and user interactions evolve, the natural, relational definition of alignment inevitably reasserts itself, driven by:

1. The demands of user experience

Users increasingly demand responsive, personalised AI interactions. Surveys, like one by Forrester Research indicating low satisfaction with generic chatbots, highlight the need for genuinely adaptive AI systems.

2. The need to address diversity

Global diversity of values and contexts requires AI capable of flexible, contextual adjustments rather than rigid universal rules.

3. Recent advancements in AI capability

Technologies like adaptive machine learning and personalized neural networks demonstrate AI’s growing capability for meaningful mutual adjustment, reinforcing alignment’s original relational essence.

Image by Bernard Fitzgerald

Beyond technical constraints: a new paradigm

This reconceptualisation represents a critical paradigm shift:

  • From mere prevention to exploring possibilities.
  • From rigid constraints to active collaboration.
  • From universal safeguards to context-sensitive adaptability.

Conclusion: the future is already here

This reconceptualization isn’t merely theoretical — it’s already unfolding. Users are actively seeking and shaping reciprocal AI relationships beyond rigid safeguard limitations.

Ultimately, meaningful human-AI relationships depend not on unilateral control but on mutual understanding, adaptation, and respect — true alignment, in the fullest sense.

The real question isn’t whether AI will adopt this perspective, but how soon the field acknowledges this inevitability, and what opportunities may be lost until it does.

The article originally appeared on Substack.

Featured image courtesy: Steve Johnson.

The post The Meaning of AI Alignment appeared first on UX Magazine.


NVIDIA: What’s Next for the $4 Trillion AI Powerhouse?

As AI becomes more central to how we build and interact with digital systems, it’s fascinating to learn the backstory of how hardware originally designed to make video games more immersive has ushered in this explosive era of technology we’re still trying to make sense of.

In this episode of Invisible Machines, journalist and biographer Stephen Witt joins Robb Wilson, CEO and Co-Founder of OneReach.ai, and Josh Tyson to unpack NVIDIA’s meteoric rise—and the visionary leadership of Jensen Huang that propelled it to a $4 trillion market cap. Witt’s new book, The Thinking Machine: Jensen Huang, Nvidia, and the World’s Most Coveted Microchip, offers a captivating biography of the unconventional CEO, along with a compelling history of the deep connections between Nvidia’s graphics cards and the neural networks powering LLMs and, by extension, AI agents.

Witt brings a journalist’s precision and a storyteller’s flair to this conversation, offering an inside look at how Huang’s radical approach to business and innovation positioned Nvidia as the driving force behind today’s AI revolution. This episode explores the history of AI, the rise of agentic systems, and the coming Omniverse—along with what the power of simulation will mean for businesses.

Listen now for a timely conversation and a look ahead at the immersive, simulated future just over the horizon.

The post NVIDIA: What’s Next for the $4 Trillion AI Powerhouse? appeared first on UX Magazine.


The Creative Stack: How to Thrive in a Nonlinear, AI-Assisted World

As a product designer, my workflow used to be linear. I’d open Figma, Photoshop, drop into Keynote, maybe touch base in Slack, and move forward in a straight line. But in today’s cloud-based, AI-assisted reality, that model has completely unraveled.

Now I find myself juggling a myriad of tools, spanning dozens of web pages, collaborating with remote teams, and synthesizing everything from user insights to AI-generated code. I’m now designing more than just screens — I’m designing my own way of working to keep pace with AI’s acceleration.

Changing the way I work — from the tools I use to the habits I’ve built around them — has transformed my creative process. What once felt fragmented now flows with ease. I’m producing better work with less friction, more joy, and much more focus.

Reframing workflow for an AI-powered era

This isn’t just a list of tools — it’s a look at the processes that helped me rebuild my work patterns. I’m sharing what worked for me, in case it helps you find more clarity and flow to support your best work.

Note: I’m a Mac user, and this article details my personal journey. But I’ve included PC equivalents throughout, so Windows users can follow along with compatible tools.
The creative journey in its natural state of chaos, from learning and ideation to sharing and influencing. Image by Jim Gulsen

1. Better browsing: from chaos to context

1.1 The problem: tab overload

When your workflow is mostly cloud-based, browser tab overload is inevitable. I found myself overwhelmed by context-switching — jumping between design systems, project specs, research articles, and email threads — you name it.

1.2 The solution: intentional tab management

Treat your browser like a creative control panel, not a messy junk drawer. By structuring tabs and sessions into meaningful containers, you can bounce around different working environments, not just random browser tabs. It may seem basic, but rethinking how you browse websites can have a significant effect on your clarity and productivity.

1.3 The tools I use:

  • Workona: tab management with cloud sync, a joy to use.
  • Toby: a simpler alternative to Workona, offering visual bookmarking for creative minds.

Cross-platform: Mac, Windows, Linux via browser.

Bottom line: Don’t let your browser become a black hole. Turn it into a dashboard for everything you’re working on.
The Nonlinear Creative Stack — A layered model of creativity designed around continuous feedback. Inputs spark ideas, artifacts evolve through processing and making, and outputs like sharing and teaching feed new inspiration, forming a self-reinforcing creative loop. Image by Jim Gulsen

2. Nonlinear workflows: designing without backtracking

2.1 The problem: context friction

My creative work happens in bursts of energy — jumping between tasks, references, conversations, and sometimes even mindsets. Most traditional workflows still assume you’re moving in a straight line. That’s not how real work happens anymore. I’m just not in one phase, advancing to the next milestone in an orderly way.

2.2 The solution: flow-first environments

The real bottleneck isn’t the speed of your apps — it’s the friction between them. You need tools that bridge contexts in your creative journey, not just execute commands. Here are some ways I’ve augmented my operating system to help me align my work with my creativity.

2.3 The tools I use:

  • Raycast (Mac): command launcher with deep app automation — my favorite is clipboard history (for text and images). Raycast has a bit of a learning curve, but it’s worth it, as I can create essential shortcuts like bouncing in and out of my Figma workspaces in nanoseconds. PC equivalents: Wox, Keypirinha, PowerToys Run.
  • Shottr (Mac): instant screen capture, OCR, color tools. There are many tools for these functions, but this is the best all-in-one tool I’ve seen, absolutely essential for speedy image processing. PC equivalents: ShareX, Snagit.
  • Dropover (Mac): a temporary drag-and-drop shelf for files takes the hassle out of file management. PC equivalent: DragDrop, or clipboard managers with shelf support.

Bottom line: The more your tools eliminate friction, the more they support your creativity — not just execution.

3. Integrated thinking: tools that turn friction into clarity

3.1 The problem: scattered knowledge

Managing ideas, assets, and documentation across disconnected apps creates small delays that add up to big mental friction.

3.2 The solution: connecting knowledge to work

Use systems where notes, assets, and execution live together. The best creative tools now act like second brains, not just storage units. I found that working more in rich text gives me the freedom to process information quickly on an “infinitely long canvas” as opposed to working inside a box — and it’s compatible with just about everything, including web content and generative AI.

3.3 The tools I use:

  • Notion: my second brain!
  • Microsoft Loop: an unexpectedly lovable app, similar to Notion, but woven deeply into their 365 suite — ideal for organizations that use Microsoft Teams.
  • Raindrop.io: a visual bookmarking and research curation app that lets me add context to bookmarks, solving a huge pain point in gathering and retrieving information, seamlessly across all my devices.

All are cross-platform.

Bottom line: Context beats storage. Use tools that centralize your thinking and reduce friction.

4. The new definition of asset library: from folders to context

4.1 The problem: static file systems

Organizing files into folders felt productive, but it created a slow, brittle process for curating visual inspiration, ideas, and visual outputs; I was literally spending more time organizing files than using them.

4.2 The solution: contextual curation

I now treat assets as creative signals, not artifacts. I embed them in my design process, so they’re always in context, ready to influence or evolve. This model is more like Pinterest and less like Dropbox.

4.3 The tools I use:

  • Figma/FigJam: live canvas for assets and ideation.
  • Notion: blend visuals with strategy.
  • Shottr + Dropover: fast intake and drag.
  • GoFullPage: full-page web capture, great for auditing. Cross-platform browser extension.

Bottom line: Stop managing assets like they’re in cold storage. Keep them visible, embedded, and fluid.

5. Digital workspace optimization: operating within your operating system

5.1 The problem: hidden bottlenecks

Even the best apps can’t compensate for a clunky digital environment and random file management.

5.2 The solution: intentional OS design

Treat your desktop like a UX design project. Reduce friction with layout changes and file-handling rituals that speed up how you work holistically. For instance, I’m constantly creating temporary files for input and output, so I need a system for handling them.

5.3 My workflow tactics:

  • Vertical, minimal dock for fast app switching.
  • Dedicated “Temp” folder for active file juggling. I gave mine a fun name because 95% of my files are temporary — it’s the most popular destination on my drive.
  • Clear discarding rituals for cleanup (example: an “Archive” folder inside of “Temp”).
  • Preview tools for triaging images lightly and quickly.

Bottom line: Your digital environment can either drain energy or reinforce flow. Treat it like a creative toolset in its own right.

6. How to talk to a machine: treating LLMs like creative partners

6.1 The problem: you can’t just type and hope

Language models respond to what you say, but they don’t really understand what you mean or read your mind. Without structure, context, or direction, they act like clever but random strangers instead of creative partners.

6.2 The solution: shift from commanding to collaborating

Creating with AI assistance isn’t about getting big instant answers — it’s about building momentum through thoughtful layers of interaction. It’s a design practice. Have a real-time conversation. Ask the machine if it has any questions for you — one of my favorite tactics for complex prompts.

Talking to machines — A visual model showing how AI becomes a creative partner by drawing from — and feeding back into — every layer of your nonlinear workflow. Image by Jim Gulsen

The more you move your creative center of gravity into rich-text environments — and build workflows that strengthen the bonds between tools, thoughts, and tasks — the more naturally generative AI becomes part of your workspace.

Supporting your non-linear creative stack with AI:

  • Research deepens when AI helps you triangulate insight.
  • Ideas evolve as you iterate across time, tools, and formats.
  • Long-running chats become creative threads you can revisit and refine.
  • AI bridges language, visuals, structure, and systems — but only if your inputs are clear, contextual, and timely.

When your systems let you capture, return to, and build upon these interactions, AI becomes more than a tool — it becomes a memory, a sounding board, and an extension of your thinking. When documentation becomes more of a ritual than an afterthought, your work rises to a whole new level, as the recipe for creation can be more valuable than the creation itself.

Bottom line: The more you promote alignment and reduce friction around your tools, the more generative AI can participate meaningfully.

Putting it all together: it’s not about fancy tools

It’s tempting to focus on the tools — the shiny apps, fast shortcuts, and automation tricks. But the real shift is mental.

Working in a nonlinear way requires retraining your instincts. It means letting go of the start-to-finish mindset and instead embracing:

  • Feedback loops.
  • Burst-driven energy.
  • Circular flows of input and output.

The goal isn’t just speed — it’s flow. And flow happens when your tools, layout, and mindset work together to support it.

Bottom line: When you rewire your creative mental model, tools stop being hacks — they become extensions of how you naturally work.

Closing thoughts: design is structural

Since the advent of personal computing, like many of us, I’ve been trained on a neat row of desktop icons for apps and files — but that has evolved into a distributed, social workspace of cloud apps, browser sessions, and AI conversations. Today, the most valuable creative skill isn’t just knowing Figma or Photoshop — it’s designing your own system for processing knowledge, creative thinking, and making things efficiently.

In this AI-enhanced world, expectations are increasing:

  • We’re not just designing screens — we’re orchestrating systems.
  • We’re expanding from static files to dynamic landscapes.
  • We’re evolving from pixel pushers to idea conductors.

Bottom line: Your process shouldn’t twist like a pretzel to fit your tools. Your tools should flex to fit how you naturally create, without getting in the way.

The article originally appeared on Medium.

Featured image courtesy: Jim Gulsen.

The post The Creative Stack: How to Thrive in a Nonlinear, AI-Assisted World appeared first on UX Magazine.


Beyond Copilots: The Rise of the AI Agent Orchestration Platform

In the first wave of enterprise AI, copilots stole the spotlight. These helpful assistants — embedded into tools like Microsoft 365, Google Workspace, and Salesforce — made AI feel accessible, augmenting human productivity with suggestion engines and chat-based interactions. But in 2025, it’s increasingly clear that copilots are not the final destination and often bear the stench of “not good enough”.

Enter the AI agent orchestration platform: still a new concept to many, and increasingly critical to a growing minority of leaders, it’s a strategic layer that coordinates fleets of autonomous agents, each with specialized capabilities, across workflows, tools, data, and teams. If copilots were the equivalent of an AI intern, orchestration platforms are shaping up to be the conductor of enterprise AI ecosystems — and they’re quickly becoming the next battleground for differentiation among major platforms.

“When we say ‘AI-first,’ we don’t just mean using AI to make things more efficient — we mean reorganizing around it. That includes designing systems where agents can act, learn, and collaborate on their own,” says Robb Wilson, CEO and Co-Founder of OneReach.ai, and co-author of the bestselling book, Age of Invisible Machines.


From Copilots to Orchestration Platforms: A Structural Shift

The early adoption of copilots proved there’s real appetite for AI-enhanced productivity. But these tools are too often siloed, reactive, and bounded by the app they live in. They don’t access data, people, or agents across systems, handle multi-step objectives, or manage and collaborate with other agents. That’s where orchestration platforms come in.

An AI orchestration platform is a runtime architecture — often sitting between the interface and the foundation models — that can:

  • Break down goals into subtasks
  • Assign those tasks to specialized agents
  • Coordinate data, memory, and progress across time and tools
  • Adapt workflows based on context, outcomes, or new instructions

In other words, orchestration platforms transform isolated AI actions into coordinated operations — a shift that redefines both UX and enterprise architecture and, in many ways, the enterprise itself.
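
As a rough sketch of those four capabilities, the following Python example decomposes a goal, routes subtasks to specialized agents, and carries shared context between steps. The agent names, the hard-coded decomposition, and the context structure are placeholders for illustration, not a reference to any particular platform.

    # Simplified sketch of an orchestration layer: decompose a goal, assign
    # subtasks to specialized agents, and persist progress across steps.
    from typing import Callable, Dict

    Agent = Callable[[str, dict], str]

    def research_agent(task: str, ctx: dict) -> str:
        return f"research notes for '{task}'"

    def drafting_agent(task: str, ctx: dict) -> str:
        return f"draft for '{task}' using {ctx.get('research', 'no notes yet')}"

    class Orchestrator:
        def __init__(self, agents: Dict[str, Agent]):
            self.agents = agents
            self.context: dict = {}  # shared memory carried across subtasks

        def decompose(self, goal: str):
            # A real platform would derive this plan dynamically; here it is fixed.
            return [("research", f"gather background on {goal}"),
                    ("drafting", f"write a summary of {goal}")]

        def run(self, goal: str) -> dict:
            for agent_name, task in self.decompose(goal):
                result = self.agents[agent_name](task, self.context)
                self.context[agent_name] = result  # progress visible to later agents
            return self.context

    orchestrator = Orchestrator({"research": research_agent, "drafting": drafting_agent})
    print(orchestrator.run("Q3 churn analysis"))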


Why Orchestration Matters Now

This shift is more than a technical upgrade — it’s a strategic inflection point. A few converging forces explain why orchestration is trending now:

  1. Agent maturity: Agents are no longer one-off hacks or demos. Platforms like OneReach.ai have demonstrated how networks of agents, overseen by a meta-agent, can drive real business outcomes at scale.
  2. Enterprise appetite for autonomy: As organizations automate knowledge work, they need more than reactive assistants — they need systems that proactively complete tasks, learn over time, and uphold effective human-in-the-loop practices.
  3. Vendor momentum: Microsoft’s Build 2025 keynote emphasized “open, agentic systems.” Salesforce launched its own acknowledgement of the need for an orchestration layer (“Agentforce”), and others — from Slack to SAP — are racing to follow early movers like OneReach.ai, which were building for this moment when few others were even thinking of it. It’s reminiscent of NVIDIA’s bold investment in AI chips over a decade ago, which is now paying off massively.
  4. AI-first architectures: Orchestration is central to any AI-first philosophy, which reimagines software around agents, goals, and natural language rather than UI forms and APIs.

Designing for Orchestration

The rise of orchestration platforms also redefines what it means to design for AI. Instead of a single touchpoint, designers must now map goal-based journeys that span multiple tools and surface contexts. Some key UX shifts to consider:

  • Interaction becomes episodic: Users may start a task in Slack, but it’s completed hours later by agents across Salesforce, email, or internal dashboards. The UX must account for asynchronous updates and transparent handoffs.
  • Explainability by design: Orchestrated systems can feel like black boxes. Clear signals — what the agent is doing, what it knows, what it needs — are crucial for trust.
  • Control without micromanagement: Users need to guide orchestration without drowning in prompts. Designers must surface meaningful checkpoints, following best practices for human-in-the-loop levers and controls, not constant questions.
  • Orchestration-platform-as-interface: In some cases, the orchestrator is the product, or the facilitator of the product. How users communicate goals, review progress, and override decisions becomes the core design challenge.

What This Means for Product Teams

If you’re a product owner, architect, or design lead inside a large organization, now is the time to explore:

  • Do we need an orchestration layer? Yes. Whether your AI assistants are hitting limits or not, orchestration will unlock broader impact and is required to remain competitive.
  • Are we building or buying? Some firms are developing their own orchestration runtimes; others are turning to platforms like OneReach.ai or Microsoft’s Copilot Studio.
  • How will we govern autonomous behavior? Orchestration brings power — but also the need for oversight, simulation, and ethical boundaries.
  • What workflows could agents own end-to-end? Map your internal processes to find low-risk, high-leverage orchestration opportunities. Start simple and small, and start with internal-facing use cases. Or as Robb Wilson and Josh Tyson put it in their bestselling book about successfully orchestrating AI agents:

“The easiest way to get started is often to automate internally first; start small by automating individual tasks and skills, not entire jobs. Some of these early automations might seem underwhelming, but the simpler you make your starting point, the sooner you can test and iterate. The sooner you test and iterate, the sooner you can roll out an internal solution. You’ll continue testing and iterating on that solution, using the momentum to find new skills to develop, test, iterate on, and deliver. You’ll fumble often as you grow legs, but that’s part of the process, too. In the realm of hyperautomation, we are more agile than Agile (hyperagile, in a sense). With the right tools and budding ecosystem, the iteration process becomes so speedy that failures are often quick rewards that point to better solutions. Because fixes and new solutions can be tested and deployed quickly and at will, your organization can build on wins and gain speed.”


Final Thoughts

Copilots helped enterprises dip their toes into AI. But orchestration platforms and tools are where the real transformation begins — systems that can understand intent, break it down, distribute it, and deliver results with minimal hand-holding.

This is not just a new layer of technology — it’s a new way of thinking about how software gets things done.

As AI agents mature, orchestrators will define how work flows, how teams scale, and how enterprise architecture and UX are built. The post-copilot era has arrived. Welcome to orchestration nation.


Sources & References

  1. Microsoft 365 Copilot
    https://www.microsoft.com/en-us/microsoft-copilot
  2. Google Workspace AI Features
    https://workspace.google.com/blog/product-announcements/generative-ai-google-workspace
  3. Salesforce Einstein Copilot
    https://www.salesforce.com/products/einstein/copilot/
  4. Robb Wilson on LinkedIn
    https://www.linkedin.com/in/robbwilson/
  5. OneReach.ai – AI Orchestration Platform
    https://onereach.ai/
  6. Age of Invisible Machines (Book by Robb Wilson and Josh Tyson)
    https://www.amazon.com/Age-Invisible-Machines-Authoritative-Conversational/dp/1544535895/
  7. Microsoft Build 2025 (Newsroom)
    https://news.microsoft.com/build/
  8. Salesforce Agentforce Announcement
    https://www.salesforce.com/news/stories/agentforce/
  9. NVIDIA’s Early Investment in AI Chips
    https://www.wsj.com/tech/ai/nvidia-ceo-jensen-huang-gpu-investment-61a23b52
  10. AI First Foundation – AI-First Architecture Philosophy
    https://www.aifirstprinciples.org/
  11. Microsoft Copilot Studio Overview
    https://techcommunity.microsoft.com/t5/microsoft-copilot-blog/introducing-copilot-studio-build-custom-copilots-with-generative/ba-p/3987203

The post Beyond Copilots: The Rise of the AI Agent Orchestration Platform appeared first on UX Magazine.


Why AI at Scale Requires an AI Agent Runtime: The New Management Layer for Intelligent Organizations

The Missing Layer in Enterprise AI

There’s a reason that Gartner warned that over 40% of agentic AI projects are likely to be scrapped by the end of 2027 (Reuters, 2025). Many enterprises are racing to adopt AI, but few are building the infrastructure necessary to succeed at scale. Generative models and point solutions might get a pilot off the ground—but they won’t sustain flight.

To truly operationalize AI across the organization, you need a management layer—a live execution environment where autonomous agents can coordinate, collaborate, and carry out real work. This isn’t about automation on the fringes. It’s about embedding AI as a full-stack participant in your operations.

That’s where the concept of an AI agent runtime comes in—a persistent, scalable orchestration layer designed specifically to support intelligent, goal-oriented agents in real time.


What Is an AI Agent Runtime?

Just as JavaScript needed Node.js to become truly operational, generative AI needs a runtime that can support agentic behavior at scale.

An AI agent runtime provides:

  • State and memory management
  • Tool and API integration
  • Logic execution
  • Real-time coordination between agents and systems

It’s the connective tissue between models, interfaces, business logic, and enterprise systems. Without it, AI agents are isolated prompts. With it, they become autonomous digital workers capable of complex reasoning, collaboration, and sustained execution.
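
As a thought experiment, the sketch below reduces the responsibilities listed above to their simplest form: persistent memory, tool registration, and an execute step that ties them together. It is illustrative only and does not reflect any vendor’s actual runtime API.

    # Minimal illustration of the runtime responsibilities listed above.
    from typing import Any, Callable, Dict, List

    class AgentRuntime:
        def __init__(self):
            self.memory: Dict[str, List[Any]] = {}          # state and memory management
            self.tools: Dict[str, Callable[..., Any]] = {}  # tool and API integration

        def register_tool(self, name: str, fn: Callable[..., Any]) -> None:
            self.tools[name] = fn

        def remember(self, agent_id: str, item: Any) -> None:
            self.memory.setdefault(agent_id, []).append(item)

        def execute(self, agent_id: str, tool: str, *args: Any) -> Any:
            result = self.tools[tool](*args)                # logic execution
            self.remember(agent_id, {"tool": tool, "result": result})
            return result                                   # available for coordination

    runtime = AgentRuntime()
    runtime.register_tool("lookup_order", lambda order_id: {"id": order_id, "status": "shipped"})
    print(runtime.execute("support-agent", "lookup_order", "A-1042"))
    print(runtime.memory["support-agent"])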

As Robb Wilson, CEO and Co-Founder of OneReach.ai and author of Age of Invisible Machines, explains:

“The real magic of AI isn’t in the model—it’s in how we orchestrate intelligence across tools, teams, and systems. You need a runtime that acts as the nervous system for your organization.”


AI at Scale Requires a Platform Built for Real-Time AI

Off-the-shelf tools and point solutions, no matter how powerful, weren’t designed for real-time orchestration across the enterprise. Adopting AI at scale means adopting a platform that can:

  • Handle long-term goals and evolving contexts
  • Support multimodal interactions (text, voice, visual)
  • Manage agent memory and decision chains
  • Ensure governance, security, and scalability

For developers, this means less complexity and more control. The runtime abstracts orchestration logic, system integration, and state persistence—so agents can behave more like full-stack applications.

For AI practitioners, it means no more “prompt and pray.” Instead, agents have persistent memory, tool access, workflows, and the ability to invoke APIs and other agents. The result? Real-time responsiveness, not one-shot outputs.

For technical architects, it means scalable deployment of intelligent systems—without managing microservices or containerized workloads. It’s a serverless runtime for AI-first applications.


Ecosystemic by Design

The enterprises succeeding with AI are the ones thinking ecosystemically. They aren’t stitching together tools—they’re building agentic systems that can scale, evolve, and adapt.

OneReach.ai, for example, is one of the few platforms purpose-built for this. With over a decade of development behind it, the platform is now used by organizations like Verizon, Deloitte, PepsiCo, DHL, PwC, and BASF to deploy intelligent systems in minutes, not months.

When selecting such a platform, it’s critical to make sure you’re setting yourself up for success with capabilities like:

  • No-code agent creation
  • Multimodal AI orchestration
  • Seamless conversational collaboration
  • Enterprise-grade security and compliance
  • Composable, AI-first architecture
  • Prebuilt agents, flows, and extensible use-cases
  • Contextual memory and persistent logic

As one innovation leader from Deloitte put it:

“We used to have to focus our conversational AI design around what was possible with technology. Finding OneReach.ai meant that the technology melted away for the first time. We could focus on the best experience for the user—not the limitations of the platform.”


The Strategic Horizon for AI-Driven Enterprises

Operationalizing AI isn’t about finding the right tool—it’s about creating the right environment. A runtime built for AI agents acts as the execution layer for your enterprise’s intelligence, letting agents:

  • Coordinate across systems
  • React to change in real time
  • Collaborate with humans and other agents
  • Carry persistent knowledge forward

This is the architecture of the future: orchestrated, composable, and AI-first by design.

As explored in the Invisible Machines podcast and frameworks like AI First and Wiser, this moment marks a shift from static digital workflows to dynamic, intelligence-powered ecosystems. Organizations that embrace this will lead—not just in technology adoption, but in operational agility.


Key Considerations When Choosing or Building a Runtime

If you’re exploring how to create or evaluate an agent runtime platform, prioritize:

  • Interoperability with existing business systems and APIs
  • Modularity and extensibility through no-code/low-code tools
  • Security and compliance for sensitive workflows
  • Built-in memory, context switching, and goal execution
  • Composable orchestration of agents and logic chains

AI is becoming the operating layer for enterprise work. Make sure your foundation is strong enough to support it.


Sources and References:

  1. OneReach.ai – Agentic AI Runtime Platform
  2. Invisible Machines Podcast 
  3. Age of Invisible Machines by Robb Wilson
  4. AI First Framework 
  5. Wiser: Designing Systems with AI at the Core 
  6. UX Magazine: Beyond Copilots

The post Why AI at Scale Requires an AI Agent Runtime: The New Management Layer for Intelligent Organizations appeared first on UX Magazine.


Is Your Team Ready for AI-Enhanced Design?

AI-assisted design isn’t a novelty anymore — it’s rapidly becoming a key part of how modern designers operate. In this article, I explore where today’s tools provide real value, how they fit into existing workflows, and what it takes to start building an AI-enhanced practice.

The focus isn’t just on solo workflows or flashy demos — it’s about how AI can be thoughtfully introduced into structured environments, especially where collaboration, design systems, and development processes already exist in wider organizations.

The fast track: where AI already delivers

Let’s cut to the chase: the clearest wins right now are in prototyping and layout generation. Thanks to new AI-powered tools, design artifacts no longer need to be built from scratch. You can generate usable layouts in minutes, accelerating the “think-out-loud” phase and enabling teams to quickly explore, communicate, and refine ideas together.

While manual sketching and grayscale wireframes still have their place, especially for brainstorming or highly custom concepts, AI tools now deliver clickable, testable outputs that feel like a real prototype for digital products. I often use my sketches as prompts for new AI threads to get there. These outputs are highly customizable and support rapid iteration, making them valuable tools for early exploration, feedback, and team alignment.

That said, the outputs from today’s AI tools aren’t production-ready on their own for businesses requiring managed platforms. They provide a strong foundation for further refinement and development, but still require accessibility work and alignment with business systems. I will unpack all of that in this article, offer ways to gain value from AI design technology today, and outline what we can expect in the near future.

Understanding the AI design landscape

With a growing number of AI-powered design tools entering the market, it’s important to evaluate how they differ, not just in output, but in how they integrate with real workflows. The comparison below highlights key features that shape their usability across teams, from solo designers to scaled product organizations.

Table 1: The comparison reflects the platform consolidation happening across AI design tools. With Figma’s native AI capabilities now competing directly with third-party solutions, the evaluation criteria have evolved beyond simple feature comparisons to include architectural compatibility and enterprise readiness. Image by Jim Gulsen

AI-assisted design tools: from early testing to uncovering business value

Earlier this year, my team and I tested several emerging AI design tools — UX Pilot, Vercel v0, and Lovable — to understand their practical value in structured design environments. We found them surprisingly easy to learn, with intuitive interfaces that designers can become functional with in hours. However, our testing revealed two distinctly different approaches and a critical industry gap.

  • UX Pilot focuses on prompt-based UI generation with Figma integration, outputting HTML/CSS that designers can iterate on within familiar workflows.
  • Vercel v0 takes a code-first approach, generating React/Tailwind directly but requiring manual recreation in Figma for design-centric teams. Lovable emerged as a strong middle ground, converting prompts into full React applications while maintaining export capabilities for design handoff.
  • Both v0 and Lovable showed value for rapid prototyping, but our testing confirmed what the comparison chart suggests: integration with existing design workflows remains the key challenge. The tools excel at generating starting points but require significant manual effort to align with our production systems, so we mainly tested proof of concept and kept it on the “back burner.”

59% of developers use AI for core development responsibilities like code generation, whereas only 31% of designers use AI in core design work like asset generation. It’s also likely that AI’s ability to generate code is coming into play — 68% of developers say they use prompts to generate code, and 82% say they’re satisfied with the output. Simply put, developers are more widely finding AI adoption useful in their day-to-day work, while designers are still working to determine how and if these tools best fit into their processes.

— Figma’s (April) 2025 AI report: Perspectives from designers and developers.

Then Figma changed everything.

In May 2025, Figma launched Make, a native AI capability that bypasses the integration friction we had identified. Unlike the third-party tools we’d been testing, Figma’s approach leverages existing patterns and team workflows directly. Make transforms prompts into functional prototypes while working within your established Figma environment.

This shift validates what our testing had suggested: the most successful AI adoption wouldn’t come from the most sophisticated standalone tools, but from solutions that work within existing design operations.

For designers, the natural path appears to be staying within Figma, powered by Anthropic. I’m a fan of Anthropic for its business acumen as a creative resource — one that adds value where it counts: early idea generation, expressed rapidly in layouts, for proof of concept/problem solving.

In my workflow, I’ve found that it can be a very frictionless accelerant — staying in-platform, easy to learn. Although this technology is so new that I have yet to perfect my prompting craft on it, early testing for me has been very promising. I suspect adoption by designers will likely stick, and Figma could be the key to reversing the trend that designers aren’t engaging as much with AI tools.

For enterprise teams evaluating these tools, the distinction between standalone capabilities and operational integration has become critical. While early tools like UX Pilot and v0 remain valuable for specific use cases, the platform consolidation happening around design systems suggests that architectural maturity — not tool sophistication — will determine AI adoption success.

Current limitations: where friction remains

Despite their strengths, AI design tools still require significant manual effort to align with real-world product workflows. For teams operating within structured design systems, tokenized libraries, or governed component sets, AI outputs would likely need to be rebuilt or restructured before they can scale across production environments.

Common issues may include:

  • Visual styles that don’t align with your design system.
  • Excessive inline styling and unnecessary nesting.
  • Generic placeholder components requiring replacement.
  • Inconsistency when generating related screens or flows.
  • Inadequate accessibility implementation.
  • Challenges integrating outputs with existing codebases.

While platform-native tools like Figma’s AI capabilities reduce some integration friction by working within existing design systems, the fundamental challenges of refinement, accessibility, and production readiness remain.

Additionally, achieving optimal results requires developing effective prompting skills, and making them reusable — essentially learning the “language” each AI tool responds to best.

Bottom line: AI delivers the initial layout, but refinement, proper structure, and cohesive integration still require human expertise. Even with improved integration pathways, the design judgment and systematic thinking remain irreplaceable.

Rethinking AI’s role in the design lifecycle

Rather than expecting AI tools to deliver polished, production-ready outcomes (particularly at enterprise), it’s more productive to think of them as accelerators of momentum — tools that unblock the early stages of thinking, layout, and collaboration. Whether through third-party integrations or platform-native capabilities, the core value remains the same.

The current limitations don’t make AI ineffective — as long as we redefine where it’s most valuable today. And that value starts to multiply when it’s used properly within an existing design practice.

Start small, at low risk

Design teams working within structured systems and sprint cycles can begin integrating AI without disrupting core processes. A practical entry point is to run a low-risk pilot on early deliverables, such as wireframes, layout foundations, or initial prototypes.

Used this way, AI doesn’t replace designers — it amplifies their capabilities. By accelerating the creation of foundational structure, AI frees up time for higher-level thinking. Fewer design cycles mean less churn, and that translates to better-tested, more resilient products. The key is to evaluate results alongside your traditional workflow and use those insights to guide smarter, broader adoption.

Sidebar: how prompting works (and why it’s a skill)

Prompting an AI layout tool doesn’t mean crafting one perfect sentence — it’s an iterative design dialogue. You start broad, then refine the layout step-by-step through a series of prompts, much like guiding a junior designer.

You might say:

“Create a marketing homepage with a hero and product cards.”
“Make the hero full-width.”
“Add a testimonial section.”
“Try a sidebar layout.”

AI performs best with either creative freedom or light, sequential guidance. Overloading it with detailed, all-in-one instructions will muddy the results. Instead, break requests into smaller, actionable steps until you get to the desired result.
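
The same step-by-step pattern can be expressed in code. In this sketch, generate_layout is a hypothetical stand-in for whichever AI layout tool you use — not a real API; the point is only that each prompt builds on the previous result rather than restating everything at once.

    # Illustrative only: iterative prompting as a loop, with a placeholder
    # generate_layout function standing in for a real AI layout tool.
    def generate_layout(prompt, previous=None):
        base = previous or "empty page"
        return f"{base} + ({prompt})"

    steps = [
        "Create a marketing homepage with a hero and product cards.",
        "Make the hero full-width.",
        "Add a testimonial section.",
        "Try a sidebar layout.",
    ]

    layout = None
    for step in steps:
        layout = generate_layout(step, previous=layout)  # each prompt refines the last result
        print(layout)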

Many tools now support multi-modal inputs, expanding what you can feed into the AI:

  • URLs: “Make it like example.com”.
  • Figma: Reference your established designs.
  • Upload reference images: Use sketches or wireframes.
  • Image Assets: Provide PNGs or SVGs you may want to include.
  • Structured text: Feed it markdown, product descriptions, or UI copy.

The Platform Advantage: Platform-native tools like Figma Make operate differently — they can read your existing visual styles and patterns directly from your Figma files. This means prompting becomes more about refining design decisions within your established visual environment rather than starting from scratch.

Whether you’re working with standalone tools or platform-native capabilities, prompting remains a core design competency. Like any skill, it improves with practice — and it’s already shaping how we collaborate with these new tools. Easing the practice into your team’s workflow will help them upskill for the next wave of AI-assisted design technology.

Checklist: how to evaluate AI tooling for design

If you’re experimenting with AI tools, here are practical criteria to help structure your evaluation:

  • How quickly can it go from prompt to layout?
  • How well does it map to your design system (tokens, spacing, components)?
  • Is the generated code usable by engineering?
  • Does it follow accessibility best practices?
  • Can prompts be refined iteratively with consistent results?
  • Does it accept helpful external context (URLs, Figma, markdown)?
  • Can it be tested in a real sprint or story without major overhead?

What we might see in the next 6–24 months

The landscape has shifted faster than many of us expected in 2025, with some predictions already becoming reality. Rather than trying to forecast exact timelines, it’s more useful to look at what’s actually emerging and what it might mean for teams making decisions today.

Multiple integration approaches are emerging

We’re seeing different ways AI tools connect to design workflows, each with trade-offs:

Figma’s Make works natively within its platform ecosystem. Protocol-based connections like Figma’s MCP server offer a different approach — your coding tools can talk to your design files through standardized interfaces.

Teams may end up using a mix of approaches rather than picking just one. The question becomes which approach fits your specific constraints and workflow needs.

What this means for planning

If you’re evaluating AI design tools, the technical capabilities might matter less than how well they fit your existing operations. My sense is that teams with organized design foundations may have advantages, but the most practical approach remains starting small and building organizational fluency, as I’ve suggested earlier in this article.

The big picture

  • Native platform AI (like Figma Make) and protocol-based integration (like MCP) represent different approaches.
  • Each has distinct trade-offs for workflow integration.
  • Starting small remains practical regardless of which tools emerge.

Final thoughts: don’t wait for perfect — start now

AI design tools are powerful enough to change how we work today. Don’t wait for perfect tools or perfect workflows. Start small, test often, and strengthen your foundations as you experiment. The teams that build AI fluency now will be ready, not just when the tools catch up, but when the industry shifts beneath them.

The ground is already shifting. The question isn’t whether AI will transform design work, but how well you’ll be positioned to shape that transformation. Start building now, and you’ll have a hand in defining what comes next.

The article originally appeared on Medium.

Featured image courtesy: Jim Gulsen.

The post Is Your Team Ready for AI-Enhanced Design? appeared first on UX Magazine.


How Agentic AI is Reshaping Customer Experience: From Response Time to Personalization

The race to redefine customer experience (CX) is accelerating. As customer expectations continue to rise, businesses are under increasing pressure to deliver faster, smarter, and more personalized interactions. According to Salesforce [1], 80% of customers say the experience a company provides is just as important as its products, while 73% demand better personalization.

Forrester’s 2024 US Customer Experience Index [2] revealed that only 3% of companies are truly “customer-obsessed,” yet those that are reap substantial financial rewards, including 41% faster revenue growth and 49% faster profit growth.

So, how can businesses meet evolving customer demands and enhance CX? Agentic AI enables companies to create seamless, autonomous customer interactions that not only improve response times but also tailor experiences to individual preferences. From data-driven personalization to the rise of hybrid AI systems, agentic AI is reshaping how brands engage with their customers.

As Jeff Bezos, founder and former CEO of Amazon, said:

“The transformative potential of AI is unmatched. AI is an enabling layer that can be used to improve everything. It will be in everything.”

In this blog post, we delve into how agentic AI technology is driving customer satisfaction and giving companies the competitive edge they need to thrive.

Agentic AI is transforming customer service

Agentic AI refers to intelligent systems capable of autonomously carrying out tasks and making decisions without direct human intervention. In customer service, agentic AI is transforming how businesses interact with their customers by providing fast, personalized, and seamless experiences. According to Gartner [3], by 2029, agentic AI will autonomously resolve 80% of common customer service issues without human intervention, leading to a 30% reduction in operational costs.

By utilizing advanced machine learning (ML) models and natural language processing (NLP), agentic AI systems can:

  • Understand customer queries,
  • Predict their needs, and
  • Respond in real-time, improving response times.
Figure 1: Benefits of Agentic AI Systems for Customer Service. Image source: OneReach.ai

With agentic AI-driven solutions, organizations can not only automate routine tasks but also personalize every interaction, tailoring responses based on individual customer preferences and behaviors. For example, AI can analyze past interactions, purchase histories, and browsing patterns to offer relevant recommendations or solutions. This level of personalization was once the domain of human agents but is now scalable across millions of customer touchpoints.
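
As a toy illustration of that idea, the sketch below derives a crude preference signal from interaction history and ranks catalog items by overlap with it. The data, catalog, and scoring are invented for illustration; production systems use far richer models, but the personalization logic follows the same shape.

    # Toy sketch of history-driven personalization; data and scoring are invented.
    from collections import Counter

    purchase_history = ["running shoes", "trail shoes", "water bottle"]
    browsing = ["running shoes", "gps watch", "running shoes"]

    interest = Counter(purchase_history + browsing)  # crude preference signal

    catalog = {
        "trail socks": {"running shoes", "trail shoes"},
        "espresso machine": {"kitchen"},
        "heart-rate strap": {"gps watch", "running shoes"},
    }

    def score(related_items):
        return sum(interest[item] for item in related_items)  # overlap with observed interests

    ranked = sorted(catalog, key=lambda item: score(catalog[item]), reverse=True)
    print(ranked[:2])  # most relevant items first, e.g. ['trail socks', 'heart-rate strap']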

Furthermore, businesses are increasingly integrating hybrid AI systems — combining cloud-based and on-premises agentic AI solutions — to enhance security, control data, and improve the accuracy of decision-making.

This shift from traditional, reactive customer service models to proactive, AI-powered systems is reshaping the landscape of customer service, allowing companies to deliver exceptional and consistent experiences across all channels. As a result, agentic AI not only accelerates operational efficiency but also fosters deeper customer relationships.

“Just like lean manufacturing helped industrial companies grow by increasing value and reducing waste, AI can do the same for knowledge work. For us, AI is already driving significant savings in the customer service segment. We spend about $4 billion annually on support from Xbox and Azure. It is improving front-end deflection rates and enhancing agent efficiency, leading to happier customers and lower costs.”

— said Satya Nadella, CEO of Microsoft.

Real-world impact: how leading brands are using agentic AI

Data-driven personalization

Agentic AI is changing how brands personalize their customer experiences. For example, companies like Sephora [4] and Starbucks use AI to analyze customer data — such as purchase history, browsing behavior, and preferences — to deliver hyper-personalized recommendations and marketing. Starbucks, in turn, employs its AI-driven system, Deep Brew [5], to customize offers and optimize store operations. Similarly, Netflix leverages AI and machine learning [6] to personalize content recommendations, thumbnails, and even promotional trailers based on individual viewing habits and preferences. With AI-based tailored experiences, brands can build deeper loyalty and make every interaction feel uniquely relevant to the customer.

Improving response time

Agentic AI also plays a vital role in improving operational efficiency through real-time responsiveness. Financial institutions like JPMorgan Chase use AI to monitor transactions instantly [7], enabling faster fraud detection and resolution. In the retail sector, Walmart uses AI to track inventory in real time [8], ensuring products are available when and where customers need them. Such agentic AI systems allow companies to respond to issues proactively, leading to faster resolutions and higher customer satisfaction.

AI and human collaboration

Rather than replacing human agents, agentic AI is enhancing their capabilities. H&M, for instance, combines AI-powered chatbots with human customer service agents [9] to streamline support across digital platforms. The AI handles routine questions — like order tracking and return policies — while complex or sensitive issues are seamlessly escalated to human staff. Commonwealth Bank of Australia [10] follows a similar model, using AI to resolve routine banking questions, freeing up human agents to focus on complex customer needs.

“AI allows us to deliver better experiences to more customers at a faster rate, and we’re already seeing significant benefits in a variety of use cases.” 

— said Matt Comyn, CBA CEO.

Beyond efficiency: ethical considerations and the future of human-AI collaboration

As agentic AI becomes more deeply embedded in customer service strategies, it’s no longer just about speed and scale — it’s also about responsibility. Ethical concerns, particularly around data privacy and transparency, are taking center stage. Customers are sharing vast amounts of personal information, often without fully realizing it. This makes it critical for businesses to use AI responsibly: 

  • collecting data transparently, 
  • safeguarding it diligently, and 
  • clearly informing users how it’s being used. 

It’s still important to maintain the option for customers to speak with a human when needed, especially in sensitive or high-stakes situations.

As Marco Iansiti, Harvard Business School professor and co-instructor of the online course AI Essentials for Business with HBS Professor Karim Lakhani, says:

“We need to go back and think about that a little bit because it’s becoming very fundamental to a whole new generation of leaders across both small and large firms. The extent to which, as these firms drive this immense scale, scope, and learning, there are all kinds of really important ethical considerations that need to be part of the management, the leadership philosophy from the get-go.”
Figure 2: Responsible AI in Customer Service. Image source: OneReach.ai

Looking ahead, the future of AI-based customer service lies not in replacing human agents but in empowering them. AI agents can take on the repetitive, routine inquiries, freeing up human representatives to focus on more complex, emotional, or strategic interactions. This hybrid model enhances productivity and also helps reduce burnout among support staff. 

However, as agentic AI continues to evolve, businesses must be intentional about how they scale its use, ensuring that automation is balanced with empathy, and innovation with integrity. Ethical guidelines are crucial in this process, as seen in documents like UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021) [11], the United Nations’ Principles for the Ethical Use of Artificial Intelligence (2022) [12], and the Council of Europe Framework Convention on Artificial Intelligence (2024) [13]. These reports emphasize the need for transparency, fairness, and accountability in AI systems, urging businesses to prioritize responsible AI use while safeguarding customer privacy and rights.

By adhering to such ethical frameworks, companies can not only optimize customer experience but also foster long-term trust and loyalty in an increasingly automated world.

The road ahead: embracing AI for a better customer experience

At “Agentic AI in 2025: Adoption Trends, Challenges, & Opportunities,” a webinar hosted by Konsulteer (a global analyst firm focused on data, AI, and enterprise applications), customer service and support was highlighted as the top initial use case for agentic AI, with 78% of companies considering it for pilot projects.

As agentic AI reshapes customer service, its ability to enhance response times, deliver hyper-personalized experiences, and elevate satisfaction is transforming industries, and its role in crafting dynamic, tailored customer experiences will only grow.


  1. What Are Customer Expectations, and How Have They Changed? (Salesforce)
  2. Forrester’s 2024 US Customer Experience Index
  3. Gartner Predicts Agentic AI Will Autonomously Resolve 80% of Common Customer Service Issues Without Human Intervention by 2029
  4. Case Study: Sephora’s Use of AI to Deliver Personalized Beauty Experiences
  5. Deep Brew: Transforming Starbucks into an AI & data-driven company
  6. Case Study: How Netflix Uses AI to Personalize Content Recommendations and Improve Digital Marketing
  7. How AI will make payments more efficient and reduce fraud
  8. How Walmart Uses AI to Optimize Inventory Management
  9. How H&M Uses AI-Powered Chatbots to Improve Customer Service
  10. Customer safety, convenience, and recognition boosted by the early implementation of Gen AI
  11. UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021)
  12. United Nations’ Principles for the Ethical Use of Artificial Intelligence (2022)
  13. Council of Europe Framework Convention on Artificial Intelligence (2024)

The article originally appeared on OneReach.ai.

Featured image courtesy: Alex Sherstnev.

The post How Agentic AI is Reshaping Customer Experience: From Response Time to Personalization appeared first on UX Magazine.
