Published April 8, 2026

Designing AI-Native Experiences: What It Looks Like, What It Feels Like, and the Scariest Question of All

John Lunsford, Founder and CEO of Tethral

I have spent most of my career designing interfaces people can see. Buttons, menus, dashboards, toggles. The whole discipline of user experience is built on the assumption that the human needs to see the system in order to use it. Legibility is the first principle. If the user cannot find the button, the design failed.

I am now building something where the best version of the experience might be one where there is nothing to see at all. And I will be honest: that keeps me up at night, not because I think it is wrong, but because the tools I have for knowing whether it is working were built for a completely different kind of product.

I want to be clear about what this article is about, because the phrase "AI-first" gets used to describe two very different design problems. One is designing for humans whose environments are coordinated by AI. The other is designing systems where AI itself is the user, the entity encountering the interface and producing the load. That second problem is its own article. This one is about the first: what does it feel like to live inside an AI-coordinated experience, and how do you know if it is working?

In a previous article, I wrote about the tree falling in the forest and what it means for AI agent design. The core insight was that humans need to feel present in processes conducted on their behalf, that the process of care is the care. This article is the design companion to that philosophical one. If the tree piece asks "what do humans need from AI," this one asks "how on earth do you build for it?"

What It Looks Like

The honest answer is: it should not look like much.

Traditional interfaces manage complexity through visual hierarchy. More capability means more surface area: more tabs, more settings, more configuration. The system scales by becoming more visible. An AI-first experience scales by becoming less visible. That inversion is disorienting if you come from product design, and I do.

When it works, you say "winding down" and the room changes. You walk into a warehouse and the staging area is already configured for the shipments that arrived overnight. A patient room adjusts before the care team enters because the system already knows the protocol for the procedure scheduled in that room.

Notice what those examples share. Every one of them requires coordination across devices and systems from different manufacturers. No warehouse runs on a single vendor. No hospital does. No home does either, unless you committed to one ecosystem and accepted its limitations as the boundaries of your life. The AI-first experience is ecosystem-spanning by definition. Which means the platform that delivers it cannot belong to any one ecosystem. Apple cannot build this for a room that has Amazon devices in it. Amazon cannot build this for a hospital running Siemens monitors. The architecture of walled gardens makes it structurally impossible, and that is before you get to the question of whether any of them would want to.
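For the structurally minded, here is a minimal sketch of the shape that coordination layer could take: one vendor-neutral interface with per-ecosystem adapters behind it. The adapter names and the Device protocol are hypothetical illustrations of mine, not real SDK calls and not Tethral's actual architecture.

```python
# Illustrative only: a vendor-neutral coordination layer that spans
# ecosystems no single vendor would bridge on its own. The adapters
# below are hypothetical stand-ins, not real SDKs.
from typing import Protocol


class Device(Protocol):
    def apply_scene(self, scene: str) -> None: ...


class HomeKitLight:
    def apply_scene(self, scene: str) -> None:
        print(f"HomeKit light -> {scene}")


class AlexaSpeaker:
    def apply_scene(self, scene: str) -> None:
        print(f"Alexa speaker -> {scene}")


class Room:
    """One coordination layer, many ecosystems in the same room."""

    def __init__(self, devices: list[Device]) -> None:
        self.devices = devices

    def set_scene(self, scene: str) -> None:
        for device in self.devices:
            device.apply_scene(scene)


# Devices from competing ecosystems respond to a single intent.
Room([HomeKitLight(), AlexaSpeaker()]).set_scene("winding down")
```

The point of the sketch is the shape, not the code: the layer that owns the scene cannot live inside any one vendor's walls.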

What does it look like? It looks like a room that is already right when you walk into it. The physical environment is the interface. The AI is behind it, coordinating. The design challenge is not "how do we make this easy to use." It is "how do we make this unnecessary to think about."

That is a strange sentence for someone who has spent years making things easy to use. But it is the truth of what AI-first experience design requires. The experience is not the interaction with the technology. The experience is the environment that the technology produces.

What It Feels Like

This is where it gets personal.

A well-designed AI-first experience should feel like competence that is not yours. The house is ready. The warehouse floor is staged. The conference room is configured. You did not do any of it, but it reflects what you need. The closest analog most people have is the feeling of working with someone very good at their job who anticipates what you need before you ask. A chief of staff for your lifestyle, one not confined to any single ecosystem or location: embodied AI in the environments where you spend your time.

That feeling has a specific emotional signature: relief followed by trust. And both of those are fragile. It takes one bad coordination: a room that is too cold, the wrong pallets staged, a patient room set up for the wrong procedure. Trust collapses faster than it was built.

Here is the part that keeps me honest about this. In a traditional interface, a mistake is a bug. You see it, you report it, someone fixes it. In an AI-first experience, a mistake is a betrayal of an expectation you did not even know you had formed. The system taught you to stop thinking about it, and then it got something wrong. That is qualitatively different from clicking a button that does not work. It is the same discomfort I wrote about with the tree: you lost your place in the event, and now you have to puzzle through what happened using your own assumptions and anxieties.

Designing for this means designing for the feeling of not having to think. Which means the system has to be right an overwhelming percentage of the time before you let people stop thinking. You cannot fake it. You cannot ship at eighty percent accuracy and iterate your way there. Eighty percent accuracy in an AI-first experience means the system is wrong often enough that people never stop managing it, which means they never get the benefit. You have built a worse version of a traditional interface with fewer buttons.
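One way to see why eighty percent is nowhere near enough: treat each coordination the system performs as an independent chance to break trust. A quick back-of-the-envelope sketch, with the ten-actions-a-day figure as my own illustrative assumption, not a measured number:

```python
# Probability of a mistake-free day when the system takes `actions`
# independent coordination actions, each correct with probability
# `accuracy`. The action count is an illustrative assumption.
def flawless_day(accuracy: float, actions: int = 10) -> float:
    return accuracy ** actions


for accuracy in (0.80, 0.95, 0.999):
    print(f"{accuracy:.1%} per action -> {flawless_day(accuracy):.1%} flawless days")
```

At eighty percent per-action accuracy and ten actions a day, only about one day in nine passes without a mistake. Nobody stops thinking about a system like that.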

I learned this the hard way at Uber. We could not ship a ride experience that was ninety percent reliable and expect parents to trust it with their teenagers. The threshold for trust in physical-world coordination is not "pretty good." It is "I stopped thinking about whether it would work." That is the bar.

How Do We Know When It Is Working?

This is the scariest question. And I do not think the industry has seriously confronted it yet.

Every measurement framework we have is built on interaction. Clicks, time on task, completion rates, error rates, session length, retention, engagement. These metrics assume the human is doing something observable. They measure the interaction between the person and the interface.

In an AI-first experience, the ideal state is the absence of interaction. The system is working best when nobody is touching it, when the environment is just correct and nobody needed to intervene.

How do you measure the quality of something that, when it works, produces no signal?

You cannot A/B test a feeling. You cannot measure time-on-task when the task disappeared. You cannot track engagement when the entire point is that the person is engaged with their life, not with your product.

What I believe the answer looks like, and what we are building toward at Tethral, is measurement based on deviation rather than engagement. You do not measure whether people use the system. You measure how often the system produces an environment that requires correction. The unit of quality is not a click. It is an intervention that did not happen.
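To make that measurable, here is a minimal sketch of a deviation metric, assuming a hypothetical event log in which automated adjustments are sometimes followed by manual overrides. The Event schema and field names are illustrative assumptions of mine, not Tethral's actual telemetry.

```python
# Sketch of a deviation-based quality metric over a hypothetical
# event log. Lower is better; 0.0 means nobody had to intervene.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Event:
    timestamp: datetime
    kind: str    # "auto_adjust" or "manual_override"
    target: str  # e.g. "thermostat", "staging_area_3"


def correction_rate(events: list[Event], window: timedelta) -> float:
    """Fraction of automated adjustments a person corrected within `window`."""
    autos = [e for e in events if e.kind == "auto_adjust"]
    overrides = [e for e in events if e.kind == "manual_override"]
    corrected = sum(
        any(
            o.target == a.target
            and timedelta(0) <= o.timestamp - a.timestamp <= window
            for o in overrides
        )
        for a in autos
    )
    return corrected / len(autos) if autos else 0.0
```

The number you actually watch is the complement: the share of adjustments nobody touched. That is the intervention that did not happen, expressed as a rate you can trend over time.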

That reframing has consequences. It means the primary metric for an AI-first experience is how often it does not need you. Not how much you use it. Not how often you open it. How often you forget it exists because everything around you was already right.

I realize that sounds like a terrible pitch to an investor. "Our best metric is when users forget we exist." But that is the product. That is what Lifestyle AI means. The technology disappears into the experience, and the experience is your life working the way you wanted it to without having to manage the seams yourself.

Why This Matters Now

We are at the beginning of a transition from AI as a tool you use to AI as an environment you inhabit. That transition is happening in homes, in logistics, in healthcare, in commercial spaces. And in every one of those domains, the design questions are the same.

What does it look like? Less than you expect. What does it feel like? Like competence that is not yours. How do we know when it is working? When nobody needed to intervene.

Here is what I think gets missed in most conversations about this transition: it cannot happen inside walled gardens. The environment is not one company's products. It is all of them, in the same room, coordinating. No single ecosystem will build that because no single ecosystem has an incentive to make its competitors' devices work seamlessly alongside its own. The transition from AI-as-tool to AI-as-environment requires an open coordination layer that belongs to the user, not to any vendor. That is what Lifestyle AI is, and that is what we are building at Tethral.

The industry that figures out how to design for invisible AI, to measure for it, and to build trust around it, will define the next era of how people relate to technology. Not as users. As people whose environments have become intelligent around them.

And if that sounds abstract, the next piece in this series is about the connected technology industry that was supposed to get us there and why it got stuck. Because the problem is not just how you design AI-first experiences. The problem is that the industry framing we inherited made it almost impossible to imagine them.

This is part of an ongoing series on the foundational design principles behind Lifestyle AI.
