Thoughts on the philosophy of building AI-native apps

2025-10-31 [ai-native, development]

While building VibeDict, I formed a few early thoughts on what “AI-native” app development really means. These thoughts will surely be replaced by better ones later, but right now they excite me, so I want to write them down.

💡 An app is just the body of the AI. AI is the soul.

Of course, the user is also a soul.

The app, the UI, and natural language are all just ways to interact.

Traditional apps use mostly fixed UIs. They define (and limit) how users can interact.

Natural language is a much more advanced way to interact.

A database is just a local cache that stores some memories that are useful to the user.
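To make the “local cache of memories” idea concrete, here is a minimal sketch using SQLite. The schema and function names (`remember`, `recall`) are illustrative assumptions, not an actual VibeDict API:

```python
# A local store for memories that are useful to the user,
# sketched with Python's built-in sqlite3 module.
import sqlite3

def open_memory_cache(path=":memory:"):
    """Open (or create) the local memory cache."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS memories ("
        "  id INTEGER PRIMARY KEY,"
        "  topic TEXT NOT NULL,"
        "  content TEXT NOT NULL)"
    )
    return db

def remember(db, topic, content):
    """Cache one memory under a topic."""
    db.execute(
        "INSERT INTO memories (topic, content) VALUES (?, ?)",
        (topic, content),
    )
    db.commit()

def recall(db, topic):
    """Return all cached memories for a topic."""
    rows = db.execute("SELECT content FROM memories WHERE topic = ?", (topic,))
    return [content for (content,) in rows]
```

The point is the framing: the database holds nothing sacred, only whatever the conversation found worth keeping.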

All features start from natural-language input. The model understands the intent, and the local app executes the actions.

So the interface is not the center of the logic. It’s only a “medium” for the conversation between the human and the AI.

The real intelligence, feature decisions, and learning experience all come from the large model behind it.

The most important job of an AI-native app is to provide the interface and the channel for the user to talk to the AI. From that starting point: if a feature isn’t necessary, don’t add it.

An AI-native app is an interface between the user and the AI.

An app is also a semantic execution engine: it turns user intent into local actions.

The whole interaction logic can be implemented as a general framework, for example:

AI Intent Classifier
    ↓
Intent Router (meaning → function mapping)
    ↓
Function Registry (available local functions)
    ↓
Local function calls (real execution)
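The pipeline above can be sketched in a few lines of Python. This is a hedged sketch, not VibeDict’s actual implementation: the classifier is a stub standing in for a real model call, and the intent name `lookup_word` is a hypothetical example:

```python
# Function registry: intent name -> local function.
FUNCTION_REGISTRY = {}

def register(name):
    """Register a local function under an intent name."""
    def wrap(fn):
        FUNCTION_REGISTRY[name] = fn
        return fn
    return wrap

def classify_intent(text):
    """AI intent classifier (stub): a real app would ask the model here."""
    if text.lower().startswith("define "):
        return {"intent": "lookup_word", "args": {"word": text[7:].strip()}}
    return {"intent": "unknown", "args": {}}

def route(text):
    """Intent router: map the classified meaning to a registered function."""
    result = classify_intent(text)
    fn = FUNCTION_REGISTRY.get(result["intent"])
    if fn is None:
        return f"No local function for intent {result['intent']!r}"
    return fn(**result["args"])  # local function call: real execution

@register("lookup_word")
def lookup_word(word):
    return f"Looking up {word!r} in the local dictionary"
```

The nice property of this shape is that adding a feature means registering one more function; the conversation layer stays untouched.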

The app provides: