Building AI features in SwiftUI apps: a practical guide.
After shipping AI features in multiple SwiftUI apps, I have formed clear opinions on what works. This post covers the practical side of integrating LLM APIs, handling streaming responses, and designing AI interactions that feel native on iOS.
The landscape
AI in iOS apps is still early.
Most iOS apps that advertise AI features are thin wrappers around a chat API. The model does the work; the app just passes messages back and forth. That approach ships fast, but it rarely produces something worth using daily. The apps that stand out are the ones where AI is integrated thoughtfully into the native experience rather than bolted on as a feature checkbox.
Building Hustlrr and Arbora taught me that the interesting engineering is not in the API call itself. It is in how you present AI responses, handle errors gracefully, manage streaming state, and make the whole thing feel like a natural part of the app.
Architecture
Separate the AI layer from the UI layer.
The first decision that pays off is keeping your LLM integration completely separate from your SwiftUI views. I use a service layer that handles all API communication, response parsing, and error handling. The views only see clean, typed Swift models, never raw API responses.
This separation matters because LLM APIs change frequently, rate limits vary, and you will almost certainly switch providers or models during the life of the product. If your views are coupled to a specific API response format, every provider change becomes a UI rewrite.
In practice, I define a protocol for the AI service that exposes async methods returning domain-specific types. The concrete implementation handles the HTTP calls, JSON parsing, authentication, and retry logic. SwiftUI views observe a view model that calls through this protocol, so swapping the underlying provider is a one-file change.
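A minimal sketch of that shape, with illustrative names (`AIService`, `CareAdvice`, `OpenAIBackedService` are stand-ins, not types from the apps mentioned):

```swift
import Observation

// A domain-specific result type: views see this, never raw API JSON.
struct CareAdvice {
    let summary: String
    let steps: [String]
}

// The protocol the view model depends on.
protocol AIService {
    func careAdvice(for plantName: String) async throws -> CareAdvice
}

// The concrete implementation owns HTTP, auth, parsing, and retries.
// Swapping providers means swapping this one type.
struct OpenAIBackedService: AIService {
    let apiKey: String

    func careAdvice(for plantName: String) async throws -> CareAdvice {
        // Real implementation: build the request, call the provider,
        // decode JSON, map failures to domain errors, retry on transient ones.
        // Stubbed here to keep the sketch compilable.
        CareAdvice(summary: "Stubbed advice for \(plantName)", steps: [])
    }
}

@Observable
final class AdviceViewModel {
    private let service: AIService
    var advice: CareAdvice?

    init(service: AIService) { self.service = service }

    func load(plant: String) async {
        advice = try? await service.careAdvice(for: plant)
    }
}
```

Because the view model holds an `AIService`, not an `OpenAIBackedService`, switching providers touches one file and zero views.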
Streaming
Streaming responses need careful state management.
Users expect AI responses to stream in token by token. In SwiftUI, this means updating the UI on every chunk while keeping the scroll position, input state, and other views stable. The naive approach of appending to a string and letting SwiftUI re-render works for simple cases, but it falls apart quickly when you have a conversation thread with multiple messages.
What works well is using AsyncSequence to model the stream and the Observation framework to drive view updates. Each incoming chunk updates a published property on the view model, and SwiftUI's diffing handles the rest. The key detail is batching updates to avoid overwhelming the render loop. Accumulating a few tokens before triggering a view update gives a much smoother visual result than updating on every single token.
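The batching idea can be sketched like this, assuming the provider's SSE stream has already been wrapped in an `AsyncThrowingStream` of token strings:

```swift
import Observation

// Hypothetical token stream; in practice this wraps the provider's SSE response.
typealias TokenStream = AsyncThrowingStream<String, Error>

@Observable
final class StreamingMessageModel {
    var text = ""

    // Accumulate a few tokens per UI update instead of publishing every one,
    // so SwiftUI invalidates the view once per batch, not once per token.
    func consume(_ stream: TokenStream, batchSize: Int = 5) async throws {
        var buffer = ""
        var count = 0
        for try await token in stream {
            buffer += token
            count += 1
            if count >= batchSize {
                text += buffer   // one view invalidation per batch
                buffer = ""
                count = 0
            }
        }
        text += buffer           // flush whatever remains at the end
    }
}
```

Tuning `batchSize` (or batching by elapsed time instead of token count) trades perceived latency against render-loop pressure.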
Error handling during streaming is its own challenge. The connection can drop mid-response, the model can hit a content filter, or the user can cancel. Each of these needs a different UI treatment, and all of them need to leave the app in a clean state rather than showing a half-rendered response.
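One way to keep those end states distinct is to model them explicitly rather than collapsing everything into a thrown error. The case names below are illustrative:

```swift
// Each case maps to a different UI treatment, and every path
// leaves the conversation in a clean, resumable state.
enum StreamOutcome {
    case completed                // leave the full response in place
    case cancelledByUser          // keep the partial text, stop the indicator
    case contentFiltered(String)  // replace partial text with an explanation
    case connectionLost(Error)    // offer a retry, keep the user's input intact
}

func apply(_ outcome: StreamOutcome, to model: StreamingState) {
    switch outcome {
    case .completed:
        model.isStreaming = false
    case .cancelledByUser:
        model.isStreaming = false          // partial text stays visible
    case .contentFiltered(let reason):
        model.isStreaming = false
        model.text = reason                // never show the half-filtered output
    case .connectionLost:
        model.isStreaming = false
        model.showRetryButton = true
    }
}

final class StreamingState {
    var text = ""
    var isStreaming = false
    var showRetryButton = false
}
```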
UX patterns
AI interactions should feel native, not like a web app.
The biggest UX mistake in AI-powered iOS apps is copying web chat interfaces. A native iOS app has access to haptics, smooth animations, gesture-driven navigation, and platform conventions that web apps cannot match. Using these makes AI features feel integrated rather than imported.
A few patterns that work well in SwiftUI:
- Use `matchedGeometryEffect` to animate between the input state and the response state, making the transition feel continuous rather than abrupt.
- Add subtle haptic feedback when a response starts streaming to give the user physical confirmation that something is happening.
- Use `ScrollViewReader` with smooth scrolling to keep the latest content visible without jarring jumps.
- Design loading states that show real progress indicators rather than generic spinners, because AI response times are unpredictable.
- Let users copy, share, or act on AI responses using native iOS share sheets and context menus, not custom popovers.
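Two of these patterns combined in a sketch, assuming iOS 17 APIs and a stand-in `Message` type:

```swift
import SwiftUI

struct Message: Identifiable {
    let id: Int
    let text: String
}

struct ConversationView: View {
    let messages: [Message]
    let isStreaming: Bool

    var body: some View {
        ScrollViewReader { proxy in
            ScrollView {
                ForEach(messages) { message in
                    Text(message.text)
                        .id(message.id)   // anchor for programmatic scrolling
                }
            }
            .onChange(of: messages.count) {
                // Keep the newest content visible without a jarring jump.
                withAnimation(.easeOut) {
                    proxy.scrollTo(messages.last?.id, anchor: .bottom)
                }
            }
            // Light haptic tap each time streaming starts or stops.
            .sensoryFeedback(.impact(weight: .light), trigger: isStreaming)
        }
    }
}
```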
Prompt design
Prompts are product decisions, not engineering details.
In the apps I have built, the system prompts and prompt templates are as much a product decision as the UI layout. They determine the tone, the scope of what the AI will answer, and the format of the response. I treat them as first-class configuration that gets reviewed alongside design changes.
One practical approach that works well is keeping prompts in a dedicated Swift file with clear naming and documentation. This makes it easy to iterate on them, A/B test different versions, and ensure consistency across features. When the prompt changes, it is a code change with a clear diff, not a hidden string update in a view model.
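Concretely, that can be as simple as a namespace enum in its own file. The prompt text and names here are illustrative:

```swift
// Prompts.swift — prompts as first-class, reviewable configuration.
// Any change is a visible diff, not a hidden string edit in a view model.
enum Prompts {
    /// System prompt: sets tone and scope, reviewed alongside design changes.
    static let plantCareSystem = """
    You are a concise plant-care assistant. \
    Answer only questions about plant identification and care. \
    Respond in at most three short paragraphs.
    """

    /// Template with explicit insertion points instead of ad-hoc string math.
    static func careQuestion(species: String, symptom: String) -> String {
        "My \(species) is showing \(symptom). What should I do?"
    }
}
```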
Performance
On-device vs. cloud: choose based on the task.
Apple's Core ML and Vision frameworks are good for specific tasks like image classification, text recognition, and object detection. For these, running on-device gives faster response times, works offline, and avoids API costs. For open-ended text generation, summarization, or complex reasoning, cloud LLM APIs are still the better choice.
In Arbora, I combined both approaches. Initial plant identification uses on-device Vision processing to give near-instant feedback when the user points the camera. The detailed care advice then uses a cloud LLM that has access to a broader knowledge base. The user experiences this as a single fast flow, but behind the scenes it is a pipeline that picks the right tool for each step.
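The pipeline shape can be sketched like this; the closure stands in for whatever cloud call the app makes, and `VNClassifyImageRequest` is Apple's built-in on-device classifier (a real model for the domain could be substituted via `VNCoreMLRequest`):

```swift
import Vision

// Stage 1 runs on-device for near-instant feedback; stage 2 goes to the cloud
// for the open-ended part. The adviseRemotely parameter is illustrative.
func identifyAndAdvise(
    cgImage: CGImage,
    adviseRemotely: (String) async throws -> String
) async throws -> String {
    // On-device classification: fast, offline-capable, no API cost.
    let request = VNClassifyImageRequest()
    try VNImageRequestHandler(cgImage: cgImage).perform([request])
    let label = request.results?.first?.identifier ?? "unknown plant"

    // Cloud LLM for detailed, open-ended advice about the identified label.
    return try await adviseRemotely(label)
}
```

The user sees one flow; the code picks the cheapest sufficient tool at each step.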
What I would tell another developer
Start with the user experience, not the model.
The most common mistake I see in AI-powered iOS apps is starting with the AI capability and working backward to a UI. The better approach is to design the interaction first and then figure out which AI capabilities serve it. Sometimes the answer is a simple classification model. Sometimes it is a full LLM conversation. Sometimes it is no AI at all, just good native UX.
If you are building AI features in a SwiftUI app, invest the time in clean architecture, native UI patterns, and proper error handling. The model is the easy part. The hard part is making it feel like it belongs in the app.
Next
Building AI features into your SwiftUI app?
I work with founders who want practical AI integration, strong native execution, and someone who can ship the whole product.