Case study
Arbora: building an AI plant identification app for iPhone.
Arbora is an AI-powered plant identification and care app built for iPhone. I designed and built the product from scratch using SwiftUI, integrating computer vision for plant recognition and LLM-powered guidance for ongoing plant care.
Platform
iPhone
Native iOS app built with SwiftUI, optimized for the camera-first identification flow.
Focus
AI plant identification and care
Computer vision for species recognition paired with LLM-generated care advice tailored to each plant.
Role
Product, design, and engineering
End-to-end ownership from concept through architecture, implementation, and App Store delivery.
Tech stack
SwiftUI, Vision, and LLM APIs
Native SwiftUI interface with Apple Vision framework, custom ML pipeline, and cloud LLM integration.
The challenge
Plant apps are either too simple or too noisy.
Most plant identification apps stop at telling you the species name. The ones that go further tend to overwhelm users with encyclopedia-style content that does not help with the actual question: what should I do with this plant right now? The challenge with Arbora was building something that could identify plants accurately, give useful care guidance, and stay simple enough for everyday use.
The approach
Camera-first identification with AI-powered follow-up.
The core interaction is simple: point your camera at a plant and get an answer. Behind that simplicity sits a pipeline that combines image classification with LLM-generated care advice. The identification model handles species recognition, while a separate LLM layer generates personalized care guidance based on the species, the user's location, and the current season.
I chose to keep the AI features purposeful rather than decorative. The LLM does not narrate everything. It answers specific questions about watering, light, repotting, and common problems. The result is an app that feels helpful without feeling like a chatbot.
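The two-stage flow described above can be sketched like this. All type and function names here are illustrative, not the actual Arbora codebase; the point is the shape of the pipeline: classify first, then ask the LLM a scoped question.

```swift
import Foundation

// Illustrative sketch of the two-stage pipeline: identify the species,
// then request care advice scoped to species, location, and season.
struct Identification {
    let species: String
    let confidence: Double
}

struct CareContext {
    let species: String
    let location: String   // e.g. "Lisbon, PT"
    let season: String     // e.g. "autumn"
}

protocol PlantClassifier {
    func identify(imageData: Data) async throws -> Identification
}

protocol CareAdvisor {
    // Wraps a cloud LLM call; the prompt is constrained to watering,
    // light, repotting, and common problems rather than open-ended chat.
    func advice(for context: CareContext, question: String) async throws -> String
}

func identifyAndAdvise(
    imageData: Data,
    location: String,
    season: String,
    question: String,
    classifier: PlantClassifier,
    advisor: CareAdvisor
) async throws -> (Identification, String) {
    let id = try await classifier.identify(imageData: imageData)
    let context = CareContext(species: id.species, location: location, season: season)
    let care = try await advisor.advice(for: context, question: question)
    return (id, care)
}
```

Keeping the classifier and the advisor behind separate protocols is what lets the LLM layer answer only specific care questions instead of narrating everything.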
What I shipped
- Camera-based plant identification using a combination of on-device Vision framework processing and cloud ML models.
- LLM-powered care guidance that adapts recommendations based on species, location, and time of year.
- A plant collection feature where users can save identified plants and receive ongoing care reminders.
- Health diagnosis from photos, allowing users to photograph leaf damage or discoloration and get actionable advice.
- Clean SwiftUI interface with smooth camera integration, result animations, and a reading experience designed for quick reference.
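One concrete piece of the list above, the ongoing care reminders, maps naturally onto Apple's UserNotifications framework. A minimal sketch, with identifiers and intervals that are purely illustrative:

```swift
import UserNotifications

// Sketch: schedule a recurring watering reminder for a saved plant.
// The identifier scheme and interval here are illustrative only.
func scheduleWateringReminder(plantName: String, everyDays: Int) {
    let content = UNMutableNotificationContent()
    content.title = "Time to water \(plantName)"
    content.body = "Check the soil before watering."

    let trigger = UNTimeIntervalNotificationTrigger(
        timeInterval: TimeInterval(everyDays * 24 * 60 * 60),
        repeats: true
    )
    let request = UNNotificationRequest(
        identifier: "water-\(plantName)",
        content: content,
        trigger: trigger
    )
    UNUserNotificationCenter.current().add(request)
}
```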
Technical decisions
Balancing on-device and cloud AI.
One of the key architectural decisions was how to split work between on-device processing and cloud APIs. Initial image preprocessing and basic classification happen on-device using Apple's Vision framework, which keeps the identification flow fast and responsive. More detailed species confirmation and all care guidance run through cloud LLM APIs, a split that improves accuracy and allows the knowledge base to be updated without shipping a new app version.
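The on-device half of that split can be sketched with the Vision framework. `VNClassifyImageRequest` runs Apple's built-in image classifier; the confidence threshold and the plant-specific filtering are assumptions standing in for the app's actual preprocessing step:

```swift
import Vision

// Sketch: on-device classification with the Vision framework.
// VNClassifyImageRequest uses Apple's built-in classifier; the real app
// would filter results to plant taxa before escalating to the cloud.
func classifyOnDevice(cgImage: CGImage) throws -> [(label: String, confidence: Float)] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    let observations = request.results ?? []
    // Keep only reasonably confident labels; low-confidence images
    // can be sent to the cloud model for a second opinion.
    return observations
        .filter { $0.confidence > 0.3 }
        .map { ($0.identifier, $0.confidence) }
}
```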
The SwiftUI architecture follows an observation-based pattern with clear separation between the camera capture layer, the ML pipeline, and the UI. That separation made it straightforward to iterate on the AI features without touching the core app navigation or data layer.
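A minimal sketch of that separation using Swift's Observation framework (iOS 17+). The names are illustrative; what matters is that the view model owns the pipeline as an injected dependency, so the UI observes state without knowing anything about the camera or ML layers:

```swift
import Observation

// Sketch of the layering: the view model owns the pipeline as an
// injected closure, the UI observes `state`, and neither touches
// the camera capture layer directly.
@Observable
final class IdentificationViewModel {
    enum State {
        case idle
        case identifying
        case result(species: String, care: String)
        case failed(String)
    }

    private(set) var state: State = .idle
    private let pipeline: (Data) async throws -> (String, String)

    init(pipeline: @escaping (Data) async throws -> (String, String)) {
        self.pipeline = pipeline
    }

    func identify(imageData: Data) async {
        state = .identifying
        do {
            let (species, care) = try await pipeline(imageData)
            state = .result(species: species, care: care)
        } catch {
            state = .failed(error.localizedDescription)
        }
    }
}
```

Injecting the pipeline as a closure also makes the AI layer trivial to stub out in previews and tests.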
What this shows
- I can build AI-powered iOS apps that combine multiple ML approaches into a coherent user experience.
- I know how to make AI features feel native and useful rather than bolted on.
- I am comfortable with the full stack of a modern iOS app: camera APIs, ML pipelines, SwiftUI, cloud integration, and App Store delivery.
Next
Building an AI-powered iOS app?
I work with founders who want strong native engineering, practical AI integration, and someone who can ship the whole product.