This project builds on Section 23 of the Complete iOS Development Bootcamp by Angela Yu. Inspired by the "What's That Food" app, I took this concept further as a personal challenge, creating an AI-powered image recognition app using Core ML with a SwiftUI interface.
The AI Image Recognition App uses Core ML and Vision to classify images selected by the user, returning the identified object along with a confidence score. The app leverages SwiftUI's declarative syntax and integrates a custom `ImagePicker` for capturing or selecting images.
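The classification flow described above can be sketched roughly as follows. This is a minimal illustration, not the app's exact code: the model name `MobileNetV2` is an assumption standing in for whichever pre-trained Core ML classifier the project bundles.

```swift
import UIKit
import CoreML
import Vision

// Sketch: classify a UIImage with a bundled Core ML model via Vision.
// `MobileNetV2` is a placeholder for the project's actual model class.
func classify(_ image: UIImage, completion: @escaping (String, Float) -> Void) {
    guard let ciImage = CIImage(image: image),
          let model = try? VNCoreMLModel(
              for: MobileNetV2(configuration: MLModelConfiguration()).model)
    else { return }

    // Vision wraps the Core ML model in a classification request.
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Results arrive sorted by confidence; take the top match.
        guard let top = (request.results as? [VNClassificationObservation])?.first
        else { return }
        DispatchQueue.main.async {
            completion(top.identifier, top.confidence)
        }
    }

    // Run the request off the main thread to keep the UI responsive.
    DispatchQueue.global(qos: .userInitiated).async {
        try? VNImageRequestHandler(ciImage: ciImage).perform([request])
    }
}
```

The completion handler hands back the identifier and confidence score that the SwiftUI view then displays.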
In developing this app, I focused on:
- Core ML and Vision Frameworks: Implemented Core ML and Vision requests for image classification, utilizing pre-trained models.
- SwiftUI Integration: Built a SwiftUI-based interface for a modern, declarative approach to UI design.
- Handling Image Inputs: Created a custom `ImagePicker` using `UIViewControllerRepresentable` to seamlessly manage image selection within a SwiftUI context.
- Core ML model integration and Vision request handling
- SwiftUI components and layout for an intuitive user interface
- UIKit and SwiftUI interoperability through `UIViewControllerRepresentable`
- Provides real-time feedback on object identification with high-confidence results.
- Lets users either select an image from the photo library or capture a new one with the camera.
- Handles edge cases gracefully when an image cannot be recognized.
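The `UIViewControllerRepresentable` bridge mentioned above can be sketched like this. It is an illustrative version, not the app's verbatim code; the binding name `selectedImage` and the `sourceType` property are assumptions showing how both the photo-library and camera options can share one wrapper.

```swift
import SwiftUI
import UIKit

// Sketch: bridging UIImagePickerController into SwiftUI.
// Property names here are illustrative, not the app's actual API.
struct ImagePicker: UIViewControllerRepresentable {
    var sourceType: UIImagePickerController.SourceType = .photoLibrary
    @Binding var selectedImage: UIImage?
    @Environment(\.dismiss) private var dismiss

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = sourceType   // .photoLibrary or .camera
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController,
                                context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    // The coordinator plays the UIKit delegate role and forwards
    // the picked image back into SwiftUI state.
    final class Coordinator: NSObject, UIImagePickerControllerDelegate,
                             UINavigationControllerDelegate {
        let parent: ImagePicker
        init(_ parent: ImagePicker) { self.parent = parent }

        func imagePickerController(
            _ picker: UIImagePickerController,
            didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]
        ) {
            parent.selectedImage = info[.originalImage] as? UIImage
            parent.dismiss()
        }

        func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
            parent.dismiss()
        }
    }
}
```

Presenting this with `.sheet` and switching `sourceType` is one common way to offer both library and camera input from a single SwiftUI view.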
For more information, feel free to reach out:
- Email: aranfononi@gmail.com
- LinkedIn: Aran Fononi