With the advancement of AI and machine learning, integrating image recognition features into React Native applications has become easier than ever. In this blog, we will explore how to implement AI-powered image recognition in a React Native app and generate suggestions based on the detected objects.
Adding image recognition pays off in several ways:
▪️Enhanced User Experience – AI-driven image recognition enables intuitive interactions by recognizing and categorizing objects in real time.
▪️Automation – Reduces manual effort by automatically tagging and suggesting relevant items.
▪️Improved Accessibility – Helps visually impaired users by describing images.
▪️Business Insights – Useful in e-commerce for product recommendations and in security for surveillance.
▪️Cost Efficiency – Reduces reliance on manual data entry and human intervention.
Common use cases include:
▪️E-commerce Apps: Auto-tagging products in images.
▪️Healthcare Apps: Identifying medical conditions from images.
▪️Security Apps: Recognizing faces, objects, or anomalies in surveillance footage.
▪️Social Media Apps: Categorizing and filtering uploaded content.
▪️Travel & Navigation Apps: Identifying landmarks and providing information.
We will use Firebase ML Kit for recognition, for a few reasons:
▪️Cloud-based & On-device APIs: Models can run in the cloud or entirely offline on the device.
▪️High Accuracy: Uses Google’s advanced machine learning models.
▪️Easy Integration: Works seamlessly with React Native.
▪️Scalability: Efficiently handles large datasets and multiple users.
Before we start, ensure you have the following:
▪️Node.js installed
▪️React Native 0.76 environment set up
▪️Firebase account (for ML Kit) or TensorFlow.js
▪️Android Studio / Xcode for emulator testing
ImageRecognitionApp/
│── android/ # Android project files
│── ios/ # iOS project files
│── src/ # Main source code
│ │── components/ # UI components
│ │── screens/ # Screens (CameraScreen, RecognitionScreen)
│ │── App.tsx # Entry point of the app
│── package.json # Dependencies and scripts
│── metro.config.js # Metro bundler configuration
│── babel.config.js # Babel configuration
│── index.js # Entry file
Run the following command to create a new React Native project:
npx @react-native-community/cli init ImageRecognitionApp
cd ImageRecognitionApp
We will use react-native-vision-camera for capturing images, Firebase ML Kit for image recognition, and React Navigation for moving between screens.
npm install react-native-vision-camera
npm install @react-native-firebase/app @react-native-firebase/ml
npm install @react-navigation/native @react-navigation/stack react-native-screens react-native-safe-area-context react-native-gesture-handler
For iOS, run:
cd ios && pod install
▪️Go to Firebase Console
▪️Create a new project
▪️Add your app (Android/iOS)
▪️Download and place google-services.json (Android) or GoogleService-Info.plist (iOS) in your project
▪️Enable ML Kit’s Vision API
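On the Android side, Firebase also needs the google-services Gradle plugin so the downloaded config file is picked up at build time. A typical setup looks like this (the plugin version is indicative; follow the setup instructions shown in the Firebase console for the current one):

```groovy
// android/build.gradle
buildscript {
  dependencies {
    // Google services Gradle plugin (version indicative)
    classpath 'com.google.gms:google-services:4.4.2'
  }
}

// android/app/build.gradle (at the bottom of the file)
apply plugin: 'com.google.gms.google-services'
```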
We will use react-native-vision-camera to take pictures.
Ensure the app requests the necessary permissions.
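Camera access also has to be declared at the platform level; react-native-vision-camera expects the standard entries below (adjust the usage description to your app's wording):

```xml
<!-- android/app/src/main/AndroidManifest.xml -->
<uses-permission android:name="android.permission.CAMERA" />
```

```xml
<!-- ios/ImageRecognitionApp/Info.plist -->
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to recognize objects in photos.</string>
```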
import React, {useEffect} from 'react';
import {createStackNavigator} from '@react-navigation/stack';
import {NavigationContainer} from '@react-navigation/native';
import {useCameraPermission} from 'react-native-vision-camera';
import CameraScreen from './src/screens/CameraScreen';
import RecognitionScreen from './src/screens/RecognitionScreen';

type RootStackParamList = {
  Camera: undefined;
  Recognition: {imageUri: string}; // Ensure this matches the parameter you're passing
};

const Stack = createStackNavigator<RootStackParamList>();

const App = () => {
  const {hasPermission, requestPermission} = useCameraPermission();

  useEffect(() => {
    if (!hasPermission) {
      requestPermission();
    }
  }, [hasPermission, requestPermission]);

  return (
    <NavigationContainer>
      <Stack.Navigator initialRouteName="Camera">
        <Stack.Screen name="Camera" component={CameraScreen} />
        <Stack.Screen name="Recognition" component={RecognitionScreen} />
      </Stack.Navigator>
    </NavigationContainer>
  );
};

export default App;
▪️React, useEffect: React core plus the effect hook for running logic on mount.
▪️createStackNavigator, NavigationContainer: Handle navigation between screens.
▪️useCameraPermission: Hook from react-native-vision-camera for managing camera permissions.
▪️CameraScreen, RecognitionScreen: The two screens in our app.
▪️RootStackParamList: Defines the types of the screen navigation parameters:
▪️Camera: No parameters.
▪️Recognition: Accepts an imageUri parameter (string).
▪️createStackNavigator<RootStackParamList>(): Creates the stack navigator and ensures correct TypeScript type checking.
▪️hasPermission: Indicates whether the app already has camera access.
▪️requestPermission(): Requests camera access if it has not been granted.
▪️useEffect(): Runs the permission request logic when the app starts.
▪️NavigationContainer: Wraps the navigation stack.
▪️Stack.Navigator: Starts with CameraScreen (initialRouteName="Camera") and navigates to RecognitionScreen when a picture is taken.
import React, {useRef} from 'react';
import {View, Text, TouchableOpacity, StyleSheet} from 'react-native';
import {Camera, useCameraDevice} from 'react-native-vision-camera';
import {useNavigation} from '@react-navigation/native';

const CameraScreen: React.FC = () => {
  const navigation = useNavigation();
  const cameraRef = useRef<Camera>(null);
  const device = useCameraDevice('back');

  const takePicture = async () => {
    if (cameraRef.current) {
      const photo = await cameraRef.current.takePhoto();
      navigation.navigate('Recognition', {imageUri: photo.path});
    }
  };

  return (
    <View style={styles.container}>
      {device && (
        <Camera
          ref={cameraRef}
          style={styles.camera}
          device={device}
          isActive={true}
          photo={true}
        />
      )}
      <TouchableOpacity onPress={takePicture} style={styles.button}>
        <Text style={styles.buttonText}>Take Picture</Text>
      </TouchableOpacity>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
  camera: {
    flex: 1,
  },
  button: {
    padding: 16,
    backgroundColor: 'blue',
    alignItems: 'center',
    borderRadius: 8,
  },
  buttonText: {
    color: 'white',
    fontSize: 16,
  },
});

export default CameraScreen;
▪️useNavigation(): Gets the navigation object to switch between screens.
▪️useRef<Camera>(null): Stores a reference to the camera component.
▪️useCameraDevice('back'): Selects the back camera for capturing images.
▪️cameraRef.current.takePhoto(): Captures an image from the camera.
▪️navigation.navigate('Recognition', {imageUri: photo.path}): Sends the image path to the RecognitionScreen for processing.
▪️Uses ref={cameraRef} to control the camera.
▪️device={device} ensures the back camera is used.
▪️isActive={true} keeps the camera running.
▪️photo={true} enables image capture.
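One practical note: on Android, takePhoto() returns a bare filesystem path, while components such as Image expect a file:// URI. A small helper (our own sketch, not part of react-native-vision-camera) can normalize the path before navigating:

```typescript
// Normalize a vision-camera photo path into a URI <Image> can load.
// Assumption: paths that already carry a scheme pass through unchanged.
const toFileUri = (path: string): string =>
  path.startsWith('file://') ? path : `file://${path}`;

console.log(toFileUri('/data/user/0/com.app/cache/photo.jpg'));
// → file:///data/user/0/com.app/cache/photo.jpg
console.log(toFileUri('file:///tmp/photo.jpg')); // → unchanged
```

With this helper, takePicture would pass toFileUri(photo.path) as imageUri instead of the raw path.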
Now, let’s process the captured image using Firebase ML Kit.
import React, {useEffect, useState} from 'react';
import {View, Text, Image, ActivityIndicator, StyleSheet} from 'react-native';
import ml from '@react-native-firebase/ml';
import {RouteProp} from '@react-navigation/native';
import {StackNavigationProp} from '@react-navigation/stack';

// Define navigation stack types
type RootStackParamList = {
  Recognition: {imageUri: string};
};

// Define props type for RecognitionScreen
interface RecognitionScreenProps {
  route: RouteProp<RootStackParamList, 'Recognition'>;
  navigation: StackNavigationProp<RootStackParamList, 'Recognition'>;
}

const suggestionsMap: Record<string, string[]> = {
  Dog: ['Buy dog food', 'Take your dog for a walk', 'Find pet-friendly parks'],
  Cat: [
    'Get cat treats',
    'Check out new scratching posts',
    'Look for cat grooming services',
  ],
  Car: [
    'Check fuel levels',
    'Schedule maintenance',
    'Look for nearby car washes',
  ],
  Food: ['Try new recipes', 'Order takeout', 'Visit a nearby restaurant'],
  Laptop: [
    'Update software',
    'Clean your keyboard',
    'Check for latest accessories',
  ],
  Book: ['Find similar books', 'Join a book club', 'Write a book review'],
};

const getSuggestions = (labels: string[]): string[] => {
  return labels.flatMap(label => suggestionsMap[label] || []);
};

const RecognitionScreen: React.FC<RecognitionScreenProps> = ({route}) => {
  const {imageUri} = route.params;
  const [labels, setLabels] = useState<string[]>([]);
  const [loading, setLoading] = useState<boolean>(true);

  useEffect(() => {
    const recognizeImage = async () => {
      try {
        const result = await ml().imageLabelerProcessImage(imageUri);
        const detectedLabels = result.map((label: {text: string}) => label.text);
        setLabels(detectedLabels);
      } catch (error) {
        console.error(error);
      }
      setLoading(false);
    };
    recognizeImage();
  }, [imageUri]);

  const suggestions = getSuggestions(labels);

  return (
    <View style={styles.container}>
      <Image source={{uri: imageUri}} style={styles.image} />
      {loading ? (
        <ActivityIndicator size="large" style={styles.loader} />
      ) : (
        labels.map((label, index) => (
          <Text key={index} style={styles.label}>
            {label}
          </Text>
        ))
      )}
      <View>
        <Text style={styles.suggestionTitle}>Suggestions:</Text>
        {suggestions.length > 0 ? (
          suggestions.map((item, index) => (
            <Text key={index} style={styles.suggestionItem}>
              - {item}
            </Text>
          ))
        ) : (
          <Text style={styles.noSuggestions}>No suggestions available</Text>
        )}
      </View>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    padding: 20,
  },
  image: {
    height: 300,
    resizeMode: 'contain',
  },
  loader: {
    marginTop: 20,
  },
  label: {
    fontSize: 16,
    marginVertical: 4,
  },
  suggestionTitle: {
    fontSize: 18,
    fontWeight: 'bold',
    marginTop: 20,
  },
  suggestionItem: {
    fontSize: 16,
    marginVertical: 2,
  },
  noSuggestions: {
    fontSize: 16,
    fontStyle: 'italic',
    marginTop: 10,
  },
});

export default RecognitionScreen;
▪️useEffect runs once when the component mounts (and again if imageUri changes).
▪️ml().imageLabelerProcessImage(imageUri): Sends the captured image to ML Kit's image labeler and returns the detected labels.
▪️Errors are logged if recognition fails.
▪️After processing, loading is set to false so the labels and suggestions render.
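The suggestion mapping itself is plain TypeScript, so it can be sanity-checked outside the app. A minimal standalone sketch, using a subset of the map above:

```typescript
// Subset of the RecognitionScreen suggestion map, for a standalone check.
const suggestionsMap: Record<string, string[]> = {
  Dog: ['Buy dog food', 'Take your dog for a walk', 'Find pet-friendly parks'],
  Book: ['Find similar books', 'Join a book club', 'Write a book review'],
};

// Unknown labels (e.g. 'Tree') simply contribute no suggestions.
const getSuggestions = (labels: string[]): string[] =>
  labels.flatMap(label => suggestionsMap[label] ?? []);

console.log(getSuggestions(['Dog', 'Tree', 'Book']).length); // → 6
console.log(getSuggestions(['Dog'])[0]); // → Buy dog food
```

Because unmatched labels fall back to an empty array, the UI's "No suggestions available" branch is reached only when none of the detected labels have an entry.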
In this blog, we explored how to integrate AI-powered image recognition in a React Native app using Firebase ML Kit. We also learned how to generate suggestions based on detected objects. AI-based image recognition provides automation, enhances accessibility, and improves user experience, making it valuable in multiple industries.
Nandkishor Shinde is a React Native Developer with 5+ years of experience, focusing primarily on emerging technologies like React Native and React.js. His expertise spans the domains of Blockchain and e-commerce, where he has actively contributed and gained valuable insights. His passion for learning is evident, as he always remains open to acquiring new knowledge and skills.