Today I am going to show you how to use the Camera module
within a React Native App. We will use the camera to take profile pictures and to recognize text via OCR
to search a word's definition. We will also perform IO operations within the app. Welcome back to Just Code Channel! Please support us by subscribing to our channel and liking our video. If you have not watched our last tutorial video, please do so by clicking on the link provided below. If you have any questions about this tutorial, please leave a comment below. In today's tutorial, we will continue to modify the Dictionary app we created in the previous tutorial to illustrate how to use the
device Camera and IO features. We will modify the existing Drawer Navigator to allow the user to tap a button to take a profile photo and save the photo to the device storage. We will also modify the existing Search page to allow the user to input text
for searching via the camera OCR. Before we start coding, we need to install a few more npm modules required by the new features we are going to build. Let's bring up our terminal screen. Make sure you are in the project root folder. The first module we are going to install is the react-native-camera module. Just type yarn add react-native-camera and press Enter. The most tedious part of React Native development is installing native npm modules that require special setup steps. The installation has completed. Let's bring up VS Code
with the project folder opened. In the iOS Info.plist file, specify the camera usage description as shown. This indicates to iOS that the app will be using the camera features.
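To give you an idea, a minimal sketch of that Info.plist entry could look like this (the wording of the usage string is up to you):

    <key>NSCameraUsageDescription</key>
    <string>This app uses the camera to take profile photos and scan text.</string>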
Next, open the AndroidManifest.xml file under the android folder and add the following permissions. These permissions allow the app to use the camera feature in the Android environment.
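A sketch of what those manifest entries might be; react-native-camera's setup guide of that era listed audio and storage permissions alongside CAMERA, though only CAMERA is strictly needed for taking photos:

    <uses-permission android:name="android.permission.CAMERA" />
    <!-- Optional, for video recording and saving to external storage: -->
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />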
In the android/app/build.gradle file, insert the missingDimensionStrategy setting under the defaultConfig block.
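As a sketch, the setting from react-native-camera's install guide looks like this ('general' is the flavor used when MLKit is not needed yet):

    defaultConfig {
        // ...existing settings...
        missingDimensionStrategy 'react-native-camera', 'general'
    }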
If we only wanted to use the camera to take photos, what we have done so far would be sufficient and we could proceed to the code. However, we also want to use the camera to perform optical character recognition (OCR), so there is additional setup to perform. We need to enable MLKit from Firebase to use the OCR feature. Let us first configure MLKit for iOS. In the iOS Podfile, add the pod config for the camera module with TextDetector as shown.
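A sketch of the pod entry, following the subspec syntax from react-native-camera's documentation:

    pod 'react-native-camera', path: '../node_modules/react-native-camera', subspecs: [
      'TextDetector'
    ]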
Save the Podfile and run pod install via the terminal screen. After pod install has completed, we need to set up a project
in the Google Firebase Console. Let's browse to
Firebase Console in the browser. You need to have a valid Google
account to access the console. In the Firebase Console, click on the "Create project" button. Enter the project name; you can use any name, but make it meaningful. For this example, we will enter JustCodeDict and click on the "Continue" button. Accept the default settings and click on the "Continue" button. Choose a Google Analytics account from the dropdown list. If there is no existing account, then select the "Create a new account" option. For now, we will select the Google Ads account and click on the "Create project" button. Firebase will now create the project;
let's wait for the process to complete. Once the process has completed, click on the "Continue" button. On the project page, click on the iOS icon to add an iOS app. Key in the iOS bundle ID; we will leave the rest of the fields empty and click on the "Register app" button. Download the GoogleService-Info.plist file. Open the JustCodeDict workspace in Xcode, and the folder where the downloaded file is located. Drag the GoogleService-Info.plist file from the folder into the Xcode application folder. Add pod 'Firebase/Core' to the iOS Podfile and run pod install. In the iOS AppDelegate.m file, import the Firebase header file and call the FIRApp configure method in the didFinishLaunchingWithOptions method.
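A sketch of the AppDelegate.m change, following the standard Firebase iOS setup:

    #import <Firebase.h>

    - (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
    {
      // Configure Firebase before the rest of the app starts up.
      if ([FIRApp defaultApp] == nil) {
        [FIRApp configure];
      }
      // ...existing setup code...
      return YES;
    }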
Next, we will configure MLKit for Android. First, we will modify the missingDimensionStrategy setting under the defaultConfig block in the android/app/build.gradle file by changing 'general' to 'mlkit', and save the file. Now, bring up the Firebase Console in the browser. Click on the Android button
to create an Android app. Key in the Android package name as shown and click on the "Register app" button. Click on the
"Download google-services.json" button. Copy the downloaded file to android/app folder. In the android/build.gradle file, add in the classpath for google services
under the dependencies block as shown. Now, open the android/app/build.gradle file and add in the implementation in the dependencies block as shown. In the same file, scroll till the end of the file, add in the apply plugin setting as shown. Now, open the AndroidManifest.xml and add in the meta-data tag under the application tag as shown. This will enable automatically to download the ML model to the device
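Here is a sketch of those Gradle changes in one place (the version numbers are examples from that era; use whatever the Firebase console suggests):

    // android/build.gradle, under buildscript > dependencies:
    classpath 'com.google.gms:google-services:4.3.3'

    // android/app/build.gradle, under dependencies:
    implementation 'com.google.firebase:firebase-core:17.2.2'

    // ...and at the very end of android/app/build.gradle:
    apply plugin: 'com.google.gms.google-services'

And the meta-data tag inside the application tag of AndroidManifest.xml, which tells Firebase to download the OCR model automatically:

    <meta-data
        android:name="com.google.firebase.ml.vision.DEPENDENCIES"
        android:value="ocr" />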
We are almost done with the camera module. However, with the camera module installed, the Android app will generate a dex file that exceeds the
maximum of 64K methods allowed. To overcome this issue, we need to enable multidex in the Android build. To do that, we add the multiDexEnabled setting under the defaultConfig block in the android/app/build.gradle file, and then, in the same file, add the implementation for multidex in the dependencies block.
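A sketch of both changes in android/app/build.gradle (the artifact name assumes an AndroidX project):

    defaultConfig {
        // ...existing settings...
        multiDexEnabled true
    }

    dependencies {
        // ...existing dependencies...
        implementation 'androidx.multidex:multidex:2.0.1'
    }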
Now, open MainApplication.java in the android folder and change "extends Application" to "extends MultiDexApplication" as shown. Save the MainApplication.java file and clean the Android build files with the gradlew clean command as shown. The next module we are going to install is the React Native FileSystem module. In the terminal screen, key in yarn add react-native-fs and press Enter. Once the installation has completed, open the iOS Podfile and add the pod config for RNFS as shown. In the same Podfile, scroll to the add_flipper_pods block. Change the Flipper version to 0.37.0. This change fixes an Image component
bug when displaying base64 images in iOS. Now, let's run the pod install command to complete the installation of the modules. We have completed all the module installations. To verify that MLKit in Firebase is set up correctly, let's bring out the Firebase console. On the Firebase Android app page, click on item 4 and you will see the status of the installation. Now, let's run the JustCodeDict app on the Android device. Once the app runs on the device, the Firebase console will update the status to successful. Close the Android app page in the Firebase console. Select the iOS app, then click on item 5 to verify the setup. Now, let's run JustCodeDict in the iOS simulator. We will see Firebase update the status to successful. We have verified the MLKit setup in Firebase for the camera module. Next, we will start modifying the source code to add the camera features. First, we will create a custom component to wrap around the
react-native-camera module. Let's bring out VS Code and create a camera folder
under the src/components folder. Under it, we will create an index.js file. At the beginning of the file, we will import all the required modules, including the RNCamera module. After that, we will export the RNCamera constants. The three dots in front of RNCamera tell the compiler to spread the properties of the RNCamera.Constants object into the exported object. Next, we will create a Camera class component. Let's leave the component structure empty first. We will proceed to define
the component propTypes. The first property is cameraType, with the PropTypes set to any. This property indicates which camera, front or back, the app will use. The next property is flashMode, also set to PropTypes.any. This property turns the flash on or off, or sets it to auto. Next is the autoFocus property; it tells the camera to turn the autofocus feature on or off. The whiteBalance property tells the camera which built-in white balance profile to use. You can set the whiteBalance to sunny, cloudy, shadow, fluorescent, or auto. The ratio property tells the camera to save the captured image at the ratio defined in the property. It is a string PropType; for example, we can have a '1:1', '4:3' or '16:9' ratio. The quality property tells the camera to save the captured
image at a specified quality. It is a numeric value in the range 0 to 1, where 0 is the lowest quality and 1 is the best quality. The imageWidth property is used to set the width of the captured image. It is a numeric type. The height of the image will be calculated automatically based on the ratio. The style property sets how the camera is displayed and is an object type. The onCapture property holds the method for the
camera component to call back when it has captured the image. This method accepts two parameters: the data parameter, which contains the base64 image, and the recognizedText parameter, which will contain the recognized text when the enableOCR property is enabled. The last property is the onClose method. It is used to perform a callback to the caller when the user closes the camera
without taking a picture. Now, we will set the
default values for all the properties. First, we will set the cameraType default to the back camera, then set the flashMode to off. We will set the autoFocus to on, whiteBalance to auto, ratio to 4:3, quality to 50%, imageWidth to 768 pixels, style to null, onCapture to null, enableOCR to false, and onClose to null.
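Here is a minimal sketch of the file so far, covering the imports, the constants export, and the propTypes and defaults just described. The names follow the narration; treat it as an outline rather than the exact source:

    import React from 'react';
    import { View, TouchableOpacity, Image } from 'react-native';
    import PropTypes from 'prop-types';
    import { RNCamera } from 'react-native-camera';

    // Re-export the RNCamera constants so callers don't import RNCamera themselves.
    export const Constants = { ...RNCamera.Constants };

    export default class Camera extends React.Component {
      // Component body comes next.
    }

    Camera.propTypes = {
      cameraType: PropTypes.any,
      flashMode: PropTypes.any,
      autoFocus: PropTypes.any,
      whiteBalance: PropTypes.any,
      ratio: PropTypes.string,
      quality: PropTypes.number,
      imageWidth: PropTypes.number,
      style: PropTypes.object,
      onCapture: PropTypes.func,
      enableOCR: PropTypes.bool,
      onClose: PropTypes.func,
    };

    Camera.defaultProps = {
      cameraType: RNCamera.Constants.Type.back,
      flashMode: RNCamera.Constants.FlashMode.off,
      autoFocus: RNCamera.Constants.AutoFocus.on,
      whiteBalance: RNCamera.Constants.WhiteBalance.auto,
      ratio: '4:3',
      quality: 0.5,     // 50% quality
      imageWidth: 768,  // height follows from the ratio
      style: null,
      onCapture: null,
      enableOCR: false,
      onClose: null,
    };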
Next, we will define all the styles required by the camera component; we will not go into detail on the style definitions. Let us scroll back to
the Camera class component. We first define the camera object variable to hold the RNCamera object reference, setting its initial value to null. Next, we will define the component state. The state will consist of the cameraType, flashMode, and recognizedText objects. You might ask why we need the cameraType and flashMode state when we already have the cameraType and flashMode properties. The reason is that the user can change the camera type and flash mode within the component, so we need state to hold the current values. We will initialize the default values for the state as shown. After that, we will define the componentDidMount method. Within this method, we will set the cameraType and flashMode state to the values passed in via the properties.
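A sketch of that part of the class:

    camera = null; // will hold the RNCamera ref

    state = {
      cameraType: RNCamera.Constants.Type.back,
      flashMode: RNCamera.Constants.FlashMode.off,
      recognizedText: null,
    };

    componentDidMount() {
      // Seed the state from the props; the user can change both afterwards.
      this.setState({
        cameraType: this.props.cameraType,
        flashMode: this.props.flashMode,
      });
    }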
Now, we will define the render method for the component. We will create a view as the component container. Within the container view, we will insert the RNCamera component. In the RNCamera component, we first assign the component reference to the camera class variable. We will need to use this
reference later to take a picture. Next, we will set the style, camera type, flash mode, ratio, capture audio, autofocus,
white balance, permission options, and the onTextRecognized method. We set captureAudio to false, as we don't need audio for taking a picture or recognizing text. The onTextRecognized property holds the callback method used when enableOCR is enabled. This is how we receive the recognized text whenever the camera detects text in the image. Under the RNCamera component, we will create a view that holds a flash mode toggle button, a capture button,
and a camera type switch button. The flash mode toggle button cycles the flash mode from off to auto, auto to on, and on back to off. The toggle button image will change according to the current mode. The capture button will fire the takePicture method to take a photo; we will define the takePicture method in a short while. The camera type switch button will only be displayed when enableOCR is false. This button toggles between the front and back cameras. Last but not least, we will create a button to allow the user to close the camera without taking a picture.
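A rough sketch of the render method; the button row is abbreviated to comments to keep it short:

    render() {
      return (
        <View style={this.props.style}>
          <RNCamera
            ref={ref => { this.camera = ref; }}
            style={styles.preview}
            type={this.state.cameraType}
            flashMode={this.state.flashMode}
            ratio={this.props.ratio}
            captureAudio={false}
            autoFocus={this.props.autoFocus}
            whiteBalance={this.props.whiteBalance}
            androidCameraPermissionOptions={{
              title: 'Permission to use the camera',
              message: 'The app needs your permission to use the camera',
              buttonPositive: 'Ok',
              buttonNegative: 'Cancel',
            }}
            onTextRecognized={this.props.enableOCR ? this.onTextRecognized : null}
          />
          {/* Flash toggle, shutter, and camera-switch buttons go here;
              the switch button renders only when enableOCR is false. */}
          {/* Close button to dismiss the camera without taking a picture. */}
        </View>
      );
    }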
Now, let us define the takePicture method. We will define it as an async method. First, we check whether the camera variable is linked to the RNCamera ref. If the camera variable is defined, we will call the camera's takePictureAsync method, passing in options that control the quality, width, and other settings. Once we receive the camera data from the takePictureAsync method, we will call the onCapture callback method, if defined, passing in the base64 image and the recognizedText state.
The next method we need to define is the onTextRecognized method. This method receives the recognized text object via the data parameter and sets the data into the state only if enableOCR is true and data.textBlocks contains recognized text.
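A sketch of both methods; the option names follow react-native-camera's takePictureAsync API:

    takePicture = async () => {
      if (this.camera) {
        const options = {
          quality: this.props.quality,
          base64: true,                // we want the base64 data back
          width: this.props.imageWidth,
        };
        const data = await this.camera.takePictureAsync(options);
        if (this.props.onCapture) {
          this.props.onCapture(data.base64, this.state.recognizedText);
        }
      }
    };

    onTextRecognized = data => {
      // Keep the result only when OCR is enabled and text was actually found.
      if (this.props.enableOCR && data && data.textBlocks && data.textBlocks.length > 0) {
        this.setState({ recognizedText: data });
      }
    };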
We have completed the Camera custom component. Let's start to modify the existing app to allow the user to take a profile picture. Open the App.js file and import the Camera component and the React Native File System module, RNFS. In the App component, we will define the showCamera
and profilePhoto state. The showCamera state is used to decide whether to show the Camera component depending on user action, and the profilePhoto state will hold the user's profile photo. The profile photo can be the default icon.png file or a file captured by the camera. Next, we will define the componentDidMount method. In this method, we will check whether a file named profilePic.png exists in the DocumentDirectoryPath using the RNFS.exists method. If it exists, we read the file content using RNFS.readFile and update the profilePhoto state with an object carrying a uri property. This is the object shape required by the Image component to display a base64 image.
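A sketch of that componentDidMount, using the RNFS calls just described; the data-URI prefix is an assumption about how the Image component consumes base64 data here:

    async componentDidMount() {
      const path = RNFS.DocumentDirectoryPath + '/profilePic.png';
      const exists = await RNFS.exists(path);
      if (exists) {
        // Read the saved photo back as base64 and wrap it the way Image expects.
        const buffer = await RNFS.readFile(path, 'base64');
        this.setState({ profilePhoto: { uri: 'data:image/png;base64,' + buffer } });
      }
    }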
Next, we will define the saveProfilePhoto method. This method will save the base64 image returned by the camera into the device storage. It first defines the path of the image to be saved. Then, it strips off the base64 image prefix in the data parameter, if any. It then calls the RNFS.writeFile method to save the imgData to the specified path. Once the file is saved, it updates the profilePhoto state with the image data.
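A sketch of the saveProfilePhoto method under those same assumptions:

    saveProfilePhoto = data => {
      const path = RNFS.DocumentDirectoryPath + '/profilePic.png';
      // Strip the data-URI prefix, if the camera returned one.
      const imgData = data.replace(/^data:image\/\w+;base64,/, '');
      RNFS.writeFile(path, imgData, 'base64')
        .then(() => {
          this.setState({
            showCamera: false,
            profilePhoto: { uri: 'data:image/png;base64,' + imgData },
          });
        })
        .catch(err => console.log(err.message));
    };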
In the render method, we will add the Camera component, which will display only when the showCamera state is true. Since this Camera component is for taking profile photos, we will set the cameraType to the front camera. We will set the flashMode to off, autoFocus to on, whiteBalance to auto, and the ratio to 1:1 so that it suits a profile photo. We will also set the quality to 50% and the image width to 800 pixels. We will set the onCapture event to the saveProfilePhoto method we defined just now. Next, we define the onClose event to set the showCamera state to false when the user closes the camera without taking a photo.
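Putting those props together, the usage inside App's render method might look roughly like this:

    {this.state.showCamera && (
      <Camera
        cameraType={Constants.Type.front}
        flashMode={Constants.FlashMode.off}
        autoFocus={Constants.AutoFocus.on}
        whiteBalance={Constants.WhiteBalance.auto}
        ratio={'1:1'}
        quality={0.5}
        imageWidth={800}
        onCapture={data => this.saveProfilePhoto(data)}
        onClose={() => this.setState({ showCamera: false })}
      />
    )}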
Now, we need to add a trigger in the UI to allow the user to bring up the camera. We will add a camera button beside the profile photo. Let's scroll to the
DrawerContent functional component. We will create a View to wrap around
the existing Image component. We then change the
image source from icon.png to the profilePhoto property. Below the Image component, we add a TouchableOpacity component, and within it, an Icon component. In the TouchableOpacity onPress event, we will call the toggleCamera method from the props, which we will define later. This toggleCamera method will change the showCamera state in the App component so that the Camera component shows up. Take note of how we write the code that calls the toggleCamera method: the first part checks whether props.toggleCamera is defined, and the second part invokes props.toggleCamera if the first part is true. This is a shorthand way of writing the code, instead of using an if-else statement.
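In code, the pattern is just a short-circuit evaluation:

    // Calls toggleCamera only when the prop was actually supplied.
    props.toggleCamera && props.toggleCamera();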
Since DrawerContent is used by the DrawerNav component and expects to receive the profilePhoto and toggleCamera properties, we need to modify the DrawerNav component to pass the two properties in from the DrawerNav props. DrawerNav acts as a proxy when passing the profilePhoto and toggleCamera properties, so we need to make sure these two properties are available when DrawerNav is invoked. Let's scroll to the render
method of the App component. We will define the toggleCamera method to toggle the showCamera state as shown, and then assign the profilePhoto property from the profilePhoto state.
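A sketch of how DrawerNav might be invoked with the two properties (the exact JSX depends on how the navigator was set up in the earlier tutorial):

    <DrawerNav
      profilePhoto={this.state.profilePhoto}
      toggleCamera={() => this.setState({ showCamera: !this.state.showCamera })}
    />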
What we have done so far is change the showCamera state in the App class component when the user taps the camera icon in the DrawerContent functional component. Since App, DrawerNav, and DrawerContent are different components, DrawerContent cannot directly set the showCamera state in the App component. That is why we need to pass the toggleCamera method down, layer by layer, to the DrawerContent component. This prop drilling is one of the major drawbacks of using component state in a deeply nested situation. Luckily, we have React Redux to come to the rescue, which I will cover in a future tutorial. We have completed the coding
for changing the user profile photo. Now, let's run the app on an Android device, and I will step through the code
to let you see the logic flow. In VS Code, let's place a few breakpoints in the App.js and camera/index.js files. Now, we will debug the application using VS Code. Select the Debug Android option and wait for the application to run on the Android device. Once the app has launched on the device, you will see that the breakpoint in the componentDidMount method is hit. Press F5 to run to the next breakpoint; you will see the file does not exist, so it skips the readFile method. Now, bring out the side menu in the
app and tap on the camera button. The breakpoint in the
DrawerContent component will be hit. If we press F5, it will fire the toggleCamera method defined in the App render method when invoking DrawerNav, as shown. It will then toggle the showCamera state. When we press F5 to continue, the camera component will display. Let's take a selfie as the profile photo by tapping on the shutter button. The breakpoint in the saveProfilePhoto method will be hit. It will get the path to save the photo. Press F5 to continue and the next breakpoint will be hit. At this stage, the photo has been saved to the device storage successfully. Press F5 to update the state with the imgData, and the selfie we just took will appear as the profile photo. Now, let's reload the application to hit the breakpoint in the componentDidMount method again. Press F5 to execute the RNFS.exists method. The exist variable will be true, which means that
the profile photo exists in the system. We will press F5 again to read the profile photo via the RNFS.readFile method. We can see that the buffer variable contains the profile photo's base64 image data. Press F5 again to update the profilePhoto state. Bring out the side menu again and we will see the profile photo displaying the selfie we took earlier. We have completed the profile photo feature in the app. Next, we will modify the existing Search
page to use the text recognition feature to input a word to search. Before making any changes
in the Search page, we need to create a custom component to allow the user to select a word from a list of words
returned by the text recognition. First, create a wordSelector folder
under the components folder and then create an index.js file. Let's import the required modules
at the beginning of the file. Then we will create a
class component named WordSelector. Before implementing the component, let's define the component's properties and styles. We have the wordBlock property
to allow the caller to pass in the wordBlock object returned
by the camera module. The onWordSelected property is a callback function invoked when the user selects a word. Now, we will define the initial state for the WordSelector component. We will have the selectedWordIdx and wordList state. The selectedWordIdx state keeps track of the index of the word the user has selected. The wordList state will store an array of words converted from the wordBlock property. Next, we define the componentDidMount method, which will convert the wordBlock property into a single-dimension array and update the wordList state. After that, we will create a populateWords method that returns a list of TouchableHighlight components, one for each word in the wordList state. Each TouchableHighlight component handles the user's onPress event to update the selectedWordIdx state based on the word the user selected.
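Here is a sketch of the two methods. The exact shape of wordBlock comes from RNCamera's onTextRecognized payload, so treat the flattening logic as an approximation:

    componentDidMount() {
      const words = [];
      // Flatten the nested text blocks into a plain array of words.
      if (this.props.wordBlock && this.props.wordBlock.textBlocks) {
        this.props.wordBlock.textBlocks.forEach(block => {
          block.value.split(/\s+/).forEach(word => {
            if (word.length > 0) {
              words.push(word);
            }
          });
        });
      }
      this.setState({ wordList: words });
    }

    populateWords = () => {
      return this.state.wordList.map((word, idx) => (
        <TouchableHighlight
          key={idx}
          underlayColor={'#eeeeee'}
          onPress={() => this.setState({ selectedWordIdx: idx })}>
          <Text>{word}</Text>
        </TouchableHighlight>
      ));
    };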
In the render method, we will display a prompt for the user, call the populateWords method to display all the words, and wrap them in a ScrollView component. At the end of the screen, we will place an Ok button for the user to confirm the word selection. Once the user taps on the button, it will perform a callback function
via the onWordSelected property. Next, we need to modify
the existing search/index.js file to use the Camera and WordSelector
to perform word input for searching. We will begin by importing the
Camera and WordSelector modules. Then we will modify the existing state to add the showCamera, showWordList, and recogonizedText state. We will add an onOCRCapture method to handle the Camera component's onCapture event. In this method, we will update the states to hide the camera, show the WordSelector component, and update the recogonizedText state; we will sketch this method together with the next one below. The next method we are going to add
is the onWordSelected method. It will handle the WordSelector
component's onWordSelected event by updating the userWord state and hiding the WordSelector component. It will then wait 500 milliseconds before firing the onSearch method to perform a word search. The reason for the 500-millisecond wait is that setState is asynchronous and the states take time to update; if we fired the onSearch method without waiting, the userWord state might not yet contain the word to search.
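A sketch of the two handlers (the state name recogonizedText follows the narration):

    onOCRCapture = recogonizedText => {
      // Hide the camera and show the word list with the OCR result.
      this.setState({
        showCamera: false,
        showWordList: true,
        recogonizedText: recogonizedText,
      });
    };

    onWordSelected = word => {
      this.setState({ userWord: word, showWordList: false });
      // setState is asynchronous, so give it time before searching.
      setTimeout(() => this.onSearch(), 500);
    };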
Now we will modify the TextInput component by adding a TouchableOpacity component and wrapping both in a View component. When the user taps on the TouchableOpacity component, it will update the showCamera state to show the Camera component. Next, we add the Camera component
after the SafeAreaView component. The Camera component will only display if the showCamera state is true. We also assign the onOCRCapture method to the onCapture event. In the onClose event, we will update the showCamera state to false. After that, we will add the WordSelector component, passing the recogonizedText state to the wordBlock property and assigning the onWordSelected event. Finally, we will update the style definitions. We have completed all the changes,
so let us save the files. Before we step through the code, let's remove all existing breakpoints and place new breakpoints in
those files we just modified. Now, tap on the camera icon in the app and take a picture of some text. The onOCRCapture method will be fired, and we can see that recogonizedText contains a complex data structure. Let's press F5 to continue, and the populateWords method in WordSelector will be fired. However, the component is not yet mounted, so populateWords will return an empty array. Press F5 again and componentDidMount will be fired. The componentDidMount method will break down the complex data passed via the recogonizedText state into the wordBlock property, producing a single-dimension array as shown. Press F5 again and populateWords will be fired again. However, this time we have the
word array in the wordList state. Press F5 to go to the last line
of the populateWords method. We will see that the wordViews array contains the list of TouchableHighlight component objects to be returned. Press F5 again and the WordSelector will appear with a list of words for the user to select. Tap on a word in the list and you will see the Ok button become enabled. Tap on the Ok button, and the onWordSelected method will be fired. It receives the selected word via the word parameter, as shown. Now, press F5 to continue the execution. The WordSelector component will be hidden, and the Search page will perform the word lookup via the Oxford API. We have come to the end of this tutorial. Today, I have shown you how to install and set up the React Native
Camera and FileSystem modules, and how to use them to take a profile photo, perform text recognition, and carry out IO operations. If you have any questions regarding this tutorial, please feel free to leave a comment below. You can follow the link to get the working source code for the app in this tutorial on our Github page. Don't forget to change the Oxford API app ID and key to yours, and to generate your own Firebase config files for iOS and Android. If you find our video useful, do support us by liking our video, subscribing to our channel, and clicking the notification bell! Thanks for watching, bye!