Creating our application
Let's create a very basic Android application that will read images from your phone's gallery and display them on the screen using the ImageView control. The application will also have a menu option to open the gallery to choose an image.
We will start by creating a new Eclipse (or Android Studio) project with a blank activity, and let's call our application Features App.
Note
Before doing anything else, initialize OpenCV in your application (refer to Chapter 1, Applying Effects to Images, for how to initialize OpenCV in an Android project).
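As a quick reminder of what that initialization looks like (the details are in Chapter 1), the asynchronous approach uses OpenCVLoader together with a BaseLoaderCallback. The version constant below is an assumption and should match the OpenCV SDK you linked against:
private BaseLoaderCallback mOpenCVCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        if (status == LoaderCallbackInterface.SUCCESS) {
            // OpenCV native libraries are loaded; it is now safe to use Mat, Utils, and so on
        } else {
            super.onManagerConnected(status);
        }
    }
};

@Override
protected void onResume() {
    super.onResume();
    // OPENCV_VERSION_2_4_10 is an assumption; use the constant matching your SDK
    OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_10, this, mOpenCVCallback);
}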
To the blank activity, add an ImageView control (used to display the image), as shown in the following code snippet:
<ImageView
    android:id="@+id/image_view"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:visibility="visible"/>
In the application menu, add an OpenGallery menu option to open the phone's gallery and help us pick an image. For this, add a new menu item in the project's menu resource XML file (the default location of the file is /res/menu/filename.xml), as follows:
<item
    android:id="@+id/OpenGallery"
    android:title="@string/OpenGallery"
    android:orderInCategory="100"
    android:showAsAction="never" />
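The android:title attribute refers to a string resource, so make sure res/values/strings.xml has a matching entry; a minimal one (the display text here is just an example) looks like this:
<string name="OpenGallery">Open Gallery</string>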
Tip
For more detailed information on menus in Android, refer to http://developer.android.com/guide/topics/ui/menus.html.
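For the new item to show up, the menu resource also has to be inflated in onCreateOptionsMenu(). The template-generated implementation typically looks like the following sketch (the resource name menu_main is an assumption and should match your menu XML file):
@Override
public boolean onCreateOptionsMenu(Menu menu) {
    // Inflate the menu; this adds the OpenGallery item to the action bar overflow
    getMenuInflater().inflate(R.menu.menu_main, menu);
    return true;
}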
Let's now make the OpenGallery menu option functional. The Android API exposes a public boolean onOptionsItemSelected(MenuItem item) callback that allows the developer to handle option selection events. In this function, we will add code that opens the phone's gallery so that the user can pick an image. The Android API provides a predefined intent action, Intent.ACTION_PICK, for exactly this task, that is, to open the gallery and pick an image. We will use this intent in our application, as follows:
Intent intent = new Intent(Intent.ACTION_PICK, Uri.parse("content://media/internal/images/media"));
Let's modify the public boolean onOptionsItemSelected(MenuItem item) function to suit our needs.
The final implementation of the function should look like this:
public boolean onOptionsItemSelected(MenuItem item) {
    // Handle action bar item clicks here. The action bar will
    // automatically handle clicks on the Home/Up button, so long
    // as you specify a parent activity in AndroidManifest.xml.
    int id = item.getItemId();

    //noinspection SimplifiableIfStatement
    if (id == R.id.action_settings) {
        return true;
    } else if (id == R.id.OpenGallery) {
        // Launch the gallery picker and wait for it to return an image
        Intent intent = new Intent(Intent.ACTION_PICK, Uri.parse("content://media/internal/images/media"));
        startActivityForResult(intent, 0);
    }
    return super.onOptionsItemSelected(item);
}
This code is mostly a set of easy-to-understand if-else statements. The part you need to understand here is the startActivityForResult() call. As you might have realized, we want to bring the image data from the ACTION_PICK intent back into our application so that we can later use it as input for our feature detection algorithms. For this reason, instead of calling startActivity(), we call startActivityForResult(). After the user is done with the launched activity, the system calls the onActivityResult() function with the result from the called intent, which is the gallery picker in our case. Our job now is to implement the onActivityResult() function in accordance with our application's needs. Let's first enumerate what we want to do with the returned image: not much, actually; correct the orientation of the image and display it on the screen using the ImageView that we added to our activity at the beginning of this section.
Note
You must be wondering what is meant by correcting the orientation of an image. On any Android phone, there can be multiple sources of images, such as the native camera application, the Java camera application, or any other third-party app, and each of them might capture and store images differently. When you load these images in your application, they may turn out to be rotated by some angle. Before these images can be used in our application, we should correct their orientation so that they appear meaningful to the application's users. We will take a look at the code to do this now.
The following is the onActivityResult() function for our application:
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == 0 && resultCode == RESULT_OK && null != data) {
        Uri selectedImage = data.getData();
        String[] filePathColumn = {MediaStore.Images.Media.DATA};
        Cursor cursor = getContentResolver().query(selectedImage, filePathColumn, null, null, null);
        cursor.moveToFirst();
        int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
        String picturePath = cursor.getString(columnIndex);
        cursor.close();
        // String picturePath contains the path of the selected image

        // To speed up loading of the image
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inSampleSize = 2;
        Bitmap temp = BitmapFactory.decodeFile(picturePath, options);

        // Get orientation information
        int orientation = 0;
        try {
            ExifInterface imgParams = new ExifInterface(picturePath);
            orientation = imgParams.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_UNDEFINED);
        } catch (IOException e) {
            e.printStackTrace();
        }

        // Rotate the image to get the correct orientation
        originalBitmap = rotateBitmap(temp, orientation);

        // Convert Bitmap to Mat
        Bitmap tempBitmap = originalBitmap.copy(Bitmap.Config.ARGB_8888, true);
        originalMat = new Mat(tempBitmap.getHeight(), tempBitmap.getWidth(), CvType.CV_8U);
        Utils.bitmapToMat(tempBitmap, originalMat);

        currentBitmap = originalBitmap.copy(Bitmap.Config.ARGB_8888, false);
        loadImageToImageView();
    }
}
Let's see what this long piece of code does. First, we perform a sanity check and verify that the result comes from the appropriate intent (that is, the gallery picker) by checking requestCode and resultCode. After this is done, we retrieve the path of the image in the phone's filesystem. From the ACTION_PICK intent, we get the Uri of the selected image, which we store in Uri selectedImage. To get the exact path of the image, we make use of the Cursor class. We initialize a new Cursor object pointing to our selectedImage. Using MediaStore.Images.Media.DATA, we fetch the column index of the image path, and then the path itself using the cursor declared earlier, and store it in the string picturePath. Once we have the path of the image, we create a new Bitmap object, temp, to hold the image (the inSampleSize option downsamples the image by a factor of two to speed up loading). So far, we have been able to read the image and store it in a bitmap object. Next, we need to correct the orientation. For this, we first extract the orientation information from the image using the ExifInterface class. As you can see in the code, the ExifInterface class gives us the orientation information through ExifInterface.TAG_ORIENTATION. Using this orientation information, we rotate our bitmap accordingly with the rotateBitmap() function.
Note
For the implementation of the rotateBitmap() function, refer to the code bundle that accompanies this book.
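The code bundle has the authors' version; as a rough guide, a minimal sketch of such a helper, assuming it receives the raw EXIF orientation constant (and ignoring the mirrored orientations), might look like this:
private static Bitmap rotateBitmap(Bitmap source, int exifOrientation) {
    // Map the EXIF orientation constant to a rotation angle in degrees
    float degrees;
    switch (exifOrientation) {
        case ExifInterface.ORIENTATION_ROTATE_90:
            degrees = 90;
            break;
        case ExifInterface.ORIENTATION_ROTATE_180:
            degrees = 180;
            break;
        case ExifInterface.ORIENTATION_ROTATE_270:
            degrees = 270;
            break;
        default:
            // Undefined or normal orientation: nothing to do
            return source;
    }
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees);
    return Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
}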
After correcting the orientation, we make two copies of the bitmap: one to store the original image (originalBitmap) and the other to store the processed bitmaps (currentBitmap), that is, the outputs of the different algorithms applied to the original bitmap. The only part left is to display the image on the screen. Create a new function, loadImageToImageView(), and add the following lines to it:
private void loadImageToImageView() {
    ImageView imgView = (ImageView) findViewById(R.id.image_view);
    imgView.setImageBitmap(currentBitmap);
}
The first line gets a reference to the ImageView we added to the layout, and the second line sets the bitmap on that view. Simple!
One last thing and our application is ready! Since our application reads images from external storage, it needs permission to do so. Add the following line to the AndroidManifest.xml file to allow the application to read data from external storage:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
Now that we have our basic application in place, let's take a look at the different feature detection algorithms, starting with Edge and Corner detection, Hough transformation, and Contours.