- Mastering OpenCV Android Application Programming
- Salil Kapur, Nisarg Thakkar
Linear filters in OpenCV
We all like sharp images. Who doesn't, right? However, there is a trade-off to be made: an image with more information requires more computation time for the same task than an image with less information. So, to solve this problem, we apply blurring operations.
Many of the linear filtering algorithms make use of an array of numbers called a kernel. A kernel can be thought of as a sliding window that passes over each pixel and calculates the output value for that pixel. This can be understood more clearly by taking a look at the following figure (this image of linear filtering/convolution is taken from http://test.virtual-labs.ac.in/labs/cse19/neigh/convolution.jpg):

In the preceding figure, a 3 x 3 kernel is used on a 10 x 10 image.
One of the most general operations used for linear filtering is convolution. The values in a kernel act as coefficients that multiply the corresponding pixels underneath it, and the sum of these products is stored at the anchor point, which is generally the center of the kernel:

Note
Linear filtering operations are generally not in-place operations, as for each pixel we use the values present in the original image, and not the modified values.
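To make the sliding-window idea concrete, here is a minimal, illustrative sketch (not OpenCV's actual implementation) of convolving a single-channel Mat with a small kernel; the convolve helper and its float[][] kernel parameter are hypothetical names introduced only for this example, and the sketch also shows why the result must be written to a separate output Mat:

// Illustrative 2D convolution for a single-channel Mat.
// In practice we use Imgproc.filter2D(), introduced later in this chapter.
Mat convolve(Mat src, float[][] kernel) {
    int kRows = kernel.length, kCols = kernel[0].length;
    int anchorY = kRows / 2, anchorX = kCols / 2;
    // The output is a separate Mat: every result is computed from the
    // ORIGINAL pixel values, never from already-modified ones.
    Mat dst = Mat.zeros(src.size(), src.type());
    // Border pixels are simply left untouched in this sketch.
    for (int y = anchorY; y < src.rows() - anchorY; y++) {
        for (int x = anchorX; x < src.cols() - anchorX; x++) {
            double sum = 0;
            for (int ky = 0; ky < kRows; ky++) {
                for (int kx = 0; kx < kCols; kx++) {
                    double pixel = src.get(y + ky - anchorY, x + kx - anchorX)[0];
                    sum += pixel * kernel[ky][kx];   // weight each neighbor by the kernel coefficient
                }
            }
            dst.put(y, x, sum);   // store the weighted sum at the anchor pixel
        }
    }
    return dst;
}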
One of the most common uses of linear filtering is to remove the noise. Noise is the random variation in brightness or color information in images. We use blurring operations to reduce the noise in images.
The mean blur method
A mean filter is the simplest form of blurring. It calculates the mean of all the pixels that the given kernel superimposes. The kernel used for this kind of operation is a simple Mat with all its values set to 1 (scaled by the number of elements so that the weights sum to 1), that is, each neighboring pixel is given the same weightage.
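As a quick sketch of what this means in code (illustrative, not the book's code), the following builds a normalized 3 x 3 mean kernel and applies it with filter2D(), which is introduced later in this chapter; the result is equivalent, up to border handling, to calling blur() with a 3 x 3 kernel size:

// A 3 x 3 mean kernel: every element is 1/9, so the weights sum to 1.
Mat meanKernel = new Mat(3, 3, CvType.CV_32F, new Scalar(1.0 / 9.0));
Mat dst = new Mat();
// Equivalent (up to border handling) to Imgproc.blur(src, dst, new Size(3, 3))
Imgproc.filter2D(src, dst, src.depth(), meanKernel);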
For this chapter, we will pick an image from the gallery and apply the respective image transformations. For this, we will add basic code. We are assuming that OpenCV4Android SDK has been set up and is running.
We can use the first OpenCV app that we created at the start of this chapter. At the time of creating the project, the default names will be as shown in the following screenshot:

Add a new activity by right-clicking on the Java folder and navigating to New | Activity, then select Blank Activity. Name the activity MainActivity.java and the XML file activity_main.xml. Go to res/menu/menu_main.xml and add an item as follows:
<item android:id="@+id/action_load_image" android:title="@string/action_load_image" android:orderInCategory="1" android:showAsAction="ifRoom" />
Since MainActivity is the activity that we will be using to perform our OpenCV-specific tasks, we need to initialize OpenCV. Add this as a global member of MainActivity.java:
private BaseLoaderCallback mOpenCVCallBack = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        switch (status) {
            case LoaderCallbackInterface.SUCCESS:
                //DO YOUR WORK/STUFF HERE
                break;
            default:
                super.onManagerConnected(status);
                break;
        }
    }
};

@Override
protected void onResume() {
    super.onResume();
    OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_10, this, mOpenCVCallBack);
}
This is a callback that checks whether the OpenCV Manager is installed. We need the OpenCV Manager app to be installed on the device because it contains the OpenCV function implementations. If we do not wish to use the OpenCV Manager, we can ship the native libraries inside the app instead, but the APK size then increases significantly. If the OpenCV Manager is not present, the app redirects the user to the Play Store to download it. The initAsync() call in onResume loads OpenCV for use.
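For reference, the snippets in this chapter rely on the following OpenCV imports (a minimal set; your IDE can add them automatically):

// OpenCV classes used throughout this chapter
import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.android.Utils;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;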
Next, we will add a button to activity_home.xml:
<Button
    android:id="@+id/bMean"
    android:layout_height="wrap_content"
    android:layout_width="wrap_content"
    android:text="Mean Blur" />
Then, in HomeActivity.java, we will instantiate this button and set an onClickListener on it:
Button bMean = (Button) findViewById(R.id.bMean);
bMean.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Intent i = new Intent(getApplicationContext(), MainActivity.class);
        i.putExtra("ACTION_MODE", MEAN_BLUR);
        startActivity(i);
    }
});
Tip
Downloading the example code
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
In the preceding code, MEAN_BLUR is a constant with the value 1 that specifies the type of operation we want to perform. We attach it as an extra to the intent that starts the activity; this is how we tell the receiving activity which operation to perform.
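For reference, here is a minimal sketch of how these operation codes could be declared in HomeActivity.java. MEAN_BLUR = 1 and GAUSSIAN_BLUR = 2 match the values given in the text; the remaining values are placeholders chosen only for illustration:

// Operation codes passed to MainActivity via the "ACTION_MODE" extra.
// MEAN_BLUR and GAUSSIAN_BLUR come from the text; the rest are placeholder values.
public static final int MEAN_BLUR = 1;
public static final int GAUSSIAN_BLUR = 2;
public static final int MEDIAN_BLUR = 3;
public static final int SHARPEN = 4;
public static final int DILATE = 5;
public static final int ERODE = 6;
public static final int THRESHOLD = 7;
public static final int ADAPTIVE_THRESHOLD = 8;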
Open activity_main.xml and replace everything with the following code snippet. This snippet adds two ImageView items: one for the original image and one for the processed image:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/ivImage"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_weight="0.5" />

    <ImageView
        android:id="@+id/ivImageProcessed"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_weight="0.5" />
</LinearLayout>
We need to programmatically link these ImageView items to ImageView objects in our MainActivity.java:
private final int SELECT_PHOTO = 1;
private ImageView ivImage, ivImageProcessed;
Mat src;
static int ACTION_MODE = 0;

@Override
protected void onCreate(Bundle savedInstanceState) {
    // Android specific code
    ivImage = (ImageView) findViewById(R.id.ivImage);
    ivImageProcessed = (ImageView) findViewById(R.id.ivImageProcessed);
    Intent intent = getIntent();
    if (intent.hasExtra("ACTION_MODE")) {
        ACTION_MODE = intent.getIntExtra("ACTION_MODE", 0);
    }
}
Here, the Mat and the ImageView objects have been made members of the class so that we can use them in other functions without passing them as parameters. We will use the ACTION_MODE variable to identify which operation needs to be performed.
Now we will add the code to load an image from the gallery. For this, we will use the menu button we created earlier; the menu defined in menu_main.xml is inflated when the menu button is clicked:
@Override
public boolean onCreateOptionsMenu(Menu menu) {
    getMenuInflater().inflate(R.menu.menu_main, menu);
    return true;
}
Then we will add the listener that will perform the desired action when an action item is selected. We will use Intent.ACTION_PICK to get an image from the gallery:
@Override
public boolean onOptionsItemSelected(MenuItem item) {
    int id = item.getItemId();
    if (id == R.id.action_load_image) {
        Intent photoPickerIntent = new Intent(Intent.ACTION_PICK);
        photoPickerIntent.setType("image/*");
        startActivityForResult(photoPickerIntent, SELECT_PHOTO);
        return true;
    }
    return super.onOptionsItemSelected(item);
}
As you can see, we have used startActivityForResult(). The selected image is delivered back to our activity in onActivityResult(), where we will get the Bitmap and convert it to an OpenCV Mat. onActivityResult() is called once the picker activity has completed its work and control returns to the calling activity, so we override it to receive the image. Add the following code to onActivityResult():
switch (requestCode) {
    case SELECT_PHOTO:
        if (resultCode == RESULT_OK) {
            try {
                // Code to load the image into a Bitmap and convert it to a Mat for processing
                final Uri imageUri = imageReturnedIntent.getData();
                final InputStream imageStream = getContentResolver().openInputStream(imageUri);
                final Bitmap selectedImage = BitmapFactory.decodeStream(imageStream);
                src = new Mat(selectedImage.getHeight(), selectedImage.getWidth(), CvType.CV_8UC4);
                Utils.bitmapToMat(selectedImage, src);
                switch (ACTION_MODE) {
                    // Add different cases here depending on the required operation
                }
                // Code to convert the Mat to a Bitmap and load it in an ImageView.
                // Also load the original image in an ImageView
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }
        }
        break;
}
To apply a mean blur to an image, we use the OpenCV-provided function blur(). We have used a 3 x 3 kernel for this purpose:
case HomeActivity.MEAN_BLUR:
    Imgproc.blur(src, src, new Size(3, 3));
    break;
Now we will set this image in an ImageView to see the results of the operation:
Bitmap processedImage = Bitmap.createBitmap(src.cols(), src.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(src, processedImage);
ivImage.setImageBitmap(selectedImage);
ivImageProcessed.setImageBitmap(processedImage);

Original Image (Left) and Image after applying Mean Blur (Right)
The Gaussian blur method
The Gaussian blur is the most commonly used method of blurring. The Gaussian kernel is obtained using the Gaussian function given as follows:


The Gaussian Function in one and two dimensions
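For reference, the standard forms of the Gaussian function in one and two dimensions, with standard deviation sigma and the anchor at the origin, are:

G(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^{2}}{2\sigma^{2}}}, \qquad
G(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}}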
The anchor pixel is considered to be at (0, 0). As we can see, the pixels closer to the anchor pixel are given a higher weightage than those further away from it. This is generally the ideal scenario, as the nearby pixels should influence the result of a particular pixel more than those further away. The Gaussian kernels of size 3, 5, and 7 are shown in the following figure (image of 'Gaussian kernels' taken from http://www1.adept.com/main/KE/DATA/ACE/AdeptSight_User/ImageProcessing_Operations.html):

These are the Gaussian kernels of size 3 x 3, 5 x 5 and 7 x 7.
To apply a Gaussian blur in your application, OpenCV provides a built-in function called GaussianBlur(). We will use it in a new case added to the same switch block we used earlier; the resulting image is shown after the code. For this code, declare a constant GAUSSIAN_BLUR with the value 2. The last argument to GaussianBlur() is the standard deviation in the X direction; passing 0 tells OpenCV to compute it from the kernel size:
case HomeActivity.GAUSSIAN_BLUR:
    Imgproc.GaussianBlur(src, src, new Size(3, 3), 0);
    break;

Image after applying Gaussian blur on the original image
The median blur method
One of the common types of noise present in images is called salt-and-pepper noise. In this kind of noise, sparsely occurring black and white pixels are distributed over the image. To remove this type of noise, we use median blur. In this kind of blur, we arrange the pixels covered by our kernel in ascending/descending order and set the value of the middle element (the median) as the final value of the anchor pixel. The advantage of this filter is that salt-and-pepper noise occurs sparsely, so within any kernel window the noise pixels are outnumbered by useful pixels; they end up at the extremes of the sorted values and are never picked as the median. For example, for the window values {12, 12, 13, 13, 13, 14, 14, 15, 255}, the median is 13 and the outlier 255 is simply discarded. An example of salt-and-pepper noise is shown in the following image:

Example of salt-and-pepper noise
To apply a median blur in OpenCV, we use the built-in function medianBlur(). As in the previous cases, we have to add a button and its OnClickListener. We will add another case condition for this operation:
case HomeActivity.MEDIAN_BLUR:
    Imgproc.medianBlur(src, src, 3);
    break;

Resulting image after applying median blur
Note
Median blur does not use convolution.
Creating custom kernels
We have seen how different types of kernels affect the image. What if we want to create our own kernels for different applications that aren't natively offered by OpenCV? In this section, we will see how we can achieve just that. We will try to form a sharper image from a given input.
Sharpening can be thought of as a linear filtering operation where the anchor pixel has a high weightage and the surrounding pixels have a low weightage. A kernel satisfying this constraint has 5 at the anchor, -1 at the four directly adjacent positions, and 0 at the corners, so that the weights sum to 1. We will use this kernel to perform the convolution on our image:
case HomeActivity.SHARPEN:
    Mat kernel = new Mat(3, 3, CvType.CV_16SC1);
    kernel.put(0, 0, 0, -1, 0, -1, 5, -1, 0, -1, 0);
Here we have given the kernel a depth of 16SC1. This means that each element of the kernel is a 16-bit signed integer (16S) and the kernel has one channel (C1).
Now we will use the filter2D() function, which performs the actual convolution when given the input image and a kernel. We will show the image in an ImageView. We will add another case to the switch block created earlier:
Imgproc.filter2D(src, src, src.depth(), kernel);
    break;

Original image (left) and sharpened image (right)
Morphological operations
Morphological operations are a set of operations that process an image based on its features and a structuring element. They generally work on binary or grayscale images. We will take a look at some basic morphological operations before moving on to more advanced ones.
Dilation
Dilation is a method by which the bright regions of an image are expanded. To achieve this, we take a kernel of the desired size and replace the anchor pixel with the maximum value overlapped by the kernel. Dilation can be used to merge objects that might have been broken off.

A binary image (left) and the result after applying dilation (right)
To apply this operation, we use the dilate() function. We need a kernel to perform dilation, and we use the OpenCV function getStructuringElement() to obtain it. OpenCV provides MORPH_RECT, MORPH_CROSS, and MORPH_ELLIPSE as options for creating the required kernels:
case HomeActivity.DILATE:
    Mat kernelDilate = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
    Imgproc.dilate(src, src, kernelDilate);
    break;

Original image (left) and dilated image (right)
If we use a rectangular structuring element, the bright regions of the image grow in a rectangular fashion; similarly, with an elliptical structuring element, they grow in an elliptical fashion.
Erosion
Similarly, erosion is a method by which the dark regions of an image are expanded. To achieve this, we take a kernel of the desired size and replace the anchor pixel by the minimum value overlapped by the kernel. Erosion can be used to remove the noise from images.

A binary image (left) and the result after applying erosion (right)
To apply this operation, we use the erode() function:
case HomeActivity.ERODE:
    Mat kernelErode = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
    Imgproc.erode(src, src, kernelErode);
    break;

Original image (left) and eroded image (right)
Note
Erosion and dilation are not inverse operations.
Thresholding
Thresholding is the method of segmenting out sections of an image that we would like to analyze. The value of each pixel is compared to a predefined threshold value and based on this result, we modify the value of the pixel. OpenCV provides five types of thresholding operations.
To perform thresholding, we will use the following code as a template and change the parameters as per the kind of thresholding required. We need to replace THRESH_CONSTANT with the constant for the required method of thresholding:
case HomeActivity.THRESHOLD:
    Imgproc.threshold(src, src, 100, 255, Imgproc.THRESH_CONSTANT);
    break;
Here, 100 is the threshold value and 255 is the maximum value (the value of pure white).
The constants are THRESH_BINARY, THRESH_BINARY_INV, THRESH_TRUNC, THRESH_TOZERO, and THRESH_TOZERO_INV.
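For instance, a minimal concrete instantiation of the template using THRESH_BINARY (every pixel above the threshold of 100 becomes 255, everything else becomes 0) would look as follows; the chosen constant is just one of the five options:

case HomeActivity.THRESHOLD:
    // Binary thresholding: dst = 255 where src > 100, otherwise 0
    Imgproc.threshold(src, src, 100, 255, Imgproc.THRESH_BINARY);
    break;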
The following image for thresholding results is taken from http://docs.opencv.org/trunk/d7/d4d/tutorial_py_thresholding.html:

Adaptive thresholding
Setting a global threshold value may not be the best option when performing segmentation. Lighting conditions affect the intensity of pixels. So, to overcome this limitation, we will try to calculate the threshold value for any pixel based on its neighboring pixels.
We will use three parameters to calculate the adaptive threshold of an image:
- Adaptive method: The following are the two methods we will use:
  - ADAPTIVE_THRESH_MEAN_C: The threshold value is the mean of the neighboring pixels
  - ADAPTIVE_THRESH_GAUSSIAN_C: The threshold value is the weighted sum of the neighboring pixel values, where the weights come from a Gaussian kernel
- Block Size: This is the size of the neighborhood
- C: This is the constant that has to be subtracted from the mean/weighted mean calculated for each pixel:
case HomeActivity.ADAPTIVE_THRESHOLD:
    Imgproc.cvtColor(src, src, Imgproc.COLOR_BGR2GRAY);
    Imgproc.adaptiveThreshold(src, src, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 3, 0);
    break;
Original image (left) and image after applying Adaptive thresholding (right)
Here, the resulting image has a lot of noise present. This can be avoided by applying a blurring operation before applying adaptive thresholding, so as to smooth the image.
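As an illustrative sketch of that suggestion (the kernel and block sizes here are only examples), we can smooth the image with a median blur before thresholding it adaptively:

// Smooth first, then threshold: the blur suppresses isolated noisy pixels
// so they influence the local threshold computation less.
Imgproc.cvtColor(src, src, Imgproc.COLOR_BGR2GRAY);
Imgproc.medianBlur(src, src, 3);   // 3 x 3 median blur; size chosen for illustration
Imgproc.adaptiveThreshold(src, src, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 3, 0);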