- OpenCV 4 with Python Blueprints
- Dr. Menua Gevorgyan Arsen Mamikonyan Michael Beyeler
Understanding approaches for using dodging and burning techniques
Dodging decreases the exposure of the areas of an image, A, that we wish to make lighter than before. In image processing, we usually select or specify the areas of the image that need to be altered using masks. A mask, B, is an array of the same dimensions as the image it is applied to (think of it as a sheet of paper with holes in it that you use to cover the image). "Holes" in the sheet of paper are represented with 255 (or ones, if we are working in the 0-1 range), while the opaque regions are represented with zeros.
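For instance, a minimal sketch of how such a mask could be built with NumPy might look like this (the 512 x 512 image size and the region coordinates are purely illustrative):

import numpy as np

# assume a hypothetical 512 x 512 grayscale image; the mask has the same shape
mask = np.zeros((512, 512), np.uint8)

# the "holes" (the region we want to lighten) are set to 255,
# while the rest of the mask stays opaque at 0
mask[100:300, 100:300] = 255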
In modern image editing tools, such as Photoshop, the color dodging of image A with mask B is implemented using the following ternary statement, which acts on every pixel with index i:
((B[i] == 255) ? B[i] : min(255, ((A[i] << 8) / (255 - B[i]))))
The previous code essentially divides the value of the image pixel A[i] by the inverse of the mask pixel value, 255 - B[i] (both values are in the range of 0-255), while making sure that the resulting pixel value stays in the range of (0, 255) and that we do not divide by zero. For example, a pixel with A[i] = 100 and a mask value of B[i] = 128 becomes min(255, 100 * 256 / 127) ≈ 201, which is noticeably lighter than before.
We could translate the previous complex-looking expression or code into the following naive Python function, which accepts two OpenCV matrices (image and mask) and returns the blended image:
import numpy as np

def dodge_naive(image, mask):
    # determine the shape of the input image
    rows, cols = image.shape[:2]

    # prepare output argument with same size as image
    blend = np.zeros((rows, cols), np.uint8)

    for r in range(rows):
        for c in range(cols):
            # where the mask is fully white, the ternary above returns 255
            if mask[r, c] == 255:
                blend[r, c] = 255
                continue
            # shift the image pixel value by 8 bits (multiply by 256)
            # and divide by the inverse of the mask
            result = (int(image[r, c]) << 8) / (255 - mask[r, c])

            # make sure the resulting value stays within bounds
            blend[r, c] = min(255, result)

    return blend
As you might have guessed, although the previous code might be functionally correct, it will undoubtedly be horrendously slow. Firstly, the function uses for loops, which are almost always a bad idea in Python. Secondly, NumPy arrays (the underlying format of OpenCV images in Python) are optimized for array calculations, so accessing and modifying each image[r, c] pixel separately will be really slow.
Instead, we should realize that the << 8 operation is the same as multiplying the pixel value by 2⁸ (= 256), and that pixel-wise division can be achieved with the cv2.divide function. Thus, an improved version of our dodge function that takes advantage of vectorized matrix operations (which are much faster) looks like this:
import cv2

def dodge(image, mask):
    return cv2.divide(image, 255 - mask, scale=256)
Here, we have reduced the dodge function to a single line! The new dodge function produces the same result as dodge_naive, but it is orders of magnitude faster than the naive version. In addition to this, cv2.divide automatically takes care of the division by zero, making the result zero where 255 - mask is zero.
Here is a dodged version of Lena.png, where we applied dodging to the square region of pixels (100:300, 100:300):
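A rough sketch of how such a result might be produced, assuming the dodge function defined above (the file name, the region coordinates, and the mask strength of 128 are purely illustrative; a mask value below 255 keeps the divisor 255 - mask non-zero, so the region is lightened rather than zeroed out):

import cv2
import numpy as np

# load the image as a single-channel grayscale array
image = cv2.imread('Lena.png', cv2.IMREAD_GRAYSCALE)

# mask with a partially "open" square region; values below 255
# keep the divisor (255 - mask) non-zero, so the region is lightened
mask = np.zeros(image.shape[:2], np.uint8)
mask[100:300, 100:300] = 128

# apply the one-line dodge function defined earlier and compare side by side
dodged = dodge(image, mask)
cv2.imshow('original | dodged', np.hstack([image, dodged]))
cv2.waitKey(0)
cv2.destroyAllWindows()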
As you can see, the lightened region is very obvious in the right photograph because the transition is very sharp. There are ways to correct this, one of which we will take a look at in the next section.
Let's learn how to obtain a Gaussian blur by using two-dimensional convolution next.