Haze Removal Of Underwater Images Using Fusion Technique

Water absorption and scattering, color loss, poor lighting, and limited visibility are some of the main obstacles in underwater photography. A fusion-based method is proposed to enhance the quality of such images; specifically, it employs the Gray World algorithm and CLAHE (Contrast Limited Adaptive Histogram Equalization) to boost color and contrast. The contrast-enhanced image, the white-balanced image, and the associated weight maps are then fused using a multi-scale fusion approach to produce a superior picture. This technique greatly improves underwater image clarity by eliminating haze and increasing visibility.


I. INTRODUCTION
Underwater life has long piqued the interest of researchers and scientists. Underwater photography has many practical applications in fields as diverse as ocean engineering, monitoring of marine animals, and estimation of marine populations. Although water covers about 70% of Earth's surface, very little is understood about life in the ocean. Suspended particles such as minerals, sand, and plankton create a significant turbidity problem in underwater surveys. These particles make underwater images less clear and comprehensible, rendering them unusable for further processing [1]. The main causes of this degradation are scattering and absorption in the water medium. Because of the density of water, very little light can pass through it without being reflected at its surface [2], so the amount of light that penetrates the water is limited. In addition, the quantity of light that reaches the target object is reduced by scattering when it interacts with sand particles and dissolved minerals, which is why captured underwater photographs end up dark [3]. The absorption of light by water molecules is another factor that contributes to the blurriness of the images [4]. Each of the seven hues of the visible spectrum (VIBGYOR) has a specific wavelength, and longer wavelengths are absorbed first: red light is absorbed within the first 5 to 10 meters of depth, orange by about 20 meters, and yellow by about 30 meters. By about 40 meters the greenish tint has also faded. Blue, which has the shortest wavelength, penetrates deepest toward the ocean bottom. Images taken at great depths therefore display a noticeable greenish-blue tint [2,5,6].
To enhance visibility in underwater images, the proposed method makes use of the Gray World white-balancing algorithm. After splitting the underwater hazy video into frames, the approach takes one frame as input, performs grayscale preprocessing, and uses the Gray World algorithm and CLAHE (Contrast Limited Adaptive Histogram Equalization) to improve the image's color and contrast. Applying weight maps to the input images highlights key information and reduces noise by giving particular regions more priority. By combining the contrast-enhanced image, the white-balanced image, and the weight maps, the fusion method produces an improved image. The paper is organized as follows. Section II discusses related work. Section III explains the proposed method in detail, Section IV presents the experiments and their results, and Section V concludes the paper and outlines directions for future work.

II. RELATED WORKS
The quantity of light that reaches an observer may be greatly altered by the atmosphere. As a result, traditional vision systems are designed for clear weather and become unreliable when the weather turns bad. To keep vision systems functional in inclement weather, the weather's impact must be anticipated using concepts from atmospheric scattering. Using a simple dichromatic model, the authors established useful constraints on scene color variation under different atmospheric conditions, and from these constraints developed simple algorithms to restore a scene's three-dimensional structure and true colors from bad-weather photographs [7]. In the field of defogging, the Dark Channel Prior (DCP) is among the best-known strategies. This approach successfully eliminates fog from photos, but it is time-consuming, loses edges, and produces halo artifacts. A trio of methods has therefore been proposed to circumvent these DCP drawbacks: forecasting atmospheric transmission, predicting lighting conditions, and restoring the image. These approaches address the halo effect, edge preservation, and time complexity [8]. Dark channels are also used to prevent blurring and to produce simple, effective visual results [9]. By recasting the matting problem as an optimization problem, a closed-form solution that estimates the alpha-matte values is obtained; even in challenging situations, the approach produces correct results while remaining computationally efficient [10]. Using just one input image, optical transmission in hazy scenes can be estimated; based on this estimate, haze is reduced to restore image contrast without blurring and to increase scene visibility. The results show that this approach can remove the haze layer and provide a reliable transmission estimate, with potential applications in areas such as image refocusing and view synthesis [11]. Using the dark channel prior, another work proposes a straightforward method for removing haze from single images. Capitalizing on the statistical observation that haze-free outdoor images have low intensity in the dark channel, the technique estimates the haze transmission and recovers the haze-free scene radiance; it effectively eliminates haze and enhances sharpness in otherwise fuzzy images [12]. The bilateral filter takes into account not only the spatial proximity but also the intensity similarity of pixels; by adjusting the filter settings, a balance can be struck between noise reduction and edge preservation [13]. To improve the quality of dehazed photographs, several authors address specific shortcomings of the original approach and suggest enhancements; for better dehazing results, they refine the transmission estimate using a soft-matting approach [14]. Dehazing a single image is an essential pre-processing step for computer vision tasks. Given that the dark channel method produces inaccurate parameter estimates in sky regions and that many methods fail for a wide haze band, an efficient yet simple way to remove haze from a single image was proposed, using improved pre-bright and pre-dark channels [15]. Fog forms as a result of atmospheric attenuation and airlight: airlight makes everything look whiter, while attenuation reduces contrast. The scene's contrast is restored via anisotropic diffusion, and simulation results show that this method outperforms previous state-of-the-art methods in computation time, percentage of saturated pixels, and contrast enhancement [16].

III. PROPOSED METHODOLOGY
The suggested technique for improving underwater images consists of three steps: (1) preparing the initial hazy image by deriving a white-balanced version and a contrast-enhanced version; (2) applying Laplacian contrast, saliency, and saturation weight maps to the two input images (color-balanced and contrast-enhanced) obtained in the previous step; and (3) fusing all of the input images and weight maps from steps (1) and (2) to obtain the enhanced underwater image as output. The final output image, as seen in Figure 1, therefore incorporates key elements of each fused image. Proper selection of the inputs and weight maps is the method's strong suit.

A. Selection of Underwater Hazy Video
Choose an underwater hazy video to process. Scenes in this video should take place underwater, in environments with varying degrees of haze.

B. Video to Frames Conversion
Extract still frames from the chosen underwater hazy footage. This can be done in Python with video-processing libraries such as OpenCV, making it possible to handle each video frame independently.

C. Frame Selection and Preprocessing
From the converted frames, choose one to use as the input. Prepare the selected frame with grayscale processing: important structural information is retained while processing is simplified. Normalize the pixel values to a common range, usually [0, 1]. Apply wavelet transformations to sharpen the grayscale image.

D. Contrast Enhancement and White Balancing
Process the preprocessed grayscale frame with white balancing and contrast enhancement. To increase the image's dynamic range, use a contrast-enhancement technique such as adaptive histogram equalization (CLAHE). To achieve white balance, use the Gray World method: compute the average pixel value of each color channel, then obtain correction factors for the red and blue channels by dividing the average green value by the average of the corresponding channel. With G_avg, R_avg, and B_avg denoting the average green, red, and blue values respectively, the adjustment factors are

R_gain = G_avg / R_avg, B_gain = G_avg / B_avg

and the corrected channels are R' = R × R_gain and B' = B × B_gain.

E. Imposing Laplacian Contrast Weight
After white balancing and contrast enhancement, compute the Laplacian of the image by convolving it with a Laplacian kernel. Take the absolute value to make the Laplacian responses positive, and assign each pixel of the contrast-enhanced image a weight according to its Laplacian value.
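A minimal NumPy implementation of this weight, using the standard 4-neighbour Laplacian kernel applied via edge-padded differences, could look like:

```python
import numpy as np

def laplacian_contrast_weight(gray):
    """Absolute Laplacian response of a grayscale image.

    Equivalent to convolving with the kernel [[0,1,0],[1,-4,1],[0,1,0]]
    (edge-replicated borders) and taking the absolute value.
    """
    p = np.pad(gray, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * gray)
    return np.abs(lap)
```

Flat regions receive weight near zero, while edges and texture receive large weights, which is what lets the fusion stage preserve detail.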

F. Saliency Weighting
Create a saliency map for the contrast-enhanced, Laplacian-weighted image. The saliency map highlights regions of visual attention. Normalize the saliency map so its values match the range of the pixel values.

G. Saturation Weighting
Transform the input frame to HSV color space, extract the saturation channel, and use it to compute the saturation map, scaled to the same range as the pixel values. To compute the weight map for each input image, take the standard deviation between the kth input's luminance Lk and its color channels Rk, Gk, and Bk.
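The deviation-based weight just described can be written directly, assuming channels in [0, 1] and luminance taken as the channel mean (an assumption, since the paper does not give its luminance formula):

```python
import numpy as np

def saturation_weight(img_bgr_float):
    """Per-pixel standard deviation of R, G, B around the luminance:

        W_sat = sqrt(((R-L)^2 + (G-L)^2 + (B-L)^2) / 3),  L = (R+G+B)/3
    """
    b, g, r = (img_bgr_float[..., c] for c in range(3))
    lum = (r + g + b) / 3.0
    return np.sqrt(((r - lum) ** 2 + (g - lum) ** 2 + (b - lum) ** 2) / 3.0)
```

Achromatic (gray) pixels receive zero weight, while vividly colored pixels receive high weight, so the fusion favors inputs that retain saturation.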

H. Multi-scale Fusion
Multi-scale fusion is a powerful method for improving an image by integrating information captured at several scales, which is fundamental for many applications. Combine the contrast-enhanced, Laplacian-weighted images with the saliency- and saturation-weighted images using multi-scale fusion. Blending at several scales captures detail at each level and improves overall quality.

IV. RESULTS AND DISCUSSION
As seen in Figure 2, the proposed method is evaluated on actual underwater video footage. The technique takes underwater hazy footage, splits it into frames, reads one frame as input, performs grayscale preprocessing, and then uses CLAHE and the Gray World white-balance algorithm to improve the image's color and contrast. By combining the contrast-enhanced image, the white-balanced image, and the weight maps, the fusion method produces an improved image. The results demonstrate that our method is effective at reducing haze in underwater photos. Even for photographs taken at great depths below the ocean's surface, the method, implemented in Python 3.7.4 on a Core i5 processor, generates visually interpretable results free of unpleasant greenish or blue artifacts.

V. CONCLUSION
Color casts and reduced visibility are common in underwater images because water molecules absorb and scatter light. This degradation makes such photographs much less useful for correctly interpreting underwater scenes. By combining white balancing and fusion, haze can be effectively removed from underwater images, improving visibility and image quality in a variety of underwater settings. Image preprocessing with a grayscale method and contrast enhancement with CLAHE are performed before the fusion step. By merging the input and weight-map images after applying the Gray World algorithm for white balancing, we improved the visibility and clarity of the input photographs and obtained a more reliable haze-removal method. More sophisticated fusion algorithms and other image-enhancement techniques tailored to specific underwater scenarios could further improve the proposed system.

Figures 2(a) and 2(b) indicate that, depending on the extent of haze in the input image, the proposed approach can produce output that is visually superior to the original hazy image. One measure used in image processing to estimate picture quality is the UCIQE (Underwater Color Image Quality Evaluation) score. The UCIQE value obtained for Figure 2(b) is 0.569580078125. Higher UCIQE values indicate better image quality; the score typically ranges from 0 to 1.

Fig. 2 Comparison of hazy images (a) and (c) with the results of the proposed approach (b) and (d).