In our Pi Mu Epsilon talk last week, Dr. Weisbart discussed the noise process and how noise manifests itself in images. Here I say a few words about the image denoising process, i.e., noise removal. A grayscale image is a function $f:[0,1]\times[0,1]\rightarrow [0,255]$, where $f(x,y)=0$ denotes black, $f(x,y)=255$ denotes white, and any value in between is a shade of gray. Color images are the extension where a pixel location $(x,y)$ maps to a vector $[R,G,B]$; each component $R$, $G$, and $B$ is itself a grayscale image representing the red, green, and blue channel, respectively. In the talk last week, the speaker discussed Poisson noise processes. Since such noise processes can be approximated by Gaussian noise via the Central Limit Theorem, from a modeling perspective we assume additive Gaussian noise: if $u_c$ denotes the clean image, then the image $f$ degraded by noise is modeled as
$$f(x,y) = u_c(x,y) + \eta(x,y)$$
where $\eta(x,y)$ is a random variable drawn from a Gaussian distribution with mean zero and standard deviation $\sigma$. Thus, image denoising is the inverse problem of recovering $u_c$ given the noisy image $f$ and some statistics on the noise.
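The degradation model above is easy to simulate. Here is a minimal NumPy sketch: the "clean" image is a hypothetical toy example (a bright square on a black background), and the noise is drawn from a zero-mean Gaussian with standard deviation $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean image u_c: a bright square on a black background.
u_c = np.zeros((64, 64))
u_c[16:48, 16:48] = 200.0

# Additive Gaussian noise eta with mean zero and standard deviation sigma.
sigma = 15.0
eta = rng.normal(loc=0.0, scale=sigma, size=u_c.shape)

# Noisy observation f = u_c + eta, clipped to the valid grayscale range.
f = np.clip(u_c + eta, 0.0, 255.0)
```

Clipping to $[0,255]$ is a practical detail: the continuous model allows $f$ to leave the grayscale range, but a stored image cannot.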
One of the most celebrated and widely used image denoising models is the TV denoising model of Rudin, Osher, and Fatemi. The model takes the form of a functional minimization:
$$\min_{u} \left\{ J[u] = \frac{1}{2}\int_{\Omega} (f-u)^2\,dx + \lambda \int_{\Omega} |\nabla u| \right\}$$
where $\Omega$ is the image domain (a rectangle) and the total variation semi-norm $\int |\nabla u|$ is defined in the distributional sense:
$$TV(u) = \int_{\Omega}|\nabla u| = \sup \left\{\int_{\Omega} u(x)\, \nabla \cdot \xi(x) \, dx \ \middle| \ \xi \in C_c^1(\Omega, \mathbf{R}^n), \ \|\xi\|_{\infty} \leq 1\right\}.$$
The model balances data fitting against regularity, with the parameter $\lambda$ controlling the tradeoff.
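On a discrete image, both terms of the functional can be computed directly. The following sketch approximates the gradient with forward differences; the small `eps` added under the square root is a numerical assumption to keep the norm differentiable, not part of the original model.

```python
import numpy as np

def tv_energy(u, f, lam, eps=1e-8):
    """Discrete ROF energy: 0.5 * sum((f - u)^2) + lam * sum(|grad u|).

    Forward differences approximate the gradient; appending the last
    row/column replicates the boundary so shapes match. The eps term
    smooths the non-differentiable norm at zero gradient.
    """
    ux = np.diff(u, axis=1, append=u[:, -1:])  # horizontal forward difference
    uy = np.diff(u, axis=0, append=u[-1:, :])  # vertical forward difference
    fidelity = 0.5 * np.sum((f - u) ** 2)
    tv = np.sum(np.sqrt(ux**2 + uy**2 + eps))
    return fidelity + lam * tv
```

For a constant image both terms vanish, which matches the intuition that TV penalizes oscillation, not brightness.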
There are numerous ways to minimize the TV model above, but one of the simplest is gradient descent:
$$u_t = -\nabla J[u] = (f-u) + \lambda \, \nabla \cdot \frac{\nabla u}{|\nabla u|}$$
where $\nabla J[u]$ denotes the functional gradient (more on this in later blog posts!). Intermediate results of the gradient descent for increasing values of $t$ are shown below. Note how the noise is removed as the iterates approach a minimum of the functional $J[u]$.
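The descent equation can be discretized with an explicit time step. This is only a sketch: the forward-difference gradient, the backward-difference divergence with periodic wrap-around (via `np.roll`), and the relatively large smoothing constant `eps` (which regularizes $|\nabla u|$ and keeps the explicit scheme stable) are all numerical choices of this illustration, not part of the model itself.

```python
import numpy as np

def tv_gradient_descent(f, lam=2.0, dt=0.1, n_iters=200, eps=1.0):
    """Explicit gradient descent on the ROF energy:

        u_t = (f - u) + lam * div(grad u / |grad u|)

    Forward differences approximate the gradient; a backward difference
    (the adjoint of the forward gradient, here with periodic boundary
    for simplicity) approximates the divergence.
    """
    u = f.astype(float).copy()
    for _ in range(n_iters):
        # Forward differences with replicated boundary.
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        # Smoothed gradient magnitude; eps avoids division by zero.
        norm = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / norm, uy / norm
        # Backward-difference divergence of the normalized gradient field.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # One explicit descent step: fidelity pull plus curvature smoothing.
        u = u + dt * ((f - u) + lam * div)
    return u
```

Running this on a noisy, nearly constant image reduces its variance while leaving the mean essentially unchanged, which is exactly the qualitative behavior described above: oscillation is penalized, brightness is preserved.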