$$f(x,y) = u_c(x,y) + \eta(x,y)$$
where $\eta(x,y)$ is a random variable from a Gaussian distribution of mean zero and standard deviation $\sigma$. Thus, the image denoising problem is the inverse problem of finding $u_c$ given the noisy image $f$ and some statistics on the noise.
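This noise model is easy to simulate. A minimal sketch with numpy, using a hypothetical piecewise-constant image as the clean signal $u_c$ (the image, $\sigma$, and random seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical clean image u_c: a piecewise-constant square.
u_c = np.zeros((64, 64))
u_c[16:48, 16:48] = 1.0

# Additive Gaussian noise eta with mean zero and standard deviation sigma.
sigma = 0.1
eta = rng.normal(loc=0.0, scale=sigma, size=u_c.shape)

# The observed noisy image f = u_c + eta.
f = u_c + eta
```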
One of the most celebrated and widely used image denoising models is the total variation (TV) denoising model of Rudin, Osher, and Fatemi. The model is posed as a functional minimization:
$$\min_{u} \left\{ J[u] = \frac{1}{2}\int_{\Omega} (f-u)^2\,dx + \lambda \int_{\Omega} |\nabla u| \right\}$$
where $\Omega$ is the image domain (a rectangle) and the total variation semi-norm $\int |\nabla u|$ is defined in the distributional sense:
$$TV(u) = \int_{\Omega}|\nabla u| = \sup \left\{\int_{\Omega} u(x) \, \nabla \cdot \xi(x) \, dx \, \Big| \, \xi \in C_c^1(\Omega, \mathbf{R}^n), \ \|\xi\|_{\infty} \leq 1\right\}.$$
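In the discrete setting, this semi-norm is commonly approximated with finite differences. A minimal sketch, assuming a forward-difference gradient with a replicated (Neumann-style) boundary; this particular scheme is a common numerical choice, not part of the model itself:

```python
import numpy as np

def total_variation(u):
    # Forward differences; appending the last row/column makes the
    # boundary difference zero (replicated boundary).
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    # Isotropic TV: sum of gradient magnitudes over all pixels.
    return np.sum(np.sqrt(ux**2 + uy**2))

# A constant image has zero total variation...
flat = np.ones((8, 8))
# ...while a unit step's TV equals jump height times edge length.
step = np.zeros((8, 8))
step[:, 4:] = 1.0
print(total_variation(flat))  # → 0.0
print(total_variation(step))  # → 8.0
```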
The model balances data fidelity against regularity: the parameter $\lambda$ controls the tradeoff, with larger $\lambda$ enforcing more smoothing.
There are numerous ways to minimize the above TV model but one of the simplest is via gradient descent:
$$u_t = -\nabla J[u] = f - u + \lambda \, \nabla \cdot \frac{\nabla u}{|\nabla u|}$$
where $\nabla J[u]$ denotes the functional gradient (more on this in later blog posts!). Intermediate results of the gradient descent for increasing values of $t$ are shown below. Note how the noise is removed as the iterates approach a minimum of the functional $J[u]$.
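The gradient descent can be sketched numerically as follows. The step size, number of iterations, $\lambda$, and the small `eps` that regularizes $|\nabla u|$ (to avoid division by zero where the image is flat) are all illustrative numerical choices, not prescribed by the model:

```python
import numpy as np

def tv_denoise(f, lam=0.1, dt=0.05, steps=200, eps=0.05):
    """Explicit gradient descent on the ROF functional:
    u_t = (f - u) + lam * div(grad u / |grad u|),
    with |grad u| replaced by sqrt(|grad u|^2 + eps^2)."""
    u = f.copy()
    for _ in range(steps):
        # Forward differences for the gradient (replicated boundary).
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # Backward differences for the divergence (negative adjoint
        # of the forward-difference gradient).
        div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
        u = u + dt * (f - u + lam * div)
    return u

# Demo on a hypothetical noisy piecewise-constant image.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
denoised = tv_denoise(noisy)
```

The explicit step size must be small enough for stability relative to `lam / eps`; more sophisticated schemes (e.g. Chambolle's projection algorithm or primal-dual methods) avoid this restriction.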