Category Archives: Applied Harmonic Analysis

Proof of Landau’s conditions for the Hankel transform

As promised in my last blog post, in this post I will briefly explain the main ideas behind the proof of the main theorem of my paper on Landau’s necessary density conditions for the Hankel transform.

Recall that, given a function {f} defined on {\mathbb{R}^+}, we define its Hankel transform as

\displaystyle \mathcal{H}_\alpha(f)(x) = \int_{\mathbb{R}^+}f(t)(xt)^{1/2}J_\alpha(xt)dt,

where {J_\alpha} is the Bessel function of order {\alpha}.
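(A quick numerical aside, not taken from the paper: the short Python sketch below approximates {\mathcal{H}_\alpha(f)(x)} by truncating the integral and using a simple quadrature. The function name, the truncation parameters and the test function are my own choices; the sanity check uses the classical identity that {t^{\alpha+1/2}e^{-t^2/2}} is self-reciprocal under {\mathcal{H}_\alpha}.)

import numpy as np
from scipy.integrate import trapezoid
from scipy.special import jv  # Bessel function of the first kind J_alpha

def hankel_transform(f, x, alpha=0.0, t_max=30.0, n=200001):
    # Approximate H_alpha(f)(x) = integral over (0, infinity) of
    # f(t) * (x t)^{1/2} * J_alpha(x t) dt, truncated to (0, t_max].
    t = np.linspace(1e-9, t_max, n)
    return trapezoid(f(t) * np.sqrt(x * t) * jv(alpha, x * t), t)

# t^{alpha + 1/2} e^{-t^2/2} is self-reciprocal under H_alpha, so for alpha = 0
# the output should be close to sqrt(x) * exp(-x^2/2) (about 0.6065 at x = 1).
f = lambda t: np.sqrt(t) * np.exp(-t**2 / 2)
print(hankel_transform(f, x=1.0, alpha=0.0))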

We want to understand how dense sampling and interpolation sequences have to be. To fix ideas, say our objective is to estimate how many points a sampling sequence must have in an interval {I}.
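For orientation, let me recall the classical bandlimited answer that our result generalizes: Landau’s theorem says that a sampling sequence {\Lambda} for the Paley-Wiener space of functions bandlimited to {[-\pi,\pi]} must have lower Beurling density at least one,

\displaystyle D^-(\Lambda)=\liminf_{r\to\infty}\inf_{x\in\mathbb{R}}\frac{\#\left(\Lambda\cap[x,x+r]\right)}{r}\geq 1,

so, roughly, a long interval {I} must eventually contain at least {|I|} points of {\Lambda}.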

Continue reading Proof of Landau’s conditions for the Hankel transform


Landau’s necessary density conditions for the Hankel transform

When I started this blog (quite some time ago…) I began blogging about a then-preprint (now a published paper) with Luis Daniel Abreu entitled “Landau’s necessary conditions for the Hankel transform”, and (as you might have noticed) I left the quest unfinished. Well, last week I spent a few days in Vienna (very nice place!) visiting NuHAG, and Daniel and I remembered that I still have to finish the series of posts about our paper. Since I have a lot of other things I want to blog about, I am going to write one post explaining the result (and giving some context), and in a future post I will explain the main idea of the proof.

Continue reading Landau’s necessary density conditions for the Hankel transform

Sampling and Interpolation

This post will answer some of the questions posed at the end of the previous post, about what a proper definition of a sampling sequence should be.

Let us consider a function space \mathcal{B} \subset L^2(\mathbb{R}). We are interested in sequences \{t_n\} at which we can sample a function f\in\mathcal{B}. The first natural requirement is that the values of f at \{t_n\} uniquely determine it as an element of \mathcal{B}, meaning that no other function in \mathcal{B} takes the same values at \{t_n\}. We will call such a sequence a uniqueness sequence.

Definition 1: Uniqueness Sequence

Given a function space \mathcal{B} \subset L^2(\mathbb{R}), we say that a sequence \{t_n\} is a uniqueness sequence if no two different functions f,g\in\mathcal{B} agree on \{t_n\}.

Continue reading Sampling and Interpolation
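To get a feeling for this definition, here is a toy illustration of my own, in a finite-dimensional space rather than a subspace of L^2(\mathbb{R}): for trigonometric polynomials of degree at most N, a set of sample points is a uniqueness sequence exactly when the only such polynomial vanishing at all of them is the zero polynomial (two functions agree on the points iff their difference vanishes there), which amounts to a rank condition on a collocation matrix. The Python sketch below checks this; the function name and the example points are mine.

import numpy as np

def is_uniqueness_sequence(points, N):
    # The samples determine every trigonometric polynomial
    # p(t) = sum_{|k| <= N} c_k e^{i k t} exactly when the only such
    # polynomial vanishing at all the points is p = 0, i.e. when the
    # collocation matrix [e^{i k t_n}] has full column rank 2N + 1.
    t = np.asarray(points, dtype=float)
    k = np.arange(-N, N + 1)
    A = np.exp(1j * np.outer(t, k))   # rows indexed by points, columns by frequencies
    return np.linalg.matrix_rank(A) == 2 * N + 1

N = 3
enough = np.linspace(0, 2 * np.pi, 2 * N + 1, endpoint=False)  # 7 distinct points
too_few = enough[:-1]                                          # only 6 points
print(is_uniqueness_sequence(enough, N))    # True: the samples pin down p
print(is_uniqueness_sequence(too_few, N))   # False: some nonzero p vanishes there

Of course, for an infinite-dimensional space such as the Paley-Wiener space no finite rank test like this is available, which is part of what makes the question interesting.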

Bandlimited Functions and the Whittaker-Shannon-Kotel’nikov Sampling Theorem

Over the next few weeks I will be writing a series of posts about my preprint with Daniel Abreu; it essentially has to do with Sampling Theory, Frames and Applied Harmonic Analysis. This first post of the series is a more elementary one, devoted to introducing bandlimited functions and presenting one of the classical results in the subject, the Shannon Sampling Theorem.

In Signal Processing it is usual to represent a signal by a function {f(t)} depending on {t\in\mathbb{R}} (usually thought of as time). It is often important to consider the same signal on the frequency side; this is achieved by the Fourier Transform of the signal,

\displaystyle \hat{f}(\xi)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(t)e^{-i\xi t}dt.
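(As a small sanity check of this normalization, here is a Python sketch of my own: with the {\frac{1}{\sqrt{2\pi}}} factor the Gaussian {e^{-t^2/2}} is its own Fourier transform, and a crude quadrature reproduces this numerically. The truncation and grid parameters below are arbitrary choices.)

import numpy as np

def fourier_transform(f, xi, t_max=20.0, n=200001):
    # Approximate hat{f}(xi) = (2*pi)^{-1/2} * integral of f(t) e^{-i xi t} dt
    # by a Riemann sum over the truncated interval [-t_max, t_max].
    t = np.linspace(-t_max, t_max, n)
    dt = t[1] - t[0]
    return np.sum(f(t) * np.exp(-1j * xi * t)) * dt / np.sqrt(2 * np.pi)

gaussian = lambda t: np.exp(-t**2 / 2)
# Should be close to exp(-1/2), about 0.6065, since the Gaussian is its own transform.
print(fourier_transform(gaussian, xi=1.0).real)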

Roughly speaking, the value of {\hat{f}(\xi)} measures how much of the frequency {e^{i\xi\cdot}} is present in {f}. The Fourier Transform lies at the heart of Fourier Analysis and is a mathematical object with several beautiful (and sometimes amazing) properties. Two of them are very important in what follows:

Continue reading Bandlimited Functions and the Whittaker-Shannon-Kotel’nikov Sampling Theorem