Emmanuel Candes & Michael Wakin Discuss the New Discovery of Compressive Sampling
Emerging Research Fronts Commentary, June 2011
Article: An introduction to compressive sampling
Authors: Candes, EJ; Wakin, MB
Emmanuel Candes & Michael Wakin talk with ScienceWatch.com and answer a few questions about this month's Emerging Research Front paper in the field of Engineering.
Why do you think your paper is highly cited?
This paper provides an expository introduction to the topic of Compressive Sampling (CS), a novel technique that can allow for certain signals and data sets to be acquired much more efficiently and with higher resolution than ever before. This was not the first paper in the field, but it summarized many of the core developments at the time of its publication.
There are a number of reasons why this paper might be highly cited. First, the CS research landscape has been rapidly emerging and has drawn a great deal of attention from researchers in fields as diverse as pure and applied mathematics, statistics, computer science, electrical engineering, biomedical imaging, and so on. Second, we aimed to make the treatment of CS in this paper as broadly accessible as possible. Many of the foundational papers in CS are more technical and mathematical. We hoped to write a paper that could be read by any practicing engineer who is familiar with standard analog-to-digital converter technology. Third, this paper appeared in the IEEE Signal Processing Magazine, which has high visibility and helped us reach our target audience.
Does it describe a new discovery, methodology, or synthesis of knowledge?
CS is a new discovery. Of course, there were ideas that led to this discovery, in particular methods for solving inverse problems and determining sparse signal expansions in the fields of signal processing, statistics, and geophysics; embeddings, databases, convex optimization, and streaming signal processing in the field of computer science; and high-dimensional geometry in the field of mathematics. Some of this prior work dates back several decades.
Would you summarize the significance of your paper in layman's terms?
Our paper is a survey of important results in the field of CS. One way to appreciate the ideas underlying CS is to think of a digital camera. In this camera (and in many other modern sensors), we collect a signal/image by recording millions of numbers; in this case, the numbers describe the colors of the pixels in the scene. This is an enormous amount of information just to describe one image, and so within the camera is a digital processor that attempts to remove the redundancy from these numbers as quickly as possible before storing the image in memory. Such redundancy is quite common in real-world signals; smooth regions in an image, for example, contain many similar pixel values.
[Photo: Coauthor Michael Wakin]
This data compression step can be very effective at reducing the burden of storing the image (and subsequently processing it, emailing it, and so on). However, to quote our colleague David Brady: "One can regard the possibility of digital compression as a failure of sensor design. If it is possible to compress measured data, one might argue that too many measurements were taken."
CS then asks whether we really need a sensor that collects so many numbers in the first place. In some cases, the answer is in fact "no," and CS has inspired the design of new "compressive samplers" that directly acquire signals in compressed form, using far fewer measurements than a conventional sensor would require.
A typical compressive sampler requires some degree of randomness in its acquisition scheme. For example, one way to design a compressive imager would be to build a camera that measures only a small random fraction of the pixels in the image. (There are better ways to design compressive imagers, but the details are more technical.) From this subset of the original data, CS then requires an algorithm to determine the missing pixel values. Such algorithms operate by exploiting a model of how the signal/image is expected to behave. Often these models are the same ones used to remove redundancy when applying data compression in traditional sensing devices.
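To make the recovery step concrete, here is a minimal sketch in Python/NumPy. It assumes a synthetic sparse signal and a random Gaussian measurement matrix (illustrative choices, not the specific imager designs discussed in the interview), and it recovers the signal with ISTA, a standard iterative soft-thresholding method for the l1-regularized least-squares problem that underlies many CS decoders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic problem: a length-n signal with only k nonzero entries,
# observed through m << n random measurements (illustrative sizes).
n, m, k = 256, 80, 8
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x                                     # compressive measurements

def ista(A, y, lam=0.01, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x_hat = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x_hat - (A.T @ (A @ x_hat - y)) / L                    # gradient step
        x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x_hat

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With enough random measurements relative to the sparsity level, this kind of l1 decoder typically recovers the signal up to a small shrinkage bias; the regularization weight and iteration count here are illustrative rather than tuned.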
How did you become involved in this research, and how would you describe the particular challenges, setbacks, and successes that you've encountered along the way?
One of us got involved in this research because of a great question posed by radiologists: how can one speed up the acquisition time in Magnetic Resonance Imaging (MRI)? Here, speedup means taking fewer samples. Classical Shannon/Nyquist theory asserts that this is not possible unless one is willing to suffer substantial information loss. What we discovered while working on this problem is that if one assumes a realistic image model, one can acquire data without much information loss at a much lower sampling rate than Shannon/Nyquist theory predicts. This caught people by surprise.
Compressive Sampling has taken off and is already taught in various universities around the world. However, there have been setbacks as well. For instance, shortly after the discovery of CS, Candes teamed up with radiologists, and together they wrote a grant proposal outlining the benefits for MRI. This proposal was flat-out rejected. A few years later, the ideas discussed in the proposal became a major research topic in the field.
Where do you see your research leading in the future?
It is a bit hard to answer this question since we do many different things. However, with CS theory well in place, it is not hard to predict that future research in the field will concern the implementation of this theory in (hopefully) effective hardware. Personally, we are working on novel analog-to-digital converters, and on novel microscopy techniques that can acquire certain types of high-quality images from comparatively few measurements.
Do you foresee any social or political implications for your research?
CS has a number of potential applications in improving modern technology. Among those being actively explored are more efficient communication systems and remote sensor networks, improved wideband RF receivers, and hyperspectral imaging. One particular success story of CS so far has been in medicine, where researchers at Stanford have used CS ideas to develop techniques for faster and higher resolution MRI. We hear that this is used in pediatrics with great success.
CS has also spawned a research thrust in recovering matrices (rather than signals) from partial information. For matrices that represent databases of demographic data, such research could lead to a better understanding of trends in human preferences and social behavior.
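This matrix recovery problem can likewise be illustrated with a short sketch. The example below assumes a synthetic low-rank matrix with roughly half of its entries observed (both illustrative assumptions) and fills in the missing entries with a soft-impute style iteration, one standard approach to matrix completion that alternates between imputing missing entries and soft-thresholding singular values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic problem: a rank-r matrix with roughly half its entries observed.
n1, n2, r = 60, 50, 3
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
mask = rng.random((n1, n2)) < 0.5        # True where an entry is observed

def soft_impute(M_obs, mask, tau=2.0, iters=300):
    """Alternate imputation of missing entries with singular value shrinkage."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        filled = np.where(mask, M_obs, X)        # keep observed, impute the rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold the spectrum
    return X

X = soft_impute(np.where(mask, M, 0.0), mask)
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```

The shrinkage level tau and iteration count are again illustrative; the point is that a convex surrogate (the nuclear norm) plays the same role for low-rank matrices that the l1 norm plays for sparse signals.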
Emmanuel Candes
Professor of Mathematics, of Statistics, and of Electrical Engineering (by courtesy)
Stanford University
Stanford, CA, USA
and
Ronald and Maxine Linde Professor of Applied and Computational Mathematics (on leave)
California Institute of Technology
Pasadena, CA, USA
Michael Wakin
Assistant Professor
Division of Engineering
Colorado School of Mines
Golden, CO, USA
KEYWORDS: COMPRESSIVE SAMPLING, SIGNAL RECOVERY, UNCERTAINTY PRINCIPLES, PURSUIT.