Compressed Sensing

In this blog post, we take a look at a very exciting and active area of research: compressed sensing. In compressed sensing, a signal is sampled well below the Nyquist rate, the traditional standard that says you must sample at least twice the highest frequency present in a signal in order to reconstruct it exactly. Nyquist sampling has been used for a very long time and is still the standard today. The problem is that it takes in far too much data, most of which ends up essentially getting thrown out during compression.

Compressed sensing is a relatively new and lesser-known alternative to Nyquist sampling. It takes in much less data by combining sampling and compression in one step! It can be used on any signal that is sparse in some basis (i.e. has a lot of zeros and only a few nonzeros), which covers just about any type of signal out there. From here on out I'll be referring to an image as the signal, since images are signals and the ideas are easier to picture with images.
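
As a quick illustration of what being sparse in some basis looks like, here is a minimal NumPy sketch (the signal and sizes are made up for this example): a signal that looks dense in the time domain but is built from only three sinusoids has only three significant Fourier coefficients.

import numpy as np

n = 1000
t = np.arange(n)
# Dense-looking signal built from just three frequencies
sig = np.sin(2*np.pi*5*t/n) + 0.5*np.sin(2*np.pi*12*t/n) + 0.2*np.sin(2*np.pi*40*t/n)

coeffs = np.fft.rfft(sig)
significant = np.sum(np.abs(coeffs) > 1e-6 * np.abs(coeffs).max())
print(significant)                 # 3: the signal is sparse in the Fourier basis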

In compressed sensing (CS), the pixels in an image can be stacked on top of each other to form a single column vector with n dimensions, which we’ll call x. n is often very large, at least 10,000.
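
For instance, here is a minimal sketch of this vectorization step, with random data standing in for a real photo:

import numpy as np

img = np.random.rand(100, 100)     # stand-in for a 100 x 100 grayscale image
x = img.reshape(-1, 1)             # stack all the pixels into one column vector
print(x.shape)                     # (10000, 1), so n = 10,000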


To sample x in CS, all you have to do is multiply x by a random m x n Gaussian matrix A, where m << n, to get an m-dimensional vector b (the compressed measurements).

Ax = b
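
For example, here is a minimal NumPy sketch of this sampling step (the sizes here are made up; the worked example later in the post uses n = 1000 and m = 100):

import numpy as np

n, m = 10000, 1000                 # n pixels, m << n measurements
x = np.random.rand(n, 1)           # stand-in for the vectorized image from above
A = np.random.randn(m, n)          # random Gaussian sampling matrix
b = A @ x                          # the m compressed measurements
print(b.shape)                     # (1000, 1)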

 To get back the original image from b, solve the equation

Ax = b

Since m is less than n, there are many possible solutions x to this equation. But because A satisfies the restricted isometry property (which random Gaussian matrices do with high probability), we can recover the correct x as long as it is sparse in some basis (the image itself might have only a few nonzero pixels, or it might be describable with just a few sine/cosine waves, etc.). One way to recover the sparse x would be to check every possible solution and keep the one with the most zeros. However, that combinatorial search would take far too long for any signal of reasonable size; it is what is called an NP-hard problem. A tractable alternative is to sum up the absolute values of the elements of each candidate solution and pick the solution with the lowest sum, which pushes most of the elements to be very small (essentially zero). The expression

min_x ||Ax − b||_2^2 + λ ||x||_1

allows us to get back the image from the compressed measurements b: we minimize the squared error between Ax and b plus λ times ||x||_1, the sum of the absolute values of the elements of x, which is called the l1-norm of x. The weight λ controls how strongly sparsity is encouraged.
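
As a tiny, self-contained sanity check (the sizes and the weight lam are made up for this sketch), the objective can be evaluated directly: a solution that is both consistent with the measurements and sparse scores low, while an arbitrary dense guess scores far higher.

import numpy as np

n, m, lam = 1000, 100, 0.1
x_true = np.zeros((n, 1))
x_true[np.random.choice(n, 10, replace=False)] = 1.0   # a sparse signal
A = np.random.randn(m, n)
b = A @ x_true

def objective(x_hat):
    return np.sum((A @ x_hat - b) ** 2) + lam * np.sum(np.abs(x_hat))

print(objective(x_true))                 # residual is zero, only the small l1 term remains
print(objective(np.random.randn(n, 1)))  # a dense guess scores far worse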

First, we import the needed modules as usual: matplotlib, TensorFlow, and NumPy.

In [31]:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np

For this example, a simple, hand-engineered signal is used. n is the number of elements or dimensions in the signal. m is the number of samples we’re going to take (note that it is only .1n), and k is the number of nonzeros in the signal. The signal is created by making k random elements of the vector nonzero.

In [32]:
n = 1000   # number of elements (dimensions) in the signal
m = 100    # number of compressed measurements (only 0.1n)
k = 10     # number of nonzeros in the signal

x = np.zeros([n, 1])

# Make k random elements of the vector nonzero
x[np.int32(np.random.rand(k, 1) * x.shape[0])] = 1

We begin a TensorFlow session and define the LCA function that implements the locally competitive algorithm. It takes as input the compressed measurements b, the random matrix A, and the number of iterations the network will run to settle on a solution, since it is a dynamical system. It is able to settle on a solution to this problem because x is sparse, and the network uses lateral inhibition like the front end of the human visual system: nodes with low activations are inhibited by their neighbors with higher activations, pushing the low-activity nodes down to zero.

lambd and h are hyper-parameters describing the firing threshold of each node and the update step size, respectively. u holds the internal state of the nodes, and x is the reconstruction. The algorithm is extremely simple and, since this is a vectorized version, can be run on modern GPUs, making it very fast.

In [33]:
with tf.Session() as sess:

    def LCA(b, A, iters):

        lambd = 4.0                 # firing threshold of each node
        h = 0.001                   # update step size
        A_shp = A.get_shape()

        u = tf.to_float(tf.zeros([A_shp[1], 1]))   # internal state of the nodes

        for i in range(iters):
            # Soft-threshold u to get the sparse code x
            x = (u - tf.sign(u) * lambd) * tf.to_float(tf.abs(u) > lambd)
            # Drive u with the residual projected back through A, minus leak and inhibition
            u = u + h * (tf.matmul(tf.transpose(A), (b - tf.matmul(A, x))) - u - x)

        return sess.run(tf.round(x))

The sampling matrix is created…

In [34]:
    A=tf.random_normal([m, n])

Compressed measurements are taken by just performing a matrix multiplication between A and x.

In [35]:
    b = tf.matmul(A, tf.to_float(x))

We then send the measurements, the matrix, and the number of iterations to the network, and it outputs the reconstruction.

In [36]:
    x_rec = LCA(b, A, 150)

    fig = plt.figure()

    ax1 = fig.add_subplot(121)
    ax1.plot(x)
    ax1.set_xlabel('Original Signal')

    ax2 = fig.add_subplot(122)
    ax2.plot(x_rec)
    ax2.set_xlabel('Reconstructed Signal')

    plt.show()

    # Fraction of elements recovered exactly
    print(np.mean([float(xi == yi) for (xi, yi) in zip(x_rec, x)]))
[Figure: the original signal (left) and the LCA reconstruction (right)]
1.0

As can be seen, the network correctly recovers the signal x from only 10% of the number of samples Nyquist sampling would require! The next blog post will dive further into compressed sensing and look at how to use it to sample and recover natural images, which are not inherently sparse.

There are actually people building cameras now that perform compressed sensing in hardware. They are called single-pixel cameras and have a micro-mirror array where the sensors in a typical camera would be. The random matrix is realized in hardware by randomly turning each mirror either toward or away from a single sensor. Light comes in and hits each mirror and either goes into the sensor or is thrown out, depending on the random configuration of the mirrors. The mirror pattern is then changed a number of times so the sensor collects the m measurements.
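
To make that concrete, here is a minimal sketch of simulating a single-pixel camera's measurement process in NumPy (the scene and the number of mirror patterns are made up for this sketch; a real device would also keep the mirror patterns around as the rows of A for reconstruction):

import numpy as np

img = np.random.rand(64, 64)            # stand-in for the scene hitting the mirror array
x = img.reshape(-1, 1)                  # vectorized scene, one entry per mirror/pixel

m = 400                                 # number of random mirror patterns (measurements)
A = (np.random.rand(m, x.shape[0]) > 0.5).astype(np.float32)   # each row: mirrors toward (1) or away (0)

b = A @ x                               # each entry: total light reaching the single sensor for one pattern
print(b.shape)                          # (400, 1)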

Another great thing about this LCA-CS approach is that LCA can also be easily built into hardware, so you could have a whole camera that does this end to end. For more information on compressed sensing, we turn to a lecture by Richard Baraniuk of Rice University, whose group developed the single-pixel camera. I've seen this video around 50 times and I still find stuff I didn't know. It's a classic.